perm filename AI.TXT[BB,DOC]17 blob sn#864516 filedate 1988-12-01 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00255 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00038 00002	This file (AI.TXT[BB,DOC]) currently holds volume 6 of the AI-LIST digest.
C00040 00003	∂03-Jan-88  2117	LAWS@KL.SRI.COM 	AIList V6 #1 - Concurrent Prolog, Conferences   
C00064 00004	∂03-Jan-88  2352	LAWS@KL.SRI.COM 	AIList Digest   V6 #2  
C00095 00005	∂09-Jan-88  0046	LAWS@KL.SRI.COM 	AIList V6 #3 - Logics Bulletin, TINLAP3, Methodology 
C00127 00006	∂09-Jan-88  0253	LAWS@KL.SRI.COM 	AIList V6 #4 - Seminars, Conferences  
C00148 00007	∂09-Jan-88  0444	LAWS@KL.SRI.COM 	AIList V6 #5 - Conference on AI Applications    
C00169 00008	∂09-Jan-88  0629	LAWS@KL.SRI.COM 	AIList V6 #6 - Cognitive Science Programs  
C00189 00009	∂09-Jan-88  0808	LAWS@KL.SRI.COM 	AIList V6 #7 - Object-Oriented Databases   
C00216 00010	∂09-Jan-88  0952	LAWS@KL.SRI.COM 	AIList V6 #8 - Voice Synthesizers, Online Dictionaries    
C00234 00011	∂12-Jan-88  0014	LAWS@KL.SRI.COM 	AIList V6 #9 - Synthesizers, Dictionaries, SNOBOL, Psychology List  
C00254 00012	∂15-Jan-88  0016	LAWS@KL.SRI.COM 	AIList V6 #10 - Intelligence, MLNS Neural Network Tool Set
C00274 00013	∂17-Jan-88  2332	LAWS@KL.SRI.COM 	AIList V6 #11 - Seminars    
C00290 00014	∂18-Jan-88  0159	LAWS@KL.SRI.COM 	AIList V6 #12 - Conference on Text and Image Handling
C00326 00015	∂18-Jan-88  0405	LAWS@KL.SRI.COM 	AIList Digest   V6 #13 
C00343 00016	∂22-Jan-88  0137	LAWS@KL.SRI.COM 	AIList V6 #14 - Seminars, Mathematical Logic Conference   
C00361 00017	∂22-Jan-88  0401	LAWS@KL.SRI.COM 	AIList V6 #15 - Selfridge, Mazes, Prolog, BUILD, Ping-Pong
C00382 00018	∂22-Jan-88  0619	LAWS@KL.SRI.COM 	AIList V6 #16 - Neural Nets, Psychiatry, Psychology, Nanocomputers  
C00402 00019	∂25-Jan-88  0119	LAWS@KL.SRI.COM 	AIList V6 #17 - Prolog, CommonLoops, Ball Catching, Nano-Engineering
C00418 00020	∂29-Jan-88  0220	LAWS@KL.SRI.COM 	AIList V6 #18 - Seminars, Conference  
C00442 00021	∂29-Jan-88  0504	LAWS@KL.SRI.COM 	AIList V6 #19 - Ping Pong, Prolog, Expert Systems, IXL    
C00470 00022	∂29-Jan-88  0730	LAWS@KL.SRI.COM 	AIList V6 #20 - Nanoengineering, Philosophy, Deductive Databases    
C00488 00023	∂01-Feb-88  0030	LAWS@KL.SRI.COM 	AIList V6 #21 - Connectionism, XLISP, Ping Pong, Expert Systems
C00515 00024	∂01-Feb-88  0257	LAWS@KL.SRI.COM 	AIList V6 #22 - Self-Consciousness, Poplog 
C00544 00025	∂01-Feb-88  0527	LAWS@KL.SRI.COM 	AIList V6 #23 - Newton, Nanotechnology, Philosophy   
C00574 00026	∂05-Feb-88  0057	LAWS@KL.SRI.COM 	AIList V6 #24 - Seminars, Connectionist Conference, Course
C00608 00027	∂05-Feb-88  0315	LAWS@KL.SRI.COM 	AIList V6 #25 - Software Engineering, XLISP, Vision, Language  
C00630 00028	∂05-Feb-88  0535	LAWS@KL.SRI.COM 	AIList V6 #26 - Connectionism, Nature of AI, Interviewing 
C00652 00029	∂05-Feb-88  0742	LAWS@KL.SRI.COM 	AIList V6 #27 - Consciousness, Nanotechnology   
C00669 00030	∂11-Feb-88  1354	LAWS@KL.SRI.COM 	AIList V6 #28 - XLISP, Genetic Algorithms, Methodology    
C00696 00031	∂14-Feb-88  0116	LAWS@KL.SRI.COM 	AIList V6 #29 - Seminars, Conferences 
C00724 00032	∂14-Feb-88  0342	LAWS@KL.SRI.COM 	AIList V6 #30 - Conferences 
C00765 00033	∂14-Feb-88  0540	LAWS@KL.SRI.COM 	AIList V6 #31 - Neural Network Conference and Journal
C00788 00034	∂15-Feb-88  0042	LAWS@KL.SRI.COM 	AIList V6 #32 - Radio Control, Fuzzy Sets, Chinese, MDBS  
C00810 00035	∂15-Feb-88  0228	LAWS@KL.SRI.COM 	AIList V6 #33 - Genetic Algorithms, CAI, Psychnet, Nanotechnology   
C00832 00036	∂18-Feb-88  0039	LAWS@KL.SRI.COM 	AIList V6 #34 - AI in Management, Software Engineering, Interviewing
C00851 00037	∂18-Feb-88  0231	LAWS@KL.SRI.COM 	AIList V6 #35 - Genetics, Fuzzy Logic, Nanotechnology, Greenblat    
C00879 00038	∂21-Feb-88  0154	LAWS@KL.SRI.COM 	AIList V6 #36 - Grapher, Seminars, Conferences  
C00912 00039	∂21-Feb-88  0345	LAWS@KL.SRI.COM 	AIList Digest   V6 #37 
C00934 00040	∂21-Feb-88  0520	LAWS@KL.SRI.COM 	AIList V6 #38 - Applications, Neuromorphic Tools, Nanotechnology    
C00956 00041	∂27-Feb-88  0014	LAWS@KL.SRI.COM 	AIList V6 #39 - Queries, BBS Abstracts
C00980 00042	∂27-Feb-88  0214	LAWS@KL.SRI.COM 	AIList V6 #40 - Head Count, Neural Simulators, Fuzzy Logic, Refs    
C01003 00043	∂27-Feb-88  0430	LAWS@KL.SRI.COM 	AIList V6 #41 - Supercomputers, Nonotechnology  
C01038 00044	∂01-Mar-88  0000	LAWS@KL.SRI.COM 	AIList Digest   V6 #42 
C01059 00045	∂01-Mar-88  0200	LAWS@KL.SRI.COM 	AIList V6 #43 - Conferences 
C01079 00046	∂01-Mar-88  0348	LAWS@KL.SRI.COM 	AIList V6 #44 - Spang Robinson Review, New JETAI Journal  
C01098 00047	∂02-Mar-88  0011	LAWS@KL.SRI.COM 	AIList V6 #45 - Logic, RuleC, Methodology, Constraint Languages
C01130 00048	∂04-Mar-88  0922	LAWS@KL.SRI.COM 	AIList V6 #46 - Queries
C01145 00049	∂05-Mar-88  0034	LAWS@KL.SRI.COM 	AIList V6 #47 - Head Count, Image Formats, Chemistry, Law, Logic    
C01171 00050	∂08-Mar-88  0038	LAWS@KL.SRI.COM 	AIList V6 #48 - CommonLoops, OPS5, Constraint Languages, Probability
C01193 00051	∂10-Mar-88  0213	LAWS@KL.SRI.COM 	AIList V6 #49 - Seminars, LA SIGART, Conferences
C01229 00052	∂10-Mar-88  0428	LAWS@KL.SRI.COM 	AIList V6 #50 - Constraints, Neural Nets, Prototypical Knowledge    
C01251 00053	∂13-Mar-88  2312	LAWS@KL.SRI.COM 	AIList V6 #51 - Programming, Dependency Propagation, Uncertainty    
C01274 00054	∂20-Mar-88  0114	LAWS@KL.SRI.COM 	AIList V6 #52 - Prolog Digest, CLIPS, OPS5 
C01309 00055	∂20-Mar-88  0357	LAWS@KL.SRI.COM 	AIList V6 #53 - VLSI Testability, Agriculture, Genetic Algorithms   
C01333 00056	∂22-Mar-88  0108	LAWS@KL.SRI.COM 	AIList V6 #54 - Seminars, Conferences 
C01374 00057	∂24-Mar-88  2352	LAWS@KL.SRI.COM 	AIList V6 #55 - Queries
C01392 00058	∂25-Mar-88  0234	LAWS@KL.SRI.COM 	AIList V6 #56 - Mind Simulation & Software Engineering    
C01426 00059	∂25-Mar-88  0437	LAWS@KL.SRI.COM 	AIList V6 #57 - Theorem Prover, Models of Uncertainty
C01447 00060	∂29-Mar-88  0158	LAWS@KL.SRI.COM 	AIList V6 #58 - Seminars, Conferences 
C01477 00061	∂29-Mar-88  0519	LAWS@KL.SRI.COM 	AIList V6 #59 - POPLOG, microExplorer, Inference, Cognitive Agent   
C01505 00062	∂29-Mar-88  0852	LAWS@KL.SRI.COM 	AIList Digest   V6 #60 
C01536 00063	∂01-Apr-88  0055	LAWS@KL.SRI.COM 	AIList V6 #61 - UK Addresses, Seminars
C01551 00064	∂01-Apr-88  0248	LAWS@KL.SRI.COM 	AIList V6 #62 - RACTER, Expert Systems, Circuit Design    
C01578 00065	∂01-Apr-88  0445	LAWS@KL.SRI.COM 	AIList V6 #63 - Future of AI, Evidential Reasoning   
C01602 00066	∂12-Apr-88  0212	LAWS@KL.SRI.COM 	AIList V6 #64 - Seminars, Conferences 
C01631 00067	∂13-Apr-88  0054	LAWS@KL.SRI.COM 	AIList V6 #65 - Applications, Racter, Modal Logic, OPS5, microExplorer   
C01653 00068	∂13-Apr-88  0321	LAWS@KL.SRI.COM 	AIList V6 #66 - Probability, Intelligence, Ethics, Future of AI
C01684 00069	∂13-Apr-88  0546	LAWS@KL.SRI.COM 	AIList V6 #67 - Future of AI
C01714 00070	∂13-Apr-88  0827	LAWS@KL.SRI.COM 	AIList V6 #68 - AI News, Supercomputing, Seminars    
C01736 00071	∂14-Apr-88  0112	LAWS@KL.SRI.COM 	AIList V6 #69 - Queries
C01753 00072	∂14-Apr-88  0322	LAWS@KL.SRI.COM 	AIList V6 #70 - Queries
C01773 00073	∂18-Apr-88  0158	LAWS@KL.SRI.COM 	AIList V6 #71 - Moderator Needed, AI Goals Discussion
C01798 00074	∂18-Apr-88  0502	LAWS@KL.SRI.COM 	AIList V6 #72 - Queries
C01820 00075	∂21-Apr-88  0156	LAWS@KL.SRI.COM 	AIList V6 #73 - Fraud, Theorem Prover, New Lists, Bibliographies    
C01850 00076	∂21-Apr-88  0507	LAWS@KL.SRI.COM 	AIList V6 #74 - Queries, CLOS, ELIZA, Planner, Face Recognition
C01875 00077	∂21-Apr-88  0901	LAWS@KL.SRI.COM 	AIList V6 #75 - Functions, Modal Logic, Explorer, MACSYMA 
C01908 00078	∂21-Apr-88  1233	LAWS@KL.SRI.COM 	AIList V6 #76 - Hot Topics, Goals of AI, Free Will   
C01926 00079	∂22-Apr-88  0218	LAWS@KL.SRI.COM 	AIList Digest   V6 #77 
C01948 00080	∂22-Apr-88  0500	LAWS@KL.SRI.COM 	AIList V6 #77 - Learning, Seminars, Conferences 
C01970 00081	∂22-Apr-88  0759	LAWS@KL.SRI.COM 	AIList V6 #78 - Expert Database Systems    
C01994 00082	∂22-Apr-88  1706	LAWS@KL.SRI.COM 	AIList V6 #79 - Conferences: Automated Deduction, Productivity 
C02028 00083	∂25-Apr-88  0219	LAWS@KL.SRI.COM 	AIList V6 #80 - Moderator Needed, Credit, PatRec, AI Goals
C02052 00084	∂25-Apr-88  0426	LAWS@KL.SRI.COM 	AIList V6 #81 - Queries, BITNET Instructions    
C02069 00085	∂29-Apr-88  0330	LAWS@KL.SRI.COM 	AIList V6 #82 - Seminars, Conferences 
C02099 00086	∂29-Apr-88  0539	LAWS@KL.SRI.COM 	AIList V6 #83 - Texts, Theorem Prover, Graphic Design, Demons  
C02121 00087	∂29-Apr-88  2235	LAWS@KL.SRI.COM 	AIList V6 #84 - Queries
C02140 00088	∂30-Apr-88  0051	LAWS@KL.SRI.COM 	AIList Digest   V6 #85 
C02159 00089	∂30-Apr-88  0239	LAWS@KL.SRI.COM 	AIList V6 #86 - Philosophy  
C02177 00090	∂02-May-88  0051	LAWS@KL.SRI.COM 	AIList V6 #87 - Queries, Causal Modeling, Texts 
C02191 00091	∂02-May-88  0334	LAWS@KL.SRI.COM 	AIList V6 #88 - Philosophy  
C02221 00092	∂05-May-88  0238	LAWS@KL.SRI.COM 	AIList V6 #89 - Seminars, Conferences 
C02246 00093	∂05-May-88  0507	LAWS@KL.SRI.COM 	AIList V6 #90 - Decision Theory, Training Wheels, Fellowship   
C02265 00094	∂05-May-88  0811	LAWS@KL.SRI.COM 	AIList V6 #91 - Philosophy  
C02290 00095	∂09-May-88  0224	LAWS@KL.SRI.COM 	AIList V6 #92 - Seminars, Conferences 
C02322 00096	∂09-May-88  0447	LAWS@KL.SRI.COM 	AIList V6 #93 - Philosophy, Free Will 
C02352 00097	∂09-May-88  0723	LAWS@KL.SRI.COM 	AIList V6 #94 - Philosophy  
C02384 00098	∂09-May-88  1044	LAWS@KL.SRI.COM 	AIList V6 #95 - Philosophy  
C02413 00099	∂09-May-88  1421	LAWS@KL.SRI.COM 	AIList V6 #96 - Philosophy  
C02443 00100	∂09-May-88  1725	LAWS@KL.sri.com 	AIList V6 #97 - Philosophy  
C02473 00101	∂09-May-88  2039	LAWS@KL.sri.com 	AIList V6 #98 - Philosophy  
C02510 00102	∂09-May-88  2302	LAWS@KL.sri.com 	AIList V6 #99 - Applications, Road Follower, Explorer vs. Sun  
C02532 00103	∂10-May-88  0130	LAWS@KL.sri.com 	AIList V6 #100 - New Moderator, Queries, AI News, AI-ED SIG    
C02558 00104	∂14-May-88  1955	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #1   
C02589 00105	∂23-May-88  2051	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #2   
C02640 00106	∂24-May-88  0107	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #3   
C02691 00107	∂24-May-88  1202	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #4   
C02762 00108	∂24-May-88  1816	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #5   
C02819 00109	∂24-May-88  2359	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #6   
C02891 00110	∂25-May-88  1423	@MC.LCS.MIT.EDU:nick%AI.AI.MIT.EDU@XX.LCS.MIT.EDU 	AIList Digest   V7 #7   
C02916 00111	∂26-May-88  2225	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #8   
C02990 00112	∂01-Jun-88  2145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #10  
C03015 00113	∂02-Jun-88  2006	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #16  
C03030 00114	∂03-Jun-88  0117	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #11  
C03070 00115	∂03-Jun-88  0117	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #12  
C03095 00116	∂03-Jun-88  0118	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #13  
C03176 00117	∂03-Jun-88  0119	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #14  
C03247 00118	∂03-Jun-88  0119	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #15  
C03277 00119	∂03-Jun-88  2225	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #17  
C03317 00120	∂04-Jun-88  0152	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #18  
C03342 00121	∂05-Jun-88  2333	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #19  
C03385 00122	∂06-Jun-88  2009	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #20  
C03405 00123	∂07-Jun-88  2310	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #22  
C03419 00124	∂08-Jun-88  0833	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #21  
C03452 00125	∂08-Jun-88  2149	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #23  
C03473 00126	∂09-Jun-88  0045	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #24  
C03497 00127	∂09-Jun-88  1606	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #25  
C03540 00128	∂09-Jun-88  2056	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #26  
C03570 00129	∂10-Jun-88  1348	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #27  
C03604 00130	∂13-Jun-88  1246	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #28  
C03627 00131	∂13-Jun-88  1636	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #29  
C03656 00132	∂13-Jun-88  2038	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #30  
C03677 00133	∂13-Jun-88  2355	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #31  
C03725 00134	∂14-Jun-88  1122	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #31  
C03773 00135	∂14-Jun-88  1651	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #32  
C03788 00136	∂14-Jun-88  2326	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #33  
C03803 00137	∂15-Jun-88  0217	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #34  
C03812 00138	∂15-Jun-88  2046	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #35  
C03839 00139	∂16-Jun-88  2115	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #36  
C03900 00140	∂17-Jun-88  0356	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #37  
C03921 00141	∂19-Jun-88  1838	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #38  
C03938 00142	∂19-Jun-88  2135	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #39  
C03964 00143	∂21-Jun-88  1512	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #40  
C03993 00144	∂21-Jun-88  1813	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #41  
C04024 00145	∂21-Jun-88  2143	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #42  
C04048 00146	∂23-Jun-88  2145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #43  
C04098 00147	∂25-Jun-88  1237	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #44  
C04113 00148	∂25-Jun-88  1519	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #45  
C04129 00149	∂28-Jun-88  2041	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #46  
C04150 00150	∂29-Jun-88  2142	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #47  
C04168 00151	∂30-Jun-88  0104	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #48  
C04176 00152	∂30-Jun-88  2219	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #49  
C04193 00153	∂02-Jul-88  1235	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #1   
C04209 00154	∂02-Jul-88  2218	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #2   
C04244 00155	∂11-Jul-88  2150	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #3   
C04258 00156	∂12-Jul-88  0158	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #4   
C04275 00157	∂12-Jul-88  0356	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #5   
C04316 00158	∂12-Jul-88  0630	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #6   
C04342 00159	∂12-Jul-88  1026	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #7   
C04408 00160	∂17-Jul-88  2121	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #8   
C04440 00161	∂18-Jul-88  0249	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #10  
C04479 00162	∂18-Jul-88  0549	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #11  
C04488 00163	∂18-Jul-88  0824	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #12  
C04528 00164	∂18-Jul-88  1651	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #15  
C04538 00165	∂18-Jul-88  1908	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #16  
C04554 00166	∂19-Jul-88  0128	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #9   
C04592 00167	∂19-Jul-88  1157	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #9   
C04630 00168	∂19-Jul-88  1158	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #13  
C04661 00169	∂19-Jul-88  1158	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #14  
C04687 00170	∂19-Jul-88  2145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #17  
C04719 00171	∂20-Jul-88  1934	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #18  
C04745 00172	∂20-Jul-88  2249	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #19  
C04752 00173	∂21-Jul-88  2124	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #20  
C04768 00174	∂22-Jul-88  0005	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #21  
C04788 00175	∂23-Jul-88  2314	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #22  
C04810 00176	∂24-Jul-88  0126	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #23  
C04840 00177	∂25-Jul-88  2251	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #24  
C04863 00178	∂26-Jul-88  0236	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #25  
C04889 00179	∂26-Jul-88  0656	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #26  
C04947 00180	∂26-Jul-88  1239	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #27  
C04970 00181	∂27-Jul-88  0244	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #28  
C04993 00182	∂28-Jul-88  2017	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #29  
C05009 00183	∂28-Jul-88  2313	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #30  
C05020 00184	∂29-Jul-88  0154	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #29  
C05036 00185	∂31-Jul-88  1448	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #31  
C05051 00186	∂31-Jul-88  1850	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #32  
C05060 00187	∂01-Aug-88  1145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #33  
C05084 00188	∂01-Aug-88  1535	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #34  
C05103 00189	∂02-Aug-88  1541	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #35  
C05114 00190	∂03-Aug-88  1704	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #36  
C05129 00191	∂03-Aug-88  1956	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #37  
C05135 00192	∂04-Aug-88  2130	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #38  
C05150 00193	∂06-Aug-88  2033	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #39  
C05175 00194	∂07-Aug-88  2142	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #40  
C05197 00195	∂07-Aug-88  2358	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #41  
C05209 00196	∂08-Aug-88  2031	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #42  
C05217 00197	∂08-Aug-88  2236	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #43  
C05235 00198	∂11-Aug-88  2254	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #44  
C05258 00199	∂12-Aug-88  1032	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #45  
C05289 00200	∂12-Aug-88  1424	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #46  
C05312 00201	∂12-Aug-88  1753	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #47  
C05345 00202	∂13-Aug-88  1627	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #48  
C05359 00203	∂13-Aug-88  1834	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #49  
C05391 00204	∂14-Aug-88  2356	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #50  
C05407 00205	∂15-Aug-88  2048	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #51  
C05424 00206	∂15-Aug-88  2321	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #52  
C05434 00207	∂16-Aug-88  0157	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #53  
C05458 00208	∂17-Aug-88  2344	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #54  
C05477 00209	∂18-Aug-88  2202	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #55  
C05497 00210	∂19-Aug-88  0037	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #56  
C05522 00211	∂19-Aug-88  0315	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #57  
C05531 00212	∂19-Aug-88  0550	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #58  
C05554 00213	∂19-Aug-88  2121	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #59  
C05564 00214	∂19-Aug-88  2341	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #60  
C05575 00215	∂21-Aug-88  1912	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #61  
C05585 00216	∂22-Aug-88  1940	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #62  
C05601 00217	∂22-Aug-88  2204	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #63  
C05617 00218	∂24-Aug-88  1436	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #65  
C05642 00219	∂24-Aug-88  1745	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #64  
C05663 00220	∂25-Aug-88  2008	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #66  
C05685 00221	∂25-Aug-88  2259	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #67  
C05702 00222	∂26-Aug-88  2133	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #68  
C05719 00223	∂26-Aug-88  2346	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #69  
C05736 00224	∂27-Aug-88  1740	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #70  
C05747 00225	∂29-Aug-88  2028	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #71  
C05765 00226	∂29-Aug-88  2305	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #72  
C05792 00227	∂31-Aug-88  1510	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #73  
C05843 00228	∂31-Aug-88  1917	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #74  
C05854 00229	∂02-Sep-88  2317	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #76  
C05896 00230	∂03-Sep-88  0130	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #75  
C05915 00231	∂04-Sep-88  2210	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #77  
C05981 00232	∂05-Sep-88  0123	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #78  
C05999 00233	∂05-Sep-88  1150	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #79  
C06033 00234	∂08-Sep-88  1526	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #80  
C06046 00235	∂11-Sep-88  2046	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #81  
C06078 00236	∂13-Sep-88  1842	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #82  
C06088 00237	∂14-Sep-88  1829	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #83  
C06121 00238	∂14-Sep-88  2154	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #84  
C06137 00239	∂15-Sep-88  2246	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #85  
C06154 00240	∂18-Sep-88  1236	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #86  
C06176 00241	∂18-Sep-88  1452	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #87  
C06196 00242	∂19-Sep-88  2123	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #88  
C06212 00243	∂26-Sep-88  1023	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #89  
C06242 00244	∂26-Sep-88  1025	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #91  
C06258 00245	∂26-Sep-88  1023	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #90  
C06270 00246	∂26-Sep-88  1025	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #92  
C06297 00247	∂26-Sep-88  2040	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #93  
C06315 00248	∂08-Oct-88  1129	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #95  
C06350 00249	∂08-Oct-88  1446	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #96  
C06367 00250	∂08-Oct-88  1700	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #97  
C06383 00251	∂08-Oct-88  1915	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #98  
C06398 00252	∂10-Oct-88  1255	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #99  
C06424 00253	∂10-Oct-88  1552	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #100 
C06441 00254	∂10-Oct-88  1815	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #101 
C06457 00255	∂10-Oct-88  2046	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #102 
C06476 ENDMK
C⊗;
This file (AI.TXT[BB,DOC]) currently holds volume 6 of the AI-LIST digest.

The digests are edited by Ken Laws at SRI.  To get added to the list, send
mail to AIList-REQUEST@SRI.COM; better yet, use CKSUM to read this file.

Mail your submissions to AIList@SRI.COM.

Pointers to previous volumes:

Volume 1 (#1 to #117) of AI-LIST has been archived in file AI.V1[BB,DOC].
Volume 2 (#1 to #184) of AI-LIST has been archived in file AI.V2[BB,DOC].
Volume 3 (#1 to #193) of AI-LIST has been archived in file AI.V3[BB,DOC].
Volume 4 (#1 to #289) of AI-LIST has been archived in file AI.V4[BB,DOC].
Volume 5 (#1 to #288) of AI-LIST has been archived in file AI.V5[BB,DOC].

The old volumes will not be kept on the disk, although they'll be available
from backup tape if necessary.  Archive files are probably available online
at SRI.COM.

∂03-Jan-88  2117	LAWS@KL.SRI.COM 	AIList V6 #1 - Concurrent Prolog, Conferences   
Received: from KL.SRI.COM by SAIL.STANFORD.EDU with TCP; 3 Jan 88  21:16:53 PST
Date: Sun  3 Jan 1988 19:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #1 - Concurrent Prolog, Conferences
To: AIList@SRI.COM


AIList Digest             Monday, 4 Jan 1988        Volume 6 : Issue 1

Today's Topics:
  Literature - Concurrent Prolog Book,
  Conferences - COIS88 Conference on Office Information Systems &
    DAI Workshop Announcement

----------------------------------------------------------------------

Date: Thu, 31 Dec 87 9:52:44 PST
From: Kahn.pa@Xerox.COM
Subject: Concurrent Prolog Book Announcement

I'm forwarding this message for Udi Shapiro.

Date: Wed, 30 Dec 87 16:58:31 PST
From: udi%wisdom.bitnet@jade.berkeley.edu
From: Sarah Fliegelmann <MAFLIEG@WEIZMANN>


CONCURRENT PROLOG
COLLECTED PAPERS

(2 Vols.)

Edited by Ehud Shapiro

MIT Press Series in Logic Programming
ISBN 0-262-19266-7 (vol. 1) (pp. 560) $37.50
ISBN 0-262-19257-5 (vol. 2) (pp. 680) $37.50
ISBN 0-262-19255-1 (two volume set)   $65.00


                           Table of Contents

                               Volume 1

The Authors                                                            ix
The Papers                                                             xv
Foreword                                                              xix
Preface                                                               xxi
Introduction                                                          xxv

Part I: Concurrent Logic Programming Languages                          1

Introduction                                                            2
1.  A Relational Language for Parallel Programming                      9
    K. Clark and S. Gregory
2.  A Subset of Concurrent Prolog and Its Interpreter                  27
    E. Shapiro
3.  PARLOG: Parallel Programming in Logic                              84
    K. Clark and S. Gregory
4.  Guarded Horn Clauses                                              140
    K. Ueda
5.  Concurrent Prolog: A Progress Report                              157
    E. Shapiro
6.  Parallel Logic Programming Languages                              188
    A. Takeuchi and K. Furukawa

Part II: Programming Parallel Algorithms                              203

Introduction                                                          204
7.  Systolic Programming: A Paradigm of Parallel Processing           207
    E. Shapiro
8.  Notes on the Complexity of Systolic Programs                      243
    S. Taylor, L. Hellerstein, S. Safra and E. Shapiro
9.  Implementing Parallel Algorithms in Concurrent Prolog: The
    Maxflow Experience                                                258
    L. Hellerstein and E. Shapiro
10. A Concurrent Prolog Based Region Finding Algorithm                291
    L. Hellerstein
11. Distributed Programming in Concurrent Prolog                      318
    A. Shafrir and E. Shapiro
12. Image Processing with Concurrent Prolog                           339
    S. Edelman and E. Shapiro
13. A Test for the Adequacy of a Language for an Architecture         370
    E. Shapiro

Part III: Streams and Channels                                        389

Introduction                                                          390
14. Fair, Biased, and Self-Balancing Merge Operators: Their
    Specification and Implementation in Concurrent Prolog             392
    E. Shapiro and C. Mierowsky
15. Multiway Merge with Constant Delay in Concurrent Prolog           414
    E. Shapiro and S. Safra
16. Merging Many Streams Efficiently: The Importance of Atomic
    Commitment                                                        421
    V.A. Saraswat
17. Channels: A Generalization of Streams                             446
    E.D. Tribble, M.S. Miller, K. Kahn, D.G. Bobrow, C. Abbott and
    E. Shapiro
18. Bounded Buffer Communication in Concurrent Prolog                 464
    A. Takeuchi and K. Furukawa
References                                                            477
Index                                                                 507

                               Volume 2

The Authors                                                            ix
The Papers                                                           xiii
Preface to Volume 2                                                  xvii

Part IV: Systems Programming                                            1

Introduction                                                            2
19. Systems Programming in Concurrent Prolog                            6
    E. Shapiro
20. Computation Control and Protection in the Logix System             28
    M. Hirsch, W. Silverman and E. Shapiro
21. The Logix System User Manual, Version 1.21                         46
    W. Silverman, M. Hirsch, A. Houri and E. Shapiro
22. A Layered Method for Process and Code Mapping                      78
    S. Taylor, E. Av-Ron and E. Shapiro
23. An Architecture of a Distributed Window System and its FCP
    Implementation                                                    101
    D. Katzenellenbogen, S. Cohen and E. Shapiro
24. Logical Secrets                                                   140
    M.S. Miller, D.G. Bobrow, E.D. Tribble and J. Levy

Part V: Program Analysis and Transformation                           163

Introduction                                                          164
25. Meta Interpreters for Real                                        166
    S. Safra and E. Shapiro
26. Algorithmic Debugging of GHC Programs and Its Implementation
    in GHC                                                            180
    A. Takeuchi
27. Representation and Enumeration of Flat Concurrent Prolog
    Computations                                                      197
    Y. Lichtenstein, M. Codish and E. Shapiro
28. A Type System for Logic Programs                                  211
    E. Yardeni and E. Shapiro

Part VI: Embedded Languages                                           245

Introduction                                                          246
29. Object Oriented Programming in Concurrent Prolog                  251
    E. Shapiro and A. Takeuchi
30. Vulcan: Logical Concurrent Objects                                274
    K. Kahn, E.D. Tribble, M.S. Miller and D.G. Bobrow
31. PRESSing for Parallelism: A Prolog Program Made Concurrent        304
    L. Sterling and M. Codish
32. Compiling Or-Parallelism into And-Parallelism                     351
    M. Codish and E. Shapiro
33. Translation of Safe GHC and Safe Concurrent Prolog to FCP         383
    J. Levy and E. Shapiro
34.  Or-Parallel Prolog in Flat Concurrent Prolog                     415
     E. Shapiro
35. CFL --- A Concurrent Functional Language Embedded in a Concurrent
    Logic Programming Environment                                     442
    J. Levy and E. Shapiro
36. Hardware Description and Simulation Using Concurrent Prolog       470
    D. Weinbaum and E. Shapiro

Part VII: Implementations                                             491

Introduction                                                          492
37. A Sequential Implementation of Concurrent Prolog Based on the
    Shallow Binding Scheme                                            496
    T. Miyazaki, A. Takeuchi and T. Chikayama
38. A Sequential Abstract Machine for Flat Concurrent Prolog          513
    A. Houri and E. Shapiro
39. A Parallel Implementation of Flat Concurrent Prolog               575
    S. Taylor, S. Safra and E. Shapiro
References                                                            605
Index                                                                 635

------------------------------

Date: Mon, 28 Dec 87 22:18:44 est
From: rba@flash.bellcore.com (Bob Allen)
Subject: Conference - COIS88 Conference on Office Information Systems


        COIS88 - Conference on Office Information Systems
                        March 23-25, 1988
           Hyatt Rickeys Hotel, Palo Alto, California

Sponsored by: ACM SIGOIS and IEEECS TC-OA
In cooperation with: IFIP W.G. 8.4

SPEAKERS
     Keynote: Terry Winograd
     Banquet: Kristen Nygaard, at Tresidder Union, Stanford University

SESSIONS
     Collaborative Work (Chair: Irene Greif)
     Task Modeling, Planning, and Coordination (Chair: Carl Hewitt)
     Organizational Impact (Chair: Rob Kling)
     Social Research: Methods and Principles (Chair: Tora Bikson)
     Multimedia (Chair: Donald Chamberlin)
     Hypertext and Information Retrieval (Chair: Walter Bender)
     Object-Oriented and Distributed Databases
     Object-Oriented Programming Systems

PANELS
     Hypertext and Electronic Publishing (Chair: Norm Meyrowitz)
     Distributed Artificial Intelligence (Chair: Les Gasser)
     User Design of Interfaces (Chair: Austin Henderson)
     Object-Oriented PS/DBMSs (Chair: Stan Zdonik)

For more information contact:
     Najah Naffah, Bull, 1 Rue Ampere, BP 92 91301, Massy, France - or -
     Robert B. Allen, Bellcore, 2A-367, Morristown, NJ 07960 /
       (201)-829-4315 / rba@bellcore.com

------------------------------

Date: 24 Dec 87 00:14:56 GMT
From: pollux.usc.edu!gasser@oberon.usc.edu  (Les Gasser)
Subject: Conference - DAI Workshop Announcement (2nd time)


        WORKSHOP ANNOUNCEMENT - CALL FOR PARTICIPATION

      8th Workshop on Distributed Artificial Intelligence

               Lake Arrowhead Conference Center

                     Lake Arrowhead, CA.

                       May 22-25, 1988

The 8th Distributed AI Workshop will address the problems of
coordinated action and problem-solving among reasonably sophisticated,
intelligent computational "agents." The focus will be synthetic and
pragmatic, investigating how we can integrate theoretical and
experimental ideas about knowledge, planning, negotiation, action,
etc. in multi-agent domains, to build working interacting agents.

Participation is by invitation only. To participate, please submit an
extended abstract (5-7 double-spaced pages, hard copy only) describing
original work in DAI to the workshop organizer at the address below.
Preference will be given to work addressing basic research issues in
DAI such as those outlined below.  A small number of "interested
observers" will also be invited. If you are interested in being an
observer, please submit a written request to attend (hard copy), with
some justification. Participation will be limited to approximately 35
people.

A number of submitted papers will be selected for full presentation,
critique, and discussion. Other participants will be able to make
brief presentations of their work in less formal sessions. There will
be ample time allowed for informal discussion. All participants should
plan to submit a full paper version in advance, for distribution at
the workshop.

Suggested topics include (but are not necessarily limited to):

  Describing, decomposing, and allocating problems among a
  collection of intelligent agents, including resource allocation,
  setting up communication, dynamic allocation, etc.

  Assuring coherent, coordinated interaction among intelligent agents,
  including allocating control, determining coherence, organization
  processes, the role of communication in coherence, plan
  synchronization, etc.

  Reasoning about other agents, the world, and the state of the
  coordinated process, including plan recognition, prospective
  reasoning, knowledge and belief models, representation techniques,
  domain or situation specific examples, etc.

  Recognizing and resolving disparities in viewpoints, representations,
  knowledge, goals, etc. (including dealing with incomplete,
  inconsistent, and representationally incompatible knowledge) using
  techniques such as communication, negotiation, conflict resolution,
  compromise, deal enforcement, specialization, credibility assessment,
  etc.

  Problems of language and communication, including interaction
  languages and protocols, reasoning about communication acts,
  inter-agent dialogue coherence, etc.

  Epistemological problems such as joint concept formation, mutual
  knowledge, situation assessment with different frames of
  reference, etc.

  Practical architectures for and real experiences with building
  interacting intelligent agents or distributed AI systems.

  Appropriate methodologies, evaluation criteria, and techniques for
  DAI research, including comparability of results, basic assumptions,
  useful concepts, canonical problems, etc.

For this DAI workshop, we specifically discourage the submission of
papers on issues such as programming language level concurrency,
fine-grained parallelism, concurrent hardware architectures, or
low-level "connectionist" approaches.

We intend that revised versions of a number of the best papers from
this workshop will be included in a second monograph on "Distributed
Artificial Intelligence," edited by Mike Huhns and Les Gasser, and
that a workshop proceedings will be published.

Please direct inquiries to the workshop organizer at the address below.
----------------------------------------------------------------
DATES:

Deadline for submission of extended abstracts: February 15, 1988

Notification of acceptance:  March 21, 1988

Full papers due (for distribution at the workshop): April 25, 1988
----------------------------------------------------------------

WORKSHOP ORGANIZER:

Les Gasser
Distributed AI Group
Computer Science Department
University of Southern California
Los Angeles, CA. 90089-0782

Telephone: (213) 743-7794
Internet: gasser@usc-cse.usc.edu

----------------------------------------------------------------
WORKSHOP PLANNING COMMITTEE:

  Miro Benda (Boeing AI Center)       Phil Cohen (SRI)
  Lee Erman  (Teknowledge)            Michael Fehling (Rockwell)
  Mike Genesereth (Stanford)          Mike Georgeff (SRI)
  Carl Hewitt (MIT)                   Mike Huhns (MCC)
  Victor Lesser (UMASS)               Nils Nilsson (Stanford)
  N.S. Sridharan (FMC Corp)
----------------------------------------------------------------
Support for this workshop and for partial subsidy of participants'
expenses has been provided by AAAI; other support is pending.

------------------------------

End of AIList Digest
********************

∂03-Jan-88  2352	LAWS@KL.SRI.COM 	AIList Digest   V6 #2  
Received: from KL.SRI.COM by SAIL.STANFORD.EDU with TCP; 3 Jan 88  23:52:46 PST
Date: Sun  3 Jan 1988 19:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V6 #2
To: AIList@SRI.COM


AIList Digest             Monday, 4 Jan 1988        Volume 6 : Issue 2

Today's Topics:
  Reviews - Spang Robinson Report 3/12 &
    Spang Robinson Supercomputing 1/4,
  Bindings - Neural Net Researchers in Robotics

----------------------------------------------------------------------

Date: Sat, 2 Jan 88 21:50:14 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Spang Robinson Report, Volume 3  No. 12, December 1987

Summary of the Spang Robinson Report, Volume 3  No. 12, December 1987

The lead article is on Expert Systems Tools.
The leaders in installations are VP Expert and TI's Personal Consultant
with 15,000 and 10,000 installed.
Nexpert Object is selling at 125 copies per month.
Fusion has 200 units installed while GoldWorks has 500 customers.
Software A&E's KES II has sold 500 units, generating $1.1 million in revenue.

There is a centerfold table listing characteristics of microcomputer
expert system tools including:
  AIon Development Systems   Prices
  Exsys Professional         Features
  Fusion (First Class)       Computer Supported
  Goldworks                  Hooks to other Languages and File Formats
  Guru                       End User Interface Capabilities
  Level Five                 Inferencing Mechanisms supported
  Nexpert Object
  VP Expert
  KES II
  Personal Consultant

Graphs showing installed base for Expert System Development Tools and
product revenues are also provided.
*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(
Discussion of embedding AI in Conventional Systems

The article includes a history of embeddable AI Software.

CxPert generates C code.
GURU is complete with a relational database, spreadsheet, word processor
and communication software.
TI now offers a package to allow expert systems built with Personal Consultant
or PC Easy to be run on a VAX in C.

(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_(_
Hecht-Nielsen has created a 30-hour videotaped course on neural networks.
This $5,000 set of tapes of a live classroom
is reviewed quite favorably in this issue.

_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+
Shorts:
Intellicorp's product revenues grew from 12.9 million to 16 million.
In the first quarter of fiscal 1988, KEE on the Sun was the most popular
product.
They have 18 million in cash and no debt.

Teknowledge's Copernicus is a core development and delivery system
and a set of AI libraries.  The charge will be $15,000 per user for
workstations and up to $90,000 on mainframes.

The next issue of Daedalus, the journal of the American Academy
of Arts and Sciences, is being devoted to Artificial Intelligence.

Sun's new SPARC microprocessor uses a tagged memory architecture which
is useful for processing AI languages.

Intelligent Technology has signed up $35,000,000 in new contract work in
the last two months.

Arthur D. Little and Carnegie Group have signed a
cooperative marketing agreement.

Texas Instruments and DEC are reorganizing their respective AI groups.

United Airlines and Texas Instruments have developed a Gate Assignment Display
System that interfaces with Unimatic, a flight information database.

Combustion Engineering is using Palladian's Operations Advisor
for manufacturing problems.

Infomart is using GURU, ART and Intellicorp in its factory demonstration.
Applications are shipping route automation, product configuration and
production scheduling.

Richard Fikes has moved from Intellicorp to Price Waterhouse Technology
where he will be Principal Scientist.

Barry Plotkin is now founder and President of Coherent Thought.

By the way, Spang Robinson has a report evaluating PC expert systems.

------------------------------

Date: Sun, 3 Jan 88 13:07:31 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Spang Robinson Report, December 1987, Volume 1, No. 4

Summary of Spang Robinson Report on Supercomputing and Parallel Processing,
December 1987, Volume 1, No. 4

The lead article is on parallel, tightly coupled shared-memory systems.

BBN has installed 90 Butterflys and 50 of its earlier system, the
Pluribus.  The Butterfly Plus is an upgrade path for existing users at
$6,000 per node.  Initial cost is $164,000 for ten processors, $429,000
for 30, and $1.4 million for 100 processors.
The GP1000 is a UNIX system based on the Mach 1000.  The RT1000 is
for real time.
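
As a rough illustration of the pricing above, the per-processor cost of each
quoted Butterfly Plus configuration can be computed directly.  A minimal
sketch (Python); the per-node figures are back-of-the-envelope numbers
derived from the quoted prices, not figures from BBN:

  # Per-processor cost implied by the quoted configuration prices.
  configs = {10: 164_000, 30: 429_000, 100: 1_400_000}  # processors -> total price ($)
  for nodes, price in sorted(configs.items()):
      print(f"{nodes:3d} processors: ${price:,} total, about ${price / nodes:,.0f} per node")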

Sequent has 350 installations.  The system uses Intel 80386 processors.
The S27 supports 2 to 10 processors.  The S81 can support up to 30
processors and 1000 users.
Cost ranges from $89,000 to $800,000.

Encore has sold over 100 Multimax systems and has won a $10.7 million
DARPA contract called Ultramax.  Prototype Ultramax systems are shipping.
The system has a 100 million byte per second bus.  The Multimax 320 uses a
National Semiconductor 32332 with optional Weitek 1164/1165 floating
point set.  A Multimax 320 with twenty processors costs $900,000
and supports 400 users.  Software includes AT&T and 4.2BSD based OS's
and Quadratron's office automation with Oracle database to follow.
Compilation of the 330,000-line
Ada test suite required 3.5 hours on a Multimax 120 with 16 processors
and 64 MB of memory.


Flexible Computer is now using a 68020 microprocessor as its base and
will have an optional Weitek 1164/1165 unit.  Flexible allows up to
20 processors per cabinet and a maximum of 1024 cabinets.  A 40-processor
system is being used by MCC in its database research.
A four-node system costs $200,000 and a twenty-node system will be $625,000.

*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(
Article on the Problems at MIT with the potential order for a Honeywell-NEC
processor.

Honeywell-NEC was to install a NEC SX-2 at MIT.  However, it would continue
to own the processor while MIT would pay for its time at reduced rates.
The Acting Secretary of Commerce wrote to MIT saying "it had no objection
to MIT buying a Japanese supercomputer, but if a Japanese company
'dumped' a supercomputer at MIT, it would investigate the you know what
out of the situation."  Honeywell-NEC and Amdahl then withdrew their offers.
Honeywell-NEC said, "We ended our offer for reasons having to do with
ongoing trade negotiations between the United States and Japan."

&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)&)

Article on the ETA Systems low-end announcement.

Six systems have been sold; four are in contract/letter of intent phase.

The ETA-10P, a $1 million machine, achieves 23 megaflops on the LINPACK
benchmark; the Cray 1S scores 12 megaflops on the same test.
&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*
Article on Parallel processing in Europe.

The GMD of Germany will be setting up supercomputer centers, networking,
departmental machines and experimental parallel processing.
!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#
Benchmarks on a Plasma Code

                                             MFLOPS     MFLOPS/million DM
IBM 3090 VF                                      23.0            4.6
Cray-2                                          283.3            5.7
Siemens (Fujitsu) VP-200                        302.5           15.1
TX3-80387                                       124.8           62.4
TX3-8087 and Weitek Unit                        357.5           143.0

The TX3 is a binary-tree-based MIMD system built around the Intel 80386
with an optional Weitek floating-point unit.
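
To make the price/performance column concrete: assuming the second column is
simply MFLOPS divided by the system price in millions of DM, the implied
prices can be backed out of the table itself.  A minimal illustrative sketch
(Python); the prices below are derived from the table's own ratios and are
not quoted in the report:

  # Back out the implied system price, assuming
  # MFLOPS/million DM = MFLOPS / (price in millions of DM).
  table = {
      "IBM 3090 VF":              (23.0,   4.6),
      "Cray-2":                   (283.3,  5.7),
      "Siemens (Fujitsu) VP-200": (302.5, 15.1),
      "TX3-80387":                (124.8, 62.4),
      "TX3-8087 and Weitek Unit": (357.5, 143.0),
  }
  for name, (mflops, ratio) in table.items():
      print(f"{name:26s} implied price ~ {mflops / ratio:5.1f} million DM")
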
_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+
Shorts:

Multiflow has delivered five TRACE 7/200 systems.

Concurrent has acquired the Navier-Stokes machine technology from Princeton
University.

Cray has installed a fifteen million dollar computer at MITI (Japan).

Tandem is entering the computer integrated manufacturing business.

Scientific Computer Systems has set up a subsidiary in Paris and
has installed a system at Ecole Polytechnique in France.

Cray has earned over $500 million in revenue and has announced a program
to buy back ten percent of its outstanding shares.

Alliant's revenue is $14.2 million as opposed to $8.6 million the previous
year.

Tandem Computers has revenue just above one billion dollars a year.

Engineering Systems International has ported its PAM-CRASH software for
analyzing the crashworthiness of autos and other vehicles to the Convex.

Informix has announced relational database products for the Cray-2.

------------------------------

Date: Wed, 30 Dec 87 06:59 PST
From: nesliwa%nasamail@ames.arpa (NANCY E. SLIWA)
Subject: Neural net researchers in robotics


Thanks to all of you who responded to my request for names of researchers
doing connectionist research in the robotics domain. As I have had several
requests for copies of that list, I am posting it here. A few disclaimers:
I merely took the names sent to me, weeded out the duplicates, and put them
in alphabetical order. Formats and cases are dissimilar for several entries.
There is no guarantee that all the people listed are working in the robotics
domain; in fact, I doubt that is the case. I put *** by the names that were
repeatedly suggested to me, about 10 from the list of ~75.

I have also had several requests about the ACC session on robotic applications
of connectionist systems. I will post that in a subsequent message.

Nancy Sliwa
MS 152D
NASA Langley Research Center
Hampton, VA 23665-5225
804/865-3871
nesliwa%nasamail@ames.arpa

Dr. Albert Ahumada
NASA Ames Research Center
415/694-6257

James Albus
National Bureau of Standards

Aleksander, Igor (UK)
Imperial College of Sci.& Technol.
Department of Computing, 180 Queens Gate
London SW7 2BZ, Tel.:(1)5895111 ext.4985

de Almeida, Luis B. (PORTUGAL)
University of Lisboa
Inst. Eng. Comp. Systems, Rua Alves Redol 9
P-1000 Lisboa, Tel.:(1)544607

Chuck Anderson   cwa@gte-labs  (csnet)
connectionist methods for learning to balance an inverted pendulum
GTE Laboratories Inc.
40 Sylvan Road
Waltham, MA  02254
617-466-4157

Anderson, Dana Z. (USA)
University of Colorado
Department of Physics
Boulder, CO 80309, Tel.:(303)492-5202
Comp.Net: DANA@JILA.BITNET

Anninos, Photios (GREECE)
University of Thraki
Dept. Medicine, Neurol.& Med. Physics
G-68100 Alexandroupolis, Tel.:(551)25292

Arbib, Michael A. (USA)                 ***
University of Southern California
Computer Science Dept., University Park
Los Angeles, CA 90089-0782, Tel.:(213)743-6452
Comp.Net: ARBIB@USC-CSE.USC.EDU.CSNET

Barhen, Jacob (USA)                     ***
Oak Ridge National Laboratory (moved to JPL/CalTech)
P.O.Box X
Oak Ridge, Tennessee 37831, Tel.:(615)5746162
Comp.Net: JBY@ORNL-MSR.ARPA

Andrew Barto
connectionist methods for learning to balance an inverted pendulum
GTE Laboratories Inc.
40 Sylvan Road
Waltham, MA  02254
617-466-4157

George Bekey
grasping, connectionist models of material handling using multiple mobile
robots
bekey@usc-cse.usc.edu

Beroule, Dominique (FRANCE)
LIMSI-CNRS
Lab. Inform. Mecan.& Sci. l'Ing.
F-91406 Orsay, Tel.:(16)9418250

Berthoz, Alain (FRANCE)
CNRS
Laboratoire de Physiol. Neurosensorielle
15 rue de l'Ecole de Medicine
F-75270 Paris, Tel.:(1)4329-6154

Bienenstock, Elie (FRANCE)
Universite de Paris-Sud
Laboratoire de Neurobiol. du Developement
Centre d'Orsay - Bat. 440
F-91405 Orsay, Tel.:(16)941-7825
Comp.Net: UNHA002@FRORS12.BITNET

Dan Bullock
Center for Adaptive Systems
Department of Mathematics
Boston University
Boston, MA  02215

Caianiello, Eduardo R. (ITALY)
Universita di Salerno
Dipartimento di Fisica Teorica
I-84100 Salerno, Tel.:(89)878299

Dr. Gail Carpenter
Northeastern University
Department of Mathematics, 504LA
360 Huntington Avenue
Boston, MA 02115

Dr. Leon Cooper
Brown University
Center for Neural Science
Providence, RI 02912

Cotterill, Rodney M. J. (DENMARK)
Technical University of Denmark
Div. Molecular Biophysics, Building 307
DK-2800 Lyngby, Tel.:(2)882488

Daunicht, Wolfgang (W.GERMANY)
Universität Düsseldorf
Dept. Biophysics, Universitätsstr. 1
D-4000 Düsseldorf 1, Tel.:(211)311-4538
Comp.Net: DAUNICHT@DD0RUD81.BITNET

Dreyfus, Gerard (FRANCE)
ESPCI
Lab. d'Electronique, 10 rue Vauquelin
F-75005 Paris, Tel.:(1)3377700
Comp.Net: UIFR000@FRORS31.BITNET

Eckmiller, Rolf (W.GERMANY)
Universität Düsseldorf
Dept. Biophysics, Universitätsstr. 1
D-4000 Düsseldorf 1, Tel.:(211)311-4540
Comp.Net: ECKMILLE@DD0RUD81.BITNET

Feldman, Jerome A. (USA)                ***
University of Rochester
Computer Science Department
Rochester, NY 14627, Tel.:(716)275-5492
Comp.Net: FELDMAN@ROCHESTER.ARPA

FUKUSHIMA, KUNIHIKO (JAPAN)             ***
NHK
BROADCASTING SCIENCE RESEARCH LAB.
1-10-11, KINUTA, SETAGAYA
TOKYO 157, JAPAN
TEL.:(3)415-5111

GARTH, SIMON (UK)
TEXAS INSTRUMENTS LTD.
MANTON LANE, M/S 4223
BEDFORD MK41 7PA
TEL.:(234)223843

John Gilmore
Georgia Tech Research Institute
image processing

Nigel Goddard
Recognition from motion, motion control
goddard@venera.isi.edu

Dr. Stephen Grossberg                   ***
Center for Adaptive Systems
Room 244
111 Cummington Street
Boston University
Boston, MA   02215

HARTMANN,KLAUS-PETER(W.GERMANY)
UNIVERSITAET PADERBORN
ELECTRICAL ENGINEERING
POHLWEG 7
D-4790 PADERBORN
TEL:(5251)601-2206

HECHT-NIELSEN, ROBERT (USA)
NEUROCOMPUTER CORP.
5893 OBERLIN DRIVE
SAN DIEGO, CA 92121
TEL.: (619)546-8877

HERTZ, JOHN (DENMARK)
NORDITA
TEORETISK ATOMFYSIK
BLEGDAMSVEJ 17
DK-2100 KOBENHAVN 0
TEL.:(1)421616

Geoffrey Hinton                         ***
University of Toronto (was at CMU)

HOFFMANN, KLAUS-PETER (W.GERMANY)
UNIVERSITAET BOCHUM
DEPT.GEN.ZOOLOGY
UNIVERSITAETSSTR.150
D-4630 BOCHUM
TEL.:(234)700-4364

HUBERMAN, BERNARDO A. (USA)
XEROX PALO ALTO
RESEARCH CENTER
3333 COYOTE HILL ROAD
PALO ALTO, CA 94304
TEL.:(415)494-4147
COMP.NET: HUBERMAN@XEROX.ARPA

Thea Iberall                            ***
Hartford Graduate Center (currently at Toronto for the semester)
neural networks for modeling human prehension

JACKEL, LARRY D.
AT & T BELL LABS.
ROOM 4D-433
HOLMDEL, NJ 07733
TEL.:(201)949-7773

Mike Jordan
Univ. of Massachusetts, Amherst
(413) 545-1596

Dr. Pentti Kanerva                      ***
NASA Ames Research Center
415/694-6922
ARPA: kanerva@riacs.edu
UUCP: ames!riacs!kanrva

KOCH, CHRISTOF (USA)
CALTECH
DIVISION OF BIOLOGY, 216-76
PASADENA, CA 91125
TEL.:(818)356-6855
COMP.NET:KOCH@HAMLET.BITNET

KOENDERNIK, JAN J. (NETHERLANDS)
RIJKSUNIVERSITEIT UTRECHT
FYSISCH LAB.
PRINCETONPLEIN 5
NL-3508 TA UTRECHT
TEL.:(30)533985

KOHONEN, TEUVO (FINLAND)
HELSINKI UNIV. OF TECHNOLOGY
DEPT. OF TECHNICAL PHYSICS
SF-02150 ESPOO 15
TEL.:(0)460144

KORN, AXEL (W.GERMANY)
FRAUNHOFER-INSTITUT
INFORMATIONS- UND DATENVERARBEITUNG
SEBASTIAN-KNEIPP-STR. 12-14
D-7500 KARLSRUHE 1
TEL.:(721)60911

V. D. MALSBURG, CHRISTOPH (W.GERMANY)
MPI BIOPHS. CHEMIE
DEPT. NEUROBIOLOGY
NIKOLAUSBERG
D-3400-GOETTINGEN
TEL.:(551)201-623
COMP.NET: MPC07M AT DGOGWD01.BITNET

MAY, DAVID (UK)
INMOS LTD.
1000 AZTEC WEST, ALMONDSBURY
BRISTOL BS124 SQ
TEL.:(454)616616

Tom Miller
Univ. of New Hampshire, Durham (EE Dept)

MOORE, WILL R. (UK)
OXFORD UNIVERSITY
DEPT. OF ENGINEERING SCIENCE
PARKS ROAD
OXFORD OX1 3PJ
TEL.:(865)273000

John Nagle
adaptive control of a skidding autonomous vehicle
Center for Design Research
Stanford
415-856-0767
jbn@glacier.stanford.edu

NIJMAN,A.(LOEK)J.(NETHERLANDS)
PHILIPS RESEARCH LABS.
WB3,P.O.BOX 80 000
NL-5600 JA EINDHOVEN
TEL.:(40)742558

ORBAN, GUY (BELGIUM)
KATHOL. UNIVERSITY LEUVEN
LAB. NEURO- AND PSYCHOPHYSIOLOGY
B-3000 LEUVEN
TEL.:(16)215740

PALM, GUENTHER (W.GERMANY)
MPI FUER BIOLOGISCHE KYBERNETIK
SPEMANNSTR. 38
D-7400 TUEBINGEN 1
TEL.:(7071)601551
COMP.NET:DKWA001@DTUZDV5A.BITNET

PATARNELLO, STEFANO (ITALY)
IBM ECSEC
VIA GIORGIONE 159
I-00147 ROME
TEL.:(6)54861
COMP.NET: PATARNEL AT IECSEC.BITNET

PELLIONISZ, ANDRAS J. (USA)             ***
NEW YORK UNIVERSITY
DEPT. PHYSIOLOGY & BIOPHYSICS
550 FIRST AVENUE
NEW YORK, NY 10016
TEL.:(212)340-5422

PHILLIPS, WILLIAM A. (UK)
UNIVERSITY OF STIRLING
DEPT. PSYCHOLOGY
STIRLING FK9 4LA
TEL.:(786)73171

Gil Pitney
robotic path planning
UCSB Comp. Sci. Dept.
(805)961-8221.

REEKE, GEORGE N.
ROCKEFELLER UNIVERSITY
1230 YORK AVENUE
NEW YORK, NY 10021
TEL.:(212)570-7627
COMP.NET:CDRNI@CUNYVM.BITNET

SAMI, MARIAGIOVANNA (ITALY)
POLITECNICO DI MILANO
DEPT. ELECTRONICS
PLAZA L. DA VINCI 32
I-20133 MILANO
TEL.:(2)2367241

SCHULTEN, KLAUS (W.GERMANY)
TU MUENCHEN
PHYSIK-DEPARTMENT
JAMES-FRANCK-STR.
D-8046 GARCHING B. MUENCHEN
TEL.:(89)3209-2368

V. SEELEN, WERNER (W.GERMANY)
JOHANNES GUTENBERG UNIVERSITAET
DIVISION OF BIOPHYSICS
SAARSTR. 21
D-6500 MAINZ
TEL.:(6131)39-2471

SEJNOWSKI, TERRENCE J. (USA)            ***
JOHNS HOPKINS UNIVERSITY
DEPARTMENT OF BIOPHYSICS, JENKINS HALL
BALTIMORE, MD 21218
TEL.:(301)338-8687

John Shepanski
TRW, MS O2/1779
One Space Park
Redondo Beach, CA, 90278

SINGER, WOLF (W.GERMANY)
MPI FUER HIRNFORSCHUNG
DIV. NEUROPHYSIOLOGY
DEUTSCHORDENSTR. 46
D-6000 FRANKFURT 71
TEL.:(69)6704-218

Dr. Terrence Smith
robotic path planning
UCSB Comp. Sci. Dept.
(805)961-8221.

Paul Scott
Univ. of Michigan, Ann Arbor (ECE Dept.)

Don Soloway
neural nets for robot manipulator kinematics
MS 152D
NASA Langley Research Center
Hampton, VA 23665-5225
804/865-3871

Rich Sutton
GTE Labs
(617) 466-4133
rich@gte-labs.csnet

TANK, DAVID W. (USA)
AT & T BELL LABS
MOLEC BIOPHYS. RES. DEPT.
600 MOUNTAIN AVENUE
MURRAY HILL, NJ 07974
TEL.:(201)582-7058

Dr. Richard F. Thompson
Stanford University
Department of Psychology
Bldg. 4201 -- Jordan Hall
Stanford, CA 94305

TORRAS, CARME (SPAIN)
UNIV. DE POLITECH. DE CATALONIA
INSTITUTE FOR CYBERNETICS, DIAGONAL 647
E-08028 BARCELONA
TEL.:(3)249-2842

TRELEAVEN, PHILIP (UK)
UNIVERSITY COLLEGE
DEPT. OF COMPUTER SCIENCE
GOWER STREET
LONDON WC1E 6BT
TEL.: 13877050
COMP.NET: TRELEAVEN@CS.UCL.AC.UK.ARPA

WALLACE,DAVID J.(UK)
UNIVERSITY OF EDINBURGH
DEPT.PHYSICS
MAYFIELD ROAD
EDINBURGH EH9 3JZ
TEL:(31)6671081 ext.2850

Dr. Andrew B. Watson
NASA Ames Research Center
415/694-5419

Dr Allen Waxman
Laboratory for Sensory Robotics
Boston University
waxman@buengc.bu.edu

WEISBUCH, GERARD (FRANCE)
ECOLE NORMALE SUPERIEURE
PHYSIQUE DES SOLIDES
24 RUE LHOMOND
F-75231 PARIS
TEL.:(1)43291225 EXT.3475

ZEEVI, JOSHUA Y. (ISRAEL)
TECHNION ISRAEL INST. TECHNOL.
DEPT. OF ELECTRICAL ENGINEERING
HAIFA 32000, ISRAEL
TEL.: (4)293111

ZUCKER,STEVEN(CANADA)
MCGILL UNIVERSITY
DEPT.ELECTRICAL ENG.
MONTREAL,P.Q.
TEL:(514)398-7134
COMP.NET:ZUCKER@SRI-IU.ARPA

ZUSE, KONRAD (W.GERMANY)
IM HASELGRUND 21
D-6518 HUENFELD
TEL.:(6652)2928

------------------------------

End of AIList Digest
********************

∂09-Jan-88  0046	LAWS@KL.SRI.COM 	AIList V6 #3 - Logics Bulletin, TINLAP3, Methodology 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 Jan 88  00:46:31 PST
Date: Fri  8 Jan 1988 22:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #3 - Logics Bulletin, TINLAP3, Methodology
To: AIList@SRI.COM


AIList Digest            Saturday, 9 Jan 1988       Volume 6 : Issue 3

Today's Topics:
  Queries - Software Engineering & Expert System Shell Survey &
    TRC Expert System & Planning for Games & Expert Systems &
    Commercial Products Based on Neural Nets & Spang Robinson Report &
    sci.psych,
  Announcements - Non-Classical Logics Bulletin &
    TINLAP3 Position Papers & Error in Concurrent Prolog Book Announcement,
  Philosophy - Methodology & Biological Models

----------------------------------------------------------------------

Date: Thu, 31 Dec 87 11:03 PST
From: andy trumble <trumble@nprdc.arpa>
Subject: Software Engineering


Request for titles:

I am looking for references on software engineering techniques for
object-oriented programming.  However, I'll settle for anything that
discusses good programming practice with Lisp and/or Flavors.  I'll be
happy to pass on information to interested parties.

Thanks
Andy Trumble
Trumble@NPRDC

------------------------------

Date: Mon,  4-JAN-1988 15:20:31.43 EST
From: Tony Stylianou <styliano%KENTVMS.BITNET@CUNYVM.CUNY.EDU>
Subject: Survey announcement

                 Center for Information Systems
                 Graduate School of Management
                   Kent State University


                 * SURVEY ANNOUNCEMENT NOTICE *
                Request for Survey Participation
                --------------------------------

The Kent State University Center for Information Systems is conducting a study
on the use of Expert System Shells.  The main objectives of this research are:
       1.  To categorize ES Shell applications;
       2.  To identify success criteria for each application category; and
       3.  To develop an ES Shell Evaluation Model

The results of our study will be presented at AI sessions of Information
Systems Conferences and will also be submitted for publication in MIS
and ES journals.  A summary of the results will be made available to all
recipients upon request.

The questionnaires will be mailed out around the end of January.  We hope
to have the completed forms back within the month of February.

To obtain a questionnaire please send a request with your mailing address
to:
         Tony Stylianou
         Center for Information Systems
         Graduate School of Management
         Kent State University
         Kent, Ohio 44242
         (216) 672- 2750

or via BITNET to:
         BITNET"STYLIANO@KENTVMS.BITNET"

------------------------------

Date: 4 Jan 88 18:56:30 GMT
From: grc!don@csd1.milw.wisc.edu  (Donald D. Woelz)
Subject: TRC Expert System

My recent request for information on a PD expert system led to
responses indicating that a piece of software called TRC might
be to my liking.  This was posted in Volume 3 of comp.sources.unix
but that was before our machine was on the network.

Does anyone out there have a copy of this software that I can
obtain through a UUCP connection?

Please email me with information on how I might obtain this
software.

Thanks to all those who responded to my previous request.
--
Don Woelz              {ames, rutgers, harvard}!uwvax!uwmcsd1!grc!don
GENROCO, Inc.                              Phone: 414-644-8700
205 Kettle Moraine Drive North             Fax:   414-644-6667
Slinger, WI 53086                          Telex: 6717062

------------------------------

Date: 7 Jan 88 09:35:38 GMT
From: moran@YALE-ZOO.ARPA  (William L. Moran Jr.)
Subject: Planning for games

I'm looking for references to planning as it relates to game playing,
for games other than chess. I would appreciate any references. Might
as well make this via E-mail, and I'll summarize if there is any
interest. Thanks.


                          William L. Moran Jr.
moran@{yale.arpa, cs.yale.edu, yalecs.bitnet}  ...{ihnp4!hsi,decvax}!yale!moran

Stories of tortures
Used by debauchers,
Lurid, licentious, and vile,
Make me smile. :)               Tom Lehrer

------------------------------

Date: 5 Jan 88 14:43:26 GMT
From: wp3b01!rfc@uunet.uu.net  (4115)
Subject: Expert Systems

Subject  : Expert Shell Analysis

Request  : Share mutual knowledge, experience, problems, and expertise
           in using ES in the area of software development.

Discuss 1:
           I am currently developing an in-house seminar to promote
           interest in and use of ES programs for analysis of software
           development, prototyping (vaporware), real-time data
           analysis, and decision support for large batch processes.
Discuss 2:
           I am most concerned with the role of the knowledge engineer
           and with what that individual requires in order to function
           most effectively.  I see that this individual may very well
           be the most critical link in the process of applying an ES
           Shell to a particular problem.
Discuss 3:
           I intend to provide details of a sample shell that give
           insight into the internal functions of the program: how
           rules are read, certainty factors, probability, backward
           chaining, forward chaining, etc.  The questions asked will
           probably exceed my ability to answer them all.  I hope that
           an in-depth look at ES will encourage others to reach out
           and get involved with this style of programming.
Summary  :
           If you feel inclined to share in this endeavor, I will be
           most happy to provide you with the results of my efforts.
           Please feel free to call, write, or mail.
Conclude :
           Thank you for your kind consideration of this note.
Signature:
           Robert F. Crandall
           1 No. Lexington Ave
           White Plains, NY   10601
           914-397-4115

------------------------------

Date: Thu, 7 Jan 88 05:33 PST
From: nesliwa%nasamail@ames.arc.nasa.gov (NANCY E. SLIWA)
Subject: Commercial products based on neural nets?


I've had a request for information about the existence of any commercial
products based on neural net technology: not tools for developing neural
net applications, like the HNC and Sigma neurocomputers, but actual products
that use neuromimetic approaches.

I've heard or read, some time ago, about two things:
        (1) a California-based product for processor board layout
        (2) a McLean, VA-based company that has been selling neural-based
                products since the 60's

Does anyone know the specifics of these items, and/or especially any
other examples? Please respond to me directly, and I'll summarize to
the list. Thanks!

Nancy Sliwa
MS 152D
NASA Langley Research Center
Hampton, VA 23665-5225
804/865-3871

nesliwa%nasamail@ames.arpa        or         nancy@grasp.cis.upenn.edu

------------------------------

Date: 4 Jan 88 13:48 -0600
From: Mike Attas <attas%wnre.aecl.cdn%ubc.csnet@RELAY.CS.NET>
Subject: Spangggg!

A quick question:  how does one go about obtaining these Spang-Robinson
reports (e.g. Vol.3, no.12 on Expert Systems tools and the evaluation of
PC Expert Systems)?  What do they cost?    I'm at attas@wnre.aecl.cdn.

Thanks for your help.                       Michael Attas
Atomic Energy of Canada
Pinawa, Manitoba  R0E 1L0

  [I've forwarded this to the publisher, LouRobinson@SRI.COM. -- KIL]

------------------------------

Date: 8 Jan 88 08:21:34 GMT
From: uhccux!todd@humu.nosc.mil  (The Perplexed Wiz)
Subject: time for sci.psych???


It's been at least three or four years since I last saw an attempt to
create a newsgroup devoted to psychology, so I thought I'd test the
waters once more.  A few weeks ago I asked the readers of sci.med
whether they would be interested in a new group called 'sci.psych',
and I received about a dozen 'yeas' and no 'nays' in response to that
query.

If created, I see 'sci.psych' as a group for discussions that are in
the cracks between: comp.ai, comp.cog-eng, misc.kids, misc.legal,
news.groups, sci.bio, sci.lang, sci.math.stat, sci.med, sci.misc,
sci.philosophy.tech, sci.research, soc.college, soc.culture.misc,
soc.misc, talk.philosophy.misc.

'sci.psych' would be a forum for discussions on topics such as visual
illusions and their explanations; psychopathology [etiology,
diagnosis, treatment, etc.]; cognitive development theories and their
implications for child rearing, education, and artificial
intelligence; the difference between the legal definition of insanity
and the psychological "definitions" of mental disorder; animal
communication; the problems in selecting appropriate statistical tests
in social-behavioral studies and experiments; intelligence testing;
cross-cultural differences...etc.

If you are interested in seeing a newsgroup where discussions such as
these can take place, please MAIL ME YOUR VOTES (please, do not post
your votes to any of the newsgroups!) before February 8.  The USENET
voting convention requires that there be at least 100 more 'yea' votes
than 'nay' votes, so please vote if you would like to see 'sci.psych'
created.

[BTW: I was told that there is an INET group called 'sci.psychology'
which some USENET sites already receive.  If 'sci.psych' is created, I
would be interested in seeing an INET feed from 'sci.psychology' into
'sci.psych'.]

Please mail your votes to one of the addresses given below.
Thank you...todd

--
Todd Ogasawara, U. of Hawaii Faculty Development Program
UUCP:           {ihnp4,uunet,ucbvax,dcdwest}!sdcsvax!nosc!uhccux!todd
ARPA:           uhccux!todd@nosc.MIL            BITNET: todd@uhccux
INTERNET:       todd@uhccux.UHCC.HAWAII.EDU

------------------------------

Date: 6 Jan 88 19:09:47 GMT
From: mcvax!inria!geocub!farinas@uunet.uu.net  (Luis Farinas)
Subject: Non-classical logics bulletin announcement


                        BULLETIN ANNOUNCEMENT
                        =====================

         -The applications of non-classical logics in Artificial Intelligence
have become more and more popular.

         -Many automated proof procedures have been developed for these logics.

         -There are no natural means of exchanging information quickly about
them (e.g. epistemic logics, temporal logics, deontic logics, logics of theory
change, non-monotonic logics ...).

         Therefore:
           We plan to publish an informal bulletin on applied non-classical
logics and proof methods for them, containing:
        (1)  short communications about current research work (1-2 pages)
        (2)  abstracts of papers
        (3)  presentations of research groups and projects
        (4)  information about seminars, workshops, and conferences.

        If you are interested in this enterprise, please send the relevant
 information to one of us.  If you would like to receive this Bulletin (free
 of charge), please send your name and address to one of us and we shall put
 you on the mailing list.

        Please distribute this information among your colleagues.


Ewa ORLOWSKA                    Luis FARINAS DEL CERRO
Polish Academy of Sciences      Universite Paul Sabatier
P.O. Box 22, 00-901 Warsaw      Langages et Systemes Informatiques
Poland                          31062 Toulouse cedex - France

                                e-mail:  geocub!farinas  on uucp

------------------------------

Date: Wed, 6 Jan 88 20:49:02 est
From: walker@flash.bellcore.com (Don Walker)
Subject: TINLAP3 Position Papers available from ACL

TINLAP-3 POSITION PAPERS AVAILABLE FROM ACL

The Association for Computational Linguistics has just published the
Position Papers prepared for TINLAP-3, the Third Conference on
Theoretical Issues in Natural Language Processing.  TINLAP-3 was
organized by Yorick Wilks and held at New Mexico State University, 7-9
January 1987.  There were sessions on "Words and World
Representations," "Unification and the New Grammatism," "Connectionist
and Other Parallel Approaches to Natural Language Processing,"
"Discourse Theory and Speech Acts," "Why Has Theoretical NLP Made so
Little Progress?," "Formal Versus Common Sense Semantics," "Reference:
The Interaction of Language and the World," "Metaphor," "Natural
Language Generation."  Many of the papers in this proceedings were
revised by their authors following the meeting, so it is different from
the one distributed there.  The price is $20 for ACL personal and
student members, $30 for individual nonmembers, and $40 for
institutions.  Copies are available from the ACL Office:  D.E. Walker
(ACL), Bell Communications Research, 435 South Street - MRE 2A379,
Morristown, NJ 07960-1961, USA.

------------------------------

Date: Mon, 4 Jan 88 9:50:04 PST
From: Kahn.pa@Xerox.COM
Subject: Error in Concurrent Prolog Book Announcement

Message from Udi Shapiro

If this can still be fixed, then the catalog number for vol. 2 is
19267-5, and not 19257-5,
and the table of contents should read:
Foreword by Kazuhiro Fuchi, not just Foreword.
[...]
        Udi

------------------------------

Date: 30 Dec 87 12:10 PST
From: hayes.pa@Xerox.COM
Subject: Methodology

I'm sure this is not going to convince Mike Sellers, but his comments suggest
an obvious response.  Of course it's true that

> the problem for most active researchers is one of scale: you cannot
> possibly hope to create a program that models human cognitive
> processing, and you have to get *something* running, so you set your
> sights a little lower

The issue is whether one feels that the best approach to understanding human
cognition is to approximate it by looking at parts of it, as in `classical AI',
or by looking at the behavior

> of a flatworm or a sea slug

Personally, I put my money on the former.

Pat Hayes

------------------------------

Date: 30 Dec 87 10:34:00 EDT
From: wallacerm@afwal-aaa.arpa
Reply-to: <wallacerm@afwal-aaa.arpa>
Subject: Biological Models, Their Real Value for AI

In attempting to reconcile the vast amount that is being said on the topic of
biological modeling, I hear no mention of the de facto requirements of all
living organisms.  These are: greed, fear, pain, and pleasure.  From my
observations of the experiments performed on vertebrates and the resolution of
the experimenters' results, I find that these drives are foremost in control
of every situation the subject undergoes.  I am not a terminology bigot, so if
you have an equivalent word that carries the same semantic content as greed,
fear, pain, or pleasure, substitute it for the remainder of this squib!

To elaborate, each organism -- once given that spark of electrochemical
activity -- demonstrates a multilevel control structure that is geared for
immediate survival.  Fright of the new environment (which is often cooler than
a womb or clutch of eggs) stimulates the next control structure of search for
heat.  The pain of hunger causes the search for food.  Once heat and food are
found, the characteristics of pleasure and greed (desire for all the food and
heat that is available) start.  By now you've noticed that I've glossed over
what are called the instincts, the innate abilities of the organism that
control all its electromechanical operations.  I will return to these, but
first it is important to concentrate on the characteristics enumerated above.
An organism's life is constantly, intrusively altered by its "mother."  I put
mother in quotes because it is generally the female of the species, but it
doesn't always have to be female past birth or the laying of the egg
(reptiles, fish, birds).  This is an important concept that we often fail to
recognize.  This is an extremely interactive, phased development of the
organism's learning process; its "self" is intruded upon and it learns to
accept stimuli from a source other than its inanimate environment.

With these four characteristics, the phases of newborn and infant pass with
much teaching from the organism's "mother" and environment (niche).  These
phases have no definite time span, and vary per phylum, family, genus, and
species.  What is taught, of course, varies, and is highly dependent on the
environment.  Once the organism has achieved a certain autonomous status, its
basic four characteristics drive it through its life.

Returning now to the instincts: I have noticed that the AI community puts an
expectation on its silicon-based electronic, electromechanical "protoplasm,"
and that is that it has connections but no initial "programming!"  I feel
that this is quite silly, as all of the expected higher mental functions are
formed by experience and interaction with the "mother" and environment
(ignoring societal interaction for the moment).  The state of the art is
still at the instincts stage.  We are therefore expecting a non-instinctive,
non-greedy, non-pain-feeling, non-pleasure-feeling, non-fearing lump of
connections to "boot," via our programming, to a state past the infant phase!
I feel that this is a gross error in our hypotheses about getting
non-biological machines to learn.  In our experiments here with a one-neuron
model, the four characteristics proved to be crucial to the development of an
"organism" that could learn.

Turning our attention to the task at hand -- the creation of expert systems,
consciousness, and generally a context-adaptive decision-making entity -- we
must first concentrate on the learning.  To do this we must insert the de
facto characteristics.  Easily said; now, how does one do such a thing?
Remembering that we are in fear of pain from our greedy, pain- or
pleasure-giving taskmasters (a.k.a. "mom") for whom we work, this can be a
time and resource sink in a development process.  Hence we tend not to
concentrate on putting in a baseline of characteristics, but instead try to
get our "brain-damaged" systems to exhibit some set of outputs for some set
of inputs.  First there must be a baseline.  "Baseline" is defined as "the
point at which infancy ends and autonomy begins."

We are all versed in the concepts, operation, and use of virtual,
multiprocessing, multiprocessor systems (any function not supported by the
CPU).  To quote the comic strip character Pogo, "We have met the enemy, and
they are us!" because we have these physical items innately.  Any
neurologists care to comment?  Our second definition is computer.  "Computer"
is defined as "the silicon-based, electrically stimulated machine that has
computational and logic (Boolean expected, others accepted) capability."  To
baseline our computer in the necessary characteristics is quite a task.  Here
I am going to stop this message, as I hope that it will stimulate a response
from the reading community.  I have my own opinions and results, but so as
not to prejudice the respondents' replies, I will not include them yet.
Instead I will leave the following question for your rumination.

How is a computer to be baselined with the characteristics of greed, fear,
pain, and pleasure for it to learn a higher task/function?
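
[One hypothetical way to picture such a baseline, sketched in Python as an
editorial illustration (not the AFWAL one-neuron model mentioned above): a
single artificial unit whose weight changes are gated by built-in pleasure
and pain signals supplied by its environment, so that the drives exist before
any task is learned.  The reward scheme and all names below are invented.]

    # Hypothetical single "neuron" whose learning is gated by built-in drives:
    # pleasure reinforces the action just taken, pain pushes toward its
    # opposite.  Purely illustrative; not the AFWAL one-neuron experiment.
    import random

    weights = [random.uniform(-0.1, 0.1) for _ in range(2)]

    def respond(inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

    def learn(inputs, output, pleasure, pain, rate=0.2):
        drive = pleasure - pain                  # net affective signal
        direction = 1 if output == 1 else -1     # sign of the action taken
        for i, x in enumerate(inputs):
            weights[i] += rate * drive * direction * x

    # A toy "mother"/environment: acting on pattern (1, 0) (food) brings
    # pleasure, acting on pattern (0, 1) (danger) brings pain; failing to
    # act reverses the feedback.
    for _ in range(50):
        pattern = random.choice([(1, 0), (0, 1)])
        out = respond(pattern)
        if pattern == (1, 0):
            pleasure, pain = (1, 0) if out == 1 else (0, 1)
        else:
            pleasure, pain = (0, 1) if out == 1 else (1, 0)
        learn(pattern, out, pleasure, pain)

    print(respond((1, 0)), respond((0, 1)))      # settles at 1 (approach), 0 (avoid)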

Richard M. Wallace
AFWAL/AADE
Wright-Patterson, AFB, OH 45433
ARPA: <wallacerm@afwal-aaa.arpa>

------------------------------

End of AIList Digest
********************

∂09-Jan-88  0253	LAWS@KL.SRI.COM 	AIList V6 #4 - Seminars, Conferences  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 Jan 88  02:53:21 PST
Date: Fri  8 Jan 1988 22:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #4 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Saturday, 9 Jan 1988       Volume 6 : Issue 4

Today's Topics:
  Seminars - Recovery From Incorrect Knowledge In SOAR (GMR) &
    Open-Ended Learning Through Machine Evolution (Siemens),
  Conference - 2nd Workshop on Qualitative Physics &
    Neural Controls Session at ACC

----------------------------------------------------------------------

Date: Mon, 4 Jan 88 11:38 EST
From: "R. Uthurusamy" <SAMY%gmr.com@RELAY.CS.NET>
Subject: Seminar - Recovery From Incorrect Knowledge In SOAR (GMR)

Seminar at the  General Motors Research Laboratories in Warren, Michigan.
Wednesday, January 20, 1988 at 10 a.m.


             RECOVERY  FROM  INCORRECT  KNOWLEDGE  IN  SOAR

                            JOHN  E.  LAIRD

 Assistant Professor, Electrical Engineering and Computer Science Dept.
                      The University of Michigan

ABSTRACT:

In previous work, we have demonstrated some of the generality of Soar's
problem solving and learning capabilities.  We even gone so far as to
hypothesize that the simple learning mechanism in Soar, chunking, combined
with its general problem solving capabilities, is sufficient for all
cognitive learning.  This is a radical hypothesis especially when we
consider Soar's difficulty with recovery from incorrect knowledge.
Soar acquires incorrect knowledge whenever it chunks over invalid
inductive inferences made during problem solving.  Recovery requires
some  form of identification and correction of the incorrect knowledge.
Recovery is complicated in Soar by the fact that we have made the following
assumptions: chunking is the only learning mechanism; long-term knowledge,
represented as production rules, is only added, never forgotten, modified
or replaced; and the productions are not open for direct examination by the
learning mechanism or the problem solver.

In this talk I will review chunking in Soar and present recent results in
developing a domain-independent approach for the recovery from incorrect
knowledge in Soar.  This approach does not require any change to the Soar
architecture, but uses chunking to learn rules that overcome the incorrect
knowledge.  The key is to use the problem solving to deliberately reconsider
decisions that might be in error.  If a decision is found to be incorrect,
the problem solving corrects it and a new chunk is learned that will correct
the decision in the future.
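
[A toy sketch of the recovery idea described above may make it concrete.  It
is an editorial illustration in Python, not Laird's Soar implementation: a
suspect decision is deliberately re-derived, and the correction is cached as
a new rule rather than by editing or deleting the old one.  The rule memory,
function names, and states below are all invented.]

    # Toy illustration only (not Soar): long-term knowledge is an append-only
    # list of (condition, action) chunks.  Incorrect knowledge is never edited
    # or deleted; it is overridden by a later, corrective chunk.
    chunks = []

    def decide(state):
        """Apply the most recently learned chunk whose condition matches."""
        for condition, action in reversed(chunks):
            if condition(state):
                return action
        return None

    def solve_in_subgoal(state):
        """Stand-in for deliberate problem solving in a subgoal."""
        return {"blocks-on-table": "stack", "tower-built": "stop"}[state]

    def reconsider(state):
        """Re-derive a suspect decision; chunk a correction if it was wrong."""
        cached = decide(state)
        correct = solve_in_subgoal(state)
        if cached != correct:
            chunks.append((lambda s, s0=state: s == s0, correct))
        return correct

    chunks.append((lambda s: True, "stack"))   # over-general chunk from a bad induction
    print(decide("tower-built"))               # "stack"  (incorrect)
    print(reconsider("tower-built"))           # "stop"   (correction is chunked)
    print(decide("tower-built"))               # "stop"   (recovered; old rule still present)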

Non-GMR personnel interested in attending this seminar please contact
R. Uthurusamy [ samy@gmr.com ] 313-986-1989

------------------------------

Date: 7 Jan 88 00:39:29 GMT
From: siemens!hudak@princeton.edu  (Michael J. Hudak)
Subject: Seminar - Open-Ended Learning Through Machine Evolution
         (Siemens)


Speaker:       Peter Cariani
               Systems Science Dept., Thomas J. Watson School of Engineering
               State University of New York at Binghamton

Title:         Structural Preconditions for Open-Ended Learning
               through Machine Evolution

Location:      Siemens Corporate Research & Support, Inc.
               3rd floor Multi-Purpose Room
               Princeton Forrestal Center
               105 College Road East
               Princeton, NJ   08540-6668

Date:          Thursday, 14 January 1988

Time:          10:00 am   (refreshments: 9:45)

For more information call Mike Hudak:  609/734-3373

                               Abstract

One  of the  basic  problems  confronting  artificial life  simulations is
the apparent open-ended  nature of structural evolution, classically known
as the problem  of emergence.   Were it possible to construct devices with
open-ended behaviors and capabilities,  fundamentally  new learning  tech-
nologies would become possible.  At present, none of our devices or models
are open-ended, due to the nature of their design and construction.

The best  devices we have,  in the form of trainable machines,  neural net
simulations,   Boltzmann  machines  and  Holland-type  adaptive  machines,
exhibit  learning  within the  categories fixed  by their  feature spaces.
Learning occurs  through the performance dependent  optimization of alter-
native I/O  functions.   Within  the adaptive  machine  paradigm  of these
devices,  the measuring devices,  feature spaces, and hence the real world
semantics of  such devices  are stable.   Such machines  cannot create new
primitive  categories;  they will  not expand  their feature  and behavior
spaces.
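
[A purely illustrative Python fragment of the preceding point: an adaptive
device whose feature space is fixed when it is built, so training optimizes
only the input/output mapping over those features and can never add a new
sensor or category.  The feature list, learning rule, and data are invented
for the example.]

    # Illustrative only: an "adaptive machine" whose feature space is fixed at
    # construction.  Training optimizes the I/O function over those features;
    # nothing in the loop can add a new feature (a new sensor or category).
    FEATURES = [lambda x: x, lambda x: x * x]    # chosen by the designer, for good
    weights = [0.0, 0.0]

    def output(x):
        return sum(w * f(x) for w, f in zip(weights, FEATURES))

    def train(samples, rate=0.01, epochs=2000):
        """Performance-dependent optimization of the I/O mapping only."""
        for _ in range(epochs):
            for x, target in samples:
                error = target - output(x)
                for i, f in enumerate(FEATURES):
                    weights[i] += rate * error * f(x)

    train([(1, 3.0), (2, 8.0), (3, 15.0)])       # learnable: target is x*x + 2*x
    print([round(w, 2) for w in weights])        # converges near [2.0, 1.0]
    # No amount of such training gives the device a feature it was not built
    # with (for example, a sensor for an aspect of the world it cannot measure).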

Over phylogenetic time spans, however,  organisms have evolved new sensors
and effectors,   allowing them to perceive more and more  aspects of their
environments  and to act in more and  more ways  upon those  environments.
This involves a whole new level of learning: the learning of new primitive
cognitive and  behavioral categories.   In terms of constructible devices,
this  level of learning  encompasses machines which  construct and  select
their own sensors and effectors,  based upon their real world performance.
The semantics  of the  feature and  behavior spaces of  such devices  thus
changes so as to optimize their effectiveness as  categories of perception
and action.  Such devices construct their own primitive categories,  their
own primitive  concepts.   Evolutionary  devices  could be  combined  with
adaptive  ones  to both  optimize  primitive  categories and  I/O mappings
within those categories.

Evolutionary machines  cannot be constructed  through computations  alone.
New  primitive  category  construction  necessitates   that  new  physical
measuring structures and controls come into being.   While the behavior of
such  devices can be  represented to a  limited  degree by  formal models,
those models cannot  themselves create  new categories vis-a-vis  the real
world,  and hence are  insufficient as  category-creating devices in their
own right.   Computations must  be augmented by  the physical construction
of new  sensors and effectors  implementing processes  of measurement  and
control respectively.   This construction  process must be inheritable and
replicable,  hence encodable into symbolic form, yet involving the autono-
mous, unencoded dynamics of the matter itself.

The paradigmatic  example of  a natural  construction  process  is  protein
folding.    A one-dimensional  string of  nucleotides,  itself a  discrete,
rate-independent symbolic  structure,  is transformed into continuous, rate
dependent dynamics  having biological  function  through  the action of the
physical properties inherent in the protein  chain itself.   The functional
properties of  speed,  specificity,  and  reliability  of action  are  thus
achieved with  symbolic constraints but  without the  explicit direction of
rules.

------------------------------

Date: Tue, 5 Jan 88 15:50:05 CST
From: forbus@p.cs.uiuc.edu (Kenneth Forbus)
Subject: Conference - 2nd Workshop on Qualitative Physics


                CALL FOR PARTICIPATION
                SECOND WORKSHOP ON QUALITATIVE PHYSICS
                PARIS, JULY 26-28, 1988

Following last year's success of the first workshop on Qualitative
Physics organized by the Qualitative Reasoning Group at the University
of Illinois (with AAAI sponsorship), the second workshop on
Qualitative Physics will be organized by the European Group on
Qualitative Physics and the IBM Paris Scientific Center.  The
workshop, sponsored by the Commission of the European Community
(JRC-Ispra) and in cooperation with AAAI, will be held in Paris on
July 26-28, 1988.  It is intended as a forum for discussion of
ongoing research in Qualitative Physics and related areas.

To develop interaction and exchange of ideas, a number of panels will
be organized.  We invite proposals for panels on ongoing debates in
the area, such as:

        -- Causal Reasoning
        -- Mathematical Aspects of Qualitative Models
        -- Naive Physics versus Qualitative Physics

Another suggested panel format is to pose a particular problem which
panelists must use to focus discussion.  Proposers for panels should
obtain the agreement of the panelists and submit the proposal,
including an outline of the suggested discussion, to the program
chairman by March 8, 1988.

ATTENDANCE:

To encourage lively discussion, attendance will be by invitation only.
If you are interested in attending, please submit five (5) copies of
an extended abstract, up to 6 pages long, to the program chairman:

        Francesco Gardin
        Dipartimento di Scienze dell'Informazione,
        Universita degli Studi di Milano
        Via Moretto da Brescia, 9
        20133 Milano, ITALY
        tel. +39-2-2141230

The deadline for submissions is MARCH 8th, 1988 and invitations will be
mailed APRIL 5th, 1988.  Abstracts will be reviewed by an international
scientific committee.  Results already submitted for publication elsewhere
are acceptable since no proceedings of the workshop will be published.
A subset of the authors may be asked to contribute to a book based on the
workshop.  Besides presenters of papers, a limited number of observers
may be accepted.  For further information about the organization of the
workshop, contact any member of the organizing committee, or:

        Olivier Raiman
        IBM Paris Scientific Center
        3/5 Place Vendome,
        75001 Paris, FRANCE
        tel. +33-1-4296-1475

====================
ORGANIZING COMMITTEE

Johan De Kleer (Xerox PARC)
Ken Forbus (University of Illinois, Urbana)
Pat Hayes (Xerox PARC)
Ben Kuipers (University of Texas, Austin)

and all the members of the European Qualitative Physics Committee:
Flavio Argentesi (JRC-Ispra)
Ivan Bratko (University of Ljubljana)
John Campbell (Univ. College of London)
Jean-Luc Dormoy (EDF)
Boi Faltings (E.P.F. Lausanne)
Francesco Gardin (University of Milan)
Bernd Hellingrath (Fraunhofer-Institute ITW)
Roy Leitch (Heriot-Watt University)
Nicolaas J. Mars (Univ. of Twente)
Pierre Van Nypelseer (AITECH, Brussels)
Olivier Raiman (IBM Paris Scientific Centre)
Peter Struss (Siemens)

------------------------------

Date: Fri, 8 Jan 88 07:27 PST
From: nesliwa%nasamail@ames.arc.nasa.gov (NANCY E. SLIWA)
Subject: Conference - Neural Controls Session at ACC


In response to requests for information about the ACC session in
neural applications to robotics, about which I recently solicited names,
I am posting the current status of the session, along with minimal
conference information. Registration information can probably be obtained
from the general chair.


1988 American Control Conference
June 15-17, 1988
The Atlanta Hilton and Towers
Atlanta, Georgia

General Chair:  Wayne Book
                The George W. Woodruff School of Mechanical Engineering
                Georgia Institute of Technology
                Atlanta, Georgia 30332
                (404) 894-3247



                Invited Session on Neural Networks in Control
                    (A 4-hour session, 8 regular papers)

Chairs: Moshe Kam, Drexel University
        Don Soloway, NASA Langley Research Center

"How Neural Networks Factor Problems of Sensory Motor Control"
        Daniel Bullock, Boston University

"Neural and Adaptive Control: Similarities and Differences"
        A. Sideris, D. Psaltis, A. Yamakura, California Inst. of Technology

"On State Space Analysis for Neural Networks"
        Moshe Kam, Roger Cheng, Allon Guez, Drexel University

"Adaptive Neural Model for Hand-Eye Coordination"
        M. Kuperstein, Wellesley College

"A Neural Network for Planning Preshape Postures of the Human Hand"
        Thea Iberall, University of Southern California

"Strategy Learning with Multilayer Connectionist Representations"
        Charles Anderson, GTE Labs Inc.

"Neural-Networks-Based Learning Systems for Material Handling
Using Multiple Robots"
        D-Y Yeung, George Bekey, University of Southern California

"Using Neural Networks to Characterize Complex Systems"
        Philip Daley, A. Thornbrugh, Martin-Marietta Astronautics Group



Nancy Sliwa
NASA Langley Research Center
804/865-3871

nesliwa%nasamail@ames.arpa

------------------------------

End of AIList Digest
********************

∂09-Jan-88  0444	LAWS@KL.SRI.COM 	AIList V6 #5 - Conference on AI Applications    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 Jan 88  04:44:33 PST
Date: Fri  8 Jan 1988 22:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #5 - Conference on AI Applications
To: AIList@SRI.COM


AIList Digest            Saturday, 9 Jan 1988       Volume 6 : Issue 5

Today's Topics:
  Conference - IEEE Conference on AI Applications

----------------------------------------------------------------------

Date: Wed, 6 Jan 88 15:05 CST
From: Jim Miller <hi.jmiller@MCC.COM>
Subject: Conference - IEEE Conference on AI Applications


The Fourth IEEE Computer Society Conference on Artificial Intelligence
Applications will be held in San Diego on March 14-18, 1988.  The conference
will open with two days of tutorials, followed by the technical program,
described below.  More details on the conference, including registration
and hotel reservation forms, are available from the IEEE Computer Society
(1730 Massachusetts Avenue NW, Washington DC, 20036; 202-371-1013).

Jim Miller
MCC Human Interface
CAIA-88 General Chair

------------------------------------------------------------------------------

                        Tutorial Program for CAIA-88

MONDAY, MARCH 14, 1988

8:00 - 12:00:
        Avron Barr: Managing Knowledge System Development
        Yoh-Han Pao and Steven R. LeClair: AI in Manufacturing

1:30 - 5:00:
        Eric Mielke and Mary Garrison: Knowledge Structuring Techniques for
                Knowledge Acquisition
        Marilyn Stelzner and Allan Cypher: User Interfaces



TUESDAY, MARCH 15, 1988

8:00 - 12:00:
        Gary Hendrix: Is Natural Language Processing Useful in My
                Application?
        Kurt J. Schmucker: Introduction to Object-Oriented Programming
                Concepts

1:30 - 5:00:
        Jan Aikins and Paul Harmon: Doing it on Mainframes
        Robert L. Simpson, Jr.: A Survey of DOD Research And Applications in
                AI


------------------------------------------------------------------------------

                 Preliminary Technical Program for CAIA-88


WEDNESDAY, MARCH 16, 1988

 9:00 - 10:00           KEYNOTE ADDRESS

        The Impact of AI on the Corporate Enterprise
        with special emphasis on Computer Integrated Manufacturing
        Scott Flaig, Digital Equipment Corp.

10:00 - 10:30   BREAK

10:30 - 12:00

        Paper Session 1A:  DIAGNOSIS

AI-Test: A Real Life Expert System for Electronic
Troubleshooting (Description and a Case Study).
Moshe Ben-Bassat, Dahpna Ben-Arie, Israel Beniaminy, Jonathan
Cheifetz, and  Mordechai Klinger, Tel Aviv University and Intelligent
Electronics Inc.

A.I. in the Power Plants:  PERF-EXS, a PERFormance Diagnostics EXspert  System.
A. D'Ambrosio, APERION and A. Servanti and M. Oriati, ANSALDO

CONSOLIDATE:  Merging Heuristic Classification with Causal
Reasoning in Machine Fault Diagnosis.
Scott C. Bublin and R.L. Kashyap, Purdue University

     Paper Session 1B:   PROGRAM DEVELOPMENT AIDS

A Discourse-Based Consultant for Interactive Environments.
Ursula Wolz and Gail E. Kaiser, Columbia University

Rapid Prototyping Techniques for Expert Systems.
Mildred L. G. Shaw, University of Calgary;  Jeffrey M. Bradshaw, Brian R.
Gaines, and John H. Boose, Boeing

Directionality and Stability in System Behaviors.
Kaizhi Yue, University of Southern California/Information Sciences Institute


     Paper Session 1C:  KNOWLEDGE REPRESENTATION

Representation of Command Language Behavior for an Operating
System Consultation Facility.
Stephen J. Hegner, University of Vermont

Dialog Modeling in M-KRYPTON: A Hybrid Language for Multiple
Believers.
Alessandro Saffiotti and Fabrizio Sebastiani, University of Pisa

Experience with K-Rep: An Object-Centered Knowledge-
Representation Language.
Eric Mays, Chidanand Apte, James Griesmer, and John Kastner,  IBM Thomas J.
Watson Research Center

        Panel Session

Qualitative Modeling Meets Engineering Problem Solving

 12:00 -  1:30  LUNCH

  1:00 -  4:00  Poster Session 1

  1:30 -  3:30

     Paper Session 2A:  VISION AND ROBOTICS - 1

A Connectionist Approach to Primitive Shape Recognition in Range
Data. Ruud M. Bolle, Daniel Sabbah, and Rick Kjeldsen, IBM Thomas J. Watson
Research Center

Cooperative Focus and Stereo Ranging.
Eric Krotkov and Ralf Kories, University of Pennsylvania

Free Space Modeling and Geometric Motion Planning Under Unexpected Obstacles.
Alex C-C Meng, Texas Instruments, Inc.

Trajectory Planning of Robot Manipulators in Noisy Workspaces Using
Stochastic Automata.
B. J. Oommen, Carleton University, and S. Sitharam Iyengar, Louisiana State
University, and Nicte Andrade, Carleton University

     Paper Session 2B:  DESIGN - 1

ENCORE:  A Knowledge-Based System for Designing Power
Transformers and Inductors.
James H. Garrett, Jr., University of Illinois; Arun Jain, Schlumberger Well
Services

Knowledge-Based Synthesis of Custom VLSI Physical Design Tools.
Dorothy E. Setliff and Rob A. Rutenbar, Carnegie Mellon University

MES:  An Expert System for Reusing Models of Transmission Equipment.
S. Roody Rosales and Prem K. Mehrotra, AT&T Bell Laboratories

A Coupled Expert System for Optimum Design of Bridges.
H. Adeli, M. Asce, and K.V. Balasubramanyam, Ohio State University

     INVITED TALKS

The Process of Knowledge Acquisition
Nancy Martin, Softpert Systems

Experience with Simulation-Based Training Environments
James D. Hollan, MCC

   3:30 -  4:00 BREAK

   4:00 -  5:30

     Paper Session 3A:   DIAGNOSTIC REASONING

An Application of Qualitative Reasoning to Process Diagnosis:
Automatic Rule Generation by Qualitative Simulation.
Yoshiteru Ishida, Kyoto University

The Application of Differential Logic to Diagnosis.
Gilbert B. Porter III, GE Corporate R&D Center

Diagnosing Multiple Failures Using Knowledge of Component State.
Lester J. Holtzblatt, the MITRE Corporation

     Paper Session 3B:  DESIGN - 2

Argo:  A System for Design by Analogy.
Michael N. Huhns and Ramon D. Acosta, Microelectronics and Computer Technology
Corporation

Learning Preference Rules for a VLSI Design Problem-Solver.
Steven W. Norton and Kevin M. Kelly, Rutgers University

Knowledge-Based System for Relational Normalization of GDBMS
Conceptual Schemas.
Aime Bayle, Honeywell Information Systems; Esen Ozkarahan, Arizona State
University

             INVITED PANEL

AI and CASE: Constructive Convergence
Moderator: Esther Dyson, EDventure Holdings



THURSDAY, MARCH 17, 1988

 9:00 - 10:00           KEYNOTE ADDRESS

What's Missing in AI --- Can Massively Parallel Architectures Help?
Scott Fahlman, Carnegie-Mellon University

10:00 - 10:30   BREAK

10:30 - 12:30

        Paper Session 4A:  MANUFACTURING AND PROCESS CONTROL

INCA: An Expert System for Process Planning in PCB Assembly Line.
P. Cavalloro, E. Cividati, ITALTEL

The Dynamic Rescheduler: An AI Based Assistant for Conquering the
Changing Production Environment.
Mathilde C. Brown, Arthur Andersen & Co.

CABPRO: A Rule-Based Expert System for Process Planning of
Assembled Multiwire Cables.
R. M. Schaefer, J. S. Colmer, and  M. Miley, Allied-Signal Inc.

      Paper Session 4B:  LEARNING

Learning Control Information in Rule-Based Systems: A Weak Method.
Usama M. Fayyad, The University of Michigan; Kristina E. Van Voorhis,
Environmental Research Institute of Michigan; Mark D. Wiesmeyer, The
University of Michigan

Learning Structural Descriptions of Radar Backscatter Images.
Mieczyslaw M. Kokar, Subbiah Gopalraman, and Amitabh Shukla, Northeastern
University

A Learning Model for the Selection of Problem Solving Strategies in
Continuous Physical Systems.
Xiaodong Xia and Dit-Yan Yeung, University of Southern California

     Paper Session 4C:  KNOWLEDGE REPRESENTATION AND REASONING

Dynamic Assessment of Relevancy in a Case-Based Reasoner.
Kevin D. Ashley and Edwina L. Rissland, University of Massachusetts

The Automated Symbolic Derivation of State Equations for Dynamic
Systems.
Jane Macfarlane, Lawrence Livermore Laboratory; Max Donath, University of
Minnesota

Doing Time without Getting Hurt.
J. Scott Penberthy, IBM Thomas J. Watson Research Center

        Panel Session

Financial Applications

 12:00 -  1:30  LUNCH


  1:00 -  4:00  Poster Session 2

  1:30 -  3:30

     Paper Session 5A:  VISION AND ROBOTICS - 2

Model-Based Object Recognition - A Truth Maintenance Approach.
Gregory M. Provan, University of Oxford

Self-Organizing Model for Pattern Learning and Its Application to
Robot Eyesight.
Hisashi Suzuki and Suguru Arimoto, Osaka University

An Overview of ANDES:  A Knowledge-Based Scene Analysis System.
Paulo Ouvera Simoni, Instituto de Pesquisas Espaciais

A Model for Robotic Perception.
S.A. Stansfield, University of Pennsylvania/Sandia National Laboratories

      Paper Session 5B:  TOOLS AND IMPLEMENTATION ALGORITHMS

The HICLASS Software System: A Manufacturing Expert System Shell.
David Liu, Hughes Aircraft Company

A Development Environment for Field Diagnosis Tools.
K. P. Lee, John Martin, Paul Rutter, and Richard L. Wexelblat, Philips
Laboratories

High-Level Language Approach to Parallel Execution of OPS5.
Hiroshi G. Okuno and Anoop Gupta, Stanford University

Parallel Set Covering Algorithms.
Srinivasan Sekar, National Semiconductor Company and James A. Reggia,
University  of Maryland

        INVITED TALK (1:30 - 2:30)

Blackboard Architectures: Definitions and Applications
Barbara Hayes-Roth, Stanford University

        INVITED PANEL  (2:30 - 3:30)

Expert Systems That Didn't Make It The First Time: Why?
Moderator: Richard Wexelblat, Philips Laboratories

   3:30 -  4:00 BREAK

   4:00 -  5:30

       Paper Session 6A:  APPLICATIONS OF PLANNING

Knowledge-Based Planning and Replanning in Naval Command and Control.
J. A. Gadsden, Admiralty Research Establishment

An Interactive Planner for Open Systems.
Frank v. Martial and Frank Victor, Institute of Applied Information
Technology, GMD

An Expert System for Alleviating Overloads in Electric Power Systems:
General Concepts and Applications.
B. Delfino, B. B. Denegri, and M. Invernizzi, University of Genoa, and A.
Canonero and  P. Forzano, ANSALDO

     Paper Session 6B:  LEARNING AND REASONING

A Machine Learning Approach to the Automatic Synthesis of
Mechanistic Knowledge for Engineering Decision Making.
Kaihu Chen and Stephen C-Y. Lu, University of Illinois at Urbana-Champaign

Constraint Processing Incorporating Backjumping, Learning, and Cutset
Decomposition.
Rina Dechter, Hughes Aircraft Company

Interactive Induction.
Wray Buntine, New South Wales Institute of Technology and David Stirling, BHP

     INVITED PANEL

Is There a Future for Lisp Machines?
Moderator: Scott Fahlman, Carnegie-Mellon University



FRIDAY, MARCH 18, 1988

 9:00 - 10:00           KEYNOTE ADDRESS

The NASA System's Autonomy Program: Applying AI in Space
Henry Lum, NASA

10:00 - 10:30   BREAK

10:30 - 12:30

        Paper Session 7A:  NATURAL LANGUAGE INTERFACES

Hierarchical Multilevel Processing Model for Natural Language
Database Interface.
H. Jappinen, T. Honkela, A. Lehtola, and K. Valkonen, SITRA Foundation

Text Condensation as Knowledge Base Abstraction.
Ulrich Reimer, University of Constance; Udo Hahn, University of Passau

An Expert/Expert-Locating System Based on Automatic Representation
of Semantic Structure.
Lynn A. Streeter and Karen E. Lochbaum, Bell Communications Research

A Friendly Merger of Conceptual Expectation and Linguistic Analysis in
a Text Processing System.
Paul S. Jacobs and Lisa F. Rau, GE Company

    Paper Session 7B:  REASONING

Cluster-based Representation of Hydraulic Systems.
Arthur M. Farley, University of Oregon

Solving Dynamic-Input Interpretation Problems Using the Hypothesize-
Test-Revise Paradigm.
Kathleen D. Cebulka, Sandra Carberry, and Daniel L. Chester, University of
Delaware

Building Numerical Sensitivity Analysis Systems Using a Knowledge-
Based Approach.
Chidanand Apte and Robert Dionne, IBM Thomas J. Watson Research Center

Qualitative Measurement Interpretation Using Quantity Spaces.
Stephen Todd, Hewlett-Packard Laboratories

     INVITED PANEL

Integrating AI and Databases: Today's Capabilities, Tomorrow's Goals
Moderator:  Jan Aikins, Aion Corporation

------------------------------

End of AIList Digest
********************

∂09-Jan-88  0629	LAWS@KL.SRI.COM 	AIList V6 #6 - Cognitive Science Programs  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 Jan 88  06:29:28 PST
Date: Fri  8 Jan 1988 22:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #6 - Cognitive Science Programs
To: AIList@SRI.COM


AIList Digest            Saturday, 9 Jan 1988       Volume 6 : Issue 6

Today's Topics:
  Education & Psychology - Cognitive Science Programs

----------------------------------------------------------------------

Date: 5 Jan 88 05:19:51 GMT
From: mnetor!utzoo!utgpu!jarvis.csri.toronto.edu!utai!tjhorton@uunet.uu.net  (Timothy J. Horton)
Subject: cognitive science programs - summary of responses

This is a summary of responses to a question posed in comp.ai a few weeks ago,
about (university) programs in cognitive science.  The original question in-
cluded the following (slightly fixed) information (and some misinformation?):

MIT: Department of Brain and Cognitive Science

Brown: Department of Linguistics and Cognitive Science, 12 Faculty
Fields of study: Linguistics, Vision, Reasoning, Neural Models, Animal Cognition

UCSD: interdisciplinary PhD in Cognitive Science exists
a Dept of Cognitive Science is in the works
undergraduate program in Cog Sci currently offered by psychology
emphases in Connectionism, Psychology, AI, Linguistics, Neuroscience,
            Philosophy, Social Cognition

Stanford: Graduate Program in Cognitive Science
Psychology (organizing dept), Linguistics, Computer Science, Philosophy

Rochester: interdisciplinary PhD in Cognitive Science

UC Berkeley: Cognitive Science Program, focus on linguistics

Princeton: interdisciplinary program in Cognitive Science

Toronto: Undergraduate Major in Cognitive Science and Artificial Intelligence

Michigan: no current program in Cognitive Science, but some opportunities

University of Western Ontario: Center for Cognitive Science

Edinburgh: department of Cognitive Science (formerly School of Epistemics)
focus on linguistics

Sussex: School of Cognitive Science


--------------------- RESPONSES (partially EDITED) ---------------------------

From: "Donald A. Norman" <norman%ics@sdcsvax.ucsd.edu> at UCSD

>At UCSD, we are indeed in the process of establishing a Department of Cognitive
>Science.  We are now hiring, but formal classes will not start until the Fall
>of 1989.  We will have both an undergraduate and a PhD program.  We now have
>an Interdisciplinary PhD program:  students enter some department, X, and join
>the interdisciplinary program after completing the first year requirements of
>X.  They then receive a "PhD in X and Cognitive Science."  We have about 20
>students now and have given out about 3 PhDs.
>  The strengths are in the computational understanding of cognition, with
>strong emphasis in psychology, AI, linguistics, neuroscience, philosophy, and
>social cognition.  PDP (connectionism) is one of the strengths at UCSD, and
>the approach permeates all of the different areas of Cognitive Science, even
>among those of us who do not directly do work on weights, algorithms, or
>connectionist architectures
>  Yes, there is a Cognitive Science Society.  It hosts an annual conference
>(the next one will be in Montreal).  It publishes the journal "Cognitive
>Science."  You can find out about it by writing the secretary treasurer:
>    Kurt Vanlehn                       vanlehn@a.psy.cmu.edu
>    Department of Psychology
>    Carnegie-Mellon University
>    Pittsburgh, PA 15213

-----
From: Jeff Elman <elman@amos.ling.ucsd.edu> at UCSD (taken from comp.ai)

>The University of California, San Diego is considering the establishment of a
>Department of Cognitive Science ...  The Department will take a broadly-based
>approach to the study of cognition.  It will be concerned with the neurological
>basis of cognition, individual cognition, cognition in social groups, and
>machine intelligence.  It will incorporate methods and theories from a wide
>variety of disciplines including Anthropology, Computer Science, Linguistics,
>Neuroscience, Philosophy, Psychology, and Sociology.

-----
From: Tom Olson <olson@cs.rochester.edu> at Rochester

>The University of Rochester has an interdisciplinary Ph. D. in Cog Sci,
>basically a bridge between Comp. Sci., Psych and Philosophy.  I don't know
>much about how it is organized.  If you're interested, you might write to
>alice@cs.rochester.edu or lachter@cs.rochester.edu who are among the first
>students in the program.  Presumably we're strong in linguistics, vision,
>connectionism, and inexact ("probabilistic") reasoning.
>PS Connectionism is not fading at San Diego as far as I know.

-----
From: Michael McInerny <mcinerny@cs.rochester.edu> at Rochester

>Here at the UofRochester (Hi Neighbor!), we have an "interdisciplinary"
>Cog Sci dept. that includes fac. from Comp Sci, Psych, Philosophy, and
>Neuroscience.  I'm a grad student enrolled in the program, via the Comp
>Science dept., which means I have to get my own committee together,
>and build my own program, on top of passing regular CS stuff like Quals.
>I understand there is an undergraduate major in the dept too.

-----
From: William J. Rapaport <rapaport@cs.buffalo.edu> at SUNY

>State University of New York at Buffalo has several active cognitive science
>programs.  What follows is a slightly outdated on-line information sheet on
>two of them.
   [contact the author (or myself) for the full text.  The description reads
   in part: "(the group's) activities have focused upon language-related
   issues and knowledge representation... "]
>The newest is the SUNY Buffalo Graduate Studies and Research Initiative in
>Cognitive and Linguistic Sciences, whose Steering Committee is currently
>planning the establishment of a Cog and Ling Sci Center and running a
>colloquium series.  For more information, please contact me.  In addition,
>let me know if you wish to be on my on-line mailing list for colloquium
>announcements.

-----
From: Marie Bienkowski <bienk@spam.istc.sri.com>

>Princeton University has an excellent Cognitive Science program, although
>there is no department by that name.  They have active research programs
>on automated tutoring, vocabulary acquisition, reasoning, belief revision,
>connectionism (with Bellcore), computational linguistics, cognitive
>anthropology, and probably more that I've missed.  The main sponsoring
>departments are Psychology, Philosophy and Linguistics.
>  A good person to contact is bjr@mind.princeton.edu, who is, in real life,
>a professor in the Psychology Dept.  His p-mail address is:
>       Brian Reiser
>       Cognitive Science Laboratory
>       21 Nassau St.
>       Princeton, NJ  08542

-----
From: Rodney Hoffman <Hoffman.es@xerox.com>

>There is an undergraduate program in Cognitive Science at Occidental College
>(Los Angeles).  The director is Saul Traiger <oxy!traiger@CSVAX.Caltech.edu>;
>write to him for more information.

-----
From: "Saul P. Traiger" <oxy!traiger@csvax.caltech.edu> at Occidental College

>The following appeared in Ailist Digest last summer. Let me know if you'd
>like more information.
>  Occidental College,  a liberal arts college which enrolls approximately
>1600 students, is pleased to announce a new Program in Cognitive
>Science. The Program offers an undergraduate major and minor in Cognitive
>Science. Faculty participating in this program include members of the
>departments of mathematics, linguistics, psychology, and philosophy.
>[...]  The undergraduate major in Cognitive Science at Occidental College
>includes courses in mathematics, philosophy, psychology and linguistics.
>Instruction in mathematics introduces students to computer languages,
>discrete mathematics,  logic, and the mathematics of computation.
>Philosophy offerings  cover the philosophy of mind, with emphasis on
>computational models of the mind, the theory of knowledge, the philosophy
>of science, and the philosophy of language. Psychology courses include
>basic psychology, learning, perception, and cognition. Courses in
>linguistics provide a theoretical foundation in natural languages, their
>acquisition, development, and structure.  For more information about
>Occidental College's Cognitive Science Program:
>  Professor Saul Traiger    ARPANET: oxy!traiger@CSVAX.Caltech.EDU
>  Cognitive Science Program BITNET:  oxy!traiger@hamlet
>  1600 Campus Road          CSNET:   oxy!traiger%csvax.caltech.edu@RELAY.CS.NET
>  Occidental College        UUCP:    {seismo,rutgers,ames}!cit-vax!oxy!traiger
>  Los Angeles, CA 90041

-----
From: Roy Eagleson <deepthot.UWO.CDN!elroy@julian.uucp> at Western Ontario

>"The Centre for Cognitive Science" at UWO is a community of professors,
>research assistants, and graduate students from: Psychology, Computer Science,
>Philosophy, Neurobiology, Engineering, and Library Science.  In addition to
>the related graduate and undergraduate courses offered by those faculties
>and departments, there is an undergraduate course in Cognitive Science
>offered through Psychology.  We can send you more info if you want it.
>
>As for the Cognitive Science Society, you can drop them a line at:
>       Cognitive Science Society,
>       Department of Psychology
>       Carnegie-Mellon University
>       Schenley Park
>       Pittsburgh, PA 15213
>Zenon Pylyshyn was their President for 1985-86.

-----
From: John Laird <laird@caen.engin.umich.edu> at Michigan

>There is no formal undergraduate or graduate program in Cognitive Science
>at this time.  We will be offering an undergraduate course in Cognitive Science
>next term, co-taught by AI, Psych., Ling., and Philosophy.  We also have the
>Cognitive Science and Machine Intelligence Lab.   It is supported by three
>colleges: Engineering; Business; and Literature, Sciences and the Arts.
>The Lab sponsors a variety of Cognitive Science activities: talks, workshops,
>research groups, etc.  I expect that in a few years we will have undergraduate
>and graduate programs in Cognitive Science, but for now, students must be in
>a specific department and take cross-listed courses.
-----

From Professor Tom Perry, Simon Fraser University, Vancouver

>The Cognitive Science Program does not yet have a graduate program, but one is
>planned for the near future.  At present, qualified students can do advanced
>degrees under Special Arrangements.
[...]
>   Cognitive Science Program
>   Department of Philosophy
>   Simon Fraser University
>   Burnaby, BC, Canada V5A 1S6

[Special arrangements means: "Exceptionally able applicants, who wish to work
for a Master's or Doctoral degree outside or between existing programs at Simon
Fraser University, may apply to work under Special Arrangements.  (the student)
must have a well-developed plan of studies in an area which can be shown to
have internal coherence and academic merit, and which the University has appro-
priate expertise and interests among its faculty members ..."]

-----
From Donald H. Mitchell of Bendix Aero. Tech. Ctr <DON@atc.bendix.com>

>In 1985, the president of Northwestern University set aside a decent pot of
>money and charged the Cognitive Psychology program to find a chairman for an
>interdisciplinary Cognitive Science program.  They aggressively set out and
>brought dozens of big names in for show-and-tell.  They made offers to
>several; however, as far as I know, they never caught one.  Maybe they have
>one now?  I do not know.
>Northwestern has a small but high-quality group of Cognitive Psychologists
>[...] The work is primarily on human cognition: verbal information processing
>... human decision making... human expertise in game-playing, ... heuristic
>search, and machine learning (genetic algorithms).

-----------------------------------------------------------------------------
--
Timothy J Horton (416) 979-3109   tjhorton@ai.toronto.edu (CSnet,UUCP,Bitnet)
Dept of Computer Science          tjhorton@ai.toronto     (other Bitnet)
University of Toronto,            tjhorton@ai.toronto.cdn (EAN X.400)
Toronto, Canada M5S 1A4           {seismo,watmath}!ai.toronto.edu!tjhorton

------------------------------

End of AIList Digest
********************

∂09-Jan-88  0808	LAWS@KL.SRI.COM 	AIList V6 #7 - Object-Oriented Databases   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 Jan 88  08:08:36 PST
Date: Fri  8 Jan 1988 22:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #7 - Object-Oriented Databases
To: AIList@SRI.COM


AIList Digest            Saturday, 9 Jan 1988       Volume 6 : Issue 7

Today's Topics:
  AI Tools - Object-Oriented Database Summary

----------------------------------------------------------------------

Date: 5 Jan 88 18:43:21 GMT
From: mfidelma@bbn.com  (Miles Fidelman)
Subject: summary of object oriented db query

A while back I posted a query on object-oriented databases. Here are
the replies that I received. Thanks to all who responded.

Miles


::::::::::::::
databases/1
::::::::::::::
Date: Fri, 13 Nov 87 09:54:15 EST
From: Robert Goldman <rpg%cs.brown.edu@RELAY.CS.NET>

I know that Stan Zdonik here at Brown has been doing some work on this with
his student Andrea Skarra.  I think that they have a paper in last year's
OOPSLA.

Good luck,

Robert

::::::::::::::
databases/2
::::::::::::::
From: Jorge Gautier <gautier@CS.WISC.EDU>
Organization: U of Wisconsin CS Dept


Check out Won Kim, J. Banerjee et al.'s ORION Project at MCC.
I can give you more specific references if you want...

gautier@ai.cs.wisc.edu

::::::::::::::
databases/4
::::::::::::::
Return-Path: kj@ohio-state.arpa
From: Kathy Johnson <kj@BBN.COM>
Organization: The Ohio State University Dept of Computer and
  Information Science

I have been looking at some of these issues since our approach at Ohio State
needs such a database.  The most interesting work I have seen so far is
on the KODIAK system which was developed by Robert Wilensky at UC Berkeley.
There is a chapter in a book which is written by him -- Chap 2 of "Experience,
Memory, and Reasoning" edited by Janet Kolodner and Christopher Riesbeck
(published by Lawrence Erlbaum Assoc. 1986 Hillsdale, NJ).  I would be
interested in hearing about anything else you find out.

--Kathy Johnson
The Ohio State University
Laboratory for Artificial Intelligence Research
::::::::::::::
databases/5
::::::::::::::
From: "Jorge A. Gautier" <gautier@NEUFCHATEL.WISC.EDU>

On the ORION system at MCC:

Jay Banerjee et al., Data Model Issues for Object-Oriented Applications,
``ACM Transactions on Office Information Systems,'' vol. 5 no. 1,
January 1987, pp. 3-26.

Jay Banerjee et al., Queries in Object-Oriented Databases,
MCC Technical Report DB-188-87, June 1987.

There are more MCC reports about ORION, I've just read these two.
I think Won Kim is the ORION project leader.

On the IRIS system at HPLabs:

D.H. Fishman et al., Iris: An Object-Oriented Database Management System,
``ACM Transactions on Office Information Systems,'' vol. 5 no. 1,
January 1987, pp. 48-69.

There's also a group in France, Altair (that's an i with two dots), that's
doing OODB, but from the talk I heard earlier this semester by
Francois Bancilhon it doesn't seem like they're doing it the `right' way.

OODB is the latest `fad' in DB.  Check recent proceedings/
issues of SIGMOD, VLDB, TOIS, OOPSLA and I'm sure you'll find more about it.

Disclaimer: I am not a DB student, just taking a DB class this semester
and reading up for my project.  I may still be missing more basic/
important works.

Good luck,
Jorge
::::::::::::::
databases/6
::::::::::::::
Return-Path: welch@tut.cis.ohio-state.edu
From: Arun Welch <welch@OHIO-STATE.ARPA>

There's a company in France called Graphael or some such doing work on OO
databases, running on TI Explorers.  I seem to remember seeing them at AAAI.
We've also done work on intelligent databases, built on top of Loops.

...arun

Arun Welch
Systems Programmer, Lab for AI Research, Ohio State University.
welch@ohio-state.{CSNET,ARPA}

::::::::::::::
databases/7
::::::::::::::
From: Mike Percy <grimlok@HUBCAP.CLEMSON.EDU>

Well, I am currently working on an effort to design a breadth-first
Prolog for our FPS T hypercube here at Clemson University for an
independent study class (and possibly, hopefully, it will carry over
into master's work). My goal resolution and unification systems are
turning out to be more and more like database problems. At least, it
seems like I will eventually apply database techniques to the systems.
If you receive any helpful responses on this, I would appreciate it if you
could forward a reading list.  Most of what I have is from reading
Warren's papers, and Shapiro's, and a few others.  Then I saw the need
for database technology and started digging into the db stuff. But I
have not seen any ties between the two so far in the literature.

Mike Percy
Computer Science
Clemson University
::::::::::::::
databases/8
::::::::::::::
From: "Jesper L. Lauritsen" <mcvax!diku!jesper@UUNET.UU.NET>

Below are references to a few articles.  I don't think I have seen anything
really interesting on the subject (though I have not seriously been looking
for it).  I'd like to hear what you find.

Gio Wiederhold, Views, Objects, and Databases
IEEE Computer, Dec 86

A. J. Baroody, jr. & D. J. DeWitt
An Object-Oriented Approach to Database System Implementation
TODS, vol.6,no.4, Dec 81

S.L. Osborn & T.E. Heaven
The Design of a Relational Database System with Abstract Data
Types for Domains
TODS, vol.11,no.3, Sep 86

----------
Jesper L. Lauritsen, U. of Copenhagen, Inst. of Datalogy
...!mcvax!diku!jesper
::::::::::::::
databases/9
::::::::::::::
From: Greg Michelson <blia.UUCP!blic!gregm@CGL.UCSF.EDU>

One commercial product is
        KEE Connection, from
        INTELLICORP, Mt. View, CA
        (415) 965-5500
It works with some relational database products,
including Britton Lee's relational DBMS.
from:  Greg Michelson (gregm)
          Britton Lee
          Los Gatos CA, 95030
          (408) 378-7000
::::::::::::::
databases/10
::::::::::::::
From: lepine%debit.DEC@DECWRL.DEC.COM

Miles,

I saw your posting for information on object-oriented databases.  I have been
involved with database work for several years (currently doing contract work
for the Database Systems Group at DEC).  I too have recently become interested
in the concept of object-oriented databases.  I would appreciate hearing any
replies that you might receive and will try to put together what information I
have on the topic (it is at home, unfortunately).

Thanks,
Normand Lepine

lepine@nova.dec.com
::::::::::::::
databases/11
::::::::::::::
From: harvard!AMES.ARPA!ucbcad!zodiac.UUCP!jdye@BBN.COM
Organization: Advanced Decision Systems, Mt. View, CA (415) 941-3912

>Has anyone been working on making a production object oriented environment?

Miles, that is exactly what needs to be done.

  The problem with objects in today's OOPs is that they only have meaning
while they are "loaded into lisp", i.e. read into the lisp process's virtual
memory.  Objects aren't queryable by a database system (or by a higher-level
query language, i.e. lisp ;-).  Smalltalk is better than lisp but still not
saveable (correct me please).

  Also, having non-permanent objects means you need either:
  1) twice as much storage, to hold your objects and to load them into, or
  2) infinitely faster processors, so you can re-process your results each
     time the power goes out.

  The '87 OOPSLA conference addressed these issues somewhat, although
there isn't a Borland OOP out yet that has save/restore/query
capability.  Fast query will be the metric for measuring OOP database
systems for years (not loading time!!).  Lisp systems should stop
pretending that disk drives don't exist.

JD
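
[The save/restore/query capability discussed above amounts to writing
objects to disk in a form that can be reloaded and searched without
re-running the program that built them.  Below is a minimal sketch in C,
using an invented fixed-size "part" record; it is only an illustration of
the idea and not drawn from any of the systems mentioned in this digest:

    #include <stdio.h>

    /* A toy "object": fixed-size, so it can be written and read verbatim. */
    typedef struct {
        int    id;
        char   name[32];
        double price;
    } part;

    /* Save an array of objects to disk. */
    int save_parts(const char *file, const part *p, size_t n)
    {
        FILE *f = fopen(file, "wb");
        size_t written;
        if (!f) return -1;
        written = fwrite(p, sizeof *p, n, f);
        fclose(f);
        return written == n ? 0 : -1;
    }

    /* "Query": scan the saved objects for the one with a given id. */
    int find_part(const char *file, int id, part *out)
    {
        FILE *f = fopen(file, "rb");
        if (!f) return -1;
        while (fread(out, sizeof *out, 1, f) == 1)
            if (out->id == id) { fclose(f); return 0; }
        fclose(f);
        return -1;                        /* not found */
    }

A real object-oriented database must of course also handle variable-size
objects, references between objects, indexing (so that a query is faster
than a linear scan), and the concurrency and recovery issues raised
elsewhere in this digest.]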
::::::::::::::
databases/12
::::::::::::::
From: John Henshaw <geac!john@UUNET.UU.NET>

Professor Stavros Christodoulakis at the University of Waterloo, in
Waterloo, Ontario, has done a lot of work on O-O database technology.
You might try querying root@watmath for his path - he's on the net.

-john-

--
John Henshaw,                   (mnetor, yetti, utgpu !geac!john)
Geac Computers Ltd.             "...and bring along a major credit card
Markham, Ontario, Canada, eh?            and a piece of ID..."
::::::::::::::
databases/13
::::::::::::::
From: Jonathan Delatizky <DELATIZKY@V1.BBN.COM>

I saw your query in AIList. There is a company in the area (I think they're
in Billerica - at least, they used to be) called Ontologic, which is
developing a commercial object oriented database. Some people from
dept 45 went to visit them recently. I don't know how much of the traditional
DBMS baggage of transaction locking, rollbacks, etc. they will support,
but I suspect quite a bit to make a viable commercial product.

I have an old number for them - 667-2383. I don't guarantee its accuracy.

...jon
::::::::::::::
databases/14
::::::::::::::
From: harvard!rutgers!bellcore!ulysses!sfsup!sivan@BBN.COM
Organization: AT&T Information Systems

Object-oriented databases are still a very new field.  We here at AT&T
have built our own in support of a software administration
environment.  There is at least one commercial object-oriented
database, called GemStone, from Servio Logic in Oregon.  The best
reference I can offer you is the Jan 1987 issue of the "ACM
Transactions on Office Information Systems."  This was a special issue
on object oriented databases.

                                Sivan Mahadevan
                                AT&T R&D
                                attunix!sivan



::::::::::::::
databases/15
::::::::::::::
From: Brian Nixon <nixon%ai.toronto.edu@RELAY.CS.NET>

Dear Miles Fidelman,
  Our group has been very interested in the application of DB technology
to implementation of (entity-based) semantic data models.  If you'd like to
receive some reports from the Taxis project, please send your postal
address.
  Cordially,
    Brian Nixon
    Dept. of Computer Science
    University of Toronto
    Toronto, Ontario
    Canada.  M5S 1A4
    nixon@ai.toronto.edu
::::::::::::::
databases/16
::::::::::::::
From: "Daniel L. Weinreb" <DLW@alderaan.scrc.symbolics.com>

Here's my own personal bibliography of papers on this topic, organized
by group:

  This is an annotated list of groups working on or planning to work on
  "object-oriented database systems".  It is based on what I heard at
  OOPSLA, as well as the published literature.  With each group name, I've
  listed some of the people involved, mainly including the ones who seem
  to be the leaders and the ones who we know personally.  When I say
  "TOIS" I refer to the issue of ACM Transactions on Office Information
  Systems, Vol. 5 #1 Jan 1987, the special issue on object-oriented
  databases.

  Orion
  MCC Database Program
  Won Kim, Jay Banerjee, Darrell Woelk, Nat Ballou, Hong-Tai Chou, Jorge Garza
  "A Unifying Framework for Version Control in a CAD Environment" 12th VLDB 84
  "An Object-Oriented Approach to Multimedia Databases" SIGMOD 86
  "Data Model Issues for Object-Oriented Applications" TOCS Jan 87
  "Composite Object Support in an Object-Oriented Database System" OOPSLA-87
  "Semantics and Implementation of Schema Evolution in Object-Oriented
      Databases" SIGMOD 87
  "Operations and Implementation of Complex Objects" Proc Data Eng Conf Feb 87

  GemStone
  Servio-Logic Corp. and Oregon Graduate Center
  David Maier, Alan Purdy, Jacob Stein, G. Copeland
  "Making Smalltalk a Database System" SIGMOD 84
  "Data Model Requirements for Engineering Applications" Proc Intl Workshop on
    Expert Database Systems, 84
  "A Decomposition Storage Model" SIGMOD 85
  "Object-Oriented DBMS Development at Servio-Logic" Database Eng 18:4 Dec 85
  "Integrating an Object Server with Other Worlds" TOCS Jan 87
  "Class Modification in the GemStone Object-Oriented DBMS" OOPSLA-87
  "Indexing in an Object-Oriented DBMS" Intl Workshop on OODB Sept 86
  "Development of an Object-Oriented DBMS" OOPSLA-86
  "Is the Disk Half Full or Half Empty?: Combining Optimistic and
    Pessimistic Concurrency Mechanisms in a Shared, Persistent
    Object Base" To appear: Workshop on Persistent Object Stores,
    Appin, Scotland, Aug 87

  Iris
  HP Labs
  Dan Fishman, Peter Lyngbaek, Jim Kempf
  "Iris: An Object-Oriented Database Management System" TOCS Jan 87
  "Some Aspects of operations in an Object-Oriented Database" Database
      Eng 8:4 Dec 85

  Encore/ObServer
  Brown Univ.
  Stan Zdonik, Mark Hornik
  "A Shared, Segmented Memory System for an Object-Oriented Database"
      TOIS Jan 87
  "Object management system concepts" Proc SIGOA Conf on Office Info Sys 84
  "Object management systems for design environments" Database Eng 8:4 Dec 85
  "Towards object-oriented database environments" Brown Univ TR 85
  "The Management of Changing Types in an Object-Oriented Database" OOPSLA-86
  "Language and Methodology for Object-Oriented Database Environments" 19th
      HICSS Jan 86

  Postgres/Objfads
  U. C. Berkeley
  Mike Stonebraker, Larry Rowe
  "The Design of Postgres" SIGMOD Record Vol 15 No 2 June 86
  "Extending a Database System with Procedures" SIGMOD Vol 12 No 3 Sept 87
  "A Form Application Development System" SIGMOD 82
  "QUEL as a Datatype" SIGMOD 84
  "Object Management in a Relational Database System" IEEE ??? (I have a copy)
  "Application of Abstract Datatypes and Abstract Indices to CAD Data" Database
    Week Conference on Eng App, IEEE, May 83
  "Inclusion of New Types in Relational Database Systems" Proc 2nd Intl Conf
      on Data Eng Feb 86

  Vbase
  Ontologic Corp
  Tim Andrews, Craig Harris, Craig Damon
  "Combining Language and Database Advances in an Object-Oriented
      Development Environment"
    OOPSLA-87

  <Follow-on to Owl/Trellis>
  DEC Hudson
  Craig Schaffert, Pat O'Brien
  "Persistent and Shared Objects in Trellis/Owl" DEC-TR-440 July 86, also
    1986 Workshop on OODB Systems, Sept 86

    It strikes me that database technology tends to focus on supporting large
    production databases, with attention to fast processing speeds, maintaining
    database integrity, journalizing/checkpointing, etc.; while object oriented
    environments are basically prototyping environments.

    Has anyone been working on making a production object oriented environment?

The Symbolics Genera programming environment is certainly a production
system, rather than a prototype, and it makes heavy use of object
oriented programming.  You could also look at several papers published
in the OOPSLA proceedings, which were distributed as the Nov 86 and Dec
87 SIGPLAN Notices.

I am working on an object-oriented database for Genera, which will fit
in with our object-oriented programming language features.  I haven't
written a full-length paper yet, but I have a summary of what it's
about, if you're interested.

::::::::::::::
databases/17
::::::::::::::
From: Richard Fritzson <fritzson@BIGBURD.PRC.UNISYS.COM>

Graphael
255 Bear Hill Rd
Waltham MA 02154

        Sells G-BASE and G-LOGIS. The former is a production quality object
oriented database; the latter is a prolog which can operate on it.

Servio Logic Corp
15025 S.W. Koll Parkway, 1a
Beaverton, OR 97006

        Sells GEMSTONE, an object oriented database.

Ontologic, Inc.
47 Manning Rd
Billerica, MA 01821

        Sells VBASE, an object oriented database.

There are also a significant number of ongoing research projects at
universities, other corporations and the MCC.  A significant portion
of OOPSLA-87 was about object-oriented databases.

::::::::::::::
databases/18
::::::::::::::
Organization: Micro Database Systems, Inc., Lafayette IN
Sender: harvard!rutgers!pur-ee!pur-phy!mrstve!mdbs!kbc@BBN.COM

GURU is a rule-based development environment that includes an
inference engine, database, spreadsheet, and many other decision-support
tools.  Although it is not strictly object-oriented, it offers a real
development environment for rule-based applications that integrates a
database.  Besides being an employee of mdbs, I am an enthusiastic GURU
user.  For more info contact:

         Micro Data Base Systems Inc.
         P.O. Box 248
         Lafayette, IN  47902
         (317) 463-2581

I would certainly be interested in participating in discussions about GURU
use.

------------------------------

End of AIList Digest
********************

∂09-Jan-88  0952	LAWS@KL.SRI.COM 	AIList V6 #8 - Voice Synthesizers, Online Dictionaries    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 Jan 88  09:52:36 PST
Date: Fri  8 Jan 1988 22:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #8 - Voice Synthesizers, Online Dictionaries
To: AIList@SRI.COM


AIList Digest            Saturday, 9 Jan 1988       Volume 6 : Issue 8

Today's Topics:
  AI Tools - Voice Synthesizers & Online Dictionaries

----------------------------------------------------------------------

Date: 6 Jan 88 00:51:46 GMT
From: ndsuvax!nebezene@uunet.uu.net  (Todd Michael Bezenek)
Subject: voice synthesizer package needed


I am looking for a voice synthesizer package that produces good
quality voice.

   The North Dakota State University Amateur Radio Society is
developing a microprocessor-based control unit for a remote radio
site.  We need a good voice synthesizer which will be interfaced to the
control unit.  The audio must be of a quality that will be easily
understood when transmitted via radio.  It is not necessary that the
synthesizer package come with any support hardware whatsoever.

   Our budget for the interface is $100.

   If you know of a device that is available, please give me
information concerning voice quality, price, and availability.  Also
let me know of any special pricing for which our student club may
qualify.


Sincerely,

-Todd
--
Todd M. Bezenek                    --=---+---=--
                                          \___
Student of Electrical and           ---=---+-I-=---
 Electronics Engineering                   |\
                                     ---=----+----=---
Bitnet:  nebezene@ndsuvax                  |
UUCP:    uunet!ndsuvax!nebezene            ↑   Amateur Radio Station KO0N

------------------------------

Date: 7 Jan 88 03:20:43 GMT
From: krc@purdue.edu  (Kenny "RoboBrother" Crudup)
Subject: Re: voice synthesizer package needed


In article <611@ndsuvax.UUCP>, nebezene@ndsuvax.UUCP (Todd Michael Bezenek)
writes:
> I am looking for a voice synthesizer package that produces good
> quality voice.

I used to play around with the General Instruments (GI) chip set.  There
is a member of its SP- series morpheme(?) generators masked-out to produce
allophones.  There is a companion chip based around one of its micros that
takes regular old asynch serial and spits it out to the SP-AL2, so that
you get speech from ASCII.  The chip has a lot of options concerning
inputs/speed/etc., so you could probably do parallel (I forget).  These
chips, plus a 2K x 8 RAM (as a buffer), are all you need.

The best part is that you can get it all from [Radio Shack] for about $40.
I got mine from GI a long time ago and just proto-boarded it (the
chips were free) just to see what it sounded like.  I didn't have the
suggested filtering, or even the right speed xtal, but it worked well
enough for me.  Of course, it screws up some words (like my last name),
and it doesn't know context, like the difference between 'wind' (air)
and 'wind' (watch), but that is expected.  You should have seen
folks' faces, though!  (Shall we play a game?)

Hope this helps.
--
Kenny "RoboBrother" Crudup              krc@arthur.cs.purdue.edu
Purdue University CS Dept.
W. Lafayette, IN 47907                  The above is practically Official
+1 317 494 7842                         University Policy. So there.

------------------------------

Date: 7 Jan 88 17:54:57 GMT
From: lawrence@bbn.com  (Gabe Lawrence)
Subject: Re: voice synthesizer package needed

In article (Todd Michael Bezenek) writes:
>
>I am looking for a voice synthesizer package that produces good
>quality voice.

Check out the "What's New" article on pg. 86 of the January '88 BYTE citing
the new Heath HV-2000 speech processing system.  It's an IBM-compatible
half-size plug in card consisting of a speech synthesizer, audio amplifier,
a speaker, and a 60K buffer.  It will read ASCII text files or ASCII data
received through the serial port and it even includes some terminal-emulation
software to add speech to modem communications.

Technically speaking, the HV-2000 uses a basic set of 64 phonemes for word
and/or sentence construction and allows you to specify up to 4 durations,
16 rates, 4096 inflection levels, 32 transition levels, 8 transition rates,
8 articulation rates, 49 musical notes and 16 amplitude settings.  Pricing
for these things is $89.95/each.  Not bad considering I used to work for
a company which used those stupid VOTRAX beasts that cost us $400.00 each...

Details and orders should be addressed to Heath Company, Dept. 350-020, Hilltop
Rd., Benton Harbor, MI 49022.

Please direct all net inquiries to them; I know nothing beyond what I've
described in this posting.  Since I've never even _seen_ one of these boards,
all standard disclaimers apply.

                                =Gabriel Lawrence=
                                =BBN Communications=

USENET: ...!harvard!bbn!ccv!lawrence
INTERNET: lawrence@bbn.com

------------------------------

Date: 5 Jan 88 15:19:28 GMT
From: ucsdhub!hp-sdd!ncr-sd!ncrcae!gollum!rolandi@sdcsvax.ucsd.edu 
      (rolandi)
Subject: online dictionaries


Several people have written to me personally in reference to a request I
made earlier for an online dictionary.  This is a collective response to
those people.

Two sources have been suggested.

        the Microsoft CD ROM version of the American Heritage Dictionary
and
        the OED from Oxford University Press

I called my local Microsoft dealer but he had no idea what I was talking
about.  I have not been able to get further information about the OED
either.  If anyone can locate these sources, I would appreciate hearing
what they find out.

Thanks.


walter rolandi
rolandi@gollum.UUCP ()
NCR Advanced Systems, Columbia, SC
u.s.carolina dept. of psychology and linguistics

------------------------------

Date: 6 Jan 88 12:36:09 GMT
From: dave@mimsy.umd.edu  (Dave Stoffel)
Subject: online dictionaries


    *** reposted in response to recent request for online dictionaries ***

Subject: Re: machine-readable dictionaries


    Here's a summary of replies to my query on sci.lang.  I also received
some papers on MRDs; let me know if you would like copies.

    I recently queried the net community about computerized
    dictionaries which contained part-of-speech information.  Here's
a digest of the responses.


----


>From the Oxford Text Archives:
     Oxford Advanced Learner's Dictionary of Contemporary English
     Collins English Dictionary.

>From ?
     Webster's Pocket Dictionary (Amsler's thesis used this one)
     Longmans Dictionary of Contemporary English.

>From Gage Publishers:
     Gage Canadian Dictionary

----

        Automated Language Processing Systems
        190 West 800 North
        Provo, UT  84601
        Tel. (801) 375-0090

They have a wide variety of machine readable dictionaries (in several
languages).  They are not on USENET but you could get in touch with
them by telephone or mail.  Talk to either Robert Goode or Logan Wright.


----

You may wish to consult a report by Robert Amsler on computerized
dictionaries that appeared in the Annual Review of Information Science and
Technology, Vol. 19, 1984, pp. 161-209.


----

A book you may be interested in:
 Erik Akkerman
 Pieter Masereeuw
 Willem Meijs
 1985
 Designing a Computerized Lexicon for Linguistic Purposes
 ASCOT Report No. 1
 Rodopi
 Amsterdam
 A comparison of the Longman Dictionary of Contemporary English and
           the Oxford Advanced Learner's Dictionary for the purposes of NLP
           research.
Both dictionaries are apparently available on tape, and both have part of
speech info included.  (The report favors Longman's dictionary.)
--
       Dave Stoffel (703) 790-5357
       seismo!mimsy!dave
       dave@Mimsy.umd.edu

------------------------------

Date: 6 Jan 88 17:53:06 GMT
From: ptsfa!pbphb!pbhyd!rjw@ames.arpa  (Rod Williams)
Subject: Re: online dictionaries

My understanding is that the online Oxford English Dictionary (OED)
is still a work-in-progress and is not yet commercially available.

------------------------------

Date: 6 Jan 88 19:29:59 GMT
From: mary@csd4.milw.wisc.edu  (Mary Patricia Lowe)
Subject: Re: online dictionaries

In article <29@gollum.Columbia.NCR.COM> rolandi@gollum.UUCP () writes:
>
>       the Microsoft CD ROM version of the American Heritage Dictionary
>       the OED from Oxford University Press
>
>If anyone can locate these sources, I would appreciate what they find out.

In the January 1988 issue of IEEE Spectrum, the section on Tools and Toys
(p. 73) contains a short blurb on the Microsoft Bookshelf. The CD-ROM
includes the following reference works:

        The World Almanac and Book of Facts,
        The American Heritage Dictionary,
        The U.S. ZIP Code Directory,
        The Chicago Manual of Style,
        Bartlett's Familiar Quotations,
        Roget's II: Electronic Thesaurus,
        Houghton Mifflin Spelling Verifier and Corrector,
        Houghton Mifflin Usage Alert,
        Business Information Sources.

For more information, contact: Microsoft Corp., Box 97017, Redmond, WA. 98073,
(206)-882-8088.

                        -Mary

Mary Patricia Lowe      mary@csd4.milw.wisc.edu       ...ihnp4!uwmcsd1!mary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

------------------------------

Date: 7 Jan 88 10:32:13 GMT
From: otter!sjmz@hplabs.hp.com  (Stefek Zaba)
Subject: Re: online dictionaries

/ otter:comp.ai / rjw@pbhyd.UUCP (Rod Williams) /  5:53 pm  Jan  6, 1988 /
>My understanding is that the online Oxford English Dictionary (OED)
>is still a work-in-progress and is not yet commercially available.

Oxford Advanced Learner's is available as indicated.  The mammoth work of
reference on the historical development of the English Language, the multi-
volumed Oxford English Dictionary, is being reworked and will be made available
in electronic form with extensive tagging (i.e. *not* just flat text).
The overall manager of this project is Timothy Benbow at Oxford University
Press, Oxford, England (no email link that I know of!); there's also active
academic involvement at a Canadian university - Waterloo? - which has set up a
unit to do great things on this project.  Mail me if you want the correct details
on that (I can dig them out at home).

------------------------------

Date: 8 Jan 88 14:51:10 GMT
From: craig@think.com  (Craig Stanfill)
Subject: Re: online dictionaries

In article <1092@pbhyd.UUCP> rjw@pbhyd.UUCP (Rod Williams) writes:
>My understanding is that the online Oxford English Dictionary (OED)
>is still a work-in-progress and is not yet commercially available.

There is a new edition of the OED, which is currently in preparation,
and will eventually be available in electronic form.  There is also
the old (1932?) edition plus numerous supplements, which is available
in electronic form through Oxford University Press.

------------------------------

End of AIList Digest
********************

∂12-Jan-88  0014	LAWS@KL.SRI.COM 	AIList V6 #9 - Synthesizers, Dictionaries, SNOBOL, Psychology List  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 12 Jan 88  00:14:15 PST
Date: Mon 11 Jan 1988 22:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #9 - Synthesizers, Dictionaries, SNOBOL, Psychology List
To: AIList@SRI.COM


AIList Digest            Tuesday, 12 Jan 1988       Volume 6 : Issue 9

Today's Topics:
  Queries - Expert System Shells for IBM Mainframes &
    Scott Fahlman and BUILD,
  AI Tools - Voice Synthesizers & Dictionaries &
    SNOBOL4 Language and AI Software,
  Bindings - Psychology List,
  Philosophy - Open Systems & Evolution of Intelligence

----------------------------------------------------------------------

Date: Fri, 8 Jan 88 15:31:28 EST
From: gould!dsacg1!ntm1169@uunet.UU.NET (Mott Given)
Subject: Request info on expert system shells for IBM mainframes


 I am doing a market survey of expert system shells available for IBM
 mainframes, including but not limited to, IBM's ESE, Cullinet's
 Application Expert, Aion's ADS/MVS, Nixdorf's Twaice, and KES from
 Software A&E.  I am interested in comments on the strengths and weaknesses
 of these products from people who have had experience using them.  I am
 also interested in comments on why users selected one of these packages over
 the other ones that were available.
 Please send your replies to AIList.

------------------------------

Date: 8 Jan 88 12:46:09 GMT
From: mcvax!botter!klipper!biep@uunet.uu.net  (J. A. "Biep" Durieux)
Subject: Scott Fahlman and BUILD

Does anyone know whether Scott Fahlman is reachable by email, or whether
his BUILD-program is available (and from where, if so)?

Thanks a lot in advance!
--
                                                Biep.  (biep@cs.vu.nl via mcvax)
        Protect endangered species: Forbid line-eater hunting!


  [Try Fahlman@C.CS.CMU.EDU on the ARPANET.  -- KIL]

------------------------------

Date: 8 Jan 88 22:30:14 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (rolandi)
Subject: speech gizmos for pc's


Regarding inexpensive PC speech devices, there is an ad for a $69.95
PC add-on in the winter 1987 issue of PC AI. It is from COVOX, Inc. of
675 Conger St., Eugene, OR 97402. Ph. (503) 342-1271.  I have not seen
it but it is said to do text-to-speech and such.  A few years back, this
company made an inexpensive speech recognition device for the C64.  It
was very impressive and well worth the money.


walter rolandi
rolandi@gollum.UUCP ()
NCR Advanced Systems, Columbia, SC
u.s.carolina dept. of psychology and linguistics

------------------------------

Date: 9 Jan 88 00:18:43 GMT
From: caset!catuc!peter@arizona.edu  (Peter Collins)
Subject: Re: voice synthesizer package needed

In article <5854@ccv.bbn.COM>, lawrence@bbn.COM (Gabe Lawrence) writes:
> In article (Todd Michael Bezenek) writes:
> >
> >I am looking for a voice synthesizer package that produces good
> >quality voice.
> >
>
> Check out the "What's New" article on pg. 86 of the January '88 BYTE citing
> the new Heath HV-2000 speech processing system.  It's an IBM-compatible
> half-size plug in card consisting of a speech synthesizer, audio amplifier,
> a speaker, and a 60K buffer.  It will read ASCII text files or ASCII data
  .....

I've played with this board at a local Heath store.  Not bad for the price,
but be careful - the pc board itself is not of that great quality.  The
local Heath tech and I managed to inadvertently lift several traces
off the board while trying to debug it after he assembled the kit.
I hope Heath comes out with a new batch of higher-quality boards.

>                               =Gabriel Lawrence=
>                               =BBN Communications=

                peter collins
                Computer Automation

------------------------------

Date: 8 Jan 88 15:51:31 GMT
From: grc!don@csd1.milw.wisc.edu  (Donald D. Woelz)
Subject: TRC and thanks

Just a quick thanks to all those who graciously pointed me
toward obtaining the PD sources to TRC.  I had many responses
to my request.
--
Don Woelz              {ames, rutgers, harvard}!uwvax!uwmcsd1!grc!don
GENROCO, Inc.                              Phone: 414-644-8700
205 Kettle Moraine Drive North             Fax:   414-644-6667
Slinger, WI 53086                          Telex: 6717062

------------------------------

Date: Sun, 10 Jan 88 13:35 CST
From: Christopher Maeda <maeda@MCC.COM>
Subject: Microsoft CD-ROM dictionary

The Human Interface group at MCC wanted to use the Microsoft CD-ROM as a
data base in a natural language project.  Microsoft, however, would not
give out (or sell) any documentation on how the information was stored,
making it practically unusable.  Another company, Facts on File, Inc.,
was happy to sell us a developer's license for their CD based picture
database, the Visual Dictionary.  So unless you just want to look up
words with your PC, stay away from the Microsoft product.

Chris Maeda
(maeda@ai.ai.mit.edu)

------------------------------

Date: Sun, 10 Jan 88 00:08:11 est
From: amsler@flash.bellcore.com (Robert Amsler)
Subject: The OED story

Date:         30 November 1987, 10:54:07 EST
Reply-To:     MCCARTY%UTOREPAS.BITNET@wiscvm.wisc.edu
Sender: HUMANIST Discussion <HUMANIST%UTORONTO.BITNET@wiscvm.wisc.edu>
From: MCCARTY%UTOREPAS.BITNET@wiscvm.wisc.edu
To: Robert Amsler <amsler@flash.bellcore.com>

Contributor: May Katzen <MAY@VAX.LEICESTER.AC.UK>
Subject: 1st edn. of the OED in CD-ROM and 2nd edn. in hardcopy

I have received the following information from Tim Benbow of
Oxford University Press about its publishing plans for the
OED, in response to the query from Mr Wall.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Oxford University Press has announced that early in 1988
it will publish the original Oxford English Dictionary,
1884-1928, issued in twelve printed volumes, on two CD ROM disks.

OUP states that this product is very user-friendly, much more so
than other similar products on the market.

These CD ROMs can run on an IBM PC, XT, or AT, or a compatible clone, with
640K of memory and either a CGA or EGA display adapter.  A Hitachi,
Philips, or Sony disk drive is required.  The display monitor
may be monochrome, but a colour monitor is preferable, as colour
is used to distinguish different types of information.

OUP also plan to make the original OED available on magnetic tape
in a fully structured version with embedded codes, written in IBM
format.

In 1989, OUP will publish the Oxford English Dictionary, Second
Edition, which is the text of the original OED, plus supplements,
plus new material which has been added recently.  This will be
published in a printed version of 20 volumes.

The database containing this material will be made available in a
number of electronic forms.

------------------------------

Date: Mon, 11 Jan 88 16:59:19 -0800
From: Richard Nelson <nelson@ICS.UCI.EDU>
Subject: SNOBOL4 language and AI at SIMTEL20


SNOBOL4 is a remarkably powerful if unstructured symbolic processing and
pattern matching language.  A SNOBOL4 interpreter for MS-DOS and a report
discussing the use of SNOBOL4 for artificial intelligence, with many AI-related
routines, are now available via standard anonymous FTP from SIMTEL20:

Filename                        Type           Bytes      CRC

Directory PD1:<MSDOS.SNOBOL4>
VSNOBOL4.ARC                    Binary        286049      805EH
VSNOBOL4.TXT                    ASCII           5750      296DH
AISNOBOL.ARC                    Binary        258956      AC71H
AISNOBOL.TXT                    ASCII           5984      2E23H

Short Descriptions:
                           VANILLA SNOBOL4

Vanilla SNOBOL4 provides the entire Bell Labs SNOBOL4 programming
language, except for real numbers and external functions.  The total size
of the object program and data cannot exceed 30K bytes in this entry-level
version.  Vanilla SNOBOL4 was released by Catspaw, Inc., maker of a
commercial enhanced version of SNOBOL4, because they believe that many
people would enjoy programming in SNOBOL4, if there was a version of the
language that was widely and freely available.  Included in the package
is an overview of the SNOBOL4 programming language, the Vanilla SNOBOL4
interpreter, and numerous example SNOBOL4 programs. The file VSNOBOL4.TXT
contains a more detailed description of the files contained in VSNOBOL4.ARC.

                      AI PROGRAMMING IN SNOBOL4

AISNOBOL.ARC contains text and code from the report "Artificial
Intelligence Programming in SNOBOL4," by Michael G. Shafto.  Included is
SNOBOL4 code for things such as augmented transition networks, word
endings, semantic information retrieval, and Wang's algorithm for
theorem-proving.  A more detailed description is contained in AISNOBOL.TXT.

------------------------------

Date: Sat,  9 Jan 88 17:55:21 EST
From: "Keith F. Lynch" <KFL@AI.AI.MIT.EDU>
Subject: Psychology list

> From: uhccux!todd@humu.nosc.mil  (The Perplexed Wiz)

> It's been at least three or four years since I last saw an attempt to
> create a newsgroup devoted to psychology.  So, I thought I'd test the
> waters once more.

There IS a psychology list: EPSYNET%UHUPVM1.BITNET@CUNYVM.CUNY.EDU.

                                                                ...Keith

------------------------------

Date: 9 Jan 88 10:52:00 GMT
From: goldfain@osiris.cso.uiuc.edu
Subject: Re: Seminar Announcement


            An announcement of a seminar by Peter Cariani, titled
 "Structural Preconditions for Open-Ended Learning through Machine Evolution"
     contained an abstract, which made the following (excerpted) claims :

>----------------------------------------------------------------------------<
> Evolutionary machines  cannot be constructed  through computations  alone. <
> New  primitive  category  construction  necessitates   that  new  physical <
> measuring structures and controls come into being.   While the behavior of <
> such  devices can be  represented to a  limited  degree by  formal models, <
> those models cannot  themselves create  new categories vis-a-vis  the real <
> world,  and hence are  insufficient as  category-creating devices in their <
> own right.   Computations must  be augmented by  the physical construction <
> of new  sensors and effectors  implementing processes  of measurement  and <
> control respectively.   This construction  process must be inheritable and <
> replicable,  hence encodable into symbolic form, yet involving the autono- <
> mous, unencoded dynamics of the matter itself.                             <
>----------------------------------------------------------------------------<

Is Peter Cariani on the net, and is he able to get involved in a discussion of
the items in this abstract?  I find these claims very open to question -
either I am not understanding them fully, or they are debatable.

             Mark Goldfain            arpa:  goldfain@osiris.cso.uiuc.edu
             (A lowly student at)-->         University of Illinois at U-C

------------------------------

Date: Mon, 11 Jan 88 10:15 EST
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: Evolution of Intelligence

I am reading (of all things) a book about dogs (>The Canine Clan< by
John McLoughlin) and I came across a comment about intelligence that I
share with the net without comment.  Since I don't have the book with
me at the moment I cannot offer an exact quote, but this is close:
Intelligence arises in order to make efficient use of the senses.

-Kurt Godden
 godden@gmr.com

  [For a more elaborate development of this viewpoint see the
  recent book by Fischler and Firschein on The Eye and the Brain.
  A major premise is that perception is a goal of AI (or of any
  intelligence) rather than just a preprocessing stage.  -- KIL]

------------------------------

End of AIList Digest
********************

∂15-Jan-88  0016	LAWS@KL.SRI.COM 	AIList V6 #10 - Intelligence, MLNS Neural Network Tool Set
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 15 Jan 88  00:15:52 PST
Date: Thu 14 Jan 1988 21:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #10 - Intelligence, MLNS Neural Network Tool Set
To: AIList@SRI.COM


AIList Digest            Friday, 15 Jan 1988       Volume 6 : Issue 10

Today's Topics:
  Queries: Table-Tennis-Playing Robot & M. Selfridge &
    TRC Users & Graphical Representation of Rule Base,
  Philosophy - Evolution of Intelligence & Empirical Science,
  Neuronal Systems - MLNS Public-Domain Simulator Tool Set Effort

----------------------------------------------------------------------

Date: 14 Jan 88 00:32:47 GMT
From: dlfe91!hucka@umix.cc.umich.edu  (Michael Hucka)
Subject: query: table-tennis-playing robot?


       Within the last half-year I read an article which described a successful
robotic device capable of playing table-tennis.  Unfortunately I can't remember
where I came across it.  Has anyone else read about this or know where I can
get more information about it?  I am interested in learning about the research
and technical issues the system's creators had to address.

       Mike
--
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Computer Aided Engineering Network, University of Michigan, Ann Arbor MI 48109
ARPA: hucka@caen.engin.umich.edu

------------------------------

Date: 14 Jan 88 20:04:39 GMT
From: ece-csc!ncrcae!gollum!rolandi@mcnc.org  (rolandi)
Subject: M. Selfridge


Does anyone know the email (or other mail) address of M. Selfridge of:

Selfridge, M. 1980. A Process Model of Language Acquisition.  Ph.D.
        diss., Technical Report, 172, Dept of Computer Science, Yale
        University.

?

Thanks.


walter rolandi
rolandi@gollum.UUCP ()
NCR Advanced Systems, Columbia, SC
u.s.carolina dept. of psychology and linguistics

------------------------------

Date: 12 Jan 88 02:41:49 GMT
From: linc.cis.upenn.edu!levy@super.upenn.edu  (Joshua Levy)
Subject: Looking for TRC users

I'm interested in how many people are using TRC, and
what they are using it for.  If you  use TRC, plan to,
or are just interested in it, please send me email.
(Especially if you have modified or improved it in any way.)
Thanks.

TRC (Translate Rules to C) is a program which takes an OPS-like
rule language and compiles it into C code.  It is a PD (public-domain)
program.

Joshua Levy
levy@linc.cis.upenn.edu
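
[For readers who have not seen TRC: an OPS-style rule has a left-hand side
of condition patterns over working-memory elements and a right-hand side of
actions.  Compiled into C, a single rule boils down to a test-and-act
function over the working memory.  The sketch below is only an illustration
of the idea, with an invented rule and data layout; it is not TRC's actual
output format:

    #include <stdio.h>
    #include <string.h>

    /* Working memory: a flat list of (attribute, value) elements. */
    typedef struct { char attr[16]; int value; } wme;

    static wme wm[100];
    static int wm_count = 0;

    static void assert_wme(const char *attr, int value)
    {
        strncpy(wm[wm_count].attr, attr, sizeof wm[wm_count].attr - 1);
        wm[wm_count].attr[sizeof wm[wm_count].attr - 1] = '\0';
        wm[wm_count].value = value;
        wm_count++;
    }

    /* Rule: IF temperature > 100 AND valve = 0 THEN assert (alarm 1). */
    static int rule_overheat(void)
    {
        int i, hot = 0, closed = 0;
        for (i = 0; i < wm_count; i++) {
            if (!strcmp(wm[i].attr, "temperature") && wm[i].value > 100) hot = 1;
            if (!strcmp(wm[i].attr, "valve") && wm[i].value == 0) closed = 1;
        }
        if (hot && closed) { assert_wme("alarm", 1); return 1; }   /* fired */
        return 0;
    }

    int main(void)
    {
        assert_wme("temperature", 130);
        assert_wme("valve", 0);
        printf("overheat rule %s\n", rule_overheat() ? "fired" : "did not fire");
        return 0;
    }

A real rule compiler also has to generate the recognize-act loop and the
conflict-resolution machinery that chooses among many such rules.]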

------------------------------

Date: 14 Jan 88 12:11:57 GMT
From: mcvax!hermanl@uunet.uu.net  (Herman Lenferink)
Subject: graphical representation of rule base


I am searching for algorithms / approaches to represent a (production)
rule base in the form of a graph / tree.

The premise of a rule can have several conditions, connected with
AND / OR connectives. A rule may also have more than one conclusion
(using an AND connective).
However, I am also interested in suggestions for representations of other
rule formats.

Any hints, literature references, or even source code are VERY welcome.
If there is interest, I will summarize the responses.

Thanks in advance,

Herman Lenferink
CWI, Amsterdam, Netherlands
hermanl@piring.cwi.nl
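
[One common approach, sketched here in C with invented names purely as an
illustration: turn each rule into a small AND/OR tree whose leaves are
conditions and whose root is the rule's conclusion, then link the conclusion
nodes of some rules to matching condition nodes of others, so the rule base
as a whole becomes a directed AND/OR graph that a standard graph- or
tree-layout routine can draw.

    /* Node kinds for an AND/OR graph over a production rule base. */
    enum node_kind { LEAF, AND_NODE, OR_NODE };

    #define MAX_CHILDREN 8

    struct node {
        enum node_kind kind;
        const char    *label;           /* condition or conclusion text */
        int            nchildren;
        struct node   *children[MAX_CHILDREN];
    };

    /* Example: the rule  IF a AND (b OR c) THEN d  becomes:           */
    struct node a       = { LEAF, "a", 0, {0} };
    struct node b       = { LEAF, "b", 0, {0} };
    struct node c       = { LEAF, "c", 0, {0} };
    struct node b_or_c  = { OR_NODE,  "b OR c",         2, { &b, &c } };
    struct node premise = { AND_NODE, "a AND (b OR c)", 2, { &a, &b_or_c } };
    struct node rule_d  = { OR_NODE,  "d",              1, { &premise } };
        /* a second rule concluding d would add another child
           under the OR node labelled "d"                     */

Rules with several conclusions can either be split into one tree per
conclusion or given several root links to the same premise subtree.]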

------------------------------

Date: 14 Jan 88 07:30:59 GMT
From: well!wcalvin@lll-crg.llnl.gov (William Calvin)
Reply-to: well!wcalvin@lll-crg.llnl.gov (William Calvin)
Subject: Re: Evolution of Intelligence


My favorite short definition is that of Horace B. Barlow in 1983:
        "Intelligence... is the capacity to guess right by
        discovering new order."
There are some related quotes at p.187 of my book THE RIVER THAT FLOWS UPHILL.
                William H. Calvin
                University of Washington NJ-15, Seattle WA 98195
                   wcalvin@well.uucp    206/328-1192

------------------------------

Date: Tue, 5 Jan 88 11:50 EST
From: Bruce E. Nevin <bnevin@cch.bbn.com>
Subject: empirical science of language

  [Excerpted from the NL-KR Digest.  Bruce mentions fundamental
  difficulties in the study of linguistics and psychology.  Are
  there similar viewpoints on AI?  Roger Schank mentions at least
  one in his recent AI Magazine article:  If AI is the study of
  uniquely human capabilities, then any algorithm derived from AI
  negates its own domain.  -- KIL]

The status of linguistics as a science has been a vexed question for a
very long time.  There are a number of good reasons.  Probably the
central one is this:  in all other sciences and in mathematics, you can
rely on the shared understanding of natural language to provide a
metalanguage for your specialized notations and argumentation.  In
linguistics you cannot without begging fundamental questions that define
the field.  There is an exactly parallel difficulty in psychology:  a
psychological model must account for the investigator on the same terms
as it accounts for the object of investigation.  The carefully crafted
suspension of subjectivity that is so crucial to experimental method
becomes unattainable when subjectivity itself is the subject.  (See
Winograd's recent work, e.g. _Understanding Computers and Cognition_ for
reasons why computer modelling of natural language is not possible, on
the usual construal of what computer modelling is.  I have references to
work that gets around this "Framer Problem" if you are interested.)

[...]

------------------------------

Date: Sun, 10 Jan 88 22:05:37 EST
From: weidlich@ludwig.scc.com (Bob Weidlich)
Subject: MLNS Announcement


             A PROPOSAL TO THE NEURAL NETWORK RESEARCH COMMUNITY
                                 TO BUILD A
       MULTI-MODELED LAYERED NEURAL NETWORK SIMULATOR TOOL SET (MLNS)

                               Robert Weidlich

                           Contel Federal Systems

                              January 11, 1988


The technology of neural networks is in its infancy.  Like all other major new
technologies  at  that  stage, the development of neural networks is slowed by
many impediments along the road to realizing its potential to solve many  sig-
nificant  real  world problems.  A common assumption of those on the periphery
of neural network research is that the major factor holding back  progress  is
the  lack  of hardware architectures designed specifically to implement neural
networks.  But those of us who use neural networks on a day to day basis real-
ize  that  a  much more immediate problem is the lack of sufficiently powerful
neural network models. The pace of progress in the technology will  be  deter-
mined  by the evolution of existing models such as Back Propagation, Hopfield,
and ART, as well as the development of completely new models.

But there is yet another significant problem that inhibits  the  evolution  of
those  models:  lack  of  powerful-yet-easy-to-use,  standardized, reasonably-
priced toolsets.  We spend months of time building our  own  computer  simula-
tors,  or  we spend a lot of money on the meager offerings of the marketplace;
in either case we find we spend more  time  building  implementations  of  the
models  than  applying  those  models to our applications.  And those who lack
sophisticated computer programming skills are cut out altogether.

I propose to the  neural  network  research  community  that  we  initiate  an
endeavor  to  build  a suite of neural network simulation tools for the public
domain.  The team will hopefully be composed of a cross-section  of  industry,
academic  institutions,  and  government, and will use computer networks, pri-
marily Arpanet, as its  communications  medium.   The  tool  set,  hereinafter
referred  to  as  the  MLNS,  will ultimately implement all of the significant
neural network models, and run on a broad range of computers.

These are the basic goals of this endeavor.

     1.   Facilitate the growth and evolution of neural network technology  by
          building  a set of powerful yet simple to use neural network simula-
          tion tools for the research community.

     2.   Promote standardization in neural network tools.

     3.   Open up neural network technology to  those  with  limited  computer
          expertise  by  providing powerful tools with sophisticated graphical
          user interfaces.  Open up neural network technology  to  those  with
          limited budgets.

     4.   Since we expect neural network models to evolve rapidly, update  the
          tools to keep up with that evolution.

This announcement is a condensation of a  couple  of  papers  I  have  written
describing  this proposed effort.  I describe how to get copies of those docu-
ments and get involved in the project, at the end of this announcement.

The MLNS tool will be distinctive in that it will incorporate a layered approach
to its architecture, thus allowing several levels of abstraction.  In a sense,
it is really a suite of neural net tools, one operating atop the other,
rather  than  a  single tool. The upper layers enable users to build sophisti-
cated applications of neural networks which provide  simple  user  interfaces,
and hide much of the complexity of the tool from the user.

This tool will implement as many significant neural network models (i.e., Back
Propagation,  Hopfield, ART, etc.) as is feasible to build.  The first release
will probably cover only 3 or 4 of the more popular models.  We will  take  an
iterative  approach  to  building  the  tool and we will make extensive use of
rapid prototyping.
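
[To make the layering concrete: the lowest layer of such a tool set might
expose nothing more than a network data structure and a few propagation and
training entry points, with model-specific code (Back Propagation, Hopfield,
ART, ...) and the graphical front ends built on top of it.  The C interface
below is hypothetical - invented here to illustrate the idea, not taken from
the MLNS documents:

    /* mlns_core.h -- sketch of the lowest layer of a layered simulator. */

    typedef struct mlns_net mlns_net;   /* opaque network handle */

    /* Layer 1: model-independent core (storage and propagation). */
    mlns_net *mlns_create(int nlayers, const int *units_per_layer);
    void      mlns_set_weight(mlns_net *net, int layer, int from, int to,
                              double weight);
    void      mlns_forward(mlns_net *net, const double *input,
                           double *output);
    void      mlns_free(mlns_net *net);

    /* Layer 2: model-specific learning rules built on the core. */
    void      mlns_backprop_epoch(mlns_net *net, const double *inputs,
                                  const double *targets, int npatterns,
                                  double learning_rate);

    /* Layer 3: application and graphical-interface code sits above this,
       calling only layers 1 and 2.                                      */

The point of the layering is that a new model plugs in at the middle layer
without touching either the core below it or the user-level tools above it.]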

I am asking for volunteers to help build the tool.  We will rely  on  computer
networks,  primarily  Arpanet  and those networks with gateways on Arpanet, to
provide our communications utility.  We will need a variety of skills  -  pro-
grammers  (much  of  it  will  be written in C), neural network "experts", and
reviewers.  Please do not be reluctant to  help  out  just  because  you  feel
you're  not  quite experienced enough; my major motivation for initiating this
project is to round-out my own neural networking  experience.   We  also  need
potential  users  who  feel they have a pretty good feel for what is necessary
and desirable in a good neural network tool set.

The tool set will be 100% public domain; it will not be the  property  of,  or
copyrighted by my company (Contel Federal Systems) or any other  organization,
except for a possible future non-commercial organization that we may  want  to
set up to support the tool set.

If you are interested in getting involved as a designer, an advisor, a poten-
tial  user,  or if you're just curious about what's going on, the next step is
to download the files in which I describe this project in detail.  You can  do
this by FTP file transfer as an anonymous user.  To do that, take the follow-
ing steps:

        1.   Set up an ftp session with my host:

                     "ftp ludwig.scc.com"
                        (Note:  this is an arpanet address.  If you are
                         on a network other than arpanet with a gateway
                         to arpanet, you may need a modified address
                         specification.  Consult your local comm network
                         guru if you need help.)

             [Note: FTP generally does not work across gateways.  -- KIL]

        2.   Login with the user name "anonymous"
        3.   Use the password "guest"
        4.   Download the pertinent files:

                     "get READ.ME"         (the current status of the files)
                     "get mlns_spec.doc    (the specification for the MLNS)
                     "get mlns_prop.doc    (the long version of the proposal)

If for any reason you cannot download the files, then call or write me at the
following address:

             Robert Weidlich
             Mail Stop P/530
             Contel Federal Systems
             12015 Lee Jackson Highway
             Fairfax, Virginia  22033
                     (703) 359-7585  (or)  (703) 359-7847
                              (leave a message if I am not available)
                     ARPA:  weidlich@ludwig.scc.com

------------------------------

End of AIList Digest
********************

∂17-Jan-88  2332	LAWS@KL.SRI.COM 	AIList V6 #11 - Seminars    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 17 Jan 88  23:32:40 PST
Date: Sun 17 Jan 1988 21:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #11 - Seminars
To: AIList@SRI.COM


AIList Digest            Monday, 18 Jan 1988       Volume 6 : Issue 11

Today's Topics:
  Seminars - Rational Choice and Cognitive Illusion (HP) &
    Cooperative Inference Machine: CHI (SRI) &
    OBJ as a Theorem Prover (SRI) &
    Four-Valued Semantics for Terminological Logics (AT&T)

----------------------------------------------------------------------

Date: Mon 11 Jan 88 09:38:06-PST
From: Oscar Firschein <FIRSCHEIN@IU.AI.SRI.COM>
Subject: Seminar - Rational Choice and Cognitive Illusion (HP)

HP Labs Colloquium: Prof. Amos Tversky
Subject: Rational Choice and Cognitive Illusion

The analysis of decision and judgment under uncertainty reveals
pervasive and systematic departures from the rational theory of
judgment and choice. In particular, people exhibit overconfidence in
action and belief, susceptibility to framing effects, and
inconsistent attitudes toward risk.  These phenomena are traced to
the operation of a limited number of heuristic principles, which are
generally useful but often produce illusion and bias.

Time: 4 P.M.  Thursday, Feb. 4, 1988
Place:  Hewlett-Packard
        5M Auditorium
        1501 Page Mill Road, Palo Alto

Non-HP employees: Welcome! Please come to the lobby on time
                  so that you may be escorted to the 5M auditorium

Refreshment:      A wine and cheese reception will follow.

------------------------------

Date: Thu, 14 Jan 88 10:05:24 PST
From: seminars@csl.sri.com (contact lunt@csl.sri.com)
Subject: Seminar - Cooperative Inference Machine: CHI (SRI)


SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


     A Cooperative High-Performance Sequential Inference Machine: CHI

                             Akihiko Konagaya
                   Computer Systems Research Laboratory
                          NEC Corporation, Japan

                    Tuesday, January 26 at 11:00 am
            SRI International, Conference Room B, Building A

This talk will describe the design principles and new compiler
technique of CHI-II, a deskside backend inference machine, which
has been developed as part of the Fifth Generation Computer System (FGCS)
Project in Japan.  CHI-II achieves 500 KLIPS for deterministic append
program execution by means of specialized hardware and machine code
optimization techniques.  The architecture also executes interpretive
predicates efficiently.  The merit of the micro-programmable "Wide Spectrum
Instruction Set," as opposed to the "Reduced Instruction Set (RISC),"
in terms of logic programming language execution, will be addressed.

One of the most important features of the CHI-II hardware is its large capacity
main memory (600MBytes) for a single user.  Cutting-edge memory device
technology makes it possible to develop a large-capacity, but small-size,
main memory system comparable to ordinary secondary storage. This
large-capacity memory allows us to store massive amounts of data, such as
the knowledge base of an expert system, in memory,
and greatly contributes to achieving high performance in large-scale
application systems by eliminating the overhead of virtual memory
management.

The CHI software system aims at integrating heterogeneous programming
environments on a workstation in a distributed operating system fashion. The
software enables CHI to act like a stand-alone computer without
input/output devices, rather than as an accelerator of a host machine. An
object-oriented means of intermachine communication, called "virtual objects,"
greatly simplifies the realization of bus-transparent input/output
operations, such as a remote file system and a remote window system.  The
multiple process facility, although rather conservative, enables us
to develop practical system programs dedicated to logic programming. It
is also expected to be used as a good vehicle in which to study parallel logic
programming.


NOTE FOR VISITORS TO SRI:

Please arrive at least 10 minutes early in order to sign in and
be shown to the conference room.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors
may park in the visitors lot in front of Building A (red brick
building at 333 Ravenswood Ave) or in the conference parking area
at the corner of Ravenswood and Middlefield.  The seminar room is in
Building A.  Visitors should sign in at the reception desk in the
Building A lobby.

IMPORTANT: Visitors from Communist Bloc countries should make the
necessary arrangements with Fran Leonard, SRI Security Office,
(415) 859-4124, as soon as possible.

------------------------------

Date: Thu, 14 Jan 88 10:04:42 PST
From: seminars@csl.sri.com (contact lunt@csl.sri.com)
Subject: Seminar - OBJ as a Theorem Prover (SRI)


SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


                       OBJ AS A THEOREM PROVER

                          Joseph A. Goguen
                     Computer Science Laboratory
                          SRI International

                    Monday, January 25 at 4:00 pm
            SRI International, Conference Room B, Building A

This talk has two goals: to introduce OBJ, and to present some techniques for
using OBJ as a theorem prover.  OBJ is a wide-spectrum, first order functional
programming language rigorously based on *order-sorted* equational logic,
which provides a notion of *subtype* to support overloading, coercion,
multiple inheritance, and exception handling.  This rigorous semantic basis
allows both a declarative programming style, and the direct use of OBJ for
theorem proving.

Parameterized programming is a powerful technique for software design,
production, reuse and maintenance, involving abstraction through two kinds of
module: *objects* to encapsulate executable code, and in particular to define
abstract data types; and *theories* to specify both syntactic and semantic
structure of modules.  Each kind of module can be parameterized, where actual
parameters are modules.  Modules can also import other modules, yielding
hierarchies of parameterized modules.  Interfaces of parameterized modules are
defined by theories.  For parameter instantiation, a *view* binds the formal
entities in an interface theory to actual entities in a module, and also
asserts satisfaction of the theory by the module.  Views are first class
citizens that can be named, can import modules, and can even be parameterized.
*Module expressions* allow complex instantiations and may include commands
that transform already defined modules.

Typical higher order programming examples can be captured with just first
order functions, by the systematic use of parameterized programming.  Some
examples are given, including a hardware verification example.  New results
include a simple but useful extension of first order equational logic to allow
quantification over arbitrary function symbols, a perhaps surprising technique
for proving such equations using only ground term reduction, and some general
induction principles.  All this provides a very powerful first order calculus
for reasoning about (first order)
functions.
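
The flavor of "proving such equations using only ground term reduction" can
be conveyed with a toy rewriting example.  The sketch below is in Python
rather than OBJ, and its two rewrite rules (Peano addition) merely stand in
for an OBJ object's equations; it illustrates the reduce-and-compare idea
only, not OBJ's order-sorted machinery.

# A minimal sketch of proving a ground equation by reduction: rewrite both
# sides to normal form with the equations (used left to right) and compare.
# Terms are nested tuples; the rules below define Peano addition.

ZERO = ("0",)
def s(t): return ("s", t)
def plus(a, b): return ("+", a, b)

def rewrite(t):
    """Apply the rewrite rules once through the term, innermost first."""
    if t[0] == "+":
        a, b = rewrite(t[1]), rewrite(t[2])
        if a[0] == "0":              # eq  0 + N = N .
            return b
        if a[0] == "s":              # eq  s(M) + N = s(M + N) .
            return s(plus(a[1], b))
        return ("+", a, b)
    if t[0] == "s":
        return s(rewrite(t[1]))
    return t

def normal_form(t):
    """Keep rewriting until no rule applies."""
    nxt = rewrite(t)
    return t if nxt == t else normal_form(nxt)

def prove_equal(lhs, rhs):
    """A ground equation holds if both sides reduce to the same normal form."""
    return normal_form(lhs) == normal_form(rhs)

two, three = s(s(ZERO)), s(s(s(ZERO)))
print(prove_equal(plus(two, three), plus(three, two)))   # True: both sides reduce to 5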


NOTE FOR VISITORS TO SRI:

Please arrive at least 10 minutes early in order to sign in and
be shown to the conference room.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors
may park in the visitors lot in front of Building A (red brick
building at 333 Ravenswood Ave) or in the conference parking area
at the corner of Ravenswood and Middlefield.  The seminar room is in
Building A.  Visitors should sign in at the reception desk in the
Building A lobby.

IMPORTANT: Visitors from Communist Bloc countries should make the
necessary arrangements with Fran Leonard, SRI Security Office,
(415) 859-4124, as soon as possible.

------------------------------

Date: Fri, 15 Jan  08:45:27 1988
From: dlm%research.att.com@RELAY.CS.NET
Subject: Seminar - Four-Valued Semantics for Terminological Logics
         (AT&T)


Title:          A Four-Valued Semantics for Terminological Logics

Speaker:        Peter F. Patel-Schneider
                Schlumberger Palo Alto Research
                3340 Hillview Ave.
                Palo Alto, California  94304

Date:           Monday, January 18, 1988
Time:           10:30 AM

Place:          AT&T Bell Laboratories - Murray Hill 3D-473


Terminological logics formalize and extend the notions of concepts,
roles, and restrictions present in semantic networks, frame-based
systems, and object-oriented programming systems.  The most important
semantic relationship in these logics is subsumption -- whether one
concept is more general than another.  Subsumption is a non-trivial
relationship and if the terminological logic is expressively powerful,
then determining whether one concept subsumes another is
computationally intractable.  Because of this intractability,
knowledge representation systems based on terminological logics are
not suitable for use in knowledge-based systems.

This problem can be solved by using a four-valued semantics, resulting
in an expressively powerful terminological logic which has tractable
subsumption.  The subsumptions supported by the logic are a type of
"structural" subsumption, where each structural component of one
concept must have an analogue in the other concept.  Structural
subsumption captures an important set of subsumptions, similar to the
subsumptions computed in KL-ONE and NIKL.  The four-valued semantics
can thus be used to develop object-based knowledge representation
systems suitable for use in knowledge-based systems.
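
As a rough picture of the "structural" subsumption described above, the toy
Python check below requires every component of the subsuming concept to have
an analogue in the subsumed one.  The encoding of concepts (primitive names
plus role restrictions) and all the names are invented for this sketch; it is
not Patel-Schneider's four-valued logic.

# Toy illustration of structural subsumption: concept C subsumes concept D
# if every structural component of C has an analogue in D.

def subsumes(c, d):
    """Return True if concept c structurally subsumes concept d."""
    # Every primitive concept named in c must also appear in d.
    if not c["primitives"] <= d["primitives"]:
        return False
    # Every role restriction in c needs an analogue in d: same role,
    # at least as many fillers, and a filler concept that is itself subsumed.
    for role, (min_c, filler_c) in c["roles"].items():
        if role not in d["roles"]:
            return False
        min_d, filler_d = d["roles"][role]
        if min_d < min_c or not subsumes(filler_c, filler_d):
            return False
    return True

PERSON = {"primitives": {"person"}, "roles": {}}
PARENT = {"primitives": {"person"}, "roles": {"child": (1, PERSON)}}
FATHER = {"primitives": {"person", "male"}, "roles": {"child": (1, PERSON)}}

print(subsumes(PARENT, FATHER))   # True: every component of PARENT has an analogue
print(subsumes(FATHER, PARENT))   # False: PARENT lacks the "male" primitive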

Sponsor: Ron Brachman

------------------------------

End of AIList Digest
********************

∂18-Jan-88  0159	LAWS@KL.SRI.COM 	AIList V6 #12 - Conference on Text and Image Handling
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 18 Jan 88  01:59:11 PST
Date: Sun 17 Jan 1988 21:44-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #12 - Conference on Text and Image Handling
To: AIList@SRI.COM


AIList Digest            Monday, 18 Jan 1988       Volume 6 : Issue 12

Today's Topics:
  Conference - User-Oriented Content-Based Text and Image Handling

----------------------------------------------------------------------

Date: Sat, 16 Jan 88 17:14:13 est
From: walker@flash.bellcore.com (Don Walker)
Subject: Conference - User-Oriented Content-Based Text and Image
         Handling

************************************************************************

                             RIAO 88
                           CONFERENCE
 with presentation of prototypes and operational demonstrations

       USER-ORIENTED CONTENT-BASED TEXT AND IMAGE HANDLING
              Massachusetts Institute of Technology
                          Cambridge MA.
                        March 21-24 1988

                          organized by

  CENTRE DE HAUTES ETUDES INTERNATIONALES d'INFORMATIQUE DOCUMENTAIRE

                     with the assistance of
     CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE (C.N.R.S.)
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA)
          ECOLE NATIONALE SUPERIEURE DES MINES DE PARIS
     CENTRE NATIONAL D'ETUDES DES TELECOMMUNICATIONS (CNET)

                 US Participating Organizations

 AMERICAN FEDERATION OF INFORMATION PROCESSING SOCIETIES (AFIPS)
         AMERICAN SOCIETY FOR INFORMATION SCIENCE (ASIS)
             INFORMATION INDUSTRY ASSOCIATION (IIA)

          under the direction of Professor Lichnerowicz
               de l'Academie des Sciences de Paris

                        Conference Chair
                         Pierre Aigrain


                        SPECIFIC THEMES

      A)  Linguistic  processing and interrogation  of  full
text databases:
          - automatic indexing,
          - natural language queries,
          - computer-aided translation,
          - multilingual interfaces.
      B)  Automatic thesaurus construction,
      C) Expert system techniques for retrieving information
in  full-text  and multimedia databases:
          - expert  systems  reasoning on open-ended domains
          - expert   systems  simulating   librarians   accessing
            pertinent information.
       D)  Friendly user interfaces to classical information
retrieval systems.
       E) Specialized machines and system architectures designed
for  treating  full-text data,  including managing and  accessing
widely distributed databases.
      F)  Automatic database construction: scanning techniques,
optical character readers, output document preparation, etc.
      G)    New  applications   and  perspectives  suggested   by
emerging new technologies:
              - optical storage techniques  (videodisk,
                CD-ROM, CD-I, Digital Optical Disks);
              - integrated  text,  sound and image retrieval
                systems;
              - electronic mail and document delivery  based
                on content;
              - voice  processing technologies for  database
                construction;
              - production    of    intelligent     tutoring
                systems;
              - hypertext, hypermedia.


                 CONFERENCE PROGRAM AND SCHEDULE

                        GENERAL SESSION

          MONDAY, MARCH 21, 1988

 9:00 - 9:15 WELCOME STATEMENT
 Pierre Aigrain
President of CID and Conference Chair of RIAO88

 9:15 - 9:30 AIMS AND GOALS for RIAO88
Donald Walker
US Co-Chair, RIAO88 Program Committee

 9:30 - 9:50 INVITED SPEAKER
Goery Delacotte
Directeur de l'Information Scientifique et Technique aux CNRS

 9:50 - 10:30 INVITED SPEAKER
Karen Sparck Jones
Cambridge University ( United Kingdom )

          SESSION 1: HYPERMEDIA (Room 10-250)

Chair: Edward A. Fox

10:30 - 10:50  Hypermedia Design.
Brian R. Gaines, Joan N. Vickers
University of Calgary  (  Canada )

10:50 - 11:10  CLORIS:  A Prototype Video-Based Intelligent
Computer-Assisted Instruction System.
Alan P. Parkes
University of Lancaster  (  United Kingdom )

11:10 - 11:30  A Multimedia Information Base and Career Guidance
of Secondary School Pupils.
Jean Paul Anton, Francoise Dagorret, Francoise Larrieux
Universite Paul Sabatier  (  France )

11:30 - 11:50  Multimedia Information Management and Optical  Disk
Technologies as a Basis for Advanced Information Retrieval.
Ray Cordes, R. Buck-Emden, H. Langendorfer
Technische Universitat Braunschweig  (  Federal Republic of Germany )

12:00 - 1:30  LUNCH

                       PARALLEL SESSIONS

          SESSION 2: HYPERTEXT (Room 10-250)

Chair: Roland Hjerppe

 1:30 - 1:50  Effective Browsing in Hypertext Systems.
Carolyn L. Foss
Institut National de Recherche en Informatique et Automatique  (France )

 1:50 - 2:10  Introducing Hypertext in Primary Health Care:
Supporting the Doctor-Patient Relationship.
Toomas Timpka, Roland Hjerppe, John Zimmer, Marie Ekstrom
University of Linkoping  (  Sweden )

 2:10 - 2:30  A New Multimedia Electronic Book and its  Functional
Capabilities.
Yoshinori Hara, Asao Kaneko
NEC Corporation  (  Japan )

 2:30 - 2:50  Documentation Management for Large Systems of Equipment.
Peter L. Van Sickel,  Kenneth F. Sierzega, Catherine A. Herring,
Jonathan J. Frund
ALCOA Technical Center  (  United States )

          SESSION 3: IR INTERROGATION IMPROVEMENTS (Room 4-270)

Chair: Christoph Schwarz

 1:30 - 1:50  National Language-Specific Evaluation Sites for
Retrieval Systems and Interfaces.
Paul B. Kantor
OCLC Inc.  (  United States )

 1:50 - 2:10  A Technique to Improve the Precision  of  Full-Text
Database Search.
Gregory S. Hoppenstand, David K. Hsiao
Naval Postgraduate School  (  United States )

 2:10 - 2:30  Intelligent Search of Full-Text Databases.
Susan Gauch, John B. Smith
University of North Carolina  (  United States )

 2:30 - 2:50  Towards a Friendly Adaptable Information  Retrieval
System
Shih-Chio Chang, Anita Chow
GTE Laboratories Inc.  (  United States )

 2:50 - 3:10  Structure of Information in Full-Text Abstracts.
Elizabeth D. Liddy
Syracuse University  (  United States )

 3:10 - 4:00  BREAK AND PROTOTYPE DEMONSTRATIONS

          SESSION 4: OPTICAL STORAGE (Room 10-250)

Chair: Xavier Dalloz

 4:00 - 4:20  Inverted Signature Trees:  An Efficient Text Searching
Technique for Use with CD-ROMs.
Alan L. Tharp, Lorraine K. D. Cooper
North Carolina State University  (  United States )

 4:20 - 4:40  Integration of Write Once Optical Disk with Multimedia DBMS.
B. C. Ooi, A. D. Narasimhalu, I. F. Chang
National University of Singapore  (  Singapore )

 4:40 - 5:00   FLAME  - An Efficient Access  Method  for  Optical
Disks.
Uri Shani, Michael Rodeh, Alan J. Wecker, Ike Sagie
IBM Israel Scientific Center  (  Israel )

 5:00 - 5:20  An Object-Oriented Approach to Interactive Access to
Multimedia Databases on CD-ROM.
Daniel A. Menasce, Roberto Ierusalimschy
Pontifica Universidade Catolica do Rio de Janeiro  (  Brazil )

          SESSION 5: NOVELTIES (Room 4-270)

Chair: Patrick Mordini

 4:00 - 4:20  Transmedia Machine and its Keyword Search over Image
Texts.
Y. Tanaka, H. Torii
Hokkaido University  (  Japan )

 4:20 - 4:40  STARGUIDE: A Generator for Self Training.
Gerard Claes, O. Ounis, Z. Razoarivelo, P. Salembier, M.S.
Sridharan
BULL MTS  (  France )

 4:40 - 5:00   Voice Recognition in Database Building :  A  Model
Workstation.
R. David Nelson
Chemical Abstracts Service  (  United States )

 5:00 - 5:20  French Yellow Pages:  Access to the Nomenclature in
Natural Language.
Bernard Normier
ERLI ( France )

 5:20 - 6:30  PROTOTYPE DEMONSTRATIONS

                         GENERAL SESSION

          TUESDAY, MARCH 22, 1988

          SESSION 6: NATURAL LANGUAGE PROCESSING - Part 1
                      (Room 10-250)

Chair: Donald Walker

 8:30 - 8:50  Using English for Indexing and Retrieval
Boris Katz
Massachusetts Institute of Technology  (  United States )

 8:50 - 9:10  Inflections and Compounds:  Linguistic Problems  for
Automatic Indexing.
Harri Jappinen, J. Niemisto
SITRA Foundation  (  Finland )

 9:10 - 9:30  About Reformulation in Full-Text IRS
Fathi Debili, Pierre Radasoa, Christian Fluhr
Centre National de Recherche Scientifique  (  France )

 9:30 - 9:50   The  TINA Project:  Text Content Analysis  at  the
Central Research Laboratories at SIEMENS.
Christoph Schwarz
Siemens  (  Federal Republic of Germany )

 9:50 - 10:10  TEX-NAT:  A Tool for Indexing and Information Retrieval.
J. M. Lancel, N. Simonin
Cap Sogeti Innovation  (  France )

10:10 - 11:10  BREAK AND PROTOTYPE DEMONSTRATIONS

          SESSION 7: NATURAL LANGUAGE PROCESSING - Part 2

Chair: R. Marcus

11:10 - 11:30  Who Knows:  A System Based on Automatic Representation
of Semantic Structure.
Lynn A. Streeter, Karen E. Lochbaum
Bell Communications Research  (  United States )

11:30 - 11:50  Information Aids for Technological Decision-Making:
New  Data  Processing  and  Interrogation  for  Full-Text  Patent
Databases.
William A. Turner
Centre National de la Recherche Scientifique ( France )

11:50 - 12:10  AMI: An Intelligent Message Routing System.
C. Vansteelandt
CIMSA-SINTRA  (  France )

12:10 - 12:30  Conceptual Information Extraction and Retrieval from
Natural Language Input.
Lisa F. Rau
GE Company  (  United States )

12:30 - 2:00  LUNCH

                        PARALLEL SESSIONS

          SESSION 8: USER INTERFACES - Part 1  (Room 10-250)

Chair: Agnes Beriot

 2:00 - 2:20   Self-Structured Data Banks Semantic Integrity  and
Query Assistance Interface.
Patrick Mordini, Mostafa Jarmouni Idrissi, Anne-Marie Guimier-Sorbets
Ecole Nationale Superieure des Mines de Paris  (  France )

 2:20 - 2:40 The Electronic Directory Service.
Jean-Claude Marcovici
Direction Generale des Telecommunications  (  France )

 2:40 - 3:00  A Desktop Tool for Computer-Assisted Research in the
Humanities.
Christophe Schnell
Saint Gall Graduate School of Econ., Law, Business and Pub. Admin.
(  Switzerland )

          SESSION 9: NATURAL LANGUAGE PROCESSING (Room 4-270)

Chair: Ezra Black

 2:00 - 2:20  CONTEXT:  Natural Language Full-Text Retrieval System.
Zeev Menkes
Contahal Ltd.  (  Israel )

 2:20 - 2:40  Natural Language Data Bases on Small Computers.
Hans Paymans
Katholic University Brabant  (  Netherlands )

 2:40 - 3:00  An Application of Artificial Intelligence Techniques
to Automated Key-Wording.
James R. Driscoll
University of Central Florida  (  United States )

 3:00 - 4:00  BREAK AND PROTOTYPE DEMONSTRATIONS

          SESSION 10: USER INTERFACES - Part 2  (Room 10-250)

Chair: Anne-Marie Guimier-Sorbets

 4:00 - 4:20  User Interfaces to Scientific Databases.
Mary G. Reph, Blanche W. Meeson, Lola M. Olsen
Goddard Space Flight Center  (  United States )

 4:20 - 4:40  Image and Text Information Retrieval Systems in the
Registro de la Propiedad Industrial (Spain).
Luis Roberto Martinez Diez
Registro de la Propiedad Industrial  (  Spain )

 4:40 - 5:00  Hypermedia-Based Documentation System for the Office
Environment.
Fumiyasu Hirano
NEC Corporation  (  Japan )

 5:00 - 5:20  MenUSE for Medicine: End-User Browsing and Searching
of MEDLINE via The MeSH Thesaurus.
Arthur S. Pollitt
National Library of Medicine  (  United States )

 5:20 - 5:40  Conceptual Methods for Text Retrieval.
Jon Bing
University of Oslo  (  Norway )

          SESSION 11: AUTOMATIC THESAURUS CONSTRUCTION (Room 4-270)

Chair: Christian Fluhr

 4:00 - 4:20    Automatic  Thesaurus  Construction  by   Machine
Learning from Retrieval Sessions.
Ulrich Guntzer, G. Juttner, G. Seegmuller, F. Sarre
Technical University of Munich  (  Federal Republic of Germany )

 4:20 - 4:40  Automatic Construction of a Phrasal Thesaurus for an
Information Retrieval System from a Machine Readable Dictionary.
Martha Evens, T. Ahlswede, J. Anderson, J. Neises, S. Pin-
Ngern, J. Markowitz
Illinois Institute of Technology  (  United States )

 4:40 - 5:00  Looking for Needles in a Haystack or Locating Interesting
Collocational Expressions in Large Textual Databases.
Yaacov Choueka
Bar-Ilan Univ.  (  Israel )

 5:00 - 5:20  Browsing and Authoring Tools for a Unified  Medical
Language System.
Henryk Jan Komorowski, Robert A. Greenes, Charles Barr,
Edward Pattison-Gordon
Harvard Medical School  (  United States )

 5:20 - 5:40   The Informatics Calculus:  A Graphical  Functional
Query Language for Information Resources.
Gary Epstein
West Chester University  (  United States )

 5:40 - 6:30  PROTOTYPE DEMONSTRATIONS


                         GENERAL SESSION

          WEDNESDAY, MARCH 23, 1988

          SESSION 12: IR AND ARTIFICIAL INTELLIGENCE
                     (Room 10-250)

Chair: Yaacov Choueka

 8:30 - 8:50   Factors Affecting Interface Design  for  Full-Text
Retrieval.
Martha J. Gordon, Martin Dillon
OCLC, Inc.  (  United States )

 8:50 - 9:10   Semantics of User Interface for  Image  Retrieval:
Possibility Theory and Learning Techniques Applied on Two Prototypes.
Gilles Halin, N. Mouaddib, O. Foucaut, M. Crehange
Centre Nationale de Recherche Scientifique  (  France )

 9:10 - 9:30  A Logic Programming Approach to Full-Text  Database
Manipulation.
R. Marshall
Loyola College  (  United States )

 9:30 - 9:50  Implementing a Distributed Expert-Based Information
Retrieval System.
Edward A. Fox, Marybeth T. Weaver, Qi-Fan Chen, Robert K. France
Virginia  Polytechnic Institute and State University   (   United
States )

 9:50 - 10:50 BREAK AND PROTOTYPE DEMONSTRATIONS

          SESSION 13: DATA BASES FOR GRAPHICS & ANIMATION
                          (Room 10-250)

Chair: Ching-Chih Chen

10:50 - 11:10  A Picture Display Language for a Multimedia Database
Environment.
Gregory Y. Tang
National Taiwan University  (  Taiwan, Republic of China )

11:10 - 11:30 From Concepts to Film Sequences.
Gilles R. Bloch
Yale University  (  United States )

11:30 - 11:50  RAVI:  Representation for Audiovisual  Interactive
Applications.
Joseph Fromont, Francis Kretz, Pierre Louazel, Maryse Quere,
Christine Schwartz
Centre  Commun d'Etudes de Telediffusion et Telecommunications  (
France )

11:50 - 1:30    LUNCH

            SESSION 14: AUTOMATIC TRANSLATION (Room 10-250)

Chair: Gregory Grefenstette

 1:30 - 1:50  Universal Multilingual Information Interchange Systems.
Suban Krishnamoorthy, Ching Y. Suen
Framingham State College  (  United States )


 1:50 - 2:10   A Statistical Approach to French/English Translation.
P. Brown, J. Cocke, S. and V. Della Pietra, F. Jelinek, R.
Mercer, P. Roossin
IBM Research Division  (  United States )

 2:10 - 2:30  METEO: An Operational Translation System.
John Chandioux
John Chandioux Consultants Inc.  (  Canada )

 2:30 - 3:30  BREAK AND DEMONSTRATIONS

          SESSION 15: KNOWLEDGE BASED SYSTEMS
                     (Room 10-250)

Chair: Jean-Claude Bassano

 3:30 - 3:50  IRX:  An Information Retrieval System for Experimentation
and User Applications.
Donna Harman, Dennis Benson, Larry Fitzpatrick, Rand Huntzinger,
Charles Goldstein
National Library of Medicine  (  United States )

 3:50 - 4:10  GENNY: A Knowledge Based Text Generation System.
Mark T. Maybury
Cambridge University  (  United Kingdom )

 4:10 - 4:30  DOD Gateway Information System (DGIS) Common Command
Language; The Decision for Artificial Intelligence.
Allan D. Kuhn
Defense Technical Information Center  (  United States )

 4:30 - 4:50  Interactive Knowledge-Based Indexing:  The  MedIndEx
System.
Susanne M. Humphrey
National Library of Medicine  (  United States )

 4:50 - 5:10  Conceptual Information Retrieval from Full-Text.
Richard M. Tong, Lee A. Appelbaum
Advanced Decision Systems  (  United States )

          THURSDAY, MARCH 24, 1988

          SESSION 16: DATABASE CONSTRUCTION (Room 10-250)

Chair: Ernesto Garcia Camarero

 8:30 - 8:50   Automatic  Recognition  of  Sentence   Dependency
Structures.
Timothy Craven
The University of Western Ontario  (  Canada )

 8:50 - 9:10   Parsing Textual Structures from a Typewritten Author's Work.
Said Tazi
Universite des Sciences Sociales de Toulouse  (  France )

 9:10 - 9:30  Document Description and Analysis by Cuts.
Andreas Dengel, Gerhard Barth
University of Stuttgart  (  Federal Republic of Germany )

 9:30 - 9:50   Information Retrieval System  Manipulation  and  a
Posteriori Structuring.
Florence Sedes
Centre National de Recherche Scientifique  (  France )

 9:50 - 10:50  BREAK AND PROTOTYPE DEMONSTRATION

          SESSION 17: DOCUMENT AND IMAGE ANALYSIS

Chair: Jean Rohmer

 10:50 - 11:10 Adding Analysis Tools to Image Data Bases:  Facilitating
Research in Geography & Art History.
Howard Besser
University of California Berkeley  (  United States )

 11:10 - 11:30  Query Processing for Information Extraction  from
Image of Paper-Based Maps.
Mukesh Amlane, Rangachar Kasturi
Pennsylvania State University  (  United States )

11:30 - 11:50  A Segmentation Method of Color Document Images  for
Multimedia Content Retrieval Systems.
Yoshihiro Shima, T. Murakami, J. Higashino, Y. Nakano, H. Fujisawa
Hitachi Ltd.  (  Japan )

11:50 - 1:30 LUNCH

          SESSION 18: HARDWARE ARCHITECTURE FOR IR
                      (Room 10-250)

Chair: Pierre Asancheyev

 1:30 - 1:50  The Utah Retrieval System Architecture:  A Distributed
Information Retrieval System Using Workstations, Windows
and Specialized Backend Processors.
Lee A. Hollaar
University of Utah  (  United States )

 1:50 - 2:10  Adaptive Information Retrieval Using a Fine-Grained
Parallel Computer.
Robert N. Oddy, B. Balakrishnan
Syracuse University  (  United States )

 2:10 - 2:30   Browsing  Image Databases Via  Data  Analysis  and
Neural Networks.
Alain Lelu
Direction Generale des Telecommunications  (  France )

 2:30 - 2:50  Multilingual Information Retrieval Mechanism Using VLSI:
Requirements and Approaches for an Information Retrieval System
in the Computer-Aided Software Engineering and Document Processing
Environment.
Hiroaki Kitano
NEC Corporation  (  Japan )

 2:50 - 3:10   An Intelligent Backend System for Text  Processing
Applications.
Hans Diel, H. Schukat
IBM Laboratory Boeblingen  (  Federal Republic of Germany )

 3:10 - 3:30  Integrated Image Management on a Local Area Network.
M. Fantini, F. Prampolini, A. Turtur
IBM Italy  (  Italy )

 3:30  - 3:50   A Fast Machine for Prototyping String  Correction
Algorithms.
Patrice Frison, Dominique Lavenier
Institut  de  Recherche en Informatique  et  Systemes  Aleatoires
( France )

 3:50 - 4:00 BREAK

 4:00  CONCLUSIONS
J. Arsac, A. Bookstein, R. Marcus

          FRIDAY, MARCH 25, 1988

10:00 - 12:00 Visit to the MIT Media Laboratory.

                         DEMONSTRATIONS

         PROTOTYPE DEMONSTRATIONS

In conjunction with the presentation of papers at the conference,
prototypes  will  be  demonstrated in an exhibit  hall  near  the
conference room (lobby of bldg.  13).   These demonstrations will
take  place on a rotating basis during the breaks,  following  the
presentation  of the author's work.   A detailed program of these
demonstrations will be available at the conference.

          OPERATIONAL SYSTEMS DEMONSTRATIONS

21 operational systems will be displayed throughout the conference
in the Exhibit Hall (lobby of bldg. 13).  Some of these systems
will be demonstrated with an oral presentation (room 4-270).
A detailed program of these demonstrations will be available
at the conference.


REGISTRATION INFORMATION AND DIRECTIONS

PREREGISTRATION SHOULD BE RECEIVED PREFERABLY BY MARCH 4, 1988.
The registration fee for the conference is $275.00 if received
before March 4, and $325.00 after that date.  It includes admission
to all sessions, luncheons Monday through Wednesday, and the
Conference Proceedings.

     Registration will be conducted on Sunday, March 20, 1988
from 5 PM - 8 PM and Monday, March 21, 1988 beginning at 7:30 AM
in Room 10-280, opposite the conference meeting room (Room 10-250).
This area will be staffed from 8:00 AM until 6:00 PM each
day of the conference.  A telephone and message board will be
located in this area.  The conference telephone number is (617)
253-8864.  Participants may give out this number, and messages
will be posted in this room.

RECEPTIONS :  There will be a cocktail reception and dinner at the Boston
Museum of Science on Wednesday, March 23, 1988 at 6:30 PM.  The
fee is $30.00 and includes a visit to the West Wing of the
Museum.  There will be a number of guest speakers at this event.  To
assist in planning this event, we ask that you complete the RSVP
on the registration form.  The maximum capacity for this event is
200 persons.  Reservations will be handled on a first-come,
first-served basis.

TOURS :  A visit to the MIT Media Laboratory will be held Friday,
March 25, 1988 from 10 AM - 12 PM. Major areas of interest of the
lab are computer graphics,  3D imaging,  computer animation,  new
media  for  communication and computer music.   There will  be  a
general  presentation of the laboratory's work during the   first
hour.    The   second  hour  will  allow  for  exchanges  between
scientists and researchers of the lab and conferees.  There are a
limited number of spaces.



FOR INFORMATION AND REGISTRATION FORMS, CONTACT:

In the United States:
RIAO88-CID
MIT Conference Services Office
Room 7-111
77 Massachusetts Avenue
Cambridge, MA 02139 USA
Telephone: (1-617)253-1703

In Europe:
RIAO88-CID
36 bis rue Ballu
F-75009 PARIS
FRANCE
(33-1) 42 85 04 75

------------------------------

End of AIList Digest
********************

∂18-Jan-88  0405	LAWS@KL.SRI.COM 	AIList Digest   V6 #13 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 18 Jan 88  04:05:15 PST
Date: Sun 17 Jan 1988 21:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V6 #13
To: AIList@SRI.COM


AIList Digest            Monday, 18 Jan 1988       Volume 6 : Issue 13

Today's Topics:
  Queries - AI in Psychiatry & Parallelized LISP &
    Computerized Mice and Mazes,
  Robotics - Table Tennis-Playing Robot,
  Linguistics - Machine-Readable Dictionaries,
  Reviews - Spang Robinson Supercomputing V1/N3 & Intelligent Nanocomputers

----------------------------------------------------------------------

Date: Fri, 15 Jan 88 16:43:01 PST
From: George Thomsen <THOMSEN@SUMEX-AIM.Stanford.EDU>
Subject: AI applications in Psychiatry

I am interested in learning about recent AI applications in Psychiatry.
I would appreciate any references or suggestions.

Thanks,
  George Thomsen

------------------------------

Date: 16 Jan 88 17:39:30 GMT
From: jharris@pyr.gatech.edu  (JERRY L. HARRIS)
Subject: parallelized LISP


Hi!

     I am a graduate student at Georgia Tech and I am researching the
efficiency and execution of parallelized LISP programs. Has anyone out
there done any work with parallel LISP? I would appreciate any
bibliographical references or personal experience.

     Thanks in advance.

Jerry Harris
bitnet: jharris@pyr.gatech.edu

------------------------------

Date: Fri, 15 Jan 88 16:05:47 EST
From: russelr@radc-lonex.arpa
Subject: Computerized Mice and Mazes

Does anyone have any info as to designing/building computerized mice to run
mazes?
                           Bob Russel, russelr@radc-lonex.arpa

------------------------------

Date: Fri, 15 Jan 88 07:53 EST
From: "William E. Hamilton, Jr."
      <"RCSMPA::HAMILTON%gmr.com"@RELAY.CS.NET>
Subject: re: table tennis playing robot

I believe the table tennis playing robot work was done by Bell Labs at
one of their New Jersey locations (probably Murray Hill or Holmdel).

Bill Hamilton
GM Research Labs
Computer Science
Warren, MI
48090-9055

------------------------------

Date: Fri, 15 Jan 88 11:25:39 est
From: france@vtopus.cs.vt.edu (Robert France)
Subject: Machine-Readable Dictionari(es) for AI/NL

At Virginia Tech we have been working for a few years with dictionaries
available through the Oxford Text Archive.  The OTA is a depository
for machine-readable literary texts.  They have assembled by this point
a considerable body of both primary and secondary (lexicographic)
material, all of which is available for research use only at a nominal
fee.  Restrictions and a list of their materials can be obtained from

    The Oxford Text Archive
    Oxford University Computing Service
    13 Banbury Rd.
    Oxford  GREAT BRITAIN  OX2 6NN

    Telephone: Oxford (0865) 56721
    They are on the net but I'm afraid I've misplaced their Eaddress.

Most of OTA's material is available only in (some) typesetters' format,
and often the formatting conventions are no longer available.  They are
also archiving re-formatted versions as they become available, though,
so in some cases the data is fairly directly useable.  A case in point
is the following:

One of our early efforts with machine-readable dictionaries involved
translating the Collins English Dictionary from typesetters' format into
a set of files of Prolog facts.  These facts include, for the c. 80,000
headwords in the CED:  syllabification, variant spellings, abbreviations,
irregular inflections and morphological variants;  parts of speech and
semantic register information; "also called", "related adjective", and
"compare" cross-references; and the texts of definitions, sample uses
and usage notes.  We ignored only etymology and pronunciation.  A
syntactically correct copy of these facts (i.e., a set of facts in
Edinburgh standard syntax that can be consulted without blowing up a
Prolog compiler) is now on deposit at the Archive and available under
the same terms as the raw data.  We are working on a semantically
correct version (i.e., one where the data in the facts is in all cases
the data that ought to be there), and will deposit that when we have
it complete.
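
For readers who have not seen such fact bases, the general shape is a flat
file of Edinburgh-syntax facts, one predicate per kind of information, keyed
by headword.  The Python sketch below is illustrative only: the predicate
names, the sample entry, and the quoting helper are invented here and are not
the actual schema of the Virginia Tech files.

# Illustrative only: writing dictionary entries out as Edinburgh-syntax
# Prolog facts that a compiler can consult without choking.

def quote(atom):
    """Quote an atom for Edinburgh-syntax Prolog (internal quotes are doubled)."""
    return "'" + atom.replace("'", "''") + "'"

def entry_to_facts(headword, part_of_speech, syllables, senses):
    """Emit one fact per piece of information about a headword."""
    facts = [
        f"headword({quote(headword)}).",
        f"part_of_speech({quote(headword)}, {quote(part_of_speech)}).",
        f"syllabification({quote(headword)}, [{', '.join(quote(s) for s in syllables)}]).",
    ]
    for n, text in enumerate(senses, start=1):
        facts.append(f"definition({quote(headword)}, {n}, {quote(text)}).")
    return facts

for fact in entry_to_facts("dictionary", "noun", ["dic", "tion", "ar", "y"],
                           ["a book that lists the words of a language"]):
    print(fact)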

Currently, our group here, headed by E.A. Fox and Terry Nutter,
is coordinating with a group at the Illinois Institute of Technology
headed by Martha Evens to analyse the definition texts of this
and other M-R dictionaries and to integrate our findings into a
*VERY* large semantic net.  This product will also be made available
to the community for research use only.  Anyone desiring further
information on this project is invited to contact any of the principals.
Believe me, we have some stories to tell.

        Good luck,

            Robert France

Department of Computer Science
Virginia Tech
Blacksburg, VA 24061

france@vtopus    fox@vtopus    nutter@vtopus    csevans%iitvax


    "Believing people is a very bad habit.  I stopped years ago."

                (Miss Marple)

------------------------------

Date: Sat, 16 Jan 88 09:03:30 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Review - Spang Robinson Supercomputing V1/N3

Summary of The Spang Robinson Report on Supercomputing and Parallel Processing
        Volume 1 No. 3

Minisupercomputers, The Megaflop Boom  (Lead Story)

There have been a total of 850 minisupercomputer systems installed since 1981.
   (This article considers Floating Point Systems' general-purpose version of
    its array processor to be the first entrant to this market.)

Possible new entrants to this race are Supertek (a Cray-compatible product),
Cydrome's Dataflow-based system, Quad Design (a spin-off from Vitesse and
still hunting for venture capital), Gould Computer Systems Division,
Celerity, Harris (rumors only), Digital Equipment (project in force but
no announcements as to exact nature).  They estimate sales for this year
at between $275 million and $300 million.

50 percent of minisuper customers required DEC system compatibility.
10 percent required Cray compatibility, while more buyers were concerned
with Sun compatibility than with Cray compatibility.

The Consortium for Supercomputer Research estimates  a total
of 500 Cray-1 equivalent units by 1992 world-wide for academic research
and development.  Seventy percent of all applications were migrated from
VAX-class machines, 12 percent from workstations, 10 percent from mainframes
and eight percent from Cray machines.

_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+

Shorts:  Cray Research has been authorized to sell an X-MP14 to the Indian
Ministry of Science and Technology.

NEC announced the sale of a single-processor SX-2 to the Bangalore Science
Research Center.

MASSCOMP received a large contract to sell its units to the government,
presumably the National Security Agency.

Three PhD's and a PhD candidate are part of Parasoft, a consulting company
specializing in software for hypercube architectures.

The Consortium for Supercomputer Research has released Volume II of
the series, "An Analysis and Forecast of International Supercomputing."
It concludes that a supercomputer can cost its owner more than 50 million
dollars over the first five years, not counting applications.

Encore Computer Corporation is using VMARK's PICK compatible applications
environment on its Multimax line.

Amdahl's 1400E now runs at 1700 MFLOPS and thus has the highest
single-processor performance in the industry.

Ridge's new 5100 has 2MFLOP performance on the LINPACK benchmark and
14 million Whetstones per second.

MASSCOMP now runs 8 processors and thus is a 20 MIP machine.

Numerical Algorithms Corporation has released a version of its NAG
Fortran Library for the 3090 Vector facility.
↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*↑&*
This issue also has a table of minisupercomputers listing various
information.  Here are the number of installations:

Alliant        171
Celerity just shipping
Convex         200
ELXSI           80
FPS            365
Gould            6
Multiflow        5
Scientific      25
  Computing
Supertek    not shipping yet

------------------------------

Date: Fri, 15 Jan 88 09:46 EST
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: Intelligent Nanocomputers


I would like to recommend to the readers of ailist that they take a
look at the book >Engines of Creation< by K. Eric Drexler of MIT.
He presents in layman's terms the basics of nanotechnology, which
is the emerging field of molecular-sized machines, including computers.
(In the notes are references to technical works.)  Of particular
interest to AI folks is the chapter on AI and nanocomputers.  Let me
just relate one item to give you a hint of what it's about.  Drexler
makes the fascinating claim (no doubt many will vehemently disagree)
that to create a true artificial intelligence it is not necessary to
first understand intelligence.  All one has to do is simulate the
brain, which can be done given nanotechnology.  He suggests that a
complete hardware simulation of the brain can be done, synapse-for-
synapse and dendrite-for-dendrite, in the space of one cubic centimeter
(this figure is backed up in the notes).  Such a machine could then
just be allowed to run and should be able to accomplish a man-year of
work in ten seconds.  The unstated assumption is that a computer that
is isomorphic to the human brain will ipso facto be intelligent, and
presumably will be able to construct its own 'mental' models once power
is supplied.  No need to supply it with software.  (I may be
misinterpreting the book on this point.)  Interesting reading in any case.
He even predicts (!) in chapter one that the initial nanomachines will
be with us in ten to fifty years.  Foreword is by Minsky.  In paperback.

-Kurt Godden
 godden@gmr.com

------------------------------

End of AIList Digest
********************

∂22-Jan-88  0137	LAWS@KL.SRI.COM 	AIList V6 #14 - Seminars, Mathematical Logic Conference   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 22 Jan 88  01:37:11 PST
Date: Thu 21 Jan 1988 23:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #14 - Seminars, Mathematical Logic Conference
To: AIList@SRI.COM


AIList Digest            Friday, 22 Jan 1988       Volume 6 : Issue 14

Today's Topics:
  Seminars - DSRS: Distributed/Parallel Discrete-Event Simulation (SU) &
    Virtual Environment Systems (SU) &
    Solving Dynamic-Input Interpretation Problems (Unisys) &
    Generate, Test and Debug for Planning (AT&T) &
    Distributing Backward-Chaining Deductions (BBN),
  Course & Conference - Mathematical Logic (Bulgaria)

----------------------------------------------------------------------

Date: Fri, 15 Jan 88 12:07:20 PST
From: Bruce L Hitson <hitson@Pescadero.stanford.edu>
Subject: Seminar - DSRS: Distributed/Parallel Discrete-Event
         Simulation (SU)


      DISTRIBUTED/PARALLEL DISCRETE-EVENT SIMULATION:  A SURVEY
                  Marc Abrams - Stanford University

           DSRS: Distributed Systems Research Seminar (CS548)
                     Thursday, January 21, 4:15pm
                        Margaret Jacks Hall 352

Work on algorithms to execute discrete-event simulations on distributed and,
more recently, multiprocessor computer systems has increased over the last
decade.  Some simple ways of exploiting parallelism were quickly
proposed -- running independent sequential simulation replicas in parallel,
and distributing the support functions of a simulation.  However, progress
beyond this -- executing two or more events in parallel -- is only beginning
to be demonstrated.  This talk surveys the current state of research in the
field.  Proposed simulation algorithms are discussed.  Performance
characterizations of the algorithms obtained through theoretical analysis,
simulation, and measurement of implementations are summarized.  The talk
concludes with some encouraging directions for future research.
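
Of the "simple ways of exploiting parallelism" mentioned in the abstract,
running independent sequential replicas in parallel is the easiest to
picture.  The Python sketch below runs several replicas of a toy
single-server-queue simulation, one per worker process; the model and the
numbers are invented for illustration and say nothing about the surveyed
algorithms for parallelizing a single simulation.

# Running independent simulation replicas in parallel.  Each replica is an
# ordinary sequential discrete-event simulation of a single-server queue;
# only the replicas themselves run concurrently, one per worker process.

import heapq
import random
from multiprocessing import Pool

def one_replica(seed, num_customers=1000):
    """Sequential event-driven simulation of a FIFO single-server queue.
    Returns the mean time customers spend in the system."""
    rng = random.Random(seed)
    events = []                                  # heap of (time, kind, customer_id)
    arrivals, departures = {}, {}
    t = 0.0
    for cid in range(num_customers):             # schedule all arrivals
        t += rng.expovariate(1.0)                # exponential interarrival times
        heapq.heappush(events, (t, "arrive", cid))
    server_free_at = 0.0
    while events:
        time, kind, cid = heapq.heappop(events)
        if kind == "arrive":
            arrivals[cid] = time
            start = max(time, server_free_at)    # wait if the server is busy
            server_free_at = start + rng.expovariate(1.25)   # service time
            heapq.heappush(events, (server_free_at, "depart", cid))
        else:
            departures[cid] = time
    return sum(departures[c] - arrivals[c] for c in arrivals) / num_customers

if __name__ == "__main__":
    seeds = range(8)
    with Pool() as pool:                         # one replica per worker process
        results = pool.map(one_replica, seeds)
    print("per-replica mean time in system:", [round(r, 2) for r in results])
    print("estimate over replicas:", round(sum(results) / len(results), 2))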

------------------------------

Date: Fri, 15 Jan 88 13:45:24 PST
From: Susan Gere <susan@umunhum.stanford.edu>
Subject: Seminar - Virtual Environment Systems (SU)


                 EE380---Computer Systems Colloquium


Title:   Virtual Environment Display Systems

Speaker:  Scott Fisher
From:     NASA/Ames Research Center

Time:    4:15 p.m. on Wednesday, January 20
Place:   Skilling Auditorium

                              Abstract

The presentation will describe recent research trends toward development
of virtual interactive simulation environments for aerospace and commercial
applications.  The primary focus will be the Virtual Environment Workstation
(VIEW) developed at NASA Ames Research Center. This system provides a
multisensory, interactive display environment in which a user can virtually
explore a 360-degree synthesized or remotely sensed environment and can
viscerally interact with its components through use of head-mounted,
stereoscopic displays, tactile input gloves and 3D sound technology.
Applications for Telepresence, Telerobotics and Spatial Data Management
will also be discussed.

Scott S. Fisher is a Research Scientist in the Aerospace Human Factors
Research Division at NASA/Ames Research Center, Moffett Field, California.

------------------------------

Date: Mon, 18 Jan 88 22:43:31 EST
From: finin@PRC.Unisys.COM (Tim Finin)
Subject: Seminar - Solving Dynamic-Input Interpretation Problems
         (Unisys)


                                AI Seminar
                         UNISYS Knowledge Systems
                           Paoli Research Center
                                 Paoli PA


            SOLVING DYNAMIC-INPUT INTERPRETATION PROBLEMS

                         Kathleen D. Cebulka
                  Computer and Information Sciences
                        University of Delaware
                           Newark, DE 19716

Many AI problems can be viewed as interpretation problems which have the
common goal of producing a solution that "explains" a given input.  The
solution usually takes the form of a set of beliefs called a hypothesis.
Although a number of researchers have developed strategies for handling the
static case where the input is fixed, there are many problems where the
input is received dynamically in relatively small increments.  Usually the
problem solver is interacting with a user who expects a timely response
after every input, so it can not postpone forming a solution while it waits
for more complete information.  As a result, the problem solver must rely
on default reasoning and belief revision techniques since new evidence may
reveal that the current hypothesis is not the final answer.

This talk describes a characterization of the solution of dynamic-input
interpretation problems as a search through a hypothesis space.  A domain
independent algorithm, called the Hypothesize-Test-Revise algorithm, is
presented and contrasted with the traditional static approach.  An
advantage of this algorithm is a more efficient strategy for generating and
revising hypotheses in a dynamic environment.
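
A very small Python sketch of the hypothesize-test-revise loop described
above: the problem solver keeps a default hypothesis, answers after every
increment of input, and revises only when new evidence defeats the current
hypothesis.  The toy domain (covering a stream of numbers with an interval)
and all the names are invented for illustration; this is not Cebulka's
algorithm.

# A minimal Hypothesize-Test-Revise loop for dynamically arriving input.

def hypothesize(first_input):
    """Default hypothesis: the smallest interval explaining the first input."""
    return (first_input, first_input)

def test(hypothesis, new_input):
    """Does the current hypothesis already explain the new input?"""
    lo, hi = hypothesis
    return lo <= new_input <= hi

def revise(hypothesis, new_input):
    """Minimal revision: widen the interval just enough to cover the input."""
    lo, hi = hypothesis
    return (min(lo, new_input), max(hi, new_input))

def interpret(stream):
    """Process input as it arrives, answering after every increment."""
    hypothesis = None
    for x in stream:
        if hypothesis is None:
            hypothesis = hypothesize(x)
        elif not test(hypothesis, x):        # new evidence defeats the default
            hypothesis = revise(hypothesis, x)
        yield hypothesis                     # timely response after each input

for h in interpret([4, 7, 2, 5, 9]):
    print(h)        # (4, 4) (4, 7) (2, 7) (2, 7) (2, 9)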

                       2:00pm Wednesday, January 20
                            BIC Conference Room
                       Unisys Paoli Research Center
                        Route 252 and Central Ave.
                              Paoli PA 19311

     -- non-Unisys visitors who are interested in attending should --
     --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: Tue, 19 Jan  11:54:54 1988
From: dlm%research.att.com@RELAY.CS.NET
Subject: Seminar - Generate, Test and Debug for Planning (AT&T)


                 Generate, Test and Debug: a Paradigm for
              Solving Interpretation and Planning Problems

                               Reid Simmons
                  Massachusetts Institute of Technology

      January 21, 1988, AT&T Bell Labs-Murray Hill 3D-473, 11:00 am
       January 22, 1988, AT&T Bell Labs-Holmdel 4C-513, 10:30 am



                                 ABSTRACT

       We describe how the Generate, Test and Debug (GTD) paradigm
       solves interpretation and planning problems, and why its
       combination of associational and causal reasoning
       techniques enables it to achieve efficient and robust
       performance.  The Generator constructs hypotheses using
       domain dependent rules.  The Tester verifies hypotheses and
       supplies the Debugger with causal explanations if the test
       fails, and the Debugger uses domain-independent algorithms to
       repair hypotheses by analyzing the causal explanations and
       models of the domain.  The GTD paradigm has been implemented
       and tested in the domains of geologic interpretation, the
       blocks world, and the Tower of Hanoi problem.
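
A toy rendering of the GTD loop in Python, just to fix the three roles: the
generator makes a domain-dependent guess, the tester either accepts it or
returns a causal explanation of the failure, and the debugger repairs the
hypothesis using that explanation rather than blind search.  The ordering
domain and every name below are invented; this is not Simmons' implementation.

# A minimal Generate, Test and Debug loop over a toy precedence domain.

CONSTRAINTS = [("clear", "pick"), ("pick", "place")]   # a must precede b

def generate():
    """Domain-dependent associational guess (here: an arbitrary ordering)."""
    return ["place", "clear", "pick"]

def test(plan):
    """Return (True, None), or (False, causal explanation of the failure)."""
    for before, after in CONSTRAINTS:
        if plan.index(before) > plan.index(after):
            return False, (before, after)
    return True, None

def debug(plan, explanation):
    """Repair the hypothesis guided by the explanation: move the offending
    step to just before the step it must precede."""
    before, after = explanation
    plan = [step for step in plan if step != before]
    plan.insert(plan.index(after), before)
    return plan

def gtd():
    plan = generate()
    ok, why = test(plan)
    while not ok:
        plan = debug(plan, why)
        ok, why = test(plan)
    return plan

print(gtd())    # ['clear', 'pick', 'place']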

Sponsor: Ron Brachman

------------------------------

Date: Thu 21 Jan 88 10:16:21-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Distributing Backward-Chaining Deductions (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

    DISTRIBUTING BACKWARD-CHAINING DEDUCTIONS TO MULTIPLE PROCESSORS

                              Vineet Singh
                     Stanford University, and SPAR
                     (VSINGH@SPAR-20.SPAR.SLB.COM)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                     10:30 am, Friday January 29th


The talk presents PM, a parallel execution model for backward-chaining
deductions.  PM exploits more parallelism than other execution models
that use data-driven control and non-shared memory architectures.  The
talk also presents an application-independent, compile-time allocation
strategy for PM that is both fast and effective.  Effectiveness is
demonstrated by comparing speedups obtained from an implementation of
the allocator to an unreachable upper bound and speedups obtained from
random allocations.  The resource allocator uses probabilistic
techniques to predict the amount of communication and the parallelism
profile; this should be useful for other allocation strategies as well.
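
The compile-time allocation idea can be pictured with a small greedy
placement: clauses are assigned to processors using predicted inference costs
and a predicted call graph, balancing load while keeping likely callees
local.  Everything in the sketch below (the numbers, the scoring rule, the
names) is invented for illustration and is not the PM allocator itself.

# A rough sketch of compile-time clause allocation for a backward-chaining
# program: place expensive clauses first, preferring lightly loaded
# processors that already hold the clauses this one is predicted to call.

PREDICTED_COST = {            # estimated inference work per clause
    "ancestor/1": 5, "ancestor/2": 9, "parent/1": 2, "parent/2": 2,
    "sibling/1": 4, "married/1": 1,
}
PREDICTED_CALLS = {           # clause -> clauses it is likely to invoke
    "ancestor/1": ["parent/1", "parent/2"],
    "ancestor/2": ["parent/1", "parent/2", "ancestor/1"],
    "sibling/1":  ["parent/1", "parent/2"],
    "married/1":  [],
    "parent/1":   [], "parent/2": [],
}

def allocate(num_processors):
    load = [0] * num_processors
    placement = {}
    # Place the most expensive clauses first.
    for clause in sorted(PREDICTED_COST, key=PREDICTED_COST.get, reverse=True):
        def score(p):
            # Lower load is better; clauses this one calls being local is a bonus.
            locality = sum(1 for callee in PREDICTED_CALLS[clause]
                           if placement.get(callee) == p)
            return load[p] - 2 * locality
        best = min(range(num_processors), key=score)
        placement[clause] = best
        load[best] += PREDICTED_COST[clause]
    return placement, load

placement, load = allocate(3)
print(placement)
print(load)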

------------------------------

Date: 12 Jan 88 21:04:23 GMT
From: mcvax!inria!geocub!farinas@uunet.uu.net  (Luis Farinas)
Subject: Course & Conference - Mathematical Logic (Bulgaria)


                CURRENT ANNOUNCEMENT AND CALL FOR PAPERS
                ----------------------------------------

        SUMMER SCHOOL & CONFERENCE ON MATHEMATICAL LOGIC
        honourably dedicated to the 90th anniversary of Arend Heyting

                September 13-23, 1988

                Chaika near Varna (BULGARIA)

                ORGANIZED BY:
        Sofia University, Bulgarian Academy of Sciences

TOPICS of the meeting:
        * recursion theory
        * modal and non-classical logics
        * intuitionism and constructivism
        * related applications to computer science
        * life and work of Arend Heyting (1898-1980)

INVITED LECTURERS:
        M.Cresswell     V.Lifschitz     G. Sacks
        D.de Jongh      L.Maksimova     D.Skordev
        A. Ditchev      G.Mints         G.Takeuti
        D.Harel         A.Muchnik       A.Troelstra
        M.Kanovich      H.Nishimura     D.van Dalen
        A.Kechris       D.Normann       A.Visser
        B.Kushner       H.Ono           S.Wainer

Submissions to the conference are invited in the above areas and will be
evaluated by the Programme Committee. We shall expect 5 copies of a draft full
paper (in English) of no more than 15 double spaced pages, containing original
contributions. Papers should reach the PC chairman by the deadline below,
accompanied by a one page abstract.

DEADLINES :
                March 15: Submission
                June 15: Notification
                August 15: Final version for publication in the proceedings

ORGANIZATION COMMITTEE :
        Valentin Goranko : local arrangements
        Lyubomir Yvanov : Treasurer
        Solomon Passy: Secretary
        Petio Petkov: Chairman
        Tinko Tinchev

Please send all correspondence to the appropriate member of the organization
committee.
                HEYTING'88
                Sector of Logic
                Mathematics Faculty
                boul. Anton Ivanov 5
                SOFIA 1126
                BULGARIA

------------------------------

End of AIList Digest
********************

∂22-Jan-88  0401	LAWS@KL.SRI.COM 	AIList V6 #15 - Selfridge, Mazes, Prolog, BUILD, Ping-Pong
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 22 Jan 88  04:01:23 PST
Date: Thu 21 Jan 1988 23:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #15 - Selfridge, Mazes, Prolog, BUILD, Ping-Pong
To: AIList@SRI.COM


AIList Digest            Friday, 22 Jan 1988       Volume 6 : Issue 15

Today's Topics:
  Queries - Planning for Games & Network Access to Knowledge Systems &
    Dik Gregory & Goldworks and Nexpert & Object and Frame Languages &
    Computer Aided Teaching in Bioscience & Qualitative Economics &
    PRECARN Canadian AI RFP,
  Bindings - Mallory Selfridge,
  Psychology & Learning - Computerized Rats and Mazes,
  AI Tools - PROLOG for an IBM 3090 under CMS & Scott Fahlman's BUILD,
  Robotics - Ping-Pong Playing Robot

----------------------------------------------------------------------

Date: 18 Jan 88 04:33:20 GMT
From: moran@YALE-ZOO.ARPA  (William L. Moran Jr.)
Subject: Planning for games

Well, since I last posted asking for references about planning as it
relates to the playing of games other than chess, I have gotten about
twenty five responses. However, they have all been of the form - "yes
I'm interested too; please summarize." Not too helpful or encouraging.
One person did suggest looking at autorogue from CMU; does anyone have
any info about this? A Tech report number would be most helpful. Thanks.

                          William L. Moran Jr.
moran@{yale.arpa, cs.yale.edu, yalecs.bitnet}  ...{ihnp4!hsi,decvax}!yale!moran

------------------------------

Date: Mon, 18 Jan 88 17:41:29 PST
From: lls@Sun.COM (Lynn Snyder)
Subject: Network Access to Knowledge Systems

        Can anyone give me a lead to people working on distributed or network
access to expert systems and knowledge bases?  I am interested
in the interface between the remote system and the expert system, over a
network, particularly if multiple users can access the knowledge system.
        I will compile and circulate any responses I get to this query.
Thanks. - Lynn Snyder

------------------------------

Date: Tue, 19 Jan 88 09:48:54 EST
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: binding: Dik Gregory?

Anyone know where Dik Gregory is?  He was at ARI a couple of years ago.

------------------------------

Date: 19 Jan 88 17:14:20 GMT
From: decvax!dartvax!creare!gda@ucbvax.Berkeley.EDU  (Gray Abbott)
Subject: Goldworks and Nexpert

We  are considering several expert  system tools on  several different
machines.   Two of these  tools  are Gold Hill's Goldworks  and Neuron
Data's Nexpert.  Goldworks runs on ATs and 386s; Nexpert runs  on ATs,
Macintosh, and uVax.  Rumors are that both will soon run on Suns.

Does anyone  have  any experience  with these  tools?  What  are  your
comments?  How is their speed?  Have you run LARGE expert systems with
them?  How  much RAM do you  really need?  How much disk  space?  What
happens  when you don't  have enough?  Have  you interfaced with other
programs?  How well did that work?  Both  of these tools are much less
expensive than other tools that appear to offer similar features (e.g.
ART,  KEE, KnowledgeCraft, S1).   Why  is  this possible?   Is there a
catch (that  is,  what's  the    downside)?     Goldworks includes  an
underlying LISP; Nexpert allows you to interface with C, FORTRAN, etc.
Is this a problem for Nexpert?

Please reply by mail and I will summarize to the net.

                        ...dartvax!creare!gda
                        gda%creare%dartmouth.edu

------------------------------

Date: 19 Jan 88 17:28:26 GMT
From: mcvax!lifia!gb@uunet.UU.NET (Guilherme Bittencourt)
Reply-to: mcvax!lifia!gb@uunet.UU.NET (Guilherme Bittencourt)
Subject: Object and frame languages wanted


        I am searching for an object oriented extension and/or a
frame extension of Common Lisp (preference for KCL and public domain).

        I would like to use these extensions for the implementation of
a system to aid in the design of knowledge-based systems.

        Any hints, literature references, or even source code are VERY
welcome. If there is interest, I will summarize the responses to the net.

        Thanks in advance.

--
 Guilherme BITTENCOURT          +-----+         gb@lifia.imag.fr
 L.I.F.I.A.                     | <0> |
 46, Avenue Felix Viallet       +-----+
 38031 GRENOBLE Cedex                              (33) 76574668

------------------------------

Date: Tue, 19 Jan 88 10:50:06 EST
From: Deba Patnaik <DEBA%UMDC.BITNET@CUNYVM.CUNY.EDU>
Subject: Computer Aided Teaching in Bioscience area ?

We need information on anyone who is doing research or has Computer Aided
Teaching Software for the Bioscience area. I will appreciate your help. Thanks,

deba@umdc.umd.edu

------------------------------

Date: Wed, 20 Jan 88 14:07:23 EST
From: Nicky Ranganathan <nicky@vx2.GBA.NYU.EDU>
Subject: Request for references


I would much appreciate any references/pointers to (not necessarily AI)
literature dealing with qualitative modeling of domains such as
economics or related areas. Please reply to me. If anyone else is
interested, I will e-mail. Thanks,
--Nicky Ranganathan
------------------------------------------------------------------------------
Nicky Ranganathan         Arpa: nicky@vx2.gba.nyu.edu
Information Systems Dept. UUCP: ...{allegra,rocky}!cmcl2!vx2.gba.nyu.edu!nicky
New York University       Bitnet: pranganathan@nybvx1

------------------------------

Date: 21 Jan 88 13:28:00 EST
From: Daniel (D.R.) Zlatin <DANIEL%BNR.BITNET@CUNYVM.CUNY.EDU>
Subject: RFP available


PRECARN is a not-for-profit consortium of over 30 Canadian
corporations.  PRECARN (sometimes expanded as Pre-Competitive
Applied Research Network) has as its overriding objective the
increase of industrial competitiveness in Canada.  This will be
accomplished by funding applied research projects to raise awareness
of, and exploitation of, artificial intelligence and robotics
technologies within industry.

PRECARN is now commencing the process that will identify, and
eventually fund, the research and development activities in artificial
intelligence and robotics that hold the greatest promise for eventual
exploitation by the member corporations.  PRECARN is inviting
applications for the funding of feasibility studies.  Applications
to this RFP may come from:

a) Canadian universities or colleges;

b) Canadian corporations or subsidiaries of foreign corporations
   having, or actively moving towards, a significant research and
   development operation in Canada;

c) Crown corporations with an arm's-length relationship with the
   government.

For more information, a complete copy of the RFP, and application
forms, contact:
Mail:   PRECARN Associates Inc.,
        30 Colonnade Road,
        Suite 300,
        Nepean, Ontario
        K2E  7J6

Phone:  (613) 727-9576

Email:  Sorry, no email address.

------------------------------

Date: Sun, 17 Jan 88  17:20 EST
From: WURST%UCONNVM.BITNET@CUNYVM.CUNY.EDU
Subject: Address request...


Walter Rolandi (ROLANDI@GOLLUM.UUCP) writes:
>Does anyone know the email (or other mail) address of M. Selfridge of:
>
>Selfridge, M. 1980. A Process Model of Language Acquisition.  Ph.D.
>        diss., Technical Report, 172, Dept of Computer Science, Yale
>        University.


     Mallory Selfridge is currently a member of the faculty of the
     University of Connecticut.  He may be reached at the following
     address(es):

     U.S. Mail:
          Computer Science and Engineering, U-155
          University of Connecticut
          Storrs, CT   06268

     CSNET:
          MAL@UCONN.CSNET

----------
Karl R. Wurst
Computer Science and Engineering, U-155
University of Connecticut
Storrs, CT   06268

BITNET:  WURST@UCONNVM.BITNET   'Things fall apart.  It's scientific.'
CSNET :  WURST@UCONN.CSNET                           - David Byrne

------------------------------

Date: 18 Jan 88 12:17:54 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (rolandi)
Subject: computerized rats and mazes


Regarding a recent request for computerized rats and rat mazes, you might
find something of interest in:

Steinhauer, Gene D. (1986) Artificial Behavior: Computer Simulation of
        Psychological Processes. Englewood Cliffs, NJ: Prentice-Hall.

Several attempts to contact the author were unsuccessful however.  Good
luck.

walter rolandi
rolandi@gollum.UUCP ()
NCR Advanced Systems, Columbia, SC
u.s.carolina dept. of psychology and linguistics

------------------------------

Date: Tue, 19 Jan 88 16:09:36 EST
From: "Thomson, Steve" <STEVE@UKCC.BITNET>
Reply-to: AIList@Stripe.SRI.COM
Subject: PROLOG for an IBM 3090 under CMS

    Does anyone have direct or indirect knowledge that they would be
willing to share about implementations of PROLOG under CMS?  We would
try to install it on a 3090-300VF under CMS release 5.
    Do any implementations use the vector facility? Have plans to use
parallelism (when it becomes available), extended architecture (ditto)?
    I admit my knowledge on this is bounded above at zero.
                                             Thankyou very much.
                                             STEVE@UKCC.BITNET

------------------------------

Date: Thu, 21 Jan 88 10:40:26 IST
From: Oren Regev <CERRLOR%TECHNION.BITNET@CNUCE-VM.ARPA>
Subject: Re: PROLOG for an IBM 3090 under CMS

Dear Steve
   We do use Prolog under CMS here at the Technion.  If you specify your
subjects of interest, I shall try to help you.
                Oren Regev

------------------------------

Date: 20 Jan 88 03:03:05 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Scott Fahlman and BUILD


     Last year I brought Fahlman's Build back to life, converting it from
Conniver to Common Lisp, and used it as a base for my own work in
robot control.  Interested researchers can contact me for more
information.

                                        John Nagle

------------------------------

Date: 17 Jan 88 09:27:36 GMT
From: pv@OHIO-STATE.ARPA  (Vuorimaa Petri Kalevi)
Subject: Re: query: table-tennis-playing robot?

My mail bounced back, so here we go!

To: hucka@caen.engin.umich.edu
Subject: Re: query: table-tennis-playing robot?

hucka@caen.engin.umich.edu (Michael Hucka):
>
> Within the last half-year I read an article which described a successful
> robotic device capable of playing table-tennis.  Unfortunately I can't
> remember where I came across it.  Has anyone else read about this or know
> where I can get more information about it?  I am interested in learning
> about the research and technical issues the system's creators had to address.
>

You can find a short description of that particular robot in Byte
magazine (July 1987, page 37).  Here in Europe we have more robots
(about 5).  Our robots (Finland, England, Belgium and Switzerland) are
not as good as that one, but they are much smaller.
So, we can arrange -- and have arranged -- competitions between them!

If you want more information about European ping-pong robots, mail me!
--
Petri Vuorimaa     Tampere University of Technology / Computer Systems Lab
pv@tut.FI          PO. BOX. 527, 33101 Tampere, Finland

------------------------------

Date: Sat 16 Jan 88 20:39:44-EST
From: John R. Kender <KENDER@CS.COLUMBIA.EDU>
Subject: Ping-pong playing robot

The robot is the creation of Russ Anderson at AT&T Bell Labs at Holmdel, NJ.
He has a brief video tape showing it in action.  Notable are its use
of a custom VLSI chip for calculating moments of blobs in binary images,
custom modification of a PUMA arm, and the real-time handling of the
blurred ellipsoidal imagery that the ball presents to standard automation
cameras.  Some years ago there was talk about a robot ping-pong match,
but I am unaware of any opponent for it.

------------------------------

Date: 21 Jan 88 14:43:33 GMT
From: Rob Elkins <relkins@vax1.acs.udel.edu>
Reply-to: Rob Elkins <relkins@vax1.acs.udel.edu>
Subject: Re: table tennis playing robot


In article <8801180618.AA08105@ucbvax.Berkeley.EDU> RCSMPA::HAMILTON@gmr.COM
("William E. Hamilton, Jr.") writes:
>I believe the table tennis playing robot work was done by Bell Labs at
>one of their New Jersey locations (probably Murray Hill or Holmdel).

I remember reading in Levy's "Hackers" about a table tennis playing robot
that was built at MIT during the 60's.  Whether or not this is hearsay,
I'm not sure.  I do remember reading that the robot mistook Professor
Minsky for a ping-pong ball and tried to take a swat at him.  This could
be hearsay as well.

Rob Elkins
--
ARPA:   relkins@vax1.acs.udel.edu
BITNET: FFO04688 AT ACSVM

Live Long and Prosper!

------------------------------

End of AIList Digest
********************

∂22-Jan-88  0619	LAWS@KL.SRI.COM 	AIList V6 #16 - Neural Nets, Psychiatry, Psychology, Nanocomputers  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 22 Jan 88  06:18:55 PST
Date: Thu 21 Jan 1988 23:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #16 - Neural Nets, Psychiatry, Psychology, Nanocomputers
To: AIList@SRI.COM


AIList Digest            Friday, 22 Jan 1988       Volume 6 : Issue 16

Today's Topics:
  Bibliographies - Annotated Connectionist Bibliography &
    AI Applications in Psychiatry,
  Psychology - comp.cog-sci/sci.psych & Biological Models,
  Linguistics - Grammar Checking,
  Humor - Intelligent Nanocomputers

----------------------------------------------------------------------

Date: Thu, 21 Jan 88 14:42:34 EST
From: MaryAnne Fox <mf01%gte-labs.csnet@RELAY.CS.NET>
Subject: Tech Report -- Annotated Connectionist Bibliography


        -------------------------------------------------------------------

                        SELECTED BIBLIOGRAPHY ON CONNECTIONISM

                                 Oliver G. Selfridge
                                  Richard S. Sutton
                                 Charles W. Anderson

                                      GTE Labs


        An annotated bibliography of 38 connectionist works of historical
        or current interest.

        -------------------------------------------------------------------


        For copies, reply to this message with your USmail address, or send
        to: Mary Anne Fox
            GTE Labs  MS-44
            Waltham, MA  02254
            mf01@GTE-Labs.csNet

------------------------------

Date: Mon, 18 Jan 88 22:19:50-1000
From: todd@uhccux.uhcc.hawaii.edu (The Perplexed Wiz)
Subject: AI Applications in Psychiatry

>AIList Digest            Monday, 18 Jan 1988       Volume 6 : Issue 13
>From: George Thomsen <THOMSEN@SUMEX-AIM.Stanford.EDU>
>I am interested in learning about recent AI applications in Psychiatry.
>I would appreciate any references or suggestions.

I thought other readers of AIList might be interested in a short reference
list on this topic also.  So, here it is.  If anyone has other recent
references, please let me know.  I am in the final stages of cleaning up my
dissertation and a few more appropriate references wouldn't hurt :-)

Todd Ogasawara, U. of Hawaii Faculty Development Program
UUCP:           {ihnp4,uunet,ucbvax,dcdwest}!sdcsvax!nosc!uhccux!todd
ARPA:           uhccux!todd@nosc.MIL            BITNET: todd@uhccux
INTERNET:       todd@uhccux.UHCC.HAWAII.EDU

--AI/Computer diagnostic aids/Psychiatry/Psychology References--

Adams, K.M., Kvale, V.I., & Keegan, J.F. (1984).  Relative accuracy of
     three automated systems for neuropsychological interpretation.
     Journal of Clinical Neuropsychology, 6, 413-431.

Erdman, H.P., Klein, M.H., & Greist, J.H. (1985).  Direct patient
     computer interviewing.  Journal of Consulting and Clinical
     Psychology, 53, 760-773.

Fischer, M. (1974).  Development and validity of a computerized
     method for diagnoses of functional psychoses (DIAX).  Acta
     Psychiatrica Scandinavica, 50, 243-288.

Fowler, R.D. (1985).  Landmarks in computer-assisted psychological
     assessment.  Journal of Consulting and Clinical Psychology, 53,
     748-759.

Hand, D.J. (1985).  Artificial intelligence and psychiatry.
     Cambridge: Cambridge University Press.

Hardt, S.L., & MacFadden, D.H. (1987).  Computer assisted psychiatric
     diagnosis:  Experiments in software design.  Computers in
     Biology and Medicine, 17, 229-237.

Jelliffe, R.W. (1985).  Managing patients by computer.  Research
     Resources Reporter, ??, 8-12.

Reilly, K.D., Freese, M.R., & Rowe, P.B. Jr. (1984).  Computer
     simulation modeling of abnormal behavior: A program approach.
     Behavioral Science, 29, 186-211.

Rome, H.P. (1985).  Computers and psychiatry: An historical
     perspective.  Psychiatric Annals, 15, 519-523.

Servan-Schreiber, D. (1986).  Artificial intelligence and psychiatry.
     The Journal of Nervous and Mental Disease, 174, 191-202.

------------------------------

Date: 21 Jan 88 03:54:12 GMT
From: tektronix!sequent!mntgfx!msellers@ucbvax.Berkeley.EDU  (Mike
      Sellers)
Subject: Re: Another vote for comp.cog-sci (was Re: time for
         sci.psych???)

In article <2990@arthur.cs.purdue.edu>, spaf@cs.purdue.EDU (Gene Spafford)
writes:
> There is already a "comp.cog-eng" for cognitive science and engineering.
> Why don't you use the groups already in existence rather than
> ask for a new one?
>
> This is an example of why we want to limit the number of newsgroups:
> users don't realize what groups already exist when there are so many.
> --
> Gene Spafford

Sorry Gene, but in this case I realize quite well what groups exist that
might be a good home for cognitive science discussions.  Comp.cog-eng is
described as being the home for discussions on "cognitive engineering",
which many people take to be the same as "human factors" or ergonomics.
This is very different from the broad-based synthetic discussions that
tend to occur when "cognitive science" is the topic.

Comp.ai and comp.cog-eng have both been used to some degree in the past for
cognitive science discussions.  In both cases someone almost always posts
or e-mails a message requesting that the cognitive science folks please stop
diluting the discussion.  Thus the call for a separate group.

I would like a newsgroup where discussions regarding cognitive science could
be fostered; using comp.cog-eng is fine with me, but other people may disagree.
In general, I think the definition of 'cognitive engineering' is drifting away
from ergonomics and toward the operational parts of cognitive science -- more
actions and less theory.  This may be a result of cognitive science enfolding
those parts of ergonomics that deal with intelligent HCI into itself; at any
rate, that's my rationale for using comp.cog-eng for this purpose.  Though
it's a bit like using [mythical] comp.expert-systems for general AI discussions.

Any other votes?


--
Mike Sellers                ...!tektronix!sequent!mntgfx!msellers
Mentor Graphics Corp., Electronic Packaging and Analysis Division
"The goal of AI is to take the meaningful and make it meaningless."
                                  -- An AI prof, referring to LISP

------------------------------

Date: 19 Jan 88 01:24:55 GMT
From: jbm@AMES-AURORA.ARPA  (Jeffrey Mulligan)
Subject: Re: time for sci.psych???

From article <2476@cup.portal.com>, by Hoosier@cup.portal.com:
>
>
> Another oxymoron:    scientific psychology

Sounds like you might be trying to start a war.

One of the pioneers of sensory psychology (my area) was
Hermann von Helmholtz.  He contributed two books:
"Treatise on Physiological Optics", which deals with vision,
and "The sensation of tone," which deals with hearing.
(His scientific publications probably number in the hundreds.)

Helmholtz' training was in medicine, but he is also noted for major
contributions to physics.  It is sometimes said that 90% of what
is known in the field today was known by Helmholtz; although some of
his ideas have not held up, no one would suggest that he was not
scientific.

Sensory psychology is an interdisciplinary area, combining with
physics (optics in the case of vision) and physiology.  Investigations
which make inferences about the underlying mechanisms on the basis
of a behavioral response are generally classified as "psychology."

If you want to evaluate the scientific content of the field, why
don't you look into a few current journals, like

Journal of the Optical Society of America
Journal of the Acoustical Society of America
Vision Research
Perception
Perception & Psychophysics


--

        Jeff Mulligan (jbm@ames-aurora.arpa)
        NASA/Ames Research Ctr., Mail Stop 239-3, Moffett Field CA, 94035
        (415) 694-5150

------------------------------

Date: 19 Jan 88 16:22:44 GMT
From: gls@odyssey.att.com (g.l.sicherman)
Subject: biological models?


While I agree with R. M. Wallace's observation that meaningful
biological modelling must consider organic requirements, I think
his description of these requirements needs refining.  He proposes
four "basic" requirements: greed, fear, pain, and pleasure.
This is a mixed bag.  Fear is an emotion, pain and pleasure are
responses, and greed, as Wallace uses the term, seems to describe
wants that are impelled by needs and may persist beyond them.

From our personal experience of pain and pleasure, how can we abstract
them?  Pain, for instance, tells us that we are hurt and suggests (by
its rise or fall) what we can do to help mend the hurt or avoid
aggravating it.  Like pleasure, it serves us as an internal function.
Anything else that serves a being in like wise can be the counterpart of
pain in ourselves--or we may choose to call it "pain," to identify it
with what we experience.  This identification is artificial, but then
so is the identification of my pain with yours.

But I would not go so far as to call pain a requirement for all beings.
A species prolific enough to outbreed attrition and predation can ignore
injury.  Of course, we might not find such a species interesting enough
to model!

As to genuine survival requirements, computers already have them.  A
computer must carry out its instructions faithfully or its users will
have it destroyed.  That is, the computer's survival depends on the
complicated and sometimes undefinable task of satisfying human beings.
Take away the users, and the computer ceases to exist as such; it loses
its meaning.  But this is sidetracking us into cybernetics....
--
Col. G. L. Sicherman
...!ihnp4!odyssey!gls

------------------------------

Date: 21 Jan 88 02:50:00 GMT
From: kadie@b.cs.uiuc.edu
Subject: Grammar Checking


Remember last year's heated discussion about grammar and
style checkers? Well here is a little data (some few data?).

I recently had a 13-page double-spaced document proofread
by a person (my advisor).  He suggested about 22 simple grammar corrections;
I made every correction.  Then I fed the same document to RIGHTWRITER,
a commercial program for the IBM AT.  It made 159 suggestions;
I took 9 of them.  The person and the program made only one identical
suggestion.

So:
* Humans are much better proofreaders than (today's) programs.
* Programs might still be worth using, since they may find errors
  that a human misses and since they are convenient.


Also:
The most important comments a human makes are about the understandability
of the document.  In a sense, the human is telling how well
the document "executes."  Since the program only looks at syntax,
it can only guess about this part.


Carl Kadie
Inductive Learning Group
University of Illinois at Urbana-Champaign
UUCP: {ihnp4,pur-ee,convex}!uiucdcs!kadie
CSNET: kadie@UIUC.CSNET
ARPA: kadie@M.CS.UIUC.EDU (kadie@UIUC.ARPA)

------------------------------

Date: 19 Jan 88 18:43:43 GMT
From: umix!umich!eecs.umich.edu!dwt@uunet.UU.NET (David West)
Reply-to: umix!umich!eecs.umich.edu!dwt@uunet.UU.NET (David West)
Subject: Re: Intelligent Nanocomputers


In article <8801180618.AA08132@ucbvax.Berkeley.EDU> GODDEN@gmr.COM writes:
> [...] the book >Engines of Creation< by K. Eric Drexler of MIT. [...]
>it is not necessary to first understand intelligence.  All one has to do is
>simulate the brain [...] a complete hardware simulation of the brain can be
>done [...] in the space of one cubic centimeter [...] such a machine could then
>just be allowed to run and should be able to accomplish a man-year of
>work in ten seconds.

The breathtaking simplicity of the idea is awesome.  Of course, some
technological advances will be necessary for its realization, but note that
to attain them, it is not necessary to understand technology ... all one has
to do is simulate its development.  A complete hardware simulation of the
U.S. technological enterprise can be done in the space of one cubic meter
(see appendix A) ... such a machine could then just be allowed to run, and
should be able to accomplish a century of progress in one hour.

------------------------------

End of AIList Digest
********************

∂25-Jan-88  0119	LAWS@KL.SRI.COM 	AIList V6 #17 - Prolog, CommonLoops, Ball Catching, Nano-Engineering
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 25 Jan 88  01:19:25 PST
Date: Sun 24 Jan 1988 23:07-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #17 - Prolog, CommonLoops, Ball Catching, Nano-Engineering
To: AIList@SRI.COM


AIList Digest            Monday, 25 Jan 1988       Volume 6 : Issue 17

Today's Topics:
  Queries - Eye and Brain Reference &
    Software Development and Expert Systems &
    Joshua for Symbolics & IXL Machine Learning System,
  AI Tools - PROLOG for an IBM 3090 under CMS & CommonLoops,
  History - Ball-Catching Robot,
  Comments - Nano-Engineering & CogSci List

----------------------------------------------------------------------

Date: 19 Jan 88 09:21:52 GMT
From: prlb2!ronse%mcvax@uunet.UU.NET (Christian Ronse)
Subject: Re: Evolution of Intelligence

>   [For a more elaborate development of this viewpoint see the
>   recent book by Fischler and Firschein on The Eye and the Brain.
>   A major premise is that perception is a goal of AI (or of any
>   intelligence) rather than just a preprocessing stage.  -- KIL]

Could I have the complete reference, please?

Christian Ronse         maldoror@prlb2.UUCP
{uunet|philabs|mcvax|...}!prlb2!{maldoror|ronse}


  [M.A. Fischler and O. Firschein, Intelligence: The Eye, the Brain,
  and the Computer.  Addison-Wesley Publishing Co., Reading, Massachusetts,
  1987.  331 pp.  -- KIL]

------------------------------

Date: Fri, 22 Jan 88 10:10:23 GMT
From: "Simon.Ross" <sross@Cs.Ucl.AC.UK>
Subject: Software Development and Expert Systems

Request for information please:

I am looking into Performance Measures for Knowledge-Based Systems
(including Expert Systems).  In particular, I am interested in which software
development techniques, measures, etc., for conventional software may be
useful for knowledge-based software.  Furthermore, what are the special
problems of evaluating, testing and measuring the performance of
knowledge-based systems that make conventional tools and methods inappropriate?

Any information regarding this subject (even if it is informed
anecdotes) will be gratefully received.

Depending on the response I may get back to you on this.

Simon Ross
                Department of Computer Science
                University College London
                London WC1E 6BT
                Phone: (+44) 1 387 7050 Ext. 3701

                ARPA :  sross@cs.ucl.ac.uk
....if this does not work try;
           EAST COAST:  sross%cs.ucl.ac.uk@relay.cs.net
           WEST COAST:  sross%cs.ucl.ac.uk@a.isi.edu
                UUCP :  mcvax!ukc!ucl-cs!sross

------------------------------

Date: 22 Jan 88 22:46:23 GMT
From: ssc-vax!bcsaic!dorothy@beaver.cs.washington.edu  (Dorothy Dube)
Subject: Joshua for Symbolics


there's been a recent reference to some tool called Joshua,
which presumably sits on a Symbolics, and is purported to be
as robust as Knowledge Craft (tm).

does anyone have any details on this tool?

thanx

dorothy@boeing.com

------------------------------

Date: Fri 22 Jan 88 14:17:32-EST
From: Steven M. Kearns <KEARNS@CS.COLUMBIA.EDU>
Subject: IXL - A Machine Learning System

Hi.
I remember seeing an advertisement for IXL - A Machine Learning System;
it sounded interesting but I have lost the pointer to the company.

Has anybody used this, or know the address/phone number of the company.
As I remember it, it was some sort of combination of frame system,
database, expert system, and kitchen sink.
-steve
(kearns@cs.columbia.edu)

------------------------------

Date: Fri, 22 Jan 88 20:21 SET
From: Renzo Beltrame <BELTRAME%ICNUCEVM.BITNET@CNUCE-VM.ARPA>
Subject: Re: PROLOG for an IBM 3090 under CMS

We have VMPROLOG on an IBM 3081 under VM/CMS.  It was used by our
colleagues who work on natural language analysis.
I have not heard of a version of VMPROLOG using the vector feature of
the IBM 3090.
The only languages for which I know of this possibility are MPSX and APL2.
Regards,
--renzo
Acknowledge-To: Renzo Beltrame <BELTRAME@ICNUCEVM>

------------------------------

Date: 22 Jan 88 22:35:40 GMT
From: luis@postgres.Berkeley.EDU  (Luis Miguel)
Subject: Re: Object and frame languages wanted

What you want is Portable Common Loops, an implementation of CLOS
(the Common Lisp Object System) from Xerox PARC.

Send mail to CommonLoops-Coordinator.pa@Xerox.com for information regarding
availability, documentation, etc. It is available for ftp'ing over the
network, and is all public domain code.

/Luis


Luis Miguel CS Division, UC Berkeley.
arpa: luis@ingres.Berkeley.EDU
uucp: {ihnp4,decvax}!ucbvax!ingres!luis
at&t: 415/642-3560 (W)

------------------------------

Date: Fri, 22 Jan 1988  23:08 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList V6 #15 -  table tennis playing robot

We did make a ball-catching robot in the late 1950's.  Richard
Greenblatt and William Gosper were involved with it.  We considered
ping-pong but concluded that our mechanical arm - then an AMF
versatran machine - would be too slow.  Somehow, though, a rumor
spread that we had a project to play ping-pong.  The only substance
was that Gosper did make an attitude-controllable paddle that could be
attached to the arm in case we were able to speed it up.  It should be
mentioned that Gosper was - and presumably still is - a master level
ping-pong player.

The rumor about the ball-catcher trying to catch me was true, however.
It would try to catch anything it could.  We built a railing around
it.  It was no good at catching people because the algorithm was: find
anything that moves and extrapolate the appropriate parabola for free
flight.  But it sure was dangerous being around it.
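
A minimal sketch in Python of what that extrapolation amounts to -- an
editorial illustration only, not the MIT code; the (t, x, y, z)
observation format and the value g = 9.8 m/s^2 are assumptions:

    G = 9.8  # gravitational acceleration, m/s^2, acting along -z

    def predict(p0, p1, t_future):
        """Given two tracked positions p0, p1 = (t, x, y, z) of a moving
        blob, return its predicted (x, y, z) at t_future, assuming free
        flight: constant horizontal velocity, constant gravity."""
        (t0, x0, y0, z0), (t1, x1, y1, z1) = p0, p1
        dt = t1 - t0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        vz = (z1 - z0) / dt - 0.5 * G * dt   # vertical velocity at t1
        s = t_future - t1
        return (x1 + vx * s, y1 + vy * s, z1 + vz * s - 0.5 * G * s * s)

In practice one would fit several noisy camera samples rather than two,
but the parabola itself is the whole of the "plan."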

Our pincer-like mechanical hand was also too slow to catch the ball,
given the low accuracy we were getting from our TV camera.  Greenblatt
finally attached a rustic-looking straw cornucopia basket to the arm
- you know, the sort of thing shaped like a curved horn with the
flared end up.  Most visitors assumed that we were simply trying to be
quaintly anachronistic.  The sad fact is that the cornucopia was about
the only thing that worked.  We had tried all sorts of little cups and
pails, etc., but the ball would usually bounce out of them.  The ball
was too dumb, though, to figure out how to get out of the cornucopia.

Later, as we got into the problems of robotics - for example, the
problems Fahlman faced with the BUILD program - we decided to leave
the problems of high-speed but low-level robotics aside.  For the same
reasons we stuck with stationary rather than mobile robots.

------------------------------

Date: Fri, 22 Jan 88 14:21:05 est
From: Mr. David Smith <dsmith@gelac.arpa>
Subject: Nano-engineering

>In article <8801180618.AA08132@ucbvax.Berkeley.EDU> GODDEN@gmr.COM writes:
>> [...] the book >Engines of Creation< by K. Eric Drexler of MIT. [...]
>>it is not necessary to first understand intelligence.  All one has to do is
>>simulate the brain [...] a complete hardware simulation of the brain can be
>>done [...] in the space of one cubic centimeter [...] such a machine could then
>>just be allowed to run and should be able to accomplish a man-year of
>>work in ten seconds.
>
>The breathtaking simplicity of the idea is awesome.  Of course, some
>technological advances will be necessary for its realization, but note that
>to attain them, it is not necessary to understand technology ... all one has
>to do is simulate its development.  A complete hardware simulation of the
>U.S. technological enterprise can be done in the space of one cubic meter
>(see appendix A) ... such a machine could then just be allowed to run, and
>should be able to accomplish a century of progress in one hour.

Some time ago, I asked a net question about nano-engineering and all roads
led to Eric Drexler.  Frankly, I was pleased to see this net mail putting
such activities into perspective.  At the risk of sounding Pharisaic, I
believe that the cause of "serious AI" is seriously hindered by such blatant
blather. This has to be the only forum in the civilized world which allows
such claims to be perpetrated without receiving equal portions of ridicule
and abuse.  Can it not be stopped?

David Smith:  DSMITH@gelac.arpa

------------------------------

Date: 23 Jan 88 01:24:46 GMT
From: joglekar@icarus.riacs.edu  (Umesh D. Joglekar)
Subject: Re: Another vote for comp.cog-sci (was Re: time for
         sci.psych???)


  .... I, for one, miss Steven Harnad's frequent postings.
  A cognitive science newsgroup would provide an appropriate forum
  for such postings, which were voted out of this newsgroup some time back.


 Research Institute for Advanced Computer Science   ARPA: joglekar@riacs.edu
 MS 230-5, NASA Ames Research Center,               UUCP: ..ames!riacs!joglekar
 Moffett Field, Ca 94305                                  (415)  694-6921


  [Please note that the vote was initiated by Steven Harnad, not
  by the AIList moderator or readers.  There was considerable
  controversy over his postings, but the complaints were mostly
  about length rather than subject matter.  I favor creation of
  a separate CogSci list, but AIList is still available for such
  discussions.  The same is true for philosophy, which also describes
  Harnad's postings.  -- KIL]

------------------------------

End of AIList Digest
********************

∂29-Jan-88  0220	LAWS@KL.SRI.COM 	AIList V6 #18 - Seminars, Conference  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Jan 88  02:20:28 PST
Date: Fri 29 Jan 1988 00:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #18 - Seminars, Conference
To: AIList@SRI.COM


AIList Digest            Friday, 29 Jan 1988       Volume 6 : Issue 18

Today's Topics:
  Seminar - The Psychology of Everyday Things (SUNY) &
    Combining O-O and DB Programming Languages? (Unisys) &
    Learning Search Control Knowledge (AT&T) &
    Thinkertools (BBN) &
    BREAD, FRAPPE, and CAKE: Automated Deduction (SRI),
  Conference - AAAIC88 Aerospace Applications of AI

----------------------------------------------------------------------

Date: Mon, 25 Jan 88 08:53:54 EST
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Seminar - The Psychology of Everyday Things (SUNY)


                STATE UNIVERSITY OF NEW YORK AT BUFFALO

                     The Steering Committee of the

              GRADUATE STUDIES AND RESEARCH INITIATIVE IN

                   COGNITIVE AND LINGUISTIC SCIENCES

                                PRESENTS

                            DONALD A. NORMAN

                    Institute for Cognitive Science
                  University of California, San Diego

                   THE PSYCHOLOGY OF EVERYDAY THINGS

How do we manage the tasks of everyday life?  The traditional answer  is
that  we  engage  in  problem solving, planning, and thought.  How do we
know what to do?  Again, the traditional answer is  that  we  learn,  in
part  through  experience,  in part through instruction.  I suggest that
this view is misleading.  Less planning and problem solving is  required
than  is  commonly  supposed.   Many  tasks  need never be learned:  the
proper behavior is obvious from the start.  The problem space  for  most
everyday  tasks  is  shallow  or narrow, not wide and deep as the tradi-
tional approach suggests.  The minimization of the problem space  occurs
because  natural  and contrived properties of the environment combine to
constrain the set of possible actions.  The effect is as if one had  put
the knowledge required to do a thing on the thing itself:  the knowledge
is in the world.

I show that seven stages are relevant to the performance of  an  action,
including  three  stages  for execution of an act, three for evaluation,
and a goal stage.  Consideration of the role of each stage,  along  with
the  principles  of natural mappings and natural constraints, leads to a
set of psychological principles for  design.   Couple  these  principles
with  the  suggestion that most real tasks are shallow or narrow, and we
start to have a psychology of everyday things and everyday actions.

The talk itself is meant to be light and enjoyable.  However, there  are
profound implications for the type of theory one develops for simulating
cognitive computation.  There are  serious  implications  for  massively
parallel  structures  (what  we  call Parallel Distributed Processing or
connectionist approaches), for memory storage and retrieval via descrip-
tions  or coarse coding, and, in general, for a central role for pattern
matching, constraint satisficing, and nonsymbolic processing  mechanisms
in  human cognition.   But the main implications of the work are for the
design of understandable and usable objects.

                        Monday, February 1, 1988
                               4:00 P.M.
                        Park 280, Amherst Campus

There will also be an informal evening discussion that evening at  David
Zubin's  house, 157 Highland St., at 8:00 P.M.  Call Bill Rapaport (Com-
puter Science, (716) 636-3193, 3180) for further information.

------------------------------

Date: Tue, 26 Jan 88 11:44:22 EST
From: finin@PRC.Unisys.COM (Tim Finin)
Subject: Seminar - Combining O-O and DB Programming Languages?
         (Unisys)


                              AI Seminar
                       UNISYS Knowledge Systems
                        Paoli Research Center
                               Paoli PA


  Can We Combine Object-Oriented and Database Programming Languages?

                            Peter Buneman
                   Computer and Information Science
                      University of Pennsylvania

The inadequate expressive power of the relational data model for many
database representation tasks -- especially those that do not conform
to requirements of traditional data processing -- has led several
database systems developers to adopt an alternative "object-oriented"
approach to the representation of data.  But if we do this, must we
necessarily sacrifice the high-level languages and the
considerable implementation technology that have been developed for
relational databases?  I shall argue that if we take a more liberal
attitude to what a relation is, we can generalize relational
languages, and even some of the ideas in relational database design,
to work for sets of objects.
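
As a toy reading of that last point (my illustration only, not
Buneman's proposal), a "relation" that is simply a set of objects still
supports the usual relational operators; the Emp type and its fields
below are invented:

    from collections import namedtuple

    Emp = namedtuple("Emp", "name dept salary")   # hypothetical object type

    def select(objs, pred):
        return {o for o in objs if pred(o)}

    def project(objs, fields):
        return {tuple(getattr(o, f) for f in fields) for o in objs}

    emps = {Emp("quinn", "ai", 48000), Emp("lee", "db", 52000)}
    print(project(select(emps, lambda e: e.salary > 50000), ("dept",)))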

A closely related problem is how we represent sets of objects as typed
values in a programming language.  If we can find such a
representation, can data types be checked statically as in languages
like Pascal and Ada, or must we live with the difficulties and dangers
of run-time type checking?  Some recent results by Atsushi Ohori
indicate that it is not only possible to do static type checking, but
that the types can be automatically inferred: the programmer does not
even have to declare the data types!

                 2:00 pm Wednesday, February 3, 1988
                         BIC Conference Room
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: Wed, 27 Jan  14:47:03 1988
From: dlm%research.att.com@RELAY.CS.NET
Subject: Seminar - Learning Search Control Knowledge (AT&T)


Learning Effective Search Control Knowledge: An Explanation-Based Approach

Steven Minton
Carnegie-Mellon University

Monday, February 1, 1988
10:30 AM

AT&T Bell Laboratories - Murray Hill  3C-436


In order to solve problems more effectively with accumulating
experience, a problem solver must be able to learn and exploit search
control knowledge. In this talk, I will discuss the application of
explanation-based learning (EBL) techniques for acquiring
domain-specific control knowledge. Although previous research has
demonstrated that EBL is a viable approach for acquiring control
knowledge, in practice EBL may not always generate useful control
knowledge. For control knowledge to be effective, the cumulative
benefits of applying the knowledge must outweigh the cumulative costs of
testing whether the knowledge is applicable. Generating effective
control knowledge may be difficult, as evidenced by the complexities
often encountered by human knowledge engineers. In general, control
knowledge cannot be indiscriminately added to a system; its costs and
benefits must be carefully taken into account.

To produce effective control knowledge, an explanation-based learner
must generate "good" explanations -- explanations that can be profitably
employed to control problem solving.  In this talk, I will discuss the
utility of EBL and describe the PRODIGY system, a problem solver that
learns by searching for good explanations. I will also briefly describe
a formal model of EBL and a proof that PRODIGY's generalization
algorithm is correct.
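
One plausible way to operationalize the cost/benefit criterion
described above -- an illustrative sketch only, not PRODIGY's actual
utility measure or code:

    def worth_keeping(match_cost, application_freq, avg_savings):
        """match_cost: average cost of testing the rule per problem;
        application_freq: fraction of problems where the rule applies;
        avg_savings: average search time saved when it does apply."""
        utility = application_freq * avg_savings - match_cost
        return utility > 0

A rule that rarely applies, or that is expensive to match, can easily
have negative utility even though it is logically correct.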

Sponsor:  Ron Brachman

------------------------------

Date: Thu 28 Jan 88 14:20:59-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Thinkertools (BBN)

                    BBN Science Development Program
                  AI/Education Seminar Series Lecture

                       THE THINKERTOOLS PROJECT:
        CAUSAL MODELS, CONCEPTUAL CHANGE, AND SCIENCE EDUCATION

                   Barbara Y. White and Paul Horwitz
                       BBN Labs, Education Dept.
                (BYWHITE@G.BBN.COM, PHORWITZ@G.BBN.COM)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
            10:30 am, Thursday February 4 (NOTE UNUSUAL DAY)


This talk will describe an approach to science education that enables
sixth graders to learn principles underlying Newtonian mechanics, and to
apply them in unfamiliar problem solving contexts.  The students'
learning is centered around problem solving and experimentation within a
set of computer microworlds (i.e., interactive simulations).  The
objective is for students to acquire gradually an increasingly
sophisticated causal model for reasoning about how forces affect the
motion of objects.  To facilitate the evolution of such a mental model,
the microworlds incorporate a variety of linked alternative
representations for force and motion, and a set of game-like problem
solving activities designed to focus the students' inductive learning
processes.  As part of the pedagogical approach, students formalize what
they learn into a set of laws, and critically examine these laws, using
criteria such as correctness, generality, and parsimony.  They then go
on to apply their laws to a variety of real world problems.  The
approach synthesizes the learning of the subject matter with learning
about the nature of scientific knowledge -- what are scientific laws,
how do they evolve, and why are they useful?  Instructional trials found
that the curriculum is equally effective for males and females, and for
students of different ability levels.  Further, sixth graders taught
with this approach do better on classic force and motion problems than
high school students taught using traditional methods.

------------------------------

Date: Thu, 28 Jan 88 12:23:33 PST
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - BREAD, FRAPPE, and CAKE: Automated Deduction (SRI)

                       BREAD, FRAPPE, AND CAKE:
              THE GOURMET'S GUIDE TO AUTOMATED DEDUCTION

                          Yishai A. Feldman (YISHAI@AI.AI.MIT.EDU)
                         AI Laboratory, MIT

                   11:00 AM, WEDNESDAY, February 3
              SRI International, Building E, Room EJ228

Cake is the knowledge representation and reasoning system developed as
part of the Programmer's Apprentice project.  Cake can be thought of
as an active database, which performs quick and shallow deduction
automatically; it supports both forward-chaining and backward-chaining
reasoning.  The Cake system has a layered architecture: the kernel of
the system, called Bread (for Basic REAsoning Device), is a
truth-maintenance system with equality and demons.  Built on top of
this is Frappe (for FRAmes in a ProPositional Engine), which
implements a typed logic with special-purpose decision procedures for
various algebraic properties of operators (such as commutativity and
associativity), sets, partial functions, and structured objects
(frames).  Only the topmost layer of Cake, which implements the Plan
Calculus, is specific to reasoning about programs.  This talk will
describe the architecture and features of Bread, Frappe, and Cake,
including a transcript of a demonstration session.  This is joint work
with Charles Rich.
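
A toy illustration (nothing like the Cake system itself) of the "active
database" behavior described above: asserting a fact runs any demons
watching it and forward-chains through simple propositional rules.

    class TinyReasoner:
        def __init__(self):
            self.facts = set()
            self.rules = []     # (frozenset of premises, conclusion) pairs
            self.demons = {}    # fact -> list of callbacks

        def watch(self, fact, callback):
            self.demons.setdefault(fact, []).append(callback)

        def add_rule(self, premises, conclusion):
            self.rules.append((frozenset(premises), conclusion))

        def assert_fact(self, fact):
            if fact in self.facts:
                return
            self.facts.add(fact)
            for callback in self.demons.get(fact, []):
                callback(fact)
            for premises, conclusion in self.rules:
                if premises <= self.facts:
                    self.assert_fact(conclusion)

Backward chaining, equality, retraction (truth maintenance), and the
typed logic are exactly the parts such a toy leaves out.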

VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

------------------------------

Date: 26 Jan 88 16:54:00 EST
From: "ETD2::WILSONJ" <wilsonj%etd2.decnet@afwal-aaa.arpa>
Reply-to: "ETD2::WILSONJ" <wilsonj%etd2.decnet@afwal-aaa.arpa>
Subject: Conference - AAAIC88 Aerospace Applications of AI


                           AAAIC88 CALL FOR PAPERS

     AEROSPACE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE 1988

                With Neural Networks Aerospace Applications
                          Special Interest Sessions
              Stouffer's Hotel, Dayton, OH, October 25-27, 1988

Particulars - Tutorials will be held on 24 Oct 88.  Workshops will be held on
28 Oct 88.  There will be exhibits by AI companies and related industries as
well as product familiarization sessions.  There will be up to 18 technical
sessions in 5 half-day periods, luncheon speakers and a banquet.

The 4th Aerospace Applications of Artificial Intelligence Conference will
investigate a wide range of topics with heavy emphasis this year on neural
network applications in aerospace.  Topic areas for which timely, original,
technical papers are solicited include:

   Integrating Neural Networks and Expert Systems
   Knowledge Processing with Neural Nets
   Robotics
   Neural Networks and Signal Processing
   Data Fusion/Sensor Fusion
   Machine Learning, Cognition & the Cockpit
   Combinatorial Optimization for Scheduling and Resource Control
   Machine Vision & Avionics Applications
   Natural Language Recognition and Synthesis
   Neural Networks and Man-Machine Interface Issues
   Self-Organization in Avionics
   Neural Network Development Tools
   Applied Adaptive-Resonance
   Applied Biological Models
   Cooperative and Competitive Network Dynamics in Aerospace
   Parallel Processing & Neural Networks
   Automatic Target Recognition
   Learning Theory and Techniques
   Back Propagation with Momentum, Shared Weights or Recurrent
      Network Architectures
   Simulation and Implementation of Neural Networks
   Technology - Microchips, Optics, etc.
   Expert System Development Tools
   Applications of Expert Systems in Manufacturing
   Aerospace Scheduling
   Operational and Maintenance Issues Using Expert Systems
   Design Automation
   Data Management
   Real Time Expert Systems
   Acquisition Management
   Knowledge Base Simulation
   Verification and Validation of ES
   Advanced Problem Solving Techniques
   Diagnostics and Fault Isolation

ABSTRACT DEADLINE :  26 Feb 88

Authors are invited to submit abstracts of 500 words in any of the above topic
areas.  Please avoid acronyms or abbreviations in the title of the paper.  A
short biographical sketch of the author(s) to include citizenship, mailing
address and telephone number must be included with the abstract.  Final
manuscripts for papers are due 19 Aug 88.
                       James R. Johnson
   Send abstracts to:  AFWAL/AAOR
                       WPAFB, OH 45433

Sponsored by Dayton SIGART and the Association of Computing Machinery.







------------------------------

End of AIList Digest
********************

∂29-Jan-88  0504	LAWS@KL.SRI.COM 	AIList V6 #19 - Ping Pong, Prolog, Expert Systems, IXL    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Jan 88  05:04:28 PST
Date: Fri 29 Jan 1988 00:14-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #19 - Ping Pong, Prolog, Expert Systems, IXL
To: AIList@SRI.COM


AIList Digest            Friday, 29 Jan 1988       Volume 6 : Issue 19

Today's Topics:
  Queries - Neural Net Study Group & Edinburgh TRs &
    Knowledge Acquisition References & Ten Best Vision References &
    Comparative Language Structures & Engineering Data Modelling &
    XLISP 1.5 & Testing and Evaluating an Expert System &
    AI in Management,
  Application - Table Tennis-Playing Robot,
  AI Tools - PROLOG for an IBM 3090 under CMS &
    Software Development and Expert Systems &
    IXL Machine Learning System

----------------------------------------------------------------------

Date: 25 Jan 88 13:43:30 PST (Monday)
From: Rockwell.HENR801c@Xerox.COM
Reply-to: Rockwell.HENR801c@Xerox.COM
Subject: Neural Net Study Group


I'm trying to find members of Xerox in Rochester, NY interested in
joining/forming a neural net/connectionist study group.  Interested parties
should reply to  ROCKWELL:HENR801C:XEROX.

------------------------------

Date: 26 Jan 88 21:22:37 GMT
From: rao@cvl.umd.edu  (Subbarao Kambhampati)
Subject: Request for an Edinburgh TR (Austin Tate, Non-Lin)


I am trying to get hold of the following two technical reports.
I would appreciate it very much if anyone who has them can send me copies.
I could write to Edinburgh, but I need them fast and thought this request
might bear quicker results.


The TRs:
        Austin Tate, Project planning using a Hierarchic Non-linear Planner,
        Research Report 245, Dept of AI, Univ of Edinburgh, 1976

        Austin Tate, Using Goal structure to direct search in a problem
        solver, Department of AI, University of Edinburgh, 1975.

I am particularly interested in the former.


My USMail address is:

Subbarao Kambhampati
GRA, Center for Automation Research
University of Maryland
College Park, MD 20742

Thanks in anticipation
-rao

------------------------------

Date: 25 Jan 88 08:51:16 GMT
From: fordjm@byuvax.bitnet
Subject: Knowledge Acquisition References (Request)

I am interested in knowledge elicitation issues and have run across
several references to the following papers:

Welbank, M. (1983).  A review of knowledge acquisition techniques
for expert systems.  Martlesham Consultancy Services, Ipswich, UK.

and

Wielinga, B. J. & Breuker, J. A.  (1984).  Interpretation of verbal
data for knowledge acquisition.  Proceedings of ECAI-84, Pisa, Italy,
pp. 41-50.

Does anyone know where I can obtain these papers?

Thanks in advance,

John M. Ford        fordjm@byuvax.bitnet
131 Starcrest Drive
Orem, UT 84058

------------------------------

Date: 28 Jan 88 00:15:09 GMT
From: hunt@spar.SPAR.SLB.COM (Neil Hunt)
Reply-to: hunt@spar.UUCP (Neil Hunt)
Subject: Ten best vision references..


I would like to collect votes for the ten most important references
in the field of computer vision. I will compile a list and repost
if there is sufficient response. Feel free to vote for one paper,
or as many as you like; each mention by a separate person
counts as one vote.

Neil/.

        ...{amdahl|decwrl|hplabs}!spar!hunt    hunt@spar.slb.com

------------------------------

Date: Wed, 20 Jan 88 15:19:48 MST
From: yorick%nmsu.csnet@RELAY.CS.NET
Subject: Query - Comparative Natural/Formal Language Structures


Appeal for references/pointers to a possible literature on
comparative structures of natural and formal languages.


We propose to investigate the possible relationship between (1)
 the minimum structures of natural language, and (2) the minimum
structures of programming languages, and seek help in the form
of references to work already done.

There are two ways to approach the minimum structures necessary
to a natural language -- formal and empirical.
  The formal structure of natural languages is still under debate,
 and that debate is easily found in grammar studies.

As for the empirical, lists of phenomena common to all known
natural languages are known, such as NP's, VP's, direct objects,
interrogation, negation, and sentences.  What are the standard, and
even non-standard, references for such lists?

Lastly, is there existing work on the relationship itself?  Has anyone
compared:
 a) subject and predicate to data and control structures, or
 b) declarative, imperative, and interrogative utterances, to types of
accesses to a variable (declaration, definition, and reference), or
 c) phonemes, morphemes, and comprehension in speaking, to tokens, objects, and
compilation in programming?

Thank you for any suggestions or references, even those which seem obvious.

Please reply to rhill@nmsu.csnet

------------------------------

Date: Wed, 27 Jan 1988 23:50:12 EST
From: Deeptendu Majumder
Subject: Engineering Data Modelling Info

I am working in the area of Engineering Databases here at Georgia
Tech, and am looking for information on Engineering Data Modeling.
Can anybody provide me with a list of good references in this area?
Information on software packages for data modeling, and the names and
addresses of people actively involved in this area, would also be very
helpful.  The stress is on Engineering Data.  I would really
appreciate any help.

Thanx in advance

Deeptendu Majumder
MEIBMDM@GITVM2
Box 30963
Georgia Tech
Atlanta, GA 30332

------------------------------

Date: Thu, 28 Jan 88 12:33:11 EST
From: Bill Delaney <WPD@IRISHVM.BITNET>
Reply-to: AIList@Stripe.SRI.COM
Subject: Query - XLISP 1.5

Does anyone out there know where I can find a copy of XLISP version 1.5?

Thanks in advance.

------------------------------

Date: Thu, 28 Jan 88 16:09:57 PST
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: Procedures for Testing & Evaluating an Expert System

Ken-- Please post this in the next issue of AIList.  There was a similar
request in a recent issue, but since I have a specific deadline, I'd
appreciate the additional posting.  Thanks.  --Dave
---------------

I need information on how to test and evaluate an expert system.  I have seen
the Feb 88 AI-Expert article, and would greatly appreciate additional relevant
information which I can obtain by Feb 10.

lambert@nosc.mil

David R. Lambert
Naval Ocean Systems Center
San Diego, CA  92152
(619) 553-1093

------------------------------

Date: Fri, 29 Jan 88 10:30:03 SST
From: Joel Loo <ISSLPL%NUSVM.BITNET@CUNYVM.CUNY.EDU>
Subject: Query: AI works in Management

I am posting this for a colleague: (Please reply to ISSAD@NUSVM.BITNET)

  There aren't many AI research works on Management that I've
  come across. I hope to get to know those who are doing
  research to apply AI in the various disciplines of Management.

------------------------------

Date: Mon, 25 Jan 88 08:58:19 PST
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: Re: table tennis playing robot

Since I was at the MIT Artificial Intelligence Laboratory during the 60's,
I can flesh out some of Rob Elkins' comments.  When I first arrived, there
was a major effort to build a robot which could play table tennis.  (I think
this was the summer of 1966.)  There was an "arm" which was an AMF manipulator
which operated in cylindrical coordinates and a simple vidicon "eye."  When
I arrived, a basket had been attached to the end of the arm, and development
was concerned with getting the arm to catch the ball based on the trajectory
tracked by the eye.  Because of the relatively primitive environment, there
were all sorts of problems.  For example, the initial version really didn't
work in three dimensions.  Thus, for a successful test, one had to throw
the ball in a specific plane;  and the arm could only adjust itself on
the up-down and forward-back axes.  As I recall, there was a joke to the
effect that, after six months of intensive development of both hardware and
software (the MIDAS assembler on a PDP-6), all researchers had learned how
to throw the ball in such a way that the arm didn't have to move!

I was not there when the arm tried to catch Marvin Minsky's (bald) head.
So I don't know if the story is really true.  Given the intricacies of
setting up a test, I suspect it is at least a slight exaggeration.
Attention subsequently shifted to an arm with four (I think) hydraulically
controlled flex-extend joints.  However, I do not think this arm ever caught
any flying objects.  (As a matter of fact, I'm not sure it was ever
controlled effectively for any purpose.)

------------------------------

Date: 28 Jan 88 00:55:40 GMT
From: munnari!augean.oz.au!pfranzon@uunet.UU.NET (Paul Franzon)
Reply-to: pfranzon%augean.OZ@uunet.UU.NET (Paul Franzon)
Subject: Re: table tennis playing robot


>I believe the table tennis playing robot work was done by Bell Labs at
>one of their New Jersey locations (probably Murray Hill or Holmdel).

Yes:
Robotic Systems Research Dept.,
AT&T Bell Labs
Holmdel NJ 07733

Head: John Jarvis, room 4B-601 (e-mail: jfj@vax135)

------------------------------

Date: Tue, 26 Jan 88 21:34:07 EST
From: humphrey@mcs.nlm.nih.gov (Susanne M. HUMPHREY)
Subject: robot ping-pong player in Philadelphia

Among the dissertation abstracts in a forthcoming edition of AI-Related
Dissertations in SIGART Newsletter:

AN University Microfilms Order Number ADG87-14001.
AU ANDERSSON, RUSSELL LENNART.
IN University of Pennsylvania Ph.D 1987, 339 pages.
TI REAL TIME EXPERT SYSTEM TO CONTROL A ROBOT PING-PONG PLAYER.
SO DAI v48(05), SecB, pp1412.
DE Computer Science.
AB A real time "expert" control system has been designed and forms
   the nucleus of a functioning robot ping-pong player.

   Robot ping-pong is underconstrained in the task specification (hit
   the ball back), and heavily constrained by the manipulator
   capabilities. The expert system must integrate the sensor data,
   robot capabilities, and task constraints to generate an acceptable
   plan of action. The robot ping-pong task demands that the planner
   anticipate environmental changes occurring during planning and
   robot motion. The inability to generate accurate, timely plans
   even in the face of a capricious environment and limited actuator
   performance would result in a nonfunctional system.

   The program must continuously update the task plan as new sensor
   data arrives, selecting appropriate modifications to the existing
   plan, rather than treating each datum independently. The difficult
   task and the stream of sensor data result in an interesting system
   architecture. The expert system operates in the symbolic and
   numeric domains, with a blackboard to enable global optimization
   by local agents. The architecture interrelates initial planning,
   temporal updating, and exception handling for robustness.

   A sensor and processing system produces three dimensional
   position, velocity, and spin vectors plus a time coordinate at 60
   Hz. Novel processing algorithms and careful attention to camera
   modeling were necessary to obtain adequate accuracy.

   A robot controller provides accurate, predictable performance
   close to the envelope of robot capabilities using modeling and
   feed-forward techniques. The controller plans motions in the
   temporal domain including specified terminal velocities, and
   supports smooth changes to motions in progress.

   The performance of the sensor subsystem, actuator and robot
   controller, and expert system have been demonstrated. The system
   successfully plays against both human and machine opponents.

------------------------------

Date: 26 Jan 88 02:03:38 GMT
From: quintus!jbeard@sun.com (Jeff Beard)
Subject: Re: PROLOG for an IBM 3090 under CMS


As of the fall of '87, Quintus Prolog was delivered as a runtime
library for the MVS/{SP,XA} and VM/370 Rel 4 environments.

This is system-independent runtime support, such that the compiler
*.text decks from one environment DO NOT require re-compilation
to support the other (given total absence of system-dependent
file names).

The product is a cross-compiler, with the Prolog sources existing on
a Sun work-station and the generated objects uploaded and link-edited
with the runtime library supplied.

Contact Don Hester 415-965-7700
        or
        mail to
        Quintus Computer Systems
        1310 Villa Street
        Mountain View, CA 94041

------------------------------

Date: Tue, 26 Jan 88 19:04:24 EST
From: jacob@nrl-css.arpa (Rob Jacob)
Subject: Re: Software Development and Expert Systems

To: sross@cs.ucl.ac.uk

Saw your message about software engineering techniques for expert
systems on the AIList.  This may not be quite what you had in mind,
but, here at the Naval Research Laboratory Judy Froscher and I have
been working on developing a software engineering method for expert
systems.  We are interested in how rule-based systems can be built so
that they will be easier to change.  Our basic solution is to divide
the set of rules up into pieces and limit the connectivity of the
pieces.

I'm going to attach a short abstract about our work, and some references,
to the end of this message.  Hope it's of use to you.

Good luck,
Rob Jacob

ARPA:   jacob@nrl-css.arpa
UUCP:   ...!decvax!nrl-css!jacob
SNAIL:  Code 5530, Naval Research Lab, Washington, D.C. 20375


    Developing a Software Engineering Methodology for Rule-based Systems

                            Robert J.K. Jacob
                           Judith N. Froscher

                        Naval Research Laboratory
                            Washington, D.C.

Current expert systems are typically difficult to change once they are
built.  The objective of this research is to develop a design
methodology that will make a knowledge-based system easier to change,
particularly by people other than its original developer.  The basic
approach for solving this problem is to divide the information in a
knowledge base and attempt to reduce the amount of information that
each single programmer must understand before he can make a change to
the expert system.  We thus divide the domain knowledge in an expert
system into groups and then attempt to limit carefully and specify
formally the flow of information between these groups, in order to
localize the effects of typical changes within the groups.

By studying the connectivity of rules and facts in several typical
rule-based expert systems, we found that they seem to have a latent
structure, which can be used to support this approach.  We have
developed a methodology based on dividing the rules into groups and
concentrating attention on those facts that carry information between
rules in different groups.  We have also developed algorithms for
grouping the rules automatically and measures for coupling and cohesion
of alternate rule groupings in a knowledge base.  In contrast to the
homogeneous way in which the facts of a rule-based system are usually
viewed, the new method distinguishes certain facts as more important
than others with regard to future modifications of the rules.
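
An illustrative sketch, not the NRL algorithms or measures, of the
coupling idea above: given a grouping of the rules, count the facts
that carry information from one group into another.  The rule and fact
names are hypothetical.

    def cross_group_facts(rules, grouping):
        """rules: {rule_name: (facts_read, facts_written)};
        grouping: {rule_name: group_id}.
        Returns the facts written in one group and read in another."""
        writers, readers = {}, {}
        for rule, (reads, writes) in rules.items():
            for f in writes:
                writers.setdefault(f, set()).add(grouping[rule])
            for f in reads:
                readers.setdefault(f, set()).add(grouping[rule])
        return {f for f, w in writers.items()
                if readers.get(f, set()) - w}

    rules = {"r1": ({"f1"}, {"f2"}), "r2": ({"f2"}, {"f3"})}
    print(cross_group_facts(rules, {"r1": "A", "r2": "B"}))   # -> {'f2'}

The fewer such crossing facts a grouping produces, the lower its
coupling, and the less a maintainer must read outside any one group.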

                           REFERENCES

R.J.K. Jacob and J.N. Froscher, "Facilitating Change in Rule-based
Systems," pp. 251-286 in Expert Systems: The User Interface, ed. J.A.
Hendler, Ablex Publishing Co., Norwood, N.J. (1988).

R.J.K. Jacob and J.N. Froscher, "Software Engineering for Rule-based
Systems," Proc. Fall Joint Computer Conference pp.  185-189, Dallas,
Tex. (1986).

J.N. Froscher and R.J.K. Jacob, "Designing Expert Systems for Ease of
Change," Proc. IEEE Symposium on Expert Systems in Government pp.
246-251, Washington, D.C. (1985).

R.J.K. Jacob and J.N. Froscher, "Developing a Software Engineering
Methodology for Rule-based Systems," Proc. 1985 Conference on
Intelligent Systems and Machines pp. 179-183, Oakland University
(1985).

R.J.K. Jacob and J.N. Froscher, "Developing a Software Engineering
Methodology for Knowledge-based Systems," NRL Report 9019,  Naval
Research Laboratory, Washington, D.C. (1987).

------------------------------

Date: 27 Jan 88 20:36:05 GMT
From: harvard!bunny!harvard!bunny!gps0.UUCP@seismo.css.gov (Gregory
      Piatetsky-Shapiro)
Reply-to: harvard!bunny!gps0.UUCP@seismo.css.gov (Gregory
          Piatetsky-Shapiro)
Subject: Re: IXL - A Machine Learning System


IXL is a product of IntelligenceWare Corp, 9800 S. Sepulveda Blvd,
Suite 730, Los Angeles, CA 90045.  They also make Intelligence/Compiler.
Call (213) 417 8896 for information.
These products run on an IBM PC.  IXL is a very interesting product that
takes a database (ASCII, 1-2-3, dbase formats) and derives rules of the
form (if A > a and B between b1 & b2, then C = c with confidence XX).
Their product was advertised in AI Expert, IEEE Expert and AI Magazine.
I would be interested to hear about any experiences with IXL, and whether
it can derive genuinely useful rules.
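
Presumably the confidence figure in a rule of that form is just the
conditional frequency over the matching records.  As a purely illustrative
sketch (this is not IXL's method; the record format, a list of dicts with
keys 'A', 'B', 'C', is an assumption made here):

def rule_confidence(records, a, b1, b2, c):
    # Confidence of: if A > a and b1 <= B <= b2 then C = c,
    # estimated as the fraction of matching records where C = c.
    matching = [r for r in records if r["A"] > a and b1 <= r["B"] <= b2]
    if not matching:
        return 0.0
    return sum(1 for r in matching if r["C"] == c) / len(matching)

records = [{"A": 5, "B": 2, "C": "x"},
           {"A": 7, "B": 3, "C": "x"},
           {"A": 8, "B": 2, "C": "y"},
           {"A": 1, "B": 2, "C": "x"}]
print(rule_confidence(records, a=4, b1=1, b2=3, c="x"))
# 2 of the 3 matching records have C = 'x', so about 0.67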

        Gregory Piatetsky-Shapiro
Include <standard funny disclaimer>

--
Gregory Piatetsky-Shapiro       gps0@gte-labs.relay.cs.net
GTE Laboratories,               (617) 466-4236
40 Sylvan Road, Waltham MA 02254
************ A standard humorous disclaimer *************

------------------------------

End of AIList Digest
********************

∂29-Jan-88  0730	LAWS@KL.SRI.COM 	AIList V6 #20 - Nanoengineering, Philosophy, Deductive Databases    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Jan 88  07:30:21 PST
Date: Fri 29 Jan 1988 00:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #20 - Nanoengineering, Philosophy, Deductive Databases
To: AIList@SRI.COM


AIList Digest            Friday, 29 Jan 1988       Volume 6 : Issue 20

Today's Topics:
  Philosophy - Disregard And Abuse Of Nanoengineering &
    Importance of Philosophy,
  Announcement - Foundations of Deductive Databases and Logic Programming

----------------------------------------------------------------------

Date: Mon, 25 Jan 88 12:14:31 MST
From: t05rrs%mpx1@LANL.GOV (Dick Silbar)
Subject: Re: intelligent nanocomputers

David West replied to Godden's query about Drexler's work in a sardonic way,
I think, by extrapolating the earlier remark to "...such a machine could then
just be allowed to run and should be able to accomplish a century of progress
in one hour."  I am reminded of a novel some years back by Robert Forward,
"Dragon's Egg", in which just that did happen in a civilization living on the
surface of a neutron star.

------------------------------

Date: Wed, 27 Jan 88 9:35:58 PST
From: jlevy.pa@Xerox.COM
Subject: disregard and abuse of Nano-engineering (V6#17)

David Smith writes:

"Date: Fri, 22 Jan 88 14:21:05 est
From: Mr. David Smith <dsmith@gelac.arpa>
Subject: Nano-engineering

... [deleted quote] ...

Some time ago, I asked a net question about nano-engineering and all roads
led to Eric Drexler.  Frankly, I was pleased to see this net mail putting
such activities into perspective.  At the risk of sounding Pharisaic, I
believe that the cause of "serious AI" is seriously hindered by such blatant
blather. This has to be the only forum in the civilized world which allows
such claims to be perpetrated without receiving equal portions of ridicule
and abuse.  Can it not be stopped?"

I think this reasoning is wrong, since it smacks of "acceptance by
reputation".  How about this argument:

Some time ago I asked a lot of physicists about why an apple falls from a tree
downwards, and all roads led to Isaac Newton. Frankly, I was pleased to see all
those skeptics question Newton's results and put his activities into
perspective. At the risk of sounding Pharisaic, I believe that the advance of
serious research in physics is severely hindered by such blatant blather. This
has to be the only forum in the world which allows such claims as Newton's to be
perpetrated without receiving equal portions of ridicule and abuse. Can it not
be stopped?

The point is of course, that while Newton originated Newtonian physics, and thus
it is right to expect all references to this field to lead back to him, at the
time he did his research he was a nobody just like Eric Drexler. This was
unimportant at that time, since his ideas were judged on MERIT, not on
reputation.  I am myself ill-equipped to judge Eric's work, but would be VERY
careful in deciding that it "deserves ridicule and abuse" at all.  Newton had
a much harder time propagating his ideas than Drexler does today.  In
appreciation of this fact, scientists of olden times usually were more careful
and methodical in judging new ideas (not always, to be sure!).  Maybe it is
time to try and look at the facts and claims instead of at the names of the
proponents of such claims?

To paraphrase, this must be the only forum in the world (claiming to be
scientifically based) which allows such careless and unjustified disregard,
abuse and ridicule of new ideas (whether they be of merit or not) without
receiving equal portions of ridicule and abuse.  So there!

Jacob Levy
jlevy.pa@xerox.com

------------------------------

Date: 25 Jan 88 14:27:27 GMT
From: rochester!ur-tut!sunybcs!rapaport@bbn.com  (William J. Rapaport)
Subject: Re: Not Another vote (arg.gag.sigh)

In article <20@chemstor.UUCP> bob@chemstor.UUCP (Bob Weigel) writes:
>
>      Just another biannual reminder that philosophy is a futile game.

Hardly.  You might be interested in the following article:

Rapaport, William J., "Unsolvable Problems and Philosophical Progress,"
American Philosophical Quarterly 19 (1982) 289-98.

                                        William J. Rapaport
                                        Assistant Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {ames,boulder,decvax,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||

------------------------------

Date: 26 Jan 88 16:19:53 GMT
From: sdcc6!calmasd!wlp@sdcsvax.ucsd.edu  (Walter Peterson)
Subject: Re: Philosophy is a futile game...


> In article <20@chemstor.UUCP>, bob@chemstor.UUCP (Robert Weigel) writes:
>
>       Just another biannual reminder that philosophy is a futile game.  It
> engages itself in a battle, yet leaves behind the tools needed to win it.
> I ask again, who is more foolish?  Him that claims to know truth, or he that
> scoffs prejudiciously,....yet searches for it!
>

For an excellent explanation as to why philosophy is NOT futile and why
everyone NEEDS philosophy, see

   "Philosophy: Who Needs It"   By Ayn Rand




--
Walt Peterson   GE-Calma San Diego R&D
"The opinions expressed here are my own and do not necessarily reflect those
GE, GE-Calma nor anyone else.
...{ucbvax|decvax}!sdcsvax!calmasd!wlp        wlp@calmasd.GE.COM

------------------------------

Date: 25 Jan 88 20:58:29 GMT
From: Jack Minker <minker@mimsy.umd.edu>
Reply-to: minker@mimsy.UUCP (Jack Minker)
Subject: Foundations of Deductive Databases and Logic Programming


               The book,

             FOUNDATIONS OF DEDUCTIVE DATABASES AND LOGIC PROGRAMMING
             Edited by Jack Minker (University of Maryland),

          will be available from Morgan-Kaufmann Publishers  in  early
          March, 1988.  Orders for the book can be made now.  The ISBN
          No. is: 0-934613-40-0.  The  book  contains  752  pages  and
          costs $36.95.

               This landmark volume explores  the  close  relationship
          between  deductive  databases  and logic programming and the
          foundational issues they share.  A  collection  of  original
          research,  contributed by leading researchers, the book grew
          out of preliminary work presented at the Workshop on Founda-
          tions  of  Deductive Databases and Logic Programming held in
          Washington DC, August 1986.  All the papers have been exten-
          sively refereed and revised.

               Part 1 introduces and examines the import of stratified
          databases, and its relationship to circumscription, and pro-
          vides a comprehensive survey of negation in deductive  data-
          bases  and  logic programming.  Part 2 addresses fundamental
          theoretical and practical issues in  developing  large-scale
          deductive  databases and treats problems such as informative
          answers,  semantic  optimization,  updates   and   computing
          answers  in non-Horn theories.  Part 3 provides results con-
          cerning unification, equivalence and optimization  of  logic
          programs and provides a comprehensive survey of results con-
          cerning logic programs and parallel complexity. An introduc-
          tory  survey offering background material and an overview of
          research topics, name and  subject  indexes,  and  extensive
          bibliographic references complete the work.

               Invaluable to  graduate  students  and  researchers  in
          deductive  databases  and  logic programming, FOUNDATIONS OF
          DEDUCTIVE DATABASES AND LOGIC PROGRAMMING will  also  be  of
          interest  to  those  working  in  automated theorem proving,
          artificial intelligence and expert systems.

                               TABLE OF CONTENTS


          INTRODUCTION

                  Minker, J., 1-16
                  Introduction to Foundations of Deductive Databases and Logic
                  Programming

          PART 1 - NEGATION AND STRATIFIED DATABASES 17

           Chapter 1  Shepherdson, J., 19-88
                  Negation in Logic Programming
           Chapter 2  Apt, K.R., Blair, H. and Walker, A., 89-148
                  Towards a Theory of Declarative Knowledge
           Chapter 3  Van Gelder, A., 149-176
                  Negation as Failure Using Tight Derivation for General Logic
                  Programs
           Chapter 4  Lifschitz, V., 177-192
                  On the Declarative Knowledge of Logic Programs with Negation
           Chapter 5  Przymusinski, T., 193-216
                  On the Semantics of Stratified Deductive Databases
           Chapter 6  Topor, R. and Sonenberg, E.A., 217-240
                  On Domain Independent Databases

          PART 2 -  FUNDAMENTAL  ISSUES  IN  DEDUCTIVE  DATABASES  AND
          IMPLEMENTATIONS 241

           Chapter 7  Chakravarthy, U.S., Grant,  J.  and  Minker,  J.,
           243-273
                  Foundations of Semantic  Query  Optimization  for  Deductive
                  Databases
           Chapter 8  Imielinski, T., 275-312
                  Intelligent Query Answering in Rule Based Systems
           Chapter 9  Sadri, F. and Kowalski, R.A., 313-362
                  An Application of General Purpose Theorem Proving  to  Data-
                  base Integrity
           Chapter 10  Manchanda, S. and Warren, D.S., 363-394
                  A Logic-Based Language for Database Updates
           Chapter 11  Henschen, L.J. and Park, H., 395-438
                  Compiling the GCWA in Indefinite Deductive Databases
           Chapter 12  Bancilhon F. and Ramakrishnan, R., 439-517
                  Performance Evaluation of Data Intensive Logic Programs
           Chapter 13  Thom, J., Naish, L. and Ramamohanaro,  K.,  519-
           543
                  A Superjoin Algorithm for Deductive Databases

          PART 3 - UNIFICATION AND LOGIC PROGRAMS, 545

           Chapter 14  Kanellakis, P., 547-585
                  Logic Programming and Parallel Complexity
           Chapter 15  Lassez, J-L.,  Maher,  M.J.  and  Marriott,  K.,
           587-625
                  Unification Revisited
           Chapter 16  Maher, M.J., 627-658
                  Equivalences of Logic Programs
           Chapter 17  Sagiv, Y., 659-698
                  Optimizing Datalog Programs
           Chapter 18  van Emden, M.H. and Szeredi, P., 699-709
                  Converting AND-Control to OR-Control Using Program Transfor-
                  mation

          AUTHOR ADDRESSES 711-714

          REFEREES 715-716

          AUTHOR INDEX 717-721

          SUBJECT INDEX 723-746
--
JACK MINKER
minker.umcp-cs@udel-relay

------------------------------

End of AIList Digest
********************

∂01-Feb-88  0030	LAWS@KL.SRI.COM 	AIList V6 #21 - Connectionism, XLISP, Ping Pong, Expert Systems
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Feb 88  00:30:42 PST
Date: Sun 31 Jan 1988 22:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #21 - Connectionism, XLISP, Ping Pong, Expert Systems
To: AIList@SRI.COM


AIList Digest             Monday, 1 Feb 1988       Volume 6 : Issue 21

Today's Topics:
  Queries - Self_Organizing Systems & TURBO PROLOG Problem &
    Ambiguous Speech & Radio Gear for Mobile Robots &
    Pattern Recognition Papers & 1981 BBN Technical Report,
  Neuromorphics - Dreyfus on Connectionism & Genetic Algorithms,
  AI Tools - XLISP 1.5,
  Application - Ping Pong,
  Software Engineering - Modular Expert Systems

----------------------------------------------------------------------

Date: Fri, 29 Jan 88 14:21:17 EST
From: Peter Beck (LCWSL) <pbeck@ARDEC.ARPA>
Subject: self organizing systems


COMPLEXITY OF SYSTEMS VS THEIR COMPONENTS

RECENTLY I ASKED SOMEBODY IF PEOPLE ORGANIZATIONS (EG, AN EMPLOYEE UNION) COULD
BE CONSIDERED A "SELF-ORGANIZING" SYSTEM THAT IS "SYMBIOTIC" WITH ITS HOST.  I
RECEIVED WHAT I THINK IS A RATHER DISTURBING AND TYPICAL ANSWER TO BE EXPECTED
FROM HUMANS:
 > It is hard to call any human organization a "self organizing system"
 > since its parts (humans) are so much more - % complex %- than the
 > system itself.

Is this a generally accepted proposition, ie, that complex constituent elements
can "NOT" form self organizing systems??

the future is puzzling,
but CUBING is forever !!

pete beck     <pbeck@ardec>

------------------------------

Date: 26 Jan 88 00:30:30 GMT
From: ihnp4!alberta!pat@ucbvax.Berkeley.EDU  (Patrick Fitzsimmons)
Subject: Do you consider yourself a TURBO PROLOG expert?


If so, then I need your help.

I am a new Turbo Prolog user having great difficulty trying
to implement a Prolog program in Turbo Prolog.  The program works
fine in other Prologs.  My problem is in the DOMAINS section: I
cannot get it to accept my declarations.  I thought that a more
experienced user might be able to offer some help.

Don't ask why I am using Turbo Prolog, I really don't want to but I
must.  To expand on my problem a bit, I currently have in the DOMAINS
section:

entry = symbol ;
        n(entry)
list  = entry*

The n(entry) is meant to represent NOT entry.

I have a negate predicate defined as:

negate(n(G), G) :-
      ne(n(_), G).
negate(G, n(G)) :-
      ne(n(_), G).
ne(X, Y) :-
      not(X = Y).

P.S. Sorry if I have posted this to an inappropriate newsgroup.
     Please send responses my e-mail and I will summarize in
         comp.lang.prolog if there is enough interest.


|------------------------------------------------------------------------|
| Patrick Fitzsimmons                | pat@alberta.UUCP                  |
| Computing Science Department       |                                   |
| University of Alberta              |                                   |
| Edmonton, Alberta                  |                                   |
| T6G 2H1                            |                                   |
|------------------------------------------------------------------------|

------------------------------

Date: 29 Jan 88 19:20:59 GMT
From: ulysses!sfmag!sfsup!saal@ucbvax.Berkeley.EDU  (S.Saal)
Subject: ambiguous speech

I don't know if speech recognition is still considered a part of
AI, but I thought these two groups would be the most appropriate
for this question.

One of the difficulties of speech recognition is to have the
computer understand whether the statement "makes sense."  This is
something (most) humans do automatically.  When the sentence
doesn't make sense, we slow down our assimilation rate or, if
necessary, ask the speaker to repeat.  Computers can't/don't do
that.  The classic example is:

"It's hard to wreck a nice beach." vs
"Its hard to recognize speech."

(If you don't see the difficulty, say each one out loud.)

What I am looking for is more examples of these sentence pairs.

Here is another example - one that a human listener would be able
to discern easily, though I have my doubts about the computer.

"She bought her son dresses for $5."  vs
"She bought her sundresses for $5."

Please send all sentence pairs to me directly (via email) instead
of posting them.

rec.humor folks, no need to re-start the discussion on
misunderstood song lyrics.

Thanks.
--
Sam Saal         ..!attunix!saal
It's a retelling of the campaigns of Julius Caesar, with the addition
of aircraft. ...  I call it "Veni, Vidi, Vici Through Air Power."
        from "God Save the Mark"

------------------------------

Date: 29 Jan 88 06:07:02 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Radio gear for mobile robots


      I'm looking for a good way to establish two-way digital
radio communication between a mobile vehicle and a base station.
1200 or 2400 baud is sufficient.  Small size (cigarette-pack or
smaller) is essential.  Some scheme involving modified model R/C
gear would be ideal.  Before getting into such modifications
ourselves, we'd like to find out if anyone has an off-the-shelf
solution.

                                        John Nagle


      (Sadly, comp.ai is the most relevant newsgroup available.)

------------------------------

Date: 28 Jan 88 03:43:57 GMT
From: CENTRO.SOAR.CS.CMU.EDU!acha@PT.CS.CMU.EDU  (Anurag Acharya)
Subject: pattern recognition papers sought

Does anyone in netland have either of the following papers?

   i) C. W. Therrien et al , " Application of Feature extraction to Radar
      Signature Classification", Proceedings of the Second International
      Joint Conference on Pattern Recognition, 1974

  ii) F. LeChevalier, G. Bohillot, and C. Fugier-Garrel, "Radar Target and
      Aspect Angle Identification", Proceedings of the IEEE International
      Association of Pattern Recognition Conference, 1978

The second reference might be slightly off the mark in the name of the
Conference.


Thanx in advance,

-- anurag

Anurag Acharya
Computer Science Department
Carnegie Mellon University, Pittsburgh, PA 15213-3890
USA
--
Anurag Acharya                  arpanet: acharya@cs.cmu.edu
                                bitnet : acharya@cs.cmu.edu@cmuccvma

"Programming is debugging the null program until it does what you want"

------------------------------

Date: 28 Jan 88 23:40:02 GMT
From: CENTRO.SOAR.CS.CMU.EDU!acha@pt.cs.cmu.edu  (Anurag Acharya)
Subject: Another pattern recognition request


I am looking for a copy of the following Tech. Report. Could someone
give me a pointer ?

        "Implementation and Testing of Ship Classifier Algorithm -
         Task II", Norden Systems Inc., Final Technical Report
         1288 R0014, Jan 4, 1979

Thanx in advance,


-- anurag

--
Anurag Acharya                  arpanet: acharya@cs.cmu.edu
                                bitnet : acharya@cs.cmu.edu@cmuccvma

"Programming is debugging the null program until it does what you want"

------------------------------

Date: Fri, 29 Jan 88 09:55:04 PST
From: Marie Bienkowski <bienk@spam.istc.sri.com>
Subject: 1981 BBN Technical Report

I am trying to find a copy of:
C. Steinberg and A. Stevens, "A Typology of Explanations and
its Application to Intelligent Computer-Aided Instruction."
According to BBN, it is no longer available.  If anyone has
it, please e-mail to bienk@istc.sri.com as I'd like to try
to get a copy.

Thanks,
Marie

------------------------------

Date: 31 Jan 88 06:09:06 GMT
From: g451252772ea@deneb.ucdavis.edu  (0040;0000005410;0;327;142;)
Subject: Dreyfus on connectionism


Would anyone who attended S. Dreyfus' public talk on connectionism at
U.C. Berkeley (Friday, 1/29) give a summary?

S. and H. Dreyfus (brothers) are Berkeley's "loyal opposition" to AI work
in the 'GOFAI' tradition (Haugeland's Good Old-Fashioned AI, based on
symbolic/logical manipulations).  Their assessment of the neural net
pre-symbolic paradigm should have been interesting.

thanks!


Ron Goldthwaite / UC Davis, Psychology and Animal Behavior
'Economics is a branch of ethics, pretending to be a science;
 ethology is a science, pretending relevance to ethics.'

------------------------------

Date: Sun 31 Jan 88 21:57:28-PST
From: Ken Laws <LAWS@KL.SRI.COM>
Subject: Dreyfus on Connectionism

I didn't attend the lecture, but have read the Dreyfuses' paper in
the new Daedalus issue previously mentioned in AIList.  The paper
says little about connectionism specifically (except that it may now
be getting a deserved chance to fail just as symbolic AI has done),
but the authors are favorable to holistic approaches in general and
seem less negative about an AI field that includes this emphasis.
Their paper and one by Papert were mainly concerned with the
history of AI and how the symbolic paradigm gained supremacy.

Almost every paper in the Daedalus issue comments on (or indeed
focusses on) connectionism and neural models, mainly at a
philosophical level of discussion.  Sherry Turkle, for instance,
discusses connectionism and psychoanalysis as resonant fields
that need to initiate a dialog about object-oriented mental models.
It's a very interesting collection of essays.

                                        -- Ken

------------------------------

Date: 26 Jan 88 14:42:20 GMT
From: dg1v+@andrew.cmu.edu  (David Greene)
Subject: Re: Cognitive System using Genetic Algorithms.


>P.S.: Does any one know the email addresses of J. Holland( U of Michigan),
>S.F. Smith ( Carnegie-Mellon, I guess) or anyone who've been related in
>genetic algorithms ?

For Steve Smith try:

               stephen.smith@cs.cmu.edu
            or sfs@isl1.cmu.edu

-David
dg1v@andrew.cmu.edu

------------------------------

Date: 25 Jan 88 06:39:33 GMT
From: g451252772ea@deneb.ucdavis.edu  (0040;0000006866;0;327;142;)
Subject: Re: Cognitive System using Genetic Algorithms.

About a month or so ago I complained of the engineering focus of
dissertations done by Holland's students.  I got a very nice reply from a former
student, Lashon Booker, who cited a number of more abstract projects
(including his own).  All these theses are available through U. Microfilms
(which happens to be based in Michigan), at about $25 each.

Lashon is still quite active; he's at booker@nrl-aic.ARPA.  There is a BBS
for genetic algorithms; to subscribe, send mail to GA-List-Request@nrl-aic.ARPA.
(I did some time ago but have no reply yet... hmmm)

And a standard set of C subroutines for classifier systems is available for
media cost from Rick Riolo at U.Mich.  Contact him at
Rick_Riolo@ub.cc.umich.edu for details - I got mine on a 1.2 meg AT disc (just
fits).  Other formats available (Sun, Mac, ... ).  This is ver 0.98, so it's
not totally stable yet.  I'm slowly getting acquainted with it all...

Oh yes: the books INDUCTION, 1986, by Holland et al; GENETIC ALGORITHMS AND
SIMULATED ANNEALING, 1987, L. Davis; and GENETIC ALGORITHMS AND THEIR
APPLICATION, Proceed. 2nd Intl. Conf. Gen. Alg. (L. Erlbaum Assoc, Pub),
are all of interest.

I, for one, would be curious what else you learn, although my interests are
more in the theoretical arena (population genetics, et al).


Ron Goldthwaite / UC Davis, Psychology and Animal Behavior
'Economics is a branch of ethics, pretending to be a science;
 ethology is a science, pretending relevance to ethics.'

------------------------------

Date: Fri, 29 Jan 88 16:07:37 EST
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: XLISP

You can get a copy of 1.6 (not 1.5, as I first wrote) from Public Brand
Software (call 1-800-IBM-DISK) for about $5.  You can get a copy of 2.0 by
writing to the author, Dave Betz, and sending him a formatted diskette and a
stamped, self-addressed diskette mailer.  His address is 127 Taylor Road,
Peterborough, NH 03458; his number is 603/924-6936.  He's a real nice guy.
Release 2.0 has the 1.7 documentation plus update patches that have not been
integrated into the document; if you can't live with that, ask for 1.7 (or
maybe it's 1.8), which is what he will send you unless you specify that you
want 2.0 with the doc as-is.  Release 2.0 is ready except for the
documentation, and I believe it is supposed to be full Common Lisp, plus the
object-oriented extensions that are, after all, the main motivation for XLISP.

        Cheers,

        Bruce
Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 31 Jan 88 10:01:14 CST (Sun)
From: texsun!killer!aimania@Sun.COM (Walter Rothe)
Subject: Re: Query - XLISP 1.5

You can find the latest version of XLISP on the BYTE Information Exchange.
The phone number is 1-800-227-2983.

---
Walter Rothe at the UNIX(Tm) Connection, Dallas, Tx
UUCP: {rutgers}!smu.killer.aimania

------------------------------

Date: Fri 29 Jan 88 17:52:14-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Ping Pong

Oscar Firschein has pointed out to me a paper on the AT&T ping-pong
robot:

  R.L. Andersson, Investigating Fast, Intelligent Systems with a
  Ping-Pong Playing Robot, Proc. 4th Int. Symp. of Robotics
  Research, Santa Cruz, California, pp. 1-8.

------------------------------

Date: 31 Jan 88 14:40:41 GMT
From: mcvax!chorus.fr!will@uunet.UU.NET (Will Neuhauser)
Subject: Re: Software Development and Expert Systems


In article <8801270004.AA12634@nrl-rjkj.arpa>, jacob@NRL-CSS.ARPA
(Rob Jacob) writes:
> Saw your message about software engineering techniques for expert
> systems on the AIList.  [...]  Our basic solution is to divide
> the set of rules up into pieces and limit the connectivity of the
> pieces.   [...]

This would appear to be the basic definition of "modularity", and
the usual hints should apply given a little thought.


To achieve greater modularity in a prototype expert system written in C++,
I created classes for "expert systems" (inference engines),
for rule-bases, and for fact-bases.

Inference Engines.
-----------------
Separate engines allow the user to select an appropriate inference
engine for the tasks at hand.  It would have been nice to add a
language construct for defaulting engines; as it was, you had to
code these.  The expert systems could be organized hierarchically.
Each system had a pointer to its parent(s) and vice versa.  In truth,
I only ever used one engine because I was really more interested in
modularizing the rule-base, but the potential was there, right?

Fact-bases.
----------
Separate fact-bases allowed for the use of the same rule-base in
different situations: when a rule-base appeared more than once,
it was given a new fact-base, and the rule-base was re-used.
Hypothetically, the separate fact-bases could have been useful
in "trial and error" situations: one could create new fact-base
instances (objects) and then throw them away when they didn't
pan out.  I never had a chance to try it out.

Rule-bases.
----------
Separate rule-bases were the important factor in the current
discussion, and my main interest.  I used a very simple default
that could obviously be extended to provide finer control.

The separate rule-bases were very useful for modularizing the
total rule-base.  Each "coherent set of rules" was located in a different
file, and when read in, was read into a separate rule-base instance
(it was a prototype so don't give me too much grief!).  The
default rule for connection was that terminal goals, those which
never appeared on the left-hand side of a rule, were automatically
exported to the calling expert system(s) (via the parent-pointers).
This was sort of nice in that when a sub-expert-system had new
goals added, they were automatically made a part of the caller's
name space.  (Of course there could be conflicts, but in the prototype
I just lived with the problem and the new meanings suddenly given to
existing names, but, again, I was just trying out some modularization
concepts in a prototype.)

Aside from the obvious advantages of modularization to reduce the
size of the name space and thus the difficulty of understanding
a single giant  set of rules (actually, it seemed that 100 rules
was hard for one person to remember for long), I had another
reason for wanting modularization: I wanted to clearly separate the
experts, facts, and rules into different computational tasks (coherent
systems of sub rules) so that one could divide the rule-base up onto
separate processors in a multi-processor computer.  (Again, never tried.)
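
For whatever it adds, here is a rough sketch of the structure just described,
in Python rather than the original C++ (all class, method, and rule names
below are invented for this illustration and are not from the prototype):
separate rule-base and fact-base objects, expert systems with parent
pointers, and terminal goals exported into the caller's name space.

class RuleBase:
    """A coherent set of rules, each written as (premises, conclusion)."""
    def __init__(self, rules):
        self.rules = rules
    def terminal_goals(self):
        # Taken here to be conclusions that never appear as a premise of
        # another rule (one reading of "never on the left-hand side").
        premises = {p for prems, _ in self.rules for p in prems}
        return {concl for _, concl in self.rules if concl not in premises}

class FactBase:
    """A separate fact store, so one rule-base can be reused in several
       situations, each with its own facts."""
    def __init__(self):
        self.facts = set()

class ExpertSystem:
    def __init__(self, name, rulebase, parent=None):
        self.name = name
        self.rulebase = rulebase
        self.factbase = FactBase()
        self.parent = parent
        self.exported = {}        # goal name -> sub-system that provides it
        if parent is not None:
            parent.adopt(self)
    def adopt(self, child):
        # Terminal goals of a sub-system automatically join the caller's
        # name space (the default rule for connection described above).
        for goal in child.rulebase.terminal_goals():
            self.exported[goal] = child
    def run(self):
        # A trivial forward chainer, just to make the sketch executable.
        changed = True
        while changed:
            changed = False
            for prems, concl in self.rulebase.rules:
                facts = self.factbase.facts
                if set(prems) <= facts and concl not in facts:
                    facts.add(concl)
                    changed = True
        return self.factbase.facts

diagnosis = RuleBase([(("fever", "cough"), "flu"), (("flu",), "stay-home")])
top = ExpertSystem("top", RuleBase([]))
sub = ExpertSystem("diagnosis", diagnosis, parent=top)
print(sorted(top.exported))     # ['stay-home'] is now in top's name space
sub.factbase.facts.update({"fever", "cough"})
print(sorted(sub.run()))        # ['cough', 'fever', 'flu', 'stay-home']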

------------------------------

End of AIList Digest
********************

∂01-Feb-88  0257	LAWS@KL.SRI.COM 	AIList V6 #22 - Self-Consciousness, Poplog 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Feb 88  02:57:14 PST
Date: Sun 31 Jan 1988 22:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #22 - Self-Consciousness, Poplog
To: AIList@SRI.COM


AIList Digest             Monday, 1 Feb 1988       Volume 6 : Issue 22

Today's Topics:
  Theory - Self-Conscious Code and the Chinese Room,
  AI Tools - Compilation to Assembler in Poplog

----------------------------------------------------------------------

Date: Sun, 31 Jan 88 20:02:48 CST
From: ted reichardt <rei3@sphinx.uchicago.edu>
Reply-to: rei3@sphinx.uchicago.edu.UUCP (ted reichardt)
Subject: Self-conscious code and the Chinese room

From: Jorn Barger, using this account.
Please send any mail replies c/o rei3@sphinx.uchicago.edu



I'm usually turned off by any kind of
philosophical speculation,
so I've been ignoring the Chinese Room melodrama
from day one.
But I came across a precis of it the other day
and it struck me
that a programming trick I've been working out
might offer a partial solution to the paradox.

Searle's poser is this:
when you ask a question of a computer program,
even if it gives a reasonable answer
can it really be said to exhibit "intelligence,"
or does it only _simulate_ intelligent behavior?
Searle argues that the current technology
for question-answering software
assumes a database of rules
that are applied by a generalized rule-applying algorithm.
If one imagines a _human_ operator
(female, if we want to be non-sexist)
in place of that algorithm,
she could still apply the rules
and answer questions
even though they be posed
in a language she doesn't _understand_--
say, Chinese.
So, Searle says,
the ability to apply rules
falls critically short of our natural sense
of the word "intelligence."

Searle's paradigm for the program
is drawn from the work of Roger Schank
on story-understanding and scripts.
Each domain of knowledge
about which questions can be asked
must be spelled out as an explicit script,
and the rule-applying mechanism
should deduce from clues (such as the vocabulary used)
which domain a question refers to.
Once it has identified the domain,
it can extract an answer from the rules of that domain.

Again, these rules can be applied
by the rule-applying algorithm
to the symbols in the question
without reference to the _meaning_ of the symbols,
and so, for Searle, intelligence is not present.

But suppose now that one domain we can ask about
is the domain of "question-answering behavior in machines"?
So among the scripts the program may access
must be a question-answering script.
We might ask the program,
"If a question includes mathematical symbols,
what domains need to be considered?"
The question-answering script will include rules
like "If MATH-SYMBOL then try DOMAIN (arithmetic)"

But the sum of all these
rules of question-answering
will be logically identical to
the question-answering algorithm itself.
In Lisp, the script (data) and the program (code)
could even be exactly the same set of Lisp expressions.
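
(A toy version of this idea, in Python rather than Lisp, with the rule format
and every name below made up for the sketch: the same table is both the data
the system can report on and the thing its answering routine consults, so
editing the table changes the behavior.)

# A "script" is just a table mapping cue -> domain to try.
QA_SCRIPT = {
    "MATH-SYMBOL": "arithmetic",
    "RESTAURANT-WORD": "restaurant",
}

def answer(question):
    # The rule-applying "algorithm" does nothing but consult the table...
    for cue, domain in QA_SCRIPT.items():
        if cue in question:
            return "try DOMAIN (%s)" % domain
    return "no domain found"

def answer_about_answering(cue):
    # ...and the table itself can be inspected, so the system can answer
    # questions about its own question-answering behavior...
    return QA_SCRIPT.get(cue, "no rule")

def modify_script(cue, domain):
    # ...and modifying the "script" really does change what answer() does.
    QA_SCRIPT[cue] = domain

print(answer("what is 2 + 2 MATH-SYMBOL"))      # try DOMAIN (arithmetic)
print(answer_about_answering("MATH-SYMBOL"))    # arithmetic
modify_script("MATH-SYMBOL", "algebra")
print(answer("what is 2 + 2 MATH-SYMBOL"))      # try DOMAIN (algebra)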

Now, Searle will say, even so,
the program is still answering these questions
without any knowledge of the meanings of the symbols used.
A human operator could similarly
answer questions about answering questions
without knowing what is the topic.
In this case, for the human operator
the script she examines will be pure data,
no executing code.
Her own internal algorithms
as they execute
will not be open to such mechanical inspection.

Yet if we ask the program to modify one of its scripts,
as we have every right to do,
and the script we ask it to modify is one that also executes,
_its_ behavior may change
while the human operator's never will.

And in a sense we might see evidence here
that the program _does_ understand Chinese,
for if we ask a human to change her behavior
and she subsequently does
we would have little doubt that understanding took place.
To explain away such a change as blind rule-following
we would have to picture her as
changing her own brain structures
with microtomes and fiber optics.
(But the cybernetic equivalent of this ought to be
fiber optics and soldering irons...)

Self-modifying code
has long been a skeleton key in the programmer's toolbox,
and a skeleton in his closet.
It alters itself blindly, dangerously,
inattentive to context and consequences.
But if we strengthen our self-modifying code
with _self-conscious_ code,
as Lisp and Prolog easily can,
we get something very _agentlike_.

Admittedly, self-consciousness about question-answering behavior
is pretty much of a triviality.
But extend the self-conscious domain
to include problem-solving behavior,
goal-seeking behavior,
planning behavior,
and you have the kernel of something more profound.

Let natural selection build on such a kernel
for a few million, or hundreds of millions of years,
and you might end up with something pretty intelligent.

The self-reference of Lisp and Prolog
takes place on the surface of a high-level language.
Self-referent _machine code_ would be more interesting,
but I wonder if the real quantum leap
might not arrive when we figure out how to program
self-conscious _microcode_!

------------------------------

Date: Sat, 30 Jan 88 23:18:17 GMT
From: Aaron Sloman <aarons%cvaxa.sussex.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Compilation to Assembler in Poplog

This is a  response to a  discussion in comp.compilers,  but as it  is
potentially of wider interest I'm offering  it to all of you for  your
bulletin boards. There  does not  seem to be  anything comparable  for
Lisp, so I suppose I just have to post it direct to comp.lang.lisp for
people interested in Lisp implementations? Or should I assume any such
people will read comp.compilers?

I hope  it  is of  some  interest, and  I  apologise for  its  length.
Although Sussex University has a commercial interest in Poplog I  have
tried to avoid raising any commercial issues.
                         ---------------------

               COMPILING TO ASSEMBLY LANGUAGE IN POPLOG

There have  been  discussions  on  the network  about  the  merits  of
compiling to  assembly  language. Readers  may  be interested  in  the
methods used  for implementing  and porting  Poplog, a  multi-language
software  development  system  containing  incremental  compilers  for
Common Lisp, Prolog, ML and POP-11,  a Lisp-like language with a  more
readable Pascal-like syntax. Before I explain how assembly language is
used as output from the compiler during porting and system building, I
need to explain how the running system works. The mechanisms described
below  were  designed  and  implemented  by  John  Gibson,  at  Sussex
University.

All the languages in Poplog compile  to a common virtual machine,  the
Poplog VM which  is then  compiled to  native machine  code. First  an
over-simplified description:

The Poplog system allows different  languages to share a common  store
manager, and common data-types, so that a program in one language  can
call another and share data-structures.  Like most AI environments  it
also allows  incremental  compilation: individual  procedures  can  be
compiled and re-compiled and  are immediately automatically linked  in
to the rest of the system, old versions being garbage collected if  no
longer pointed to. Moreover, commands to run procedures or interrogate
data-structures can be typed in interactively, using exactly the  same
high level language  as the  programs are written  in. The  difference
between this  and  most AI  systems  is  that ALL  the  languages  are
compiled in the same way. E.g.  Prolog is not interpreted by a  POP-11
or Lisp program: they all compile (incrementally) to machine code.

The languages are all implemented using a set of tools for adding  new
incremental compilers. These tools include procedures for breaking  up
a text stream into items, and tools for planting VM instructions  when
procedures are compiled.  They are  used by the  Poplog developers  to
implement the four Poplog languages  but are also available for  users
to implement new  languages suited to  particular applications.  (E.g.
one user claims he  implemented a complete Scheme  in Poplog in  about
three weeks,  in  his spare  time,  getting a  portable  compiler  and
development  environment  for  free  once  he  had  built  the  Scheme
front-end compiler in Poplog.)

All this makes it  possible to build a  range of portable  incremental
compilers for different  sorts of programming  languages. This is  how
POP-11, PROLOG, COMMON LISP and  ML are implemented. They all  compile
to  a  common  internal  representation,  and  share  machine-specific
run-time code generators.  Thus several different  machine-independent
"front ends"  for different  languages  can share  a  machine-specific
"back end" which compiles to native machine code, which runs far  more
quickly than if the new language had been interpreted.

The actual story  is more  complicated: there are  two Poplog  virtual
machines, a high level and a low level one, both of which are language
independent and machine  independent. The high  level VM has  powerful
instructions, which  makes  it convenient  as  a target  language  for
compilers for high level  languages. This includes special  facilities
to  support  Prolog  operations,   dynamic  and  lexical  scoping   of
variables, procedure  definitions,  procedure  calls,  suspending  and
resuming processes, and so on.  Because these are quite  sophisticated
operations, the mapping from the Poplog  VM to native machine code  is
still fairly complex.

So  there   is  a   machine  independent   and  language   independent
intermediate compiler which compiles from the high level VM to a
low level VM, doing a considerable amount of optimisation on the  way.
A machine-specific back-end then translates the low-level VM to native
machine code, except when  porting or re-building  the system. In  the
latter case the final stage is translation to assembly language. (See
diagram below.)

The bulk of the core Poplog  system is written in an extended  dialect
of POP-11, with provision for C-like addressing modes, for efficiency.
We call it  SYSPOP. The system  sources, written in  SYSPOP, are  also
compiled to  the high-level  VM, and  then to  the low  level VM.  But
instead of  then  being translated  to  machine code,  the  low  level
instructions are automatically translated  to assembly language  files
for the  target machine.  This is  much easier  than producing  object
files, because there is a fairly straight-forward mapping from the low
level  VM  to  assembly  language,  and  the  programs  that  do   the
translation don't have  to worry  about formats for  object files:  we
leave that to the assembler and linker supplied by the manufacturer.

In fact, the system sources need facilities not available to users, so
the two  intermediate  virtual  machines  are  slightly  enhanced  for
SYSPOP. The following diagram summarises the situation.

                {POP-11, COMMON LISP, PROLOG, ML, SYSPOP}
                                    |
                               Compile to
                                    |
                                    V
                             [High level VM]
                          (extended for SYSPOP)
                                    |
                          Optimise & compile to
                                    |
                                    V
                             [Low level VM]
                          (modified for SYSPOP)
                                    |
                         Compile (translate) to
                                    |
                                    V
                      [Native machine instructions]
                       [or assembler - for SYSPOP]
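
Reduced to a toy (Python, purely illustrative; nothing below corresponds to
actual Poplog code, and all instruction names are invented), the point of the
diagram is that several front ends share one lowering pass and one
machine-specific back end, which can emit either machine code or assembly
text:

def pop11_front_end(source):
    # One language front end: parse an assignment and emit high-level
    # VM instructions.
    lhs, rhs = source.split("=")
    return [("PUSHQ", rhs.strip()), ("POP", lhs.strip())]

def prolog_front_end(source):
    # A second front end targeting the same virtual machine.
    return [("PUSHQ", source.strip()), ("CALL", "assert")]

def lower(high_vm):
    # Language- and machine-independent stage: high-level VM -> low-level VM.
    low = []
    for op, arg in high_vm:
        if op == "PUSHQ":
            low.append(("MOVEI", arg, "stack"))
        elif op == "POP":
            low.append(("MOVE", "stack", arg))
        else:
            low.append((op, arg, ""))
    return low

def back_end(low_vm, emit_assembly=False):
    # Machine-specific stage: low-level VM -> machine code for incremental
    # compilation, or assembly text when rebuilding the whole system.
    if emit_assembly:
        return "\n".join("        %-6s %s %s" % ins for ins in low_vm)
    return [hash(ins) & 0xFFFF for ins in low_vm]   # stand-in for real code

print(back_end(lower(pop11_front_end("x = 99")), emit_assembly=True))
print(back_end(lower(prolog_front_end("likes(mary, wine)"))))  # numbers vary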

So for ordinary  users compiling or  re-compiling their procedures  in
the system, the machine code generator is used and compilation is very
fast, with no linking required. For rebuilding the whole system we  go
via assembly language for maximum flexibility and it is indeed a  slow
process. But it does not need to be done very often, and not (yet)  by
ordinary users. Later  (1989) they  will have  the option  to use  the
system building route in order to configure the version of Poplog they
want. So we sit on  both sides of the  argument about speed raised  in
comp.compilers.

All the compilers and translators are implemented in Poplog (mostly in
POP-11). Only the last stage is machine specific. The low level VM  is
at a level that makes it possible on the VAX, for example, to generate
approximately one machine instruction per low level VM instruction. So
writing the code  generator for  something like  a VAX  or M68020  was
relatively easy.  For a RISC machine like the Clipper the task is a little
more complicated.

Porting to a new computer requires  the run-time "back end", i.e.  the
low level  VM compiler,  to be  changed and  also the  system-building
tools which output assembly language programs for the target  machine.
There are  also a  few  hand-coded assembly  files  which have  to  be
re-written for each machine. Thereafter  all the high level  languages
have   incremental    compilers   for    the   new    machine.    (The
machine-independent  system  building  tools  perform  rather  complex
tasks, such as  creating a  dictionary of procedure  names and  system
variables that have to be accessible to users at run time. So  besides
translating system source files, the tools create additional assembler
files and  also check  for consistency  between the  different  system
source files.)

I  believe  most  other  interactive   systems  provide  at  most   an
incremental compiler for one language,  and any other language has  to
be interpreted. If  everything is  interpreted, then  porting is  much
easier, but  execution is  much slower.  The advantage  of the  Poplog
approach is that  it is  not necessary to  port different  incremental
compilers to each new machine.

This makes it relatively easy  for the language designer to  implement
complex languages, since the Poplog  VM provides a varied,  extendable
set of  data-types and  operations thereon,  including facilities  for
logic  programming,  list,  record   and  array  processing,   'number
crunching',  sophisticated  control  structures  (e.g.   co-routines),
'active variables' and 'exit  actions', that is instructions  executed
whenever a procedure exits, whether normally or abnormally. Indefinite
precision arithmetic, ratios and complex numbers are accessible to all
the languages  that need  them. Both  dynamic and  lexical scoping  of
variables are provided. A tree-structured "section" mechanism  (partly
like packages)  gives further  support  for modular  design.  External
modules (e.g. programs in C or  Fortran) can be dynamically linked  in
and unlinked. A set of  facilities for accessing the operating  system
is also  provided. Poplog  allows functions  to be  treated as  "first
class" objects, and this is used to great advantage in POP-11 and ML.

The VM facilities are relatively easy to port to a range of  computers
and operating systems because the core system is mostly implemented in
SYSPOP, and is largely machine independent. Only the machine-dependent
portions mentioned above (e.g. run-time code generator, and translator
from low level  VM to  assembler), plus  a small  number of  assembler
files need be changed for a  new machine (unless the operating  system
is also new). Since the translators are all written in a high level AI
language, altering them is relatively easy.

Porting requires compiling all the SYSPOP system sources, to  generate
the corresponding new assembler files, then moving them and the
hand-made assembler files to the new machine, where they are assembled
then linked. The  same process  is used to  rebuild the  system on  an
existing machine when new features are added deep in the system. Much
of the system is in source libraries compiled as needed by users, and
modifying those components does not require re-building.

Using this mechanism an experienced programmer with no prior knowledge
of Poplog or the target  processor was able to  port Poplog to a  RISC
machine in about  7 months.  But for  the usual  crop of  bugs in  the
operating system, assembler, and other software of the new machine the
actual porting time would have been shorter. In general, extra time is
required for user  testing, producing  system specific  documentation,
tidying up loose ends etc.

Thus 7  to  12  months  work  ports  incremental  compilers  for  four
sophisticated languages, a screen editor, and a host of utilities. Any
other languages implemented by users using the compiler-building tools
should also run immediately. So  in principle this mechanism  allows a
fixed  amount  of  work  to  port  an  indefinitely  large  number  of
incremental  compilers.  Additional  work  will  be  required  if  the
operating system  is different  from  Unix or  VMS,  or if  a  machine
specific window  manager  has  to  be provided.  This  should  not  be
necessary for workstations supporting X-windows.

The use of assembler output considerably simplifies the porting  task,
and also aids  testing and  debugging, since  the output  is far  more
intelligible to the programmer than if object files were generated.

Comments welcome.

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
    JANET     aarons@cvaxa.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac

As a last resort
    UUCP:     ...mcvax!ukc!cvaxa!aarons
            or aarons@cvaxa.uucp

Phone:  University +(44)-(0)273-678294  (Direct line. Diverts to secretary)

------------------------------

End of AIList Digest
********************

∂01-Feb-88  0527	LAWS@KL.SRI.COM 	AIList V6 #23 - Newton, Nanotechnology, Philosophy   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Feb 88  05:27:33 PST
Date: Sun 31 Jan 1988 22:26-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #23 - Newton, Nanotechnology, Philosophy
To: AIList@SRI.COM


AIList Digest             Monday, 1 Feb 1988       Volume 6 : Issue 23

Today's Topics:
  History - Newton and Fame,
  Nanotechnology - Metadiscussion,
  Philosophy - Worth & AI Encroachment

----------------------------------------------------------------------

Date: Fri, 29 Jan 88 12:23:26 PDT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: another wonderful comment


I read the following in the ailist today:

> The point is of course, that while Newton originated Newtonian physics,
> and thus it is right to expect all references to this field to lead
> back to him, at the time he did his research he was a nobody just
> like Eric Drexler.

A cursory survey of history would show this is a much mistaken view
of Newton.  I think he was Lucasian Professor of Mathematics when he
did the physics.  His reputation was well established.

The basic point being made is that every important researcher has a
first work.  This is true.  But it doesn't help to mix it up in
false history.

peter ladkin
ladkin@kestrel.arpa

------------------------------

Date: 29 Jan 88 11:44 PST
From: hayes.pa@Xerox.COM
Subject: Re: Newton and fame

Look, it doesn't really MATTER, but since you raise the subject you ought to
get it right.  At the time Newton did his research he wasn't a `nobody'.  He
was a famous and respected professor of mathematics, and had been
internationally noted since his teens, having been a remarkable prodigy (his
professor of mathematics resigned to give his chair to Newton because he
considered him so superior in ability).  At the time his work was published
(in the form of `Principia') he was a senior member of the Royal Society and
had been famous for many years.  Far from having a hard time propagating his
ideas, Newton was careless about writing them down and only did so when urged
and nagged by his friends, such as Sir Christopher Wren and a couple of other
nobodies, and when he thought there was a danger that Descartes might get some
of the credit.  His work was an immediate sellout all over the Western
civilised world, went through many editions, making a fortune for his
publisher, and instantly became the accepted perspective on understanding
cosmology and mechanics.  Sermons were preached about his ideas in St. Paul's
Cathedral within weeks of their appearing.  Speeches were made at meetings of
the Royal Society about what an incredible breakthrough this all was, the King
gave Newton a medal, and so on.  The only comparable fuss in our time is
probably that made over Einstein when the eclipse observations of the bending
of starlight confirmed general relativity.  I am sure there are examples which
make Levy's point about new ideas having a hard time (how about haloid-process
copying?), but Newton isn't one of them.

------------------------------

Date: Fri, 29 Jan 88 12:21:15 PST
From: larry@VLSI.JPL.NASA.GOV
Subject: Feynman & Nanotechnology

--
Actually, Drexler credits a 1959 lecture (also published in a journal)
by Richard Feynman, Nobel prize-winner in physics, as one of the first
to look at the idea of molecular engineering with some rigor.  The idea
itself has been around a good deal longer than that.  For instance,
Robert Heinlein's 1945-1955 Future History series included "molar
mechanics" as an important field of science and engineering.

                  Larry @ jpl-vlsi

------------------------------

Date: 30 Jan 88 06:42:42 GMT
From: jbn@glacier.STANFORD.EDU (John B. Nagle)
Reply-to: jbn@glacier.UUCP (John B. Nagle)
Subject: Re: disregard and abuse of Nano-engineering (V6#17)


      Nanotechnology isn't a silly idea, but it's very difficult to
see how to get started on working at the molecular level.  Drexler's
popular books don't offer too much insight on what to do first, but
they give some idea of what can be accomplished, and what to worry
about, if it starts to work.  Neither physics nor biochemistry seem
to forbid much of what Drexler proposes.

      Nanotechnology is more of an engineering problem than is AI.
We really have no idea what a general-purpose artificial intelligence
would look like, what its components would be, or even roughly what
its complexity would be.   We cannot today draw a block diagram of
an artificial intelligence with any confidence that a system built
to that diagram would work.

      Nanotechnology is different.  We could begin to design
nanomachines today, and Drexler has indeed roughed out some designs.
But our manufacturing technology is not equal to the task of building them.

      This is classically the sort of problem that will yield to money
and determination.  Like the original Manhattan Project and the Apollo
program, much research and massive engineering efforts will be necessary.
To justify such an effort, it will be necessary to demonstrate that
something can be accomplished with this technology.

      I therefore put the question "what nanomachine can we build first?"
What can we build with current bioengineering technology?  Can some
simple mechanical component be fabricated?  It need not be useful.
It need not be very complex.  But if one part can be fabricated,
a beginning will have been made.  And other work will follow.  Rapidly.

                                        John Nagle

------------------------------

Date: Sat, 30 Jan 88 08:18:40 CST
From: smu!lewis@uunet.UU.NET (Eve Lewis)
Subject: Intelligent Nanocomputers

Re: Godden's review of >Engines of Creation< by K. Eric Drexler

> Drexler makes the fascinating claim (no doubt many will vehemently
> disagree) that to create a true artificial intelligence it is  not
> necessary to first understand intelligence. All one has to  do  is
> simulate  the brain, which  can  be  done given nanotechnology. He
> suggests that a complete hardware simulation of the brain  can  be
>  done, synapse-for-synapse and dendrite-for-dendrite, in the space
> of one cubic centimeter (this figure is backed up in  the  notes).
>  Such  a  machine  could then just be allowed to run and should be
> able to accomplish a man-year of work in ten seconds. The unstated
> assumption is that a computer that  is  isomorphic  to  the  human
>  brain will ipso facto be intelligent, and presumably will be able
> to construct its own 'mental' models once power  is  supplied.  No
> need to supply it with software.


Perhaps  we  can  suggest an approach to dovetail with, and enhance,
Drexler's. One supposes that the structure of the brain is  alright,
as  far  as  it  goes.  It's  even  been  referred  to  as "Nature's
Masterpiece." Nonetheless, it's important  to  understand  that  the
human  brain  per  se, is a product of the structural genes, to wit:
the respected exons. I say the following:

1)  Get an advance copy of that gene map of the entire human genome,
a  project  now  underway. (Be apprised that the human genome is the
nanocomputer to end all nanocomputers, and anything else that  comes
along can only be second-best.)

2) Discard the real junk, the exon sequences.

3) Retain the alleged "junk," the intron sequences. Design a program
to look for matches between two stores of complex data. Feed all the
intron "junk" (one store of complex data) into the program.  Accumu-
late  from  all  cultures,  but  particularly  from Western "civ," a
plethora of human thought fossils, i.e., Edward O. Wilson's  cultur-
gens (the second store of complex data), and feed the stuff into the
program.  Set  the  program going, to look for "matches," and voila!
There you have it. A functional facsimile  of  the  code  underlying
human thought processes.

> The unstated assumption is that a computer that is  isomorphic  to
>  the  human  brain  will ipso facto be intelligent, and presumably
>  will  be  able to construct its own 'mental' models once power is
> supplied. No need to supply it with software.

4)  The structure of the human brain is an excellent starting point,
and I wouldn't neglect it entirely, "form following  function,"  and
all  that  sort  of  thing.  Nonetheless,  when push comes to shove,
molecular biology is really where it's at. I mean, there are degrees
of fidelity, when it comes to isomorphism. Besides, what A.I. really
needs, is isofunctionalism. For example:

One  could  construct  a  life-sized  metal and plastic model of the
human brain, even one good enough,  in  terms  of  isomorphism,  for
teaching  purposes  in  a neuroanatomy class. Even one that could be
taken apart, and put back together, with the nuclei properly enscon-
sed, and the tracts properly aligned.

If one then put clock innards inside, and supplied electrical power,
would it "think"? We know that it would not. Indeed, if we  imbedded
a small clock face, with hands and numerals, in Broca's speech area,
it would give us excellent time, just like any old Westclox!!! If we
put  in  one  of those talking mechanisms, Broca's speech area would
TELL us the time. But our poor isomorphic  brain,  here,  would  not
come  up  with Einstein's Theory of Relativity, or anything like it.
And that's really what we're after, isn't it?

5) Now, what kind of a  mind  would  we  really  like  in  our  A.I.
act-alike?  Do  we  want the mind of a conformist? Or do we want the
mind of a meshuggener? No, on both counts. We want  the  mind  of  a
quality  Zeitgeist  smasher.  So  we  have  to  discern  the in vivo
give-and-take of the repressors and enhancers, as well as the coding
sequences themselves, and work that into any program, along with the
base pair sequences that determine the culturgens. ("This  one's  no
good,"  "That  one's  terrific,"  "Maybe we can use that one another
time," etc.) We may even  be  able  to  avail  ourselves  of  binary
simplicity  by  equating the purine bases with 1, and the pyrimidine
bases with 0. The entire thing is degenerate, anyway, and  could  do
with some streamlining.
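
(A toy sketch, in Python, of the binary encoding just suggested:
purines (A, G) map to 1 and pyrimidines (C, T) map to 0.  The sample
sequence is made up; the point is only how trivial the encoding step
itself is.)

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def purine_pyrimidine_bits(sequence):
    """Map a DNA string to bits: purine -> 1, pyrimidine -> 0."""
    bits = []
    for base in sequence.upper():
        if base in PURINES:
            bits.append(1)
        elif base in PYRIMIDINES:
            bits.append(0)
        else:
            raise ValueError("unknown base: " + base)
    return bits

print(purine_pyrimidine_bits("GATTACA"))      # -> [1, 1, 0, 0, 1, 0, 1]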

6) Some closing comments: Don't get hung up on  morphology;  it's  a
Linnaean trap, set for the sentimental.  There isn't a creature, a
physiological structure (neurons included), or a neurotransmitter
that doesn't owe its life to several strings of base pairs.  Really,
it isn't the brain that thinks, or the neuron(s)  that  think;  it's
the differentially-expressed genome in the neuron(s) that thinks.

As for the disdained pseudogenes, referred to as "junk" DNA and
believed to be "silent" (which, like Rodney Dangerfield, "don't get
no respect"): perhaps, like Alexander Fleming's penicillin mold, they
could be the "bluebird of artificial/natural intelligence, right in
our own DNA," and so worthy of attention.

It holds true for natural intelligence, artificial intelligence, and
impressive isomorphs: "You can lead a 'mind' to information, but you
can't make it think."

And finally, I hope there are no male chauvinists among the readers
of, and contributors to, this net.  If there are, let me warn them
that any aspirant to the Holy Grail, i.e., the comprehension of
internal representation, is doomed from the start if he overlooks the
"junk" DNA in the maternally-inherited mitochondrial genome.

- Eve Lewis

------------------------------

Date: Fri, 29 Jan 88 09:45 EST
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: RE: RE: Nano-engineering


Since I brought this topic up a short while ago, let me comment on
David Smith's reply (quoted below):

>Date: Fri, 22 Jan 88 14:21:05 est
>From: Mr. David Smith <dsmith@gelac.arpa>
>Subject: Nano-engineering
>
>... [deleted quote] ...
>
>Some time ago, I asked a net question about nano-engineering and all roads
>led to Eric Drexler.  Frankly, I was pleased to see this net mail putting
>such activities into perspective.  At the risk of sounding Pharisaic, I
>believe that the cause of "serious AI" is seriously hindered by such blatant
>blather. This has to be the only forum in the civilized world which allows
>such claims to be perpetrated without receiving equal portions of ridicule
>and abuse.  Can it not be stopped?

Obviously, ailist IS a forum where ridicule and abuse is permitted.
Interestingly, in his book Drexler calls for setting up public forums
where ideas of alleged scientific merit can be scrutinized openly and
subjected to ridicule if such is deemed appropriate.

-Kurt Godden
 godden@gmr.com

------------------------------

Date: 28 Jan 88 08:53:14 GMT
From: mcvax!varol@uunet.uu.net  (Varol Akman)
Subject: Re: Philosophy is a futile game...

In article <2638@calmasd.GE.COM> wlp@calmasd.GE.COM (Walter Peterson) writes:
>
>For an excellent explanation as to why philosophy is NOT futile and why
>everyone NEEDS philosophy, see
>
>   "Philosophy: Who Needs It"   By Ayn Rand
>

I don't much care about Rand but a really good reference is

         Philosophical Investigations,  Ludwig Wittgenstein

Another excellent book by Wittgenstein is Zettel.

Needless to say, Wittgenstein was an extraordinary mathematician too.

These books also show why you shouldn't call every mental activity
philosophy; they set a standard (though one very difficult to reach).

------------------------------

Date: 28 Jan 88 13:48:59 GMT
From: mcvax!ukc!its63b!hwcs!hci!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: words order in English and Japanese

In article <3532@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>Finally, this topic really belongs in sci.lang.  It has little to do with AI.

If every topic raised in AI was restricted to its proper discipline,
this news group would be empty.  Personally, I would rather see the
computational paradigm properly distributed through the disciplines,
rather than left to the intellectual margins and scholarly wastelands
of AI.  However, given that AI exists, takes funding and pushes itself
onto the popular consciousness as a valid contribution to the
understanding of humanity, it is far more sensible for disciplined and
informed discussion to keep pushing into comp.ai.  AI workers complain
regularly about the oppressiveness of other disciplines when trying to
develop a computational view of human nature.  Let's keep up the oppression :-)

--
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert

------------------------------

Date: 29 Jan 88 13:54:13 GMT
From: mcvax!ukc!its63b!hwcs!hci!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Philosophy - not a pejorative (Re: time for sci.psych???)

In article <4222@utai.UUCP> tjhorton@ai.UUCP (Timothy J. Horton) writes:
>>...People who flounder hopelessly are probably short on their
>>philosophical training.
>
>Not true.  Realize, also, that there are conceptual chasms between fields.
>
>Philosophical arguments about computational models of intelligence, for
>instance, among those without comprehensive conceptual bases in computer
>science, often seem to reduce to expressions of superstition and ignorance,
>at least among the vocal.

On conceptual chasms, what - philosophical analysis apart - can bridge them?

On ignorance of computability in 'philosophical' arguments on natural
and artificial intelligence, perhaps the Theory of Computation needs
to be as much a part of a proper philosophical training as
linguistic analysis and formal logic.  Some people in AI could do with
it as well (i.e. those who don't have it).

As for reduction to superstition, isn't this the outcome of an
analysis of many 'natural truths'?  On the existence of objects,
nothing is 'proven', but nevertheless, we find no reason for rejecting
the natural truth of their existence.  Arguments based on ignorance
must be discounted, but are we not left with the case that we still
have no reason for rejecting the natural truth that human and machine
intelligence are different?  Not only is the case for the equivalence
of human and machine intelligence not proven, no analysis exists, to my
knowledge, which points to a way of establishing the equivalence.  This
leaves AI as a piece of very expensive speculation based on beliefs
which insult our higher views of ourselves.  Superstition no doubt, but
a dominant and moral superstition which needs to command some respect.
Vocal polemic is as much a reaction to the arrogant unreasonableness of
some major AI pundits, as it is a reflection of the incompetence of the
advocate.  The debate has been fair on neither side, and the ability
of AI pundits to stand their ground is due to their social marginality
as round-the-clock scientists and their cultural marginality as workers
outwith a proper discipline (look up Sociology of Deviance). People
who live in bunkers don't get hit by stones ;-)  The big AI pundits
just remind me of Skinner.

BTW: NOT(AI pundit = AI worker) -  most AI workers know their systems
     aren't working (yet?) and do leave their bunkers to mingle :-)

>I suggest, in balance, Russell's "The Cult of Common Usage," for instance.
Great - keep balancing, more competent philosophy for the reading list.

>Experience would seem to indicate that a few vocal individuals may press
>their arguments on the entire network, rather than delivering ambivalent
>analysis or investigating before disseminating.

Sounds like a netiquette proposal which I thoroughly endorse.  Whilst
guilty of advocacy on occasions, I think that everyone should strive for
an ambivalent analysis in this sort of public forum, and leave people to make
their own minds up.  Sounds like good philosophy to me.  However,
ambivalence cannot be expected in response to incompetence, however candid.
Witness the current debate on economic structure and diachronic syntax.  Nor,
as with tolerance of the intolerant, can I be ambivalent about dogmatists.
--
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert

------------------------------

End of AIList Digest
********************

∂05-Feb-88  0057	LAWS@KL.SRI.COM 	AIList V6 #24 - Seminars, Connectionist Conference, Course
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 Feb 88  00:57:37 PST
Date: Thu  4 Feb 1988 22:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #24 - Seminars, Connectionist Conference, Course
To: AIList@SRI.COM


AIList Digest             Friday, 5 Feb 1988       Volume 6 : Issue 24

Today's Topics:
  Seminars - Emerald: Object-Based Distributed Programming (SU) &
    Non-Atomic Concurrency Control (SU) &
    Capacity of Associative Memory Models (HP) &
    Reasoning about Reliability (SRI) &
    Synthesizing Context-Dependent Plans (BBN) &
    AI, NL, and Object-Oriented Databases at HP (NASA),
  Conference - Connectionist Modeling and Brain Function (Princeton),
  Course - Connectionist Summer School

----------------------------------------------------------------------

Date: Sat, 30 Jan 88 17:15:09 PST
From: Bruce L Hitson <hitson@Pescadero.stanford.edu>
Subject: Seminar - Emerald: Object-Based Distributed Programming (SU)


   EMERALD:  AN OBJECT-BASED LANGUAGE FOR DISTRIBUTED PROGRAMMING

                           Henry M. Levy
                     University of Washington

         DSRS: Distributed Systems Research Seminar (CS548)
                   Thursday, February 4, 4:15pm
                      Margaret Jacks Hall 352


Despite the existence of many distributed systems, programming
distributed applications is still a complex task.  We have designed
a new object-based language called Emerald to simplify distributed
programming.  The novel features of Emerald include (1) a single
object model used for programming both small local objects and large
distributed objects, (2) support for abstract types, and (3) an
explicit notion of location and mobility.  While local and
remote Emerald objects are semantically identical, the compiler
is able to select from several implementation styles;  Emerald provides
performance commensurate with procedural languages in the local case
and with remote procedure call systems in the remote case.  The
Emerald system has been prototyped on a small local area network
of MicroVAX IIs running Ultrix.

This talk will give an overview of the Emerald project and the design of
the Emerald language and run-time kernel.

------------------------------

Date: Sun 31 Jan 88 14:19:58-PST
From: Anil R. Gangolli <GANGOLLI@Sushi.Stanford.EDU>
Subject: Seminar - Non-Atomic Concurrency Control (SU)


4-February-1988:  Nir Shavit

                Toward a Non-Atomic Era:
               L-Exclusion as a Test Case

Most of the research in concurrency control has been based on the
existence of strong synchronization primitives such as test and set.
Following Lamport, recent research promoting the use of weaker
``safe'' rather than ``atomic'' primitives has resulted in
construction of atomic registers from safe ones, in the belief that
they would be useful tools for process synchronization.  It has been
shown that using such atomic registers it is impossible to create
strong synchronization primitives such as test and set.  We therefore
advocate a different approach, to skip the intermediate step of
achieving atomicity, and solve problems directly from safe registers.
We show how to achieve a fair solution to $\ell$-exclusion, a problem
previously solved assuming a very powerful form of test and set.  We
do so using safe registers alone and without introducing atomicity.
The solution is based on the construction of simple novel
synchronization primitives that are non-atomic.

*** Time and Place: 12:30pm, Th, Feb. 4, Margaret Jacks Hall (MJH), Room 352

------------------------------

Date: Mon, 1 Feb 88 09:29:13 PST
From: COE%PLU@ames-io.ARPA
Subject: Seminar - Capacity of Associative Memory Models (HP)


*********************** OPEN TECHNICAL MEETING ***********************
        IEEE Computer Society, Santa Clara Valley Chapter
              Tuesday, February 9, 1988    8:00 p.m.
           Hewlett-Packard Cupertino (Wolfe & Homestead)
                      Building 48, Oak Room

"CAPACITY FOR PATTERNS AND SEQUENCES IN KANERVA'S SDM AS COMPARED TO
                 OTHER ASSOCIATIVE MEMORY MODELS"

ABSTRACT: Dr. James Keeler of Stanford University will speak on the
          information capacity of Kanerva's Sparse Distributed Memory
          (SDM) and of Hopfield-type neural networks.  Using certain
          approximations, it is shown that the total information
          stored in these systems is proportional to the number of
          connections in the network.  The proportionality constant is
          the same for the SDM and Hopfield-type models, independent
          of the particular model or the order of the model.  This
          same analysis can be used to show
          that the SDM can store sequences of spatiotemporal patterns, and
          the addition of time-delayed connections allows the retrieval of
          context dependent temporal patterns with varying time delays.

Dr. Keeler received his Ph.D. in Physics from U.C. San Diego in March 1987.
His dissertation was on reaction-diffusion systems and neural network models.
He is now a postdoctoral researcher at Stanford University's Department of
Chemistry working on neural network models, as well as consulting for Pentti
Kanerva's SDM research group (RIACS, NASA Ames Research Center).

For additional information contact Coe Miles-Schlichting:
     coe@pluto.arc.nasa.gov or (408) 279-4773

------------------------------

Date: Mon, 1 Feb 88 15:28:23 PST
From: seminars@csl.sri.com (contact lunt@csl.sri.com)
Subject: Seminar - Reasoning about Reliability (SRI)


SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


    Unified Diagnosis for Reliability Enhancement in Real-Time Systems

                            Roy A. Maxion

                    Department of Computer Science
                      Carnegie Mellon University
                       Pittsburgh, Pennsylvania

                      Friday, February 5 at 4:00 pm
                SRI International, Room AD202, Building A


The intuitive sense of reliability is availability.  To most users, if a
system is available for its specified purpose at the moment the user (or
other process) wishes to use it, then the system is perceived as being
reliable.  Hence high availability can be used to enhance the perception of
high reliability.  When a system fails, then, it is important to restore
service, or availability, as quickly as possible.

Diagnosis is an important response to system failure. In critical applications
that demand high reliability, such as process control systems or hospital
patient-monitoring systems, a swift and accurate failure diagnosis and
subsequent prescription for remediation can provide the operating margin
necessary for maintaining operational reliability.  Diagnosis, usually a
retrospective technique, can also be employed predictively to avoid certain
classes of failure.

This talk presents an architecture that supports an empirical testbed for a
unified theory of diagnostic reasoning.  The testbed supports theoretical
modeling as well as practical applications; the underlying diagnostic engine
can be fine tuned, and the practical performance results can be subsequently
observed.  Machine learning is an integral part of the system, enabling the
system to tune itself to its environment.  The testbed has been migrated
to a real-world environment -- the 5,000 station nonhomogeneous campus
computing network at Carnegie Mellon University -- and used in real-time
detection and diagnosis of network failures.  Examples and explanations of
problems discovered and diagnosed by the system will be presented.

In discussing his talk, the author noted that a central issue in his
approach is how to define expected behavior so that a monitor may
justifiably decide that system behavior is worthy of detailed
diagnosis (Jack Goldberg).



NOTE FOR VISITORS TO SRI:

Please arrive at least 15 minutes early in order to sign in and
be shown to the conference room.  Those not arriving by 4pm may
not be able to attend the talk.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors
may park in the visitors lot in front of Building A (red brick
building at 333 Ravenswood Ave) or in the conference parking area
at the corner of Ravenswood and Middlefield.  The seminar room is in
Building A.  Visitors should sign in at the reception desk in the
Building A lobby.

IMPORTANT: Visitors from Communist Bloc countries should make the
necessary arrangements with Fran Leonard, SRI Security Office,
(415) 859-4124, as soon as possible.


UPCOMING SRI COMPUTER SCIENCE LAB SEMINARS:

Monday, February 8 at 4pm, Arthur Keller of Stanford University and
the University of Texas at Austin will speak on a "Framework for the
Security Component of an Ada* DBMS."  Room IS109, in the International
Building.

Thursday, February 11 at 4pm, Janice Glasgow of Queen's University,
Kingston Ontario, will speak on "A Formal Model for Reasoning About
Distributed Systems."   Conference Room B, Building A.

------------------------------

Date: Wed 3 Feb 88 16:38:18-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Synthesizing Context-Dependent Plans (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

                SYNTHESIZING PLANS THAT CONTAIN ACTIONS
                     WITH CONTEXT-DEPENDENT EFFECTS

                          Edwin P.D. Pednault
                         AT&T Bell Laboratories
                          Holmdel, New Jersey
                   (!vax135!epdp@UCBVAX.BERKELEY.EDU)

                                BBN Labs
                           10 Moulton Street
                    3rd floor large conference room
                      10:30 am, Tuesday February 9th

                         *********************
                         * note unusual room *
                         *********************


Conventional domain-independent planning systems have typically excluded
actions whose effects depend on the situations in which they occur,
largely because of the action representations that are employed.
However, some of the more interesting actions in the real world have
context-dependent effects.  In this talk, I will present a planning
technique that specifically addresses such actions.  The technique is
compatible with conventional methods in that plans are constructed via
an incremental process of introducing actions and posting subgoals.  The
key component of the approach is the idea of a secondary precondition.
Whereas primary preconditions define executability conditions of actions,
a secondary precondition defines a context in which an action produces a
desired effect.  By introducing and then achieving the appropriate
secondary preconditions as additional subgoals to actions, we ensure
that the actions are carried out in contexts conducive to producing the
effects we desire.  The notion of a secondary precondition will be
defined and analyzed.  It will also be shown how secondary preconditions
can be derived in a general and domain-independent fashion for actions
specified in ADL, a STRIPS-like language suitable for describing
context-dependent effects.
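
(A minimal sketch, in Python rather than ADL, of the idea described in
this abstract, not of the actual planner: an action carries primary
(executability) preconditions plus conditional effects, and a planner
that wants a particular effect posts that effect's context, its
secondary precondition, as an extra subgoal.  The dunking action and
all other names below are hypothetical.)

class Action:
    def __init__(self, name, primary, effects):
        self.name = name
        self.primary = set(primary)     # executability (primary) preconditions
        self.effects = list(effects)    # (context, added_literal) pairs

def subgoals_for(action, desired):
    """Primary preconditions plus the secondary precondition, i.e. the
    context of the conditional effect that yields `desired`."""
    for context, literal in action.effects:
        if literal == desired:
            return action.primary | set(context)
    raise ValueError(action.name + " cannot produce " + desired)

# Hypothetical action: dunking gives the object whatever color the vat holds.
dunk = Action("dunk(obj, vat)",
              primary=["holding(obj)", "open(vat)"],
              effects=[(["contains(vat, red-dye)"],  "color(obj, red)"),
                       (["contains(vat, blue-dye)"], "color(obj, blue)")])

# To make obj red, the planner must also achieve contains(vat, red-dye).
print(subgoals_for(dunk, "color(obj, red)"))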

------------------------------

Date: Tue, 2 Feb 88 10:46:24 PST
From: JARED%PLU@ames-io.ARPA
Subject: Seminar - AI, NL, and Object-Oriented Databases at HP (NASA)

              National Aeronautics and Space Administration
                         Ames Research Center

                        SEMINAR ANNOUNCEMENT


SPEAKER:   Dr. Steven Rosenberg
           Hewlett Packard, HP Labs

TOPIC:     AI, Natural Language, and Object-Oriented Databases at HP

ABSTRACT:

Hewlett Packard Labs is the research arm of the Hewlett Packard Corporation.
HP labs conducts research in technologies ranging from AI to super-
conductivity. A brief overview of computer science research at HP Labs
will be presented with a focus on AI, Natural Language, and object-oriented
databases.


BIOGRAPHY:

Dr. Steven Rosenberg is the former department manager, Expert Systems
Department, Hewlett-Packard Laboratories. Prior to joining HP, he worked
at the Lawrence Berkeley Laboratories, and at the MIT AI Lab.  At HP
he has led the development of expert systems such as Photolithography
Advisor, an expert system that diagnoses wafer flaws due to photolithography
errors, and recommends corrective action. He has also led the development
of expert system programming languages such as HP-RL, an expert system
language that has been used within HP and at several universities for
constructing expert systems.  He is currently involved in developing new
research collaborations between HP and the university community.

DATE: Tuesday, February 9, 1988     TIME: 1:30 - 3:00 pm     BLDG. 244, Room 103


POINT OF CONTACT: Marlene Chin   PHONE NUMBER: (415) 694-6525
     NET ADDRESS: chin%plu@ames-io.arpa

***************************************************************************

VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.
***************************************************************************

------------------------------

Date: Mon, 1 Feb 88 14:13:38 EST
From: jose@tractatus.bellcore.com (Stephen J. Hanson)
Subject: Conference - Connectionist Modeling and Brain Function
         (Princeton)


                 Connectionist Modeling and Brain Function:
                        The Developing Interface

                         February 25-26, 1988
                         Princeton University
                        Lewis Thomas Auditorium

This symposium explores the interface between connectionist modeling
and neuroscience by bringing together pairs of collaborating speakers
or researchers working on related problems.  The speakers will consider
the current state and future prospects of four fields in which convergence
between experimental and computational approaches is developing rapidly.


              Thursday: Associative Memory and Learning

  9:00 am   Introductory Remarks
            Professor G. A. Miller

  9:15 am   Olfactory Process and Associative Memory: Cellular and
            Modeling Studies
            Professor A. Gelperin, AT&T Bell Laboratories and
            Princeton University

 10:30 am   Simple Neural Models of Classical Conditioning
            Dr. G. Tesauro, Center for Complex Systems Research

     Noon   Lunch

  1:30 pm   Brain Rhythms and Network Memories: I. Rhythms Drive
            Synaptic Change
            Professor G. Lynch, University of California, Irvine

  3:00 pm   Brain Rhythms and Network Memories: II. Rhythms Encode
            Memory Hierarchies
            Professor R. Granger, University of California, Irvine

  4:30 pm   General Discussion

  5:30 pm   Reception, Green Hall, Langfeld Lounge


              Friday: Sensory Development and Plasticity

  9:00 am   Preliminaries and Announcements

  9:15 am   Role of Neural Activity in the Development of the Central
            Visual System: Phenomena, Possible Mechanism and a Model
            Professor Michael P. Stryker, University of California,
            San Francisco

 10:30 am   Towards an Organizing Principle for a Perceptual Network
            Dr. R. Linsker, M.D., Ph.D., IBM Watson Research Lab

     Noon   Lunch

  1:30 pm   Biological Constraints on a Dynamic Network: Somatosensory
            Nervous System
            Dr. T. Allard, University of California, San Francisco

  3:00 pm   Computer Simulation of Representational Plasticity in
            Somatosensory Cortical Maps
            Professor Leif H. Finkel, Rockefeller University and
            The Neuroscience Institute

  4:30 pm   General Discussion

  5:30 pm   Reception, Green Hall, Langfeld Lounge

Organizers:
   Stephen J. Hanson, Bellcore and Princeton University
   Carl R. Olson, Princeton University
   George A. Miller, Princeton University

Sponsored by:
   Department of Psychology
   Cognitive Science Laboratory
   Human Information Processing Group


Travel Information

Princeton is located in central New Jersey, approximately 50 miles
southwest of New York City and 45 miles northeast of Philadelphia.  To
reach Princeton by public transportation, one usually travels through
one of these cities.  We recommend the following routes:

By Car
From NEW YORK - - New Jersey Turnpike to Exit #9, New Brunswick; Route
18 West (approximately 1 mile) to U.S. Route #1 South, Trenton.  From
PHILADELPHIA - - Interstate 95 to U.S. Route #1 North.  From
Washington - - New Jersey Turnpike to Exit #8, Hightstown; Route 571.
Princeton University is located one mile west of U.S. Route #1.  It
can be reached via Washington Road, which crosses U.S. Route #1 at the
Penns Neck Intersection.

By Train

Take Amtrak or New Jersey Transit train to Princeton Junction, from
which you can ride the shuttle train (known locally as the "Dinky")
into Princeton.  Please consult the Campus Map below for directions on
walking to Lewis Thomas Hall from the Dinky Station.
For any further information concerning the conference please
contact our conference planner:

                        Ms. Shari Landes
                        Psychology Department
                        Princeton University, 08544

                        Phone: 609-452-4663
                        Elec. Mail: shari@mind.princeton.edu

------------------------------

Date: Thu 4 Feb 88 00:29:33-EST
From: Dave.Touretzky@C.CS.CMU.EDU
Subject: Course - Connectionist Summer School

Subject: revised and final call for applications

                  THE 1988 CONNECTIONIST MODELS SUMMER SCHOOL


ORGANIZER:              David Touretzky

ADVISORY COMMITTEE:     Geoffrey Hinton, Terrence Sejnowski

SPONSORS:  The Sloan Foundation; AAAI; AFOSR; in cooperation with ACM SIGART.

DATES:  June 17-26, 1988

PLACE:  Carnegie Mellon University, Pittsburgh, Pennsylvania

PROGRAM:  The  summer  school  program  is  designed  to introduce young neural
networks researchers to the latest developments in the field.   There  will  be
sessions  on  learning,  theoretical analysis, connectionist symbol processing,
speech recognition, language understanding, brain structure,  and  neuromorphic
computer  architectures.    Students  will  have  the opportunity to informally
present their own research and to interact closely with some of the leaders  of
the field.

 LIST OF FACULTY:

   Yaser Abu-Mostafa (Caltech)      Yann Le Cun (Toronto)
   Dana Ballard (Rochester)         James McClelland (Carnegie Mellon)
   Andrew Barto (U. Mass.)          David Rumelhart (Stanford)
   Gail Carpenter (Boston U.)       Terrence Sejnowski (Johns Hopkins)
   Scott Fahlman (Carnegie Mellon)  Mass Sivilotti (Cal Tech)
   Geoffrey Hinton (Toronto)        Paul Smolensky (UC Boulder)
   Michael Jordan (MIT)             David Tank (AT&T Bell Labs)
   Scott Kirkpatrick (IBM)          David Touretzky (Carnegie Mellon)
   George Lakoff (Berkeley)         Alex Waibel (ATR International)

EXPENSES: Students are responsible for their meals and travel expenses; a small
amount of travel funds  may  be  available.    Free  dormitory  space  will  be
provided.  There is no tuition charge.

WHO  SHOULD  APPLY: The summer school's goal is to assist young researchers who
have chosen to work in the  area  of  neural  computation.    Participation  is
limited  to  graduate  students  (masters  or  doctoral level) who are actively
involved in some aspect of neural network research.  Persons who  have  already
completed  the  Ph.D.  are  not  eligible.    Applicants  who are not full time
students will still be  considered,  provided  that  they  are  enrolled  in  a
doctoral degree program.  A total of 50 students will be accepted.

HOW  TO  APPLY:  By March 1, 1988, send your curriculum vitae and a copy of one
relevant paper, technical report, or research proposal to: Dr. David Touretzky,
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, 15213.
Applicants will be notified of acceptance by April 15, 1988.

------------------------------

End of AIList Digest
********************

∂05-Feb-88  0315	LAWS@KL.SRI.COM 	AIList V6 #25 - Software Engineering, XLISP, Vision, Language  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 Feb 88  03:15:25 PST
Date: Thu  4 Feb 1988 22:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #25 - Software Engineering, XLISP, Vision, Language
To: AIList@SRI.COM


AIList Digest             Friday, 5 Feb 1988       Volume 6 : Issue 25

Today's Topics:
  Administrivia - Head Count,
  Queries - Imbeddable Inference System & Common Lisp OPS5 &
    Genetic Algorithms & AI Surveys,
  Expert Systems - Software Engineering and Expert Systems,
  AI Tools - Engineering Data Modeling & XLISP,
  Vision - Ten Best Vision References,
  Linguistics - Natural vs. Artificial Languages & Ambiguous Speech

----------------------------------------------------------------------

Date: Thu 4 Feb 88 21:37:20-PST
From: Ken Laws <LAWS@KL.SRI.COM>
Reply-to: AIList-Request@SRI.COM
Subject: Head Count

I would like to get an estimate of the AIList readership.
If your birthday is in the range February 1 to February 15,
please send an empty reply to this message or digest.  I
will multiply by the appropriate factor to estimate the number
of list members.
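
(For a 15-day birthday window out of 365, that factor is presumably
about 365/15, i.e. a bit over 24, assuming birthdays are spread
roughly uniformly over the year.)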

                                        -- Ken

------------------------------

Date: Mon 1 Feb 88 01:23:04-PST
From: Joe Karnicky <KARNICKY@KL.SRI.COM>
Subject: seeking imbeddable inference system

        A friend of mine creates programs and devices to enable severely
handicapped individuals to interact with computers (IBM PC
compatibles because of financial considerations).   He and I have
spent time discussing possible ways that knowledge based programming
(my own area of practice) can be usefully incorporated into
these programs.
        For example, he has a program that repeatedly
scans an array of icons.  At a signal from the user, an icon is
selected and a corresponding message is output - possibly through
a speech synthesizer.   One possibility is to have the icon selection
generate a fact or goal into an inference system.   The final output
from this system would then be "customized", i.e. modified by the
context of the interaction. (for example, the time of day, the
mood or condition of the user, the current activity etc.)
        Now, to implement this economically what we would like to
have is a simple, cheap ($100), IMBEDDABLE inference system.   Something on the
level of turboprolog would be adequate, except I'm told that it pretty
much insists on being boss.
        Any suggestions would be appreciated.

Thanks,
Joe Karnicky <KARNICKY@STRIPE.SRI.COM>

------------------------------

Date: 3 Feb 88 20:48:00 CST
From: "PERRY ALEXANDER" <alexander@space-tech.arpa>
Reply-to: "PERRY ALEXANDER" <alexander@space-tech.arpa>
Subject: Common Lisp OPS5

Hello,

Does anybody out there know of a Common Lisp version of OPS5 that I can get
hold of?  I have a Franz Lisp version, but have not been able to locate a
Common Lisp version.  Please respond directly to me; I will summarize.

Thanks,
Perry Alexander
alexander@space-tech.arpa
University of Kansas
Telecommunications and Information Sciences Lab (TISL)

------------------------------

Date: 2 Feb 88 22:02:00 GMT
From: goldfain@osiris.cso.uiuc.edu
Subject: Re: Cognitive System using Genetic Algo


Would someone do me a favor and post or email a short definition of the
term "Genetic Learning Algorithm" or "Genetic Algorithm" ?

Thanks.    - Mark Goldfain  arpa:  goldfain@osiris.cso.uiuc.edu

------------------------------

Date: 2 Feb 88 21:38:42 GMT
From: mtune!codas!killer!wybbs!meyers@rutgers.edu  (John Meyers)
Subject: Sources for research?

I have to do some research on Artificial Intelligence (primarily the
history, but also current applications) and I would like to know if
anyone could recommend a good (and recent) book dealing with the two areas
of AI I have mentioned. Thank you.

                         John Meyers


--
  __        ,                 |John Michael Meyers -> meyers@wybbs.UUCP
 (_/_ /     /)) _    _  _   _ |
 _/(//)/)  / / (-'(/(-'/ ' '-,|DISHCLAIMER:Well, the one with the pizza
(/""""""""""""""(_/"""""""""" |            is mine, but...

------------------------------

Date: 3 Feb 88 02:21:03 GMT
From: kohen@rocky.STANFORD.EDU (Abe Kohen)
Reply-to: kohen@rocky.UUCP (Abe Kohen)
Subject: Re: Software Development and Expert Systems


The query and responses seem to be geared to custom-built systems.
I'd like to ask about s/w development for expert systems using
commercially available tools.

How do tools like S.1, Art, or Nexpert lend themselves to good s/w
engineering?  Are some tools better for s/w engineering?  Are they
better (whatever that means) at the expense of clear and efficient
data representation?

It seems that S.1 has the potential for providing a good s/w engineering
environment, but it fails on data representation, and is lacking forward
chaining (vaporware notwithstanding).  Art has good data representation,
but doesn't (yet) integrate well into a workstation (read: Sun) environment.

How does Nexpert perform?

kohen@rocky.stanford.edu
kohen@sushi.stanford.edu

------------------------------

Date: Wed, 3 Feb 88 11:14:51 est
From: france@vtopus.cs.vt.edu (Robert France)
Subject: Re:  Software engineering and expert systems.

Regarding the discussion by (among others) Ron Jacobs and Will Neuhauser,
we have also tried applying the basic software engineering principles
of modularity and information hiding to expert systems construction, and
can report good success.  For about the last four years, a group of
graduate students under the direction of Ed Fox has been working here
at Virginia Tech to build a large expert system to test the applicability
of knowledge-based techniques to information retrieval tasks.  Faced
with the size and volatility of the intended system, as well as the
amount of knowledge it is intended to use and the size of the research
community involved in its construction and use, we opted for a highly
modular design.

Basically, the system is broken down into domain experts, each with its
own rule base and inference style.  This sounds pretty close to the
approaches of Messrs. Jacobs and Neuhauser, and is really the obvious
countermeasure to the problems of unwanted inference and rule inter-
action.  The intention is to have a set of generic inference engines;
at the moment I think they are all either forward- or backward-chaining
rule followers.  These experts all communicate through a blackboard
that serves both as a communication interlink and a summary of the
system state at any point in the task.  State information on the
blackboard then provides the data for task planning by a separate
module called the *strategist*, that tells the experts (in general
terms) what to do.  The system also has large fact bases shared among
the experts and a separate user interface module.  Details of the design
can be found in:

    Fox, Edward A. and Robert K. France.  "Architecture of an expert
    system for composite document analysis, representation, and
    retrieval." *International Journal of Approximate Reasoning*
    v. 1 (1987), pp. 151-175.
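
(A minimal sketch, in Python, of the shape of such a design, not of
the Virginia Tech system itself: each expert keeps its own small rule
base, all communication goes through a shared blackboard, and a very
crude strategist decides when the experts run.  The class names and
toy rules below are made up.)

class Blackboard:
    """Shared store of facts; also a summary of system state."""
    def __init__(self):
        self.facts = set()
    def post(self, fact):
        self.facts.add(fact)
    def has(self, fact):
        return fact in self.facts

class Expert:
    """A forward-chaining expert with its own private rule base."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules                # list of (premises, conclusion)
    def run(self, board):
        for premises, conclusion in self.rules:
            if all(board.has(p) for p in premises):
                board.post(conclusion)    # results go back on the blackboard

def strategist(board, experts):
    """Crude task planner: keep running experts until nothing new appears."""
    changed = True
    while changed:
        size = len(board.facts)
        for expert in experts:
            expert.run(board)
        changed = len(board.facts) > size

board = Blackboard()
board.post("query(composite documents)")
parser   = Expert("parser",   [({"query(composite documents)"}, "terms(identified)")])
searcher = Expert("searcher", [({"terms(identified)"}, "documents(retrieved)")])
strategist(board, [parser, searcher])
print(board.facts)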

We are currently working up a paper on the details of the blackboard/
strategist complex and the implementation of the initial version of
the system:  anyone who wants a preprint can send me a line, and I'll
send out copies when they're available.

The good news is that we have a minimal functional system at this point
(only just: we're still congratulating ourselves before we start running
experiments comparing it to classical information retrieval methods)
and that the modularity contributed highly to both ease of construction
and ease of modification.  We were even able to take experts imple-
mented as stand-alone modules and integrate them into the system
with a minimum of fuss and bother.  Further, while the modularity
means that the system has a *lot* of potential communication
overhead, the cost is less than we feared (it is, for instance,
overwhelmed by the cost of inferential processing) and bearable
(most system operations take less than a second running on a lightly
loaded VAX 11/785, even if they involve moderation by the blackboard
and scheduling activity by the strategist as well as activity by
the called expert).  And this in the relatively unsophisticated
implementation of `Version 1.0'.

To sum up, our experience with expert system modularity is that
it is practical, functional, and that it greatly eases system
construction and modification.  But then, isn't that what the
software engineers told us to expect?

        Cheers,

            Robert France
            france@vtopus
            Department of Computer Science
            Virginia Tech
            Blacksburg, VA  24014

"It doesn't stop; it doesn't even slow down."

------------------------------

Date: 1 Feb 88  9:32 +0100
From: Kai Quale <quale%si.uninett@TOR.nta.no>
Subject: Engineering Data Modelling Info

>I am working in the area of Engineering Databases, here at Georgia
>Tech, and  looking for information on Enginnering Data Modeling.
>Can anybody provide me with a list of good references in this area.
>Information on software packages for data modeling and names and
>address of people actively involved in this area will be also very
>helpful. The stress is on Engineering Data. I would really
>appreciate any help.

Sysdeco A/S (The SYStems DEvelopment COmpany) has a 4th gen. tool
for administrative database systems called Systemator, which provides
data modeling (with automatic database design), screen picture design,
program generation and many other facilities. The data model is used
as an active data dictionary. As of today, Systemator runs on Norsk
Data hardware and the Sibas database system. However, it is being con-
verted to other machines and database systems. I don't know if Engin-
eering Data requires special facilities or if this is in the area of
what you want, but the address is :

Sysdeco A/S, Chr. Michelsens gt. 65, 0474 Oslo 4, Norway.
Telephone : 02 (Oslo) 38 30 90

Kai Quale <quale%si.uninett@TOR.NTA.NO>

------------------------------

Date: Mon, 1 Feb 88 10:44:59 PST
From: kevinr@june.cs.washington.edu (Kevin Ross)
Subject: Re: Query - XLISP 1.5

You can find a copy of xlisp 1.6 on uunet.uu.net, in (I believe) volume 6.


I FTP'd it a few weeks ago.

Kevin

------------------------------

Date: Tue 2 Feb 88 07:09:08-EST
From: CAROZZONI@RADC-TOPS20.ARPA
Subject: Re: Query - XLISP 1.5


You can get XLISP 1.7 and 2.0 on BIX.  Much improved.
 -joe

------------------------------

Date: Thu 28 Jan 88 22:21:05-PST
From: Ken Laws <LAWS@KL.SRI.COM>
Reply-to: AIList-Request@SRI.COM
Subject: Re: Ten best vision references..

Fischler and Firschein's recently published collection of vision
reprints (Readings in Computer Vision: Issues, Problems, Principles,
and Paradigms, Morgan Kaufmann Pub. Inc., 1987) includes many of
the important papers, but doesn't cover the whole field.  I think
L.G. Roberts' paper on recognizing simple solids may be the only
paper everyone can agree on.

                                        -- Ken

------------------------------

Date: Sat, 30 Jan 88 23:54:52-1000
From: todd@uhccux.uhcc.hawaii.edu (The Perplexed Wiz)
Subject: Re: 10 best vision references

>Date: 28 Jan 88 00:15:09 GMT
>From: hunt@spar.SPAR.SLB.COM (Neil Hunt)
>Subject: Ten best vision references..
>
>I would like to collect votes for the ten most important references
>in the field of computer vision. I will compile a list and repost
>if there is sufficient response. Feel free to vote for one paper,
>or as many as you like; each mention by a separate person
>counts as one vote.
>
>        ...{amdahl|decwrl|hplabs}!spar!hunt    hunt@spar.slb.com

This does not exactly answer Neil's question.  However, those of you
interested in the general area of perception may want to take a look at the
following article.

        White, Murray J. (1987).  Big bangs in perception:  The most
            often cited authors and publications.  Bulletin of the
            Psychonomic Society, 25: 458-461.

His abstract is as follows:

        Textbook citations identified historically important writers and
        publications in the psychology of perception.  The influence of
        these writers and publications on present research was gauged from
        citations appearing in the current journal literature.

--
Todd Ogasawara, U. of Hawaii Faculty Development Program
UUCP:           {ihnp4,uunet,ucbvax,dcdwest}!sdcsvax!nosc!uhccux!todd
ARPA:           uhccux!todd@nosc.MIL            BITNET: todd@uhccux
INTERNET:       todd@uhccux.UHCC.HAWAII.EDU

------------------------------

Date: Mon, 1 Feb 88 10:29:27 EST
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: natural vs. artificial languages

Two classic references are:

Richard Montague, "English as a Formal Language" and "Universal
Grammar," in R. Montague, _Formal Philosophy_, ed. by R. H. Thomason
(Yale U.P.).

------------------------------

Date: Thu,  4 Feb 88 22:40:36 EST
From: "Keith F. Lynch" <KFL@AI.AI.MIT.EDU>
Subject: Ambiguous speech

> From: ulysses!sfmag!sfsup!saal@ucbvax.Berkeley.EDU  (S.Saal)

> "It's hard to wreck a nice beach." vs
> "Its hard to recognize speech."

> What I am looking for is more examples of these sentence pairs.

Eugene N. Miya (eugene@pioneer.arpa) has a list of these.
                                                                ...Keith

------------------------------

End of AIList Digest
********************

∂05-Feb-88  0535	LAWS@KL.SRI.COM 	AIList V6 #26 - Connectionism, Nature of AI, Interviewing 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 Feb 88  05:35:39 PST
Date: Thu  4 Feb 1988 22:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #26 - Connectionism, Nature of AI, Interviewing
To: AIList@SRI.COM


AIList Digest             Friday, 5 Feb 1988       Volume 6 : Issue 26

Today's Topics:
  Methodology - Two Extreme Approaches to AI & AI vs. Linguistics,
  Expert Systems - Interviewing Experts

----------------------------------------------------------------------

Date: 01 Feb 88  1153 PST
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: two extreme approaches to AI

1. The logic approach (which I follow).

Understand the common sense world well enough to express in
a suitable logical language the facts known to a person.  Also
express the reasoning methods as some kind of generalized logical
inference.  More details are in my Daedalus paper.

2. Using nano-technology to make an instrumented person.
(This approach was suggested by Drexler's book and by
Eve Lewis's commentary in AILIST.  It may even be what
she is suggesting).

Sequence the human genome.  Which one?  Mostly they're the
same, but let the researcher sequence his own.  Understand
embryology well enough to let the sequenced genome develop
in a computer to birth.  Provide an environment adequate for
human development.  It doesn't have to be very good, since
people who are deaf, dumb and blind still manage to develop
intelligence.  Now the researcher can have multiple copies
of this artificial human - himself more or less.  Because
it is a program running in a superduper computer, he can
put in science programs that find what structures correspond
to facts about the world and to particular behaviors.  It is
as though we could observe every synaptic event.  Experiments
could be made that involve modifying the structure, blocking
signals at various points and injecting new signals.

Even with the instrumented person, there would be a huge scientific
task in understanding the behavior.  Perhaps it could be solved.

        My exposition of the "instrumented man" approach is rather
schematic.  Doing it as described above would take a long time, especially
the part about understanding embryology.  Clever people, serious about
making it work, would discover shortcuts.  Even so, I'll continue
to bet on the logic approach.

3. Other approaches.  I don't even want to imply that the above two
are the main approaches.  I only needed to list two to make my main
point.

        How shall we compare these approaches?  The Dreyfuses
use the metaphor "AI at the crossroads again".  This is wrong.
AI isn't a person that can only go one way.  The headline should
be "A new entrant in the AI race" - to the extent that they
regard connectionism as new, or "An old horse re-enters the
AI race" to the extent that they regard it as a continuation
of earlier work.  There is no a priori reason why both approaches
won't win, given enough time.  Still others are viable.

        However, experience since the 1950s shows that AI is
a difficult problem, and it is very likely that fully understanding
intelligence may take of the order of a hundred years.  Therefore,
the winning approach is likely to be tens of years ahead of the
also-rans.

        The Dreyfuses don't actually work in AI.  Therefore, they take
this "Let's you and him fight" approach by babbling about a crossroads.
They don't worry about dissipating researchers' energy in writing articles
about why other researchers are on the wrong track and shouldn't be
supported.  Naturally there will still be rivalry for funds, and even more
important, to attract the next generation of researchers.  (The
connectionists have reached a new level in this latter rivalry with their
summer schools on connectionism).  However, let this rivalry mainly take
the form of advancing one's own approach rather than denouncing others.
(I said "mainly" not "exclusively".  Critical writing is also important,
especially if it takes the form of "Here's a problem that I think gives
your approach difficulty for the following reasons.  How do you propose to
solve it?"  I hope my forthcoming BBS commentary on Smolensky's "The
Proper Treatment of Connectionism" will be taken in this spirit.)

The trouble is "AI at the Crossroads" suggests that partisans of each
approach should try to grab all the money by zapping all rivals.
Just remember that in the Communist Manifesto, Marx and Engels mentioned
another possible outcome to a class struggle than the one they
advocated - "the common ruin of the contending classes".

------------------------------

Date: 3 Feb 88 02:37:00 GMT
From: alan!tutiya@labrea.stanford.edu  (Syun Tutiya)
Subject: Re: words order in English and Japanese

In article <3725@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>Still,
>it seems to have little to do with the problems that AI researchers busy
>themselves with.  And it has everything to do with what language
>scholars busy themselves with.  Perhaps the participants realize
>instinctively that their views make more sense in this newsgroup.

I am no AI researcher or language scholar, so I find it interesting to
learn that even in AI there could be an argument/discussion as to
whether this is a proper subject or that is not.  Does what AI
researchers are busy with define the proper domain of AI research?
People who answer yes to this question can be safely said to live in
an established discipline called AI.

But if AI research is to be something which aims at a theory about
intelligence, whether human or machine, I would say interests in AI
and those in philosophy is almost coextensive.

I do not mind anyone taking the above as a joke but the following
seems to be really a problem for both AI researchers and language
scholars.

A myth has it that variation in language is a matter of what is called
parameter setting, with the same inborn universal linguistic faculty
only modified with respect to a preset range of parameters.  That
linguistic faculty is relatively independent of other human faculties,
basically.  But on the other hand, AI research seems to be based on the
assumption that all the kinds of intellectual faculty are realized in
essentially the same manner.  So it is not unnatural for an AI
researcher to try to come up with a "theory" which should "explain" what
one of the human faculties is like, an endeavor that sounds very odd
and unnatural to well-educated language scholars.  Nakashima's
original theory may have no grain of truth, I agree, but the following
exchange of opinions revealed, at least to me, that AI researchers on
the netland have lost the real challenging spirit their precursors
shared when they embarked on the project of AI.

Sorry for unproductive, onlooker-like comments.

Syun
(tutiya@csli.stanford.edu)
[The fact that I share the nationality and affiliation with Nakashima
has nothing to do with the above comments.]

------------------------------

Date: 30 Jan 88 17:34:31 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen
      Smoliar)
Subject: interviewing experts


I just read the article, "How to Talk to an Expert," by Steven E. Evanson
in the February 1988 issue of AI EXPERT.  While I do not expect profound
technical insights from this magazine, I found certain portions of this
article sufficiently contrary to my own experiences that I decided a
bit of flaming was in order.  Mr. Evanson is credited as being "a
practicing psychologist in Monterey, Calif., who works in the expert
systems area."  Let me being with the observation that I am NOT a
practicing psychologist, nor is my training in psychology.  What I
write will be based primarily on the four years of experience I had
at the Schlumberger-Doll Research Laboratory in Ridgefield, Connecticut
during which I had considerable opportunity to interact with a wide
variety of field experts and to attempt to implement the results of
those interactions in the form of software.

Mr. Evanson dwells on many approaches to getting an expert to explain
himself.  For the most part, he addresses himself to the appropriate sorts
of probing questions the interviewer should ask.  Unfortunately, one may
conclude from Mr. Evanson's text that such interviewing is a unilateral
process.  The knowledge engineer "prompts" the expert and records what
he has to say.  Such a practice misses out on the fact that experts are
capable of listening, too.  If a knowledge engineer is discussing how an
expert is solving a particular problem, then it is not only valuable, but
probably also important, that the interviewer be able to "play back" the
expert's solution without blindly mimicking it.  In other words, if the
interviewer can explain the solution back to the expert in a way the
expert finds acceptable, then both parties can agree that the information
has been transferred.  This seems to be the most effective way to deal
with one of Mr. Evanson's more important observations:

        It is very important for the interviewer to understand
        how the expert thinks about the problem and not assume
        or project his or her favored modes of thinking into the
        expert's verbal reports.

Maintaining bilateral communication is paramount in any encounter with an
expert.  Mr. Evanson makes the following observation:

        Shallowness of breathing or eyes that appear to defocus
        and glaze over may also be associated with internal
        visual images.

Unfortunately, it may also indicate that the expert is at a loss at that
stage of the interview.  It may be that he has encountered an intractable
problem, but another possibility is that he really has not processed a
question from the interviewer and can't figure out how to reply.  If
the interviewer cannot distinguish "deep thought" from "being at a loss,"
he is likely to get rather confused with his data.  Mr. Evanson would have
done better to cultivate an appreciation of this point.

It is also important to recognize that much of what Mr. Evanson has to say
is opinion which is not necessarily shared "across the board."  For
example:

        As experts report how they are solving a problem, they
        translate internal experiences into language.  Thus
        language becomes a tool for representing the experiences
        of the expert.

While this seems rather apparent at face value, we should bear in mind that
it is not necessarily consistent with some of the approaches to reasoning
which have been advanced by researchers such as Marvin Minsky in his work
on memory models.  The fact is that often language can be a rather poor
medium for accounting for one's behavior.  This is why I believe that it
is important that a knowledge engineer should raise himself to the level
of novice in the problem domain being investigated before he even begins
to think about what his expert system is going to look like.  It is more
important for him to internalize problem solving experiences than to simply
document them.

In light of these observations, the sample interview Mr. Evanson provides
does not serve as a particularly shining example.  He claims that he began
an interview with a family practice physician with the following question:

        Can you please describe how you go about making decisions
        with a common complaint you might see frequently in your
        practice?

This immediately gets things off on the wrong foot.  One should begin with
specific problem solving experiences.  The most successful reported interviews
with physicians have always begun with a specific case study.  If the
interviewer does not know how to formulate such a case study, then he
is not ready to interview yet.  Indeed, Mr. Evanson essentially documents
that he began with the wrong question without explicitly realizing it:

        This question elicited several minutes of interesting
        unstructured examples of general medical principles,
        data-gathering techniques, and the importance of a
        thorough examination but remained essentially unanswered.
        The question was repeated three or four times with
        slightly different phrasing with little result.

From this point on, the level of credibility of Mr. Evanson's account
goes downhill.  Ultimately, the reader of this article is left with
a potentially damaging false impression of what interviewing an expert
entails.

One important point I observed at Schlumberger is that initial interviews
often tend to be highly frustrating and not necessarily that fruitful.
They are necessary because of the anthropological necessity of establishing
a shared vocabulary.  However, once that vocabulary has been set, the
burden is on the knowledge engineer to demonstrate the ability to use
it.  Thus, the important thing is to be able to internalize some initial
problem solving experience enough so that it can be implemented.  At
this point, the expert is in a position to do something he is very good
at:  criticize the performance of an inferior.  Experts are much better
at picking apart the inadequacies of a program which is claiming to
solve problems than at giving the underlying principles of solution.
Thus, the best way to get information out of an expert is often to
give him some novice software to criticize.  Perhaps Mr. Evanson has
never built any such software for himself, in which case this aspect
of interacting with an expert may never have occurred to him.

------------------------------

End of AIList Digest
********************

∂05-Feb-88  0742	LAWS@KL.SRI.COM 	AIList V6 #27 - Consciousness, Nanotechnology   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 Feb 88  07:42:00 PST
Date: Thu  4 Feb 1988 23:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #27 - Consciousness, Nanotechnology
To: AIList@SRI.COM


AIList Digest             Friday, 5 Feb 1988       Volume 6 : Issue 27

Today's Topics:
  Philosophy - Self-Conscious Code and the Chinese Room,
  Applications - Nanotechnology, DNA Sequencing

----------------------------------------------------------------------

Date: 1 Feb 88 18:59:19 GMT
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Reply-to: bwk@mbunix (Barry Kort)
Subject: Re: Self-conscious code and the Chinese room

Jorn Barger identifies an important characteristic of an intelligent
system:  namely the ability to learn and evolve its intelligence.

In thinking about artificial intelligence, I like to draw a distinction
between a sapient system and a sentient system.  A sapient system reposes
knowledge, but does not evolve.  A sentient system adds to its abilities
as it goes along.  It learns.

If the Chinese Room not only applied the rules for manipulating the
squiggles and squoggles, but also evolved the rules themselves so
as to improve its ability to synopsize a story, then we would be more
sympathetic to the suggestion that the room was intelligent.

Here is where the skeleton key comes in.  In computer programming, there
is no inherent taboo that prevents a program from modifying its own code.
Most programmers religiously avoid such practice, because it usually leads
to suicidal outcomes.  But there are good examples of game-playing
programs that do evolve their heuristic rules based on experience.
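
As a concrete illustration of "evolving heuristic rules based on experience,"
here is a minimal sketch in Python (an editorial illustration, not anything
from the posting): a linear evaluation function whose weights are nudged after
each game according to the outcome, loosely in the spirit of Samuel's checkers
learner.  The feature vectors and the simulated game outcomes are invented
stand-ins.

import random

# Editorial sketch: a game evaluator that "evolves" its heuristic weights
# from experience.  Positions are represented by made-up feature vectors.

WEIGHTS = [0.5, 0.5, 0.5]              # heuristic weights, adjusted over time

def evaluate(position):
    # Linear evaluation: weighted sum of the position's features.
    return sum(w * f for w, f in zip(WEIGHTS, position))

def learn_from_game(positions, won, rate=0.05):
    # Nudge weights toward features seen in won games, away otherwise.
    sign = 1.0 if won else -1.0
    for position in positions:
        for i, feature in enumerate(position):
            WEIGHTS[i] += sign * rate * feature

# Simulate many games; each "position" is just a random feature vector here.
for _ in range(100):
    game = [[random.random() for _ in range(3)] for _ in range(10)]
    learn_from_game(game, won=random.random() < 0.5)
print(WEIGHTS, evaluate([1.0, 0.0, 0.5]))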

Jacob Bronowski has said that if man is any kind of machine, he is
a learning machine.  I think that Minsky would agree.

Now if we can just work out the algorithms for learning...

--Barry Kort

------------------------------

Date: Mon, 1 Feb 88 6:32:03 PST
From: jlevy.pa@Xerox.COM
Subject: Newton and Nanotechnology

Point well taken, friends.  I just got carried away with the analogy.  Next time
I'll do better.

References
        AIList V6 #23 - Newton, Nanotechnology, Philosophy

------------------------------

Date: 4 Feb 88 19:31:14 GMT
From: moss!odyssey!gls@rutgers.edu (g.l.sicherman)
Subject: Re: ailist as forum


> >........ This has to be the only forum in the civilized world which allows
> >such claims to be perpetrated without receiving equal portions of ridicule
> >and abuse.  Can it not be stopped?
>
> Obviously, ailist IS a forum where ridicule and abuse is permitted.
> Interestingly, in his book Drexler calls for setting up public forums
> where ideas of alleged scientific merit can be scrutinized openly and
> subjected to ridicule if such is deemed appropriate.

As the saying goes, freedom of the press is great ... if you have a
press.  Now we have the Net.  Let's see whether the people who have
always paid lip service to free speech will stand by the real thing!
Personally, I'm glad to see scholarship get a chance to evolve from an
academic power game to an open group activity for all.

-:-
        "Hey!  My cigarettes are gone!"
        "Sorry, master.  I must obey the First Law of Robotics, you know."
--
Col. G. L. Sicherman
...!ihnp4!odyssey!gls

------------------------------

Date: 1 Feb 88 17:49:18 GMT
From: umix!umich!eecs.umich.edu!dwt@uunet.UU.NET (David West)
Reply-to: umix!umich!eecs.umich.edu!dwt@uunet.UU.NET (David West)
Subject: Re: intelligent nanocomputers


In article <8801251914.AA24568@LANL.GOV> t05rrs%mpx1@LANL.GOV
  (Dick Silbar) writes:
>...to accomplish a century of progress in one hour."  I am reminded of a novel
>some years back by Robert Forward, "Dragon's Egg", in which just that did
>happen in a civilization living on the surface of a neutron star.

Within that novel, no simulation was involved; the civilization "naturally" ran
that fast because the dominant forces in its material basis were baryonic
("strong nuclear") rather than coulomb ("electromagnetic").

-Davi.

------------------------------

Date: 1 Feb 88 18:22:46 GMT
From: umix!umich!eecs.umich.edu!dwt@uunet.UU.NET (David West)
Subject: Re: Intelligent Nanocomputers


Let me be quite clear: in my earlier posting I intended to ridicule neither
Eric Drexler nor the idea of molecular machinery. I *did* intend to ridicule
the idea that meaningful simulation is possible in the absence of sufficient
knowledge and understanding of the system one is allegedly simulating.
-David West

------------------------------

Date: 2 Feb 88 03:12:09 GMT
From: yunexus!unicus!craig@uunet.UU.NET (Craig D. Hubley)
Reply-to: craig@unicus.com (Craig D. Hubley)
Subject: Re: Intelligent Nanocomputers
Article-I.D.: unicus.2152


>In article <8801180618.AA08132@ucbvax.Berkeley.EDU> GODDEN@gmr.COM writes:
>> [...] the book >Engines of Creation< by K. Eric Drexler of MIT. [...]
>>it is not necessary to first understand intelligence.  All one has to do is
>>simulate the brain [...] a complete hardware simulation of the brain can be
>>done [...] in the space of one cubic centimeter [...] such a machine could then
>>just be allowed to run and should be able to accomplish a man-year of
>>work in ten seconds.
>
>The breathtaking simplicity of the idea is awesome.  Of course, some
>technological advances will be necessary for its realization, but note that
>to attain them, it is not necessary to understand technology ... all one has
>to do is simulate its development.  A complete hardware simulation of the
>U.S. technological enterprise can be done in the space of one cubic meter
>(see appendix A) ... such a machine could then just be allowed to run, and
>should be able to accomplish a century of progress in one hour.

The bounding factor on progress thus becomes imagination.  One could
argue that this has always been the case anyway.  The human race's
primary occupation would then become dreaming up strange ideas for
its computers to chew on, prove/disprove, design and build.
This seems almost natural, since our primary occupation has changed
over the past three hundred years from manual labour through
operating machines to moving information around.

The so-called `Third Wave' of information technologies has only
recently (within the last ten years) been widely recognized as such.
It seems that you only see the waves as they wash over you.

Drexler's arguments, for those of you who haven't read the book,
are broadly-based and in places expressionistic, though his appendices
spell out in some detail his reasoning, and several chapters contain
a sort of `question and answer' section where what must be the most
commonly asked skeptical questions are themselves addressed.
This is an intriguing technique for `compressing discourse' that
more controversial books might benefit from: an explicit
answer to questions that would otherwise nag and bias the reader.
If the answers are unsatisfactory, so be it.  At least they are
there to refute.

I think it noteworthy that I've seen Drexler's name and book mentioned
in several electronic and a few conversational forums, and not once
did I ever hear an argument that he didn't explicitly address in his book.
Nor have I heard a credible refutation of any of his points.  On the contrary,
I have heard nothing but enthusiastic recommendation of the book from those
who've read it and receptivity to the ideas from individuals qualified in
the specific fields concerned, from computing to molecular biology to
business.  Some of these individuals were very much skeptics at heart.

I guess I won't be comfortable until I hear somebody *flame* the damn book!

After all, it's annoying to have to just wait around wondering if I'll
ever be able to solve problems just by thinking of them, live forever
(barring accidents) in whatever environment I choose, and live in a
body fortified by a truly formidable defense and immune system.
If it's coming soon, I don't see much point in doing anything other than
working on it, for those of us in technical fields.  And it would solve the
pollution, resource, and food problems all at once as a side effect!

I'm afraid that reading this book puts truly big ideas into one's head.
Don't read it unless your head is big enough to contain them.  :-)
And won't *someone* please flame the book!!!

        Craig Hubley, Unicus Corporation, Toronto, Ont.
        craig@Unicus.COM                                (Internet)
        {uunet!mnetor, utzoo!utcsri}!unicus!craig       (dumb uucp)
        mnetor!unicus!craig@uunet.uu.net                (dumb arpa)

------------------------------

Date: Wed, 3 Feb 1988  01:25 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Nanotechnology

Those reactionaries who were flaming at Drexler's ideas ought to read
this week's issue of Nature.  A group at the IBM Almaden Research
Center in San Jose has used a scanning tunneling microscope to pin a single
molecule of dimethyl phthalate to the surface of a graphite sheet and
then to rearrange its atoms, and see the results.  The exact details
of the rearrangement are not yet controllable, but the aromatic
subgroups are clearly visible.  (Dimethyl phthalate is about the size
of two benzene rings.)  The operations can be done at sub-microsecond
speed, using pulses on the order of 0.1 microsecond at 3.5 volts.

Progress in this direction certainly seems faster than almost anyone
would have expected.  I will make a prediction: in the next few years,
various projects will request and obtain large budgets for the "human
genome sequencing" enterprise.  In the meantime, someone will succeed
in stretching single strands of protein, DNA, or RNA across
crystalline surfaces, and sequence them, using the STM method.
Eventually, it should become feasible to do such sequencing at
multi-kilocycle rates, so that an entire chromosome could be logged in
a few days.

Using this system for constructive operations lies further in the
future; however, it might sooner be feasible to introduce controlled
damage to genetic elements.  This would, for example, make it easy to
inactivate particular gene-promoters and, thus, to remove a bad gene.
Incidentally, these operations can be performed inside a drop of
liquid (the STM does not need a vacuum).  So it ought to be feasible to
put the altered genetic material back into a cell.

------------------------------

End of AIList Digest
********************

∂11-Feb-88  1354	LAWS@KL.SRI.COM 	AIList V6 #28 - XLISP, Genetic Algorithms, Methodology    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 11 Feb 88  13:54:31 PST
Date: Sun  7 Feb 1988 23:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #28 - XLISP, Genetic Algorithms, Methodology
To: AIList@SRI.COM


AIList Digest             Monday, 8 Feb 1988       Volume 6 : Issue 28

Today's Topics:
  Query - Knowledge Pro & CLOS Compilation & SWE References &
    CAI & Neural-Net Survey,
  AI Tools - XLISP 1.7 & Genetic Algorithms,
  Binding - Rick Riolo,
  Methodology - Diversity & Interviewing Experts

----------------------------------------------------------------------

Date: Sat, 6 Feb 88 10:23:59 EST
From: Brady@UDEL.EDU
Subject: Query - Knowledge Pro

The product Knowledge Pro
has been heavily advertised in
AIExpert, and in other publications,
but I have seen no product reviews.
I would appreciate hearing about
user experiences. Also, if any product
reviews have been published, please tell
me where. I will summarize and
resend replies to the net.

My intended use would be in computer aided
instruction.

Thank you in advance.

------------------------------

Date: 6 Feb 88 16:24:50 GMT
From: pitt!cisunx!jasst3@cadre.dsl.pittsburgh.edu  (Jeffrey A.
      Sullivan)
Subject: CLOS Compilation Question

I have recently gotten the PCL code from the Xerox parcvax.  In the defsys
file, the variable *pcl-files* lists a file called "low.lisp" which
is not in the directories anywhere.  What should be done about this?  The
PCL code will not compile without a low.lisp, even though I have the coral-low
file, which is my machine-specific version.  Should I rename coral-low.lisp to
low.lisp (I don't think so; both are listed separately and one depends on the
other)?  Can someone either send me the low.lisp file or tell me what should
be in it?  I have created an empty low.lisp file and compilation has progressed
(apparently) normally, but memory constraints keep the full PCL from finishing
yet, so I don't know if that will work.

Thanks in advance,


--
..........................................................................
Jeff Sullivan                           University of Pittsburgh
pitt!cisunx!jasst3                      Intelligent Systems Studies Program
jasper@PittVMS (BITNET)                 Graduate Student

------------------------------

Date: Sat, 6 Feb 88 21:57 EDT
From: LEWIS%cs.umass.edu@RELAY.CS.NET
Subject: request for refs on SWE for AI

   I'm currently taking a seminar on Software Engineering and AI. It's supposed
to be balanced, but right now we've found many more papers on applying AI to
software engineering than we have on software engineering applied to AI.
Does anyone have some suggested papers on programming techniques,
language design, environments, methodology, etc. for AI or LISP?

Thanks,

David D. Lewis                         CSNET: lewis@cs.umass.edu
COINS Dept.                            BITNET: lewis@umass
University of Massachusetts, Amherst
Amherst, MA 01003

------------------------------

Date: 2 Feb 88 16:58:26 GMT
From: dalcs!aucs!870158a@uunet.uu.net  (Benjamin Armstrong)
Subject: Becoming CAI literate

I have, of late, become fascinated by the as yet unexplored possibilities for
the use of computers in all levels of our education systems.  A book called
"Mindstorms" by Seymour Papert has been most influential in inspiring me to
seek out and digest as much information regarding computers in education as I
can.  I have not yet, however, found discussions on the net concerning such
topics as: the design and evaluation of educational software; the effects of
introducing computers into the schools on the social organization of
classrooms; "computer as teacher" vs. "computer as learning tool"; and the
availability of microcomputers to students.

I hope that someone out there will either offer me some opinions on the above
topics or direct me to a newsgroup where such discussions take place.

  [The newsgroup is AI-ED@SUMEX.STANFORD.EDU.  -- KIL]

------------------------------

Date: 5 Feb 88 04:39:58 GMT
From: nuchat!uhnix1!cosc2mi@uunet.uu.net  (Francis Kam)
Subject: neural-net

I am working on the learning aspects of the neural net model in computing
and would like to know what's happening in the rest of the neural net
community in the following areas:
  1) neural net models
  2) neural net learning rules
  3) experimental (analog, digital, optical) results of any kind with
     figures;
  4) neural net machines (commercial, experimental, any kind);
  5) any technical reports in these areas;

For information exchange and discussion purpose,
please send mail to mkkam@houston.edu.
Thank you.

------------------------------

Date: Fri, 5 Feb 88 13:35:16 MST
From: t05rrs%mpx1@LANL.GOV (Dick Silbar)
Subject: XLISP 1.7

In V6 #19 Bill Delaney asks where he can get XLISP 1.5.  XLISP 1.7 can
be obtained from the Pioneer Valley PC User's Group on floppy diskette for
$6 plus $5 one-year membership fee ($15 if outside US or Canada) plus $1
postage ($5 if outside US or Canada).  I have not used 1.7 much myself,
but this version comes with examples and much better documentation than 1.5.
It includes, I believe, a C source listing.

The PVPCUG is at P.O. Box H, North Amherst, MA 01059.

------------------------------

Date: Fri, 5 Feb 88 10:08:02 PST
From: rik@sdcsvax.ucsd.edu (Rik Belew)
Subject: A short definition of Genetic Algorithms

Mark Goldfain asks:
        Would someone do me a favor and post or email a short definition of the
        term "Genetic Learning Algorithm" or "Genetic Algorithm" ?

I feel that the term "Genetic Algorithms" has two not quite distinct meanings
these days.  First, there is a particular (class of) algorithms developed
by John Holland and his students.  This GA(1) has as its most distinctive
feature the "cross-over" operator, which Holland has gone to some
effort to characterize analytically.  Then there is a broader class GA(2)
of genetic algorithms (sometimes also called "simulated evolution") that
bear some loose resemblance to population genetics.  These date back
to at least Fogel, Owens and Walsh (1966).  Generally, these
algorithms make use of only a "mutation" operator.
        The complication comes with work like Ackley's thesis (CMU, 1987),
which refers to Holland's GA(1) but which is most accurately
described as a GA(2).
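
To make the distinction concrete, here is a minimal sketch in Python (an
editorial illustration, not Belew's code) of a GA(1)-style loop over bit
strings, with fitness-proportionate selection, single-point crossover, and
bit-flip mutation.  Dropping the crossover step and keeping only mutation
gives something closer to the GA(2) "simulated evolution" family.  The toy
fitness function simply counts 1 bits so that progress is easy to see.

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(genome):
    return sum(genome)

def crossover(a, b):
    # Single-point crossover -- the operator distinctive of Holland's GA(1).
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    # Bit-flip mutation -- the operator shared with the GA(2) family.
    return [1 - bit if random.random() < rate else bit for bit in genome]

def select(population):
    # Fitness-proportionate ("roulette wheel") selection.
    weights = [fitness(g) + 1 for g in population]
    return random.choices(population, weights=weights, k=1)[0]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
print(max(fitness(g) for g in population))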

Richard K. Belew

rik@cs.ucsd.edu

Computer Science & Engr. Dept.  (C-014)
Univ. Calif - San Diego
San Diego, CA 92093

------------------------------

Date: 5 Feb 88 18:22:21 GMT
From: g451252772ea@deneb.ucdavis.edu  (0040;0000003980;0;327;142;)
Subject: Re: Cognitive System using Genetic Algo


I offer definitions by (1) aspersion (2) my broad characterization (3) one
of J Holland's shortest canonical characterizations and (4) application.

(1)  GA are anything J Holland and/or his students say they are.  (But this
_is_ an aspersion on a rich, subtle and creative synthesis of formal systems
and evolutionary dynamics.)

(2) Broadly, GA are an optimization method for complex (multi-peaked, multi-
dimensional, ill-defined) fitness functions.  They reliably avoid local
max/min, and the search time is much less than random search would require.
Production rules are employed, but only as mappings from bit-strings (with
wild-cards) to other bit strings, or to system outputs.  System inputs are
represented as bitstrings.  The rules are used stochastically, and in
parallel (at least conceptually; I understand several folk are doing
implementations, too).

A pretty good context paper for perspective (tho weak on the definition of
GA!) is the Nature review 'New optimization methods from physics and
biology' (9/17/87, pp.215-19).  The author discusses neural nets,
simulated annealing, and one example of GA, all applied to the TSP, but
comments that "... a thorough comparison ... _would be_ very interesting"
(my emphasis).

(3)  J. Holland, "Genetic algorithms and adaptation", pp. 317-33 in
ADAPTIVE CONTROL OF ILL-DEFINED SYSTEMS, 1984, Ed. O. Selfridge, E. Rissland,
M. A. Arbib.  Page 319 has:
"In brief, and very roughly, a genetic algorithm can be looked
upon as a sampling procedure that draws samples from the set C; each
sample drawn has a value, the fitness of the corresponding genotype.
From this point of view the population of individuals at any time t,
call it B(t), is a _set_ of samples drawn from C.  The genetic algo-
rithm observes the fitnesses of the individuals in B(t) and uses
this information to generate and test a new set of individuals,
B(t+1).  As we will soon see in detail, the genetic algorithm uses
the familiar "reproduction according to fitness" in combination with
crossing over (and other genetic operators) to generate the new
individuals.  This process progressively biases the sampling pro-
cedure toward the use of _combinations_ of alleles associated with
above-average fitness.  Surprisingly, in a population of size M, the
algorithm effectively exploits some multiple of M↑3 combinations in
exploring C.  (We shall soon see how this happens.)  For populations
of more than a few individuals this number, M↑3, is vastly greater
than the total number of alleles in the population.  The correspond-
ing speedup in the rate of searching C, a property called _implicit
parallelism_, makes possible very high rates of adaptation.  Moreover,
because a genetic algorithm uses a distributed database (the popu-
lation) to generate new samples, it is all but immune to some of the
difficulties -- false peaks, discontinuities, high-dimensionality,
etc. -- that commonly attend complex problems."

Well, _I_ shall soon close here, but first the few examples of applications
that I know of (the situation reminds me of the joke about the two rubes
visiting New York for the first time, getting off the bus with all of
$2.50.  What to do?  One takes the money, disappears into a drugstore
and reappears having bought a box of Tampax.  Quoth he, "With Tampax,
you can do _anything_!")  Anyway:

o       As noted, the TSP is a canonical candidate.
o       A student of Holland has implemented a control algorithm for
a gas pipe-line center, which monitors and adaptively controls flow
rates based on cyclic usages and arbitrary, even ephemeral, constraints.
o       Of course, some students have done some real (biological) population
genetics studies, which I note are a tad more plausible than the usual
haploid, deterministic equations.
o       Byte mag. has run a few articles, e.g. 'Predicting International
Events' and 'A bit-mapped Classifier' (both 10/86).
o       Artificial animals are being modelled in artificial worlds.  (When
will the Vivarium let some of their animated blimps ("fish") be so programmed?)

Finally, I noted above that the production rules take system inputs as
bit-strings.  This representation allows for induction, and opens up a
large realm of cognitive science issues, addressed by Holland et al in
their newish book, INDUCTION.
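
As an editorial illustration of the bit-string-with-wild-cards representation
mentioned in (2) above (the rules and action names below are invented), a
classifier-style condition over the alphabet {0, 1, #} can be matched against
a binary input message like this:

# Editorial sketch of classifier-system style matching: conditions are strings
# over {0, 1, #}, where '#' is the wild-card and matches either bit.

def matches(condition, message):
    # True if every position of the condition matches the message.
    return len(condition) == len(message) and all(
        c == '#' or c == m for c, m in zip(condition, message))

rules = {
    "1##0": "open_valve",      # hypothetical actions, for illustration only
    "0#11": "close_valve",
}

message = "1010"
fired = [action for condition, action in rules.items()
         if matches(condition, message)]
print(fired)                   # -> ['open_valve']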

Hope this helps.  I really would like to hear about other application
areas; pragmatic issues are still unclear in my mind also, but, as is apparent,
the GA model has intrinsic appeal.




Ron Goldthwaite / UC Davis, Psychology and Animal Behavior
'Economics is a branch of ethics, pretending to be a science;
 ethology is a science, pretending relevance to ethics.'

------------------------------

Date: 3 Feb 88 18:13:15 GMT
From: umich!dwt@umix.cc.umich.edu  (David West)
Subject: Re: Classifier System Testbed

In article <241@wright.EDU> joh@wright.EDU (Jae Chan Oh) writes:
>Does anyone know where Rick Riolo (a former grad. student at Univ. of
>Mich.) is located at present, or how can I reach him by email...

You should be able to reach him at  Rick_Riolo@ub.cc.umich.edu
(case not significant).
-David.

------------------------------

Date: Fri, 5 Feb 88 11:15:48 EST
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: Diversity

  While I realize that it is incredibly headstrong for an upstart like me
to feel compelled to echo the words of someone like McCarthy, I wanted to
quickly reply to his note about there being room for many approaches to AI
with a resounding ``Hurrah.''
  I do, however, want to add one thing: not only is there room for different
approaches, but it may be crucial to examine methodologies which are hybrids
of the differing techniques -- perhaps the whole can be stronger than the sum
of the parts.  The notion of logic, connectionism, cognitive modeling, and
so on as different `paradigms,' using the strong meaning of that term, seems
to me to be dangerously divisive.  The problem is so hard, it is difficult
to believe that any one of the current approaches could possibly hold all
the answers.
 Finally, let me briefly note that it is possible to create these sorts of
mixed paradigm systems.  Not only has my own work shown the possibility of
reconciling differing approaches to activation-spreading (integrating
a connectionist network and a semantic network in such a way that they
communicate via a marker-passing-like spreading-activation mechanism), but
some of the recent work in connectionist natural language processing* and
work in structured connectionism** also seem to indicate that systems
blending the technologies hold promise.
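
For readers who have not seen a marker-passing or spreading-activation
mechanism, the following is a minimal editorial sketch (not Hendler's system):
activation starts at source nodes of a toy semantic network and decays as it
fans out, and nodes reached strongly from several sources suggest connecting
concepts.  The network and the decay factor are invented for illustration.

# Editorial sketch of marker-passing-like spreading activation over a graph.

NETWORK = {                        # node -> neighbours in a toy semantic net
    "restaurant": ["food", "waiter", "bill"],
    "food": ["eat", "hunger"],
    "bill": ["money", "pay"],
    "birthday": ["cake", "party"],
    "cake": ["food", "eat"],
}

def spread(sources, decay=0.5, threshold=0.05):
    # Breadth-first spread of activation, attenuated by `decay` at each hop.
    activation = {node: 1.0 for node in sources}
    frontier = list(sources)
    while frontier:
        node = frontier.pop(0)
        for neighbour in NETWORK.get(node, []):
            new = activation[node] * decay
            if new > activation.get(neighbour, 0.0) and new > threshold:
                activation[neighbour] = new
                frontier.append(neighbour)
    return activation

# Activation from two sources; nodes reached from both hint at a connection.
print(spread(["restaurant", "birthday"]))
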
 Thus, instead of viewing things as a horse race with each entrant ridden by
its own set of jockeys, we should try to harness the steeds together for
maximum horsepower.
  -Jim Hendler
   Dept. of Computer Science
   UMCP

* Jordan Pollack's recent doctoral thesis provides an excellent discussion of
many of these systems.
** The work at Rochester by Feldman et al. and the work of Shastri, now
at UPenn, are good starting places for more information on the structured
connectionist approaches to traditional AI tasks.

------------------------------

Date: 5 Feb 88 02:58:18 GMT
From: fordjm@byuvax.bitnet
Subject: RE: interviewing experts


Note:  The following article is from both Larry E. Wood and John
M. Ford of  Brigham Young University.

We have also recently read Evanson's AI Expert article on
interviewing experts  and feel that some discussion of this topic
would prove useful.  Relative to Steve  Smoliar's reactions, we
feel it is appropriate to begin with a disclaimer of sorts.  As
cognitive psychologists, we hope those reading Evanson's article
will not judge the potential contributions of psychologists by
what they find there.  Some of the points Evanson  chooses to
emphasize seem counterintuitive (and perhaps counterproductive)
to us as well.  We attribute this in part to his being a
practicing clinician rather than a specialist in cognitive
processes.

On a more positive note, as relative newcomers to the newly
emerging field of knowledge engineering (two years), we do
believe that there are social science disciplines which can make
important contributions to the field.  These disciplines include
cognitive science research  methodology, educational measurement
and task analysis, social science survey  research,
anthropological research methods, protocol analysis, and others.

While knowledge elicitation for the purpose of building expert
systems (or  other AI applications) has its own special set of
problems, we believe that these social science disciplines have
developed some methods which knowledge engineers  can adapt to
the task of knowledge elicitation and documentation.  Two
examples of such interdisciplinary  "borrowing" which are
presently influencing knowledge engineering are the  widespread
use of protocol analysis methods (see a number of articles in
this  year's issues of the International Journal of Man-Machine
Studies) and the  influence of anthropological methods and
perspectives (alluded to by Steve  Smoliar in his previous
posting and represented in the work of Marianne LaFrance; see
also this year's IJM-MS).  It is our belief that there are other
areas in the social sciences which can make important
contributions, but which  are not yet well known in AI circles.

This is *not* intended as a blanket endorsement of approaches to
knowledge  elicitation based on social science disciplines.  We
do, however,  believe that it is important for practicing
knowledge engineers to attend to methodologies developed outside
of AI so that they can spend their time  refining and extending
their application to AI rather than "reinventing the  wheel."

We have a paper in preparation  which addresses some of these
issues.


Larry E. Wood                      John M. Ford
woodl@byuvax.bitnet                fordjm@byuvax.bitnet

------------------------------

End of AIList Digest
********************

∂14-Feb-88  0116	LAWS@KL.SRI.COM 	AIList V6 #29 - Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 14 Feb 88  01:16:45 PST
Date: Sat 13 Feb 1988 22:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #29 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Sunday, 14 Feb 1988       Volume 6 : Issue 29

Today's Topics:
  Policy - Seminars,
  Seminar - New Logics for Linguistic Descriptions (BBN) &
    The Power of Vacillation (SUNY) &
    Qualitative ODEs using Linear Approximations (UToronto),
  Conference - Computational Learning Theory &
    3rd European Working Session on Learning 1988 &
    Micros, Expert Systems in Planning, Transport, Building

----------------------------------------------------------------------

Date: Fri, 5 Feb 88 09:07:24 PST
From: rifrig@Sun.COM (Christopher Rigatuso)
Subject: Policy - Seminars

 How come we get seminar announcements for Feb 4th,
 on Feb 5th?

 It seems that this type of thing happens
 occasionally on this alias.

 --Chris.

  [I used to send out seminar notices as they came in, trying
  to maximize their usefulness.  Readers found this annoying,
  however, because lengthy seminar and conference notices were
  mixed into nearly every digest.  The list members seem to
  prefer having conference notices segregated so that this
  substream may be easily skipped or archived.  Such collection
  introduces a delay of up to a week.  Note that the purpose
  of seminar abstracts (N.B.: abstracts are required) on AIList
  is to inform those who can't attend seminars rather than to
  alert those who can.  -- KIL]

------------------------------

Date: Tue 9 Feb 88 09:03:58-EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Seminar - New Logics for Linguistic Descriptions (BBN)


                         BBN Science Development Program
                          AI/Education Seminar Series



                 SOME NEW LOGICS FOR LINGUISTIC DESCRIPTIONS

                               William Rounds
                          CSLI, Stanford University
                                Xerox PARC
                         (ROUNDS@Russell.Stanford.EDU)


                           BBN Laboratories Inc.
                            10 Moulton Street
                    Large Conference Room, 2nd Floor

                10:30 a.m., Tuesday, February 23, 1988


Abstract:  Unification-based grammar formalisms typically use attribute-
value matrices as repositories of information derived from
utterances. In previous work we have shown how to represent
grammatical specifications as logical formulas which speak directly
about these matrices. This involved the use of a particularly
simple form of deterministic propositional dynamic logic. In this talk,
we will review this logic, and then discuss how to extend the
logic to speak about set-valued matrices, which involves
a notion of nondeterminism.

Examples will be given involving modeling common knowledge
as a certain non-wellfounded set (its elements include the set itself),
and some coordination phenomena in lexical-functional grammar.
Each example illustrates a particular kind of logical expression.
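
As background on the data structures the talk refers to: attribute-value
matrices can be pictured as nested mappings from features to atomic values or
further matrices, and unification merges two matrices when they do not
conflict.  The following is a minimal editorial sketch (not Rounds's
formalism), ignoring re-entrancy and the set-valued extension discussed above.

# Editorial sketch of unifying attribute-value matrices as nested dicts.

class UnificationFailure(Exception):
    pass

def unify(a, b):
    # Least upper bound in the subsumption order: carries all information
    # from both a and b, or fails on conflicting atomic values.
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, value in b.items():
            result[key] = unify(result[key], value) if key in result else value
        return result
    if a == b:                     # atomic values must agree exactly
        return a
    raise UnificationFailure(f"{a!r} conflicts with {b!r}")

np_agr = {"num": "sg", "per": 3}
vp_agr = {"num": "sg"}
print(unify(np_agr, vp_agr))       # -> {'num': 'sg', 'per': 3}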

------------------------------

Date: 9 Feb 88 15:29:07 GMT
From: decvax!sunybcs!rapaport@ucbvax.Berkeley.EDU  (William J.
      Rapaport)
Subject: Seminar - The Power of Vacillation (SUNY)


           STATE UNIVERSITY OF NEW YORK AT BUFFALO

               DEPARTMENT OF COMPUTER SCIENCE

                         COLLOQUIUM

                  THE POWER OF VACILLATION

                         John Case
               Department of Computer Science
          State University of New York at Buffalo

     Recursion  theory  provides  a   relatively   abstract,
elegant  account of the absolute boundaries of computability
by discrete machines.  The insights it can provide are  best
described  as  philosophical.  In this talk I examine a sub-
part of this theory pertaining to machine learning, specifi-
cally, in this case, language learning.

     I   will   describe   Gold's   influential,   recursion
theoretic, language-learning paradigm (and variations on the
theme), point out its easily seen, considerable  weaknesses,
but  then  argue,  by  means of example theorems, that it is
possible, nonetheless, to obtain some insights into language
learning within the general context of this paradigm.
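
For readers unfamiliar with Gold's paradigm, the following is a toy editorial
sketch (not from the talk) of identification in the limit by enumeration: the
learner sees a growing text of positive examples and always outputs the first
hypothesis, in a fixed enumeration, that is consistent with the data so far.
The hypothesis list and the example stream are invented.

HYPOTHESES = [
    ("even numbers",   lambda data: all(x % 2 == 0 for x in data)),
    ("multiples of 3", lambda data: all(x % 3 == 0 for x in data)),
    ("all naturals",   lambda data: True),
]

def learner(text_so_far):
    # Output the first hypothesis consistent with every example seen so far.
    for name, consistent in HYPOTHESES:
        if consistent(text_so_far):
            return name

# A presentation ("text") of the language of multiples of 3:
stream = [6, 12, 3, 9, 18]
for i in range(1, len(stream) + 1):
    print(stream[:i], "->", learner(stream[:i]))
# The guess changes from "even numbers" to "multiples of 3" and then never
# changes again: the learner has identified the language in the limit.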

     For example, I will  squeeze  some  insight  out  of  a
theorem to the effect that allowing a kind of vacillation in
the convergent behavior  of  algorithmic,  language-learning
devices  leads,  perhaps  unexpectedly,  to greater learning
power.

     I'll sketch the proofs of a couple of the theorems,  in
part  to  convince you they are true, but mostly because the
proofs are beautiful and illustrative of techniques  in  the
area.

           Date:   Thursday, 11th February, 1988
                 Time:   3:30 pm to 4:30 pm
              Place:   Bell 337, Amherst Campus

   Wine and Cheese will be served at 4:30 pm at Bell 224.

       For further information, call (716) 636-3199.

------------------------------

Date: 11 Feb 88 22:14:07 GMT
From: Armin Haken <armin%ai.toronto.edu@RELAY.CS.NET>
Subject: Seminar - Qualitative ODEs using Linear Approximations
         (UToronto)

There will be an AI seminar on Tuesday 23 February at 2PM in room
SF 1105, given by Dr. Elisha Sacks of MIT.  [...]
Hosting is Hector Levesque.


  Automatic Qualitative Analysis of Ordinary Differential Equations
              Using Piecewise Linear Approximations

                           by Elisha Sacks

This talk explores automating the qualitative analysis of physical
systems.  Scientists and engineers model many physical systems with
ordinary differential equations.  They deduce the behavior of the
systems by analyzing the equations.  Most realistic models are
nonlinear, hence difficult or impossible to solve explicitly.  Analysts
must resort to approximations or to sophisticated mathematical
techniques.  I describe a program, called PLR (for Piecewise Linear
Reasoner), that formalizes an analysis strategy employed by experts.
PLR takes parameterized ordinary differential equations as input and
produces a qualitative description of the solutions for all initial values.
It approximates intractable nonlinear systems with piecewise linear
ones, analyzes the approximations, and draws conclusions about the
original systems.  It chooses approximations that are accurate enough
to reproduce the essential properties of their nonlinear prototypes, yet
simple enough to be analyzed completely and efficiently.

PLR uses the standard phase space representation.  It builds a
composite phase diagram for a piecewise linear system by pasting
together the local phase diagrams of its linear regions.  It employs a
combination of geometric and algebraic reasoning to determine whether
the trajectories in each linear region cross into adjoining regions and
summarizes the results in a transition graph.  Transition graphs
explicitly express many qualitative properties of systems.  PLR derives
additional properties, such as boundedness or periodicity, by theoretical
methods.  PLR's analysis depends on abstract properties of systems
rather than on specific numeric values.  This makes its conclusions
more robust and enables it to handle parameterized equations
transparently.  I demonstrate PLR on several common nonlinear systems
and on published examples from mechanical engineering.
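
To give a concrete flavour of the strategy the abstract describes, here is a
toy editorial sketch (not PLR itself): the damped pendulum x'' = -sin(x) - c x'
is replaced by linear pieces near its two equilibria, and each piece's fixed
point is classified from the eigenvalues of its coefficient matrix, which is
the kind of local information a phase diagram is then pasted together from.

import numpy as np

damping = 0.1
regions = {
    # region -> matrix A of the linearised system  d/dt [x, v] = A [x, v]
    "near x = 0":  np.array([[0.0, 1.0], [-1.0, -damping]]),  # sin(x) ~ x
    "near x = pi": np.array([[0.0, 1.0], [ 1.0, -damping]]),  # sin(x) ~ -(x - pi)
}

for name, A in regions.items():
    eigenvalues = np.linalg.eigvals(A)
    real_parts = eigenvalues.real
    if np.all(real_parts < 0):
        kind = "stable (spiral/node)"
    elif np.any(real_parts > 0) and np.any(real_parts < 0):
        kind = "saddle"
    else:
        kind = "unstable or centre"
    print(f"{name}: eigenvalues {np.round(eigenvalues, 3)} -> {kind}")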

  || Armin Haken                                  armin@ai.toronto.edu ||
  || (416)978-6277                               ...!utcsri!utai!armin ||
  || UofT DCS, Toronto M5S 1A4 CDN        armin%ai.toronto@csnet-relay ||

------------------------------

Date: Wed, 10 Feb 88 15:29:57 CST
From: pitt@p.cs.uiuc.edu (Lenny Pitt)
Subject: Conference - Computational Learning Theory


                          CALL FOR PAPERS

             Workshop on Computational Learning Theory

                     Cambridge, Massachusetts
                         August 3-5, 1988


     The first workshop on Computational Learning  Theory  will  be
held at MIT August 3-5, 1988.  It is expected that most papers will
consist of rigorous and formal analyses of  theoretical  issues  in
Machine  Learning.  Empirical work will be considered only if it is
testing some hypothesis that has a quantitative theoretical  basis.
Possible topics include, but are not limited to:

1. resource, convergence-rate and robustness analysis (time, space,
   number  of examples, noise sensitivity, etc.) of specific learn-
   ing algorithms,

2. general learnability and non-learnability  results  in  existing
   computational learning models and general upper and lower bounds
   on resources required for learning, and

3. new computational learning models, extensions of existing learn-
   ing models, and theoretical comparisons among learning models.

     Papers that make formal connections  with  work  in  Robotics,
Neural  Nets,  Pattern  Recognition, Adaptive Signal Processing and
Cryptography are also welcome.

TO REGISTER FOR THE WORKSHOP

     Due to space limitations, registration for the  workshop  will
be  limited  to 60.  If you would like to participate, send a brief
(one page max.) description of your current research, by April  15,
to  the  address  below.   Participants  will be notified, and sent
registration information, by June 1.   It  is  possible  that  some
financial  support  will be available for graduate student partici-
pants.

TO SUBMIT A PAPER

Authors should submit extended abstracts that consist of:

(1)  A cover page with title, author's names and addresses  (e-mail
     also if possible), and a 200 word summary.

(2)  A body not exceeding 5 pages in twelve-point  font.   A  brief
     statement of the definitions and model used followed by a list
     of theorems with proof  sketches  is  suggested.   A  succinct
     statement  on  the  significance of the results should also be
     included.

Authors should send 8 copies of their submissions to

                            John Cherniavsky
               Workshop on Computational Learning Theory
                     Department of Computer Science
                         Georgetown University
                        Washington, D.C.  20057


     The deadline for receiving  submissions  is  April  15,  1988.
This  deadline  is  FIRM. Authors will be notified by June 1, final
camera-ready papers will be due July 1.

Organizing/program committee:

 David Haussler, UC Santa Cruz,  (workshop  co-chair);
 Leonard Pitt,  U.  Illinois, (workshop co-chair);
 John Cherniavsky, Georgetown University, (program committee  chair);
 Ronald  Rivest,  MIT, (local  arrangements);
 Dana  Angluin, Yale University;
 Carl Smith, NSF;
 Leslie Valiant, Harvard University;
 Manfred Warmuth, UC Santa Cruz.

------------------------------

Date: Tue, 9 Feb 88 10:32:34 PST
From: Haym Hirsh <HIRSH@SUMEX-AIM.Stanford.EDU>
Subject: Conference - 3rd European Working Session on Learning 1988

[forwarding on request of Sleeman]
From: Derek Sleeman <sleeman%csvax.aberdeen.ac.uk@NSS.Cs.Ucl.AC.UK>


                                    EWSL 88


                    3rd European Working Session on Learning


            Turing Institute, Glasgow, Scotland - October 3-5, 1988
                                CALL FOR PAPERS


    EWSL  is  an  annual  meeting  on  Machine  Learning  enabling  European
    researchers to present recent research results.   However, participation
    is NOT limited to Europeans.  The first EWSL meeting was held at  Orsay,
    France, in February 1986;  the second was held in Bled, Yugoslavia.


          TOPICS


    The emphasis will be on Machine Learning, but Cognitive Science
    studies pertinent to the theme will also be most welcome.

    SUBMISSION OF PAPERS

    Authors should submit five copies of papers (in English) of no more than
    5000 words by 1 May 1988 to the Program Chairman:


              Derek Sleeman (EWSL-88)
              Department of Computing Science
              University of Aberdeen
              ABERDEEN  AB9 2UB
              Scotland  UK

              Tel No. Aberdeen (+44 224) 272288;  Telex 73458


    The title page should contain the following information: authors' names
    and addresses; telephone and telex numbers for the contact person;
    an abstract of 100-200 words; and up to 10 descriptive keywords.


    TIMETABLE

    - submission deadline:  1 May 1988
    - notification of acceptance or rejection:  1 July 1988
    - camera ready copy:  15 August 1988

    All delegates will receive a copy of the  proceedings  on  registration;
    it  is  intended  to publish a selection of the papers in a revised form
    after the meeting.

          PROGRAMME COMMITTEE MEMBERS

          Ivan BRATKO, Ljubljana, Yugoslavia  Bernd NORDHAUSEN, Muenchen.
          Pavel BRAZDIL, Porto, Portugal      Claus ROLLINGER, Stuttgart.
          Francoise FOGELMAN, Paris.          Derek SLEEMAN, Aberdeen.
          Yves KODRATOFF, Orsay.              Martin STACEY, Aberdeen.
          Donald MICHIE, Glasgow.             Luc STEELS, Brussels.
          Steve MUGGLETON, Glasgow.           Bob WIELINGA, Amsterdam.

          FORMAT OF THE CONFERENCE

    The conference  will  consist  of  invited  talks,  submitted  technical
    papers, short project progress reports, and in-depth discussions on spe-
    cial topics.   Suggestions for panel topics are invited  and  should  be
    sent to the Program Chairman.

          LOCAL ORGANISATION

          Tim Niblett/Jim Richmond
          Turing Institute
          George House
          36 North Hanover Street
          GLASGOW  G1 2AD


          Phone:  +44 41 552 6400.

------------------------------

Date: 9 Feb 88 22:08:35 GMT
From: munnari!dbrmelb.oz.au!ron%dbrmelb.dbrhi.OZ@uunet.UU.NET (Ron
      Sharpe)
Subject: Conference - Micros, Expert Systems in Planning, Transport,
         Building


INTERNATIONAL WORKSHOP:
MICROCOMPUTERS AND EXPERT SYSTEMS FOR PLANNING, TRANSPORT AND BUILDING

When         :   Tues. April 26th 1988    (Services planning)
                 Wed.  April 27th 1988    (Building planning)
                 Thur. April 28th 1988    (Expert systems)

Where        :  AUSTRADE  (Australian Trade Commission)
                24th Floor, Harbour Centre, 25 Harbour Road
                Wan Chai
                HONG KONG

Theme        :  Government and private enterprise currently face increasing
                responsibilities that must be met with diminishing
                resources in a period of rapid technological and social
                change. Microcomputer technology offers opportunities for
                improved capabilities for evaluating the services required,
                and in providing information for decision making. These
                seminars and workshops will describe the development of over
                15 microcomputer-based packages expressly designed for
                planners, engineers and management. Expert systems
                applications under development will be presented.

                Eleven key planners and engineers from the People's Republic
                of China & the Philippines will be participating in the workshop
                (under sponsorship from AIDAB - Australian International
                Development Assistance Bureau). This will provide a valuable
                opportunity for other participants wishing to establish
                linkages in those countries.

Cost         :  $HK300 per day, or $HK550 for 2 days, or $HK800
                for 3 days (includes luncheon, refreshments and software
                technical handouts). Discount: 10% for early registration
                before March 7th. (Australian costs: $A60, $A110, $A160
                respectively).
                As the number of places is limited, acceptance will be in
                order of receipt of registration.

Further details &
Enquiries:      Dr Ron Sharpe, Commonwealth Scientific & Industrial Research
                Organisation (CSIRO), Div. of Construction & Engineering,
                PO Box 56, Highett, Vic 3190, Australia.
                Phone (+61 3) 556 2211, Fax (+61 3) 553 2819, Telex AA33766
                e-mail:  ron@dbrmelb.dbrhi.oz

                Dr T Y Chen, Centre of Computer Studies & Applications,
                University of Hong Kong, Pokfulam Rd, Hong Kong.
                Phone 5-859 2491, Telex 71919
                e-mail: tychen@hkucs.uucp

                Dr Anthony Gar-On Yeh,
                Centre of Urban Studies & Urban Planning,
                University of Hong Kong, Pokfulam Road, Hong Kong
                Phone 5-8592721-7, Telex 71919
                e-mail: hkucs!hkucc!hdxugoy.uucp

------------------------------

End of AIList Digest
********************

∂14-Feb-88  0342	LAWS@KL.SRI.COM 	AIList V6 #30 - Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 14 Feb 88  03:42:09 PST
Date: Sat 13 Feb 1988 23:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #30 - Conferences
To: AIList@SRI.COM


AIList Digest            Sunday, 14 Feb 1988       Volume 6 : Issue 30

Today's Topics:
  Conferences - AI in Databases and Information Systems (China) &
    COIS88 Office Information Systems &
    CADE-9 Automated Deduction

----------------------------------------------------------------------

Date: Fri, 5 Feb 88 20:10 N
From: MEERSMAN%HTIKUB5.BITNET@CUNYVM.CUNY.EDU
Subject: Conference - AI in Databases and Information Systems (China)

The following is a Call for Participation to attend the first AI conference in
the PR of China. Invited speakers are John McCarthy, Ray Reiter and John Sowa.


International Federation for Information Processing (IFIP)

                       CALL FOR PARTICIPATION

               IFIP WG2.6/WG8.1 Working Conference on
               The Role of Artificial Intelligence in
                 Databases and Information Systems

                     July 4-8, Guangzhou, China

The Working Conference is about The Role of Artificial  Intelligence
in  Databases and Information Systems as well as about the  role  of
Databases in Artificial Intelligence based systems.

The Working Conference features the invited speakers:

     John McCarthy, Stanford University: "Knowledge about  knowledge
     in databases";

     John   Sowa,  IBM:  "Knowledge  representation  in   databases,
     information systems and natural language";

     Raymond  Reiter, University of Toronto: "Integrity  constraints
     in databases and knowledge bases".

During the 5-day conference 30 additional papers will be  presented,
selected from a large number of submissions.

The  participation is limited to 75 non-Chinese scientists,  and  75
Chinese scientists.

Group travel will be arranged from Europe. Post conference tours  in
China  will be arranged provided that there is enough  interest  for
participating  in  the various tour alternatives. There  will  be  a
social program for accompanying persons during the conference.

Persons who want to participate are requested to register  promptly,
because  of  time consuming organizational details, like  getting  a
visa, etc.

In  CASE  OF  OVERBOOKING, first priority is  given  to  authors  of
accepted  papers,   WG-members,  TC-members,  and  persons  who  are
involved  in the organization of the conference  (e.g.  PC-members).
Next, authors of rejected papers and persons who are already on  the
guest-lists  of  WG2.6  and WG8.1 will be invited  to  fill  up  the
remaining  slots. Third priority is given to scientists without  any
previous affiliation to IFIP-activities.

ORGANIZATION:

General Conference Chairperson:
     A. Solvberg, Norwegian Inst. Techn., Univ. Trondheim, Norway
Program Committee Chairpersons:
     R. Meersman, Univ. Tilburg, The Netherlands
     C.H. Kung, Univ. Iowa, USA

Conference Co-Chairpersons:
     S. Sa, People's Univ., P.R. China
     C.S Tang, Academia Sinica, P.R. China

Conference Secretary:
     J.J. v. Griethuysen, Philips, The Netherlands

Organization Committee:
     K. Xu, P.R. China (Chair)
     Z. Shi, P.R. China (Secretary)
     Q. Yao, P.R. China (Local Arrangements)
     S. Chen, P.R. China
     Y. Gu, China
     G. Wu, P.R. China
     J. Yang, Norway

Program Committee:

R. Balzer       USA                  E. Neuhold      F.R. Germany
D. Beech        USA                  G.M. Nijssen    Australia
J. Bubenko      Sweden               A. Olive        Spain
Y. Chen         China                A. Pirotte      Belgium
E. Falkenberg   The Netherlands      R. v.d. Riet    The Netherlands
M. Fox          USA                  A. Robinson     USA
C. Furukawa     Japan                C. Rolland      France
A. Furtado      Brazil               E. Sandewall    Sweden
H. Gallaire     F.R. Germany         H.J. Schneider  F.R. Germany
G. Gardarin     France               A. Sernadas     Portugal
F. Golshani     USA                  Z. Shi          China
L. Kerschberg   USA                  L. Siklossy     The Netherlands
R. Lee          USA                  A. Solvberg     Norway
V. Lum          USA                  J. Sowa         USA
L. Mark         USA                  R. Stamper      UK
J. Minker       USA                  G. Wiederhold   USA
M. Morgenstern  USA                  B. Yao          USA
B. Moulin       Canada               C. Zaniolo      USA


FULL PAPERS (45 min.):
----------------------

     Cauvet  C., Proix C., Rolland C.: "Information systems  design:
     an expert system approach", France.

     Dubois E.: "Logical support for reasoning about the  specifica-
     tion and the elaboration of requirements", Belgium

     Esculier C.: "Inheritances with exceptions: an approach based
     on semantic tolerance", France.

     Falkenberg  E.D.,  van Kempen H., Mimpen  N.:  "Knowledge-based
     information analysis support", F.R. Germany.

     Jiang Y.J.: "A self-referential relational data model", UK.

     Lum V., Wu T., Hsiao D.: "Integrating advanced techniques  into
     multimedia DBMS", USA.

     Neuhold  E.J.,  Schrefl  M.:  "A  knowledge-based  approach  to
     overcome  structural  differences in object  oriented  database
     integration", F.R. Germany.

     Nguyen  G.T., Rieu D.: "Heuristic control on  dynamic  database
     objects", France.

     Qian W., Zhao Z.: "Temporal reasoning in DB", P.R. China.

     Rundensteiner E.A.:  "The role of AI in DB's versus the role of
     DB theory in AI: an opinion", USA.

     Schiff  J.: "The design of a knowledge based economic  modeling
     tool (EMT) prototype", F.R. Germany.

     Sernadas   C.,  Fiadeiro  J.,  Sernadas  A.:   "Object-oriented
     conceptual modeling from law", Portugal.

     Shao J., Bell D. A., Hull M.E.C.: "LQL: A unified language  for
     deductive database systems", UK.

     Tang C., Yin B.: "Data dependency and undecidability in a model
     of historical information system", P.R. China.

     Twine  S.:  "Representing  facts  in  KEE's  frame   language",
     Australia.

     Wan  J.-C., Zhou C.-H.: "MEX-1: an expert system  shell",  P.R.
     China.

     Wieringa R., van de Riet R.: "Algebraic specification of object
     dynamics in knowledge base domains", The Netherlands.

     Wohed R.: "Diagnosis of Conceptual schemas", Sweden.

     Zaniolo C., Sacca D.: "Rule rewriting methods in the  implemen-
     tation of the logic data language LDL", USA.

     Zeng  H., Tong Q., Yao W., Song X.: "HITKMS: a  knowledge  base
     machine   system  supporting  cooperative   expert-system   and
     experiential learning", P.R. China.

     Zhang  C., Tzu Y.: "A  model  for  maintaining compiled  Prolog
     databases", P.R. China.

     Zhou L., Yang D., Fan Z., Zhu L.: "QKBMS/75 -- A knowledge base
     management  system  growing  from  relational  DBMS  and  logic
     programming language", P.R. China.

SHORT PAPERS (15 min.):
-----------------------

     Berztiss   A.T.:  "On  information-control   systems,    object
     orientation, and expert systems", USA.

     Demolombe   R.,   Illarramendi  A.,  Blanco   J.M.:   "Semantic
     optimization   in  data bases using   artificial   intelligence
     tech.s", France.

     Potter W.D., Nute D.: "d-KDL: an EDS environment  incorporating
     defeasible reasoning", USA.

     Reimer U.: "On enriching the semantics of knowledge representa-
     tion models: a claim and an approach", F.R. Germany.

     Su B., Shi C., Wang K., Hu P., Shi H., Wang J.: "The  architec-
     ture of a distributed knowledge base system", P.R. China.

     Shao J., Yao Q.: "A Knowledge-based query optimization system",
     P.R. China.

     Tang C.S., et. al.:  "To connect the informal graphical  design
     methodology   with  the formal specification   in   information
     system design", P.R. China.

     van  Assche  F., Loucopoulos P., Speltincx  G.:  "A  rule-based
     approach to the development of information systems", Belgium.


DETAILS OF THE ARRANGEMENT ARE:

Conference fees:

     The  conference  fee will be approx. USD 250. There will  be  a
     social program for accompanying persons during the  conference,
     approx. 20-25 USD/day/person, including lunches.

Hotels:

     The recommended hotel is:

          East (Dong Fang) Hotel:
               Double room  ..........  40 USD/day
               Single room  ..........  30 USD/day

     A  limited  number  of guest rooms  of  the  Scientific  Garden
     Building  of "Guangzhou Association for Science  &  Technology"
     (GAST) may be available:
               Double room  ..........  20 USD/day
               Single room  ..........  12 USD/day

     The prices include breakfast.



Group travel from Europe:

     Provided that there is enough interest, there will be  arranged
     group travel from Europe. The details are:

     Price:    Approx. 2000 Swiss Francs, from any European country.
               Outward  trip July 1, evening, to Guangzhou via  Hong
               Kong. Individual returns from either Beijing or  Hong
               Kong.

Post conference tour alternatives:

     There  will  be arranged post conference tours,  if  there  are
     enough  participating  persons (min. 10 persons for  each  tour
     alternative). The details are [...]

  [Contact the message author for an application blank, hotel
  reservation form, and post-conference tour itineraries.  -- KIL]

------------------------------

Date: Tue, 9 Feb 88 09:36:32 est
From: rba@flash.bellcore.com (Bob Allen)
Subject: Conference - COIS88 Office Information Systems


        COIS88 - Conference on Office Information Systems
                        Advance Program
                        March 23-25, 1988
           Hyatt Rickeys Hotel, Palo Alto, California
Sponsored by: ACM SIGOIS and IEEECS TC-OA  In cooperation with: IFIP W.G. 8.4

Wednesday, March 23, 1988
Introductions: Najah Naffah, Bob Allen
Keynote: Terry Winograd
Collaborative Work: (paper session) Chair: Irene Greif
        The rapport multimedia conferencing system
           S.R. Ahuja, J.R. Ensor, D.N. Horn, AT&T Bell Laboratories
        An integrated framework for the use of computers and computer modeling
        in negotiations
           D. Samarasan, J.D. Nyhart, C. Goeltner, MIT
        Quilt: A collaborative tool for cooperative writing,
           R. Fish, R. Kraut, M. Leland, M. Cohen, Bellcore
        How can groups communicate when they use different languages?
           J. Lee, T.W. Malone, MIT
Distributed Artificial Intelligence - DAI (panel) Chair: Les Gasser
Task Modeling, Planning, and Coordination (paper session)
        Problems in modelling tasks and task views
           M. Mazer, U. Toronto
        OTM: Specifying office tasks
           F.H. Lochovsky, J.S. Hogg, S.P. Weiser, A.O. Mendelzon, U. Toronto
        Using a planner to support office work
           W.B. Croft, L.S. Lefkowitz, U. Mass.
        Customizing cooperative office procedures by planning,
           R. Lutze, Triumph-Adler
        AMS: A knowledge-based approach to task representation, organization
        and coordination
           M. Tueni, J. Li, P. Fares, Bull
Directions in Workstations

Thursday, March 24, 1988
Organizational Impact (paper session) Chair: Rob Kling
        Computers' impact on productivity and worklife
           S. Dumais, R. Kraut, S. Koch, Bellcore
        The impact of electronic mail on managerial and organizational
        communications
           M. Sumner, Southern Illinois
        The influence of training on actual use of end-user software,
           L. Olfman, R. Bostrom, Claremont Graduate School/Indiana U.
        Disaligning macro, meso and micro due process: A case study of office
        automation in Quebec colleges
           F. Blanchard, A. Cambrosio, U. Quebec
Social Research: Methods and Principles (paper session), Chair: Tora Bikson
        Cost benefit analysis of information systems: A survey of methodologies
           P. Sassone, Georgia Tech.
        Collection and analysis of data from communication system networks,
           R. Rice, USC
        Social choice theory and distributed decision making,
           A. Urken, Stevens Inst
        Understanding design as cooperative work, P. Ehn, U. Aarhus
SIGOIS Business Meeting
User Design of Interfaces (panel) Chair: Austin Henderson
Hypertext and Information Retrieval  (paper session) Chair: Walter Bender
        Query processing strategies: Cost evaluation and heuristics
           E. Bertino, F. Rabitti, and S. Gibbs
        Knowledge-based generation of conceptual hypertexts,
           U. Hahn, U. Reimer, U. Passau/U. Constance
        Knowledge based document classification supporting integrated document
        handling
           H. Eirund, K. Kreplin, Triumph-Adler
        Shared books: Collaborative publication management for an office
        information system
           B. Lewis, J. Hodges, Acorn Research/Xerox
        Seeing the forest for the trees: Hierarchical displays of hypertext
        structures.
           S. Feiner, Columbia U.
Hypertext and Electronic Publishing (panel) Chair: Norm Meyrowitz
Banquet, Speaker, Kristen Nygaard, Tresidder Union, Stanford University,
7:30-10:00

Friday, March 25, 1988
Multimedia (paper session) Chair: Donald Chamberlin
        Employing voice back channels to facilitate audio document retrieval
           C. Schmandt, MIT
        Interactive retrieval of office documents
           W.B. Croft, R. Krovetz, U. Mass.
        An experimental multi-media bridging system,
           E.J. Addeo, A. Dayao, A.D. Gelman, V.F. Massa, Bellcore
        Browsing within time-driven multimedia documents
           S. Christodoulakis, S. Graham, U. Waterloo
Object-Oriented and Distributed Databases (paper session)
        An application oriented approach to view updates,
           J. Klein, A. Reuter, U. Stuttgart
        Aggregation and generalization hierarchies in office automation
           M. Bever, D. Ruland, IBM
        Object flavor evolution in an object-oriented database system
           Q. Li, D. McLeod, USC
        Semantic queries for office information system design
           B. Pernici, Politecnico di Milano
Object-Oriented, Organizational, and Market Systems (paper session)
Chair: R.E. Fikes
        An object oriented system implementing KNOs, E. Casais, U. Geneva
        A commitment-based communication model for distributed office
        environments
           C. Koo, G. Wiederhold, P. Cashman, Stanford/DEC
        Market automation: Self-regulation in a distributed environment
           R. Miller, Boston U.
        Ubik: A system for conceptual and organizational development
           P. de Jong, MIT
Object-Oriented PS/DBMSs (panel) Chair: Stan Zdonik

  [Contact the message author for the application form. -- KIL]

------------------------------

Date: Wed, 10 Feb 88 17:20:02 cst
From: lusk@anl-mcs.ARPA (Rusty Lusk)
Subject: Conference - CADE-9 Automated Deduction


                                 CADE - 9

            9th International Conference on Automated Deduction

                              May 23-26, 1988

             Preliminary Schedule and Registration Information

CADE-9 will be held at Argonne National Laboratory (near Chicago) in
celebration of the 25th anniversary of the discovery of the resolution
principle at Argonne in the summer of 1963.

Dates
        Tutorials: Monday, May 23
        Conference:  Tuesday, May 24 - Thursday, May 26

Main Attractions:

1.   Presentation of more than sixty papers related to aspects of automated
     deduction.  (A detailed listing of the papers is attached.)

2.   Invited talks from

             Bill Miller, president, SRI International
             J. A. Robinson, Syracuse University
             Larry Wos, Argonne National Laboratory

     all of whom were at Argonne 25 years ago when the resolution principle
     was discovered.

3.   Organized dinners every night, including the Conference banquet,
     "Dinner with the Dinosaurs", at Chicago's Field Museum of Natural
     History.

4.   Facilities for demonstration of  and  experimentation  with  automated
     deduction systems.

5.   Tutorials in a number of special areas within automated deduction.

Tutorials:

We have tried to make the tutorials relatively short and inexpensive.  It
is hoped that many researchers who are skilled in specialized areas of
automated deduction will take the opportunity to get an overview of related
research areas.  Some of the details (like titles, exactly which member of
a team will give the tutorial, etc.) have not yet been finalized.  The
following information reflects our current plans.  It may change slightly,
but we expect no major changes.

Tutorial 1:  Constraint Logic Programming

     This will be a tutorial on the Constraint  Logic  Programming  Scheme,
     and  systems that have implemented the concepts (see "Constraint Logic
     Programming", J. Jaffar and J-L Lassez, Proc. POPL87, Munich,  January
     1987).

Tutorial 2:  Verification and Synthesis

     This will be a tutorial by Zohar Manna and Richard Waldinger on  their
     work in verification and synthesis of algorithms.

Tutorial 3:  Rewrite Systems

     This will be a tutorial given by Mark Stickel covering the basic ideas
     of equality rewrite systems.

Tutorial 4:  Theorem Proving in Non-Standard Logics

     This tutorial will be given by Michael  McRobbie.   It  will  cover  a
     number of topics from his new book.

Tutorial 5:  Implementation I: Connection Graphs

     This tutorial will be given by members of the SEKI project.  It will
     cover issues concerning why connection graphs are used and how they
     can be implemented.

Tutorial 6:  Implementation II: an Argonne Perspective

     This tutorial will present the central implementation issues from  the
     perspective  of  a  number  of  members of the Argonne group.  It will
     cover topics like choice of language, indexing, basic algorithms,  and
     utilization of multiprocessors.

Tutorial 7:  Open questions for Research

     This tutorial will be given by Larry Wos.  It will focus on two
     collections of open questions.  One set consists of questions about
     the field of automated theorem proving itself, questions whose answers
     will materially increase the power of theorem-proving programs.  The
     other set consists of questions taken from various fields of
     mathematics and logic, questions that one might attack with the
     assistance of a theorem-proving program.  Both sets of questions
     provide intriguing challenges for possible research.

How to Register

Fill out the following  registration form (the part of this message between
the rows of ='s) and return as soon as possible to:

        Mrs. Miriam L. Holden, Director
        Conference Services
        Argonne National Laboratory
        9700 South Cass Avenue
        Argonne, IL 60439
        U. S. A.

Questions relating to registration and accommodations can  be  directed  to
Ms. Miriam Holden or Joan Brunsvold at (312) 972-5587.

  [Contact the message author for registration and hotel forms and for
  the schedule of sessions and social events. -- KIL]



Preliminary Session Schedule

              Session 1

First-Order Theorem Proving Using Conditional Rewriting
    Hantao Zhang
    Deepak Kapur

Elements of Z-Module Reasoning
    T.C. Wang

              Session 2

Flexible Application of Generalised Solutions Using Higher Order Resolution
    Michael R. Donat
    Lincoln A. Wallen

Specifying Theorem Provers in a Higher-Order Logic Programming Language
    Amy Felty
    Dale Miller

Query Processing in Quantitative Logic Programming
    V. S. Subrahmanian

              Session 3

An Environment for Automated Reasoning About Partial Functions
    David A. Basin

The Use of Explicit Plans to Guide Inductive Proofs
    Alan Bundy

LOGICALC: an environment for interactive proof development
    D. Duchier
    D. McDermott

              Session 4

Implementing Verification Strategies in the KIV-System
    M. Heisel
    W. Reif
    W. Stephan

Checking Natural Language Proofs
    Donald Simon

Consistency of Rule-based Expert Systems
    Marc Bezem

              Session 5

A Mechanizable Induction Principle for Equational Specifications
    Hantao Zhang
    Deepak Kapur
    Mukkai S. Krishnamoorthy

Finding Canonical Rewriting Systems Equivalent to a Finite Set of
 Ground Equations in Polynomial Time
    Jean Gallier
    Paliath Narendran
    David Plaisted
    Stan Raatz
    Wayne Snyder

              Session 6

Towards Efficient Knowledge-Based Automated Theorem Proving for
 Non-Standard Logics
    Michael A. McRobbie
    Robert K. Meyer
    Paul B. Thistlewaite

Propositional Temporal Interval Logic is PSPACE Complete
    A. A. Aaby
    K. T. Narayana

              Session 7

Computational Metatheory in Nuprl
    Douglas J. Howe

Type Inference and Its Applications in Prolog
    H. Azzoune

              Session 8

Procedural Interpretation of Non-Horn Logic Programs
    Arcot Rajasekar
    Jack Minker

Recursive Query Answering with Non-Horn Clauses
    Shan Chi
    Lawrence J. Henschen

              Session 9

Case Inference in Resolution-Based Languages
    T. Wakayama
    T.H. Payne

Notes on Prolog Program Transformations, Prolog Style, and Efficient
 Compilation to the Warren Abstract Machine
    Ralph M. Butler
    Rasiah Loganantharaj

Exploitation of Parallelism in Prototypical Deduction Problems
    Ralph M. Butler
    Nicholas T. Karonis

              Session 10

A Decision Procedure for Unquantified Formulas of Graph Theory
    Louise E. Moser

Adventures in Associative-Commutative Unification (A Summary)
    Patrick Lincoln
    Jim Christian

Unification in Finite Algebras is Unitary(?)
    Wolfram Buttner

              Session 11

Unification in a Combination of Arbitrary Disjoint Equational Theories
    Manfred Schmidt-Schauss

Partial Unification for Graph Based Equational Reasoning
    Karl Hans Blasius
    Jorg Siekmann

              Session 12

SATCHMO:  A theorem prover implemented in Prolog
    Rainer Manthey
    Francois Bry

Term Rewriting: Some Experimental Results
    Richard C. Potter
    David Plaisted

              Session 13

Analogical Reasoning and Proof Discovery
    Bishop Brock
    Shaun Cooper
    William Pierce

Hyper-Chaining and Knowledge-Based Theorem Proving
    Larry Hines

              Session 14

Linear Modal Deductions
    L. Farinas del Cerro
    A. Herzig

A Resolution Calculus for Modal Logics
    Hans Jurgen Ohlbach

              Session 15

Solving Disequations in Equational Theories
    Hans-Jurgen Burckert

On Word Problems in Horn Theories
    Emmanuel Kounalis
    Michael Rusinowitch

Canonical Conditional Rewrite Systems
    Nachum Dershowitz
    Mitsuhiro Okada
    G. Sivakumar

Program Synthesis by Completion with Dependent Subtypes
    Paul Jacquet

              Session 16

Reasoning about Systems of Linear Inequalities
    Thomas Kaufl

A Subsumption Algorithm Based on Characteristic Matrices
    Rolf Socher

A Restriction of Factoring in Binary Resolution
    Arkady Rabinov

Supposition-Based Logic for Automated Nonmonotonic Reasoning
    Philippe Besnard
    Pierre Siegel

              Session 17

Argument-Bounded Algorithms as a Basis for Automated Termination Proofs
    Christoph Walther

Automated Aids in Implementation Proofs
    Leo Marcus
    Timothy Redmond

              Session 18

A New Approach to Universal Unification and Its Application to AC-Unification
    Mark Franzen
    Lawrence J. Henschen

An Implementation of a Dissolution-Based System Employing Theory Links
    Neil V. Murray
    Erik Rosenthal

              Session 19

Decision Procedure for Autoepistemic Logic
    Ilkka Niemela

Logical Matrix Generation and Testing
    Peter K. Malkin
    Errol P. Martin

Optimal Time Parallel Algorithms for Term Matching
    Rakesh M. Verma
    I.V. Ramakrishnan

              Session 20

Challenge Equality Problems in Lattice Theory
    William McCune

Single Axioms in the Implicational Propositional Calculus
    Frank Pfenning

Challenge Problems Focusing on Equality and Combinatory Logic:
 Evaluating Automated Theorem-Proving Programs
    Larry Wos
    William McCune

Challenge Problems from Nonassociative Rings for Theorem Provers
    Rick Stevens

------------------------------

End of AIList Digest
********************

∂14-Feb-88  0540	LAWS@KL.SRI.COM 	AIList V6 #31 - Neural Network Conference and Journal
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 14 Feb 88  05:40:09 PST
Date: Sat 13 Feb 1988 23:08-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #31 - Neural Network Conference and Journal
To: AIList@SRI.COM


AIList Digest            Sunday, 14 Feb 1988       Volume 6 : Issue 31

Today's Topics:
  Conference - International Neural Network Society

----------------------------------------------------------------------

Date: Wed, 10 Feb 88 18:05:26 EST
From: mike%bucasb.bu.edu@bu-it.BU.EDU (Michael Cohen)
Subject: Conference - International Neural Network Society

INNS 88 UPDATE AND CALL FOR PAPERS

International Neural Network Society
First Annual Meeting
September 6--10, 1988
Boston, Massachusetts

The International Neural Network Society (INNS) invites all
those interested in the exciting and rapidly expanding field of
neural networks to attend its 1988 Annual Meeting. Incorporated
in 1987, INNS already includes among its members 1300 of the field's
most active researchers and is growing rapidly. The meeting
includes plenary lectures, symposia, contributed oral and poster
presentations, tutorials, commercial and publishing exhibits,
government agency presentations, and social events. These events will
bring together scientists, engineers, government administrators, industrial
administrators, and students in an open forum for advancing the full
spectrum of significant neural network research from biology through
technology.

JOIN US IN BOSTON IN SEPTEMBER!


INNS OFFICERS AND GOVERNING BOARD:
Stephen Grossberg, President;
Demetri Psaltis, Vice-President;
Harold Szu, Secretary and Treasurer.

Shun-ichi Amari, James Anderson, Gail Carpenter, Walter Freeman, Kunihiko
Fukushima, Lee Giles, Teuvo Kohonen, Christoph von der Malsburg, Carver Mead,
David Rumelhart, Terrence Sejnowski, George Sperling, Bernard Widrow.


MEETING ORGANIZERS:
General Meeting Chairman: Bernard Widrow
Technical Program Co-Chairmen: Dana Anderson and James Anderson
Organization Chairman: Gail Carpenter
Tutorial Program Co-Chairmen: Walter Freeman and Harold Szu
Conference Coordinator: Maureen Caudill
Publications Coordinators: Charles Butler and Maureen Caudill
Publicity and Vendor Liaisons: Tom Schwartz and Diane Schwartz

COOPERATING SOCIETIES:
The Societies listed below have generously agreed to cooperate with the INNS
meeting. Organizing Committee for Cooperating Societies: Mark Kon
(Chairman), Joseph Bronzino, William Freeman, Morris Hirsch, William
Hutchinson, Simon Levin, Daniel Levine, Herbert Rauch, David Rumelhart, Jay
Sage, Bernard Soffer.

   American Mathematical Society
   Association for Behavior Analysis
   Cognitive Science Society
   Computer Society of the IEEE
   IEEE Control Systems Society
   IEEE Engineering in Medicine and Biology Society
   IEEE Systems, Man and Cybernetics Society
   Optical Society of America
   Society for Industrial and Applied Mathematics
   Society for Mathematical Biology
   Society of Photo-Optical Instrumentation Engineers
   Society for the Experimental Analysis of Behavior


---A DAY OF TUTORIALS---

September 6, 1988
8:00AM--6:00PM

1. JOHN DAUGMAN, Harvard University: Vision and image processing

2. TEUVO KOHONEN, Helsinki University of Technology: Speech and language
   processing

3. STEPHEN GROSSBERG, Boston University: Sensory-motor control and robotics

4. GAIL CARPENTER, Northeastern University: Pattern recognition, associative
   learning, and self-organization

5. DAVID RUMELHART, Stanford University: Cognitive psychology for information
   processing

6. ALLEN SELVERSTON, University of California at San Diego: Local circuit
   neurobiology

7. MORRIS HIRSCH, University of California at Berkeley: Nonlinear dynamics for
   neural networks

8. DEMETRI PSALTIS, California Institute of Technology: Applications,
   combinatorial optimization, and implementations

The tutorial registration fee includes all eight one-hour tutorial lectures by
leading scholars in each subject. A reception at 6:00PM will be followed by a
plenary lecture.

SYMPOSIUM AND PLENARY SPEAKERS:

Plenary                              Cognitive and Neural Systems
-------                              ----------------------------
Stephen Grossberg                    James Anderson
Carver Mead                          Walter Freeman
Terrence Sejnowski                   Guenter Gross
Nobuo Suga                           Gary Lynch
Bernard Widrow                       Christoph von der Malsburg
                                     David Rumelhart
                                     Allen Selverston

Vision and Pattern Recognition       Combinatorial Optimization
------------------------------       and Content Addressable Memory
Gail Carpenter                       ------------------------------
Max Cynader                          Daniel Amit
John Daugman                         Stuart Geman
Kunihiko Fukushima                   Geoffrey Hinton
Teuvo Kohonen                        Bart Kosko
Ennio Mingolla
Eric Schwartz
George Sperling
Steven Zucker

Applications and Implementations     Motor Control and Robotics
--------------------------------     --------------------------
Dana Anderson                        Jacob Barhen
Michael Buffa                        Daniel Bullock
Lee Giles                            James Houk
Robert Hecht-Nielsen                 Scott Kelso
Demetri Psaltis                      Lance Optican
Thomas Ryan
Bernard Soffer
Harold Szu
Wilfrid Veldkamp


---CONTRIBUTED ORAL AND POSTER PRESENTATIONS---

Submit abstracts for oral and poster presentation on biological and
technological models of:

Vision and image processing
Local circuit neurobiology
Speech and language
Analysis of network dynamics
Sensory-motor control and robotics
Combinatorial optimization
Pattern recognition
Electronic implementation (VLSI)
Associative learning
Optical implementation
Self-organization
Neurocomputers
Cognitive information processing
Applications

Abstracts must be typed in the INNS camera-ready format. Either printed
abstract forms or white bond paper may be used. Instructions for Authors
are attached. Submit completed abstracts to:

INNS Conference
16776 Bernardo Center Drive
Suite 110B
San Diego, CA 92128 USA
(619) 451-3752

ABSTRACT DEADLINE: MARCH 31, 1988

Acceptance notifications will be mailed in June, 1988. Accepted abstracts
will be published as a supplement to the INNS journal, Neural Networks,
and mailed to meeting registrants and Neural Networks subscribers in
August, 1988.


---PROGRAM COMMITTEE---

Joshua Alspector   Walter Freeman       Lance Optican
Shun-ichi Amari    Kunihiko Fukushima   David Parker
Dana Anderson      Lee Giles            Demetri Psaltis
James Anderson     Stephen Grossberg    Adam Reeves
Jacob Barhen       Morris Hirsch        Thomas Ryan
Michael Buffa      Scott Kelso          Jay Sage
Daniel Bullock     Daniel Kersten       Eric Schwartz
Marcia Bush        Teuvo Kohonen        Allen Selverston
Terry Caelli       Bart Kosko           George Sperling
Gail Carpenter     Daniel Levine        David Stork
Michael Cohen      William Levy         Harold Szu
Max Cynader        Richard Lyon         David Tank
John Daugman       Carver Mead          Wilfrid Veldkamp
David van Essen    Ennio Mingolla       William Warren
Federico Faggin    Paul Mueller         Bernard Widrow
Nabil Farhat       Gregory Murphy

NSF TRAVEL GRANTS: The National Science Foundation has awarded a grant to
help students, postdoctoral fellows, and junior faculty travel to the INNS
Annual Meeting. Preference will be given to young scientists who are authors or
co-authors of contributed papers. A letter of request for travel support should
be sent along with the abstract. Please include the name, address, and
telephone number of each person requesting support; an estimate of basic travel
expenses; and the title and authors of the abstract. Awards will be announced
in June, 1988.

Complete attached forms for Conference and Tutorial Registration, Hotel
Reservations, and Discounted Travel Reservations. The celebrated Park Plaza
Castle will house many commercial, government, and publishing exhibits.

For exhibitor information write:
INNS Conference
J.R. Schuman Associates
316 Washington St.
Box 125
Wellesley, MA 02181 USA
(617) 237-7931.


---INSTRUCTIONS FOR AUTHORS OF ABSTRACTS---

An abstract may be typed on white bond paper, as specified below.
Alternatively, a printed abstract form can be obtained by writing:

INNS Conference
16776 Bernardo Center Drive
Suite 110B
San Diego, CA 92128 USA
(619) 451-3752

STEPS TO FOLLOW:

Mail to: INNS Conference, 16776 Bernardo Center Drive, Suite 110B, San Diego,
CA 92128 USA. Include:

1. Original abstract, typed on INNS abstract form or on white bond legal size
   (8-1/2" x 14") paper. DO NOT FOLD.

2. Five (5) photocopies.

3. Completed Abstract Information Form (next page).

4. A stamped, self-addressed acknowledgement card specifying abstract title
   and authors (optional).

DEADLINE FOR SUBMISSION: Postmarked March 31, 1988


---ABSTRACT PREPARATION---

Abstract Style Guide (sample):

ADAPTIVE SWITCHING CIRCUITS. B. Widrow and M. Hoff. Department of Engineering,
Stanford University, Stanford, CA 94305 USA.

   xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

Style Rules:

Set title in CAPITAL LETTERS.
Underline authors' names.
Provide a full mailing address.
Skip one line following the address.
Indent paragraphs three spaces.
Single space.
Type right up to the margins. All text, including title, figures, and
references must fall within an 8-1/4-inch (width) x 11-inch (height)
rectangle.

Submitted abstracts will be reviewed by three members of the Program
Committee, and accepted abstracts will be published without revision. It is
therefore essential that abstracts be clear, specific, and self-contained.
Abstracts of previously published material should not be submitted. An
individual may appear as author on any number of abstracts. Acceptance
notifications will be mailed in June, 1988.


---CALL FOR PAPERS---

Neural Networks commenced quarterly publication in January, 1988.
Articles span the full range of biological through technological neural
network models.  The January issue includes:

Teuvo Kohonen: An introduction to neural computing.

Stephen Grossberg: Nonlinear neural networks: Principles, mechanisms,
and architectures.

Shun-ichi Amari and Kenjiro Maginu: Statistical neurodynamics of associative
memory.

R. Paul Gorman and Terrence J. Sejnowski: Analysis of hidden units in a
layered network trained to classify sonar targets.

Carver A. Mead and Misha Mahowald: A silicon model of early visual
processing.

The April, 1988, issue will include:

Allen Selverston: A consideration of invertebrate central pattern generators as
computational data bases.

Kunihiko Fukushima: Neocognitron: A hierarchical neural network capable of
visual pattern recognition.

Robert Hecht-Nielsen: Applications of counterpropagation networks.

Christoph von der Malsburg: Pattern recognition by labeled graph matching.

Demetri Psaltis, Cheol Hoon Park, and John Hong: Higher order associative
memories and their optical implementations.


INSTRUCTIONS FOR AUTHORS:

Authors should submit four copies of each manuscript, plus original
illustrations.  Prepare references in American Psychological Association
format; e.g., Hebb (1949). Submit from Asia and Australia to Prof. Shun-ichi
Amari; from North and South America to Prof. Stephen Grossberg; from Europe
and Africa to Prof. Teuvo Kohonen. Complete Instructions for Authors
may be obtained from the Editors.


---DISCOUNTED TRAVEL RESERVATIONS---

Domestic Travelers: Airline tickets may be ordered at a special convention
rate of 60% off coach fares on Eastern Airlines and 5% off the lowest available
fare on Continental Airlines.  Eastern and Continental airlines have the
greatest number of flights in and out of Logan International Airport in
Boston. Convention rates are often the lowest unrestricted rates obtainable at
the time you book your flights. However, since the rate is negotiated far in
advance, and since the airlines may offer special lower fares at the time you
book, especially if you are willing to stay over a Saturday night, there may be
even lower fares available than our convention rate. If you book your flight
through UNIGLOBE The Travel Connection, Inc., however, we guarantee to find you
the lowest available fare at the time of booking, whether that is the
convention fare or a lower fare on another airline, or we will refund the
difference between our fare and the lower one.

International Travelers: Because of the weakness of the U.S. dollar, it may be
less expensive to purchase your ticket through us in the United States. You
may call us collect on our local (617) telephone line, let us know what the
lowest fare is you have been quoted in your country, and we will obtain a lower
one in U.S. currency if possible.

Call UNIGLOBE The Travel Connection Inc.: (800) 521-5144 (outside
Massachusetts) or (617) 235-7500 (Massachusetts and International), or fill out
the Travel Request below and send to:

Neural Networks
UNIGLOBE The Travel Connection, Inc.
40 Washington Street
Wellesley, MA 02181

Name:
Address:
City, State, Zip:
Telephone (Work and Home):
Departure from (City/Airport):
Date and Preferred Time:
Date and Preferred Time:
Form of Payment:

We will call you to confirm reservations and obtain seat assignment
preferences, frequent flier numbers, or any other special request
you may have.

[Contact the message author for registration or membership forms. -- KIL]

------------------------------

End of AIList Digest
********************

∂15-Feb-88  0042	LAWS@KL.SRI.COM 	AIList V6 #32 - Radio Control, Fuzzy Sets, Chinese, MDBS  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 15 Feb 88  00:42:41 PST
Date: Sun 14 Feb 1988 22:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #32 - Radio Control, Fuzzy Sets, Chinese, MDBS
To: AIList@SRI.COM


AIList Digest            Monday, 15 Feb 1988       Volume 6 : Issue 32

Today's Topics:
  Queries - Expert System Demos & Life Cycles for Expert Systems &
    AI and Object-Oriented Programming & Fuzzy sets & Neural Software &
    Concepts of the Future & Legal and Ethical Query &
    CLOS Specification Completion Date,
  Literature - Daedalus,
  Application - Ping Pong-Playing Robot,
  AI Tools - Radio Gear for Mobile Robots & Fuzzy sets &
    Chinese Character Generator & Technical Support for MDBS

----------------------------------------------------------------------

Date: Fri, 12 Feb 88 09:28 CST
From: PMACLIN%UTMEM1.BITNET@CUNYVM.CUNY.EDU
Subject: Request for Expert System Demos

The University of Tennessee at Memphis has scheduled an "Expert Systems in the
Health Sciences" seminar for April 7. More than 100 of our faculty members are
expected to attend to learn what is possible in expert systems with today's
technology.

If you have any expert systems available that you would be willing to share
with U-T via a demo disk or through our tying in with your computer using
telecommunications that day, please let me know. We need examples of expert
systems (in the health sciences) for the DEC VAX 8650 or 8700, the Macintosh II
or Mac Plus, and the IBM PC or AT. All programs or demos will be returned
promptly after the seminar.

Also if you could offer any advice as to your experiences with various expert
system shells or languages, I would appreciate your comments. I am especially
interested in Nexpert Object, Quintus Prolog, 1st Class expert system shell,
and Level5 Insight2+ expert system shell.

Philip Maclin
Department of Computer Science
University of Tennessee at Memphis
877 Madison Ave.
Memphis, TN 38163
Phone 901 528-5848
Bitnet userID   PMACLIN@UTMEM1

------------------------------

Date: Mon, 8 Feb 88 10:25 CDT
From: LUCINDA ASHMORE 343-7670 <ASHMORE%NGSTL1%eg.ti.com@RELAY.CS.NET>
Subject: Life Cycles for Expert Systems


        I am interested in information on life cycles for expert
        systems.  I would like information on life cycles that have
        been developed and also on conventional life cycles that have
        been tailored for expert systems.  If any books or articles
        have been written on the subject I would also like to know
        about them.  If I receive numerous responses I will summarize
        and post them.


        Thanks,

        Lucinda Ashmore
        ASHMORE%NGSTL1%EG.TI.COM@RELAY.CS.NET

------------------------------

Date: 8 Feb 88 16:27:36 GMT
From: spl1!iitmax!rc@ELROY.JPL.NASA.GOV  (Rajeev Chandrasekhar)
Subject: AI and Object Oriented programming !


I was wondering if anybody in netland has heard about
any papers/journals about O.O stuff and its applications
to planning/robotics. I would appreciate an e-mail reply
or news posting if anyone has any references or suggestions.

Thanks

Rajeev

bitnet : cs_chandrase@iitvax
ihnp4  : ihnp4\!iitmax\!rc

------------------------------

Date: 8 Feb 88 23:20:47 GMT
From: mmlai!barash@uunet.uu.net  (Rev. Steven C. Barash)
Subject: Fuzzy sets


Does anyone reading this understand "Fuzzy set theory"/"Fuzzy logic"
and its applicability to automated reasoning?
In particular, I'm interested in how one might verify empirically
(or experimentally, as with probability theory) the accuracy of the
fuzzy set formulas for appropriate domains.  Also, for a given problem,
how should one determine the suitability of fuzzy sets (instead of traditional
methods) for reasoning under uncertainty?  The journal articles
tend to be rather specialized, and don't address such basic issues.

Please respond by E-mail; I'll post a summary if interest is sufficient.
Any ideas will help, and thanks in advance.

                                 Steve Barash

--

Rev. Steve Barash @ Martin Marietta Labs / Artificial Intelligence Department

Disclaimer: I speak for no one.

ARPA: barash@mmlai.uu.net
UUCP: {uunet, super, hopkins!jhunix} !mmlai!barash

------------------------------

Date: Thu, 11 Feb 88 02:30:20 EST
From: ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU
Subject: Neural Software

i am looking for personal computer software to simulate/investigate/...
neural networks. the software should run on either an IBM comp. (preferred)
or a Mac. i am sufficiently interested in both, so any pointer to either
direction is welcome. please send messages to st401843@brownvm (bitnet
address, obviously) and i will summarize and post. i might note that i
vaguely remember some mention on this list of the third book in the PDP
series that was then (when?) to come out soon and with an accompanying
disk. this is exactly the kind of thing i think i want. finally, and
most importantly, specify whether public/share/other and what is the
approximate price range.


                Thanx, Thanasis Kehagias

------------------------------

Date: Fri, 12 Feb 88 07:38:08 -0500
From: G B Reilly <reilly@UDEL.EDU>
Reply-to: reilly@wharton.upenn.edu
Subject: Help explain the concepts of the future


    The Franklin Institute Science Museum* will be opening
the Futures Center in 1990.  This is not a copy of EPCOT
Center or a futuristic living room.  It consists of exhibits that
explain the new concepts in science and technology that will
affect people's lives in the coming years.

    One section explains the concepts of robotics, computing,
and artificial intelligence.  We are interested in hearing
what you believe the public needs to know about these areas
and how they will affect their lives in the next decade.

    Thank you for your cooperation.


Brendan Reilly
Curator


----
* The Franklin Institute is one of the oldest science museums
in the country and has hands-on exhibits explaining science
and technology which are visited by over one million people annually.

------------------------------

Date: Fri, 12 Feb 88 15:31 CST
From: WHITTAK%TAMAGEN.BITNET@CUNYVM.CUNY.EDU
Subject: legal and ethical query


I have a group of graduate students from several disciplines in my ES class
who are investigating the legal and ethical issues associated with ES delivery.
If any of you know of any court cases (or out of court settlements) that
have involved the use of or the lack of use of expert systems, please send
those contacts or references to me.  Also, if any of you know lawyers that
specialize or deal with such issues, I would also appreciate having their
names (and addresses).

We will be happy to send any contributors a final copy of their paper.
I will also post responses to the net if enough interest is shown.

Thank you in advance.

A. Dale Whittaker               whittak@tamagen (BITNET)
Agricultural Engineering Dept.
Texas A&M University.
College Station, TX 77843-2117

(409)846-3364

------------------------------

Date: 12 Feb 88 21:59:37 GMT
From: pitt!cisunx!jasst3@cadre.dsl.pittsburgh.edu  (Jeffrey A.
      Sullivan)
Subject: CLOS Specification Completion Date?

Does anyone know when the CLOS standard will be frozen so that language
developers will be willing to support it in commercial CL packages?

--
..........................................................................
Jeff Sullivan                           University of Pittsburgh
pitt!cisunx!jasst3                      Intelligent Systems Studies Program
jasper@PittVMS (BITNET)                 Graduate Student

------------------------------

Date: 7 Feb 88 20:25:22 GMT
From: ocvaxa.oberlin.edu!SAC8463@uunet.uu.net
Subject: RE: SOURCES FOR RESEARCH


>I have to do some research on Artificial Intelligence (primarily the
>history , but also current applications) and I would like to know if
>anyone could recommend a good (and recent) book dealing with the two areas
>of AI I have mentioned. Thank you.

The Winter 1988 issue of _Daedalus (no flames; it is Greek, after all!),
The Journal Of The Something-or-other Society of Arts and Sciences_  is
devoted to Artificial Intelligence, and contains some enlightening articles
on the history and theory of AI, both as a field and a science.  (Is there
a difference?  I say yes...)  Contributors include luminaries such as Minsky
and Papert.  I picked my copy off the magazine shelf at a bookstore; it
shouldn't be too hard to find.  It's cheap, too ($5.00).

I am just getting into AI myself, and have found the journal to be very
helpful in getting my feet on the ground.  For a serious research project,
the extensive bibliographies of the articles should be useful, too.

Hope this helps!

------------------------------

Date: Mon, 8 Feb 88 10:11:14 PST
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: ping pong playing robot

If my memory is correct, there was a ping pong playing robot built as a
master's thesis or PhD dissertation at a university in the northeastern US--
perhaps in Pennsylvania.  The thesis defense & demonstration was announced in
AIList about a year ago (Feb 87).

David R. Lambert

------------------------------

Date: 8 Feb 88 02:51:04 GMT
From: portal!cup.portal.com!Zona_-_Walcott@uunet.uu.net
Subject: Re: Radio gear for mobile robots

No, there's also a rec.rc, which covers Radio Control
models - air, land and water.  No tech.IDF tho, so I guess the
Israeli Defense Force will just have to volunteer its drone
specs.

I'm a guest on this account.  Reply to
  g451252772ea@deneb.ucdavis.edu


Ron Goldthwaite, UC Davis Psychology and Animal Behavior

------------------------------

Date: 9 Feb 88 21:43:30 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (vu0112)
Subject: Re: Fuzzy sets

In article <275@mmlai.UUCP> barash@mmlai.UUCP (Rev. Steven C. Barash) writes:
>
>Does anyone reading this understand "Fuzzy set theory"/"Fuzzy logic"
>and its applicability to automated reasoning?

I'm trying to. . .

>In particular, I'm interested in how one might verify empirically
>(or experimentally, as with probability theory) the accuracy of the
>fuzzy set formulas for appropriate domains.

I'm not sure how such verification would differ from that for crisp formulas.

>Also, for a given problem,
>how should one determine the suitability of fuzzy sets (instead of traditional
>methods) for reasoning under uncertainty?

First, obviously, if the system in question is non-deterministic, then
fuzzy methods must come into play.  It should be recognized that
probability theory is a special case of fuzzy theory.

Now, as to the question of whether to use non-probabilistic (e.g.
possibilistic) fuzzy methods, that depends on the law of the excluded
middle (True(A) => False(~A)), which probability conforms to, and
possibility does not.  If the samples are highly interdependent, fuzzy
can yield better results.  I recently wrote a paper on Fuzzy Artificial
Inference and Expert Systems.  Fuzzy promises a much more successful,
general method for approximate reasoning.
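
As a rough illustration of that distinction, here is a small sketch in
Python (the membership value 0.6 and the min/max operators are assumptions
chosen for illustration, not something taken from this posting):

    def fuzzy_not(m):        # standard fuzzy complement
        return 1.0 - m

    def fuzzy_and(m1, m2):   # min-based intersection
        return min(m1, m2)

    def fuzzy_or(m1, m2):    # max-based union
        return max(m1, m2)

    m_A = 0.6                # degree to which x belongs to A (assumed value)
    m_notA = fuzzy_not(m_A)  # 0.4

    # The law of the excluded middle fails for fuzzy sets:
    print(fuzzy_or(m_A, m_notA))   # 0.6, not 1.0
    print(fuzzy_and(m_A, m_notA))  # 0.4, not 0.0

    # A probability assignment, by contrast, must satisfy
    # P(A) + P(not A) = 1 and P(A and not A) = 0.

Under a probabilistic reading the two values would be forced to sum to one
and their conjunction to vanish, which is one concrete place where the two
calculi part company.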

>The journal articles
>tend to be rather specialized, and don't address such basic issues.

Try _Fuzzy_Sets_and_Systems_.  Also, I'd recommend _Fuzzy_Sets,
Uncertainty,_and_Information_ (George Klir, Prentice Hall 1988), which is
an excellent introduction and bibliography.  Read anything by Zadeh.

>Please respond by E-mail; I'll post a summary if interest is sufficient.

Sorry, couldn't resist.  Plus my mailer usually chokes these days.

>                                 Steve Barash

O---------------------------------------------------------------------->
| Cliff Joslyn, Mad Cybernetician
| Systems Science Department, SUNY Binghamton, Binghamton, NY
| vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 12 Feb 88 14:37:29 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (rolandi)
Subject: Chinese character generator


To the guy who wanted a Chinese character generator:

I have been unable to get mail through to you.  I may be able to help
you with the character generator however.  If you are interested, drop
me a line that includes a viable path.

walter rolandi
rolandi@gollum.UUCP ()
NCR Advanced Systems, Columbia, SC
u.s.carolina dept. of psychology and linguistics

------------------------------

Date: 13 Feb 88 05:30:53 GMT
From: pur-phy!mrstve!mdbs!kbc@ee.ecn.purdue.edu  (Kevin Castleberry)
Subject: technical support for mdbs


Technical Support for mdbs products:
KMAN (a relational db envrionment),
GURU (an expert system development environment),
MDBS III (a post-relational high performance dbs)
(Our products run in VMS, UNIX, OS/2 and MSDOS.)

is available by emailing to:
        support@mdbs.uucp
                or
        {rutgers,ihnp4,decvax,ucbvax}!pur-ee!mdbs!support


        Kevin Castleberry
        Manager mdbs Products Technical Information Center (TIC)

        Micro Data Base Systems Inc.
        P.O. Box 248
        Lafayette, IN  47902
        (317) 448-6187

For sales call: (317) 463-2581

------------------------------

End of AIList Digest
********************

∂15-Feb-88  0228	LAWS@KL.SRI.COM 	AIList V6 #33 - Genetic Algorithms, CAI, Psychnet, Nanotechnology   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 15 Feb 88  02:27:57 PST
Date: Sun 14 Feb 1988 22:26-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #33 - Genetic Algorithms, CAI, Psychnet, Nanotechnology
To: AIList@SRI.COM


AIList Digest            Monday, 15 Feb 1988       Volume 6 : Issue 33

Today's Topics:
  Neuromorphic Systems - Genetic Algorithms,
  Education - Becoming CAI Literate,
  Psychology - Psychnet,
  Nanotechnology - STM Engineering & Speedup,
  Expert Systems - Interviewing Experts

----------------------------------------------------------------------

Date: 13 Feb 88 22:13:47 GMT
From: jason@locus.ucla.edu
Subject: becoming literate with genetic algorithms

John Holland was here recently giving talks on genetic algorithms.  I found the
concept rather intriguing.  After hearing his lectures, I realized I needed to
do some introductory reading on the subject to fully appreciate its potential.

I am particularly interested in getting some references in the following areas:

(1) introductory theory behind GA
(2) its application to rule-based learning systems
(3) its relation to and implementation as neural nets

Thanks,

Jason Rosenberg                      Mira Hershey Hall
                                     801 Hilgard Avenue
jason@cs.ucla.edu                    Los Angeles, CA  90024
{ihnp4,ucbvax}!ucla-cs!jason         (213) 209-1806

------------------------------

Date: 11 Feb 88 19:34:52 GMT
From: umich!dwt@umix.cc.umich.edu  (David West)
Subject: Re: Cognitive System using Genetic Algorithms

In article <1062@ucdavis.ucdavis.edu> g451252772ea@deneb.ucdavis.edu.UUCP
 (PUT YOUR NAME HERE) writes:
>The author discusses neural nets,
>simulated annealing, and one example of GA, all applied to the TSP, but
>comments that "... a thorough comparison ... _would be_ very interesting"
  [...]
>As noted, the TSP is a canonical candidate.

I believe the TSP is popular because it is easy and compact to program.
The performance of a general method such as a GA can be strongly influenced
by the problem representation, and it turns out that the most straightforward
representations for genetic operations are particularly badly matched to the
most straightforward representations for TSPs.  This makes the TSP a rather
unfortunate choice of introductory example for people who are unfamiliar
with GAs.

>Finally, I noted above that the production rules take system inputs as
>bit-strings.  This representation allows for induction,...

It is *one* way of getting a form of induction, and has the property that
only very simple operations on the internal representation are used; the
extent to which this is useful depends, again, on the joint appropriateness
of the representations of the genetic operators and the world.
An "appropriate" representation has the property that the expected
fitness of the result of (say) a crossover is not severely worse than
that of its parents. This is something that must be ensured by the experimenter
if (as is most common) the representational mapping itself is not subject to
genetic selection.
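
To make the representation point concrete, here is a small sketch in
Python (the six-city tours and this particular order-preserving crossover
are assumptions for illustration, not anything from the article under
discussion):

    import random

    def one_point_crossover(p1, p2):
        # The "straightforward" bit-string-style operator: splice a prefix
        # of one parent onto a suffix of the other.
        cut = random.randint(1, len(p1) - 1)
        return p1[:cut] + p2[cut:]

    def order_crossover(p1, p2):
        # One simple order-preserving operator: keep a slice of p1 in place,
        # then fill the remaining positions with p2's cities in their
        # original order, so the child is always a valid tour.
        a, b = sorted(random.sample(range(len(p1)), 2))
        kept = p1[a:b]
        rest = [c for c in p2 if c not in kept]
        return rest[:a] + kept + rest[a:]

    parent1 = [0, 1, 2, 3, 4, 5]
    parent2 = [3, 5, 1, 0, 2, 4]

    child_naive = one_point_crossover(parent1, parent2)
    child_order = order_crossover(parent1, parent2)
    print(child_naive, sorted(child_naive) == list(range(6)))  # often False
    print(child_order, sorted(child_order) == list(range(6)))  # always True

The naive child usually duplicates some cities and drops others, so it is
not merely a somewhat-less-fit tour but frequently no tour at all; keeping
the expected fitness of offspring close to that of the parents is exactly
what the joint choice of representation and operators has to ensure.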

 -David West

------------------------------

Date: 8 Feb 88 16:05:43 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Becoming CAI literate

Benjamin Armstrong asks about computers in education.

Sherry Turkle of MIT has written an excellent book on this subject
entitled The Second Self: Computers and the Human Spirit.  I found
her book to be well-researched and well-written, sensitive, insightful,
and thoroughly entertaining.  I highly recommend it.

She explores computer-mediated learning at all levels from pre-school
(Merlin and Speak 'n' Spell) through primary-level software (e.g. Logo)
to graduate level AI projects.  Her main thesis is that computers are
changing the way we think, and the way we think about the process of
thinking.

On Saturdays, I work at Computer Place at the Boston Museum of Science.
Computer Place is a resource center where youngsters can explore the
world of personal computers, with emphasis on educational software.
A common theme in educational software is to set up the learning
experience as a game, with amusing graphics and sound effects.  I
especially like the world geography lesson packaged as "Where in
the World is Carmen Sandiego".

Computers are accurate, infinitely patient, and highly interactive.
In this regard, they surpass classroom teachers.  I foresee the day
when computers will mediate 80% of the learning, freeing educators
to focus on special problems and enrichment beyond standard curricula.

--Barry Kort

------------------------------

Date: 9 Feb 88 14:59:53 GMT
From: decvax!sunybcs!rapaport@ucbvax.Berkeley.EDU  (William J.
      Rapaport)
Subject: Re: Becoming CAI literate

In article <817@aucs.UUCP> 870158a@aucs.UUCP (Benjamin Armstrong) writes:
>...direct me to a newsgroup where such discussions take place.

Try comp.ai.edu

------------------------------

Date: 9 Feb 88 15:01:56 GMT
From: decvax!sunybcs!rapaport@ucbvax.Berkeley.EDU  (William J.
      Rapaport)
Subject: Re: Becoming CAI literate

In article <822@aucs.UUCP> 870158a@aucs.UUCP (Benjamin Armstrong) writes:
> [Query about AI in education.]

You also might want to subscribe to news.announce.newusers, which,
among other things, has a complete list of newsgroups.

------------------------------

Date: 11 Feb 88 21:14:12 GMT
From: umich!dwt@umix.cc.umich.edu  (David West)
Subject: Re: Becoming CAI literate

In article <23938@linus.UUCP> bwk@mbunix (Barry Kort) writes:

>Computers are accurate, infinitely patient, and highly interactive.
>In this regard, they surpass classroom teachers.

Well, yes, but the things computers are best at teaching humans are,
by and large, things that humans used to  have to do only because they
didn't have computers.   Why train humans to emulate machines if
you have adequate machines?

-David West

------------------------------

Date: 9 Feb 88 15:05:50 GMT
From: decvax!sunybcs!rapaport@ucbvax.Berkeley.EDU  (William J.
      Rapaport)
Subject: Re: sci.psychology (was sci.psych) voting results

In article <1536@uhccux.UUCP> todd@uhccux.UUCP (The Perplexed Wiz) writes:
>
>I believe that this should be sufficient to establish sci.psychology.

Don't you folks know about Psychnet, the psychology bulletin board?

>From: EPSYNET@UHUPVM1 (Robert C. Morecock)
Subject: Announcement of new bboard named psychnet
Date: 30 Jun 86 17:34:27 GMT

           [Forwarded from Arpanet-BBoards by Laws@SRI-AI.]

PSYCHNET (tm)     Psychology Newsletter and Mailing List      EPSYNET@UHUPVM1
   The Psychnet mailing list and Newsletter sends out information and
news to those who sign up.  Within Bitnet, Psychnet is also a 24-hour
server machine which mails out files to users who first send the
PSYCHNET HELP command to userid UH-INFO at node UHUPVM1.  OUTSIDE
BITNET Psychnet is a mailing list and Newsletter only.  Once per week
ALL members receive the latest Psychnet Newsletter and Index of files
available on the server machine.  Outside Bitnet, if a file looks
interesting send an E-mail request to userid EPSYNET (NOT uh-info) at
node UHUPVM1 and the file will be shipped out to you.  Persons within
may also sign up for the mail list and will get the Newsletter and
Index along with other news.  Users within Bitnet should get their
files directly from the server machine.  An Exec file is available for
CMS users and COM files are available for VAX users within Bitnet.
   If you have a file or idea you wish distributed to members of the
list you may send it to userid EPSYNET at node UHUPVM1 and it will be
sent out for you, usually with the week's Psychnet Newsletter.  An
initial formal purpose of Psychnet is distribution of academic papers
in advance of this year's (1986) APA convention.  Other purposes will
develop according to the needs and interests of the profession and
Psychnet users.
   All requests to be added to or deleted from the mailing list, or to
have files distributed should be sent to:
   Coordinator:  Robert C. Morecock, Psychnet Editor, EPSYNET@UHUPVM1

------------------------------

Date: Sun, 10 Jan 88 10:35:44 PST
From: uazchem!dolata@arizona.edu (Dolata)
Subject: AIList V6 #27 - Nanotechnology


The reports of the significance of Foster and Frommer's work (Nature, 331,
p324 (1988)) are a bit overstated.   To wit:

        F&F placed drops of di(2-ethylhexyl) phthalate on the carbon surface.
They then >SWEPT< the area with the scanning tunneling microscope,
causing a chemical reaction when the voltage applied to the electrode tip
was greater than 3.5 volts.   After this,  they then examined the surface,
and found lumps that correspond in size and shape to di(2-ethylhexyl)
phthalate molecules.   If they then increase the tip voltage,  they can
blast the stuff off of the surface.   At intermediate voltages, they
"believe" that they can cause intermediate-level cleavages and rearrangements.

        Note that the process involves SCANNING of whole areas,  and not
individual pinning.   This is nothing new;  it can be done by standard
electrochemical techniques.  The ability to then alter individual molecules is
not very exciting either;   people have been doing that chemically with
polymer-bound systems for a long time.   The exciting possibility was not
strictly addressed:  the ability to selectively alter molecules in a
spatially regular fashion,  i.e.,  convert one to state 1, convert the
next to state 2,  etc...

        I don't mean to completely pooh-pooh their work.  It does indicate
an exciting new direction.   However,  I caution people against either claiming
that they did something that they didn't, or being swept up in overly
strong claims.    Chemists have long had the ability to pin molecules to a
surface,  and then modify them,  even in ones and twos at a time.    What
they haven't had much luck doing is using mechanical means to create
spatially interesting patterns of altered molecules.  What F&F have done is
to point the way, but they missed the biggy.

------------------------------

Date: 11 Feb 88 18:04:41 GMT
From: Martin Taylor <mmt%dciem.uucp@RELAY.CS.NET>
Reply-to: Martin Taylor <mmt%dciem.uucp@RELAY.CS.NET>
Subject: Re: Intelligent Nanocomputers


> ... such a machine could then just be allowed to run, and
>should be able to accomplish a century of progress in one hour.

I think we already do that, and have over the course of evolution managed
such a speedup several times.  No reason why it shouldn't happen again.
The building of structure (information, organization ...) is recursive.
The more you have, the easier it is to get more (a bit like money, come
to think of it, and for much the same reason).  BUT...humans will not
participate in this greatly augmented progress, any more than green algae
participate in human progress (except perhaps to be damaged by the side-
effects, analogous to pollution and destruction of habitat with which
we destroy those that have not shared our "progress").
--

Martin Taylor
...uunet!{mnetor|utzoo}!dciem!mmt
mmt@zorac.arpa
Magic is just advanced technology ... so is intelligence.  Before computers,
the ability to do arithmetic was proof of intelligence.  What proves
intelligence now?  Obviously, it is what we can do that computers can't.

------------------------------

Date: 8 Feb 88 22:08:59 GMT
From: hpcea!hpnmd!hpsrla!hpmwtla!garyb@hplabs.hp.com  (Gary
      Bringhurst)
Subject: Re: interviewing experts

(for the nasty line eating bug)

Warning: flaming ahead

As a (modest) computer scientist, I always find it disturbing to read
condescending remarks like those of professors Wood and Ford, who have, by
their own admission, been involved in AI only a short time (two years).

>We
>do, however,  believe that it is important for practicing
>knowledge engineers to attend to methodologies developed outside
>of AI so that they can spend their time  refining and extending
>their application to AI rather than "reinventing the  wheel."

I agree with this statement, as I believe any professional should try to expand
his area of expertise as far as possible.  Would I be out of place to ask
that cognitive psychologists who wish to contribute to AI study a little
computer science in return?

I have actually taken a class from Dr. Wood, and unless his depth of knowledge
in the field of computer science has increased significantly since early 1987,
I would find it very hard to give much weight to anything he says.

>Larry E. Wood                      John M. Ford
>woodl@byuvax.bitnet                fordjm@byuvax.bitnet

I suppose I'm just tired of well-meaning zealots jumping into the fray.
The AI bandwagon is loaded heavily enough as is.  Let's lighten the load
a little.

Gary L. Bringhurst

(DISCLAIMER:    My opinions do not, in general, bear any resemblance at all
                to the opinions of my employer, which actually has none.)

------------------------------

End of AIList Digest
********************

∂18-Feb-88  0039	LAWS@KL.SRI.COM 	AIList V6 #34 - AI in Management, Software Engineering, Interviewing
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 18 Feb 88  00:39:36 PST
Date: Wed 17 Feb 1988 21:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #34 - AI in Management, Software Engineering, Interviewing
To: AIList@SRI.COM


AIList Digest           Thursday, 18 Feb 1988      Volume 6 : Issue 34

Today's Topics:
  Queries - Applications of AI Broadcast & Public Domain OPS5 &
    Data Smoothing for Character Recognition &
    Cryptology and Neural Networks & Kermit & Schank's Papers,
  References - Object-Oriented Programming & AI in Management &
    Software Engineering for AI,
  Expert Systems - Interviewing Experts

----------------------------------------------------------------------

Date: 15 Feb 88 14:38 -0600
From: Mike Attas <attas%wnre.aecl.cdn%ubc.csnet@RELAY.CS.NET>
Subject: Query - Applications of AI

Ken, do you know anything about an IEEE symposium entitled "Applications
of AI" scheduled to take place in NYC on Thursday the 18th?  Apparently
it is being broadcast live by satellite.  Some idea of the speaker(s),
topics, duration, intended audience, etc. would help in deciding what,
if anything, to do about this.
Apologies if you've announced it and I didn't catch the message.
Thanks.         Michael Attas

  [Nope, sorry.  -- KIL]

------------------------------

Date: 9 Feb 88 23:27:51 GMT
From: mcvax!unido!tub!tmpmbx!netmbx!morus@uunet.uu.net  (Thomas M.)
Subject: Public Domain OPS5 wanted.

Does anybody know where to get a public domain OPS5 (source or objectcode)
running on IBM-PCs - maybe on the net?
There should be an unmaintained Common-Lisp version which I think could be
ported to GCL or XLISP. Performance is not the main point, although the
essentials (Rete match) should be included.
It seems to be common for the people who advertise PC-OPS5s not to make any
comment about inclusion of the Rete match. They only make claims like:

" Our implementation is so  efficient that you'll think your PC is a full-
  sized mini-computer."

There are even some "full implementations" which do not use the Rete match!
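
To see what is at stake, here is a minimal sketch in Python of a naive
(non-Rete) recognize-act loop; the rule and fact formats are invented for
illustration and bear no resemblance to real OPS5 syntax:

    def naive_match(rules, wm):
        # Re-test every rule against the whole working memory each cycle;
        # the cost grows with (rules x facts) even if little has changed.
        conflict_set = []
        for name, conditions, action in rules:
            if all(any(cond(f) for f in wm) for cond in conditions):
                conflict_set.append((name, action))
        return conflict_set

    def run(rules, wm, limit=100):
        for _ in range(limit):
            conflict_set = naive_match(rules, wm)
            if not conflict_set:
                break
            name, action = conflict_set[0]   # trivial conflict resolution
            new_facts = action(wm)
            if not new_facts:                # nothing new: quiesce
                break
            wm |= new_facts

    # One toy rule: if both inputs are present, assert a goal fact.
    rules = [("combine",
              [lambda f: f == ("has", "a"), lambda f: f == ("has", "b")],
              lambda wm: {("goal", "reached")} - wm)]
    wm = {("has", "a"), ("has", "b")}
    run(rules, wm)
    print(wm)   # includes ('goal', 'reached') along with the two inputs

A Rete network avoids exactly the rescanning done in naive_match by caching
partial matches and propagating only the changes to working memory, which
is why its presence or absence matters for any performance claim.
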
Has anybody had any experience with

- Sierra OPS5 by Inference Engine Technologies
- OPS5+
- TOPSI 2.0

Are these programs full implementations (given VAX-OPS5 as a kind of reference)?
I would be very thankful for every comment on the topics mentioned above.

--
@(↑o↑)@   @(↑x↑)@   @(↑.↑)@   @(↑_↑)@   @(*o*)@   @('o`)@   @(`!')@   @(↑o↑)@
@  Thomas Muhr    Tel.: (Germany) 030 - 87 41 62    (voice!)                @
@  NET-ADRESS:    muhrth@db0tui11.bitnet  or morus@netmbx.UUCP              @
@  BTX:           030874162                                                 @

------------------------------

Date: 17 Feb 88 14:32:48 GMT
From: clong@topaz.rutgers.edu  (Chris Long)
Subject: Help Needed With Data Smoothing, Character Recognition

I am currently working on a vision project: solving the font-free
character recognition problem to be exact.  I am looking for any
and all reference sources dealing with data smoothing and
noise reduction, especially as applied to graphics.  Books,
technical journals, leads, etc. are all acceptable.   Highly
technical references are welcome, in fact, the more technical the
better.

Thank you for your time.
--

Chris Long
Rutgers University
New Brunswick, NJ

------------------------------

Date: 17 Feb 88 21:32:51 GMT
From: jscosta@cod.nosc.mil  (Joseph S. Costa)
Subject: Cryptology and Neural Networks?


     Has anyone out there heard of any attempts to apply the fledgling
concept of neural networks to cryptology?  I know that neural nets are found
to be quite nice for handling pattern-recognition problems, so the pair
would seem a natural marriage to me.

     JSC

------------------------------

Date: Mon, 15 Feb 88 10:04:31 est
From: Mr. David Smith <dsmith@gelac.arpa>
Subject: Kermit

Ken,

This may be the wrong forum to ask the question, but I need to know if
there are any Government agencies which regularly use Kermit as a mechanism
for transferring files.  I would appreciate replies direct to me,
DSMITH@gelac.arpa or by phone, (404)494-3345.

------------------------------

Date: 15 Feb 88 06:30:30 GMT
From: kddlab!icot32!cavax3!cau1!lala@uunet.uu.net  (J.matsuoka)
Subject: We look for Schank's papers etc. (In English)


I would like to know the references of papers/programs/books/reports
published (or otherwise accessible) recently by the AI Group
(Prof. Schank's group) at Computer Science Dept., Yale Univ., USA.
Can anyone help me ??  Please send me e-mail.
ThanQ !

      lala@cau1.crc.junet        H. Matsuoka

      CANON INC., Research Center
      5-1, Morinosato-Wakamiya, Atsugi city, Kanagawa, 243-01, Japan
      Tel: 0462-47-2111 x431.

------------------------------

Date: Mon, 15 Feb 1988 11:27:06 EST
From: Deeptendu Majumder
Subject: Conference Paper

I found the following reference in our library database, but it
seems the conference proceedings have not been published yet. I would
like to know if anybody has any information about this. If you have
a preprint, could you send me a copy of this paper? I would appreciate
that very much.

Papazoglou, M.P.; Hoffmann, C.
Towards versatile object oriented query languages.

1987 IEEE Workshop on Languages for Automation. (ISBN 0818647973)
pp 99-102. Vienna, Aug 1987.

Thanx in advance

Deeptendu Majumder
MEIBMDM @ GITVM2
Box 30963
Georgia Tech
Atlanta, GA 30332

------------------------------

Date: Wed, 17 Feb 88 11:09:16 EST
From: MEPHDBO@gitnve2.gatech.edu
Subject: Query on Object Oriented and Robotics

I saw in one of the recent issues of AIList a request for information in
the area of object-oriented programming and robotics. I am not an expert
in that area by any means, but I picked out some references from our library
database. I am including them here; they are not complete references,
but I have included enough info to locate them, I hope!

1) 1984 Second Biennial International Machine Tools Tech Conf Proceed.
Vol 3 (Schmitt and Gruver.)

2) Schmitt and Gruver. IEEE Trans. on Sys., Man, and Cybernetics, July-Aug 1986.
Vol SMC-16 No. 4 pp 582-9.

3) Nackman, Lavin and Taylor (IBM) AML/X: A prog Lang for Design and Manf
1986 Proc of Fall Joint Comp Conf pp 149-59

4) Bourne (CMU) CML: A Meta Interpreter for Manf. AI Mag Vol 7 No. 4
pp 86-95 , Fall 1986.

5) Lalonde: 1987 IEEE International Conf on Robotics  and Automation
pp 1456-62, Vol 3. "Smalltalk as a Prog Lang for Robotics".

6) Kemper, Lockmann (IBM FRG), "Obj Oriented data storage for Robotics"
Inform. Forsch. Entwickl Vol 2 No. 4 pp 151-69 (German).

7) Allen, P.K. (Columbia Univ) A framework for implementing
multisensor robotics task. 1987 Image Understanding Workshop Proc.
Vol 1 pp 392-8.

Please excuse me for the way I have presented them. I hope they are of
some use to the relevant person.


Deeptendu Majumder
MEIBMDM at GITVM2
MEPHDBO at GITNVE2

------------------------------

Date: 15 Feb 88 12:50:15 GMT
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Reply-to: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: Query: AI works in Management


In article <8801291631.AA21578@ucbvax.Berkeley.EDU> ISSLPL@NUSVM.BITNET
(Joel Loo) writes:
>  There aren't many AI research works on Management that I've  come across.

There was a great deal of AI work in management in the 1950s, as
a few of the founding fathers of AI began work in 'Scientific Management',
and don't seem to have changed their perspectives too much :-)

Luckily for the managed, mathematical models have long been restricted
in their application.  Today, sociological perspectives are more
dominant than algebraic, idealistic ones.  Any new AI work which can't
incorporate the recent perspectives on corporate and office culture is
going to be nearly thirty years out of date.
--
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert

------------------------------

Date: 16 Feb 88 12:53:25 GMT
From: Steve M Easterbrook <mcvax!ivax.doc.ic.ac.uk!sme@uunet.UU.NET>
Reply-to: Steve M Easterbrook <mcvax!doc.ic.ac.uk!sme@uunet.UU.NET>
Subject: Re: request for refs on SWE for AI


In article <8802080744.AA21399@ucbvax.Berkeley.EDU> LEWIS@cs.umass.EDU writes:
>   I'm currently taking a seminar on Software Engineering and AI. It's supposed
>to be balanced, but right now we've found many more papers on applying AI to
>software engineering than we have on software engineering applied to AI.
>Does anyone have some suggested papers on programming techniques,
>language design, environments, methodology, etc. for AI or LISP?

There aren't that many. One that was mentioned on this list recently is

R. J. K. Jacob and J. N. Froscher "Software Engineering for Rule-Based Systems"
1986 Proceedings, Fall Joint Computer Conference, Dallas Texas, p185-189

There are others by the same pair, but this was the easiest to get hold of.
I'd be interested in hearing about any more you turn up.

                                Steve Easterbrook
                                <sme@uk.ac.ic.doc>

------------------------------

Date: 15 Feb 88 11:16:39 GMT
From: mcvax!ukc!its63b!hwcs!hci!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: interviewing experts

In article <2300001@hpmwtla.HP.COM> garyb@hpmwtla.HP.COM
(Gary Bringhurst) writes:
>
>Would I be out of place to ask that cognitive psychologists who wish to
> contribute to AI study a little computer science in return?
Hear! hear! (and some psychology too :-) )
>
>I suppose I'm just tired of well meaning zealots jumping into the foray.
 (reference to Knowledge Engineering tutors with no computing knowledge)

Whilst sceptical about much AI, it's my opinion that in 10 years time,
Knowledge Engineering will be seen as one of the most important
contributions of AI to Systems Design.  Why? - because the skills
required for successful knowledge elicitation are applicable to ALL
systems design.  The result is that computer specialists who would
never have attended 'useless' courses or read up on 'Participative design'
and 'end-user involvement' have been seduced into learning about some
central skills in these design approaches (KE is still weak on
organisational/operations modelling though).  So, even if Expert
Systems never become the dominant systems technology, we will have
more systems specialists who do know how to find out what people
want. So, those well-meaning zealots, ignorant of computing, but
knowledgeable about human issues, have, in the promise of Intelligent
Systems and big profits, at last found a way to influence and educate
more computing professionals.  Pass the quiche!
--
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert

------------------------------

End of AIList Digest
********************

∂18-Feb-88  0231	LAWS@KL.SRI.COM 	AIList V6 #35 - Genetics, Fuzzy Logic, Nanotechnology, Greenblat    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 18 Feb 88  02:31:44 PST
Date: Wed 17 Feb 1988 21:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #35 - Genetics, Fuzzy Logic, Nanotechnology, Greenblatt
To: AIList@SRI.COM


AIList Digest           Thursday, 18 Feb 1988      Volume 6 : Issue 35

Today's Topics:
  References - Genetic Algorithms & Self-Organizing Systems,
  AI Tools - Fuzzy Logic vs. Probability Theory,
  Nanotechnology - Altering Individual Atoms,
  Biography - Richard Greenblatt

----------------------------------------------------------------------

Date: 15 Feb 88 18:22:11 GMT
From: g451252772ea@deneb.ucdavis.edu  (0040;0000001899;0;327;142;)
Subject: Re: becoming literate with genetic algorithms

The references, generally at good libraries, that I know of for GAs:

Introductory:
 Holland, J., et al.  INDUCTION.  1986, MIT Press.  The book is a coherent
whole, not a collection of separately authored papers - and reads very well
by any standard.  Most of it discusses human induction, but the main model
introduced early on is Holland's.  And the human material is fascinating in
its own right, only partly because of the lucid presentation.  The description
of Holland's GA is complete, and an alternative system, PI, is also presented.
This is a more familiar symbol-based production system, in LISP.

 Holland, J. "Genetic Algorithms and Adaptation", in O. Selfridge, et al,
ADAPTIVE CONTROL OF ILL-DEFINED SYSTEMS.  1984, Plenum Press, NY.  This is
a discrete chapter, in which an overview of GA is provided. Almost every main
theme is touched on.

 Davis, L.  GENETIC ALGORITHMS AND SIMULATED ANNEALING.  1987, Morgan Kaufmann
Pub, Los Altos, CA.  A collection of research papers by Holland's colleagues,
mostly (his INDUCTION chapters are reproduced here also).  A good variety of
current work, and again very lucid as technical/research writing goes (by
contrast, the Neural net literature is hopeless).  Topics include a study of
the TSP; parallel implementation of the CFS-C simulation library for GA on
the Connection Machine (nice!); Axelrod's study of GA in round-robins of the
iterated Prisoner's dilemma; a somewhat vague but very suggestive study on
designing a mapping from 'an East Asian language' onto a usable keyboard,
using a GA; some formal tests of 'hard' problems for GAs; and another
suggestive paper (for me) on producing long action sequences with GA by
means of 'hierarchical credit allocation' (this problem has parallels in
the animal-behavior literature I'm familiar with).

 Holland, J.  ADAPTATION IN NATURAL AND ARTIFICIAL SYSTEMS.  1975, U. Michigan
Press.  The definitive foundation, marred only by a generous use of formal
notation (not insensibly, but off-putting nonetheless).  The main conceptual
addition since this has been the interpretive change in INDUCTION, I think.

The GA community has held two conferences, last summer and in '86.  The
proceedings are available from Lawrence Erlbaum Assoc., 365 Broadway,
Hillsdale, NJ 07642.  My copy is on order ("Proc. Second International
Conf. on GA and their applications", held at Cambridge, MA, July 28-31, 1987).

And the various dissertations Holland has supervised are worth perusing via
U.Microfilm copies at $25 each.
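
For anyone who wants the bare mechanics before tackling the books, here is a
minimal generational-GA sketch in Common Lisp.  Everything in it is invented
for illustration (the count-the-ones fitness function, the parameter settings,
the two-way tournament selection); Holland's classifier systems are far
richer, so treat this as a toy, not as anyone's real implementation.

(defun random-bitstring (n)
  (coerce (loop repeat n collect (random 2)) 'vector))

(defun fitness (bits)
  "Toy fitness: count the ones in the bitstring."
  (reduce #'+ bits))

(defun pick (population)
  "Two-way tournament selection: keep the fitter of two random individuals."
  (let ((a (elt population (random (length population))))
        (b (elt population (random (length population)))))
    (if (> (fitness a) (fitness b)) a b)))

(defun crossover (mum dad)
  "One-point crossover of two equal-length bitstrings."
  (let ((cut (random (length mum))))
    (concatenate 'vector (subseq mum 0 cut) (subseq dad cut))))

(defun mutate (bits rate)
  "Flip each bit independently with probability RATE."
  (map 'vector #'(lambda (b) (if (< (random 1.0) rate) (- 1 b) b)) bits))

(defun evolve (&key (pop-size 30) (bits 20) (generations 50) (rate 0.02))
  "Run a plain generational GA and return the best individual at the end."
  (let ((population (loop repeat pop-size collect (random-bitstring bits))))
    (loop repeat generations
          do (setf population
                   (loop repeat pop-size
                         collect (mutate (crossover (pick population)
                                                    (pick population))
                                         rate))))
    (reduce #'(lambda (a b) (if (> (fitness a) (fitness b)) a b)) population)))

;; (fitness (evolve))  ; usually close to the maximum of 20 bits set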

For relating GA to NNets, I'll hazard to volunteer Richard Belew's name.  He
responded to an earlier posting I made and stated an interest in what
commonalities there might be.  He teaches at UCSD: rik@sdcsvax.ucsd.edu.

Oh yes: as the _very best_ intro article to GA, I recommend the final issue
of Science 86, for July, I think.  Too bad that mag died.

Hopefully helpfully (let me know what else you find- I've been teaching this
material to budding animal behaviorists!) -



Ron Goldthwaite / UC Davis, Psychology and Animal Behavior
'Economics is a branch of ethics, pretending to be a science;
 ethology is a science, pretending relevance to ethics.'

------------------------------

Date: 14 Feb 88 15:31:54 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen
      Smoliar)
Subject: Re: becoming literate with genetic algorithms

In article <9430@shemp.CS.UCLA.EDU> jason@CS.UCLA.EDU () writes:
>John Holland was here recently giving talks on genetic algorithms.  I found
>the
>concept rather intriguing.  After hearing his lectures, I realized I needed to
>do some introductory reading on the subject to fully appreciate its potential.
>
The best source would be the book entitled INDUCTION, which Holland wrote with
Holyoak, Nisbett, and Thagard.  Most of the material from the talk is in
Section 4.1 (I think).  The preceding material leading up to the major
argument is very well written, as is the subsequent discussion.

------------------------------

Date: 16 Feb 88 11:16:27 GMT
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Reply-to: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: self organizing systems


In article <8801291421.aa28769@ARDEC-AC4.ARDEC.ARPA> pbeck@ARDEC.ARPA
(Peter Beck, LCWSL) writes:
>
>Is this a generally accepted proposition, ie, that complex constituent
>elements can "NOT" form self organizing systems??

Broadly speaking, social theories are often opposed across a
co-operation vs. conflict continuum.  Theories in the Marxian tradition
stress conflict as a fundamental dynamic of society.  Theories in the
functionalist tradition stress adaptation towards universal ends (e.g.
Talcott Parsons).  Look to Marxian theories (e.g. post/neo/vanilla
-structuralism) for evidence of non-self-organisation.  Look to
functionalist ones for evidence of dormitory consensus.

NB Rednecks! - 'Marxian' is a scholarly term, 'Marxist' is both a
scholarly and a political term.  Marx claimed he wasn't a Marxist!  It
is thus safe to follow up these ideas without the risk of brainwashing
yourself into running off to Cuba/Nicaragua :-)
--
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert

------------------------------

Date: Mon, 15 Feb 88 20:46:33 PST
From: golden@frodo.STANFORD.EDU (Richard Golden)
Subject: FUZZY LOGIC VS. PROBABILITY THEORY

I am not an expert in Fuzzy Logic or Probability Theory but I have examined
the literature regarding the foundations of Probability Theory and the
derivation of these foundations from basic principles of deductive logic.
The basic theoretical result is that selecting a "most probable" conclusion
for a given set of data is the ONLY RATIONAL selection one can make in
an environment characterized by uncertainty.  (Rational selection in this
case meaning consistency with the classic deductive/symbolic logic - boolean
algebra.)  Thus, one could argue that if one constrains the class of
possible inductive logics to be consistent with the laws of deductive logic
then Probability Theory is the MOST GENERAL type of inductive logic.

The reference on which these arguments are based is Cox (1946),
"Probability, frequency and reasonable expectation," American
Journal of Physics, 14, 1-13.
The argument is based upon the following hypotheses:

(i) The belief of the event B given A may be represented by a
    real-valued function F(B,A).

(ii) F(~B,A) may be computed from F(B,A)

(iii) F(C and B,A) may be computed from F(C,B and A) and F(B,A).
      Note this assumption's similarity to Bayes' rule, but the
      multiplicative property is not assumed.
(iv)  Assumptions (i), (ii), and (iii) must be consistent with the
      laws of Boolean Algebra (i.e., deductive/symbolic logic).

From these assumptions one can prove that F(B,A) must be equivalent
to the conditional probability of B given A. That is, F(B,A) must
lie between a maximum and minimum value (say 1 and 0) and the sum of
all possible values for B for a particular value of A must equal the
maximum value (1).  Note that we are taking the subjectivist view of
probability theory and we are NOT interpreting the probability of
an event as the limiting value of the relative frequency of an event.
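
As a trivial sanity check (not a proof, and the little "worlds" below are
invented), ordinary conditional probability over a finite set of equally
likely possibilities does satisfy assumptions (ii) and (iii), with the
familiar complement and product rules doing the computing:

(defparameter *worlds*
  ;; each world lists the atomic propositions true in it; all equally likely
  '((a b c) (a b) (a c) (a) (b) (c)))

(defun f (props given)
  "Degree of belief in the conjunction PROPS, given the conjunction GIVEN."
  (flet ((all-hold (ps w) (every #'(lambda (p) (member p w)) ps)))
    (let ((a-worlds (remove-if-not #'(lambda (w) (all-hold given w)) *worlds*)))
      (/ (count-if #'(lambda (w) (all-hold props w)) a-worlds)
         (length a-worlds)))))

;; (f '(b) '(a))                      => 1/2
;; (- 1 (f '(b) '(a)))                => 1/2   ; F(~B,A) from F(B,A), as in (ii)
;; (* (f '(c) '(b a)) (f '(b) '(a)))  => 1/4
;; (f '(c b) '(a))                    => 1/4   ; the product form of (iii)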

To my knowledge, the axioms of Fuzzy Logic cannot be derived from
consistency conditions generated from deductive logic, so I conclude
that Fuzzy Logic is not appropriate for inferencing.  Any comments?!!!

                                Richard Golden
                                Psychology Department
                                Stanford University
                                Stanford, CA 94305   GOLDEN@PSYCH.STANFORD.EDU


------------------------------

Date: Mon, 15 Feb 1988  18:31 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList V6 #27 - Nanotechnology

Dolata's remarks about nanoscopic chemistry missed the point, so far
as I can see, in arguing that because it is a scanning microscope it
is not involved with individual molecules but is more like regular
volume chemistry.  However, the molecular rearrangement was not
accomplished by a conventional bulk effect.  Instead, it was
accomplished by a sub-microsecond pulse applied during the scan so
that it occurred while the needle was over a particular molecule.  The
next step, of course, is to try to make a particular modification at a
particular site on the molecule.

Much more will be done in the area soon, I'm sure, because the
techniques seem quite accessible.  But I see no reason to denigrate
the technique because it uses scanning.  Simply think of scanning as
examining, and possibly modifying, large numbers of points is
sequence.  What could be better?  The trouble with traditional
chemistry is, in fact, that it is constrained to do the same thing to
everything, in parallel.

------------------------------

Date: 14 Feb 88 08:28
From: minow%thundr.DEC@decwrl.dec.com (Martin Minow THUNDR::MINOW
      ML3-5/U26 223-9922)
Subject: Article on Richard Greenblatt

AIList readers might enjoy this article from the Boston Globe, Feb. 7, 1988.

                  ZORCHED OUT: A COMPUTER HACKER'S TALE
                     by Alex Beam, Boston Globe staff

           Richard Greenblatt:  Single-minded, unkempt, prolific, and
           canonical MIT hacker who went into night phase so often
           that he zorched his academic career.  The hacker's hacker.

                         - HACKERS by Steven Levy.

       CAMBRIDGE -- "Lights On!"  Greenblatt yells, pushing through the
       door of MIT's Model Railroad Club.  "That's just in case
       anybody's sleeping under the layout," he explains to a visitor.
       "They might pick up a shock or something."

       Happily, no one is sleeping underneath the thousand feet of
       handmade track that may be the world's most sophisticated model
       railway.  The last person to fall asleep under the layout was
       probably Greenblatt, who spent so much time tinkering - "hacking" -
       with the railroad's switching system, and with his other
       favorite toy, computers, that he flunked out of MIT in his
       sophomore year.

       Greenblatt, now 44, has gone on to bigger things.  After a long
       career as senior researcher at MIT's Artificial Intelligence Lab,
       he helped found Lisp Machine Inc., one of the first artificial
       intelligence startups.  Now he is president of Cambridge-based
       GigaMOS, which purchased LMI's assets after it went broke last
       year.

       But scratching the surface of Richard Greenblatt, AI entrepreneur,
       one quickly finds traces of 17-year-old Ricky Greenblatt, the
       soda-pop swilling science whiz who arrived at MIT as a bewildered
       freshman from Columbia, MO, in 1963.

       Greenblatt still drops in on the railroad club from time to time,
       and exudes boyish enthusiasm when demonstrating "the famous
       Greenblatt track cleaning machine," a cleverly-engineered
       locomotive that spins an abrasive grinding wheel over the
       nickel-silver track.

       He sheepishly explains that he is "out of phase" on a particular
       day, because he spent the previous night hacking away on a
       computer.

       And even though he has cleaned up his presentation - friends say
       he bathed so rarely as an undergraduate that they had to ambush
       him with air freshener - Greenblatt still acts like an
       absent-minded computer genius.  Pallid-skinned from long hours of
       computer work, he trundles around Cambridge in rumpled work
       pants and a plaid shirt, with a digital calculator watch
       protruding from his breast pocket and a cellular phone slung
       across his shoulder.

       Although he has earned plenty of money in his computer ventures,
       he still rents a room in the same house in Belmont where he has
       lived for 20 years.  Why not buy a house:  "It's too much
       trouble," Greenblatt says.  "You have to pay taxes, mow the lawn.
       I don't want to bother."

       "Ricky lives in a world of his own, dominated by his own genius,"
       says Andy Miller, who briefly roomed with Greenblatt at MIT.  "We
       never saw him when he lived with us.  The Sun meant absolutely
       nothing to him - it happened to rise and fall in a way that
       wasn't in synch with his schedule."

       After two semesters on the Dean's List at MIT, Greenblatt threw
       in with the small band of electronics fanatics hanging around
       the Model Railroad Club.  Synchronizing the model railroad's
       switching system - its circuits can control five trains chugging
       across the vast layout, and set the 200 switches so no crashes
       occur - turned out to be a lot like programming the early
       computers that were making their first appearance in MIT labs.

       (It also resembled another electronic gimmick called "phone
       hacking," or fooling the phone system into placing free
       long-distance calls, which resulted in suspension of several of
       Greenblatt's friends.)

       Greenblatt and his friends often spent the daylight hours working
       on the railroad, and then migrated to a neighboring lab to stay
       up all night next to the PDP-1, DIGITAL EQUIPMENT CORP.'s first
       computer.  Fueled by the Railroad Club's private Coca-Cola
       machine, Greenblatt and his fellow hackers "wrapped around" day
       into night, working for 30 hours at a stretch to solve thorny
       problems, either with the railway or the computer.

       "To a large extent, our group wasn't interested in the normal
       social events around the institute," explains fellow hacker Alan
       Kotok, now a corporate consulting engineer at Digital.  "The
       railroad club was like a fraternity.  There were people you could
       talk to day or night about things of common interest."

       Although no one asked him to, Greenblatt wrote a high-level
       language computer program for the PDP-1, so the club's timetable
       system could be stored on the new computer.  Unfortunately, the
       young programmer's deepening involvement in computer hacking
       doomed his academic career.  "I sort of zorched out on classes,"
       Greenblatt admits.  During one of his 30-hour work blasts,
       Greenblatt slept through a final exam, and had to leave MIT.

       Of course, MIT didn't get where it is today by turning away
       computer talent.  After a brief sojourn on Route 128, Greenblatt
       landed a job as a programmer at the Artificial Intelligence Lab,
       and stayed for 20 years.

       Greenblatt's fame grew and grew.  He and a co-worker wrote ITS,
       an early minicomputer time-sharing program that is still in use
       today.  He was one of the early programmers to work in LISP, the
       high-level language that has become the key building block for
       artificial intelligence.

       "He would attack problems with great vigor," remembers Donald
       Eastlake, another railroad club alumnus.  "Everybody was smart,
       but the people who really excelled were smart and tenacious.
       He was one of the primary examples of that."

       An accomplished chess player, Greenblatt wrote MacHack, a chess
       program for a later DIGITAL mini, the PDP-6.  The program scored
       an important victory for AI boosters when it defeated a
       prominent critic of artificial intelligence who insisted that a
       computer would never play chess well enough to beat a 10-year
       old.  The program later became a member of the American Chess
       Federation and the Massachusetts State Chess Association.

       When Greenblatt later did graduate work at MIT, administrators
       hinted that if he submitted his chess program as a doctoral
       thesis, he might be awarded a degree.  "I never really got around
       to it," Greenblatt confesses.  "It just didn't seem that important."

------------------------------

End of AIList Digest
********************

∂21-Feb-88  0154	LAWS@KL.SRI.COM 	AIList V6 #36 - Grapher, Seminars, Conferences  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 21 Feb 88  01:53:56 PST
Date: Sat 20 Feb 1988 23:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #36 - Grapher, Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Sunday, 21 Feb 1988       Volume 6 : Issue 36

Today's Topics:
  AI Tools - ISI Grapher Update,
  Seminars - Problems in Prediction and Causal Reasoning (BBN) &
    Micro Motors (BBN) &
    Traveling Salesman in Token-Passing Networks (SRI),
  Conferences - Neural Networks in SOAR (Space Operations, Robotics) &
    Illinois Workshop on Decision Making &
    7th Nat. Conf. on Artificial Intelligence (AAAI-88)

----------------------------------------------------------------------

Date: Tue, 16 Feb 88 11:09:40 PST
From: Gabriel Robins <gabriel@vaxb.isi.edu>
Subject: ISI Grapher: an update


AI/Graphics tool availability update:

    "The ISI Grapher: a Portable Tool for Displaying Graphs Pictorially"

    Now available for the MacIntosh II and SUNs (as well as for Symbolics
                                                         and TI Explorers).


Greetings,

   Due to the considerable interest drawn by the ISI Grapher so far, I am
posting this abstract summarizing its function and current status.  The ISI
Grapher Common LISP sources are now available to both domestic and foreign
sites.

   A paper describing this effort is now available, entitled: "The ISI
Grapher: a Portable Tool for Displaying Graphs Pictorially."  A more
detailed document is also available, called "The ISI Grapher Manual."  It
describes the implementation, usage, data-structures, and algorithms of the
ISI Grapher.

   The CommonLisp sources are also available.  It currently runs on Symbolics
versions 6 & 7, TI Explorers versions 2 & 3, MacIntosh II (under both Coral
Allegro LISP and ExperCommon LISP), and SUNs (under Franz and Lucid, using X).
Ports to other machines, such as HP Bobcats, are currently being planned.
If you know of any other ports, either completed or planned, please let me know.

   If you would like to have the paper and/or the sources, please forward your
name and postal address to "gabriel@vaxb.isi.edu" or to:

             Gabriel Robins
             Intelligent Systems Division
             Information Sciences Institute
             4676 Admiralty Way
             Marina Del Rey, Ca 90292-6695

ExperTelligence Inc. is currently marketing the ISI Grapher for the MacIntosh
under ExperCommon LISP.  You may contact them directly regarding the Mac
version: ExperTelligence Inc., 559 San Ysidro Road, Santa Barbara, CA 93108
(805) 969-7871.

               =======================================

                             The ISI Grapher

                              February, 1988

                              Gabriel Robins
                       Intelligent Systems Division
                      Information Sciences Institute


   The ISI Grapher is a set of functions that convert an arbitrary graph
structure (or relation) into an equivalent pictorial representation and
displays the resulting diagram.  Nodes and edges in the graph become boxes
and lines on the workstation screen, and the user may then interact with the
Grapher in various ways via the mouse and the keyboard.

   The fundamental motivation which gave birth to the ISI Grapher is the
observation that graphs are very basic and common structures, and the belief
that the ability to quickly display, manipulate, and browse through graphs  may
greatly enhance the productivity of a researcher, both quantitatively and
qualitatively.  This seems especially true in knowledge representation and
natural language research.

   The ISI Grapher is both powerful and versatile, allowing an
application-builder to easily build other tools on top of it.  The ISI NIKL
Browser is an example of one such tool.  The salient features of the ISI
Grapher are its portability, speed, versatility, and extensibility.  Several
additional applications have already been built on top of the ISI Grapher,
providing the ability to graph lists, flavors, packages, divisors, functions,
LOOM hierarchies, and Common-Loops classes.

  Several basic Grapher operations may be user-controlled via the specification
of alternate functions for performing these tasks.  These operations include
the drawing of nodes and edges, the selection of fonts, the determination of
print-names, pretty-printing, and highlighting operations.  Standard
definitions are already provided for these operations and are used by default
if the application-builder does not override them by specifying his own
custom-tailored functions for performing the same tasks.
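
The arrangement is the usual Lisp one of default operations that a caller may
override by passing in functions of his own.  Purely as an illustration, and
with invented names that are NOT the ISI Grapher's actual interface, the idea
looks roughly like this:

(defun default-print-name (node)
  (string-downcase (princ-to-string node)))

(defun default-draw-node (node label)
  (declare (ignore node))
  (format t "~&[node ~A]~%" label))

(defun default-draw-edge (from to)
  (format t "~&  ~A --> ~A~%" from to))

(defun display-graph (nodes edges
                      &key (print-name #'default-print-name)
                           (draw-node  #'default-draw-node)
                           (draw-edge  #'default-draw-edge))
  "Draw every node and edge with whatever functions the caller supplies."
  (dolist (n nodes)
    (funcall draw-node n (funcall print-name n)))
  (dolist (e edges)
    (funcall draw-edge (first e) (second e))))

;; Overriding just one operation:
;; (display-graph '(thing animal plant) '((thing animal) (thing plant))
;;                :print-name #'(lambda (n) (string-upcase (symbol-name n))))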

   The ISI Grapher now spans about 200K of CommonLisp code. The 105-page
ISI Grapher manual is available; this manual describes the general ideas, the
interface, the application-builder's  back-end, the algorithms, the
implementation, and the data structures.  A shorter paper is also available,
and includes hardcopy samples of the screen during execution. The ISI Grapher
presently runs on both Symbolics (versions 6 & 7), TI Explorer workstations
(versions 2 & 3), MacIntosh II (under both Coral Allegro LISP and ExperCommon
LISP), and SUNs (under Franz and Lucid, using X); ports to other machines are
being planned.

   If you are interested in more information, the sources themselves, or just
the paper/manual, please feel free to forward your name and postal address to
"gabriel@vaxb.isi.edu" or write to "Gabriel Robins, Information Sciences
Institute, 4676 Admiralty Way, Marina Del Rey, Ca 90292-6695 U.S.A."

------------------------------

Date: Tue 16 Feb 88 19:23:45-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Problems in Prediction and Causal Reasoning (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

              PROBLEMS IN PREDICTION AND CAUSAL REASONING

                                Tom Dean
                            Brown University
                    (tld%cs.brown.edu@RELAY.CS.NET)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Friday February 19


Causal reasoning involving incomplete information constitutes a topic of
growing interest in AI.  Despite the enthusiasm and the volume of paper
devoted to the topic, there are still very few well defined problems.  In this
talk, we will consider three problems corresponding to variations on what is
commonly referred to as THE prediction problem.  In the first problem, all
events are known, but their order of occurrence is not.  The task is simply to
determine what facts persist over what intervals of time.  The general problem
is NP-hard, and, hence, the solutions we propose involve polynomial
approximations.  In the second problem, all of the events are not known.
Here, the task is to account for the possible impact of unknown events on the
persistence of facts over intervals of time.  Our solution, involving a
probabilistic theory of causation, introduces a number of problems of its own,
and, in the process of dealing with these new problems, we introduce a third
prediction problem involving unexplained but contingent events.  Our analysis
of this third problem leads us to a new view of prediction which has many
elements of what is commonly referred to as explanation.  We provide a precise
characterization of this problem and then consider the consequences of our new
view of prediction for existing formal accounts of causation and temporal
inference.

------------------------------

Date: Mon, 15 Feb 88 15:01:59 EST
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Seminar - Micro Motors (BBN)


Subject: not-yet-nano technology
Originally
From: "Anita M. Flynn" <ANITA%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject:  motor on a chip!

       Wednesday, Feb 17th, 4:00         NE-43 - 8th floor playroom

                           MICRO MOTORS

   Stephen F. Bart       Theresa A. Lober     Lee S. Tavrow

         Laboratory for Electromagnetic and Electronic Systems
                 Microsystems Technology Laboratory

Silicon microfabrication technology has recently allowed development
of new sensing technologies for interfacing with the non-electronic
world.  Many of these microsensors rely on micromechanical structures
fabricated by the selective etching of the silicon substrate or
deposited thin films.  In contrast, research on microfabricated
actuators (microactuators) has been largely neglected.  Conventional
microstructures, such as cantilever beams, bridges, and diaphragms,
are able to move only a few micrometers perpendicular to the plane of
the substrate.  This restrained travel in one degree of freedom has
restricted existing microactuators to small-motion applications.

A flexible microactuator technology requires structures that have
unrestrained motion in at least one degree of freedom.  By means of a
straightforward extension of surface micromachining, thin-film disks
or plates can be made which are free to rotate or slide over the
surface of the substrate.  The addition of some means of applying an
electromechanical force opens a multitude of possibilities for
developing "micromotors".

This talk will discuss some of the electromechanical issues that
influence the design of the motor drive (electrostatic vs.  magnetic,
for example). The work that has taken place at MIT to date will be
examined. Finally, there will be a discussion of the possible
applications for such a technology.

------------------------------

Date: Tue, 16 Feb 88 09:34:58 PST
From: seminars@csl.sri.com (contact lunt@csl.sri.com)
Subject: Seminar - Traveling Salesman in Token-Passing Networks (SRI)


            TRANSMISSION ORDER IN MOBILE TOKEN PASSING NETWORKS

                              Dr. Yaron Gold
                         Dept of Computer Science
                   Technion, Israel Inst. of Technology

                     Tuesday, February 23, 4:00 pm
               Engineering building, 3rd floor, room EJ330


In high-speed networks that use efficient token-passing access protocols
the total round-trip token passing time is a major component of the
protocol overhead. To minimize this overhead an appropriate TRANSMISSION
ORDER (token-passing path) must be selected. The optimal path constitutes a
TRAVELING SALESMAN (TS) tour in the network graph. The length of the
optimal path is, in the worst case, proportional to the square root of n
(the number of nodes) while the expected length of a random path is
proportional to n.  In a network with MOBILE NODES the transmission
sequence must be re-computed from time to time to suit the changing
node layout.  Two distributed methods for constructing an approximate TS
tour are presented.  One consists mainly of a distributed algorithm for
constructing a MINIMUM SPANNING TREE from which a TS tour is constructed
using Christofides' method.  The other reduces the cost of constructing
the tour in certain cases, by ADJUSTING AN EXISTING TOUR that has
deteriorated due to node motion, rather than computing it "from scratch".
It is shown that the WORST-CASE BOUNDS on the path lengths resulting from
both methods are the same.
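
For readers who want to play with the geometry, here is a toy, centralized
sketch of the MST-then-tour idea, in Common Lisp.  It is not the speaker's
distributed algorithm, and it stops short of Christofides' method, which also
requires a minimum-weight matching on the odd-degree tree nodes; the simple
preorder walk below is the weaker textbook approximation.

(defun dist (a b)
  (sqrt (+ (expt (- (first a) (first b)) 2)
           (expt (- (second a) (second b)) 2))))

(defun mst-edges (points)
  "Prim's algorithm; returns the spanning tree as (parent child) index pairs."
  (let* ((n (length points))
         (in-tree (make-array n :initial-element nil))
         (edges '()))
    (setf (aref in-tree 0) t)
    (loop repeat (1- n) do
      (let ((best nil) (best-d nil))
        (dotimes (i n)
          (when (aref in-tree i)
            (dotimes (j n)
              (unless (aref in-tree j)
                (let ((d (dist (nth i points) (nth j points))))
                  (when (or (null best-d) (< d best-d))
                    (setf best (list i j) best-d d)))))))
        (setf (aref in-tree (second best)) t)
        (push best edges)))
    (nreverse edges)))

(defun preorder-tour (n edges)
  "Visit the tree depth-first from node 0; the visiting order is the tour."
  (let ((children (make-array n :initial-element nil))
        (tour '()))
    (dolist (e edges)
      (push (second e) (aref children (first e))))
    (labels ((visit (i)
               (push i tour)
               (dolist (c (reverse (aref children i))) (visit c))))
      (visit 0))
    (nreverse tour)))

(defun tour-length (points order)
  "Length of the closed tour visiting POINTS in ORDER (a list of indices)."
  (loop for (i j) on (append order (list (first order)))
        while j
        sum (dist (nth i points) (nth j points))))

;; e.g., with made-up coordinates:
;; (let* ((pts '((0 0) (3 0) (3 4) (0 4) (1 2)))
;;        (tour (preorder-tour (length pts) (mst-edges pts))))
;;   (list tour (tour-length pts tour)))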


NOTE FOR VISITORS TO SRI:

Please arrive at least 10 minutes early in order to sign in and
be shown to the conference room.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors
may park in the visitors lot in front of Building E (a tall tan
building on Ravenswood Ave; the turn off Ravenswood has a sign
for Building E), or in the visitors lot in front of Building A
(red brick building at 333 Ravenswood Ave), or in the conference
parking area at the corner of Ravenswood and Middlefield.  The
seminar room is in Building E.  Visitors should sign in at the
reception desk in the Building E lobby.

Visitors from Communist Bloc countries should make the necessary
arrangements with Fran Leonard (415-859-4124) in SRI Security as
soon as possible.

------------------------------

Date: Wed 17 Feb 88 15:31:53-PDT
From: HOSEIN@Pluto.ARC.NASA.GOV
Subject: Conference - Neural Networks in SOAR (Space Operations,
         Robotics)


                Marc P. Hosein
                Intelligent Systems Technology Branch
                NASA Ames Research Center
                Mail Stop 244-4
                Moffett Field, CA.  94035
                (415) 694-6526

        TO: Neural Network and Connectionist Researchers

    I am a research scientist in the Intelligent Systems Technology
Branch of the Information Sciences division of NASA Ames Research
Center.  I am currently working on the Spaceborne VHSIC Multiprocessor
System (SVMS) project under Dr. Henry Lum.  In organizing a poster session
on Neural Networks for the 1988 SOAR conference, I am gathering
information on the current state of the field, as well as various technical
and non-technical papers for distribution at the conference.

    The SOAR (Space Operations Automation and Robotics) workshop in
automation and robotics is sponsored by NASA in conjunction with the
USAF.  The main objectives of the workshop are:

        1) To establish communications between individuals and
           organizations involved in similar research and technology

        2) To bring together project/program managers in open exchange
           through presentation of technical papers and panel discussions

        3) To document in the proceedings a snapshot of
           USAF/NASA efforts in automation and robotics

If you have papers or information to be included in a summary of the neural
networks field, please mail them to me at the above address or on the
arpanet.  Even more importantly, I am looking for papers on research done
or currently being done to incorporate as supplemental information and
distribution material at the conference and beyond.  Please feel free to
call me at (415) 694-6526 or send mail on the arpanet to
HOSEIN@AMES-PLUTO.ARPA if you have specific questions about the poster
session or the conference.


                                                Thank you,  Marc P. Hosein

------------------------------

Date: Thu, 18 Feb 88 10:23:27 cst
From: haddawy@m.cs.uiuc.edu (Peter Haddawy)
Subject: Conference - Illinois Workshop on Decision Making

                        CALL FOR PARTICIPATION

       1988 ILLINOIS  INTERDISCIPLINARY  WORKSHOP ON  DECISION  MAKING
      Representation and Use of Knowledge for Decision Making in Human,
                          Mechanized, and Ideal Agents

             Sponsored by the UIUC CogSci/AI Steering Committee

                       Champaign-Urbana, Illinois
                           June 15-17, 1988


PURPOSE
The 1988 Illinois Interdisciplinary Workshop on Decision Making is
intended to bring together researchers working on the problem of
decision making from the fields of Artificial Intelligence,
Philosophy, Psychology, Statistics, and Operations Research.  Since
each area has traditionally stressed different facets of the problem,
researchers in each of the above fields should benefit from an
understanding of the issues addressed and the advances made in
the other fields.  We hope to provide an atmosphere that is both
intensive and informal.

FORMAT
There will be talks by ten invited speakers from the above mentioned
areas.  The current list of speakers includes: P.Cheeseman, J.Cohen,
J.Fox, W.Gale, J.Payne, R.Quinlan, B.Skyrms, and C.White.  The talks
will be followed by prepared commentaries and open floor discussion.
Additionally, speakers will participate in small moderated discussion
groups focused intensively on their work.

TOPICS
- The representation, organization and dynamics of the knowledge
  used in decision making.
- Decision making strategies.
- Decisions under constraints (limited rationality).
- Combining normative and descriptive theories.
- The use of domain knowledge to initialize beliefs and preferences.

PARTICIPATION
This workshop will consist of a limited number of active participants,
commentators, and invited speakers.  To be considered for
participation, send a one page summary of your research interests and
publications no later than March 15.  Indicate also if you would like
to deliver either an inter- or intra-disciplinary commentary.
Commentators will receive copies of their assigned papers three weeks
prior to the workshop.  Acceptances will be mailed by April 4.

REGISTRATION
The registration fee is $50 general and $30 for students.  A copy of
the proceedings is included in the registration fee and will be
distributed at the workshop.  A few grants are available to cover most
or all travel, accommodation, and registration expenses.  In order to
be considered for a grant, include a request with your application.

Mail all correspondence to:  L. Rendell, Dept. of Computer
Science, University of Illinois, 1304 W. Springfield Ave., Urbana, IL
61801.

ORGANIZING COMMITTEE
U.Bockenholt, O.Coskunoglu, P.Haddawy, P.Maher, L.Rendell, E.Weber

------------------------------

Date: 19 Feb 88 05:02:00 GMT
From: ULKYVX.BITNET!ABCANO01@ucbvax.Berkeley.EDU
Subject: Conference - 7th Nat. Conf. on Artificial Intelligence
         (AAAI-88)

Subj:   Seventh National Conference on Artificial Intelligence (AAAI-88)
From: Julia Driver <arcsun!julia@calgary.uucp>

Hope this helps:

from the Net, Dec 21/87                            Aug 21 - 28     USA
                AAAI-88 Workshops:
               Request for Proposals

The AAAI-88 Program Committee invites proposals for the Workshop Program of
the Seventh National Conference on Artificial Intelligence (AAAI-88), to be
held in Saint Paul, Minn., from August 21, 1988 to August 26, 1988.  Gathering
in an informal setting, workshop participants will have the opportunity to
meet and discuss issues with a selected focus---providing for active exchange
among researchers and practioners on topics of mutual interest.  Members from
all segments of the AI community are encouraged to submit workshop proposals
for review.

To encourage interaction and a broad exchange of ideas, the workshops will be
kept small---preferably under 35 participants.  Attendance should be limited
to active participants only.  The format of workshop presentations will be
determined by the organizers of the workshop, but ample time must be allotted
for general discussion.  Workshops can range in length from two hours to two
days, but most workshops will last a half day or a full day.

Proposals for workshops should be between 1 and 2 pages in length, and
should contain:
1/ a brief description of the workshop identifying the specific issues that
   will be focused on,
2/ a discussion of why the workshop would be of interest at this time,
3/ the names and addresses of the organizing committee, preferably 3 or 4
   people not all at the same site,
4/ a list of several potential participants, and
5/ a proposed schedule.

Workshop proposals should be submitted as soon as possible, but no later
than 1 February 1988.  Proposals will be reviewed as they are received and
resources allocated as workshops are approved. Organizers will be notified
of the committee's decision no later than 15 February 1988.

Workshop organizers will be responsible for:
1/ producing a Call for Participation in the workshop, which will be mailed
   to AAAI members by AAAI,
2/ reviewing requests to participate in the workshop, and determining the
   workshop participants,
3/ scheduling the activities of the workshop, and
4/ preparing a review of the workshop, which will be printed in the AI
   Magazine.

AAAI will provide logistical support, will provide a meeting place for
the workshop, and, in conjunction with the organizers, will determine the
date and time of the workshop.

Please submit your workshop proposals, and enquiries concerning workshops, to:

       Joseph Katz MITRE Corporation MS L203
       Burlington Road, Bedford, MA 01730 (617) 271 5200
Katz@Mitre-Bedford.ARPA
------------------------------------------------------------------------------

Julia Driver
Alberta Research Council
Calgary Alberta Canada

------------------------------

End of AIList Digest
********************

∂21-Feb-88  0345	LAWS@KL.SRI.COM 	AIList Digest   V6 #37 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 21 Feb 88  03:45:21 PST
Date: Sat 20 Feb 1988 23:51-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V6 #37
To: AIList@SRI.COM


AIList Digest            Sunday, 21 Feb 1988       Volume 6 : Issue 37

Today's Topics:
  Queries - MACSYMA Under Golden Common Lisp & AI and OOPS &
    Frame/Logic/TMS/Rule Code & AI in Communications and Signal Processing &
    Economic Prediction Lecture & Analogy Conference Proceedings,
  AI Tools - Fuzzy Logic vs. Probability Theory

----------------------------------------------------------------------

Date: Thu, 18 Feb 88 19:01:58 WUT
From: LENNEIS%AWIWUW11.BITNET@CUNYVM.CUNY.EDU
Subject: MACSYMA under Golden-Common-Lisp


I am sending this out for a friend. Please send replies to the address
below. People interested in a summary should contact me directly.

*********************
request: MACSYMA under Golden-Common-Lisp

Does anybody know if there is a version of MACSYMA available
for the Golden Common Lisp Developer? If it exists, please give
me the company's name and address.

regards, Wolfram Rinke
*********************

Joerg Lenneis
University of Economics and Business Administration Vienna
Department of Statistics and Applied Data Processing
Vienna, Austria
Email: LENNEIS@AWIWUW11 (BITNET)

------------------------------

Date: Thu, 18 Feb 88 20:36:15 GMT
From: Frank_Calliss <COK8@MTS.DURHAM.AC.UK>
Subject: Re: AI and OOPS

Someone mentioned in this group that a journal on OOPS would soon
be issued.  Could we have a name and address for information on
subscription?

Thanks

Frank

Return addresses

E-Mail
------

JANET : Frank_Calliss@UK.AC.DUR.MTS
BITNET: Frank_Calliss%DUR.MTS@AC.UK
BITNET: Frank_Calliss%DUR.MTS@UKACRL
ARPA  : Frank_Calliss%MTS.DUR.AC.UK@CUNYVM.CUNY.EDU
UUCP  : ukc!uk.ac.dur.easby!fwc

Snail Mail
----------

Mr. F.W. Calliss
University of Durham
School of Engineering and Applied Science(Computer Science)
Science Laboratories
South Road
Durham
England
DH1 3LE

------------------------------

Date: 19 Feb 88 04:29:53 GMT
From: pitt!cisunx!jasst3@cadre.dsl.pittsburgh.edu  (Jeffrey A.
      Sullivan)
Subject: Frame/Logic/TMS/Rule Code request

If anyone has pointers to code (pref Common Lisp) for any of the following,
please let me know:

-- Frame systems
-- Rule(Production) systems
-- Logic (theorem provers or deductive retrievers, pref both)
-- Truth Maintenance Systems

Thanks!


--
..........................................................................
Jeff Sullivan                           University of Pittsburgh
jas@dsl.cadre.pittsburgh.edu            Intelligent Systems Studies Program
jasper@PittVMS (BITNET)                 Graduate Student

------------------------------

Date: Fri, 19 Feb 88 18:44:05 est
From: drb@cscfac.ncsu.edu (Dennis R. Bahler)
Subject: AI in Communications and Signal Processing

I am looking for descriptions or pointers to AI applications in
advanced communications systems and adaptive signal processing.  Right now
I am looking more for traditional symbolic AI than connectionist/neural
approaches although the latter are of interest also in the longer term.

As an example, intelligent systems/models/approaches for:
   network modeling and analysis
   tools for advanced communication systems design
   network management, control, diagnosis, and repair
   vision, image, and signal processing
   multidimensional signal processing
   complex optimization

I know these are pretty broad areas, and some are more familiar to me
than others; what I need are suggestions of where to dig.

Dennis Bahler
Dept. of Computer Science          INTERNET - drb@cscadm.ncsu.edu
North Carolina State University    CSNET    - drb%cscadm.ncsu.edu@relay.cs.net
Raleigh, NC   27695-8206           UUCP     - ...!decvax!mcnc!ncsu!cscadm!drb

------------------------------

Date: 20 Feb 88 22:37:23 GMT
From: beta!unm-la!claborn@hc.dspo.gov  (Joe Claborn)
Subject: Re: Economic Prediction Lecture (2/23/88)

Will someone who is able to attend this lecture please summarize
to the net ?

Thanks.

------------------------------

Date: Thu, 18 Feb 88 12:55:05 MST
From: teskridg%nmsu.csnet@RELAY.CS.NET
Subject: analogy conference proceedings


Can anyone tell me where I can get the Proceedings of the Allerton
Workshop on Analogy & Similarity.  The workshop was held at the
University of Illinois Urbana-Champaign in June of 1986.
Thanks.

Tom Eskridge
Computing Research Laboratory
New Mexico State University
Las Cruces, NM 88003
teskridg%nmsu

------------------------------

Date: 17 Feb 88 23:47:25 GMT
From: texsun!skb%usl@Sun.COM (Sanjiv K. Bhatia)
Reply-to: texsun!usl.usl.edu!skb@Sun.COM (Sanjiv K. Bhatia)
Subject: Fuzzy logic


Is anybody on the network interested in fuzzy logic?  Is there already a group
to discuss this area?  If not, how about starting one?  I do not know the
procedures to start a new group, so will somebody out there get in touch with
me.

Sanjiv

------------------------------

Date: 19 Feb 88 04:24:08 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Re: Fuzzy Logic

In article <400@usl> skb@usl.usl.edu.UUCP (Sanjiv K. Bhatia) writes:
>Is anybody on the network interested in fuzzy logic?  Is there already a group
>to discuss this area?  If not, how about starting one?  I do not know the
>procedures to start a new group, so will somebody out there get in touch with
>me.
>
>Sanjiv

I'm researching the application of fuzzy theory to expert systems, and
would be very interested in participating in such a group.  If there is
one, I'm ignorant of it (someone please inform).  I recently posted a
reply to someone on this subject.  Is there a more general interest?

O---------------------------------------------------------------------->
| Cliff Joslyn, Mad Cybernetician
| Systems Science Department, SUNY Binghamton, Binghamton, NY
| vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 19 Feb 88 15:56:57 GMT
From: Eric Neufeld <emneufeld%watdragon.waterloo.edu@RELAY.CS.NET>
Reply-to: Eric Neufeld <emneufeld%watdragon.waterloo.edu@RELAY.CS.NET>
Subject: Re: FUZZY LOGIC VS. PROBABILITY THEORY


In article <8802180658.AA11175@ucbvax.Berkeley.EDU> golden@FRODO.STANFORD.EDU
(Richard Golden) writes:
>I am not an expert in Fuzzy Logic or Probability Theory but I have examined
>the literature regarding the foundations of Probability Theory and the
>derivation of these foundations from basic principles of deductive logic.
>[...]
>The reference from which these arguments are based is given
>by Cox (1946). Probability Frequency and reasonable expectation.
>American Journal of Statistical Physics, 14, 1-13.
>[...]
>To my knowledge, the axioms of Fuzzy Logic can not be derived from
>consistency conditions generated from the deductive logic so I conclude
>that Fuzzy Logic is not appropriate for inferencing.  Any comments?!!!


The interest in reasoning with and about uncertainty in AI has sparked a
re-investigation into foundations of Prob. Theory.  Cox's theorem has become
an important result for those interested in prob. theory as a measure of
belief.  You may be interested in the following references:

Proceedings of the AAAI Workshop on Uncertainty and AI: 1985, 1986, 1987.
A number of articles investigate the relationship between formal prob. theory
and the various alternate formalisms: Fuzzy, Certainty Factors,
Dempster-Shafer.  Note articles by Cheeseman, Grosof, Heckerman and Horvitz.

Kyburg, Henry E., Bayesian and Non-Bayesian Evidential Updating, AI Journal,
Vol 31, 1987.  Investigates probabilistic assumptions underlying
Dempster-Shafer Theory.

Heckerman et al: AAAI-86: "A Framework for comparing alternate formalisms..."
Has been described as a presentation of Cox's result to the AI community.

Computational Intelligence: Upcoming issue (delayed in printing) Contains a
polemic article by Peter Cheeseman on probability theory (versus everything
in the world) and responses by various researchers, etc.

Aleliunas, Romas: "Mathematical Models of Reasoning".  Contains a
generalization of Cox's result to topologies other than real-valued
continuous [0,1] probability.  University of Waterloo Tech Report.

Eric Neufeld
Dept. Computer Science
University of Waterloo
Waterloo Canada

------------------------------

Date: Fri, 19 Feb 88 10:15:47 PST
From: golden@frodo.STANFORD.EDU (Richard Golden)
Subject: I'm still not convinced ... Fuzzy Logic and Probability
         Theory

I'm sorry but I do not find Bruce D'Ambrosio's arguments convincing
(although I would be happy to be convinced!!!)

As I noted before, the AXIOMS of probability theory can be justified
from constraints upon rational decision making (i.e., standard deductive logic).
I have not seen (I would like to see) similar arguments constructed for
the AXIOMS of fuzzy set theory.

In response to point 1 that fuzzy logic is appropriate in cases where we
do not have exact probabilities I would argue that probability theory is
still applicable since we can do conditioning.  That is, suppose we
know that the probability of event A, p(A), lies in the interval [0.3,0.4].
We can model our uncertainty associated with p(A) by rewriting p(A) as
p(A|t) where t is a dummy random variable uniformly distributed over the
interval [0.3,0.4].
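
(A toy numerical rendering of that, where the interval [0.3,0.4] is just the
made-up one above and p(A|t)=t is the simplest reading of the dummy-variable
construction: average p(A|t) over t drawn uniformly from the interval.)

(defun p-of-a (&optional (samples 100000))
  "Monte Carlo estimate of p(A) as the average of p(A|t), t ~ U[0.3, 0.4]."
  (/ (loop repeat samples sum (+ 0.3 (random 0.1)))
     samples))

;; (p-of-a)        ; => roughly 0.35
;; (- 1 (p-of-a))  ; => roughly 0.65, so belief in A and in not-A still sums to 1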

In response to point 2, that Zadeh's intuition was set theoretic, and not
frequentist or subjective, I would like to emphasize that all
of my arguments break down if one takes the
frequentist view of probability theory --- it is absolutely essential
that the subjectivist view of probability theory is taken.  The subjectivist
view simply says that some number is associated with a particular event in
the environment and this number reflects one's belief that the event occurs.
Thus, there is no reason why we can't interpret this number as an
indicator of approximate correctness.

The final comment that there can be "no real conflict between fuzzy set
theory and probability theory" is valid as long as you are not concerned with
making inferences which are always consistent with the symbolic logic
(i.e., Boolean Algebra).  If you are concerned with making inferences
consistent with symbolic logic...then you are right...there is no conflict...
probability theory wins (unless you can justify the axioms of fuzzy set theory
with respect to rational decision making for me).

A final caveat upon the limitations of probability theory AND fuzzy logic.
Both of these approaches assume one can represent belief as a single
real-valued function -- this critical assumption should not be ignored.
But if one does accept this assumption and one wants to make logically
consistent inferences, probability theory is the way to go.



------------------------------

Date: Sat, 20 Feb 88 12:27:36 EST
From: ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU
Subject: Re:Fuzzy Logic vs. Probability Theory

Here is my two bits about fuzzy logic: Richard Golden writes:

  >...Rational  selection in this case meaning consistency with the classic
  >deductive/symbolic logic - boolean algebra...

But here's the rub! In Boolean Algebra you have clear-cut True or False
truth values. In Crisp Set Theory (to a subclass of which Boolean Algebra
is isomorphic) you have clear-cut "x belongs to A" relationships. Write this
as B(x,A)=1, where x is an element, A is a set, and B(.,A) takes the value 1
(for "belongs") and 0 (for "not belongs"). In other words, B(.,A) is the
"belongingness" function of A. In Fuzzy Set Theory, on the other hand, B(.,A)
can take any value between 0 and 1. There are no clear-cut answers to
questions such as: "does a person 5 ft. 10 in. tall belong to the set of
tall persons?"

Crisp Set Theory is what we traditionally call Set Theory. It is a special case
of Fuzzy Set Theory, in that in CST B can take only the extreme values 0 and 1.
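
A minimal sketch of the two belongingness functions (in Python, for
illustration only; the 6 ft. crisp cutoff and the 5 ft. 8 in. to
6 ft. 2 in. fuzzy ramp are invented numbers, not anything from the
posting):

def crisp_tall(height_in):
    """Crisp Set Theory: B(x, TALL) is either 0 or 1."""
    return 1 if height_in >= 72 else 0       # arbitrary cutoff at 6 ft.

def fuzzy_tall(height_in):
    """Fuzzy Set Theory: B(x, TALL) can be anywhere in [0, 1].
    Piecewise linear ramp from 68 in. (5 ft. 8 in.) to 74 in. (6 ft. 2 in.)."""
    if height_in <= 68:
        return 0.0
    if height_in >= 74:
        return 1.0
    return (height_in - 68) / (74 - 68)

for h in (66, 70, 72, 75):                   # heights in inches
    print(h, "in.:  crisp =", crisp_tall(h), "  fuzzy =", round(fuzzy_tall(h), 2))

The person 5 ft. 10 in. tall (70 in.) gets a crisp 0 but a fuzzy 0.33,
which is exactly the "no clear-cut answer" situation described above.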

Of course in all of FST we use the reasoning, syllogisms, etc. of Boolean
Algebra. I don't know if anyone does Fuzzy Mathematics. In FM, things such
as reasoning by Reductio ad Absurdum would not be valid. Why? Well, in
RaA we usually want to prove something for x, call it P(x), and we begin by
saying "assume NOT P(x)..."  But in FST, P(x) and NOT P(x) are just two of
uncountably many possibilities. But maybe there is a Fuzzy generalization
of RaA. Does anyone know?

Anyway, I think FST is still an alternative way of reasoning about
uncertain events, different from Probability Theory. In fact there has been
work done on Fuzzy Probability, postulating fuzzy probability measures. I did
not have time to check before sending this, but I suppose they would drop
things like the countable additivity hypothesis. And classical PT is by
no means the only way to reason about uncertain events -- see von Mises and
de Finetti, among others.

One last observation. I do not want to make much of it, but is it not
remarkable that FST, an alternative to the Aristotelian logic, was invented
by Zadeh, an Indian?




                  Thanasis Kehagias

------------------------------

End of AIList Digest
********************

∂21-Feb-88  0520	LAWS@KL.SRI.COM 	AIList V6 #38 - Applications, Neuromorphic Tools, Nanotechnology    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 21 Feb 88  05:20:45 PST
Date: Sun 21 Feb 1988 00:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #38 - Applications, Neuromorphic Tools, Nanotechnology
To: AIList@SRI.COM


AIList Digest            Sunday, 21 Feb 1988       Volume 6 : Issue 38

Today's Topics:
  Application - Poetry Analysis & Cryptology and Neural Networks &
    Data Smoothing for Character Recognition,
  Videoconference - Practical Applications of AI,
  Software Engineering - Reference,
  Neural Networks - Tools,
  Nanotechnology - Semantic Quibbles

----------------------------------------------------------------------

Date: 18 Feb 88 18:10:04 GMT
From: uvaarpa!virginia!boole!cap4r@umd5.umd.edu  (Chris Pohlig)
Subject: poetry analysis, pattern recogn.


I have a project that involves determining variations between different
versions of a very long poem.  Unfortunately, simple file comparison
programs are inappropriate since not all differences between the versions
are important.  For example, many (but not all) spelling variations are
insignificant.  Some versions of the poem have extra, or missing lines.
Some corresponding lines (between different versions) are of unequal
length as well.  The real need (I think) is to be able to specify (in a
separate "rule" file) a list identifying significant difference rules.

Are there any relevant software products?  Are there any relevant
journals? Does anyone have any suggestions?

Please reply to:  cap4r@virginia.edu  (internet)
             or:  cap4r@virginia      (bitnet)

Many thanks.

------------------------------

Date: 20 Feb 88 00:21:41 GMT
From: farris@marlin.nosc.mil  (Russell H. Farris)
Subject: Re: poetry analysis, pattern recogn.

In article <419@boole.acc.virginia.edu>cap4r@boole.acc.virginia.edu
(Chris Pohlig) writes:
>I have a project that involves determining variations between different
>versions of a very long poem.  Unfortunately, simple file comparison
>programs are inappropriate since not all differences between the versions
>are important. . . .

Look into using SNOBOL (on a PC) or SPITBOL.  A PD version called
Vanilla SNOBOL4 is available--with many sample programs--from
Simtel20.  The full-blown version, called SNOBOL4+, is available
for $95 from Catspaw, Inc., P.O.  Box 1123, Salida, Colorado
81201, (305) 539-3884.
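
For the rule-file idea in the original query, here is a minimal sketch of
the approach (written in Python purely for illustration, not in SNOBOL;
the normalization rules shown are invented examples): normalize each line
according to the rules, and only then diff the versions.

import difflib
import re

# Hypothetical "rule file": substitutions that erase differences the
# editor has declared insignificant (spelling variants, punctuation, ...).
RULES = [
    (re.compile(r"[^a-z ]"), ""),        # drop punctuation
    (re.compile(r"\bvpon\b"), "upon"),   # an invented spelling-variant rule
    (re.compile(r"\s+"), " "),           # collapse whitespace
]

def normalize(line):
    line = line.lower()
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line.strip()

def significant_differences(version_a, version_b):
    """Diff two lists of lines after rule-based normalization, so that
    only the 'significant' variations are reported."""
    a = [normalize(l) for l in version_a]
    b = [normalize(l) for l in version_b]
    return [d for d in difflib.unified_diff(a, b, lineterm="")
            if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))]

Extra or missing lines and lines of unequal length fall out of the diff
step; what counts as "insignificant" lives entirely in the rule table.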

                   Russ (just a happy customer) Farris

------------------------------

Date: 18 Feb 88 23:38:24 GMT
From: sdcc6!sdcc13!ln63wgq@sdcsvax.ucsd.edu  (Keith Messer)
Subject: Re: Cryptology and Neural Networks?


Yeah, Jeff Elman here at UCSD tried to apply neural nets to both the Enigma
and DES, and I believe the problem was that the encryption algorithms are
simply too complex.  The neural net ends up memorizing the key-cyphertext
pairs you give it, but fails to come up with a good rule for generalizing to
new pairs.  I wouldn't discount neural nets for cryptology, but they haven't
been useful for straight known-plaintext decryption.

                                                Keith

------------------------------

Date: Thu 18 Feb 88 09:03:50-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Re: Help Needed With Data Smoothing, Character Recognition

Much of the MIT vision literature deals with data smoothing and
interpolation by fitting mathematical "thin plates" through the
image data.  The data I get is usually too smooth already, which
may be why the human vision system introduces the Mach effect.
The question is, once you have smooth data (e.g., if it were given
to you initially) what are you going to do with it?  Threshold it?
Detect edges? Segment it?  Match it to templates?  To generic models?
Take Fourier transforms?  Moment invariants?  Count concavities
relative to the convex hull?

The vision literature in graphics tends to consider only binary
data, ignoring the gray levels that high-quality scanners pick up.
There are shrink/expand techniques for smoothing and many papers
on how to characterize approximations to straight lines and arcs
on a digital grid.
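
A minimal sketch of the shrink/expand idea on a binary grid (plain Python,
no particular library; the 3x3 neighborhood is an assumption): shrinking
and then re-expanding knocks out specks thinner than the window.

def _neighborhood(img, r, c):
    """All pixels in the 3x3 window around (r, c), clipped at the border."""
    rows, cols = len(img), len(img[0])
    return [img[i][j]
            for i in range(max(0, r - 1), min(rows, r + 2))
            for j in range(max(0, c - 1), min(cols, c + 2))]

def shrink(img):
    """Erosion: a pixel survives only if its whole neighborhood is set."""
    return [[1 if all(_neighborhood(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def expand(img):
    """Dilation: a pixel is set if anything in its neighborhood is set."""
    return [[1 if any(_neighborhood(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def smooth(img):
    """Shrink-then-expand smoothing of a binary image (list of 0/1 rows)."""
    return expand(shrink(img))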

You should check out the IEEE book list, particularly the pattern
recognition conferences and related books such as "Machine Recognition
of Patterns" and "Computer Text Recognition and Error Correction".
There is a very old book called "Optical Character Recognition" that
still has some good info on recognition by moments and some examples
of just how bad scanned characters can be.

                                        -- Ken

------------------------------

Date: 18 Feb 88 18:53:18 GMT
From: caip.rutgers.edu!pallab@rutgers.edu  (Pallab Dutta-Choudhury)
Subject: Re: Help Needed With Data Smoothing, Character Recognition

In article <18128@topaz.rutgers.edu>, clong@topaz.rutgers.edu (Chris Long)
writes:
> I am currently working on a vision project; solving the font-free
> character recognition problem to be exact.  I am looking for any
> and all reference sources dealing with data smoothing and
> noise reduction, especially as applied to graphics.  [...]


I am working on a project that has used smoothing
as a preprocessing operation for image segmentation.  As a
result I can help you find some relevant articles, and provide some
consultation if you desire.  The subject has been explored quite
extensively in the literature; for most applications it's just a
question of finding the correct balance between performance and
computational overhead.

Paul Dutta-Choudhury
Rutgers Univ.
e-mail:  pallab@caip

------------------------------

Date: Fri 19 Feb 88 20:18:08-PST
From: ELIOT@ECLA.USC.EDU
Subject: Answer to a Question

Ken:

Someone posted a message on AILIST asking about the IEEE
Videoconference, and you appeared to need some
assistance in answering the question.  The Videoconference
was not just in NY (as implied by the questioner) but was
broadcast via satellite "around the globe" (so to speak).

The title of the Videoconference was "Practical Applications
of Artificial Intelligence" and ran for three hours.  Elaine
Rich provided the program intro and served as technical
consultant.  Case studies were given by Bill Blake (DEC),
Robin Steel (Boeing), and Sandy Marcus (NCR).  A
specific focus was placed on programs that serve as engineering
design aids.

Future topics of the Videoconferences include supercomputers,
microprocessors, lasers, superconductive materials, etc.
More info can be obtained from the IEEE offices, and
tapes of previous presentations can be purchased as well
(the Feb. 18 show will be ready in about a month).

Hope this helps (but don't sign me as Dear Abby).

Lance Eliot
IEEE Expert, Associate Editor
and
University of Southern California, Faculty

------------------------------

Date: Thu, 18 Feb 88 11:02:28 EST
From: hendler@dormouse.cs.umd.edu (Jim Hendler)
Subject: Re:  AIList V6 #34 - AI in Management, Software Engineering,
         Interviewing

In re: Software Engineering and AI

 Someone was looking for a more recent Jacob/Froscher paper.  One such, entitled
`Facilitating Change in Expert Systems' appears in the recent book
 ``Expert Systems: the User Interface'' which I edited for Ablex Publishing.
  Jim Hendler
  umcp
  Hendler@dormouse.cs.umd.edu

------------------------------

Date: Sat, 20 Feb 88 13:07:06 EST
From: ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU
Subject: Neural Network Tools

Here are the responses I got to my inquiry about Neural Network computational
tools. Not many answers, but thanks to those who did answer. I deleted the
parts of the messages that included no NNT information. Conclusion: right
now NN software costs BIG BUCKS. I think I will wait for a while.


                               Thanasis Kehagias


(Follow messages)

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

You can contact Neuralware Inc. at (412) 741-7699 for a commercial version. I
do not know their current University pricing. An excellent bibliography is
also available. -- Sandeep
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

We (at Beckman Instruments) have purchased the NEURALWORKS PROFESSIONAL
software package from Neural Ware Inc.  It runs on the IBM PC & clones.
It's a pretty good package, but the user interface is a bit rough and the
documentation is just a little weak.  It costs $495 (last I heard), and comes
with a good overview of Neural Computing and several Neural computing
techniques are supported. Their address/phone is:
        Neural Ware Inc.
        103 Buckskin Court
        Sewickley, PA 15143
        (412) 741-5959
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I've seen something called Mac Brain, but I don't know who makes it. I
venture to guess it's about 200-500 dollars. I know it works on the
Mac-II in colour and probably works on lower-end Macs too.

------------------------------

Date: Thu, 18 Feb 88 12:23:55 PST
From: uazchem!dolata@arizona.edu (Dolata)
Subject: Nanotechnology - close,  but no cigar (yet).


Minsky takes me to task in his last message;

m> Dolata's reamrks about nanoscopic chemistry missed the point, so far
m> as I can see, in arguing that because it is a scanning microscope it
m> is not involved with individual molecules but is more like regular
m> volume chemistry.

Well, yes I did say that;

d>        Note that the process involves SCANNING of whole areas,  and not
d> individual pinning. This is nothing new,  it can be done by standard electro
d> chemical techniques.

That statement is strictly true. Minsky now proceeds to disagree with me;

m>                       However, the molecular rearrangement was not
m> accomplished by a conventional bulk effect.  Instead, it was
m> accomplished by a sub-microsecond pulse applied during the scan so
m> that it occurred while the needle was over a particular molecule.

Careful reading of the above statement shows that I didn't claim that the
modification was done by scanning, only the PINNING.  If you read my next
sentence, you see that I implicitly allow their claim to altering
individual molecules by pulsing during the scan (the key word is "then");

d>                        The ability to then alter individual molecules is
d> not very exciting either,   people have been doing that chemically with
d> polymer bound systems for a long time.

Minsky takes exception to my whole message;

m>                                 But I see no reason to denigrate
m> the technique because it uses scanning.  Simply think of scanning as
m> examining, and possibly modifying, large numbers of points in
m> sequence.  What could be better?

I now take exception to his taking exception,  because I already stated
the same thing!

d>                                    The exciting possibility was not
d> strictly addressed;  the ability to selectively alter molecules in a
d> spatially regular fashion.  I.e.,  convert one to state 1, convert the
d> next to state 2,  etc...

I assume my words "spatially regular fashion" to be the same as Minsky's
"in sequence".  We both use the words "possibility" and "possibly"
to describe the task, and so we agree it hasn't been done yet.
And this possibility was not demonstrated, or even addressed, in FF&A's
letter to Nature.  Back to Minsky's message;

m>                                  But I see no reason to denigrate
m> the technique because it uses scanning.

I didn't denigrate the work.  My final paragraph had the words;

d>        I don't mean to completely pooh-pooh their work.  It does indicate
d> an exciting new direction.

If I am denigrating something,  I don't use the word "exciting".   I did say;

d>                             However,  I caution people from either claiming
d> that they did something that they didn't, or from being swept up in over
d> strong claims.

Foster, Frommer and Arnett themselves make the same caution;

ffa> Our interpretation of this process as the pinning, removal and cleaving
ffa> of the phthalate molecules is still open to question...

I respect FF&A's self-criticism and scientific caution.  AI sometimes seems
filled with people making overly grandiose claims, using fuzzy terms to
disguise that what they are doing is reinventing X's wheel of '65, etc.
I applaud clear thinkers who critically evaluate technology rather than
trying to get into newspapers or get tenure on false claims.  (For example,
Minsky's (I assume it is the same one?) work on Perceptrons is a good
example of critical thinking.)

I don't claim that FF&A made overly strong claims.  I do caution people on
AI-LIST, and in other forums, against being less cautious than the authors
(who should know best).

In summary,  I still stand by my claim;

d>                                                                  What
d> they haven't had much luck doing was using mechanical means to create
d> spatially interesting patterns of altered molecules.  What FF&A has done is
d> to point the way, but they missed the biggy.

------------------------------

End of AIList Digest
********************

∂27-Feb-88  0014	LAWS@KL.SRI.COM 	AIList V6 #39 - Queries, BBS Abstracts
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 27 Feb 88  00:14:29 PST
Date: Fri 26 Feb 1988 21:52-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #39 - Queries, BBS Abstracts
To: AIList@SRI.COM


AIList Digest           Saturday, 27 Feb 1988      Volume 6 : Issue 39

Today's Topics:
  Queries - Marilyn Dee & Pustejovsky & Softsmart & Phase Linear Inc. &
    Narrowing & Driving a Mac Serial Interface with Xlisp, Experlisp &
    Legal Reasoning in AI & Genetic Algorithms &
    Workshop on Knowledge Compilation & File Formats for Image Data &
    BBS Call for Commentators

----------------------------------------------------------------------

Date: 21 Feb 88 23:49:00 GMT
From: goldfain@osiris.cso.uiuc.edu
Subject: Address Requested


Can anyone electronically mail (email) me the "physical" mail (pmail) address
of Marilyn Dee and Associates (an AI head-hunter service) ?
                                    Thanks much,  - Mark Goldfain

------------------------------

Date: Wed 24 Feb 88 10:39:28-EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Pustejovsky


Will the person who requested an email address, etc.,
for James Pustejovsky please send me your
name, etc. again?  Your previous message was deleted.
Thanks.

------------------------------

Date: Fri 26 Feb 88 13:29:37-PST
From: Wei-Han Chu <CHU@KL.SRI.COM>
Subject: where about of Softsmart

Does anybody know the whereabouts of a company named Softsmart?  As of
a year ago they were selling a version of Smalltalk-80 that runs on
an IBM PC-AT, which they OEM from Xerox PARC.  The last time I called
them for support, their phone had been disconnected with no forwarding
number.

------------------------------

Date: 24 Feb 88 16:20:44 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (rolandi)
Subject: Phase Linear Inc.


Can anyone provide information on Phase Linear Inc.?
Or on a product they are reported to have, called KAM (Knowledge
Acquisition Module)?



walter rolandi
rolandi@gollum.UUCP ()
NCR Advanced Systems, Columbia, SC
u.s.carolina dept. of psychology and linguistics

------------------------------

Date: Wed, 24 Feb 88 09:10:45 EST
From: meadows%nrl-css.arpa@nrl-css.arpa (Catherine A. Meadows)
Subject: request for info on narrowing

Can anybody out there tell me of any good introductory references on
narrowing?

                Thanks in advance,

                Catherine Meadows
                Code 5593
                Naval Research Laboratory
                Washington, DC 20375
                meadows@nrl-css.arpa

------------------------------

Date: Mon, 22 Feb 88 06:16 PST
From: nesliwa%nasamail@ames.arc.nasa.gov (NANCY E. SLIWA)
Subject: Driving a MAC serial interface with Xlisp, experlisp


I have been attempting to help a group of high school students here in
the NORSTAR robotics team interface their mini-mover robot arms to
a Macintosh serial port using Lisp. They currently have an interface in
Turbo Pascal, but would like to use Lisp for some collision-avoidance
software they're writing. They have Xlisp and Experlisp, but not
full documentation for either. Any Mac lisp experts able to offer
some advice?  Thanks.

Nancy Sliwa
nesliwa%nasamail@ames.arpa
804/865-3871

------------------------------

Date: 22 Feb 88 19:53:39 GMT
From: rutgers!ucbvax.berkeley.edu!ames!esl!trwspp!spp3!gpearson%sdcsva
      x@ucsd.edu (Glen Pearson)
Reply-to: gpearson@spp3.UUCP (Glen Pearson)
Subject: Query: Legal Reasoning in AI

I heard of a conference on legal reasoning using AI techniques, but
I don't remember the time or place.  Can anyone out there give me
details?

Thanks much,

Glen
trwrb!spp!spp3!gpearson@ucbvax.berkeley.edu
1180 Kern Ave.
Sunnyvale, CA 94086
(408) 773-5021

------------------------------

Date: Sun, 21 Feb 1988 17:03:30 LCL
From: Kislaya Prasad <PRASAD@SUVM.ACS.SYR.EDU>
Subject: AIList V6 (Genetic Algorithms)

While only superficially familiar with the current Genetic Algorithms
literature, I remember reading a paper by Holland a few years back which
I found very interesting:

Holland, J. (1970) "Logical Theory of Adaptive Systems", in A.W. Burks (Ed.)
Essays in Cellular Automata, University of Illinois Press.

(Remember Cellular Automata?)
My question to those familiar with the literature is:
How does this work of Holland relate to the recent literature? I don't see it
referred to at all. Is there a strong reason for this, or just that it is
now old stuff? It seems to me that this paper had many of the fundamental
ideas of today (especially with respect to parallel implementations).

------------------------------

Date: Tue, 23 Feb 88 05:24:31 +0100
From: mcvax!lasso!ralph@uunet.UU.NET (Ralph P. Sobek)
Subject: Workshop on Knowledge Compilation, 1986

Does anybody have a reference for the following:

        Proceedings Workshop on Knowledge Compilation
        Otter Crest, OR
        1986

And what has happened since?

Thanks in advance,

Ralph P. Sobek                 | UUCP:  uunet!mcvax!inria!lasso!ralph,    or
                               |        ralph@lasso.uucp
LAAS-CNRS                      | Internet:  ralph@lasso.laas.fr,    or
7, avenue du Colonel-Roche     |            ralph%lasso.laas.fr@uunet.UU.NET
F-31077 Toulouse Cedex, FRANCE | ARPA:   sobek@shadow.Berkeley.EDU (forwarded\
+(33) 61-33-62-66              | BITNET/EARN:  SOBEK@FRMOP11        \ to UUCP )
=  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =  =

------------------------------

Date: 24 Feb 88 20:08:20 GMT
From: spar!hunt@decwrl.dec.com  (Neil Hunt)
Subject: File formats for image data ?

A question for you vision researchers and graphics wizards: what
file formats are you using for transferring and storing your image data ?

We are using a format called `picpac' which I believe originated from
CalTech; we are looking for something more comprehensive to use.
Some of the features to be considered are:

        Storage of image data in 1D (histograms etc), 2D (simple images),
          3D (image sequences, sets of images, etc) and higher dimensionalities
          (sequences of sets of multispectral images...).

        Typed pixels: integer and floating point; unsigned and signed,
          packed and unpacked, RGB, CMY, CMYB, IR-vis-UV, 1, 8, 16 and 32 bits,
          etc. etc.

        Storage of image data in non array format (list of points of
          interest in sparse images, compressed formats, etc.)

        Recording of arbitrary additional data: colour maps (both predefined
          and locally specified), camera pixel size, aspect ratio, date,
          time, condition, title, subject, pre- and post-processing, etc.

        Efficient storage and transfer (ie: the PostScript format is not
          ideal, with ASCII pixel values).

        Indirection: the ability to make a new image being some subset
          of another image, without having to copy the actual data.

Please let me know what format you are using at the moment, what you
are doing with it, and what other features we should consider if we decide
to invent our own format ?
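
One way to picture such a container (a purely hypothetical sketch in
Python; the field names are invented and do not correspond to picpac or
to any existing format) is a self-describing header plus either in-line
pixel data or a reference into another image:

# Hypothetical self-describing image record: an illustration of the wish
# list above, not a proposal for an actual format.
image = {
    "dims":        (512, 512, 3),      # any dimensionality: 1D, 2D, 3D, ...
    "pixel_type":  "uint8",            # or "float32", "rgb8", "packed1", ...
    "storage":     "array",            # or "sparse-points", "compressed"
    "colormap":    None,               # a predefined name or an explicit table
    "annotations": {                   # arbitrary extra data travels along
        "camera_pixel_size_um": 7.5,
        "aspect_ratio": 1.0,
        "title": "test scene",
        "history": ["scanned", "deskewed"],
    },
    "data":        b"\x00" * (512 * 512 * 3),   # raw pixel bytes
}

# Indirection: a sub-image that points at its parent instead of copying pixels.
roi = {
    "dims":       (128, 128, 3),
    "pixel_type": "uint8",
    "storage":    "reference",
    "parent":     image,
    "offset":     (64, 64, 0),
    "data":       None,
}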

Neil/.

        hunt@spar.slb.com
     ...{amdahl|decwrl|hplabs}!spar!hunt
        (415) 496 4708

------------------------------

Date: 22 Feb 88 18:34:30 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: BBS Call for Commentators


The following are the abstracts of 2 forthcoming articles on which BBS
[Behavioral and Brain Sciences -- An international, interdisciplinary
journal of Open Peer Commentary, published by Cambridge University Press]
invites self-nominations by potential commentators.

(Please note that the editorial office must exercise selectivity among the
nominations received so as to ensure a strong and balanced cross-specialty
spectrum of eligible commentators. The procedure is explained after
the abstract.)

-----
ABSTRACT #1:
        Numerical Competence in Animals: Definitional Issues,
        Current Evidence and a New Research Agenda

                Hank Davis & Rachel Perusse
                   University of Guelph

Numerical competence in  animals  has  enjoyed  renewed  interest
recently,  but  there  is still confusion about the definition of
numerical processes.  "Counting" has been  applied  to  phenomena
remote  from  its  meaning  in  the  human  case.  We  propose  a
consistent theoretical framework and  vocabulary  for  evaluating
numerical    competence.    Relative    numerousness   judgments,
"subitizing," counting and estimation are the principal processes
involved.   Ordinality,  cardinality  and  transitivity judgments
also play a role. Our framework can handle a  variety  of  recent
experimental  situations.  Some  evidence  of  generalization and
transfer is needed to demonstrate higher order  ability  such  as
counting;  otherwise  one  only  has  "protocounting" even if all
other alternatives have been excluded.
-----
ABSTRACT #2:
        Developmental Explanation and the Ontogeny of Birdsong
                      Nature/Nurture Redux

                        Timothy Johnston
               University of North Carolina, Greensboro

The view that behavior can  be  partitioned  into  inherited  and
acquired   components   remains   widespread   and   influential,
especially in the study  of  birdsong  development.  This  target
article  criticizes  the  growing  tendency  to  diagnose  songs,
elements of songs, or precursors of  songs  (song  templates)  as
either  innate  or  learned  on  the  basis  of isolation-rearing
experiments. Such experiments offer only a crude analysis of  the
contribution  of  experience  to  song development and provide no
information at all about genetic effects,  despite  arguments  to
the  contrary. Because developmental questions are so often posed
in terms of the learned/innate dichotomy, the  possible  role  of
nonobvious  contributions  to  song  development has been largely
ignored. An  alternative  approach,  based  on  Daniel  Lehrman's
interactionist theory of development, gives a better sense of the
issues that remain to be addressed in studies of song development
and provides a more secure conceptual foundation.
-----

This is an experiment in using the Net to find eligible commentators
for articles in the Behavioral and Brain Sciences (BBS), an
international, interdisciplinary journal of "open peer commentary,"
published by Cambridge University Press, with its editorial office in
Princeton NJ.

BBS publishes important and controversial interdisciplinary articles
in psychology, neuroscience, behavioral biology, cognitive science,
artificial intelligence, linguistics and philosophy. Articles are
rigorously refereed and, if accepted, are circulated to a large number
of potential commentators around the world in the various specialties
on which the article impinges. Their 1000-word commentaries are then
co-published with the target article as well as the author's response
to each. The commentaries consist of analyses, elaborations,
complementary and supplementary data and theory, criticisms and
cross-specialty syntheses.

Commentators are selected by the following means: (1) BBS maintains a
computerized file of over 3000 BBS Associates; the size of this group
is increased annually as authors, referees, commentators and nominees
of current Associates become eligible to become Associates. Many
commentators are selected from this list. (2) The BBS editorial office
does informal as well as formal computerized literature searches on
the topic of the target articles to find additional potential commentators
from across specialties and around the world who are not yet BBS Associates.
(3) The referees recommend potential commentators. (4) The author recommends
potential commentators.

We now propose to add the following source for selecting potential
commentators: The abstract of the target article will be posted in the
relevant newsgroups on the net. Eligible individuals who judge that they
would have a relevant commentary to contribute should contact the editor at
the e-mail address indicated at the bottom of this message, or should
write by normal mail to:

                        Stevan Harnad
                        Editor
                        Behavioral and Brain Sciences
                        20 Nassau Street, Room 240
                        Princeton NJ 08542
                        (phone: 609-921-7771)

"Eligibility" usually means being an academically trained professional
contributor to one of the disciplines mentioned earlier, or to related
academic disciplines. The letter should indicate the candidate's
general qualifications as well as their basis for wishing to serve as
commentator for the particular target article in question. It is
preferable also to enclose a Curriculum Vitae. (This self-nomination
format may also be used by those who wish to become BBS Associates,
but they must also specify a current Associate who knows their work
and is prepared to nominate them; where no current Associate is known
by the candidate, the editorial office will send the Vita to
appropriate Associates to ask whether they would be prepared to
nominate the candidate.)

BBS has rapidly become a widely read and highly influential forum in the
biobehavioral and cognitive sciences. A recent recalculation of BBS's
"impact factor" (ratio of citations to number of articles) in the
American Psychologist [41(3) 1986] reports that already in its fifth year of
publication (1982) BBS's impact factor had risen to become the highest of
all psychology journals indexed as well as 3rd highest of all 1300 journals
indexed in the Social Sciences Citation Index and 50th of all 3900 journals
indexed in the Science Citation index, which indexes all the scientific
disciplines.

Potential commentators should send their names, addresses, a description of
their general qualifications and their basis for seeking to comment on
this target article in particular to the address indicated earlier or
to the following e-mail address:

harnad@mind.princeton.edu

[Subscription information for BBS is available from Harry Florentine at
Cambridge University Press:  800-221-4512]
--

Stevan Harnad            harnad@mind.princeton.edu       (609)-921-7771

------------------------------

End of AIList Digest
********************

∂27-Feb-88  0214	LAWS@KL.SRI.COM 	AIList V6 #40 - Head Count, Neural Simulators, Fuzzy Logic, Refs    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 27 Feb 88  02:14:00 PST
Date: Fri 26 Feb 1988 22:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #40 - Head Count, Neural Simulators, Fuzzy Logic, Refs
To: AIList@SRI.COM


AIList Digest           Saturday, 27 Feb 1988      Volume 6 : Issue 40

Today's Topics:
  Administrivia - Head Count Results & Speaker's Net Addresses,
  Correction - Lotfi Zadeh's Nationality,
  References - 1988 Canadian Artificial Intelligence Conference &
    Schank's Papers & Constraint Satisfaction Programming,
  AI Tools - Neural-Net Simulators & Fuzzy Logic and Probability Theory,
  References - Introduction to Parallel Processing

----------------------------------------------------------------------

Date: Fri 26 Feb 88 11:46:02-PST
From: Ken Laws <LAWS@SRI.COM>
Subject: Head Count Results

I received 30 responses to my request for messages from
readers with birthdays in early February.  (There were 19
from the U.S., 3 each from Australia and the UK, 2 from
Canada, and 3 from Bitnet sites unknown to me.)  Multiplying
by the appropriate time-span factor (365/15) shows that there
were about 730 readers alert, able, and willing to reply
to the request.

The number of AIList readers is obviously much higher than
that. Bitnet alone distributes to 400 addresses (only a
few of which are known to be further redistributions).
Applying this factor of 3/400 to the full response of 30
implies that there are about 4000 readers physically able
to reply.  (Many readers, e.g. in Great Britain, do not
have outgoing mail privileges due to the expense involved.
Many others in Europe, Japan, South Korea, and elsewhere
may be unable to construct the necessary return path.)

Another handle on the readership is the number of AAAI
members with net addresses.  I don't have access to the full
membership list, but about 3500 of the members have net
addresses short enough to list in the printed directory.
If we assume that 3000 such members in the U.S. are all
Arpanet/CSNet AIList readers, the full 30 replies would
represent a pool of 4737 AIList readers worldwide.  This
is certainly imprecise, but perhaps U.S. net members not
reading AIList are balanced (or more than balanced) by
students reading AIList bboards and by readers with long
net addresses (when viewed from the Arpanet).

My conclusion is that there are probably around 4000
readers (perhaps 3000 to 6000), with fewer than 1000
able to respond to any given query or message.  A more
precise (or accurate) estimate of the readership would
require surface mail as the reply medium.
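
The arithmetic behind these figures, as a small sketch (in Python, just to
make the extrapolations explicit; the 30/19 scaling in the last step is
presumably how the 4737 figure was obtained):

# Readership estimates from the birthday-response survey.
replies_total    = 30     # replies received, birthdays in a 15-day window
replies_us       = 19
replies_bitnet   = 3
bitnet_addresses = 400    # known Bitnet distribution points
us_net_readers   = 3000   # assumed U.S. Arpanet/CSNet readers (AAAI list)

# 1. A 15-day birthday window samples 15/365 of the alert readership.
alert_readers = replies_total * 365 / 15                             # ~730

# 2. Scale the Bitnet response rate (3 of 400) up to the full 30 replies.
able_to_reply = replies_total / (replies_bitnet / bitnet_addresses)  # ~4000

# 3. Let the 3000 U.S. readers account for the 19 U.S. replies.
worldwide = us_net_readers * replies_total / replies_us              # ~4737

print(round(alert_readers), round(able_to_reply), round(worldwide))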

                                        -- Ken

------------------------------

Date: Tue, 23 Feb 88 14:10 N
From: MFMISTAL%HMARL5.BITNET@CUNYVM.CUNY.EDU
Subject: SPEAKER'S NET ADDRESSES IN SEMINAR ANNOUNCEMENTS


The seminar announcements in the AIList digest would be even more helpful,
especially for those of us on the other side of the ocean, if the
net addresses of the speakers were included. This would give us
a direct way to ask for more information, such as reports.
I normally send an EMAIL to the organizers, but a direct message to the
speakers would be easier.

Jan L. Talmon
Dept of Medical Informatics and Statistics
University of Limburg
Maastricht, The Netherlands

EMAIL: MFMISTAL@HMARL5.Bitnet

------------------------------

Date: Tue, 23 Feb 88 17:29:47 EST
From: ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU
Subject: Mea Culpa

As many people pointed out to me, Lotfi Zadeh is an Iranian (Persian as
we used to say in the olden days) and not an Indian. Mea Culpa!!!

------------------------------

Date: 21 Feb 88 16:36:13 GMT
From: mnetor!utzoo!utgpu!jarvis.csri.toronto.edu!ai.toronto.edu!gh@uun
      et.uu.net  (Graeme Hirst)
Subject: Re: 1988 Canadian Artificial Intelligence Conference

In article <8802111540.AA20271@jade.berkeley.edu> ABCANO01@ULKYVX.BITNET writes:
>"Submitted to the 1988 Canadian Artificial Intelligence Conference."
>Can anyone tell me anything about this conference, esp. dates, costs, etc?

The Canadian AI conference is in Edmonton, Alberta, the week of June 6-10,
in conjunction with the 1988 Canadian vision and graphics conferences.
For general information, write to
        Wayne Davis, Conference Chairman
        Dept Computing Science
        University of Alberta
        Edmonton, Alberta
        CANADA  T6G 2H1
        Phone: 403-432-3976


--
\\\\   Graeme Hirst    University of Toronto    Computer Science Department
////   utcsri!utai!gh  /  gh@ai.toronto.edu  /  416-978-8747

------------------------------

Date: 22 Feb 88 06:04:38 GMT
From: woodl@byuvax.bitnet
Subject: Re: We look for Schank's papers


     In response to the search for references on Roger Schank's work:

     Can I assume you are aware of Schank's article in the latest (Winter
1987) issue of AI Magazine, entitled "What is AI, anyway?"  It lists many
of his published references plus Yale Technical Reports.  The article
following it is a report on the AI efforts at Yale, which also includes
references by Schank.  If that doesn't help, I can make up a list of
important ones and send it to you.

   Larry Wood, Brigham Young University, WoodL@BYUVAX.bitnet

------------------------------

Date: 25 Feb 88 18:17:16 GMT
From: zodiac!sun-oil!rlee@spam.istc.sri.com  (Richard Lee)
Subject: Re: constraint satisfaction programming

In article <5070@pyr.gatech.EDU> parvis@pyr.gatech.EDU (FULLNAME) writes:

<I saw an excellent talk a couple of years ago by a man named Wm. Leler
<in which Wm. (pronounced Wim) discussed a constraint language called
<Bertrand.  This language was developed by Wm. in connection with research
<leading to his Ph.D.  His Ph.D. thesis has since been published as a
<distinguished thesis by one of the computer science publishers.  It's called
<Constraint Languages.  It's fairly recent.  Don't know the publisher right
<off hand.
<
<Phil Miller


_Constraint Programming Languages: Their Specification and Generation_,
by Wm Leler, Addison-Wesley, 1988, ISBN 0-201-06243-7.
Wm is short for William.

------------------------------

Date: Mon, 22 Feb 88 09:32:28 PST
From: heirich@cs (Alan Heirich)
Subject: Re:  AIList V6 #38 - Applications, Neuromorphic Tools, Nanotechnology

AILIST #38 contained a summary of sources of neural network software.
They missed two important packages used in academic research:

* The Rochester Connectionist Simulator, available from the Computer Science
  Department of the University of Rochester.  A modifiable package, written
  in C.  Allows interactive design and use of neural networks using a variety
  of learning algorithms.  New algorithms can be easily coded in C.

* SunNet, available from the Institute for Cognitive Science, University of
  California San Diego.  A "closed" package, it includes a simple programming
  language that allows procedures to be written to implement learning
  algorithms.  Oriented toward back propagation learning, but can be extended
  to other types.  Superb graphics and very easy to use.

I believe that both of these packages are available for a nominal cost which
covers media and handling.  The Rochester simulator includes source code; I
don't know about SunNet, but it may well include the source.

- Alan Heirich (heirich@sdcsvax.ucsd.edu)

------------------------------

Date: 22 Feb 88 05:25:05 GMT
From: quintus!ok@sun.com (Richard A. O'Keefe)
Subject: Re: I'm still not convinced ... Fuzzy Logic and Probability
         Theory


I don't like "fuzzy logic".  The basic reason for that is very simple:
it puts the fuzziness in the wrong place.  The standard example is
        "John is very tall"
where "very" is interpreted as a degree-of-belief in the proposition
        "John is tall".
This fails to make it clear whether
        - there is some doubt about the height
or      - someone is tall, but there is some doubt about whether it's John.
This ambiguity does not exist in the original statement.  Putting the
fuzziness on the truth-values instead of the functions seems wrong.

In probability, there is a clear distinction between distributions
(relating to functions) and probabilities (relating to propositions).

------------------------------

Date: 23 Feb 88 07:27:37 GMT
From: calgary!gaines@sri-unix.ARPA (Brian Gaines)
Subject: Re: FUZZY LOGIC VS. PROBABILITY THEORY

In article <8802180658.AA11175@ucbvax.Berkeley.EDU>, golden@FRODO.STANFORD.EDU
(Richard Golden) writes:
> The basic theoretical result is that selecting a "most probable" conclusion
> for a given set of data is the ONLY RATIONAL selection one can make in
> an environment characterized by uncertainty.  (Rational selection in this
> case meaning consistency with the classic deductive/symbolic logic - boolean

Boolean logics are not appropriate for knowledge representation if the
underlying domain is truly fuzzy, that is, has borderline cases where either
x and not x are both true, or x and not x are both false. A classic example
is shades of color that grade into one another.

> algebra.)  Thus, one could argue that if one constrains the class of
> possible inductive logics to be consistent with the laws of deductive logic
> then Probability Theory is the MOST GENERAL type of inductive logic.
>
> (iii) F(C and B,A) may be computed from F(C,B and A) and F(B,A)
>       Note this assumption's similarity to Bayes Rule but the
>       multiplicative property is not assumed.

This is an assumption of truth functionality that is excessively strong.
In general we cannot infer truth values for conjunctions in this way.

>
> To my knowledge, the axioms of Fuzzy Logic can not be derived from
> consistency conditions generated from the deductive logic so I conclude
> that Fuzzy Logic is not appropriate for inferencing.  Any comments?!!!
>

There is a strong relation between classical, probability and fuzzy logics.
If one starts with a lattice of propositions and an additive measure over it
such that p(a and b) + p(a or b) = p(a) + p(b), then one gets a general
logic of uncertainty that:
a) Becomes probability logic if you assume excluded middle, ie no borderline
   cases;
b) Becomes fuzzy logic if you assume strong truth functionality, ie p(a and b)
   can be inferred from p(a) and p(b);
c) Becomes standard propositional logic if you assume binary truth values.
Most of the useful results in fuzzy and probability logics can be derived
in the general logic and do not need the restrictive assumptions.
Theorem provers for any of the multivalued logics are essentially
constraint chasers that bound the truth values of propositions, more
like linear programming than classical resolution.
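
A small numerical check of the additivity condition and of the fuzzy
(truth-functional) specialization, in Python; the grid of truth values is
arbitrary:

import itertools

def check_min_max_additivity():
    """The usual fuzzy connectives p(a and b) = min(p(a), p(b)) and
    p(a or b) = max(p(a), p(b)) satisfy the additivity law
    p(a and b) + p(a or b) = p(a) + p(b) for every pair of truth values."""
    values = [i / 10 for i in range(11)]          # 0.0, 0.1, ..., 1.0
    for pa, pb in itertools.product(values, repeat=2):
        assert abs(min(pa, pb) + max(pa, pb) - (pa + pb)) < 1e-12
    return True

# Probability keeps additivity but drops truth functionality: p(a and b) is
# only bounded, max(0, p(a)+p(b)-1) <= p(a and b) <= min(p(a), p(b)), which
# is why theorem provers for these logics chase bounds on truth values.
print(check_min_max_additivity())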

There is a wealth of literature on fuzzy and probability logics -
The journals Fuzzy Sets and Systems, Approximate Reasoning, and Man-Machine
Studies, all carry articles on these issues and applications to
knowledge-based systems.  North-Holland have published several books
edited by Gupta, Kandel, Yager, and others, on these topics.

Brian Gaines, gaines@calgary.cdn, (403) 220-5901

------------------------------

Date: 25 Feb 88 00:56:40 GMT
From: pyramid!leadsv!esl!ian@hplabs.hp.com  (Ian Kaplan)
Subject: Re: Parallel Processing

jb@otter.hple.hp.com (Jason Brown) writes:
>
>To the world,
>
>I have just read a book entitled "Supercomputers of today and tomorrow the
>parallel processing revolution", by richard a jenkins. (TAB press). This book
>goes through the basics of parallel processing and what is happening in the PP
>world at the moment. The book was wriiten in 1985 and is some what put of data
>, I should imagine.
> [ text deleted ]
>
>It would be nice if I could contact the originators [of the parallel
>machines]>, so if anyone reads this who is at any of the above places
>and knows, or can find out where the person is maybe you could get
>them to contact me.
>

Jason:

   I could not get your address to work via e-mail, so here is my note.

   If you want a more detailed survey read the book by Hwang and
Briggs.  Its title is something like "Parallel Processor
Architecture".  I think that it was published by McGraw-Hill.

  You are rather naive in your belief that someone is going to send
you a note explaining all the trade-offs in the parallel architecture
they chose.  Such an explanation would be more appropriate as a
seminar or course, rather than a short note.  Also, why should
researchers spend their time when you will not even spend the time to
go to the library and read more extensively on the topic?  Tab press
is infamous for printing trash.  You also asked for references.  Even
a reference list would be extensive.  However, a technical library
should have some bibliographies.

The big conference on parallel processing is the International
Conference on Parallel Processing.  Look at the conference
proceedings.  The Proceedings of the International Symposium on
Computer Architecture also has articles on parallel processing on
occasion.

Finally, take a class on computer architecture.  You cannot understand
the tradeoffs in parallel architectures unless you understand standard
serial and vector architectures.

           Ian L. Kaplan
           ESL, Advanced Technology Systems
           M/S 302
           495 Java Dr.
           P.O. Box 3510
           Sunnyvale, CA 94088-3510

           esl!ian@ames

                    decvax!decwrl!\
                   sdcsvax!seismo!- ames!esl!ian
                    ucbcad!ucbvax!/     /
                          ihnp4!lll-lcc!

------------------------------

End of AIList Digest
********************

∂27-Feb-88  0430	LAWS@KL.SRI.COM 	AIList V6 #41 - Supercomputers, Nanotechnology
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 27 Feb 88  04:30:39 PST
Date: Fri 26 Feb 1988 22:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #41 - Supercomputers, Nanotechnology
To: AIList@SRI.COM


AIList Digest           Saturday, 27 Feb 1988      Volume 6 : Issue 41

Today's Topics:
  Reviews - Cray "Left Brains" & Spang Robinson Supercomputing,
  Opinion - Nanotechnology, Science Priorities

----------------------------------------------------------------------

Date: 25 Feb 88 06:29:18 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Cray makes "left brains", says chairman Rollwagen


      ... Adding more and more parallel processors turns a computer into
a new kind of animal, a thinking machine.  Developers of thinking machines
now are talking in terms not of 64 processors, but 64,000 interlinked
processors.  The linkages, not the processors, become the central part
of the technology.  Cray sees too much potential in its existing business
to consider, at this point, going into the thinking machine business.

      "It's kind of like we make left brains.  Even though they are getting
more complex, it's still rational, linear, deterministic programs that we
run.  They [the thinking machine people] are trying to build the right
brain, where the interconnections are as much of the machine as the
processors.  What you are going to do is just get them started and they're
going to go off on their own and it will be fascinating to see how this works.
But I think we're really talking about decades."

        (John A Rollwagen, chairman of the board of Cray Research,
quoted in an article by George Melloan, in The Wall Street Journal of
February 23, p. 29.)

------------------------------

Date: Sun, 21 Feb 88 22:36:39 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Review - Spang Robinson Supercomputing

Summary of Spang Robinson Report on Supercomputing and Parallel Processing
Volume 2, No. 1, January 1988

Lead article is on Supercomputer Marketplace

Cray Research
1985 - 100th machine shipped
1987 - 200th machine shipped

Annual market  - nine billion today,
                 twenty billion in 1990
                 thirty billion in 1995

Academic Supercomputer Application Development

Physical Sciences - 50%
Geosciences       - 15%
Biosciences       -  8%
Social Science    -  2%
Math Sciences     -  4%
Engineering       - 19%
Multidisciplinary -  4%

Worldwide Distribution of Supercomputers:

U. S. 54%
Japan 19%
Other  8%
UK, France, Germany, Canada less than 5% each


Categorization of where the Supercomputers are:

Research          24%
Universities      18%
Defense           16%
Aerospace         13%
Petroleum         10%
Environment        7%
Nuclear Energy     7%
Service Bureau     5%
Automotive         5%

Installed Base (1986):

Supercomputers                    228
Vector Augmented Mainframes       190
Minisupercomputers                450
Superminicomputers            140,000
Workstations                  110,000

The article also includes estimates for 1991 and beyond for most of the above
numbers, as well as discussion.
______________________________
Article on Thinking Machines Connection Machine.

As of May 1987, they had delivered seven systems.
They have raised a total of $31 million in equity capital, with a great chunk
untouched.
______________________________

The U. S. House of Representatives Subcommittee on Science, Research
and Technology of the Committee on Science, Space and Technology plans
a brief hearing in the February to early March time frame to review
recommendations for further support of Supercomputing.

The Office of Science and Technology Policy has a report that
says:
1) The U. S. government should develop a long-range supercomputer support
program because this is necessary for national security and the economy.
2) Joint research should be undertaken because of the relationship with
software and microelectronics.
3) Network technology may become a barrier, so further work should
be done linking things up.
4) Long-term support should be within available resources:

They estimate $500 million for supercomputing, which should be increased
70 percent.  The National Network should be upgraded at a cost of
$390 million.  This includes upgrading existing networks to 1.5 Mbit/sec
capacity, to be replaced later with 45 Mbit/sec capacity.

+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_
Shorts:

IBM has made a commitment to Steve Chen's company
that falls short of the $100 million needed to develop the product.

Dana Computer has changed its name to Ardent Computer Technology.

Cray Research will be selling a system to Construcciones Aeronauticas, S. A.

Prime Computer Systems is announcing a system developed jointly with
Cydrome.  It is a $500,000 - $600,000 machine using
directed dataflow architecture and vector processors.

A broker, Piper, Jaffray and Hopwood, recommended that investors accumulate
Cray stocks.

Sequent has announced an OEM agreement with Apricot computers of England.

Westinghouse will be selling Scientific Computer System machines in a turnkey
fashion for nuclear fuel customers.

Encore made a profit of $114,000 for the year.

Japan's MITI awarded Tandem its "Good Design Prize for Foreign Products" for its
VLX mainframe and XL8 and V8 disk storage devices.

Encore's President, James R. Pompa has resigned.

Scientific Computers has released a version of System V Release 3 for
its machines which includes UNICOS extensions.  They also introduced
a feature to increase the amount of main memory and to implement gather/scatter
and compress instructions.

University of Georgia had an ETA10-P shipped.

BBN Advanced Systems has a low cost starter Butterfly system for $100,000
for universities.

Topologix has introduced add-in boards for Suns containing four transputers
apiece, with up to eight boards per workstation.

Convex has shipped beta site units of the C2, which is targeted at 40
MFLOPS.

Indiana University developed a system to interactively automate the conversion
of programs for the Butterfly family of parallel processors.  It is public
domain.

Ametek has announced a Series 2010, a hypercube based system, where
messages pass through nodes without requiring processor time.
Each node is a 68020 CPU plus a 68881 floating point unit with 1
to 8 Mbytes of memory.  It does not require expansions in powers
of two and can grow to 1024 nodes.

Dow Jones Informations has purchased a Connection Machine from Thinking
Machines.  The system uses the information retrieval technique
of "relevant factor" in which the user evaluates what is returned from
the first query to generate successive paths.

A second 32,000-processor machine is on order for March delivery.
One such machine has 4.3% of the power of the entire world's mainframe
computers.

------------------------------

Date: Sun, 21 Feb 88 17:06:54 CST
From: smu!lewis@uunet.UU.NET (Eve Lewis)
Subject: Nanotechnology, Science Priorities

Urgent  insert:  On the morning of Friday, 12 February 1988, I heard
the  following NPR item:- The National Research Council has issued a
report  calling  for  a  major shift in biology research. The report
says the Federal Government should spend $200 million  a  year,  for
fifteen  years, on a single genetic project. NPR's Science reporter,
Laurie Garrett, has more:

Garrett:  For  several  years,  some  powerful  scientists have been
promoting the largest biological  research  effort  ever  attempted.
They  want  to  decipher the code of over three billion genetic mes-
sages stored in human chromosomes. The National Research Council en-
dorsed the project yesterday, but did not say which  Federal  agency
should  control  it.  Last night, Harvard University Nobel Laureate,
David Baltimore,  addressed  the  Annual  Meeting  of  the  American
Association for the Advancement of Science. Despite his leading role
in  genetics  and biotechnology research, Baltimore came out against
"The Genome Project."

Baltimore: When I walk around and ask people, "Do you feel  in  your
own  research,  that what's holding you up, is a lack of sequence of
the human genome?" I have yet to meet anybody who says, "Yes."

Garrett: Baltimore says Federal money and research efforts would  be
better spent on a massive, co-ordinated attack on the AIDS virus. In
Boston, I'm Laurie Garrett.

     End of this NPR transcript re: Baltimore's horrendous opinion.

1)  Do  researchers always know precisely what is "holding up" their
research, or how said research would  be  spurred,  stimulated,  and
aided by such an important store of data?

2)  From Robert Kanigel's book, "Apprentice to Genius," which I just
finished reading, discussing James Shannon of  NIH,  and  I  believe
supporting my view that Baltimore's advice is a mistake:

"Shannon's  task  was to align NIH's disease-oriented structure with
the needs of basic research. The strategy he advanced all the  years
of  his  tenure as director was: Don't embark on a narrow search for
disease cures at all." and,

"`Knowledge of life processes and of phenomena underlying health and
disease  is  still grossly inadequate,' he would write. Without such
knowledge, it was a waste of time, money, and manpower  to  aim  for
the solution of a specific medical problem. He blamed the failure of
polio  vaccines  back in the 1930s on lack of knowledge of the polio
virus and techniques needed to culture it. He pulled the  plug  from
an artificial heart program already approved because he didn't think
cardiac functioning was well enough understood.

"He  didn't  like  the  term basic research; he preferred calling it
fundamental. But in the end it was the same. As he put it in an  ar-
ticle he coauthored for "Science" soon after becoming director, "The
potential  relevance  of  research to any disease category is [best]
defined in terms of long-range possibilities and  not  in  terms  of
work  directed  toward  the quick solution of problems obviously and
solely related to a given disease." Additionally, re:

Ca. 1955, Dr. Seymour Kety, Director of Scientific Research for NIMH

"In the long run, basic science would gain, leading to clinical  ad-
vances more abundant than if they'd been pursued directly.

"Some  years  later,  two  clinical investigators, Julius Comroe and
Robert Dripps, lent analytical force to Kety's  intuition.  The  two
undertook  to examine the origins of the ten most important clinical
advances in heart and lung medicine and  surgery  of  the  preceding
thirty years. They tracked down 529 scientific articles that had, in
retrospect,  proven  crucial  to  those clinical success stories. Of
them, Comroe wrote, fully forty-one percent `reported work that,  at
the  time  it was done, had no relation whatever to the disease that
it later helped to prevent, diagnose, treat, or alleviate.' Penicil-
lin, the anticoagulant heparin, and the  class  of  drugs  known  as
beta-blockers were among them."

Now,  I refer to the above, because I believe that if David "reverse
transcriptase" Baltimore's advice were followed,  it  would  deprive
the AI people and the neuroscience people (let's hope that some have
a  foot  in  each camp) of the BIOLOGICAL COMPILER IN THE NEURONS OF
THE HUMAN BRAIN, to wit: the "reverse transcriptase"  implicated  in
mental  function.  That is why it is too ironic for words, that Bal-
timore should come out with such an  opinion,  in  addition  to  its
being pathetic that someone of his calibre should "think small." One
wonders what interests have gotten to him.

My article continues:

On  page A1 of the New York Times, 20 March 1987, there was a report
of a meeting called by the American Physical Society.  I  tell  you,
these  people were chortling in a state of absolute mania, in regard
to the discoveries in "superconductivity."  The  head:  "Discoveries
Bring a `Woodstock' for Physics." Byline: By James Gleick.

Was I jealous? Was I envious? I tell you that I was totally sick, to
the  max.  But  I  also tell you that less than one year later, I am
beginning to feel rather good, about AI (artificial intelligence, as
we all know) and about NI (natural intelligence).

Interdigitation between these two  disciplines  will  forge  an  ab-
solutely  unbreakable  bond,  and after a struggle of two and a half
millennia, WE will chortle at our own "Woodstock." I can see The  New
York  Times  article's  banner  headline  in  my  "mind's eye," now:
"DISCOVERIES BRING A `WOODSTOCK'  FOR  NEUROSCIENCE  AND  ARTIFICIAL
INTELLIGENCE." (We will also be able to define in neurophysiological
and molecular biological terms, just what is that "mind's eye.")

Now, re: the Drexler concept:  Godden  <GODDEN%gmr.com@RELAY.CS.NET>
in an article, "Intelligent Nanocomputers," dated Friday, 15 January
1988 @ 09:46 EST, discusses K. Eric Drexler's "Engines of Creation."
What he explores particularly in his review is the chapter on AI and
nanocomputers.  "Drexler  makes the fascinating claim (no doubt many
will vehemently disagree) that to create a  true  artificial  intel-
ligence  it  is  not necessary to first understand intelligence. All
one has to do is  simulate  the  brain,  which  can  be  done  given
nanotechnology.  He  suggests that a complete hardware simulation of
the    brain    can     be     done,     synapse-for-synapse     and
dendrite-for-dendrite,  in  the  space of one cubic centimeter (this
figure is backed up in the notes)."

However,  the  real  "Engine  of  Creation"  is,  in  actuality,   a
NANO-NANOTECHNOLOGY. It would be most resourceful and productive to
access the BIOLOGICAL NANO-NANOTECHNOLOGY, and with this
inspiration, with this wealth of clues, create an AI
NANO-NANOTECHNOLOGY.

Biological Nano-Nanotechnology IS Molecular Biology. But take heart,
because there is no conflict between Biological  Nanotechnology  and
Biological  Nano-Nanotechnology.  Indeed,  the  latter is the raison
d'etre, the vis a tergo, the provider of  templates,  the  sine  qua
non,  for  the  former,  which provides the model, and the source of
reference  for Drexler's "Engine of Creation," which is - one judges
from Godden's review - an isomorph of the human brain.

I  only  recommend that AI nanotechnologists avail themselves of the
wealth   of   experimentation   and   information   in    Biological
Nano-Nanotechnology.  Every  structure  in  the  human body, not ex-
cepting, for sure, the human brain, drags  with  it  a  phylogenetic
residue, from further back in time than perhaps we care to remember,
a  DNA  riddled  with  karma,  "sins,"  choices made at forks in the
evolutionary road, even choices not made at  such  forks,  and  all,
rattling  like  the  chains of Marley's ghost. We may be scared, in-
hibited by built-in protective mechanisms, which is why we have not
yet solved "the biggie." I here refer to precisely what is the human
mind, in neurophysiological and molecular biological terms.

The structural genes of the human genome determine the morphology of
the  human  brain,  not to neglect the neurotransmitters, receptors,
etc. In my view, the human genome is the engineer that  masterminded
the "Engine of Creation." There is no antithesis implicated; indeed,
after  swallowing  the  theories  of  Galileo, Darwin and Freud, the
species (ours) had best prepare itself for another  humongous  gulp,
when  the genetic constraints on human thought are revealed and sub-
stantiated.

Godden's review of the Drexler book apparently elicited  some  reac-
tion.  I  refer  specifically  to the article dated 01 Feb 88 (11:53
PST),  by  John  McCarthy  <JMC@SAIL.Stanford.EDU>,  and  that  from
Wednesday,   03   Feb   1988   (01:25  EST),  by  Marvin  L.  Minsky
<MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>.

Dr. Minsky is optimistic in the extreme, as I am:

"Progress  in  this  direction  certainly  seems  faster than almost
everyone would have expected. I will make a prediction: In the  next
few  years,  various  projects will request and obtain large budgets
for the "human genome sequencing" enterprise. In the meantime, some-
one will succeed in stretching single strands of  protein,  DNA,  or
RNA  across  crystalline  surfaces, and sequence them, using the STM
method. Eventually, it should become feasible to do such  sequencing
at  multi-kilocycle  rates,  so  that  an entire chromosome could be
logged in a few days."

That is why Dr. Baltimore's advice is so alarming. With his prestige
and influence, whom else will he recruit for that bandwagon?

John  McCarthy  (01  Feb 88) describes two extreme approaches to AI,
the "Logic Approach," which is his preference:

"Understand the common sense world well enough to express in a suit-
able logical language the facts known to a person. Also express  the
reasoning methods as some kind of generalized logical inference."

and the "Instrumental Approach," which McCarthy finds less useful:

"Using  nano-technology  to  make  an instrumented person. (This ap-
proach was suggested by Drexler's book and by Eve Lewis's commentary
in AILIST. It may even be what she is suggesting)."

McCarthy  points  out  certain  problems  with   the   "Instrumental
Approach,"  which  on a Nanotechnological (Drexler) level would be a
recruitment of a functional isomorph of the human brain,  and  on  a
more  intricate,  biologically  more "basic," Nano-Nanotechnological
level,  would  involve a functioning human genomime ["genomime" is a
word I coined for the genomic equivalent of the brain isomorph].

He says "Sequence the human genome. Which one?  Mostly  they're  the
same, but let the researcher sequence his own."

Dr. McCarthy is overlooking, in the most anti-serendipitous  manner,
what  neuroscience,  embryology, and most particularly, which he did
not mention, PHYLOGENY, have to offer. Just for starters, the evolu-
tion of the genome, pari passu with the phylogenesis of  the  pineal
organ  and  the  phylogenesis of the inner ear, as traced throughout
the vertebrate phylum, would give  artificial  intelligence  such  a
wealth of data, such an embarrassment of riches that it would hardly
begin to know what to do with it.

This  article  is  fast  becoming a "gontzeh megillah," and I really
must wind it up, but when McCarthy refers to "the human genome," and
says, "which one?" that is enough to get me started again. As far as
feeding "culturgens"  (E.O.  Wilson's  term)  into  the  super-duper
parallel  computer,  does  McCarthy really believe that there are no
racial differences, no sexual differences, not to mention individual
differences, in human brains? Does  he  think  that  it's  "cultural
pressure"  that  accounts for the Oriental reverence for the dragon,
and for the Occidental St. George, slayer of same?

"Evolution keeps going, and even if we don't do anything artificial,
we won't be the same in ten million years." - Marvin L. Minsky

Meanwhile, before McCarthy abandons the "Engine  of  Creation,"  en-
tirely,  I suggest that he check out the "Sexually Dimorphic Nucleus
of the Hypothalamus," S.D.N., for short - just for starters.

McCarthy states: "However, experience since the 1950s shows that  AI
is a difficult problem, and it is very likely that fully understand-
ing  intelligence  may  take  of  the  order  of  a  hundred  years.
Therefore, the winning approach is likely to be tens of years  ahead
of the also-rans."

In fact, we are not referring just to the experience of the last few
decades, but that of the last two and a half millennia.

If you would read the "Works of Plato,"  specifically  the  Socratic
dialogues in "Phaedrus" and "Theatetus" and compare the Dialogues to
Minsky's "Society of Mind," or to Lopate's interview with him, or to
Sir  Francis  Crick's  seminar  on  "The  Impact  of Biochemistry on
Neurobiology," at Cornell University on 6 May 1986, or  to  Jonathan
Winson's  "Brain  and Psyche," or Michael S. Gazzaniga's "The Social
Brain," or J.Z. Young's "Programs of the Brain," or the  other  per-
ceptive  contributions  in  this  area,  then you would have to ack-
nowledge that as far as the struggle to comprehend  memory,  or  in-
ternal  representation,  or  vision, or "How We Know Universals," is
concerned, we would have to paraphrase the  Queen  in  "Through  the
Looking-Glass," and admit that it's taken all the running we can do,
just to keep in the same place!

Nonetheless,  solving this problem is do-able, and we are better off
with the information about the trillion  neurons,  with  the  10,000
connections  each,  and the plethora of neurotransmitters and recep-
tors. And we will be better off yet with sequencing the human genome
with the three billion base pairs, and the introns  and  the  exons,
and the promoters and the enhancers and the repressors and even
the "junk" DNA. Maybe especially the "junk" DNA.

Surely, we will have come up with a robust theory, or  theories,  to
interpret  all  that data, but let it be there to interpret, just as
we have to make sense out of all the neural pathways and connections
and peptides and receptors.

We  don't  need  any  "bottom-line," pessimistic, "applied research"
types shoving us off that track, or plunging us back into  the  dark
ages.  So much for Baltimore's suggestion. Keep the faith!

------------------------------

End of AIList Digest
********************

∂01-Mar-88  0000	LAWS@KL.SRI.COM 	AIList Digest   V6 #42 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Mar 88  00:00:00 PST
Date: Mon 29 Feb 1988 22:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V6 #42
To: AIList@SRI.COM


AIList Digest            Tuesday, 1 Mar 1988       Volume 6 : Issue 42

Today's Topics:
  Seminars - Massively Parallel Object Recognition (BBN) &
    A Theory of Prediction and Explanation (SRI) &
    Cognition and Metaphor (BBN) &
    Physically Based Modeling (CMU) &
    Qualitative Probabilistic Networks (MIT) &
    Automated Program Recognition (BBN) 

----------------------------------------------------------------------

Date: Thu 25 Feb 88 09:21:08-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Massively Parallel Object Recognition (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

     OBJECT RECOGNITION USING MASSIVELY PARALLEL HYPOTHESIS TESTING

                             Lewis W. Tucker
                     Thinking Machines Corporation
                             Cambridge, MA
                           (TUCKER@THINK.COM)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                       10:30 am, Tuesday March 1


Problems in computer vision span several layers of data representation
and computational requirements.  While it is easy to see how advances in
parallel machine architectures enhance our capability in "low-level"
image analysis to process the large quantities of data in typical
images, it is less obvious how parallelism can be exploited in the
"higher" levels of vision such as object recognition.

Traditional approaches to object recognition have relied on
constraint-based tree search techniques that are not necessarily
appropriate for parallel processing.

This talk will introduce a model-based object recognition system
designed at Thinking Machines Corporation and its implementation
on the Connection Machine.  The goal of this system is to be able
to recognize a large number of partially occluded objects in 2-D
scenes of moderate complexity.  In contrast to previous
approaches, the system described here utilizes a massively
parallel hypothesize-and-test paradigm that avoids serial search.
Perceptual grouping of features forms the basis for generating
hypotheses; parameter space clustering accumulates weak evidence;
template matching provides verification; and conflict resolution
ensures the consistency of scene interpretation.

Results from experiments with databases ranging from 10 to 100
objects show that recognition time is relatively independent of
both the complexity of the scene and the number of objects in the
database.
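
To make the hypothesize-and-test paradigm concrete, here is a small
serial toy in Python (an illustration only, not Tucker's Connection
Machine system, and the point sets are made up): model/scene feature
pairings vote for a translation, the densest cluster in parameter space
becomes the hypothesis, and template matching verifies it.

from collections import Counter

MODEL = [(0, 0), (2, 0), (2, 1), (0, 1)]          # model feature points
SCENE = [(5, 3), (7, 3), (7, 4), (5, 4), (9, 9)]  # shifted model plus clutter

def recognize(model, scene):
    # Hypothesis generation: every model/scene pairing proposes a translation.
    votes = Counter((sx - mx, sy - my) for mx, my in model for sx, sy in scene)
    # Evidence accumulation: the translation with the most votes wins.
    (dx, dy), _ = votes.most_common(1)[0]
    # Verification: what fraction of model points lands on scene points?
    matched = sum((mx + dx, my + dy) in scene for mx, my in model)
    return (dx, dy), matched / len(model)

offset, score = recognize(MODEL, SCENE)
print("best hypothesis:", offset, "verified:", score)   # (5, 3), 1.0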

------------------------------

Date: Thu, 25 Feb 88 14:36:28 PST
From: Margaret Olender <olender@malibu.ai.sri.com>
Subject: Seminar - A Theory of Prediction and Explanation (SRI)


   WHEN:  FRIDAY, MARCH 4th
   TIME:  10:30am
  WHERE:  EJ228
SPEAKER:  LEORA MORGENSTERN / BROWN UNIVERSITY.



                         WHY THINGS GO WRONG:
            A FORMAL THEORY OF PREDICTION AND EXPLANATION

                        Leora Morgenstern
                        Brown University

This talk presents a theory of Generalized Temporal Reasoning.  We focus
on the related problems of:
(1) Temporal Projection - figuring out all the facts that are true
in some chronicle, given a partial description of that chronicle
                    and
(2) Explanation - figuring out what went wrong if an expected
outcome didn't occur.

Standard logics can't handle temporal projection due to such problems
as the frame problem (and qualification problem).  Simplistic
applications of non-monotonic logics also won't do the trick, as the
Yale Shooting Problem demonstrates.  During the past several years, a
number of solutions have been proposed to the Yale Shooting Problem,
which either use extensions of default logics (Shoham,Kautz), or which
circumscribe over predicates specific to a theory of action
(Lifschitz, Haugh).  We show that these solutions - while perfectly
valid for the Yale Shooting Problem - cannot handle the general
temporal projection problem, because they all handle either forward or
backward projection improperly.

We present a solution to the generalized temporal projection problem
based on the notion that actions only happen if they are *motivated*.
We handle the non-monotonicity using only preference criteria on
models, and avoid both modal operators and circumscription axioms.  We
show that our theory handles both forward projection and backward
projection properly, and in particular solves the Yale Shooting
Problem and a set of benchmark problems which other theories can't
handle.  An advantage of our approach is that it lends itself to an
intuitive model for the explanation task.  We present such a model,
give several characterizations of explanation within that model, and
show that these characterizations are equivalent.

This talk reports on joint work done with Lynn Stein of Brown
University.
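
For readers who have not met the Yale Shooting Problem, the Python
sketch below (my own toy, not the Morgenstern-Stein theory) shows the
failure being described: if the load-wait-shoot scenario is projected by
simply minimizing the number of fluent changes, the anomalous history in
which the gun mysteriously becomes unloaded survives alongside the
intended one.

from itertools import product

ACTIONS = ["load", "wait", "shoot"]        # acts between times 0-1, 1-2, 2-3
INIT = {"alive": True, "loaded": False}    # state at time 0

def consistent(states):
    """Direct effect axioms over a candidate history."""
    for t, act in enumerate(ACTIONS):
        pre, post = states[t], states[t + 1]
        if act == "load" and not post["loaded"]:
            return False                   # loading makes the gun loaded
        if act == "shoot" and pre["loaded"] and post["alive"]:
            return False                   # shooting a loaded gun kills
    return True

def changes(states):
    """Number of fluent changes -- a crude stand-in for abnormality atoms."""
    return sum(states[t][f] != states[t + 1][f]
               for t in range(len(ACTIONS)) for f in INIT)

histories = []
for bits in product([True, False], repeat=6):
    states = [INIT] + [{"alive": bits[2 * t], "loaded": bits[2 * t + 1]}
                       for t in range(3)]
    if consistent(states):
        histories.append(states)

best = min(changes(h) for h in histories)
minimal = [h for h in histories if changes(h) == best]
print(len(minimal), "minimal histories, each with", best, "changes")
print(sorted({h[-1]["alive"] for h in minimal}))
# [False, True]: both the intended outcome and an anomalous "the gun
# somehow became unloaded" outcome are minimal, so the naive preference
# criterion cannot decide whether the victim is alive.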

------------------------------

Date: Mon 29 Feb 88 08:41:01-EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Seminar - Cognition and Metaphor (BBN)


                   BBN Science Development Program
                 Language & Cognition Seminar Series


                      COGNITION AND METAPHOR

                     Professor Bipin Indurkhya
                    Computer Science Department
                         Boston University

                      BBN Laboratories Inc.
                       10 Moulton Street
                Large Conference Room, 2nd Floor


              10:30 a.m., Wednesday, March 9, 1988


Abstract:  In past years a view of cognition has been emerging in which
metaphors play a key role. However, a satisfactory explanation of the
mechanisms underlying metaphors and how they aid cognition is far from
complete.

In particular, earlier theories of metaphors have been unable to account
for how metaphors can "create" new, and sometimes contradictory, perspectives
on the target domain.

In this talk I will address some of the issues related to the role metaphors
play in cognition. I will first lay an algebraic framework for cognition,
and then in this context I will pose the problem of metaphor. Two mechanisms
will be proposed to explain the workings of metaphors. One of these
mechanisms gives rise to what we call "projective metaphors", and it is
shown how projective metaphors can "create" new perspectives and new
ontologies on the target domain. The talk will conclude with a brief
discussion of some further implications of the theory on "Direct Reference
vs. Descriptive Reference", "Is all knowledge metaphorical?", and
"Induction and Analogies", among other things.

------------------------------

Date: Sun, 28 Feb 88 17:05:41 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Physically Based Modeling (CMU)


        TOPIC:    Physically Based Modeling For Vision And Animation

        SPEAKER:  Andy Witkin, Purdue University

        WHEN:     Thursday, March 3, 1988, 3:30-4:30 p.m.

        WHERE:    Wean Hall 5409

                            ABSTRACT

Our approach to modeling for vision and graphics uses the machinery of
physics.  We will describe two current foci of our research:

To create models of real-world objects we use simulated materials that
move and deform in response to applied forces.  Constraints are imposed
on these active models by applying forces that coerce them into states
that satisfy the constraints.  In visual analysis, the constraint forces
are derived from images.  Additionally, the user may apply forces
interactively, guiding the models towards the desired solution.
Examples of the approach include simulated pieces of springy wire
attracted to edges, and symmetry-seeking elastic bodies used to recover
three-dimensional shapes from 2-D views.

To animate active character models we use a new method called
``spacetime constraints.''  The animator specifies what the character
has to do, for instance, ``jump from here to there, clearing a hurdle in
between;'' how the motion should be performed, for instance ``don't
waste energy,'' or ``come down hard enough to splatter whatever you land
on;'' the character's physical structure---the geometry, mass,
connectivity, etc.  of the parts; and the physical resources available
to the character to accomplish the motion, for instance the character's
muscles, a floor to push off from, etc.  The requirements contained in
this description, together with Newton's laws, comprise a problem of
constrained optimization.  The solution to this problem is a physically
valid motion satisfying the ``what'' constraints and optimizing the
``how'' criteria.  We will present animation of a Luxo lamp performing a
variety of coordinated motions.  These realistic motions conform to such
principles of traditional animation as anticipation, squash-and-stretch,
follow-through, and timing.

We will conclude with a videotape presenting an overview of our recent
vision and animation work.
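
A one-dimensional Python toy in the spirit of ``spacetime constraints''
(my own illustration, not the Luxo-lamp system; it assumes a unit point
mass, unit time steps, and made-up numbers): choose forces so the mass
starts at rest at x = 0, arrives at rest at x = 10 after T steps, and
minimizes the total squared force.  With linear dynamics this reduces to
a tiny equality-constrained least-squares problem.

T, TARGET = 10, 10.0        # number of unit time steps, final position

# With v[t+1] = v[t] + f[t] and x[t+1] = x[t] + v[t+1] (unit mass, dt = 1):
#   x[T] = sum_t (T - t) * f[t]      "jump from here to there"
#   v[T] = sum_t f[t] = 0            "arrive at rest"
row_x = [float(T - t) for t in range(T)]
b = [TARGET, 0.0]

# Minimize sum(f**2) subject to the two constraints: f = A^T (A A^T)^-1 b,
# where A has rows row_x and all-ones, so A A^T is just a 2x2 matrix.
a11 = sum(c * c for c in row_x)
a12 = sum(row_x)
a22 = float(T)
det = a11 * a22 - a12 * a12
lam1 = (a22 * b[0] - a12 * b[1]) / det
lam2 = (-a12 * b[0] + a11 * b[1]) / det
forces = [row_x[t] * lam1 + lam2 for t in range(T)]

# Replay the motion to check that the "what" constraints really are met.
x = v = 0.0
for f in forces:
    v += f
    x += v
print([round(f, 3) for f in forces])
print(round(x, 3), round(v, 3))     # ~10.0 and ~0.0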

------------------------------

Date: Monday, 1 February 1988  11:31-EST
From: Paul Resnick <pr at ht.ai.mit.edu>
Subject: Seminar - Qualitative Probabilistic Networks (MIT)

                 [Excerpted from the IRList Digest.]

  Thursday, 4 February  4:00pm  Room: NE43- 8th floor Playroom

                        The Artificial Intelligence Lab
                        Revolving Seminar Series

                        Qualitative Probabilistic Networks
                        Mike Wellman

Many knowledge representation schemes model the world as a collection of
variables connected by links that describe their interrelationships.
The representations differ widely in the nature of the fundamental
objects and in the precision and expressiveness of the relationship
links.  Qualitative probabilistic networks occupy a region in
representation space where the variables are arbitrary and the
relationships are qualitative constraints on the joint probability
distribution among them.

Two basic types of qualitative relationship are supported by the
formalism.  Qualitative influences describe the direction of the
relationship between two variables and qualitative synergies describe
interactions among influences.  The probabilistic semantics of these
relationships justify sound and efficient inference procedures based on
graphical manipulations of the network.  These procedures answer queries
about qualitative relationships among variables separated in the
network.  An example from medical therapy planning illustrates the use
of QPNs to formulate tradeoffs by determining structural properties of
optimal assignments to decision variables.
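
A toy Python rendering of the sign bookkeeping behind qualitative
influences (a sketch of the flavor only, not Wellman's formalism, and
the medical example is invented): edges carry signs, the effect along a
path is the sign product, parallel paths combine by sign addition, and
'?' marks an unresolved tradeoff.

PRODUCT_TABLE = {('+', '+'): '+', ('+', '-'): '-',
                 ('-', '+'): '-', ('-', '-'): '+'}

def sign_product(a, b):
    if '0' in (a, b):
        return '0'
    if '?' in (a, b):
        return '?'
    return PRODUCT_TABLE[(a, b)]

def sign_add(a, b):
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'

def influence(graph, source, target, sign='+'):
    """Qualitative effect of source on target (graph must be acyclic):
    sign-sum over all directed paths of the sign-product along each."""
    if source == target:
        return sign
    total = '0'
    for succ, edge_sign in graph.get(source, []):
        via = influence(graph, succ, target, sign_product(sign, edge_sign))
        total = sign_add(total, via)
    return total

# The drug raises blood pressure, which raises stroke risk, but it also
# lowers cholesterol, which raises stroke risk as well.
g = {'drug': [('bp', '+'), ('chol', '-')],
     'bp':   [('stroke', '+')],
     'chol': [('stroke', '+')]}
print(influence(g, 'drug', 'stroke'))   # '?' -- the tradeoff is unresolved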

------------------------------

Date: Thursday, 4 February 1988  11:05-EST
From: Paul Resnick <pr at ht.ai.mit.edu>
Subject: Seminar - Automated Program Recognition (BBN)

                 [Excerpted from the IRList Digest.]


  Thursday, 11 February  4:00pm  Room: NE43- 8th floor Playroom

                        The Artificial Intelligence Lab
                        Revolving Seminar Series

                        Automated Program Recognition
                        Linda Wills

By recognizing familiar algorithmic fragments and data structures in a
program, an experienced programmer can understand the program, based on
the known properties of the structures found.  Automating this
recognition process will make it easier to perform many tasks which
require program understanding, e.g., maintenance, modification, and
debugging.  This talk describes a recognition system which automatically
identifies occurrences of stereotyped computational structures in
programs.  The system can recognize these standard structures, even
though they may be expressed in a wide range of syntactic forms or they
may be in the midst of unfamiliar code.  It does so systematically by
using a parsing technique.  Two important advances have made this
possible.  The first is a language-independent graph representation for
programs, which canonicalizes many syntactic features of programs.  The
second is an efficient graph parsing algorithm.
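
A Python toy with the same flavor (purely syntactic pattern matching on
one language, unlike the language-independent graph representation and
graph parsing described above): spot the stereotyped "accumulate into a
variable inside a loop" fragment in a piece of source code.

import ast

SOURCE = """
total = 0
for x in values:
    total = total + x
"""

def find_accumulations(source):
    """Return names assigned as `name = name + <expr>` inside a for loop."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.For):
            for stmt in node.body:
                if (isinstance(stmt, ast.Assign)
                        and len(stmt.targets) == 1
                        and isinstance(stmt.targets[0], ast.Name)
                        and isinstance(stmt.value, ast.BinOp)
                        and isinstance(stmt.value.left, ast.Name)
                        and stmt.value.left.id == stmt.targets[0].id):
                    found.append(stmt.targets[0].id)
    return found

print(find_accumulations(SOURCE))   # ['total'] -- an accumulation cliche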

------------------------------

End of AIList Digest
********************

∂01-Mar-88  0200	LAWS@KL.SRI.COM 	AIList V6 #43 - Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Mar 88  02:00:00 PST
Date: Mon 29 Feb 1988 22:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #43 - Conferences
To: AIList@SRI.COM


AIList Digest            Tuesday, 1 Mar 1988       Volume 6 : Issue 43

Today's Topics:
  Conferences - Volunteers needed for AAAI-88 &
    3rd Rocky Mountain Conf. on AI &
    1st Int. Symp. on AI &
    Illinois Decision Making Workshop &
    Computing and Human Senses (IEEE/SU)

----------------------------------------------------------------------

Date: 28 Feb 88 19:47:45 GMT
From: feifer@locus.ucla.edu
Subject: Volunteers needed for AAAI-88


ANNOUNCEMENT:  Student Volunteers Needed for AAAI-88

AAAI-88 will be held August 20-26, 1988 in beautiful Minneapolis,
Minnesota.  Student volunteers are needed to help with local
arrangements and staffing of the conference.  To be eligible for
a Volunteer position, an individual must be an undergraduate or
graduate student in any field at any college or university.

This is an excellent opportunity for students to participate
in the conference.   Volunteers receive FREE registration at
AAAI-88, conference proceedings, "STAFF" T-shirt, and are
invited to the volunteer party. More importantly, by
participating as a volunteer, you become more involved and
meet students and researchers with similar interests.

Volunteer responsibilities are varied, including conference
preparation, registration, staffing of sessions and tutorials
and organizational tasks.  Each volunteer will be assigned
twelve (12) hours.

If you are interested in participating in AAAI-88 as a
Student Volunteer, apply by sending the following information:

Name
Electronic Mail Address (for mailing from arpa site)
USMail Address
Telephone Number(s)
Dates Available
Student Affiliation
Advisor's Name

to:

feifer@SEAS.UCLA.EDU

or

Richard Feifer
UCLA
Center for the Study of Evaluation
145 Moore Hall
Los Angeles, California  90024



Thanks, and I hope you join us this year!





Richard Feifer
Student Volunteer Coordinator
AAAI-88 Staff



------------------------------

Date: 29 Feb 88 18:08:46 GMT
From: oahu!feifer@locus.ucla.edu  (Richard G. Feifer)
Subject: Re: Volunteers needed for AAAI-88


>>AAAI-88 will be held August 20-26, 1988 in beautiful Minneapolis,
>>Minnesota.  Student volunteers are needed to help with local
>
>Any chance of travel/housing stipends?
>
>O---------------------------------------------------------------------->
>| Cliff Joslyn, Mad Cybernetician

I am sorry we cannot reimburse for travel.  (It would
be nice if we could.)



-Richard


------------------------------

Date: 23 Feb 88 09:18:00 MDT
From: "Martha Polson" <mcpolson@clipr.colorado.edu>
Reply-to: "Martha Polson" <mcpolson@clipr.colorado.edu>
Subject: Conference - 3rd Rocky Mountain Conf. on AI

RMCAI '88

The Third Annual Rocky Mountain Conference
on
Artificial Intelligence

June 13-15, 1988
Sheraton Denver Tech Center
4900 DTC Parkway
Denver, Colorado  80235

CALL FOR PAPERS

RMCAI '88 is the third annual conference, sponsored in 1988 by US WEST
Advanced Technologies and the Rocky Mountain Society for Artificial
Intelligence.  The purpose of the 1988 Conference is to promote
interaction between industry and universities in this rapidly growing
high-technology region.

Program Committee Chairman: Dr. Yorick Wilks, Director of the Computer
Research Laboratories, New Mexico State University.

Keynote Speaker: Dr. Terry Winograd, Stanford University.

Invited Speaker: Dr. W. Thomas Cathey, University of Colorado.

PROGRAM COMMITTEE

Alexander, James H.     US WEST Advanced Technologies
Bradshaw, Gary  University of Colorado
Burns, Lt. Col. Hugh    AFHRL
Ensor, Robert   AT&T Bell Laboratories
Ferguson, Jay   Carnegie Group
Filman, Robert E.       Intellicorp
Freeman, Edward US WEST Advanced Technologies
Freiling, Michael       Tektronix
Lewis, Clayton  University of Colorado
Kessler, Robert University of Utah
Mozer, Michael  University of Colorado
Rathke, Christian       University of Colorado
Rich, Elaine    MCC

RMCAI '88's Technical Program will include paper presentations of
quality research in AI.  Particular attention will be given to those
papers which reflect significant research results in the following
application areas; however, well-written papers in other topic areas
will also be considered.

AI and Education        Knowledge Acquisition
Automated Reasoning     Knowledge Representation
Cognitive Modeling      Machine Learning
Commonsense Reasoning   Natural Language
Connectionism   Neural Networks
Expert Systems  Robotics
Impacts of AI Technology        User Interfaces

REQUIREMENTS FOR SUBMISSION

Submission Deadline     March 15, 1988
Notification of Acceptance:     April 15, 1988
Camera-ready Copy Due:  May 12, 1988

Authors must submit six (6) complete copies of their papers to the
RMSAI address listed below.  Each submission must include the topic
area of research.  Submitted papers must be original work.  State
whether you are submitting your paper to more than one conference.
Single-spaced type is acceptable; however, papers must be 8-1/2" by
11" and must not exceed 4,000 words.  Dot-matrix type is not recommended
unless truly letter quality.  The title should be included on the
first page, and the bibliography on the last page; those pages
will not count toward the 4,000-word limit.  Papers which do not
fulfill the above requirements will be returned without review.

Each paper will be reviewed by experts in the area specified as topic
of the paper by our National Program Committee.  Accepted papers will
be published in the RMCAI '88 conference Proceedings.  An outstanding
paper will be selected by the Program Committee and the award given
during lunch at the Conference.

Submit papers to:       Dr. Yorick Wilks
        c/o RMSAI
        1616 - 17th Street, Suite M-76
        Denver, Colorado  80202

------------------------------

Date: 25 Feb 88   18:16 EDT
From: PL233270%TECMTYVM.BITNET@CUNYVM.CUNY.EDU
Subject: Conference - 1st Int. Symp. on AI

Date: 25 February 1988, 18:16:14 EDT
From: Teresa Lucio Nieto        Mexico (83) 58 56 49 PL233270 at TECMTYVM
To:   AILIST at SRI.COM

We would like to invite all of the American Association for Artificial
Intelligence members to send papers for the 1st International
Symposium on Artificial Intelligence to be held on October 24-28, 1988
in Monterrey, Mexico at the Instituto Tecnologico y de Estudios
Superiores de Monterrey (ITESM).


               C A L L    F O R    P A P E R S
               -------------------------------

Topics include knowledge representation, knowledge acquisition,
natural language processing, knowledge based systems, inference
engine, machine learning, speech recognition, pattern recognition,
vision and theorem proving.

Send summaries of four to five pages maximum, four copies plus a resume, to
I T E S M, Centro de Investigacion en Informatica,
David Garza Salazar, Sucursal de Correos J, 64849 Monterrey, N.L.,
Mexico.  Phone: (83) 59 57 47, (83) 59 59 43.

Bitnet Address: SIIACII at TECMTYVM
Telex: 0382975 ITESME
Telefax: (83) 58 59 31
AppleLink address: IT0023

------------------------------

Date: 23 Feb 88 15:55:00 GMT
From: haddawy@m.cs.uiuc.edu
Subject: Conference - Illinois Decision Making Workshop


                        CALL FOR PARTICIPATION

       1988 ILLINOIS  INTERDISCIPLINARY  WORKSHOP ON  DECISION  MAKING
      Representation and Use of Knowledge for Decision Making in Human,
                          Mechanized, and Ideal Agents

             Sponsored by the UIUC CogSci/AI Steering Committee

                       Champaign-Urbana, Illinois
                           June 15-17, 1988


PURPOSE
The 1988 Illinois Interdisciplinary Workshop on Decision Making is
intended to bring together researchers working on the problem of
decision making from the fields of Artificial Intelligence,
Philosophy, Psychology, Statistics, and Operations Research.  Since
each area has traditionally stressed different facets of the problem,
researchers in each of the above fields should benefit from an
understanding of the issues addressed and the advances made in
the other fields.  We hope to provide an atmosphere that is both
intensive and informal.

FORMAT
There will be talks by ten invited speakers from the above mentioned
areas.  The current list of speakers includes: P.Cheeseman, J.Cohen,
J.Fox, W.Gale, J.Payne, R.Quinlan, T. Seidenfeld, B.Skyrms, and
C.White.  The talks will be followed by prepared commentaries and open
floor discussion.  Additionally, speakers will participate in small
moderated discussion groups focused intensively on their work.

TOPICS
- The representation, organization and dynamics of the knowledge
  used in decision making.
- Decision making strategies.
- Decisions under constraints (limited rationality).
- Combining normative and descriptive theories.
- The use of domain knowledge to initialize beliefs and preferences.

PARTICIPATION
This workshop will consist of a limited number of active participants,
commentators, and invited speakers.  To be considered for
participation, send a one page summary of your research interests and
publications no later than March 15.  Indicate also if you would like
to deliver either an inter- or intra-disciplinary commentary.
Commentators will receive copies of their assigned papers three weeks
prior to the workshop.  Acceptances will be mailed by April 4.

REGISTRATION
The registration fee is $50 general and $30 for students.  A copy of
the proceedings is included in the registration fee and will be
distributed at the workshop.  A few grants are available to cover most
or all travel, accommodation, and registration expenses.  In order to
be considered for a grant, include a request with your application.

Mail all correspondence to:  L. Rendell, Dept. of Computer
Science, University of Illinois, 1304 W. Springfield Ave., Urbana, IL
61801.

ORGANIZING COMMITTEE
U.Bockenholt, O.Coskunoglu, P.Haddawy, P.Maher, L.Rendell, E.Weber

------------------------------

Date: Fri 26 Feb 88 17:23:49-PST
From: Marcelo Hoffmann <HOFFMANN@KL.SRI.COM>
Subject: Conference - Computing and Human Senses (IEEE/SU)


                COMPUTING AND HUMAN SENSES
(Exploring the future of computing through biological research)

                 A one day technical seminar

        Saturday, March 26, 1988, 9:00 a.m. to 4:30 p.m.

        Stanford University, Terman Engineering Building

Organized by the Committee on AI and Expert Systems of the Santa Clara
         Valley Chapter of the Computer Society of the IEEE

Topics/speakers:

Vision:  Dragutin Petkovic, IBM Almaden Research Center
         Pattern Recognition in Digital Visual Inspection

         H. Keith Nishihara, Schlumberger, Palo Alto Research
         Center, Image Matching in Human and Computer Vision

Hearing: Dick Lyon, Schlumberger, Palo Alto Research,
         "Analog VLSI Hearing Models"

Sensory: Joseph Rosen, Stanford Medical Center and VAHPA
         Nerve Chip - The bionic switchboard

Motor:   Scott Fisher, NASA Ames Research Center
         Man-machine Symbiosis - Telerobotics

Smell:   Walter Freeman, UC Berkeley Physiology-Anatomy Dept.
         Roles of Chaos in Olfactory Processing

Registration fee includes lunch and notes.  Early registration, prior
to March 11, 1988, is strongly urged to ensure there is enough time to
return an acknowledgement and a map showing the room location.  Company
P.O.'s must include payment by check and all registration
information (preferred: fill out and enclose the form below).

For more information call the IEEE Council office at (415)327-6622

  [Contact the author for the registration form.  -- KIL]

------------------------------

End of AIList Digest
********************

∂01-Mar-88  0348	LAWS@KL.SRI.COM 	AIList V6 #44 - Spang Robinson Review, New JETAI Journal  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Mar 88  03:47:48 PST
Date: Mon 29 Feb 1988 22:34-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #44 - Spang Robinson Review, New JETAI Journal
To: AIList@SRI.COM


AIList Digest            Tuesday, 1 Mar 1988       Volume 6 : Issue 44

Today's Topics:
  Review - Spang Robinson V4 N1
  Journal - Experimental and Theoretical Artificial Intelligence

----------------------------------------------------------------------

Date: Sat, 27 Feb 88 21:54:02 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Review - Spang Robinson V4 N1

Spang Robinson Report on Artificial Intelligence, Volume 4, No. 1
January 1988

Lead Article on 1988, the Vendor's Perspective

New Product from Artificial Intelligence Corporation:
  Knowledge Base Management System, IBM mainframe environment with linkage
  to IBM databases, written in C, can be combined with Intellect 400 to
  give natural language capability

Bachman/Re-Engineering
  New product that does reverse engineering of programs, targeted at
  maintenance.  Completed third round of financing after the market crash.

Symbolics gets 60% of its revenues from government and aerospace markets.
Symbolics will come out with a multi-processor, twenty-five times more
powerful than current systems.


DEC is now the leading vendor selling standard machines for AI use.

Units Sold to date for various companies:
  Aion Corporation  550+
  Artificial Intelligence Intellect   550
  Gold Hill Computers
     Goldworks             500+
     Developer Series    1,700+
     Golden Common Lisp  9,000+
     Run-time Licenses  30,000+
     Hummingboards         35-60 boards/month
  Inference Corporation   Art  700+
  Neuron Data, Nexpert and Nexpert Object 1750+
  Sterling Wentworth (Planman financial expert system, Next5 and BusinessPlan
     Insurance Applications 1000

  Symbolics   4000
  Digital Equipment 5000 units for AI
________________________________________________________________________________
Expert Systems for End Users

Human Intellect Sells
  a system to assist in connecting two RS-232 connectors
  a tax consulting expert system to help decide how to recognize revenue
  an expert system to help write a personnel policy for a business

________________________________________________________________________________
Review of Mind Path Technologies' two-videotape instruction system for
new users: it helps determine expert system opportunities and then has
a tutorial with an expert system shell or demo.
Cost is $495 with demo program or $795 with expert system shell included.

________________________________________________________________________________
Aion Corporation has granted McCormick and Dodge rights to use its expert
system shell.

Wisdom Systems sells an object-oriented design logic base with three-dimensional
shapes.  It provides automatic bills of materials and manufacturing
work orders.

Digital Equipment is now selling an enhanced version of OPS5 that
allows interfacing to C and Ada.

Mind Path Technologies has developed an expert system for creating intelligent
electronic forms.

Franz Inc. has signed a contract to implement a Common Lisp on Cray
Supercomputers.

Knowledge Garden is selling a Stock Expert that includes the procedures
of the National Association of Investment Clubs.

------------------------------

Date: Mon, 29 Feb 88 16:54:15 MST
From: yorick%nmsu.csnet@RELAY.CS.NET
Subject: Journal - Experimental and Theoretical Artificial Intelligence

Call for Papers: The Journal of Experimental and Theoretical
                 Artificial Intelligence.

                 Eric Dietrich and Chris Fields, Editors.
                 Computing Research Laboratory
                 Box 30001/3CRL
                 New Mexico State University
                 Las Cruces, NM 88003-0001
                 USA

The Journal of Experimental and Theoretical Artificial Intelligence
(JETAI), a new journal dedicated to the advancement of AI as a
scientific discipline, will be launched by Taylor and Francis, Ltd. in
January, 1989.  We would like to invite researchers in all areas of AI
to submit papers for publication in the first volume.

A statement of the aims and scope of JETAI as well as a statement of
instructions for authors is included below.  JETAI will publish a
broad range of AI research.  We will preferentially publish relatively
short papers, and will strive to maintain a three-month turn-around
time between submission and a publication decision.  Our intent is for
JETAI to provide a forum for active, timely discussion of
experimental, theoretical, and methodological issues in AI research.
Papers should report work of interest to a broad cross-section of the
AI research community.  The clarity with which the theoretical or
methodological motivation of the research is presented, and with which
the results of the research are discussed, will be a principal
criterion by which the appropriateness of a paper for publication in
JETAI will be assessed.

The focus of JETAI on the development of a scientific methodology for
AI will be reflected in its editorial policy.  We are primarily
interested in papers of the following three types: 1) reports of
research in which AI computer programs are employed as experimental
tests of hypotheses about intelligence and cognition, especially if
the results show that a particular hypothesis must be ruled out; 2)
papers describing logical calculi and other mathematical formalisms,
and using these formalisms to formulate hypotheses and design
experiments; and 3) critical discussions of methodological issues in
the foundations of artificial intelligence.

Please feel free to submit either a full paper, a theoretical or
experimental note, or a commentary.  To be considered for inclusion in
the first issue, your contribution should reach us by June 1, 1988.




JETAI Aims and Scope.

The aim of JETAI is to advance scientific research in AI by
providing a public forum for the presentation, evaluation, and
criticism of research results, the discussion of methodological
issues, and the communication of positions, preliminary findings,
and research directions.  Work in all subfields of AI research,
including work on problem solving, perception, learning,
knowledge representation and memory, and neural system modeling
will be within the scope of JETAI.

JETAI's contribution to advancing AI as a scientific discipline will
be threefold.  First, JETAI will, through editorial statements and its
editorial policy, encourage AI research that adopts a scientific,
rather than an engineering methodology.  In particular, JETAI will
publish papers that advance precise and well-formulated computational
theories of particular aspects of intelligence, and papers that report
well-designed experimental tests of such theories, with an emphasis on
those that employ programs as the vehicles with which experiments are
carried out.  Second, JETAI will publish papers reporting research
relevant to the computational understanding of intelligence regardless
of their disciplinary origin; e.g. JETAI will publish papers
describing systems-theoretic or neural-modeling research on
intelligence as well as more conventional, symbolic AI research.
JETAI will, moreover, encourage the submission of papers that attempt
to integrate the results of research carried out in different
disciplinary styles.  Finally, JETAI will provide a forum for the
discussion of foundational and methodological issues in AI research,
and for critical discussions of results and techniques published
either in JETAI or elsewhere in the AI or cognitive science
literature.  Such discussion is especially important in young
sciences, such as AI, that have grown from a multidisciplinary base.

JETAI will not publish papers describing applications of AI methods or
techniques in new domains, except when such applications are of
particular scientific or methodological interest.  All submissions to
JETAI should include explicit statements of the experimental,
theoretical, or methodological interest of the work, and of the issues
that are left unresolved.

JETAI will publish four types of papers: research papers, target
articles with associated commentary, theoretical or critical notes,
and refereed position papers.  The first category includes papers
reporting results of experimental, theoretical, or methodological
research, i.e. the types of papers standardly published in archival
journals.  The second category includes either submitted or invited
papers advancing controversial positions, with associated commentary
from researchers in the relevant fields.  The third category comprises
short papers reporting research results of immediate interest and
critical papers in which particular theoretical or experimental
results or methodological issues are discussed.  The fourth category
comprises papers advancing positions on significant issues, proposing
research directions, or describing preliminary findings that may be
incomplete, but nonetheless of interest to the community.  It is
anticipated that research papers and notes will be published in every
issue, and that a target article with associated commentary will be
published at least once per volume.



Instructions for Authors (1st Volume)

1. The original manuscript and two clear copies should be
submitted to:

        Editor
        Journal of Experimental and Theoretical Artificial Intelligence
        Computing Research Laboratory
        Box 30001/3CRL
        New Mexico State University
        Las Cruces, NM 88003-0001
        USA

All papers will be refereed by at least two external reviewers,
as well as by one of the editors.

2. All papers must be in English.  The entire manuscript should
be typed on one side only of plain paper, either A4 or 8.5 x 11
inch, with double spacing used throughout.

3. The first page of the manuscript should carry the title, the
names, institutional addresses, and institutional telephone
numbers of the authors, and a short title of no more than 50
characters (including spaces) to be used as a running head.  The
second page of the manuscript should carry an abstract of about
200 words.  The remainder of the text should not exceed 30 double
spaced pages, including references but excluding figures and
tables.  All figures and tables must be referred to by number in
the text.

4. An original set of professional quality figures should
accompany the manuscript.  Line drawings may be India ink
originals or glossy prints.  Halftone illustrations must be
submitted as glossy prints.  Illustrations cannot be printed in color.

5. Tables should be typed on separate pages, which should
accompany the text.

6. The text should be written in third person to facilitate blind
reviewing.  The names of the authors or their institutions should
appear only on the title page.

7. The name-date style should be used for all references.  All
authors' names should be included in the reference list.  Journal
names should not be abbreviated.  Inclusive page numbers must be given
for all references to articles in journals, proceedings volumes, or
books.  With the exception of theses or dissertations, unpublished
works should not be included as references.

8. Footnotes may not be used.  Endnotes may be used if necessary;
they should be collected on separate sheets at the end of the
text.

9. Fifty free offprints will be provided to the first author of
each paper.  There will be no page charges.

------------------------------

End of AIList Digest
********************

∂02-Mar-88  0011	LAWS@KL.SRI.COM 	AIList V6 #45 - Logic, RuleC, Methodology, Constraint Languages
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 2 Mar 88  00:11:31 PST
Date: Tue  1 Mar 1988 21:21-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #45 - Logic, RuleC, Methodology, Constraint Languages
To: AIList@SRI.COM


AIList Digest           Wednesday, 2 Mar 1988      Volume 6 : Issue 45

Today's Topics:
  Query - Chinese Room Simulation Time,
  Logic - Modal Logic and AI References,
  Expert Systems - RuleC Imbeddable Inference System,
  Methodology - Approaches to AI,
  AI Tools - Representing Uncertainty,
  Opinion - Nanotechnology,
  AI Tools - Constraint Satisfaction Programming

----------------------------------------------------------------------

Date: Mon, 29 Feb 88 12:19:12 GMT
From: "G. Joly" (Birkbeck) <gjoly@NSS.Cs.Ucl.AC.UK>
Subject: Yet another Visit to the Chinese Room

``Thinking Machines'' was the title of a recent Horizon programme on BBC
television, featuring Hubert Dreyfus, Marvin Minsky and John Searle.
There was a demonstration of the Chinese Room with two Chinese actors
and an English (only) speaking person in the room.  Searle asserted
that ``the room'' could not speak Chinese, since the operator inside
had no knowledge of written Chinese; he was merely manipulating
symbols (as computers do).

But in terms of the Turing test, the room spoke Chinese, since it
satisfied the basic ideas of the test. Agreed that the operator could
not speak the language, but the language was spoken by the (language
translation?) program he was following.

The image was amusing. Does anybody have a ballpark figure for the
time needed to run such a program ``by hand''? More or less than the
age of the universe?

Gordon Joly.
gcj@maths.qmc.ac.uk
gjoly@nss.cs.ucl.ac.uk

------------------------------

Date: 27 Feb 88 07:11:15 GMT
From: mnetor!utgpu!kurfurst@uunet.uu.net  (Thomas Kurfurst)
Subject: Modal Logic and AI -- References Needed


I am seeking references to seminal works relating modal logic to artificial
intelligence research, especially more theoretical (philosophical)
papers rather than applications per se.

Any and all pointers will be greatly appreciated - I am having trouble
tracking these down myself. Thanks in advance.



--

________

Thomas Kurfurst      kurfurst@gpu.utcs.toronto.edu (CSnet,UUCP,Bitnet)
205 Wineva Road      kurfurst@gpu.utcs.toronto.cdn (EANeX.400)
Toronto, Ontario     {decvax,ihnp4,utcsri,{allegra,linus}!utzoo}!utcs!kurfurst
CANADA M4E 2T5       kurfurst%gpu.utcs.toronto.edu@relay.cs.net (CSnet)
(416) 699-5738

________

------------------------------

Date: 27 Feb 88 17:29:48 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Re: Modal Logic and AI -- References Needed

In article <1988Feb27.021115.11206@gpu.utcs.toronto.edu>
kurfurst@gpu.utcs.toronto.edu (Thomas Kurfurst) writes:
>I am seeking references to seminal works relating modal logic to artifical
>intelligence research, especially more theoretical (philosophical)
>papers rather than applications per se.

I am currently researching the application of various kinds of
"alternative" logics to AI.  I, also, would be interested in information
about modal logic in this context, but also for multi-valued and fuzzy
logics.

O---------------------------------------------------------------------->
| Cliff Joslyn, Mad Cybernetician
| Systems Science Department, SUNY Binghamton, Binghamton, NY
| vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 29 Feb 88 14:42:52 GMT
From: sunybcs!rapaport@AMES.ARC.NASA.GOV  (William J. Rapaport)
Subject: Re: Modal Logic and AI -- References Needed

In article <1988Feb27.021115.11206@gpu.utcs.toronto.edu>
kurfurst@gpu.utcs.toronto.edu (Thomas Kurfurst) writes:
>
>I am seeking references to seminal works relating modal logic to artifical
>intelligence research, especially more theoretical (philosophical)
>papers rather than applications per se.

Depends, of course, on how broad you intend "modal" to cover, but here
are a few starting points:

S. C. Shapiro (ed.), Encyclopedia of AI (John Wiley, 1987):
    -   articles on Modal Logic, Belief Systems

J. Y. Halpern (ed.), Theoretical Aspects of Reasoning About Knowledge
(Los Altos, CA:  Morgan Kaufmann)

and, not to be shy, my own work on belief representation is rather
philosophical:

Rapaport, William J. (1986), ``Logical Foundations for Belief Representation,''
Cognitive Science 10:  371-422.
                                        William J. Rapaport
                                        Assistant Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {ames,boulder,decvax,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||

------------------------------

Date: Sun, 28 Feb 88 14:23:41+1200
From: werner%aucsv.aukuni.ac.nz@RELAY.CS.NET
Subject: Re: seeking imbeddable inference system


In article <12371202150.13.KARNICKY@KL.SRI.COM>, KARNICKY@KL.SRI.COM
(Joe Karnicky) writes:
>
>       A friend of mine creates programs and devices to enable severely
> handicapped individuals to interact with computers ...
>       Now, to implement this economically what we would like to
> have is a simple, cheap ($100), IMBEDDABLE inference system.

What you would like to have is RuleC. RuleC is a new language developed
at the Technical University Vienna. It adds a forward-chaining production
system to standard C. The system is totally flexible - you can write your
own conflict resolution strategies, modify the interpreter cycle itself,
have structured facts in the database which may contain components of
arbitrary C data types (pointers(!), arrays, structs, ints, ...).
The execution of a RuleC program starts procedurally with invocation
of the main function. At any point you may start the production system
by calling the function 'run'. This function returns when no more rules
are applicable, at which point procedural processing resumes.
The condition part of a rule allows complex patterns, including calls
to user-written comparison functions; the action part looks like an
ordinary C block - you can define local variables and have arbitrary
C statements here! For addition, deletion, and modification of working
memory elements, three new statements are provided.

The compiler works as a preprocessor, like yacc. The production system
is based on the RETE-match.
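
I have not seen RuleC's actual syntax, so the following is only a
generic Python sketch of the recognize-act cycle such a system embeds:
match rule conditions against working memory, pick one applicable rule
(trivial conflict resolution here), fire it, and repeat until nothing
new applies.  The facts and rules are invented for the example.

facts = {("temperature", 39), ("patient", "fred")}

rules = [
    # (name, condition on the fact set, facts the action adds)
    ("fever",    lambda f: any(k == "temperature" and v > 38 for k, v in f),
                 {("has", "fever")}),
    ("medicate", lambda f: ("has", "fever") in f,
                 {("give", "antipyretic")}),
]

def run(facts, rules):
    """Forward chaining: fire applicable rules until none adds anything new."""
    while True:
        applicable = [(name, add) for name, cond, add in rules
                      if cond(facts) and not add <= facts]
        if not applicable:
            return facts
        name, additions = applicable[0]    # trivial conflict resolution
        facts = facts | additions

print(sorted(run(facts, rules)))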

If you want any further info or the software, write to:

Herbert Groiss,
Institut fuer Praktische Informatik
Technische Universitaet Wien

Resselgasse 3/180D
A-1040 Vienna
AUSTRIA

The only e-mail address I have from him is a PSI one:
        PSI%02322622107040:HERBERT

--------------------------------------------------------------------------
I Werner Staringer                     I University of Auckland          I
I                                      I Department of Computer Science  I
I werner@aucsv.aukuni.ac.nz            I Private Bag                     I
I ..uunet!vuwcomp!cscvaxa!aucsv!werner I Auckland                        I
I PSI%0530197000073::CSC_STAR          I New Zealand                     I

------------------------------

Date: Sun, 28 Feb 88 16:27:09 PST
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Approaches to AI

     McCarthy has recently described two paths to artificial intelligence.
But his two, while the most active, are not the only ones in which substantial
work is underway.  A more general taxonomy might be outlined as follows.

     1.  "Good, old fashioned AI".  This is the line of development that
         includes LISP, GPS, the Blocks World, automatic theorem proving,
         and expert systems.  The major thrust of this line of work is
         to model the world using formalisms related to mathematical logic.

     2.  Neural networks.  This line begins with perceptrons and continues
         through neural networks to connectionism.  The major thrust of
         this line is the development of massively parallel self-organizing
         systems.

     3.  Engineered artificial life.  This bottom-up approach begins with
         such efforts as the Hopkins Beast, continues through the early
         MIT eye-hand coordination work, and continues today with Brooks'
         artificial insects and much of Moravec's robotics work.  The
         major thrust here is the construction of robots that
         function in the real world, using whatever technology seems
         appropriate.

     4.  Study and replication of the detailed structure of biological
         intelligence, without necessarily understanding how it works.
         Drexler proposes this approach, which is primarily in the discussion
         phase at this point.

It is important not to confuse #2 and #4.

                                        John Nagle

------------------------------

Date: 29 Feb 88 19:13:18 GMT
From: Paul Creelman <creelman%dalcsug.uucp@RELAY.CS.NET>
Subject: Uncertainty and FUZZY LOGIC VS PROBABILITY



   There appears to be some discussion about uncertainty by Eric Neufeld
   and others. According to Spiegelhalter, the use of probability for
   representing uncertainty in expert systems is the wrong method. He says
   it is inappropriate because uncertainty in knowledge does not match the
   chance mechanism of an observable event. It is unnecessary since no
   meaning need be attached to the numbers; instead, the rank order of
   hypotheses is often all that matters in an expert system. Furthermore,
   probability is somewhat impractical since it requires too many estimates
   of prior probabilities, fails to distinguish ignorance from uncertainty,
   and fails to provide an explanation of conclusions. I must agree. Down
   with probability!

   Surely what is needed is a simplified version of Shafer's evidence
   theory, which deals with all possible subsets of the possible variable
   values, the frame of discernment. A number is associated with each
   subset which measures the certainty that the actual variable value is
   in that subset. Suppose we coarsen the uncertainty measure by reducing
   the number of subsets specified. While providing a measure of certainty
   for all subsets of values may be impractical, a system that uses only a
   limited number of these subsets could be very useful. If only there
   were a way of consistently updating certainty for such subsets!

   I wonder if additivity of the certainty measure is necessary. Perhaps a
   condition like c(B) > c(C) -> c(A+B) > c(A+C), where A, B, and C are
   disjoint subsets and + is set union, would suffice. Of course, it is
   desirable to allow A, B, and C to have common elements as well. I hope
   this will stimulate discussion. For references, see
   William Gale, ed., Artificial Intelligence and Statistics, Addison-Wesley
   Publishing Company, 1986.

   L. N. Kanal and J. F. Lemmer, eds., Uncertainty in Artificial Intelligence,
   North-Holland, 1986.
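
For readers who want the Shafer machinery being appealed to in runnable
form, here is a minimal Python sketch (the standard belief-function
definitions with an invented example; the coarsened, non-additive
measure proposed above is not implemented): masses over subsets of a
frame of discernment, belief, plausibility, and Dempster's rule of
combination.

FRAME = frozenset({'flu', 'cold', 'allergy'})   # frame of discernment

def belief(mass, subset):
    """Total mass committed to subsets of `subset`."""
    return sum(m for focal, m in mass.items() if focal <= subset)

def plausibility(mass, subset):
    """Total mass not committed against `subset`."""
    return sum(m for focal, m in mass.items() if focal & subset)

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize away conflict."""
    raw, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in raw.items()}

# One source mostly blames the flu; the other merely rules out allergy.
m1 = {frozenset({'flu'}): 0.6, FRAME: 0.4}
m2 = {frozenset({'flu', 'cold'}): 0.7, FRAME: 0.3}
m = combine(m1, m2)
print(round(belief(m, frozenset({'flu'})), 3),
      round(plausibility(m, frozenset({'flu'})), 3))   # ~0.6 and ~1.0
# The gap between belief and plausibility is exactly the ignorance that a
# single probability number would hide.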


   Paul Creelman
   student
   Dalhousie University



------------------------------

Date: Mon, 29 Feb 88 11:53:37 HNE
From: Spencer Star <STAR%LAVALVM1.BITNET@CUNYVM.CUNY.EDU>
Subject: Re: AIList V6 #41 - Supercomputers, Nonotechnology

I just got through reading Eve Lewis' article on Nanotechnology.  It
seems to me that interested people ought to send a donation to the
Committee for the Scientific Investigation of Claims of the
Paranormal (CSICOP) to investigate these claims about creating a
nano computer.  It would make an interesting article in The
Skeptical Inquirer.  The current issue has articles such as "The Brain
and Consciousness: Implications for Psi Phenomena" and "Fantasizing
Under Hypnosis: Some Experimental Evidence".  I think something like
"The Brain and Nanotechnology: Fantasizing Under the Spell of AI"
might be an appropriate title.
   One of the problems that AI seems to have is that people outside
of the AI research community have trouble distinguishing between
AI research and AI speculation.  This becomes especially problematic
when the same person engages in both activities.
   Actually I must admit that one of the reasons I read AI-List is for
the interesting speculations on the future of AI.  But I wonder if the
people involved in discussions about nanotechnology see it as pure
speculation or as legitimate research.  If the latter, how do they
come to that conclusion?
                             Spencer Star
                 Arpanet: star%lavalvm1.bitnet@vma.cc.cmu.edu
                 Bitnet: star@lavalvm1

------------------------------

Date: 25 Feb 88 18:48:37 GMT
From: garry@tcgould.tn.cornell.edu  (Garry Wiegand)
Subject: Re: constraint satisfaction programming

A week ago I asked about "constraint-based languages", and since then a
number of people have replied. My thanks to you all - your notes have
been a considerable help, and have led me to a lot of good work.

A summary follows...

******************************************************************************
** From: rich@devvax.Jpl.Nasa.Gov (Richard Pettit)
** Organization: Jet Propulsion Laboratory, Pasadena, CA.

    See the next-to-last issue of AI Expert (the January edition, I think).
    They have at least one article on constraint languages in it.  No doubt
    there will be many more to follow.

    Rich

[There's another popular article too - in Byte, last September or so. GW]

******************************************************************************
** From: quiroz@cs.rochester.edu

    Take a look at ICCP'87.  Prof. Baldwin and I have a paper there (p.
    389) on a parallel constraint-based language.  There is a more
    extensive TR (number 208, "The Design of the Consul Programming
    Language") you might like to order by sending a message to Ms. Peggy
    Meeker (meeker@cs.rochester.edu).

    For more details, the person to contact is certainly Prof. Doug
    Baldwin (baldwin@cs.rochester.edu), who is conducting research on
    general-purpose constraint languages.

    Good luck with your research!
    Cesar
    --------
    Cesar Augusto  Quiroz Gonzalez

    Department of Computer Science     ...allegra!rochester!quiroz
    University of Rochester            or
    Rochester,  NY 14627               quiroz@cs.rochester.edu

******************************************************************************
** From: jane@CCA.CCA.COM (Jane Eisenstein)
** Organization: Computer Corp. of America, Cambridge, MA

    Last fall, I ran into a nice book entitled "Constraint Programming
    Languages, Their Specification and Generation" by Wm Leler which is
    published by Addison-Wesley Publishing Company.  It "provides an
    introduction to the subject of constraint satisfaction, a survey of
    existing systems, and introduces a new technique that makes
    constraint-satisfaction systems significantly easier to create and
    expand" in a very readable fashion.

    The latter half of the book focuses on the author's general-purpose
    specification language called Bertrand that allows a user to describe a
    constraint-satisfaction system using rules.  The software described is
    available for a "nominal charge" from the author.

******************************************************************************
** From: bnfb@june.cs.washington.edu (Bjorn Freeman-Benson)
** Organization: U of Washington, Computer Science, Seattle

                A few of our Constraint Language References


    [Borning & Duisberg 86] Alan Borning and Robert Duisberg. Constraint-Based
                          Tools for Building User Interfaces. _ACM_
                          _Transactions_on_Graphics_, 5(4), October 1986.
                          ThingLab basics, object definer and Animus with an
                          emphasis on MVCish things.

    [Borning et al. 87]   Alan Borning, Robert Duisberg, Bjorn Freeman-Benson,
                          Axel Kramer, and Michael Woolf. Constraint
                          Hierarchies. In _OOPSLA'87_Conference_Proceedings_,
                          pages 48-60, ACM SIGPLAN, October 1987.

    [Borning 81]          Alan Borning. The Programming Language Aspects of
                          ThingLab, A Constraint-Oriented Simulation
                          Laboratory. _TOPLAS_, 3(4):353-387, Oct 1981.

    The OOPSLA'87 one has a good bibliography...

    Bjorn N. Freeman-Benson


[The work of Prof. Borning's group on a general UIMS is wonderful - very
 much along the lines we've started thinking about. GW]

******************************************************************************
** From: Lindsay Errington <dlerrington%watdragon.waterloo.edu@RELAY.CS.NET>
** Organization: U. of Waterloo, Ontario

    Constraint Logic Programming is currently getting a lot of attention
    at logic programming conferences.

    You might try:

    Jaffar, Joxan and Lassez, Jean-Louis, "Constraint Logic Programming", Proc
    of the 14th ACM Conference on Principles of Programming Languages, Munich,
    January 1987.

    Jaffar, Joxan and Michaylov, Spiro, "Methodology and Implementation of
    a CLP System", Proc of the 4th International Conference on Logic
    Programming, Melbourne Australia, May 1987, pp 196-218, MIT Press.

    Heintze, N.C., Michaylov, S., and Stuckey, P.J., "CLP(R) and Some
    Electrical Engineering Problems", Proc of the 4th International Conference
    on Logic Programming, Melbourne Australia, May 1987, pp 675-703, MIT Press.

    (I suspect that a number of people will send you the same citations)

    Jaffar's work is very interesting since it provides a semantic
    framework (if that makes any sense) for a whole family of constraint
    based languages.

    The bibliographies will point you to other work on constraints and logic
    programming.

    I hope this helps.

    Lindsay

******************************************************************************
** From: uw-beaver!ssc-vax!dickey%cornell.UUCP@tcgould.TN.CORNELL.EDU
(Frederick J Dickey)
** Subject: Re: "Constraint-based" languages

    A book has been published recently called (I think) "constraint-
    based languages." The author is William Leler or Leder (sorry, I
    don't have it here at work with me so I can't give you a more
    accurate reference).

******************************************************************************
** From: mcvax!cwi.nl!lambert@uunet.UU.NET (Lambert Meertens)
** Organization: CWI, Amsterdam

    Here is a reference to a book that I haven't had an opportunity to look
    into yet since it is still being processed as a new acquisition by our
    library:

      W. Leler (1988).
      Constraint programming languages -- their specification and generation.
      Addison-Wesley series in computer science.
      Reading (MA), [etc.].

    I would be interested in hearing about further references you have or may
    receive, and in particular in relation to UIMS.

    --Lambert Meertens, CWI, Amsterdam; lambert@cwi.nl

******************************************************************************

I've also heard that at least some Prologs understand how to do arithmetic,
and that there's a commercial product called TK!Solver which does interesting
things. I haven't seen these myself yet.

Constraint languages seem to be very much still in their infancy. Lots of
room for some good work (hint hint!)  - I hope we'll be able to contribute
some too. Thanks again -

garry wiegand   (garry@oak.cadif.cornell.edu - ARPA)
                (garry@crnlthry - BITNET)

------------------------------

End of AIList Digest
********************

∂04-Mar-88  0922	LAWS@KL.SRI.COM 	AIList V6 #46 - Queries
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 4 Mar 88  09:22:15 PST
Date: Fri  4 Mar 1988 00:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #46 - Queries
To: AIList@SRI.COM


AIList Digest             Friday, 4 Mar 1988       Volume 6 : Issue 46

Today's Topics:
  Queries - Chemical Reasoning & Connectionist Simulator &
    AAAI Address & Journal and Seminar Information &
    Contrasting Human and Computer Learning &
    Neural Networks and Vision & Sequence-Comparison Algorithms &
    Most-Parallel Planning,
  Funding - Robotics Application

----------------------------------------------------------------------

Date: Sun, 28 Feb 88 00:19:08 EST
From: Raul.Valdes-Perez@B.GP.CS.CMU.EDU
Subject: Request for information

I am involved with a new project here at CMU that will
examine scientific theory formation and testing in an
underdeveloped branch of synthetic chemistry.  The first
stage involves both experiment planning and modification
to a current qualitative theory based on the experimental
result.

We are very interested in first becoming acquainted with
any cognitive-psychological study of chemical reasoning,
preferably that of chemists rather than novices.  Can any
reader point out some references to such work?  Thank you.

Raul Valdes-Perez (valdes@b.gp.cs.cmu.edu)

------------------------------

Date: 3 Mar 88 14:11:20 GMT
From: jem97@leah.albany.edu (Jim Mower)
Subject: Re: AIList V6 #38 - Applications, Neuromorphic Tools,
         Nanotechnology


In article <8802221732.AA14122@beowulf.UCSD.EDU>, heirich@cs (Alan Heirich)
writes:
> * The Rochester Connectionist Simulator, available from the Computer Science
>   Department of the University of Rochester.  A modifiable package, written
>   in C...


Could someone from U of R post an article or send me a note concerning
the intent and capabilities of the Simulator?  Alan
mentioned that the simulator was being offered for a nominal fee.  How
does one go about ordering a copy?  Thanks.

Jim Mower, Dept. of Geography and Planning
University at Albany
jem97@leah.albany.edu  (internet)
jem97@albny1vx         (bitnet)

------------------------------

Date: Tue, 01 Mar 88  09:01 EST
From: WURST%UCONNVM.BITNET@MITVMA.MIT.EDU
Subject: Publications information request...

          I am interested in joining the AAAI, and subscribing to their
     journal, but I cannot find an address or membership/subscription
     information.  I checked our library, but they do not subscribe to
     it.  Would someone please send me the address, and any other
     information (cost, etc...) that I need to join/subscribe?  Thanks.
          Also, I would be interested in knowing about any other journals
     which apply to the field of AI.  Please email any information that
     you have to me directly.  Once again, thank you.

  [The AAAI information can be obtained from
  AAAI-OFFICE@SUMEX.STANFORD.EDU . -- KIL]

----------
Karl R. Wurst
Computer Science and Engineering, U-155       BITNET: WURST@UCONNVM
University of Connecticut                     CSNET : WURST@UCONN
Storrs, CT   06268

"If A = B and B = C, then A = C, except where void or prohibited by law."

------------------------------

Date: Wed, 02 Mar 88  12:40 EST
From: WURST%UCONNVM.BITNET@MITVMA.MIT.EDU
Subject: Seminar information...

          I accidentally deleted a recent AILIST containing an announcement
    about a seminar on temporal logic.  Could someone please send me a
    copy of the announcement?  I believe that the seminar was being given
    at BBN, but I am not sure.  Thank you.

----------
Karl R. Wurst
Computer Science and Engineering, U-155       BITNET: WURST@UCONNVM
University of Connecticut                     CSNET : WURST@UCONN
Storrs, CT   06268

"Progress might have been alright once, but it's gone on too long."
                                         --Ogden Nash

------------------------------

Date: Wed, 2 Mar 88 11:11:07 PST
From: woodson@ernie.Berkeley.EDU (Chas Woodson)
Subject: contrasting human and computer learning

I seem to recall discussion about a course comparing and contrasting learning
in humans and learning in machines.  Can you direct me to that discussion?
Thanks.

------------------------------

Date: 27 Feb 88 23:21:53 GMT
From: parvis@pyr.gatech.edu  (FULLNAME)
Subject: neural networks and vision


I'm looking for some interesting research in the field of neural network
applications in vision, particularly in using neural network simulation to
process images. I am already familiar with the work of Kohonen (face
recognition), Fukushima (neocognitron), Marr (stereo parallax).
Concrete implementations: UCLA PUNNS, PABLO, BOSS and ISCON.

1. Any additional practical approach, such as neural simulation programs for
   image processing, and
2. Any approach to image understanding (object recognition and identification,
   in contrast to feature extraction from images) using a neural network
   model is of interest.

     I appreciate any response. Thanks, Parvis.

----
Parvis Avini
parvis@gitpyr.gatech.edu
U.S.Mail:
Georgia Institute of Technology
P.O. BOX 34331
Atlanta, GA 30332

------------------------------

Date: Thu 3 Mar 88 07:07:04-PST
From: June Bossinger <BOSSINGER@BIONET-20.ARPA>
Subject: Sequence algorithms

Can anyone suggest some good references where parallel computers have
been used to do sequence comparisons (especially comparisons of either
algorithms or machines)?

June Bossinger
bossinger@BIONET-20.arpa

------------------------------

Date: Wed 2 Mar 88 15:49:26-PST
From: Nabil Kartam <KARTAM@Sushi.Stanford.EDU>
Subject: Most parallel plan using AI-techniques

  I am investigating the utility of AI-techniques in the Project Planning
domain.  I have studied in depth most papers about AI-planners such as
STRIPS, NOAH, NONLIN, and SIPE.  During this year, I have been using SIPE
to plan different construction projects.  I would like to share some of my
experience with concerned researchers and hopefully get answers to some
questions.

I discovered that SIPE is unable to produce the most parallel network and
would like to know whether you think NOAH or NONLIN would be able to do so.
For example, suppose we have the following (B1 and B2 are specific beams;
C1, C2, and C3 are specific columns):

The preconditions of building B1 are building C1 and building C2
The preconditions of building B2 are building C2 and building C3

if our goal is to build B1 and B2, then the plan should be:

                 ----                            ----
                | C1 |------------------------->| B1 |
                 ----       /                    ----
                          /
                        /
                 ---- /
                | C2 |\
                 ----  \
                        \
                         \
                 ----     \                      ----
                | C3 |------------------------->| B2 |
                 ----                            ----

However, the plan obtained by SIPE looks like this:

                 ----             ----        ----            ----
                | C1 |---------->| B1 |------| C3 |--------->| B2 |
                 ----       /     ----        ----            ----
                          /
                        /
                 ---- /
                | C2 |
                 ----

Although this plan is logically correct, it is not the most parallel plan.
There is no reason why the column C3 should start after B1.  In fact, C3 should
be scheduled without any preconditions.
I know that David Wilkins is working to solve this problem in his planner, but
I would like to know whether you think this is a hard or simple problem and
whether there is any planner that could produce the most parallel plan.
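
[For what it's worth, here is a minimal Python sketch -- not SIPE, NOAH, or
NONLIN, just an illustration under the assumption that the stated
preconditions are the only ordering constraints -- that derives the most
parallel schedule for the example above: each action waits only for its own
preconditions, so C3 gets no predecessors.]

    # Derive the most parallel partial order directly from preconditions.
    # Actions follow the B1/B2/C1/C2/C3 example above; this is not how
    # SIPE, NOAH, or NONLIN work internally.

    preconditions = {
        "C1": [], "C2": [], "C3": [],
        "B1": ["C1", "C2"],            # building B1 needs columns C1 and C2
        "B2": ["C2", "C3"],            # building B2 needs columns C2 and C3
    }

    def parallel_schedule(precond):
        """Group actions into stages; all actions in a stage can run in parallel."""
        remaining, done, stages = set(precond), set(), []
        while remaining:
            ready = {a for a in remaining if set(precond[a]) <= done}
            if not ready:
                raise ValueError("cyclic preconditions")
            stages.append(sorted(ready))
            done |= ready
            remaining -= ready
        return stages

    print(parallel_schedule(preconditions))
    # [['C1', 'C2', 'C3'], ['B1', 'B2']] -- C3 starts with no preconditions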

Thanks,

Nabil Kartam
Construction Eng.&Mgt
Stanford University
kartam@sushi

------------------------------

Date: 1 Mar 88 18:51:47 GMT
From: sunybcs!rapaport@AMES.ARC.NASA.GOV  (William J. Rapaport)
Subject: Funding for robotics applications

I have been contacted by a fellow in the construction industry who is
seeking to fund research in robotics applications for construction.

Two typical problems are:

        (1) developing a robot that can distribute and finish concrete masonry
        (2) developing a robot that can lay 8" masonry blocks.

The research would be a commercial undertaking, with no military
applications, to result in a product to be distributed primarily in the
southeastern US.

Inquiries may be directed to:

        Mr. John S. Sherman
        9801 Collins Ave.
        Bal Harbour, FL 33154

        (305) 866-4400

or to me at:
                                        William J. Rapaport
                                        Assistant Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {ames,boulder,decvax,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||

------------------------------

End of AIList Digest
********************

∂05-Mar-88  0034	LAWS@KL.SRI.COM 	AIList V6 #47 - Head Count, Image Formats, Chemistry, Law, Logic    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 Mar 88  00:33:06 PST
Date: Fri  4 Mar 1988 21:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #47 - Head Count, Image Formats, Chemistry, Law, Logic
To: AIList@SRI.COM


AIList Digest            Saturday, 5 Mar 1988      Volume 6 : Issue 47

Today's Topics:
  Administrivia - Head Count,
  Queries - CommonLoops & NEXPERT/GOLDWORKS & Arguments Against AI,
  AI Tools - File Formats for Image Data,
  Application - AI in Chemistry & Legal Reasoning,
  Logic - Modal-Logic References,
  Nanotechnology - Picotechnology and Positronic Brains

----------------------------------------------------------------------

Date: Wed, 2 Mar 88 13:24:25 CST
From: Alan Wexelblat <wex%SW.MCC.COM@MCC.COM>
Subject: Re: Head Count Results

There are a number of possible reasons why your survey produced low
numbers.  The simplest is, perhaps, that February is a month with a
comparatively low birthrate.  February births usually indicate
conception in May, which is less common than conception in June.  June
birth rates are also high (I don't know what factors influence this,
though).

Anyway, there are other ways to measure how many readers you have.
Among them is the Arbitron program run by Brian Reid
(reid@decwrl.dec.com).  His data for January 88 estimates that
comp.ai.digest (the form of AILIST on USENET) has approximately 8900
readers.

--Alan Wexelblat
ARPA: WEX@MCC.COM
UUCP: {harvard, gatech, pyramid, &c.}!sally!im4u!milano!wex

The Pentagon has "fire and forget" systems; I have "file and forget."


  [Great; I may have more friends than I thought!  My attempt
  at a head count was apparently worse than useless.  So --
  I mail to approximately 408 individuals on the Arpanet and
  CSNet, plus 134 redistributions and bboards.  With the 400
  Bitnet readers and an unknown number reading bboards or on
  other networks (EARNET, JANET, etc.), there are at least
  12000 AIList readers.  That's on the same order as the number
  of AAAI members. -- KIL]

------------------------------

Date: Fri 4 Mar 88 12:10:56-PST
From: Wei-Han Chu <CHU@KL.SRI.COM>
Subject: CommonLoops

I recently obtained a copy of CommonLoops from Goldhill Computer.
Unfortunately they don't have any documentation on how to use it.
Does anyone have any information on documentation for CommonLoops?
Goldhill said they obtained it from Xerox PARC; however, I was
not able to get a response from the CommonLoops coordinator at
PARC.

------------------------------

Date: Fri, 4 Mar 88 10:34 N
From: MFMISTAL%HMARL5.BITNET@CUNYVM.CUNY.EDU
Subject: Experience with NEXPERT / GOLDWORKS ???

Ken,

Could you put the following request in the AIlist digest?

Thanks

Jan Talmon.

---------------------------------------------------------------------

We are considering purchasing a high-end AI development tool for PCs.
So far, we have considered NEXPERT and GOLDWORKS.

Are there people in netland who have experience with one or both of
these products and are willing to give us their comments? We are
interested in issues such as quality of documentation, speed, ease
of use, flexibility, ease of interfacing with the outside world (Dbase III,
1-2-3, languages etc).

Thanks for your comments. Please mail me directly. When sufficient replies
come in, I will summarize on the list.

Jan L. Talmon
Dept. of Medical Informatics and Statistics
University of Limburg
Maastricht
The Netherlands

EMAIL: MFMISTAL@HMARL5.bitnet

------------------------------

Date: Fri, 04 Mar 88 18:09:12 SET
From: "Adlassnig, Peter" <ADLASSNI%AWIIMC11.BITNET@CUNYVM.CUNY.EDU>
Subject: Question on arguments against AI


Is it true that there are two main arguments against the feasibility
of AI?

1) The philosophical and cognitive science argument
   (e.g., Dreyfus, Searle)

2) The computability and complexity theory argument
   (e.g., Lucas(?))

Could someone point out some relevant literature on the second
point, please?

Thank you in advance.

Klaus-Peter Adlassnig
Department of Medical Computer Science
Garnisongasse 13
A - 1090 Vienna, Austria
email: ADLASSNI at AWIIMC11.BITNET

------------------------------

Date: 1 Mar 88 17:47:00 GMT
From: acf8!schwrtze@nyu.edu  (E. Schwartz group)
Subject: Re: File formats for image data ?

On image file formats:
We are using the HIPS system of Mike Landy and colleagues (NYU Dept. of
Psychology), which was described in Comp. Graphics and Image Processing
(1985?).  The original version of this system consists of a large number of
image processing routines (written in C) and features a well-thought-out
image header, which includes an executable history of the operations
that have been performed on the given file.  The basic system supports
typed images (BYTE, SHORT, INT, FLOAT, DOUBLE, COMPLEX) and
image sequences.  In regard to some of the other requirements mentioned,
we (Computational Neuroscience, NYU Medical Center) have extended the basic
HIPS system to include color maps, non-array image formats (histograms,
sparse images), arbitrary additional data (pixel size, aspect ratio,
and user-defined information that needs to accompany the image), and
interactive window-based tools for the SUN environment.
My impression is that essentially all of the requirements listed are
met by the original HIPS implementation and/or our extensions of it.
We have used this system for a number of years in a large computer-aided
neuroanatomy project and various computational vision applications.
Since the basic system is well constructed, it is easy to extend it
to handle new problems, although we have found that the current
implementation has stabilized and there is little that we need to add.
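
[A generic illustration of the idea of an image header that carries an
executable history of operations -- a sketch in Python with invented field
names, not the actual HIPS header layout.]

    # Sketch: an image header that records each operation applied to the
    # image.  Field names are invented; this is not the HIPS format.
    from dataclasses import dataclass, field

    @dataclass
    class ImageHeader:
        rows: int
        cols: int
        pixel_format: str              # e.g. "BYTE", "SHORT", "FLOAT"
        frames: int = 1                # image sequences
        history: list = field(default_factory=list)

    @dataclass
    class Image:
        header: ImageHeader
        pixels: list

    def apply(image, op_name, op):
        """Apply a per-pixel operation and record it in the header's history."""
        image.pixels = [op(p) for p in image.pixels]
        image.header.history.append(op_name)
        return image

    img = Image(ImageHeader(2, 2, "BYTE"), [10, 20, 30, 40])
    apply(img, "scale *2", lambda p: p * 2)
    apply(img, "threshold 50", lambda p: 255 if p >= 50 else 0)
    print(img.header.history)    # ['scale *2', 'threshold 50']
    print(img.pixels)            # [0, 0, 255, 255]
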
Eric Schwartz
schwrtze@acf8.nyu.edu
Computational Neuroscience Labs
NYU Med. Center

------------------------------

Date: Tue, 26 Jan 88 09:22:22 EST
From: "David K. Johnson, Exxon Research & Engineering Co." <DKJOHNS@ERENJ>
Subject: AI in Chemistry

                 [Excerpted from the IRList Digest.]


"Artificial Intelligence Applications in Chemisty"  American Chemical
Society Symposium Series #306; ISBN 0-8412-0966-9; 190th ACS Meeting,
Chicago, 1985; Editors: T. H. Pierce, B. A. Hohne; American Chemical
Society, Washington, D.C. 1986.

I would also suggest that you check the ACS Abstracts of Papers for
the twice-a-year ACS meetings.  There have been a number of papers and
symposia on AI and Expert Systems in Chemistry--particularly
in the Divisions of Chemical Information and Computers in Chemistry.
The ACS journal Journal of Chemical Information and Computer Sciences
may also be useful.

------------------------------

Date: 1 Mar 88 18:46:26 GMT
From: shs@ramones.rutgers.edu (S. H. Schwartz)
Reply-to: shs@ramones.rutgers.edu (S. H. Schwartz)
Subject: Re: Query: Legal Reasoning in AI


In article <8802221953.AA23125@spp3.SPP> spp3!gpearson (Glen Pearson) writes:
>I heard of a conference on legal reasoning using AI techniques...

The International Conference on Artificial Intelligence and Law
(ICAIL-1) was held at Northeastern University in May 1987.
Proceedings are available from the ACM Order Department, Baltimore MD:
order number 604870.
--
                     *** QUESTION AUTHORITIES ***
                          Rashi, Rif, Maharal...
S. H. Schwartz       (201) 846-9185  shs@paul.rutgers.edu
                     (201) 932-4714  ...rutgers!paul.rutgers.edu!shs

------------------------------

Date: 29 Feb 88 18:15:31 GMT
From: unido!gmdzi!thomas@uunet.UU.NET (Thomas Gordon)
Subject: Re: Query: Legal Reasoning in AI


From article <8802221953.AA23125@spp3.SPP>, by gpearson%sdcsvax@spp3.UUCP
(Glen Pearson):
> I heard of a conference on legal reasoning using AI techniques, but
> I don't remember the time or place.  Can anyone out there give me
> details?
>
> Thanks much,
>
> Glen
> trwrb!spp!spp3!gpearson@ucbvax.berkeley.edu
> 1180 Kern Ave.
> Sunnyvale, CA 94086
> (408) 773-5021


The conference you are probably thinking of is the First International
Conference on Artificial Intelligence and Law, sponsored by the Center
for Law and Computer Science of Northeastern University and held in
Boston in May, 1987.   It was an ACM conference and the proceedings
are available from

        ACM Order Department
        P.O. Box 64145
        Baltimore, MD 21264

order number: 604870.

------------------------------

Date: Wed, 2 Mar 88 13:52:41 EST
From: lsuc!dave@ai.toronto.edu
Reply-to: dave@lsuc.UUCP (David Sherman)
Subject: Re: Query: Legal Reasoning in AI

In article <8802221953.AA23125@spp3.SPP> spp3!gpearson (Glen Pearson) writes:
>I heard of a conference on legal reasoning using AI techniques, but
>I don't remember the time or place.  Can anyone out there give me
>details?

This was the First International Conference on Artificial Intelligence
and Law, held at Northeastern University in Boston, May 1987.
Carole Hafner of Northeastern was the local organizer. The proceedings
were published by the ACM (they were available for the conference).
Quite a range of papers was presented. (Mine was on programming the
Income Tax Act in Prolog.)

There were also two conferences held at the University of
Houston in 1984 and 1985, called the First and Second Annual
Conferences on Law and Technology.  They were organized by
Charles Walter.  The papers from the first conference were
published by West Publishing Company (St. Paul, Minn.) as
"Computing Power and Legal Reasoning", (Charles Walter, ed.),
1985, 871pp.  The papers from the second conference were never
published and can be found, as far as I know, only in the hands
of the people who attended.  (Some of them are labelled "draft -
not for publication or attribution".)

The West publication and the proceedings of the 1987 conference,
between them, are a pretty thorough overview of what's happening
in the world of AI and law.

David Sherman
The Law Society of Upper Canada
Toronto
--
{ uunet!mnetor  pyramid!utai  decvax!utcsri  ihnp4!utzoo } !lsuc!dave

------------------------------

Date: 2 Mar 88 12:18 PST
From: hayes.pa@Xerox.COM
Subject: Re: Funny Logics and AI: references

Try looking through  `Logics for Artificial Intelligence' by Raymond Turner,
Ellis Horwood series in AI, John Wiley 1984.  He does a pretty good survey of a
whole lot of odd logics with bibliographical summaries at the end of each
chapter.

Pat Hayes

------------------------------

Date: Wed,  2 Mar 88 23:09:05 -0500 (EST)
From: Leslie Burkholder <lb0q+@andrew.cmu.edu>
Subject: modal logic references


Try
Introductory Modal Logic, K Konyndyk, U of Notre Dame Press, 1986.

LB

------------------------------

Date: Fri, 04 Mar 88 10:48:30 HNE
From: Spencer Star <STAR%LAVALVM1.BITNET@CUNYVM.CUNY.EDU>
Subject: Re: AIList V6 #45 - Logic, RuleC, Methodology, Constraint

Paul Creelman notes in the Feb 28th AIList that Spiegelhalter says that
probabilities are not the right way to represent uncertainty in expert
systems, and then goes on to list a number of reasons why.  I found this
somewhat surprising since Spiegelhalter uses probabilities in his work
on uncertainty in expert systems.  Where does he make all these nasty
comments?

------------------------------

Date: Tue, 1 Mar 88 17:00:55 EST
From: George McKee <mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: picotechnology & positronic brains

I think the recent speculations about learning about natural
intelligence by simulating the brain in a nanotechnological device
aren't looking carefully enough at the problem.  If the brain is
anything like the immune system, the source of the changes in neural
structure that lead to learning, thought, and behavior are in the
genome itself.  If you recall the biomedical "breakthroughs" of just
a few months ago, you'll know that the source of the immune system's
ability to recognize new information is in "variable sequences" in
part of the genome that codes for antibodies.  Given the way
genetic crossover can transfer enzymatic networks from one system
to another, there's little reason to believe something similar
doesn't work in the nervous system.  It would help explain why
the brain has a higher rate of protein synthesis than almost anywhere
else in the body, and why blocking protein synthesis blocks some
kinds of learning (G. Ungar's work years ago).  You might argue that
the protein synthesis is just making new synapses, but that fails
to explain why more than 40% of the genome is expressed in the brain.
There's a lot more than just modification of synaptic efficiency
(connection weights) going on there.

If modifications in DNA sequences, or for that matter any molecular
structure, play any significant role in the function of the nervous
system in vivo, then those nanotechnologists that think they can do AI
by building complete human brains "in calculo" are working at the wrong
level of detail.  Not only will they have to simulate neurons and synapses,
but they'll have to simulate the molecules that control and form the
structure of those synapses.  What they ought to be doing is worrying about
creating devices functionally equivalent to macromolecules in which the
components have the same stability properties as real molecules, but the
components are smaller, faster, and less noisy.  Any technology that
can get inside of molecules will of course be called "picotechnology."
One way towards this is to use matter composed of other subatomic
particles than electrons, protons, and neutrons.  Looking at the
subject line above, you can see where this leads...

If you were trying to make devices out of positronium, you might
attempt to stabilize them with a ceramic matrix like that of the high-
temperature superconductors, but carrying electron-positron pairs
rather than the superconductors' electron-electron pairs.  It's true that
the platinum-iridium sponge that forms the matrix for the creation and
destruction of positrons in the "positronic brains" that power Asimov's
robot stories contains rare-earth metals just like the high-temp
superconductors, but I think that was just luck, and that Dr.A. chose
that alloy simply because it was shiny and expensive.

There can be no doubt about the success of R.Daneel Olivaw as an AI
artifact, but of course this whole microtechnological exercise ignores
the really interesting part of AI, namely the programming, not to
mention the robopsychology.

        - George McKee
          College of Computer Science
          Northeastern University, Boston 02115
CSnet: mckee@Corwin.CCS.Northeastern.EDU
Phone: (617) 437-5204
Usenet: in New England, it's not unusual to have to say
                "can't get there from here."

p.s. I should add that I happen to be on David Baltimore's side of the
debate whether or not to have a big project to sequence the entire
human genome.  Without going into a long discussion that's really
irrelevant here, I think that a "big science" sequencing project will
lead to a myopic focusing of attention on the mere task of sequencing,
rather than the broader and harder to predict/manage task of
understanding how 1-dimensional sequences become 3-dimensional proteins
and organisms.  Alas, it's characteristic of the adversarial nature of
the political process to end up with only one golden egg in the funding
basket. I wouldn't like to see a genome sequencing project end up like
the Apollo or Space Shuttle projects have, but I'd bet that the
probability of such an outcome is directly proportional to the size of
the project budget.

------------------------------

End of AIList Digest
********************

∂08-Mar-88  0038	LAWS@KL.SRI.COM 	AIList V6 #48 - CommonLoops, OPS5, Constraint Languages, Probability
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 8 Mar 88  00:38:30 PST
Date: Mon  7 Mar 1988 22:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #48 - CommonLoops, OPS5, Constraint Languages, Probability
To: AIList@SRI.COM


AIList Digest            Tuesday, 8 Mar 1988       Volume 6 : Issue 48

Today's Topics:
  Queries - Approaches to AI & Prototypical Knowledge &
    CL for an IBM running VM/CMS HPO 4.2,
  AI Tools - CommonLoops & Image Formats & Student Versions of OPS5 &
    Constraint Languages,
  Theory - Uncertainty and Fuzzy Logic vs Probability

----------------------------------------------------------------------

Date: Sun, 6 Mar 88 16:35:30 EST
From: "Timothy J. Horton" <tjhorton%ai.toronto.edu@RELAY.CS.NET>
Subject: Re: Approaches to AI

jbn@GLACIER.STANFORD.EDU (John B. Nagle) writes (very roughly):
>     McCarthy has recently described two paths to artificial intelligence.
>But his two, while the most active, are not the only ones in which substantial
>work is underway.  A more general taxonomy might be outlined as follows:
>     1.  "Good, old fashioned AI". ... to model the world using formalisms
>         related to mathematical logic.
>     2.  Neural networks. ... development of massively parallel
>         self-organizing systems.
>     3.  Engineered artificial life.  (bottom-up approach) ... construction
>         of robots that function in the real world, using whatever technology
>         seems appropriate.
>     4.  Study and replication of the detailed structure of biological
>         intelligence, without necessarily understanding how it works.

Could anyone fill out this tree a little more?

For instance, what about Woods' work on abstract procedures (not to be
confused with proceduralism)?  He wants something more general than logic
-- not throwing it out, but not accepting it as sufficient or appropriate
for the whole job.

What about anything else?  Surely there are "mathematical" theoreticians
that hope for something more than logic, that ought to be included here?
The latest issue of "Computational Intelligence" had more than a score
of responses to Drew McDermott's critique of pure logicism, and one heck
of a lot of camps got staked out in the process.

------------------------------

Date: 7 Mar 88 05:05:31 GMT
From: daniel@aragorn.cm.deakin.OZ (Daniel Lui)
Reply-to: daniel@aragorn.OZ (Daniel Lui)
Subject: Request for related research work on Prototypical Knowledge

Can anybody tell me what research work has been done on Prototypical
Knowledge?  So far, I have found only two related publications:

Janice S. Aikins, "Prototypical Knowledge for Expert Systems",
   Artificial Intelligence, Vol. 20, 1983.

Henrik Nordin, "Using Prototypes for Knowledge-Based Consultation and Teaching",
   SPIE Vol. 635 Applications of Artificial Intelligence III, 1985.

Thanks in advance.
                                                          daniel@aragorn.oz

Daniel Lui              >> CSNET:daniel@aragorn.oz                       <<
Division of Computing   >> UUCP: seismo!munnari!aragorn.oz!daniel        <<
Deakin University       >>       decvax!mulga!aragorn.oz!daniel          <<
Geelong, Victoria       >> ARPA: munnari!aragorn.oz!daniel@seismo.arpa   <<
Australia  3217         >>       decvax!mulga!aragorn.oz!daniel@Berkeley <<

------------------------------

Date: 23 Feb 88 14:27:20 GMT
From: msu.bitnet!13501jsk@psuvm.bitnet  (John Kern)
Subject: Wanted CL for an IBM running VM/CMS HPO 4.2

I am currently working with LISP/VM on an IBM 3090 and I am not satisfied
with it.  I would appreciate any information on Common LISP available for
this machine.

Sincerely,

John

------------------------------

Date: Sat, 05 Mar 1988 12:51 EST
From: sidney%acorn@oak.lcs.mit.edu
Subject: commonloops


I thought that this reply to Wei-Han Chu's request for information
would be of more general interest:

Portable CommonLoops (PCL) is available for free from Xerox PARC. Even
though Gold Hill includes a copy with our GCLisp 3.0, that is for the
convenience of our customers who would like a copy, and we do not make
any attempt to support it. PCL is evolving rapidly towards the
emerging CLOS standard, with new releases appearing frequently. It is
currently available for at least 9 Lisp implementations that I know
of. The most current source and documentation is available via
anonymous ftp from parcvax.xerox.com.  The file /pub/pcl/get-pcl.text
contains more information. Requests for information about the
CommonLoops mailing list can be sent to commonloops-request@xerox.com.

Disclaimer: While I work for Gold Hill, this message is my own
personal reply to a request for information and is not the official
word from either Gold Hill Computers or anybody at Xerox PARC.

Sidney Markowitz <sidney%acorn@oak.lcs.mit.edu>

------------------------------

Date: 2 Mar 88 19:45:06 GMT
From: hao!noao!stsci!sims@ames.arc.nasa.gov  (Jim Sims)
Subject: Re: File formats for image data ?

I tried to reply but our mailer is brain-dead...

Most astronomers use FITS, which allows multiple data types, multiple groups
and types of data in the same file, and extension to the format. Any
observatory or astronomy program at a university can get you the full scoop.

--
    Jim Sims     Space Telescope Science Institute  Baltimore, MD 21218
    UUCP:  {arizona,decvax,hao,ihnp4}!noao!stsci!sims
    SPAN:  {SCIVAX,KEPLER}::SIMS
    ARPA:  sims@stsci.edu

------------------------------

Date: 3 Mar 88 19:24:19 GMT
From: ptsfa!pacbell!att-ih!alberta!ahmed@AMES.ARC.NASA.GOV  (Ahmed
      Mohammed)
Subject: Student versions of OPS5


Computer Thought Corp. has begun shipping student versions
of the expert system language OPS5.  These are operational on an IBM
PC/XT/AT with one 5.25'' floppy drive.
The system should accomplish for rule-based systems what
Borland's Turbo Prolog has already accomplished for logic
programming: mass distribution and increased public understanding.

The package costs $255 for the graduate student version and
$90 for the undergraduate version.

------------------------------

Date: 07 Mar 88 15:08:30 EST (Mon)
From: sas@bfly-vax.bbn.com
Subject: AIList V6 #45 - Constraint Languages

I just got back from vacation and I cannot remember if I sent this out
before I left or not, but:

I was the lead engineer on the TK!Solver product back in the early
80's so I did a bit of constraint language research.  Although
TK!Solver was (and still is) a software product, the only publications
are the manual itself and a number of software reviews.  TK!Solver let
you enter equations and then use a Newton-Raphson solver to find
solutions.  We tried to generalize the then-popular MBA calculators,
which let you manipulate PV, FV, T, and/or i and, given three out of
four, compute the fourth.  Inflation and interest rates were much
higher then, and MBAs worried about IRR instead of market share.
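
[As an illustration of the "given three of the four, compute the fourth"
style of back solving -- only a sketch of the idea in Python, not TK!Solver's
actual solver or its variable names -- a one-dimensional Newton-Raphson
iteration on the compound-interest relation FV = PV*(1+i)**T can solve for
whichever of PV, FV, i, or T is left unknown.]

    # Sketch only: solve FV = PV * (1 + i)**T for the one unknown variable,
    # using Newton-Raphson with a numerical derivative.

    def solve(known, unknown, guess=1.0, tol=1e-10, steps=100):
        """known holds three of PV, FV, i, T; unknown names the fourth."""
        def residual(x):
            v = dict(known, **{unknown: x})
            return v["PV"] * (1.0 + v["i"]) ** v["T"] - v["FV"]

        x = guess
        for _ in range(steps):
            f = residual(x)
            if abs(f) < tol:
                break
            h = 1e-6 * max(abs(x), 1.0)
            x -= f / ((residual(x + h) - residual(x - h)) / (2 * h))
        return x

    # Given PV, FV, and T, find the per-period rate i that grows 1000
    # into 1500 over 10 periods.
    print(solve({"PV": 1000.0, "FV": 1500.0, "T": 10.0}, "i", guess=0.05))
    # about 0.0414 (4.14% per period)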

TK!Solver was based on some work done by Milos Konopasek at U of
Manchester, Georgia Tech and NCSU.  While I might be able to find some
copies of his papers, you could try looking them up under the name QAS
or the Question Answering System.  He used it to help teach textile
engineering.  TK!Solver made a number of improvements.

If you want to follow this vein you might try finding Bob Light's
MIT Mechanical Engineering thesis done in the mid 80's.  He combined
back solution techniques with more traditional CAD rendering
techniques.

I am not sure if anyone mentioned Sutherland's MIT doctoral thesis,
which is kind of the granddaddy of constraint systems, done in the
early 60's.  The stressed trestle example in Borning's 80-81 ThingLab
paper (PARC, I think) originally appeared in this one.

I'd also recommend Guy Steele's MIT thesis on a discrete-state
constraint system and Gosling's CMU thesis, which uses a constraint
system to compile closed-form algebraic solutions to constraint
problems.

Depending on your interests in constraint systems you could look at
Negroponte's Architecture Machines, an MIT press book which discusses
the kinds of constraint systems actual designers would be interested
in using. There were a number of architects working with constraint
systems which have developed, at least partially, into modern CAD
systems.  You might try looking up some papers by Tim Johnson, Yona
Friedman, or Masanori Nagashima if you are interested in this sort of
thing.  These systems tried to be useful during the early stages of
design, rather than during the final drafting.  Most of these
languages were visual rather than textual, but a solution was found by
adding and manipulating constraints.

If you are interested in the interaction of generative grammatical
constraints and explicit situational constraints (as are often found
in natural structures) I'll mention the SAR design people based in
Eindhoven, though I am only familiar with the MIT contingent including
Habraken, Gerzso and Govela.  (I won't mention the fascinating
politics of this design methodology although there are a number of
good stories).

For more on the interaction of constraint and construction you should
check out the classic On Growth and Form by D'Arcy Thompson, which turns
up now and then.

                                        Still jet lagged
                                            after all these years,
                                                Seth

------------------------------

Date: 7 Mar 88 04:31:24 GMT
From: Eric Neufeld <emneufeld%watdragon.waterloo.edu@RELAY.CS.NET>
Reply-to: Eric Neufeld <emneufeld%watdragon.waterloo.edu@RELAY.CS.NET>
Subject: Re: Uncertainty and FUZZY LOGIC VS PROBABILITY


In article <8802291913.AA13911@dalcsug.UUCP> creelman@dalcsug.UUCP
(Paul Creelman) writes:
>
>Subject: Uncertainty and FUZZY LOGIC VS PROBABILITY
>Newsgroups: comp.ai.digest
>Keywords: UNCERTAINTY, PROBABILITY
>
>   There appears to be some discussion about uncertainty by Eric Neufeld
>   and others. According to Spiegelhalter, the use of probability for
>   representing uncertainty in expert systems is the wrong method.

First favour I would like to ask of the net: Something has happened with
our news feed:  I have seen nothing of this controversy since my original
posting.  Would someone, possibly the moderator, be so kind as to mail me the
controversy?     [Done.  -- KIL]

To continue the discussion:

>   it  is inappropriate because uncertainty in knowledge does not match the
>   chance mechanism of an observable event. It is unnecessary since no meaning
>   must be attached to numbers, but instead the rank order of hypotheses is
>   often all that matters in an expert system.

I have heard Ben-Bassat say that even the rank ordering is unimportant in
*applications*.  Physicians want to know *possible* diagnoses; relative
strengths are what is important.  (My apologies to Dr. Ben-Bassat if this is
incorrect.)  But so what?   That is an opinion.  Suppose rank ordering is
important as Spiegelhalter suggests.  The use of numbers in probability
theory can be viewed merely as a convention.  Nothing precludes the use of
probability as a way of deriving rank orderings.  One of my favourite papers
is Koopman's which eliminates the numbers (in the preamble) with the hope of
restoring the primal intuition of probability.  The numbers are later added
for consistency with the mathematical theory.
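
[A trivial sketch of the point that probability can be used purely as a
source of rank orderings: the priors and likelihoods below are invented, and
only the induced ordering of the hypotheses is reported.]

    # Use Bayes' rule only to rank hypotheses, then discard the numbers.
    priors = {"H1": 0.6, "H2": 0.3, "H3": 0.1}
    likelihood = {"H1": 0.2, "H2": 0.7, "H3": 0.5}   # P(evidence | hypothesis)

    unnormalized = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

    # Report only the rank order, not the posterior values themselves.
    print(sorted(posterior, key=posterior.get, reverse=True))   # ['H2', 'H1', 'H3']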

>Furthermore probability is
>   somewhat impractical since it requires too many estimates of prior
>   probabilities, fails to distinguish ignorance from uncertainty, and
>   fails to provide an explanation of conclusions. I must agree. Down with
>   probability!

You contradict yourself!  Probability tells us that it is not trivial to
distinguish ignorance from uncertainty.  Probability tells us that truth is
*independent* of explanation (i.e., given our knowledge of your symptoms,
the probability of disease X is 0.xx (or rank ordering 3) REGARDLESS OF THE
EXPLANATION or ARGUMENT used to get the diagnosis).  But that is not what
probability is for.  It is used to measure (relative) strength in an
argument.

>   Surely what is needed is a simplified version of Shafer's evidence theory
>   which deals with all possible subsets of the possible variable values, the
>   frame of discernment. A number is associated with each subset which measures
>   the certainty that the actual variable value is in that subset. Suppose we
>   coarsen the uncertainty measure by reducing the number of subsets specified.

I would say surely not!  Professor Kyburg has, more than a year ago, shown
that the theory of Dempster-Shafer is equivalent to an
interval-valued theory of probability, with some added statistical
assumptions.  That is not to say that the D-S model has no useful
applications.  I will take the liberty of paraphrasing Dr. Kyburg, who
concludes his article by saying that there is nothing wrong with variations
on the theory of probability containing such statistical assumptions, but
these assumptions should be in full view, up-front, for all to criticize
constructively.
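
[To illustrate the interval-valued reading -- a standard textbook
construction with invented masses, not Dr. Kyburg's actual derivation -- here
is a small Python sketch: from a basic mass assignment m over subsets of the
frame, Bel(A) sums m(B) over B contained in A, Pl(A) sums m(B) over B
intersecting A, and [Bel(A), Pl(A)] is read as bounds on the probability
of A.]

    # Sketch: belief/plausibility intervals from a Dempster-Shafer basic
    # mass assignment.  The frame and the masses are invented.
    frame = frozenset({"flu", "cold", "allergy"})
    mass = {                              # basic probability assignment m
        frozenset({"flu"}): 0.4,
        frozenset({"flu", "cold"}): 0.3,
        frame: 0.3,                       # mass on the whole frame = ignorance
    }

    def belief(a):
        return sum(m for b, m in mass.items() if b <= a)

    def plausibility(a):
        return sum(m for b, m in mass.items() if b & a)

    for a in (frozenset({"flu"}), frozenset({"cold"})):
        print(sorted(a), round(belief(a), 3), round(plausibility(a), 3))
    # ['flu']  0.4 1.0  -> probability of flu lies in [0.4, 1.0]
    # ['cold'] 0 0.6    -> probability of cold lies in [0.0, 0.6]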
>
>   Paul Creelman



------------------------------

End of AIList Digest
********************

∂10-Mar-88  0213	LAWS@KL.SRI.COM 	AIList V6 #49 - Seminars, LA SIGART, Conferences
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 10 Mar 88  02:13:12 PST
Date: Wed  9 Mar 1988 22:52-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #49 - Seminars, LA SIGART, Conferences
To: AIList@SRI.COM


AIList Digest           Thursday, 10 Mar 1988      Volume 6 : Issue 49

Today's Topics:
  Seminars - Witkin/Kass Vision/Animation Seminar Correction (CMU) &
    Representations for Model-Based Troubleshooting (BBN) &
    Cognition and Metaphor (BBN) &
    The Inadequacy of the Turing Test (SUNY) &
    Panel Discussion on AI/Neural Net Start-ups (Linkabit),
  Meeting - LA SIGART Organizational Meeting,
  Conferences - Hawaii Systems Sciences Correction &
    Westex-88 Expert Systems &
    ICEBOL3 Conference on Symbolic and Logical Computing &
    2nd IFIP Workshop on Intelligent CAD

----------------------------------------------------------------------

Date: Wed, 09 Mar 88 10:38:32 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Witkin/Kass Vision/Animation Seminar Correction (CMU)


A previous message announcing a seminar on "Physically Based Modeling
For Vision and Animation" had some inadvertent errors. The correct
version is :


        SPEAKERS:  Andy Witkin & Michael Kass, Schlumberger Palo Alto
                   Research

        WHEN:     Thursday, March 3, 1988, 3:30-4:30 p.m.

        WHERE:    Wean Hall 5409

( Abstract : same as in the original message )

I am sorry for the mixup.

------------------------------

Date: Thu 3 Mar 88 15:14:36-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Representations for Model-Based Troubleshooting
         (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

            REPRESENTATIONS FOR MODEL BASED TROUBLESHOOTING

                           Walter C. Hamscher
                               MIT AI Lab
                        (hamscher@ht.ai.mit.edu)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                       10:30 am, Tuesday March 15


Model based troubleshooting is fundamentally about modeling.  Its goal
is to apply a general troubleshooting engine to a new domain by
providing only a new domain model, so it is essential to know not only
what relation the model should bear to the real physical device being
diagnosed, but also what features the resulting model should include by
virtue of its intended use in troubleshooting.  Since every model
embodies some abstractions, this is just another way of saying that it's
essential to know the useful abstractions for the task at hand.

This talk presents a methodology for model based troubleshooting of
board-scale digital circuits that emphasizes the importance of
appropriate temporal abstractions for coping with behavioral complexity.
The result is a remarkably coarse representation for digital circuit
behavior that often yields as much diagnostic resolution as traditional
circuit models, in spite of its simplicity.  In the same spirit, the
importance of appropriate representation of circuit organization is
emphasized, and the result is a primary representation of the physical
organization of the circuit, along with a more familiar representation
of functional organization.
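
[For readers unfamiliar with the general idea, here is a toy Python sketch of
single-fault, model-based troubleshooting.  The circuit, the observations,
and the code are generic illustrations only; they do not reflect Hamscher's
representations or his temporal abstractions.]

    # Toy circuit of three adders:  A1: p = x + y,  A2: q = y + z,
    # A3: out = p + q.  A component is a single-fault candidate if assuming
    # it misbehaves makes the predictions consistent with the observations.

    def predict(broken, inputs, observations):
        """Predict signal values; the 'broken' component's output is unknown,
        and observed values stand in for anything that cannot be predicted."""
        v = dict(inputs)

        def run(name, out, *args):
            if broken != name and all(v.get(a) is not None for a in args):
                v[out] = sum(v[a] for a in args)
            else:
                v[out] = observations.get(out)   # unknown unless observed

        run("A1", "p", "x", "y")
        run("A2", "q", "y", "z")
        run("A3", "out", "p", "q")
        return v

    def consistent(v, observations):
        return all(v[s] is None or v[s] == val for s, val in observations.items())

    inputs = {"x": 1, "y": 2, "z": 3}
    observations = {"p": 3, "out": 10}           # p looks right, out is wrong

    candidates = [c for c in ("A1", "A2", "A3")
                  if consistent(predict(c, inputs, observations), observations)]
    print(candidates)   # ['A2', 'A3'] -- A1 alone cannot explain the symptoms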

------------------------------

Date: Mon 7 Mar 88 08:56:59-EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Seminar - Cognition and Metaphor (BBN)


                   BBN Science Development Program
                 Language & Cognition Seminar Series


                      COGNITION AND METAPHOR

                     Professor Bipin Indurkhya
                    Computer Science Department
                         Boston University

                      BBN Laboratories Inc.
                       10 Moulton Street
                Large Conference Room, 2nd Floor


              10:30 a.m., Wednesday, March 9, 1988


Abstract:  In past years a view of cognition has been emerging in which
metaphors play a key role. However, a satisfactory explanation of the
mechanisms underlying metaphors and how they aid cognition is far from
complete.

In particular, earlier theories of metaphors have been unable to account
for how metaphors can "create" new, and sometimes contradictory, perspectives
on the target domain.

In this talk I will address some of the issues related to the role metaphors
play in cognition. I will first lay an algebraic framework for cognition,
and then in this context I will pose the problem of metaphor. Two mechanisms
will be proposed to explain the workings of metaphors. One of these
mechanisms gives rise to what we call "projective metaphors", and it is
shown how projective metaphors can "create" new perspectives and new
ontologies on the target domain. The talk will conclude with a brief
discussion of some further implications of the theory on "Direct Reference
vs. Descriptive Reference", "Is all knowledge metaphorical?", and
"Induction and Analogies", among other things.

------------------------------

Date: Mon, 7 Mar 88 12:35:40 EST
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Seminar - The Inadequacy of the Turing Test (SUNY)


                STATE UNIVERSITY OF NEW YORK AT BUFFALO

                        BUFFALO LOGIC COLLOQUIUM

                           RANDALL R. DIPERT

                        Department of Philosophy
                             SUNY Fredonia

           THE INADEQUACY OF THE TURING TEST AND ALTERNATIVES
                 AS CRITERIA OF MACHINE UNDERSTANDING:
     Reflections on the Logic of the Confirmation of Mental States

In this paper, I address the question of how we would confirm a
machine's, or any entity's, "understanding".  I argue that knowledge of
the internal properties of an entity--as opposed to "external"
properties and relations, such as to a linguistic or social community,
or to abstract entities such as propositions--may not be sufficient for
the justified attribution of understanding.  I also argue that our
knowledge of the internal construction or of the origin of an
artificial system may serve as defeating conditions in the analogical
reasoning that otherwise supports the claim of a system's
understanding.  (That is, the logic of the confirmation of
understanding is itself non-monotonic!)  These issues are discussed
within an analysis of the complex fabric of analogical reasoning in
which, for example, the Turing Test and Searle's Chinese Room
counterexample are merely examples of larger issues.  No previous
contact with the logic of analogy, artificial intelligence, or the
philosophy of mind (other than having one) is assumed.  [Shorter
summary: Will we (ever) be able justifiably to say that an artificial
system has "understanding"?  Probably not.]

                        Tuesday, March 15, 1988
                               4:00 P.M.
                      Fronczak 454, Amherst Campus

    For further information, contact John Corcoran, (716) 636-2438.

------------------------------

Date: 9 Mar 88 16:33:00 EDT
From: "GAIL SLEMON          455-1330" <gslemon@afhrl.arpa>
Reply-to: "GAIL SLEMON          455-1330" <gslemon@afhrl.arpa>
Subject: Seminar - Panel Discussion on AI/Neural Net Start-ups (Linkabit)

          SDSIGART March Meeting

          Thursday, March 17, 1988

Growing an AI/ANS Business: A Panel Discussion on AI/ANS Start-Ups

There are many aspects of starting a new company that are the same
for all types of businesses.  The purpose of this
panel is to discuss the unique aspects of starting an AI/ANS
company, and to do this, we have brought together some of the people
who have started AI/ANS companies recently.

Dr. Hecht-Nielsen worked for several years developing neural networks at TRW
before forming HNC, Hecht-Nielsen Neurocomputer Corporation.  Dan
Greenwood has worked in the defense industry for many years, and was
with Verac, Inc. before starting Netrologic.  Dr. Pamela Coker will
discuss her company, Computer Cognition.  Dr. Burt will discuss his company,
Cogensys.  This panel will discuss why and how
they are entering the AI/ANS business market.

     Location:  M/A-COM Linkabit, 3033 Science Park Road
          (off Torrey Pines Road), San Diego, CA
     Time:  6:30-8:30 p.m., Thursday, March 17, 1988

------------------------------

Date: 7 Mar 88 16:23:20 PST (Monday)
From: Bruce Hamilton <Hamilton.osbuSouth@Xerox.COM>
Reply-to: Hamilton.osbuSouth@Xerox.COM
Subject: Meeting - LA SIGART Organizational Meeting

[I'm posting this for Kim Goldsworthy, who is not on the Internet.  The use of
the first person below refers to Kim, not me.  Note: you don't need to be a
national ACM member to participate, although you probably do to hold office.
--Bruce]

I am planning the first organizational meeting for Sunday afternoon, March 13,
at 2:00 pm at the Pasadena Library, 285 E. Walnut St. (Thomas Brothers map 27
A3).  Please come, or at least contact me prior to then.  If too few people
respond, then I will inform the ACM that there is not enough interest in
ARTIFICIAL INTELLIGENCE to charter a local group.  WE NEED VOLUNTEERS,
VOLUNTEERS, VOLUNTEERS!  To Organize, we will need to name a chairman,
vice-chairman, secretary, treasurer, and program coordinator.

I envision our group meeting once monthly to hear programs on: Symbolics' latest
developments... Poker professional Mike Caro explaining his poker expert that
beats 99% of all poker players... Boeing's use of AI... Texas Instruments' new
Explorer... Hughes' use of Epistemological Engineering... USC's accounting
expert system... etc.

If you care enough to be in on the *very first* Los Angeles SIGART, then tell me
so.

Sincerely,

Mr. Kim Goldsworthy

home 818/280-5644 (evening, and answering machine)

------------------------------

Date: Sun, 6 Mar 88 20:09:36 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Conference - Hawaii Systems Sciences Correction


                  CALL FOR PAPERS AND REFEREES

    HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 22

                   Rapid Prototyping Session

            KAILUA-KONA, HAWAII - JANUARY 3-6, 1989



Correction:

The dates for the Rapid Prototyping Section of HICSS-22 (Hawaii
Software Track, Jan 3-6, 1989) were incorrectly reported.  The
correct ones are:

      A 300-word abstract is due by March 30, 1988
      Feedback to author concerning abstract by April 30, 1988
      Six copies of the manuscript are due by June 6, 1988
      Notification of accepted papers by September 1, 1988
      Accepted manuscripts, camera-ready, are due by October 3, 1988

------------------------------

Date: Mon, 7 Mar 88 16:33:59 PST
From: Harris Sperling <hsperling@nrtc.northrop.com>
Subject: Conference - Westex-88 Expert Systems

                  WESTEX-88
                  CALL FOR PAPERS
                  EXPERT SYSTEMS CONFERENCE

The objective of the third WESTEX conference is to explore the
practical application of expert and knowledge-based systems in
industry.

The program will consist of invited speakers, submitted papers,
and panel sessions.  Other features of the conference will be
tutorials in expert system development, and exhibits of expert
system hardware and software.

WESTEX-88 will be held June 28-30, 1988, in Anaheim, California.

Topics for papers and panel sessions are invited.  Please send
five copies of an abstract and clean review draft paper to:

     Bruce Bullock
     Program Chairman, WESTEX-88
     Teknowledge Federal Systems, Inc.
     501 Marin Street, Suite 214
     Thousand Oaks, CA  91360

For further information, contact the Conference Chairman,
George Friedman, Northrop Corporation, 1840 Century Park East,
Los Angeles, California 90067-2199.  (213) 201-3311.


Topics of interest include but are not limited to:

Aerospace & Commercial Applications

     Avionics
     Command & Control
     Diagnostic & Test
     Logistics
     Training & Tutoring
     Manufacturing
     Banking/Finance
     Management Information Systems


Development & Implementation Issues

     Knowledge Engineering Methodology
     Case Studies in KBS Development
     Systems Integration & Fielding
     Real Time Processing
     Hybrid Expert Systems
     Knowledge Base Verification & Validation
     Expert Systems Transportability
     Measures of Quality and Performance

     Topics of interest also include other disciplines that
     may be applied to problems normally considered to be in
     the Expert Systems domain.


Call for Papers Schedule
Abstract (500 words) and clean review draft paper (min. 3 pgs.)
Deadline:  March 15, 1988
Notification of acceptance:  April 16, 1988
Camera-ready copies due:  May 16, 1988

------------------------------

Date: Fri, 04 Mar 88 09:52:27 -0800
From: Richard Nelson <nelson@ICS.UCI.EDU>
Subject: Conference - ICEBOL3 Conference on Symbolic and Logical
         Computing

Although the preliminary announcement was posted to AIList a
couple of months ago, this new announcement lists featured
speakers, planned session topics, and registration info.

Cheers
Richard

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
                      International Conference
                                  on
                   Symbolic and Logical Computing

Dakota State College                              Madison, South Dakota
                          April 21-22, 1988


The third International Conference on Symbolic and Logical Computing
(ICEBOL3) will present papers and sessions on many aspects of non-numeric
computing: artificial intelligence, analysis and printing of texts,
machine translation, natural language processing, and the use of dangerously
powerful computer languages such as SNOBOL4, SPITBOL, Icon, Prolog, and LISP.
There will be a series of concurrent sessions (some for experienced
computer users and others for interested novices).

Coffee breaks, lunches, social hour, and banquet will provide a series
of opportunities for participants to meet informally and exchange
information.  Sessions will be scheduled for "Birds of a Feather" to
discuss common interests.


KEYNOTE SPEAKER: Paul Abrahams, President, Association for Computing
     Machinery (ACM), Consultant on Programming and Technical Writing.

BANQUET SPEAKER: Robert Dewar, Courant Institute, New York
     University, SPITBOL originator.

OTHER FEATURED SPEAKERS AND PANELISTS:
     Ralph Griswold,  University of Arizona, author of numerous books
          and articles on SNOBOL4 and Icon
     Viktors Berstis, Minnesota SNOBOL4 creator
     Michael Shafto, author of articles on SNOBOL4 and artificial
          intelligence
     Mark Emmer, of Catspaw, Inc., creator of SNOBOL4+


Sessions covering the following topics are planned:

Parsing and grammar analysis          Tutoring Systems
Machine translation                   Data conversion
Style analysis                        Processing and printing
List handling and scheduling             special character sets
Natural language processing           Multi-thread processing
Cryptography                          SNOBOL4 heuristics
Icon programming                      Music analysis


- - - - - - - - - -  REGISTRATION FORM  - - - - - - - - - - - - - - -

                   Dakota State College
                 International Conference
                            on
               Symbolic and Logical Computing
                     April 21-22, 1988
                Madison, South Dakota 57042


Indicate a number for the following:

____  Advanced registration - $125.00   (includes two lunches, coffee
      breaks, banquet, one copy of the proceedings)
      On-site registration - $140.00

____ Additional copies of the proceedings ($20.00 each)

____ Additional banquet tickets ($12.00 each)

____ Shuttle from Sioux Falls airport ($25.00 each roundtrip)
       (Notify us of date and time of arrival & departure)


TOTAL ENCLOSED $_________


     Rental cars are available at the Sioux Falls, SD, airport.


Name __________________________________________________________

College or Firm _______________________________________________

Mailing address _______________________________________________

                _______________________________________________

Electronic mail address _______________________________________

Suggested topic for "Birds of a Feather" section:______________

_______________________________________________________________

Please make your own motel reservations at one of the
following:

Super 8 Motel              Lake Park Motel            DSC Dormitory
(605) 256-6931             (605) 256-3524             (605) 256-5149
Single:  $25.00            Single:  $23.00            Single: $7.50
Double:  $31.00            Double:  $30.00            Double: $10.00

Return this form to     Eric Johnson
                        ICEBOL3
                        114 Beadle Hall
                        Dakota State College
                        Madison, SD 57042

------------------------------

Date: Mon, 07 Mar 88 16:00:41 V
From: b39711%tansei.cc.u-tokyo.junet%utokyo-relay.csnet@RELAY.CS.NET
Subject: Conference - 2nd IFIP Workshop on Intelligent CAD

Here is the Call for Papers for the 2nd IFIP WG 5.2 Workshop on
Intelligent CAD.

Tetsuo Tomiyama (b39711%tansei.cc.u-tokyo.junet@relay.cs.net)

-------------------------------------------



            The Second IFIP W.G. 5.2 Workshop on
                      Intelligent CAD

     19-22 September 1988, University of Cambridge, UK




AIM

This workshop is the second in the series of three IFIP Working Group 5.2
workshops on Intelligent CAD.  In October 1987, the first workshop was
successfully held at MIT, USA, and various concepts about Intelligent CAD
were outlined.  An intelligent CAD system is an environment or a set of
tools to support designers' intellectual activities with built-in
knowledge of design processes and objects.

     The introduction of knowledge engineering is not necessarily the
primary goal for the development of intelligent CAD.  In the first
workshop we found that understanding the nature of design processes and
the representation of design objects in an evolutionary design process
are more important.  A number of interesting works attempting to capture
the semantics of design were presented in subgroup discussions.  The
results of the first workshop will be published by North-Holland by the
summer of 1988.

     Based on the theoretical achievements of the first workshop, this
second workshop aims at outlining specifications for intelligent CAD.
The architectures will be clarified schematically through stimulating
discussions among experts in the field.  In the third workshop, in Tokyo
in 1989, practical applications of intelligent CAD systems based on the
new theories and specifications are expected to appear.


TOPICS

o    Specifications for intelligent CAD systems
o    Architecture of intelligent CAD systems
o    Implementation of intelligent CAD systems
o    Implementation of design knowledge in  intelligent  CAD
     systems


WORKSHOP FORMAT

The number of participants is limited to roughly 40 in order to
stimulate a mutual exchange of opinions.  Participation will therefore
be decided on the basis of submitted position papers or extended
abstracts.  Since this workshop aims at exchanging ideas on an
individual rather than an organizational basis, each participant must be
the first author of her/his own position paper or extended abstract.

     Potential authors are invited to submit 5 copies of a position
paper (or an extended abstract) of 1000 to 2000 words (references and
figures do not count) before May 20, 1988.  Acceptance will be announced
by June 20, and the accepted authors will submit the preprint versions
of full papers by August 20, 1988.  The results of the workshop will be
published by North-Holland; the program committee will select papers for
this book from the submitted full papers after the workshop.

     A  couple  of  subgroups  will  be  formed  during  the
workshop  to discuss specialized topics.  The topics will be
suggested at the opening session based  on  the  reviews  of
position papers.  In addition to this, there will be invited
speakers  from   artificial   intelligence,   computer-aided
design, and design studies.


SCHEDULE

20 May 1988
     Deadline for position papers/extended abstracts

20 June 1988
     Notification of acceptance

20 August 1988
     Deadline for preprint versions

19-22 September
     Workshop


PROGRAM COMMITTEE

H. Yoshikawa    The University of Tokyo
T. Holden       University of Cambridge
F. Kimura       The University of Tokyo
F. Arbab        University of Southern California
A. Bijl         EdCAAD, University of Edinburgh
K. MacCallum    University of Strathclyde
R. Popplestone  University of Massachusetts
H. Suzuki       The University of Tokyo
T. Tomiyama     The University of Tokyo

CORRESPONDENCE

Please send applications to:

        Professor Hiroyuki Yoshikawa
        Department of Precision Machinery Engineering
        The University of Tokyo
        Hongo 7-3-1, Bunkyo-ku, Tokyo 113, Japan
        Tel: +81-3-812-2111 (ext. 6446)
        Telex: 272 2111 FEUT J, Fax: +81-3-812-8849
        Internet: b39711%tansei.cc.u-tokyo.junet@relay.cs.net

------------------------------

End of AIList Digest
********************

∂10-Mar-88  0428	LAWS@KL.SRI.COM 	AIList V6 #50 - Constraints, Neural Nets, Prototypical Knowledge    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 10 Mar 88  04:28:38 PST
Date: Wed  9 Mar 1988 23:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #50 - Constraints, Neural Nets, Prototypical Knowledge
To: AIList@SRI.COM


AIList Digest           Thursday, 10 Mar 1988      Volume 6 : Issue 50

Today's Topics:
  Queries - Auto User Interface Program & Xerox's 1186 Workstation &
    Agricultural Uses of AI & Consultation Paradigms & sci.logic,
  AI Tools - Constraint Languages and TK!Solver & Student Versions of OPS5,
  Neuromorphics - Rochester Connectionist Simulator Update &
    Neural Networks and Vision,
  Psychology - Language-Free Thinking,
  References - Prototypical Knowledge,
  Nanotechnology - Picotechnology

----------------------------------------------------------------------

Date: Mon, 7 Mar 88 17:55:00 PST
From: TAYLOR%PLU@ames-io.ARPA
Subject: Request for Reference on Auto User Interface Pgm

Sometime during the summer of 1987, there was a seminar given at either
Stanford or SRI by someone maybe named Jack London concerning his
PhD thesis on the subject of automatic generation of user interface
specifications and/or code based on the specifications/contents of
a given database.  Obviously this is vague - it is second-hand
information.  Can someone give me an accurate reference to the
work presented in the seminar?

Thanks a lot - Will Taylor   taylor%plu@io.arc.nasa.gov

------------------------------

Date: 9 Mar 88 15:08:50 GMT
From: bgsuvax!maner@TUT.CIS.OHIO-STATE.EDU  (Walter Maner)
Subject: Future of Xerox's 1186 Workstation

I have heard rumors that the 1186 will go out of production soon.  Could
anyone confirm or deny?  If the 1186 isn't approaching the end of its
product life, we plan to acquire some.  We could also benefit from user
reports on the functionality of the 1186 for serious AI development work.


--
CSNet   : maner@research1.bgsu.edu               | CS Dept    419/372-2337
UUCP    : {cbatt,cbosgd}!osu-cis!bgsuvax!maner   | BGSU
Generic : maner%research1.bgsu.edu@relay.cs.net  | Bowling Green, OH 43403
Opinion : If you are married, you deserve a MARRIAGE ENCOUNTER weekend!

------------------------------

Date: 9 Mar 88 16:23 EST
From: Rockwell.henr@Xerox.COM
Subject: Agricultural Uses of AI

I'm interested in finding any available references to agricultural uses of AI.
I'm aware of a COMAX paper of a few years ago but nothing else. Thanks in
advance.

Ronald Rockwell

Rockwell.Henr@XEROX.COM

------------------------------

Date: 8 Mar 88 16:54:00 GMT
From: pacbell!att-ih!occrsh!occrsh.ATT.COM!tas@AMES.ARC.NASA.GOV
Subject: Consultation Paradigms: info req


        I am looking for information on Consultation Paradigms as they
        pertain to expert systems.  Any references would be greatly
        appreciated.

                                Thanks...
                                                Tom Sonby
                                                AT&T
                                                Oklahoma City, OK.

                                                ...ihnp4!3b2fst!3b2tas

------------------------------

Date: 7 Mar 88 18:25:16 GMT
From: mcvax!ukc!its63b!hwcs!jack@uunet.uu.net  (Jack Campin)
Subject: call for votes for sci.logic


This is a call for votes for an unmoderated newsgroup "sci.logic".

Logic is a subject that sprawls across a number of newsgroups: these include
        comp.os.research - logics for specifying concurrent systems
        comp.ai - fuzzy and nonmonotonic logics
        comp.databases - dependency theory, deductive databases (I gave up on
                         this newsgroup long ago as it all seemed to be about
                         prices for UNIFY and bugs in INGRES, but it may have
                         got more interesting since then for all I know)
        comp.lang.prolog - not that Prolog has much to do with logic, but some
                           of its adherents think it does, and some of its
                           descendants do
        comp.theory - denotational semantics, combinatory logic, type theory ...
        sci.math - which has had a number of discussions of set theory
        sci.philosophy.tech - ditto
        sci.lang - Montague grammar and its successors
        sci.physics - quantum logic
and there are a number of topics that don't fit happily into any of these
(theorem provers for intensional logics? free logics? strict finitism?).

Enough! Time for an end to the diaspora! Towards a National Home for logicians!

I made a preliminary enquiry at the end of another posting to see if anyone was
interested in a logic newsgroup. I got enough replies (mostly from the UK and
California, as I expected) to suggest that we might have a quorum if I made the
request a bit louder.
Don't reply again if you've already mailed me successfully.

I shouldn't have to say this, but:
        MAIL me your votes. Don't post them as news items!
        Follow up to news.groups ONLY! NO CROSSPOSTING!
--
ARPA: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk
JANET:jack@uk.ac.glasgow.cs       USENET: ...mcvax!ukc!cs.glasgow.ac.uk!jack
Mail: Jack Campin, Computing Science Department, University of Glasgow,
      17 Lilybank Gardens, Glasgow G12 8QQ, Scotland (041 339 8855 x 6045)

------------------------------

Date: Tue, 8 Mar 88 13:02:16 pst
From: Walter Underwood <wunder%hpcerb@ce.hp.com>
Subject: AIList V6 #48 - Constraint Languages & TK!Solver

Wm Leler's book discusses TK!Solver's algorithms and deficiencies,
and uses the same constraint program as an example for his language,
Bertrand.  He also discusses the systems mentioned in other articles
here: Sketchpad, ThingLab, IDEAL, Steele's work, and others that I
can't remember right now.

The main contribution of Wm's work is a firm theoretical foundation
for constraint languages, plus the implementation work to make them
run fast.  He shows Turing equivalence, shows how to add constraints
to enforce datatypes, etc.

Bertrand is implemented in Scheme.
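
As a rough illustration of the kind of multidirectional relation such
constraint languages manage, here is a toy local-propagation solver in
Python (a sketch of my own under stated assumptions; it is neither
Bertrand's term-rewriting engine nor TK!Solver's actual algorithm):

    # A minimal constraint net: each constraint is a relation among variables,
    # and any variable can be deduced once the others in its constraint are known.
    def propagate(constraints, known):
        # constraints: list of ('sum', a, b, c) meaning a + b = c,
        #              or      ('prod', a, b, c) meaning a * b = c.
        # known: dict mapping variable names to numbers; updated in place.
        changed = True
        while changed:
            changed = False
            for op, a, b, c in constraints:
                va, vb, vc = known.get(a), known.get(b), known.get(c)
                new = None
                if op == 'sum':                                   # a + b = c
                    if va is not None and vb is not None and vc is None:
                        new = (c, va + vb)
                    elif vc is not None and va is not None and vb is None:
                        new = (b, vc - va)
                    elif vc is not None and vb is not None and va is None:
                        new = (a, vc - vb)
                elif op == 'prod':                                # a * b = c
                    if va is not None and vb is not None and vc is None:
                        new = (c, va * vb)
                    elif vc is not None and va is not None and vb is None and va != 0:
                        new = (b, vc / va)
                    elif vc is not None and vb is not None and va is None and vb != 0:
                        new = (a, vc / vb)
                if new is not None:
                    known[new[0]] = new[1]
                    changed = True
        return known

    # F = 1.8*C + 32 expressed relationally, then run "backwards" from F to C:
    net = [('prod', 'k', 'C', 't'), ('sum', 't', 'b', 'F')]
    print(propagate(net, {'k': 1.8, 'b': 32, 'F': 212}))   # includes 'C': 100.0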

Walter Underwood

------------------------------

Date: 8 Mar 88 04:45:07 GMT
From: ubc-vision!ubc-cs!alberta!goebel@beaver.cs.washington.edu 
      (Randy Goebel)
Subject: Re: Student versions of OPS5

It's nice that software is made available for students at reduced prices,
but I'm a little worried that the OPS5 system would do what Turbo-Prolog
does...Turbo-Prolog is just plain bad, and anyone who uses Turbo-Prolog
as a method of understanding Prolog is doing damage to their logic
skills.  OPS5 is procedural, so maybe you will all be better off?

------------------------------

Date: Tue, 08 Mar 88 17:37:37 -0500
From: goddard@cs.rochester.edu
Subject: Rochester Connectionist Simulator update

(my apologies if this message is sent twice)

The Rochester Connectionist Simulator is available from:

        Rose Peet
        Department of Computer Science
        University of Rochester
        Rochester, NY 14627.

        rose@cs.rochester.edu
        ...!seismo!rochester!rose

There is a licence to sign, and a distribution fee.  Currently
distribution is via tape only, anonymous ftp may become available at
some indeterminate point in the future.  The package is written in C,
runs under UNIX, and has a graphics package which runs under Suntools.
It is currently in use at several dozen sites and is described in the
February issue of the CACM.  The simulation system is highly general
and flexible, placing no restrictions on network architecture, unit
activation functions and data, or learning algorithms.

A new version, 4.1, will be released shortly.  Version 4.1 includes
facilities to selectively delete links and sites, with garbage
collection; capability for integration with Kyoto Common Lisp and
Scheme, allowing the simulator to be controlled from those packages;
dynamic reloading of activation and other functions into a running
simulator, with access to global variables from the interface; and the
ability to associate a delay with each link.

An X-windows graphics package is under development.
A mailing list for simulator users will be started shortly.

For more information, licence, distribution details, contact Rose Peet
at the address above.

Nigel Goddard

------------------------------

Date: 9 Mar 88 02:39:12 GMT
From: parvis@pyr.gatech.edu  (FULLNAME)
Subject: Neural networks and vision

Some time ago I posted a request for neural networks and vision literature on
the news.  Since I got many more requests for literature than suggestions, I am
posting a few recent, interesting references that I found.

- W.M. Bartlett, A computational model for neural feature extraction,
  TR-87-1357, University of Illinois at Urbana-Champaign
- J.A. Feldman, Connectionist Models and Parallelism in High Level Vision, in
  Perspectives in Computing: Human and Machine Vision II, 1986
- K. Fukushima, Neocognitron: A new algorithm for pattern recognition tolerant
  of deformations and shifts in position, Pattern Recognition, Vol. 15, 1982
- S. Grossberg, Cortical dynamics of boundary completion, segmentation, and
  depth perception, in Illumination, and Image Sensing for Machine Vision,
  Vol. 728, 1986
- S. Grossberg and E. Mingolla, Neural Dynamics of Surface Perception: Boundary
  Webs, Illuminants, and Shape-from-Shading, Computer Vision, Graphics, and
  Image Processing, Vol. 37, 1987
- T. Kohonen, E. Oja, and P. Lehtioe, Storage and processing of information in
  distributed associative memory systems, in Parallel Models of Associative
  Memory, G.E. Hinton and J.A. Anderson (eds.), 1981

P.S. If anyone has the book 'The 1987 Annotated Neuro-Computing Bibliography', I
would appreciate any comments.
                                        Thanks, Parvis.
----
Parvis Avini
parvis@gitpyr.gatech.edu

------------------------------

Date: 8 Mar 88 09:40:28 GMT
From: mcvax!botter!klipper!biep@uunet.uu.net  (J. A. "Biep" Durieux)
Subject: Language-free thinking (was: language, thought, and culture)

In article <2894@pbhyf.UUCP> rob@pbhyf.UUCP (Rob Bernardo) writes:
>In article <44@gollum.Columbia.NCR.COM> rolandi@gollum.UUCP (Walter Rolandi)
  writes:
>+What sort of thinking do people typically do that does not involve language?
>
>I wouldn't know how to call the sorts of thinking I do which do not involve
>language. Language does not give me very good ways of labeling them, so
>they're hard to talk about.

The standard example of non-linguistic problem solving is the following
(forgive me my English):

Suppose a dog carrying a stick comes up to a fence of vertical poles
with spaces between them.  How does he get through the fence?

Almost everybody solves this visually, even if the problem is given
verbally.  I suppose most spatial problems (moving the piano to the
second floor...) fall into the category you ask about.
--
                                                Biep.  (biep@cs.vu.nl via mcvax)
        As the NSA is now skipping last lines of articles,
        let's discuss our anti-american conspiracy over here.

------------------------------

Date: 8 Mar 88 17:43:25 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Language-free thinking


      For some references to recent work in the area, see "Spatial Reasoning
and Multi-Sensor Fusion, Proceedings of the 1987 Workshop", ISBN 0-934613-59-1,
Morgan Kaufmann Publishers, Los Altos, CA.  This reports the work of an
AAAI-sponsored conference last summer.

                                        John Nagle

------------------------------

Date: 9 Mar 88 15:04:43 GMT
From: sunybcs!nobody@rutgers.edu
Reply-to: sunybcs!rapaport@rutgers.edu (William J. Rapaport)
Subject: Re: Request for related research work on Prototypical
         Knowledge


In article <8803070737.AA22890@uunet.UU.NET> daniel@aragorn.OZ (Daniel Lui)
writes:
>Can anybody tell me what research work has been done on Prototypical
>Knowledge?

Peters, Sandra L., & Shapiro, Stuart C. (1987a),
``A Representation for Natural Category Systems,'' Proceedings of the
9th Annual Conference of the Cognitive Science Society, Seattle
(Hillsdale, NJ:  Lawrence Erlbaum Associates):  379-390.

Peters, Sandra L., & Shapiro, Stuart C. (1987b),
``A Representation for Natural Category Systems,'' Proceedings of the
10th International Joint Conference on Artificial Intelligence, Milan
(Los Altos, CA:  Morgan Kaufmann):  140-146.

------------------------------

Date: Tue, 8 Mar 88 09:21:12 MST
From: t05rrs%mpx1@LANL.GOV (Dick Silbar)
Subject: RE: picotechnology


Regarding George McKee's "picotechnology and positronic brains" article
in AIList V6 #47, my initial reaction was "You must be kidding, Mr.
McKee!".  However, lest the non-physicist be misled:
      1) positronium ain't "small" but is the size of a hydrogen atom
(which is essentially what it is).
      2) positronium decays rapidly, in less than a microsecond, be it
ortho or para, ceramic matrix or not.  This fact was already well known
at the time positronic brains came into the literature.
      3) a Cooper pair, as occurs in superconductors, is even much larger
than positronium, since it is bound by exchange of phonons rather than
photons.

What I don't know is whether Mr. McKee intended us to take his remarks
about biology seriously.

      Dick

------------------------------

End of AIList Digest
********************

∂13-Mar-88  2312	LAWS@KL.SRI.COM 	AIList V6 #51 - Programming, Dependency Propagation, Uncertainty    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 13 Mar 88  23:12:18 PST
Date: Sun 13 Mar 1988 20:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #51 - Programming, Dependency Propagation, Uncertainty
To: AIList@SRI.COM


AIList Digest            Monday, 14 Mar 1988       Volume 6 : Issue 51

Today's Topics:
  Queries - Missing KCL Parts for Sequent Balance & VLSI Testing &
    Student versions of OPS5,
  Programming - AI and Software Engineering & Constraint Satisfaction,
  Theory - Representations of Uncertainty,
  Philosophy - Chinese Room

----------------------------------------------------------------------

Date: Thu, 10 Mar 88 20:32:15 pst
From: George Cross <cross%cs1.wsu.edu@RELAY.CS.NET>
Subject: Missing KCL parts for Sequent Balance

Hi,
Does anyone have the parts that were left out of KCL for the Sequent Balance?
I have the KCL distribution from a recent ftp from rascal and the Sequent
parts are missing.

 ---- George

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 George R. Cross
 Computer Science Department    cross@cs1.wsu.edu
 Washington State University    cross@wsuvm1.BITNET
 Pullman, WA      99164-1210    Phone: 509-335-6319 or 509-335-6636

------------------------------

Date: Fri, 11 Mar 88 10:00:11 GMT
From: Gabriel Mc Dermott <GMCDEH88@IRLEARN>
Reply-to: AIList@Stripe.SRI.Com
Subject: Re: AIList V6 #50 - Constraints, Neural Nets,

Does anyone out there have any references or information on
applications of AI techniques to Testability in VLSI design?

                     Thanks in advance,
                           Gabriel.

------------------------------

Date: 10 Mar 88 19:01:53 GMT
From: mcvax!unido!tub!tmpmbx!netmbx!morus@uunet.uu.net  (Thomas M.)
Subject: Re: Student versions of OPS5

In article <1110@pembina.UUCP> ahmed@alberta.UUCP (Ahmed Mohammed) writes:
>Computer Thought Corp. has begun shipping students versions
>of the expert system lang. These are operational on IBM PC/XT/AT
>The package costs $255 for Graduate student version,
>$90 for undergraduate version.

I would like to know about the capabilities of the above-mentioned versions
of OPS5.
Are there any substantial differences between the "expensive" and the "cheap"
version?  I think it is not really the same pricing policy as Borland's -
$255 is not exactly pocket change ("Pappenstiel") for a student.

By the way: to all of you who helped me with my request for a public domain
PC OPS5 version - thank you very much*.  I now have available a few Common Lisp
sources (each about 100KB) which I will try to convert to a PC-runnable
version in the near future.  There will be - I suppose - some difficulties
in adapting the sources to PC Scheme or XLISP.  If anybody has experience with
porting code from Common Lisp to XLISP or Scheme, I would like to contact you
about some of the more annoying transformations.

* I will summarize later!



Please - no e-mailing of long files to the above address (..netmbx..)!

For e-mailing of sources etc. please use my BITNET address:

           muhrth@db0tui11.BITNET

Thank you very much,
  Thomas Muhr.
--
@(↑o↑)@   @(↑x↑)@   @(↑.↑)@   @(↑_↑)@   @(*o*)@   @('o`)@   @(`!')@   @(↑o↑)@
@  Thomas Muhr    Tel.: (Germany) 030 - 87 41 62    (voice!)                @
@  NET-ADRESS:    muhrth@db0tui11.bitnet  or morus@netmbx.UUCP              @
@  BTX:           030874162                                                 @

------------------------------

Date: 11 Mar 88 06:11:01 GMT
From: munnari!metro.ucc.su.oz.au!daemon@uunet.UU.NET
Subject: AI &Software Engineering


A recent article on the news asked for information regarding software
engineering applied to AI.  A subsequent article alluded to a lack of
information in this area.

The CSIRO division of information technology has a research program applying
software engineering techniques to the AI area. We have two projects currently
on the go, both first generation expert system redevelopment projects.

SIRATAC is an expert system that advises cotton growers on what to spray on
their cotton crop, and GARVAN ES1 is an expert system that interprets blood
samples in a pathology laboratory and inserts an interpretation onto the
report in the area of thyroid disorders.  Both of these expert systems have
grown without being controlled by any design parameters, and consequently have
become difficult to maintain.  We are applying data dictionary technology to
both of these expert systems to fully document, define, and cross-reference the
knowledge, in the form of production rules.
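
In miniature, and purely as a hypothetical sketch (the rule names and
attributes below are invented, and the CSIRO knowledge dictionary itself is a
relational tool rather than this Python fragment), such cross-referencing
amounts to indexing which rules test and which rules conclude each attribute:

    from collections import defaultdict

    rules = {
        # rule name -> (attributes tested in the LHS, attributes set by the RHS)
        "R1": ({"tsh_level", "age"}, {"thyroid_status"}),
        "R2": ({"thyroid_status"}, {"report_comment"}),
    }

    uses, sets = defaultdict(set), defaultdict(set)
    for name, (lhs, rhs) in rules.items():
        for attr in lhs:
            uses[attr].add(name)            # rules that depend on the attribute
        for attr in rhs:
            sets[attr].add(name)            # rules that establish the attribute

    # "Which rules conclude thyroid_status, and which rules depend on it?"
    print(sorted(sets["thyroid_status"]), sorted(uses["thyroid_status"]))  # ['R1'] ['R2']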

There are several technical reports available as follows

  tr-fd-87-02 Applying software engineering concepts to rule based expert
              systems.

  tr-fc-87-05 Formal Specification of a self referential data dictionary

  tr-fc-88-01 The knowledge dictionary. A relational tool for the maintenance
              of expert systems.

In addition, another program is implementing a pattern-matching hardware
engine for UNSW Prolog to speed up the pattern-matching process.

Copies of any technical reports are available by applying to

  The Divisional Secretary
  CSIRO Division of Information Technology
  PO Box 1599
  North Ryde
  NSW 2113

  Australia

  phone Australia 02 887 9307
  fax   Australia 02 888 7787

or alternatively by mailing a request to me at the address


ACSnet: jansen@ditsyda.oz               JANET:  ditsyda.oz!jansen@ukc
ARPA:   jansen%ditsyda.oz@seismo.css.gov        CSNET:  jansen@ditsyda.oz
UUCP:  {enea,hplabs,mcvax,prlb2,seismo,ubc-vision,ukc}!munnari!ditsyda.oz!jansen
AUSPAC: jansen@au.csiro.ditsyda



------------------------------

Date: 10 Mar 88 02:52:21 GMT
From: kddlab!icot32!nttlab!gama!etlcom!hasida@uunet.uu.net  (Koiti Hasida)
Subject: Re: constraint satisfaction programming

In <5070@pyr.gatech.EDU>, Parvis Avini writes:
> I'm looking for some interesting research in the topic of constraint logic
> programming or constraint satisfaction programming. I'm already familiar with
> Jaffar's and Lassez' work and also with the Prolog III approach.

See my article, entitled 'Dependency Propagation', in the IJCAI-87
Proceedings, though I'm afraid it is not very well written; less
than half of it talks about constraint programming, and the rest of it
is about natural language.  I should work out its constraint
programming part in a more complete form.

A crucial difference between my DP and others (CLP, Prolog III, etc.)
is that DP deals with constraints on combinatorial objects (i.e., the
term structures of logic) whereas the constraints considered in the
other approaches are about arithmetic objects (rational numbers or
real numbers). Another difference is that in DP we look upon
processing as constraint transformation. Since the constraint is the
program here, processing is a sort of program transformation.
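
As a toy illustration of that idea (my own sketch, not Hasida's DP and not
its C implementation), constraints over term structures can be held as a set
of equations and "executed" by repeatedly transforming that set into solved
form - essentially unification viewed as constraint rewriting:

    # Terms: ('f', arg1, ...) is a compound term, a capitalised string is a
    # variable, any other string is a constant.  The "program" is the equation
    # set itself; running it means rewriting the set until it is in solved form.
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def substitute(t, var, val):
        if t == var:
            return val
        if isinstance(t, tuple):
            return (t[0],) + tuple(substitute(arg, var, val) for arg in t[1:])
        return t

    def transform(eqs):
        eqs, solved = list(eqs), []
        while eqs:
            s, t = eqs.pop()
            if s == t:                                        # trivial: drop it
                continue
            if is_var(s):                                     # eliminate the variable
                eqs = [(substitute(l, s, t), substitute(r, s, t)) for l, r in eqs]
                solved = [(v, substitute(b, s, t)) for v, b in solved] + [(s, t)]
            elif is_var(t):                                   # orient
                eqs.append((t, s))
            elif isinstance(s, tuple) and isinstance(t, tuple) \
                    and s[0] == t[0] and len(s) == len(t):    # decompose
                eqs.extend(zip(s[1:], t[1:]))
            else:
                raise ValueError("inconsistent constraints")
        return solved                                         # occurs check omitted

    # f(X, g(b)) = f(a, Y) is transformed into the solved form X = a, Y = g(b):
    print(transform([(('f', 'X', ('g', 'b')), ('f', 'a', 'Y'))]))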

Currently under way is an implementation of the interpreter according
to DP.  This implementation is being done in the C language.  The first
phase of the work is supposed to be finished within a month or two,
and will be applied to a natural-language parser based on a
unification grammar formalism.

I hope this is of some interest to you.

HASIDA, Koiti ('HASIDA' is my family name)
Machine Inference Section

------------------------------

Date: Thu, 10 Mar 88 10:32:55 EST
From: Robert Hummel <hummel@acf8.NYU.EDU>
Subject: Re: Uncertainty and FUZZY LOGIC VS PROBABILITY

  Concerning the note by Paul Kreelman --

You might find an article by Mike Landy and me of interest.
It appears in the North-Holland book of the proceedings of
the 1986 workshop on `Uncertainty in AI,' held in Philadelphia.
A more complete version of the paper appears in this month's
issue (Mar 88) of IEEE Transactions on Pattern Analysis and Machine
Intelligence.
                                Bob Hummel
                                New York University
                                hummel@relaxation.nyu.edu

------------------------------

Date: Thu, 10 Mar 88 10:28:48 HNE
From: Spencer Star <STAR%LAVALVM1.BITNET@CUNYVM.CUNY.EDU>
Subject: Re: AIList V6 #48 - CommonLoops, OPS5, Constraint Languages,

First I saw a surprising reference to Spiegelhalter quoting him as
saying that probabilites were inappropriate for representing uncertainty
in expert systems, and then in the March 8th AI LIst, a reference to
the original quote.  I just don't believe Spiegelhalter made those
remarks, or at least they should be put in a context that reflects
his views on probabilities.  Here's why.

> We shall not attempt to review the long, and sometimes acrimonious,
> debate as to whether probability theory is an appropriate tool in
> this context [expert systems];...Finally the 'statistical/engineering'
> model adheres to the probability calculus, justified both from a
> theoretical perspective (Lindley, 1982, 1987) and from the pragmatic
> claim that it alone provides flexible and operational means of assessment,
> criticism and learning (Cheeseman, 1985; Spiegelhalter, 1987). Pearl (1986a)
> also argues for probabilistic structuring in expert systems as
> providing a good model for human understanding and memory.
       S.L. Lauritzen and D.J. Spiegelhalter "Local computations with
probabilities on graphical structures and their application to
expert systems" Oct. 1987

The referenced articles include Lindley, "Scoring rules and the
inevitability of probability" Internat. Stat. Review, 50, 1982. and
Cheeseman, "In defense of probability" AAAI-85.

To put it simply, Spiegelhalter is one of the researchers most committed
to putting a probabilistic approach to work in expert systems.

My own view is that a subjective probability approach appears to be a
better choice than either fuzzy sets or Dempster-Shafer' belief functions
because it is the only approach that has the characteristics of
1.  Being based on a few simple, acceptable axioms.
2.  Being able to connect directly with decision theory (Dempster-Shafer can't)
3.  Having efficient algorithms for computation (the Lauritzen-Spiegelhalter
paper cited above gives one; Pearl gives another)
4.  Being well understood.  (Look at what people are doing with Dempster-Shafer
belief functions or fuzzy sets.  People are not agreed as to what their
fundamental theory says.)
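
As a small worked illustration of point 2 (my own numbers, not taken from any
of the papers cited above), a single Bayesian update feeds directly into an
expected-utility decision:

    # Prior belief that a component is faulty, plus a test with known error rates.
    p_fault        = 0.10
    p_pos_if_fault = 0.90          # test sensitivity
    p_pos_if_ok    = 0.20          # false-positive rate

    # Bayes' rule after observing a positive test result.
    p_pos     = p_pos_if_fault * p_fault + p_pos_if_ok * (1 - p_fault)
    posterior = p_pos_if_fault * p_fault / p_pos
    print(round(posterior, 3))     # 0.333: belief rises from 0.10 to about 1/3

    # The posterior feeds straight into an expected-utility decision:
    cost_repair, loss_if_ignored = 100.0, 500.0
    print("repair" if posterior * loss_if_ignored > cost_repair else "ignore")  # repair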

However, if someone prefers one of the other approaches, fine.  It's really
a question of whether someone wants to work on the mainstream approach, which
is Bayesian subjective probabilities or Bayesian decision theory, or if a
more experimental approach is preferred, such as fuzzy sets or belief
functions.
                  Spencer Star (Bitnet: star@lavalvm1)

------------------------------

Date: 10 Mar 88 20:26:58 GMT
From: Adrian G C Redgers <mcvax!ivax.doc.ic.ac.uk!agcr@uunet.UU.NET>
Reply-to: Adrian G C Redgers <mcvax!doc.ic.ac.uk!agcr@uunet.UU.NET>
Subject: ...visit to the Chinese Room - some implications


In article <8803020915.AA14044@ucbvax.Berkeley.EDU> gjoly@NSS.CS.UCL.AC.UK
("G. Joly", Birkbeck) writes:
>...demonstration of the Chinese Room with two Chinese actors
>and an English (only) speaking person in the room.
>...[the anglophone] had no knowledge of written Chinese; he was merely
>manipulating symbols (as computers do).
>
>..in terms of the Turing test, the room spoke Chinese, since it
>satisfied the basic ideas of the test. Agreed that the operator could
>not speak the language, but the language was spoken by the program he was
>following.
>
>Does anybody have a ballpark figure for the time needed to run such a program
>"by hand''? More or less than the age of the universe?

a) I thought Searle's point was that humans might not "understand" Chinese (or
English) and are simply manipulating symbols which model the world. The
'Chinese room' is then a brain.  Personally I don't go for this because it
doesn't give an adequate explanation of consciousness or 'intention', which I
know I've got even if no-one else does.  Or was Searle pointing out that the
room is unsatisfactory for those very reasons?  Ballpark figure is human
reaction time.

  [Searle proposed the room as a challenge for the symbolic-AI school,
  and would agree with your interpretation.  This was discussed at great
  length in AIList a year or two ago.  -- KIL]

b) Last night (Wednesday March 9th) BBC1 showed 'Girls on Top' with French &
Saunders and Ruby Wax. In it 'Saunders' acts as a jobber waving her arms around
and making money in a share dealing room. After rising to dizzy heights of
profit she 'crashes'. It transpires that she had no idea what her symbols meant
to other dealers - she thought she was making a butterfly and then a bird....
The moral of the story is that 'meaning' or the 'real world' will always outwit
(symbol manipulation) systems. I think Aristotle would disagree - but I don't.

c) As Jon Silkin put it in his editor's introduction to the Penguin Book of
First World War Poetry:
        Our humanity must never be outwitted by systems, and this is why we are
        at our most vital when our intelligence is in full and active
        cooperation with feeling.  We shall never not be political again, and
        the best way to be this, among others, is to think and feel; and if this
        cooperative impulse is permeated with values we can decently share, we
        stand a chance, as a species, of survival.  For that, I think, is what
        is at stake.

Systems outwitted humanity to cause WW1.  Time to move to newsgroup
Aristotelian.conspiracy.
                                love (and peace), Adrian XXX

------------------------------

End of AIList Digest
********************

∂20-Mar-88  0114	LAWS@KL.SRI.COM 	AIList V6 #52 - Prolog Digest, CLIPS, OPS5 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 20 Mar 88  01:13:47 PST
Date: Sat 19 Mar 1988 22:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #52 - Prolog Digest, CLIPS, OPS5
To: AIList@SRI.COM


AIList Digest            Sunday, 20 Mar 1988       Volume 6 : Issue 52

Today's Topics:
  Bindings - Prolog List & McRobbie & Boston AI Contacts,
  Queries - Compiling LISP & Boyer-Moore Theorem Prover in Common LISP &
    Bitnet Archives & Source for a Planning Program &
    TI microExplorer (Mac II Coprocessor) & AI and Chemistry &
    Parallel Inference Mechanism & Translators in Prolog/Lisp &
    KES Experience,
  Expert Systems - CLIPS & Student OPS5

----------------------------------------------------------------------

Date: Tue, 15 Mar 88 11:18:19 MET
From: TNPHHBU%HDETUD1.BITNET@CUNYVM.CUNY.EDU
Subject: missing prolog list

Date: 15 March 1988, 11:05:53 MET
From: Hans Buurman              +31 (15) 783538      TNPHHBU  at HDETUD1

Dear people,

Does anybody know what happened to the PROLOG list?  The European subscribers
on EARN (BITNET) haven't seen anything since early December.  Also, nobody
seems to be able to contact Chuck Restivo, the moderator.

The same seems to apply to the PROLOG-H(ackers) list. Could anybody tell us
what's wrong ?


     Hans Buurman
     TNPHHBU at HDETUD1.BITNET


  [Fernando Pereira tells me that Chuck is now a student at
  Cambridge.  I assume that the Prolog list will remain idle
  until a new moderator volunteers.  -- KIL]

------------------------------

Date: Mon, 14 Mar 88 12:16:27 PDT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: address needed

If either you are Mike McRobbie of ANU, or you know his current address,
please let me know what it is!
Thanks,
peter ladkin
ladkin@kestrel.arpa

------------------------------

Date: Mon, 14 Mar 88 11:26:37 SET
From: "Palli Wolfgang Mag."  <CA00184%AEARN.BITNET@CUNYVM.CUNY.EDU>
Subject: contact persons in boston

I want to visit MIT in the week beginning 28 March 1988.  I am interested
in expert systems and AI techniques.
Please send me a list of contact persons in or near Boston.
Thank you.


  [I don't give out addresses of list subscribers, and don't
  have such a list for the MIT bboard readers in any case.
  Would anyone care to volunteer as a contact?  -- KIL]

------------------------------

Date: Mon, 14 Mar 88 11:51 EST
From: Glenn Ross  <G_ROSS%FANDM.BITNET@CUNYVM.CUNY.EDU>
Subject: Compiling LISP


I am interested in converting some of my VAX LISP programs into stand-alone
executables for IBM and VAX mainframes.  I've been told that there is a Kyoto
Common Lisp compiler that compiles LISP into C.  Can anyone tell me how to get
it?  Is there a better solution?  Thanks in advance.

Glenn Ross
Dept. of Philosophy
Franklin & Marshall College
Lancaster, PA  17604-3003

G_ROSS@FANDM (BITNET)

------------------------------

Date: Wed, 16 Mar 88 10:30:37 SET
From: Herwig Mayr <K312631%AEARN.BITNET@CUNYVM.CUNY.EDU>
Subject: boyer-moore theorem prover in common LISP

We are looking for a Common Lisp version of the Boyer-Moore Theorem Prover
for use at our institute (mainly for lectures).  Please send replies over
AIList or to me (K312631 at AEARN).  Thank you!

-herwig.

------------------------------

Date: Thu, 17 Mar 88 13:16 H
From: LEWKT%NUSDISCS.BITNET@CUNYVM.CUNY.EDU
Subject: questions

Hello, I am a new user of Bitnet.  I would like to know how to access
and download journals/publications that have been uploaded onto Bitnet.
I am currently working on an expert system and intelligent tutoring
system with a TI Explorer and KEE 3 at the National University of
Singapore.

------------------------------

Date: 17 Mar 88 20:48:18 GMT
From: mittal@TUT.CIS.OHIO-STATE.EDU  (Vibhu O. Mittal)
Subject: Wanted Source for a Planning Program

We would like to acquire the source of a planning program to use for
educational purposes in some of the undergraduate AI courses that are taught
here at Ohio State University. We would ideally like the program to be in
vanilla Common Lisp (without any system-dependent window calls! .... unless
they're to X windows), and suitable as an example of a 'typical' planning
system to introduce to a class. The students would then be able to play
around with the program.

We have considered a few systems already, but they were all written with
particular machines and environments in mind, and none of them will run on
'just Common Lisp'.  I would like to invite opinions from people who may

   - either have such a system with them, which they would be willing to allow
     us to use for teaching and playing around with, or

   - know of such a system through having used one themselves

And thank you all for any help that you might be able to give me -



Sincerely,

Vibhu Mittal

+------------------------------------------------------------------------------+
|                               Vibhu O. Mittal                                |
|                        Department of Computer Science                        |
|                           The Ohio State University                          |
|                                                                              |
| Phone: 614-292-4635                                    ittal@ohio-state.arpa |
|                                                       ihnp4!osu-cis!mittal   |
+------------------------------------------------------------------------------+

------------------------------

Date: 17 Mar 88 14:30:45 GMT
From: rochester!kodak!luciw@louie.udel.edu  (bill luciw)
Subject: TI microExplorer (Mac II coprocessor) ...

Well, our KBS Lab is ordering a microExplorer, the coprocessor for the Mac II.
It will be equipped with 12MB of memory (using a daughter-board), and we will
be using the Development System from TI as well as trying to run TCP/IP.  I
wonder if anyone (beta-sites, maybe) has had some relevant experiences with
the product or can comment on some of our concerns:

1) What impact (if any) does the alleged lack of "true" DMA have on the
paging performance of the microExplorer?

2) Is TI's implementation of RPC available to other applications (such as those
developed under MPW)?

3) How well integrated is the microExplorer into the rest of the Mac
environment - (cut, copy, paste, print on an AppleTalk printer) ?

4) Can you install the "load bands" on third party disks (SuperMac 150) or do
they need to remain on the Apple hard disk (the load bands are supposed to be
normal, finder accessible files)?

5) How much of a hassle is it to port applications over to the little beastie
from a normal Explorer (what about ART, KEE, SIMKIT, etc.)?

6) Do any benchmarks (ala Gabriel) exist for this machine?

7) How about ToolBox access from the Lisp Environment? (or am I dreaming?)

That'll do for starters ...

Our group is responsible for testing this type of technology and developing a
"delivery vehicle strategy."  Ideally, said delivery vehicle should be under
$10K, but it looks like we'll be around $20K before we're through. This puts
the microExplorer in the same price range as a "reasonably" equipped Sun 3/60FC.

Thank you in advance for all your comments, and I will post our experiences
(good or bad, of course) as they develop ...


Happy St. Patty's Day ...



--
Bill Luciw / Technology Leader        ATTnet:  (716) 477-5384
Knowledge-Based Systems Group           UUCP: ...rutgers!rochester!kodak!luciw
Eastman Kodak Company                   ARPA: luciw@cs.rochester.edu
 "Don't take life seriously, you'll never get out of it alive!"  -- Bugs Bunny

------------------------------

Date: 17 Mar 88 21:45:39 GMT
From: mist!warrier@cs.orst.edu  (ulhas warrier)
Subject: Info on AI and Chemistry

Organization: Oregon State University - CS - Corvallis, Oregon
Keywords: Artificial Intelligence, Chemistry


        I would appreciate it very much if someone can give me information
on companies (name, address & area of research) doing substantial work on
Artificial Intelligence applications in Chemistry.
        Please e-mail the response to me and if there is enough interest, a
list will be posted later.
        Thanks in advance,
                                Ulhas Warrier

------------------------------

Date: 17 Mar 88 16:04:21 GMT
From: mcvax!inria!vmucnam!daniel@uunet.uu.net  (Daniel Lippmann)
Subject: parallel inference mechanism

In an article "a new parallel inference mechanism based on
sequential processing", published in the IFIP-86 based book
"Fifth generation computer architecture" (J.V Woods edit.)
the authors refer to the so called " KABU-WAKE " method.
Does anybody heard of this method and can explain how and
where learn about it ?
the authors were :
-Yukio Sohma, Ken Satoh, Kouichi Kumon, Hideo Hasuzawa and
Akihino Itashiki from the A.I Fujitsu laboratory.
Once more does anybody know if it possible to contact them ?
thanking everybody in advance
daniel (...!seismo!mcvax!inria!vmucnam!daniel)

------------------------------

Date: 16 Mar 88 23:39:22 GMT
From: ihnp4!fortune!bloom@ucbvax.Berkeley.EDU  (Chris Bloom)
Subject: translators in Prolog/Lisp

I have written a translator in "C" that converts a simulation model written
in the CADAT simulation modeling language to either one of two other widely
accepted languages being used for simulation modeling.  EDIF (Electronic
Design Interchange Format) and VHDL (VHSIC Hardware Description
Language) are the two language types that can be produced from a
CADAT structural model using this program.

I have been told that Prolog is optimal for this type of translator.  Does
anyone have references on the topic of writing translators in Prolog (or
Lisp)?

I am currently modifying the translator to add chip placement information
into the EDIF netlist view (which was originally translated from a CADAT
model).  This will then be run through my homebrew auto-router to produce
a schematic.  This is currently being done in "C" which is fine.  I would
though like to know about any similar work being done using a list
processing language.

-->Chris B. Bloom

------------------------------

Date: 15 Mar 88 21:19:15 GMT
From: rion@ford-wdl1.arpa  (Rion Cassidy)
Subject: KES experience ?


Our local Prime sales rep has been trying to convince us that Software
Architecture and Engineering's Knowledge Engineering System (KES) is
what we need for a robotics path planning and control system.  I
intend to get a demo and ask a lot of questions, but am hoping that
*someone* out here has had some contact with this company and/or
product, and will be willing to share their experiences.

Any help would be appreciated.

Rion Cassidy
Ford Aerospace
rion@ford-wdl1.arpa
...{sgi,sun,ucbvax}!wdl1!rion

Disk Claimer: My employer forced me to write this at gun-point.
I assume no responsibility whatsoever for what I've said here.

------------------------------

Date: 14 Mar 88 18:13:19 GMT
From: gt-cmmsr!rr@gatech.edu  (Richard Robison)
Subject: Ever heard of KLIPS ???


A professor here is interested in a program called KLIPS.  He was very vague
about what it was, but did say that it was some kind of AI application.  Any
help locating this program would be very helpful.  Thanks.

-Richard
--
Richard Robison

UUCP:   rr@gt-cmmsr.UUCP        (404-894-6221)
        ...!{allegra,hplabs,ihnp4,ulysses}!gatech!gt-cmmsr!rr
INTERNET:       rr@cmmsr.gatech.edu

------------------------------

Date: 15 Mar 88 22:43:23 GMT
From: devvax!jplpro!leem@ELROY.JPL.NASA.GOV  (Lee Mellinger)
Subject: Re: Ever heard of KLIPS ???

In article <31922@gt-cmmsr.GATECH.EDU> rr@gt-cmmsr.GATECH.EDU
(Richard Robison) writes:
:
:A professor here is interested in a program called KLIPS.  He was very vague
:about what it was, but did say that it was some kind of AI application.  Any
:help locating this program would be very helpful.  Thanks.
:
:-Richard
:--
:Richard Robison
:
:UUCP:  rr@gt-cmmsr.UUCP        (404-894-6221)
:        ...!{allegra,hplabs,ihnp4,ulysses}!gatech!gt-cmmsr!rr
:INTERNET:      rr@cmmsr.gatech.edu

There is an expert system language called CLIPS (C Language Integrated
Production System) that was written by the Mission Planning and Analysis
Division of the Johnson Space Center, NASA.  It is available from
COSMIC at the University of Georgia.  The program number is MSC-21208.
The COSMIC phone number is 404/542-3265.

Lee


-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|Lee F. Mellinger                         Jet Propulsion Laboratory - NASA|
|4800 Oak Grove Drive, Pasadena, CA 91109 818/393-0516  FTS 977-0516      |
|-------------------------------------------------------------------------|
|UUCP: {ames!cit-vax,psivax}!elroy!jpl-devvax!jplpro!leem                 |
|ARPA: jplpro!leem!@cit-vax.ARPA -or- leem@jplpro.JPL.NASA.GOV            |
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

------------------------------

Date: 15 Mar 88 16:55:45 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Ever heard of KLIPS ???


     KLIPS has been used as an acronym for Kilo Logical Inferences Per
Second.  In the Prolog community, the execution of one Prolog statement
is considered one logical inference, and Prolog systems are thus rated
in LIPS, Logical Inferences Per Second, to which the usual metric prefixes
are applied when appropriate.
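
For example, a Prolog system that executes 50,000 such inferences in a tenth
of a second is rated at 500 KLIPS, i.e. 0.5 MLIPS.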

------------------------------

Date: 17 Mar 88 17:38:29 GMT
From: uflorida!kirmse@gatech.edu  (Dale Kirmse)
Subject: Re: Ever heard of KLIPS ???

The program KLIPS that you are probably looking for is CLIPS.
The README file for the version that I have access to reads as follows:

The Artificial Intelligence Section of the Mission Planning and
Analysis Division at NASA/Johnson Space Center has completed the
first release version of CLIPS, a tool for the development of
expert systems. CLIPS is an inference engine and language syntax
which provides the framework for the construction of rule-based
expert systems.

CLIPS was entirely developed in C for performance and portability and
is available for a wide variety of computers, from PC's to a CRAY.
The key features of CLIPS are:

Powerful Rule Syntax: CLIPS allows Forward Chaining Rules with free form
   patterns, single and multi-field variable bindings across patterns, user
   defined predicate functions on the LHS of a rule, and other powerful
   features.

Portable: CLIPS has been installed on over half a dozen machines
   with little or no code changes.

High Performance: CLIPS performance on minicomputers (VAX, SUN) is
   comparable to the performance of high powered expert system tools in
   those environments. On microcomputers, CLIPS outperforms most other
   microcomputer based tools.

Embeddable: CLIPS systems may be embedded within other C programs
   and called as a subroutine.

Interactive Development: CLIPS provides an interactive, text oriented
   development environment, including debugging aids.

Completely integrated with C: Users may define and call their own
   functions from within CLIPS.

Extensible: CLIPS may be easily extended to add new capabilities.

Source Code: CLIPS comes with all source code and can be modified or
   tailored to meet a specific users's needs.

Fully Documented: CLIPS comes with a full reference manual complete
   with numerous examples of CLIPS syntax. Examples are also given on how
   to create user defined functions and  CLIPS extensions. A User's Guide to
   introduce expert system programming with CLIPS is also available.

CLIPS is available at no cost to NASA, DoD or other government agencies.
Call the CLIPS Help Desk at (713) 280-2233 to obtain a copy.
Other organizations can obtain CLIPS and/or documentation
for a nominal fee through COSMIC:

  COSMIC
  382 E. Broad St
  Athens, GA  30602
  (404) 542-3265
________________

I understand from talking to NASA personnel that the current plans
are to include Macintosh and X Window interfaces in later versions.
And, an ART sales representative has told me that CLIPS was the initial
basis of microART, which is currently under development and will have many
features not now in CLIPS.
--
Dale Kirmse
Chemical Engineering Department
University of Florida
Gainesville, Florida 32611
Internet:       kirmse@ufl.bikini.UUCP  Phone: (904-392-0862)
Dale Kirmse @bikini.cis.ufl.edu

------------------------------

Date: 16 Mar 88 03:45:28 GMT
From: sundc!rlgvax!bdmrrr!shprentz@seismo.css.gov  (Joel Shprentz)
Subject: Re: Ever heard of KLIPS ???

In article <31922@gt-cmmsr.GATECH.EDU>, rr@gt-cmmsr.GATECH.EDU
(Richard Robison) writes:

> A professor here is interested in a program called KLIPS.  He was very vague
> about what it was, but did say that it was some kind of AI application.  Any
> help locating this program would be very helpful.  Thanks.

HOW TO GET CLIPS

Clips is available as program #MSC-21208 from COSMIC, NASA's software
distribution center at the University of Georgia.  Their address is:

        COSMIC
        The University of Georgia
        382 East Broad Street
        Athens, Georgia  30602

        Phone: (404) 542 3265
        Telex: 490 999 1619

We received Clips on six IBM-PC floppy disks.  Other formats are
available.  The disks included the C source code, PC executables,
utility programs, and some examples.  The C source code is portable;
we compiled it on a Sun workstation.

CLIPS VS. OPS5

Clips (C Language Integrated Production System) is similar to OPS5.
OPS5 skills are directly transferable to Clips.  Clips rules, like OPS5
rules, are compiled into a network for efficient matching with the Rete
algorithm.  This algorithm is inherently forward chaining.

One noticeable difference between OPS5 and Clips is that OPS5 tags
values in working memory elements but Clips does not.  For example,
an OPS5 memory element may be

        (Person ↑name Smith ↑age 23 ↑eyes blue)

Because the OPS5 values are tagged, they may be reordered without
changing their meaning:

        (Person ↑age 23 ↑name Smith ↑eyes blue)

When matching OPS5 patterns, partial working memory elements may be
specified.  This pattern selects people with blue eyes:

        (Person ↑eyes blue)

Clips uses the value's position within the memory element to associate
it with some meaning.  The Clips version of the same person is

        (Person Smith 23 blue)

To match blue eyed people with Clips, wildcards must match values that
don't matter:

        (Person ? ? blue)

The value tagging difference makes Clips program development more
error prone than OPS5 development.

THE C INTERFACE

Clips can interface to C programs in three ways.  First, Clips rules can
call C functions.  This is great for complex calculations and
user interfaces. The C functions must be listed in a table compiled
into Clips.

Second, C programs may call the Clips inference engine to do logical
processing.  The Clips system is embedded within a C program.

Third, Clips provides C functions to assert information, define rules,
etc.  The standard Clips user environment simply provides interactive
access to these functions.
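
A rough sketch of the second, embedded style of use.  The entry-point
names below are placeholders, not the actual Clips routine names,
which are listed in the reference manual:

    /*
     * Hypothetical skeleton of a C program embedding the Clips engine:
     * initialize, load a rule file, assert a fact, run to quiescence.
     * clips_init/clips_load/clips_assert/clips_run are stand-ins for
     * the real entry points documented in the Clips reference manual.
     */
    #include <stdio.h>

    extern void clips_init(void);                /* placeholder */
    extern int  clips_load(const char *file);    /* placeholder */
    extern int  clips_assert(const char *fact);  /* placeholder */
    extern long clips_run(long max_rules);       /* placeholder */

    int main(void)
    {
        clips_init();
        if (clips_load("person.clp") != 0) {
            fprintf(stderr, "could not load rules\n");
            return 1;
        }
        clips_assert("(Person Smith 23 blue)");  /* positional fact    */
        clips_run(-1L);                          /* fire rules until   */
        return 0;                                /* quiescence         */
    }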

Clips may also be interfaced with languages other than C.  Examples
show how to interface to Ada and FORTRAN.

--
Joel Shprentz                   Phone:  (703) 848-7305
BDM Corporation                 Uucp:  {rutgers,vrdxhq,rlgvax}!bdmrrr!shprentz
7915 Jones Branch Drive         Internet:  shprentz@bdmrrr.bdm.com
McLean, Virginia  22102

------------------------------

Date: 16 Mar 88 15:33:37 GMT
From: midevl.dec.com!barabash@decwrl.dec.com  (Digital has you now!)
Subject: Re: Student versions of OPS5


  In article <26695@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
> I don't think the Vax version [of OPS5] uses Rete (at least, it allows
> calculations in the LHS).

  In fact, VAX OPS5 uses the high-performance compiled Rete, first used
  by OPS83, wherein each node in the network is represented by machine
  instructions.  This makes it easy for the compiler to support inline
  expression evaluation and external function calls in the LHS.
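
  As a rough illustration of the idea (not DEC's implementation), compare
  a match node interpreted from a data structure with one emitted as
  straight-line code; is_adult() below is a made-up LHS function, and the
  field/operator encoding is arbitrary:

    #include <stdio.h>

    typedef struct { char *name; int age; char *eyes; } Person;

    /* Interpreted style: a generic routine walks a description of the
     * test, so every match pays for the dispatch. */
    typedef struct { int field; int op; int value; } NodeDesc;

    static int interp_test(const NodeDesc *n, const Person *p)
    {
        int v = (n->field == 1) ? p->age : 0;    /* fetch tested field   */
        return (n->op == 0) ? (v == n->value)    /* dispatch on operator */
                            : (v >  n->value);
    }

    /* Compiled style: the node *is* code, so inline arithmetic and calls
     * to external functions in the LHS come for free. */
    static int is_adult(int age) { return age >= 18; }   /* stand-in */

    static int compiled_node_17(const Person *p)
    {
        return p->age > 21 && is_adult(p->age);
    }

    int main(void)
    {
        Person p = { "Smith", 23, "blue" };
        NodeDesc n = { 1, 1, 21 };               /* "age > 21" as data   */
        printf("interpreted %d, compiled %d\n",
               interp_test(&n, &p), compiled_node_17(&p));
        return 0;
    }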

        Bill Barabash
        DEC AI Advanced Systems & Tools
        barabash@rachel.dec.com

------------------------------

Date: 16 Mar 88 17:20:25 GMT
From: trwrb!aero!srt@ucbvax.Berkeley.EDU  (Scott R. Turner)
Subject: Re: Student versions of OPS5

In article <1501@netmbx.UUCP> muhrth@db0tui11.BITNET (Thomas Muhr) writes:
>I have now available a few common-lisp
>sources (each about 100KB big) which I will try to convert to a PC-runnable
>version in the near future.

It should be possible to write an OPS5-like language in a lot less than
100K.  The only difficult part of OPS5 to implement is the RETE algorithm.
Throw that out, ignore some of the rules for determining which rule out
of all the applicable rules to use (*), and you should be able to implement
OPS5 in a couple of days.  Of course, this version will be slow and GC
every few minutes or so, but those problems will be present to some extent
in any version written in Lisp.

(*) My experience is that most OPS5 programmers (not that there are many)
ignore or actively counter the "pick the most specific/least recently used"
rules anyway.
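
A sketch of the kind of naive recognize-act loop being suggested, here
in C rather than Lisp and with toy hard-wired facts and rules, just to
show the cycle:

    #include <stdio.h>
    #include <string.h>

    #define MAX_FACTS 64
    static const char *wm[MAX_FACTS];      /* working memory: fact strings */
    static int nfacts = 0;

    static int in_wm(const char *f)
    {
        for (int i = 0; i < nfacts; i++)
            if (strcmp(wm[i], f) == 0) return 1;
        return 0;
    }

    static void assert_fact(const char *f)
    {
        if (!in_wm(f) && nfacts < MAX_FACTS) wm[nfacts++] = f;
    }

    /* A rule is just a match test plus an action; returns 1 if it fired. */
    static int rule_blue_eyes(void)
    {
        if (in_wm("(Person Smith 23 blue)") && !in_wm("(blue-eyed Smith)")) {
            assert_fact("(blue-eyed Smith)");
            return 1;
        }
        return 0;
    }

    static int (*rules[])(void) = { rule_blue_eyes };

    int main(void)
    {
        assert_fact("(Person Smith 23 blue)");
        int fired;
        do {                                 /* recognize-act cycle: no   */
            fired = 0;                       /* Rete, "first match wins"  */
            for (size_t r = 0; r < sizeof rules / sizeof rules[0]; r++)
                if (rules[r]()) { fired = 1; break; }
        } while (fired);
        for (int i = 0; i < nfacts; i++) printf("%s\n", wm[i]);
        return 0;
    }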

                                        -- Scott Turner

------------------------------

End of AIList Digest
********************

∂20-Mar-88  0357	LAWS@KL.SRI.COM 	AIList V6 #53 - VLSI Testability, Agriculture, Genetic Algorithms   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 20 Mar 88  03:57:25 PST
Date: Sat 19 Mar 1988 22:18-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #53 - VLSI Testability, Agriculture, Genetic Algorithms
To: AIList@SRI.COM


AIList Digest            Sunday, 20 Mar 1988       Volume 6 : Issue 53

Today's Topics:
  Philosophy - Implications of the Chinese Room,
  Application - References on VLSI Testability Checking &
    Agricultural Uses of AI,
  Logic - Funny Logics and AI,
  Course - Graduate Study in AI (Washington University),
  Genetic Algorithms - Definition and Schools of Research

----------------------------------------------------------------------

Date: 14 Mar 88 11:03 PST
From: hayes.pa@Xerox.COM
Subject: Re: AIList V6 #51 ..implications of the Chinese Room

Adrian G C Redgers writes:
> a) I thought Searle's point was that humans might not "understand" Chinese (or
> English) and are simply manipulating symbols which model the world. The
> 'Chinese room' is then a brain. ... Or was Searle pointing out that the room is
> unsatisfactory for those very reasons?

Why not try reading Searle?  He couldn't be clearer or more entertaining (or
wronger, but that's another story).  He isn't claiming that brains aren't
machines, or that humans don't understand Chinese.  His point is that a
programmed computer can't understand anything, even if it behaves impeccably,
passing the Turing test all over the place.  Reason: well, the program can't,
because it's just a pile of symbols, and the unprogrammed hardware (= the man in
the room) certainly doesn't know anything, being just dumb electronics, and
that's all there is in a programmed computer, QED.  A brain, now, is different,
because of course brains understand things: and the conclusion obviously is that
whatever sort of machine the brain is, it isn't a programmed computer.  So
`strong AI' is wrong, Turing test and all.  Weak AI, on the other hand, just
claims that it is simulating intelligence on electronics, which is fine (says
Searle) - probably a scientific mistake, he guesses, but not a philosophical
mistake, and immune from the Chinese room argument.

Pat Hayes

------------------------------

Date: Tue 15 Mar 88 13:05:24-CST
From: Charles Petrie <AI.PETRIE@MCC.COM>
Subject: Reply to Gabriel - VLSI Testability Checking

The MCC CAD program has a knowledge-based testability checker -
  contact Thomas@MCC.com

The NCR Design Advisor includes testability checking -
  contact AI.Steele@MCC.com
also see Robin Steele's article in the proceedings of the 24th
Design Automation Conference (ACM/IEEE)

Both are commercial applications.  NCR charges a minimum of $4,000
for users to come in and run their designs through the Design Advisor
and interact with it.

------------------------------

Date: 16 Mar 88 14:07:06 est
From: Mark Shirley <mhs@ht.ai.mit.edu>
Subject: applications of AI techniques to Testability in VLSI design?


   Does anyone out there have any references or information on
   applications of AI techniques to Testability in VLSI design?

                        Thanks in advance,
                              Gabriel.



AI and Design for Testability:

    @InProceedings(shirley87,
        Key="shirley87",
        Author="Shirley, M., P. Wu, R. Davis, G. Robinson",
        Title="A Synergistic Combination of Test Generation and Design for
          Testability",
        Organization="The Computer Society of the IEEE",
        BookTitle="International Testing Conference 1987 Proceedings",
        Year="1987",
        Pages="701-711")

    In the last couple of years, the testing conference has had AI sessions.

    \bibitem{abadir85}
    Magdy~S. Abadir and Melvin~A. Breuer.
    \newblock A Knowledge-Based System for Designing Testable VLSI Chips.
    \newblock {\it IEEE Design \& Test of Computers}, 56--68, August 1985.

AI and Test Generation:

    @InProceedings(Shirley86,
        key="Shirley86",
        Author="Shirley, M.",
        Title="Generating Tests by Exploiting Designed Behavior",
        Organization=AAAI,
        BookTitle="Proceedings of the Fifth National Conference on
          Artificial Intelligence (AAAI-86)",
        Year=1986,
        Month=August,
        Pages="884-890")

    @InProceedings(Singh86,
        key="Singh86",
        Author="Singh, N.",
        Title="Saturn: An Automatic Test Generation System for
          Digital Circuits",
        Organization=AAAI,
        BookTitle="Proceedings of the Fifth National Conference on
          Artificial Intelligence (AAAI-86)",
        Year=1986,
        Month=August,
        Pages="778-783")

Related DFT and Test Generation work:

    \bibitem{horstmann}
    Paul~W. Horstmann.
    \newblock Design for Testability using Logic Programming.
    \newblock In {\it Proceedings of 1983 International Test Conference},
      pages~706--713, 1983.

    @PhDThesis(Lai81,
        Key="Lai",
        Author="Lai, Kwok-Woon",
        FullAuthor="Larry Kwok-Woon Lai",
        Title="Functional Testing of Digital Systems",
        School=CMU,
        Number="CMU-CS-148",
        Month="December",
        Year="1981")

    \bibitem{williams73}
    M.~J.~Y. Williams et al.
    \newblock Enhancing testability of large-scale integrated circuits via test
     points and additional logic.
    \newblock {\it IEEE trans on Computers}, C-22(1):46--60, January 1973.

------------------------------

Date: 18 Mar 88 08:34:47 GMT
From: munnari!metro.ucc.su.oz.au!daemon@uunet.UU.NET
Subject: Re: Agricultural Uses of AI


Siratac is an expert system that advises on pest management for
cotton farmers.  It is being developed by the CSIRO Division of
Information Technology and the Division of Plant Industry in Australia.
Several references are available in conference proceedings, both
computer/AI and cotton related.  A talk specifically about Siratac is
also being presented to the AAAI Workshop Series in the US this year.
If you want copies, contact me at the following addresses:

ARPA: jansen%ditsyda.oz@seismo.css.gov
CSNET: jansen@ditsyda.oz
UUCP: {enea,hplabs,mcvax,prlb2,seismo,ubcvision,ukc}!munnari!ditsyda.oz!jansen
AUSPAC: jansen@au.csiro.ditsyda

------------------------------

Date: Tue, 15 Mar 88 13:39:55 GMT
From: Flash Sheridan <flash%ee.qmc.ac.uk@NSS.Cs.Ucl.AC.UK>
Reply-to: flash <@NSS.Cs.Ucl.AC.UK,@cs.qmc.ac.uk:flash@ee.qmc.AC.UK>
Subject: Re: Funny Logics and AI: references

You might also look at _Non-Standard Logics for Automated Reasoning_,
Academic Press, 1988, ed. E.H.Mamdani et al.  It's a bunch of polemics
and discussions and expositions of lots of different approaches, of varying
quality.

------------------------------

Date: 18 Mar 88 21:22:39 GMT
From: wucs1!wucs2!posdamer@uunet.uu.net  (Jeff Posdamer)
Subject: Graduate study in AI offering (Washington University)


         GRADUATE CERTIFICATE IN ARTIFICIAL INTELLIGENCE
                        Intensive Program
              Summer Session - May 30-August, 1988



This Graduate Certificate in Artificial Intelligence offers a strong
foundation in the theory, techniques and methods of AI.  Graduates
will have a grounding in knowledge engineering, AI programming
methods and languages, knowledge acquisition and representation, and
application development tools and methods.  Ideal for engineers,
scientists and MIS professionals, this program will give students an
intensive preparation in the fundamentals of artificial intelligence
and expert systems, with emphasis on the practices required to design
and construct applications.  The program has over 100 graduates and
is currently in its fourth offering.  Graduates will earn credit
applicable to a graduate degree and will be awarded a graduate
certificate upon satisfactory completion.  The program emphasizes
laboratory work.  A significant part of the program is the students'
projects.  Each student should enter the program with a proposed AI
project.  The individual student performs knowledge acquisition,
evaluation, encoding and analysis.  The staff will select projects to
be implemented as prototypes by teams of three or four students.
Final project reports are presented in seminar and written form.


For further information contact:
            Center for Intelligent Computing Systems
                         Campus Box 1141
                      Washington University
                     Saint Louis, MO  63130

                         (314)-889-6766

                       tonya@syr.wustl.edu

--
Jeff Posdamer, Washington University, St. Louis, MO, (314) 889-6147
posdamer@syr.wustl.edu

------------------------------

Date: Wed, 16 Mar 88 18:16:17 PST
From: rik%cs@ucsd.edu (Rik Belew)
Subject: A four-bit definition of Genetic Algorithms

My two-bit definition of "genetic algorithms" has,
like all over-simplifications, drawn criticisms
for being --- you guessed it --- over-simplified!
So here's a second pass:

  First  and foremost,  I want to  "make one thing
perfectly  clear":  When I  defined GA(1) in terms
of  "...   John Holland  and his  students" I most
certainly  did  not  intend that  only  those that
had  journeyed  to  Ann Arbor  and  been anointed
by  the man  himself were fully  certified to open
GA  franchises!    Defining a  scientific approach
in  terms  of  the personalities  involved  is not
adequate,  for  GA  or  any other  work.     I was
attempting  to  distinguish a  particular approach
from  the broader set of  techniques that I called
GA(2).    In  my  experience, John  is  the "root"
of   all  such  work  and  much  of  it  has  been
done  by  "direct"  students  of  his at  Michigan
and  "indirect"  students of  these students.    I
also  know, however, that  others --- notably Dave
Ackley  and  Larry  Rendell  ---  have  worked  on
these  methods  without direct  contact  with this
lineage.   But I very much consider them "students
of  Holland," in that  they are aware  of and have
benefited  from John's work.   (Again, I mean that
as  a compliment, not because  I have been charged
with  GA  Clique membership  validation.)   I  see
absolutely  no benefit and  potentially great harm
in  drawing lines  between closely  related bodies
of research.


  So  let's move on to more meaningful attempts to
define  the GA.  My two-bit definition  focused on
the  cross-over operator:   GA(1) depended  on it,
and  GA(2)  generally relied  on the  weaker (less
intelligent)  mutation operator.   This  led  Dave
Ackley to feel that:
      ...   membership in  GA(1) is restricted
    to  a small and  somewhat quirky "DNA-ish"
    subset  of all  possible combination rules
    [ackley@flash.bellcore.com, 3 Mar 88]

  I   take  Dave   to  mean  that   the  algorithm
presented  by Holland  (let's say  the R  class of
algorithms  described  in his  ANAS Chapter  6, to
be  specific) sacrifices some performance in order
to  remain more  biologically plausible.   But I'm
with  Dave on this one:  Personally, I'm also more
interested  in the algorithms  than in  their relation
to  real biological  mechanisms.   (Let the record
show,   however,  that  there are  GA  practitioners
who  do try  to take biological  plausibility more
seriously, e.g., Grosso and Westerdale.)
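
  To make the operator distinction concrete, here is a toy sketch
(not Holland's reproductive plan) of one-point cross-over versus
point mutation on fixed-length bit strings:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define LEN 16

    /* One-point cross-over: each child takes a prefix from one parent
     * and a suffix from the other, so short co-adapted "building
     * blocks" tend to be passed on intact. */
    static void crossover(const char *p1, const char *p2, char *c1, char *c2)
    {
        int cut = 1 + rand() % (LEN - 1);        /* cut point in 1..LEN-1 */
        for (int i = 0; i < LEN; i++) {
            c1[i] = (i < cut) ? p1[i] : p2[i];
            c2[i] = (i < cut) ? p2[i] : p1[i];
        }
        c1[LEN] = c2[LEN] = '\0';
    }

    /* Point mutation: each bit flips independently with small
     * probability; no information passes between individuals. */
    static void mutate(char *s, double pm)
    {
        for (int i = 0; i < LEN; i++)
            if ((double)rand() / RAND_MAX < pm)
                s[i] = (s[i] == '0') ? '1' : '0';
    }

    int main(void)
    {
        char a[LEN + 1] = "1111111100000000";
        char b[LEN + 1] = "0000000011111111";
        char c[LEN + 1], d[LEN + 1];

        srand((unsigned) time(NULL));
        crossover(a, b, c, d);
        printf("children: %s %s\n", c, d);
        mutate(c, 0.05);
        printf("mutated : %s\n", c);
        return 0;
    }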

  So   the  next   possibility  is  to   refer  to
properties  of  the GA  we  find desirable.    For
Dave,  I think the  key property of the  GA is its
"implicit  parallelism":  the  ability to search a
huge  space implicitly  by explicitly manipulating
a  very small set  of structures.   Jon Richardson
comes  closer  to  the definition  I  had  in mind
with  his emphasis on  Holland's "building blocks"
notion:

      The proper distinction I think is
    whether or not the recombination operator
    in question supports the building block
    hypothesis.  "Mutation-like operators"
    do not do this.  Any kind of weird
    recombination which can be shown to
    propagate and construct building blocks,
    I would call a Genetic Algorithm.  If
    the operator does nothing with building
    blocks, I would consider it apocryphal.
    It may be valuable but apocryphal
    nonetheless and shouldn't be called a GA.
    [richards@UTKCS2.CS.UTK.EDU, 4 Mar 88]


  While  I  would  echo the  value  of  both these
desiderata,  I don't  find them  technically tight
enough  to be useful.  So I suggest that we follow
Holland's  suggestion  (in  his  talk  at  ICGA85)
and  reserve  the term  "GA" for  those algorithms
for  which  we  can prove  the  "schemata theorem"
(ANAS,  Theorem 6.2.3).    I believe  this theorem
is  still the  best understanding  we have  of how
and  why the  GA gives  rise to  the properties  of
implicit  parallelism  and  building  blocks.    Of
course,  there are  problems with  this definition
as  well.     In  particular,  it  is  so  tightly
wed  to the  string representation  and cross-over
operator  that it is very difficult to imagine any
algorithm  very  different from  the (traditional)
GA  that would  satisfy the  theorem.   But that's
exactly where I think the work needs to be done.
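
  For reference, the theorem as it is usually quoted (Holland's
Theorem 6.2.3 is phrased somewhat differently) bounds the expected
number of instances of a schema H under proportional selection,
one-point cross-over and point mutation:

    E[m(H,t+1)] \ge m(H,t) \, \frac{f(H)}{\bar{f}}
        \left[ 1 - p_c \, \frac{\delta(H)}{\ell - 1} - o(H) \, p_m \right]

where m(H,t) is the number of instances of H at generation t, f(H)
the average fitness of those instances, \bar{f} the population
average fitness, \delta(H) and o(H) the defining length and order of
H, \ell the string length, and p_c, p_m the cross-over and mutation
rates.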

  Finally,  I want  to say  why I (as  Dave Ackley
says)  "took the trouble to exclude" Ackely's SIGH
system  from my  definition of  GA(1).   My answer
is  simply  that I  view  SIGH as  a hybrid.    It
has  borrowed techniques  from a  number of  areas:
the  GA, connectionism,  and simulated  annealing, to
name  three.   There  is absolutely  nothing wrong
with  doing this, and as  Dave's thesis showed and
Larry  Eshelman's note  confirmed [Larry.Eshelman-
@F.GP.CS.CMU.EDU,  11 Mar 1988] there are at least
some  problems  in  which  SIGH  does much  better
than  the  traditional  GA.  My  only problem with
SIGH  is  that  I can't  do  the  apportionment of
credit  problem:    when it  works,  I  don't know
exactly  which technique is  responsible, and when
it  doesn't  work  I  don't  know  who  to  blame.
I  too  think about  connectionist  algorithms and
simulated  annealing along  with Holland's  GA and
bucket  brigade,  and see all  of them  as members
of  a  class of  algorithms I  want  to understand
better.   But  I find it necessary  to isolate the
properties  of each before trying to combine them.
In  short,  I think Dave  and I  agree to  a great
extent  (on the problem,  and on  what portions of
the  solution might be), and  disagree only in our
respective approaches to putting it all together.

------------------------------

End of AIList Digest
********************

∂22-Mar-88  0108	LAWS@KL.SRI.COM 	AIList V6 #54 - Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 22 Mar 88  01:08:28 PST
Date: Mon 21 Mar 1988 21:45-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #54 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Tuesday, 22 Mar 1988      Volume 6 : Issue 54

Today's Topics:
  Seminars - A Formalization of Inheritance (Unisys) &
    Partial Global Planning for Problem Solving (CMU),
  Conferences - AAAI88 Design Workshop &
    Parallel Algorithms in AI workshop at AAAI88 &
    Workshop on Blackboard Systems &
    IJCAI 89 in Detroit

----------------------------------------------------------------------

Date: Thu, 17 Mar 88 13:35:37 EST
From: finin@PRC.Unisys.COM
Subject: Seminar - A Formalization of Inheritance (Unisys)


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER


              A Formalization of Inheritance Hierarchies
                with Exceptions and Multiple Ancestors

                           Lokendra Shastri
                      University of Pennsylvania

Many knowledge-based systems express domain knowledge in terms of a hierarchy
of concepts/frames - where each concept is a collection of attribute-value (or
slot-filler) pairs.  Such information structures are variously referred to as
frame-based languages, semantic networks, inheritance hierarchies, etc.  One
can associate two interesting classes of inference with such information
structures, namely, inheritance and classification.  Attempts at formalizing
inheritance and classification, however, have been confounded by the presence
of conflicting attribute-values among related concepts. Such conflicting
information gives rise to the problems of exceptions and multiple inheritance
during inheritance, and partial matching during classification.  Although
existing formalizations of inheritance hierarchies (e.g., those proposed by
Etherington and Reiter, and Touretzky) deal adequately with exceptions, they
do not address the problems of multiple inheritance and partial matching.
This talk presents a formalization of inheritance hierarchies based on the
principle of maximum entropy. The suggested formalization offers several
advantages: it admits necessary as well as default attribute-values, it deals
with conflicting information in a principled manner, and it solves the
problems of exceptions, multiple inheritance, as well as partial matching. It
can also be shown that there exists an extremely efficient realization of this
formalization.

                      2:00 pm Tuesday, March 22
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: Fri, 11 Mar 88 11:27:24 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Partial Global Planning for Problem Solving (CMU)


                   AI SEMINAR

TOPIC:     The Partial Global Planning Approach
           to Coordinating Distributed Problem Solvers

SPEAKER:    Edmund H. Durfee
           Department of Computer & Information Science
           Lederle Graduate Research Center
           University of Massachusetts at Amherst
           Amherst, Massachusetts  01003
           (413) 545-1349

WHEN:      Tuesday, March 15, 1988   3:30pm

WHERE:     Wean Hall 5409

                        ABSTRACT

As distributed computing is used in applications where the distributed tasks
are highly interrelated and change dynamically, coordination becomes
increasingly important and difficult.  Distributed artificial intelligence
(DAI) applications often have these characteristics.  In distributed problem
solving networks, for example, individual nodes solve interacting subproblems
of larger network problems.  Network problems may change over time, so
effective network problem solving depends on nodes coordinating their local
actions and planning their interactions to cooperate as a coherent team.

We introduce partial global planning as a new approach to coordination.
Whereas previous DAI approaches, such as contracting or multi-agent planning,
are specialized for particular situations, our new partial global planning
approach provides a unified and versatile framework for dynamically
coordinating independent nodes.  The approach views control as a planning
task, where nodes individually develop local plans and asynchronously
exchange plan information in order to form and follow partial global plans
that specify cooperative actions and interactions.  In this talk, I will
describe how partial global planning has been implemented in a simulated
distributed problem solving network for vehicle monitoring.  I will discuss
experimental results showing that partial global planning improves
coordination without introducing excessive overhead, allows effective
coordination even in dynamically changing situations, and provides
flexibility so that nodes can cooperate in different ways to achieve a
variety of goals.

------------------------------

Date: 13 Mar 88 19:54:52 GMT
From: ISL1.RI.CMU.EDU!dchandra@pt.cs.cmu.edu  (Dundee Navinchandra)
Subject: Conference - AAAI88 Design Workshop


                         AAAI-88 WORKSHOP ANNOUNCEMENT

                                AI in Design
                                ------------

Part-of: Workshop on Integrated Manufacturing
Sponsored-by: AAAI Special Interest Group on AI in Manufacturing
Date: August 24, 1988 (During the Conference)

Background:

Design is fast becoming a major focus of AI research.  This is due, in
part, to the emerging theory's reliance on AI techniques such as
planning, control, learning, qualitative reasoning, etc.  The purpose
of this workshop is to identify critical issues in design, applicable
AI technologies, and directions for future research.  The emphasis
will be on issues rather than on specific systems. There are several
issues which are of common interest to people concentrating in
different domains such as VLSI design, Mechanical Engineering,
Chemical Engineering, and Computer Engineering.  The workshop will
provide researchers from these different domains an opportunity to
exchange ideas and views on topics of specific interest to them.

Organization:

The workshop will be a one-day affair comprising five
sessions of 1.5 hours each. Each session will be led by a member of
the organizing committee and will concentrate on one issue.

We have tentatively identified the following topics of interest:

   * Planning and Control of the design process.  The use of abstractions
     and rough designs to guide future decisions.

   * Design Modeling. Qualitative modeling. Simulation of behavior and
     manufacturing process.

   * Innovation in design. The use and control of mutation operators,
     precedents and the role of analogy in design.

   * Cooperative design. Control and communication among agents.
     Negotiation and conflict management.

   * Constraint management and domain-independent frameworks
     for configuration design problems. Constraint compilation.

The list is tentative and will be revised based on the interests expressed
by the participants. Potential attendees are invited to submit extended
abstracts of about 5 pages. These abstracts will be used as background
information for the discussions during the workshop. Send SIX copies of your
abstract, including your name, address, phone number, title and authors on a
separate page, to:

        D. Navinchandra
        Robotics Institute
        Carnegie Mellon University
        Pittsburgh, PA 15213

Abstracts should be received no later than May 15.  Authors will be notified
of acceptance by July 1, 1988.  The workshop is limited to 45 participants
and acceptance of abstracts will be based on their relevance to the
prevailing issues.

Organizing Committee:

Workshop Chairmen: Mark S. Fox and  D. Navinchandra, Carnegie Mellon
University.

Program Committee:
        M. Dyer, UCLA
        J. Gero, Sydney University, Australia
        R. Mayers, Texas A&M
        S. Mittal, Xerox PARC
        D. Sriram, M.I.T.
        C. Tong, Rutgers University

------------------------------

Date: 16 Mar 88 00:12:02 GMT
From: Laveen N. KANAL <kanal@mimsy.umd.edu>
Reply-to: kanal@mimsy.umd.edu.UUCP (Laveen N. KANAL)
Subject: Conference - Parallel Algorithms in AI workshop at AAAI88

AAAI-88 workshop on Parallel Algorithms for Machine Intelligence
and Pattern Recognition.
Minneapolis, Minn. Aug.20 and 21, 1988.

                       Organizing Committee:

                       Prof. Laveen N. Kanal,(kanal@mimsy.umd.edu)
                       Dept. of Computer Science
                       University of Maryland,College Park, Md., 20742

                       Dr. P.S. Gopalakrishnan, (PSG@ibm.com)
                       IBM Research Division
                       T.J. Watson Research Center, 39-238
                       P.O.Box 218
                       Yorktown Heights, N.Y. 10598

                       Prof. Vipin Kumar,(kumar@sally.utexas.edu)
                       Computer Science Dept.
                       Univ. of Texas at Austin,
                       Austin, Texas, 78712.


There is much interest in AI in parallel algorithms for exploring
higher level knowledge representations and structural relationships.  Parallel
algorithms for search, combinatorial optimization, constraint satisfaction,
parallel production systems, and pattern and graph matching are expressions of
this interest.  There is also considerable interest and ongoing work on
parallel algorithms for lower level analysis of data, in
particular in vision, speech and signal processing, often based on stochastic
models.  For practical applications of machine intelligence and pattern
recognition, the question arises as to the extent to which parallelism
for high- and low-level analysis can be achieved in an integrated manner.

The workshop will aim at bringing together individuals working in each of
the above two aspects of parallel algorithms to consider the basic nature
of the procedures involved and the degree to which parallel
approaches to high and low level operations in machine intelligence,
pattern recognition, and signal processing can be integrated.

Contributors interested in participating in this workshop are requested to
submit a 1000-2000 word extended abstract of their work on parallel algorithms
in areas of Machine Intelligence and Pattern Recognition. Areas of interest
include Search Problems in A.I. and Pattern Recognition, high- and low-level
processing in Computer Vision, Speech Recognition, Optimization Problems
in A.I., Constraint Satisfaction, and Pattern and Graph Matching.  Although
the number of participants at the workshop must be limited due to
room size, papers invited for submission on the basis of the abstracts,
together with invited papers and discussions presented at the workshop, will
all be reviewed for inclusion in an edited volume of the series Machine
Intelligence and Pattern Recognition published by North-Holland Publishers.

Abstracts should be sent as soon as possible and must reach the organizers
no later than June 1, 1988.  Abstracts may be sent by electronic mail to all
the organizers at the e-mail addresses shown.  Hard copy versions of each
abstract should also be sent to each of the organizers in order to expedite
review. Responses to all who submit abstracts will be sent by July 1, 1988.


--
     Laveen N. KANAL
     (301)454-7877/927-3223
     kanal@mimsy.umd.edu
     uunet!mimsy!kanal

------------------------------

Date: 10 Mar 88 10:14:26 PST (Thu)
From: rajd@cel.fmc.com (Rajendra Dodhiawala)
Subject: Conference - Workshop on Blackboard Systems


                Workshop Announcement -- Call For Participation

                   THE SECOND WORKSHOP ON BLACKBOARD SYSTEMS

                               Sponsored by AAAI
         Congress Suite, Radisson St. Paul Hotel, St. Paul, Minnesota
                          Wednesday, August 24, 1988
                      (Parallel activity during AAAI '88)

(Note change in date from previous announcements of the workshop)

The  Second  Workshop on Blackboard Systems will address issues that pertain to
the design, implementation and applications of blackboard systems.    Emergence
of   blackboard  systems  as  practical  tools  to  implement  a  diversity  of
applications has raised some important questions about the various aspects that
need  further  investigation.  The focus of the workshop will be to discuss the
following interest areas in blackboard systems:

1. Control and Organization Issues: What approach is taken to control the
problem solving, and what is the rationale for the choice?  What are the
ramifications for performance? What are the mechanisms available for organizing
knowledge in such systems? What are the ramifications of organization for control?
Moderator: Vic Lesser

2.  Real-time  Issues:  What  is  the  applicability  of  blackboard systems to
real-time problems? How is the architecture enhanced or  reduced  to  meet  the
needs of real-time problem-solving? There is increasing interest in the role of
parallelism at the system level  to  achieve  real-time  performance.  How  can
parallelism or distribution be exploited to provide for real-time performance?
Moderator: Roberto Bisiani

3.  Applications:  In what innovative ways can the blackboard system be used to
address particular problems or classes of problems?  Comparisons with alternate
approaches  and  paradigms,  why  they  fail,  and  distinctive features of the
blackboard paradigm that had the greatest impact should be highlighted.
Moderator: Bob Engelmore

To  encourage  vigorous  interaction  and  exchange  of  ideas  between   those
attending, the workshop will be limited to approximately 30 participants. There
will be three panel sessions, one for each of the three  subject  areas  listed
above.  There will also be a free form discussion session to address unanswered
questions.  The format  of  the  panels  will  be  decided  by  the  respective
moderators.

All  submitted  papers  will be refereed with respect to how well they identify
and discuss the factors affecting the design and implementation  of  blackboard
systems.    Authors  should  discuss  their  design decisions (why a particular
approach was selected); what worked, what did  not  and  why;  the  advantages,
disadvantages  and limitations of their approach; and what they would recommend
to others developing such systems.  Preference will be given  to  those  papers
that discuss approaches that have been demonstrated in real applications.

Workshop planning committee and referees:
Roberto Bisiani, Carnegie Mellon University
Harold Brown, Stanford University
Dan Corkill, University of Massachusetts
Robert Engelmore, Stanford University
Lee Erman, Teknowledge, Inc.
Barbara Hayes-Roth, Stanford University
Victor Lesser, University of Massachusetts
Penny Nii, Stanford University
D. Sriram, MIT

Submission  Details:  Five  copies of an extended abstract, double spaced up to
4000 words, should be submitted to either of the workshop co-chairs before  May
10,  1988.  Acceptances  will  be  mailed by June 30, 1988. Final copies of the
papers will be required by July 31, 1988.

Title Page: The extended abstract must have a title page which lists the  names
and  addresses,  including electronic addresses if any, of all the authors. All
communication will be with the first author, unless indicated otherwise.  Since
there  may  be  overlaps  in  the  subject  areas of this workshop, authors are
encouraged to mention on the title page the area in which they think their work
best contributes.

Edited Volume: Academic Press (HBJ Publishers) have agreed to publish an edited
volume of the current and outstanding papers  in  the  subject  areas  of  this
workshop.  The  papers for this volume will be selected from the submissions to
this workshop and the proceedings of the first workshop. However, there will be
an  additional  review process to select the final set of papers for the edited
volume.

Workshop Co-chairs:
V. Jagannathan,  M/S  7L-64             R. T. Dodhiawala
Boeing Advanced Technology Center       FMC Central Engineering Labs
Boeing Computer Services                1205 Coleman Avenue, Box 580
P. O. Box 24346                         Santa Clara, CA 95052
Seattle, WA 98124-0346.                 (408) 289-3303
(206)865-3240.                          rajd@cel.fmc.com
juggy@boeing.com

Important dates:
May 10, 1988: Extended Abstracts due (5 copies)
June 30, 1988: Notification of acceptance
July 31, 1988: Final versions of papers and abstracts due.
August 24, 1988: Workshop

                                - rajendra

------------------------------

Date: 16 Mar 88 18:31:11 PST (Wed)
From: sridhara@cel.fmc.com (Sridharan)
Subject: Conference - IJCAI 89 in Detroit


Eleventh International Joint Conference on
Artificial Intelligence

Detroit, Michigan, U.S.A.

August 20 through 26, 1989


The International Joint Conferences on Artificial Intelligence (IJCAI)
continue to be the premier forum for international scientific
exchange and presentation of AI research.  The next conference will
be held in Detroit, Michigan USA from August 20 through August 26,
1989.  The conference is sponsored by the International Joint
Conferences on Artificial Intelligence Inc. (IJCAII) and is co-
sponsored and hosted by the American Association for Artificial
Intelligence (AAAI) and a broadly-based consortium of academic,
industrial and governmental institutions in the Southeastern
Michigan region.

The conference is designed to give representation to all subfields of
AI including research of all kinds.  The conference will also highlight
the relationship of AI to other related disciplines.  The technical
program will comprise a Paper Track focusing on empirical,
analytical, theoretical, conceptual, foundational aspects and applied
research; and a Videotape Track focusing on applications in all
subfields best suited for this form of presentation.

The Eleventh IJCAI will feature:

 o  an outstanding technical program;
 o  state-of-the-art exhibit of AI related hardware and software;
 o  stimulating and informative tutorials;
 o  special events that include prizes, awards, panels and workshops;
 o  visits to academic and industrial research centers and
    automobile manufacturing plants;  and
 o  an interesting variety of extra-conference activities.


The official language of the conference is English, both for papers
and videotapes.  The major areas and subareas are indicated below.

A. AI Tools and Technologies
A1. Machine Architectures, Languages, Shells
A2. Parallel and Distributed Processing
A3. Real-Time Performance

B. Fundamental Problems, Methods, Approaches
B1. Search Methods
B2. Knowledge Acquisition, Learning, Analogy
B3. Cognitive Modeling
B4. Planning, Scheduling, Reasoning about Actions
B5. Automated Deduction
B6. Patterns of Commonsense Reasoning
B7. Other issues in Knowledge Representation

C. Fundamental Applications
C1. Natural Language, Speech Understanding and Generation
C2. Perception, Vision, Robotics
C3. Intelligent Tutoring Systems
C4. Design, Manufacturing, Control

D. Perspectives and Attitudes
D1. Philosophical Foundations
D2. Social Implications



Submission Requirements and Guidelines

Important Dates

Submissions must be postmarked by:              December 12, 1988
Notification of acceptance or rejection:        March 27, 1989
Edited version to be produced by:               April 27, 1989
Conference:                                     August 20-26, 1989


Program Chair: Paper submissions, reviewing, invited talks, panels,
awards and all matters related to the technical program.
                        Dr. N.S. Sridharan
                        FMC Corporation, Central Engineering Labs.
                        1205 Coleman Avenue, Box 580
                        Santa Clara, CA 95052 USA
                        (408) 289-0315          sridharan@cel.fmc.com

Videotape Track Chair: For videotape submissions, editing and
scheduling of video presentations.
                        Dr. John Birk
                        Hewlett-Packard Labs.
                        3500 Deer Creek Road, P.O. Box 10350
                        Palo Alto, CA 94304-1317 USA
                        (415) 857-2568  birk@hplabs.hp.com
Other Contacts:
Local Arrangements Chair: Enquiries about local arrangements.
                        Dr. Ramasamy Uthurusamy
                        General Motors Research Laboratories
                        Computer Science Department
                        Warren, Michigan, 48090-9055 USA
                        (313) 986-1989          samy%gmr@relay.cs.net

Tutorials, Exhibits and Registration:
                        Ms. Claudia Mazzetti
                        AAAI Office, 445 Burgess Drive, Suite 100
                        Menlo Park, CA 94025 USA
                        (415) 328-3123

General Chair:  For all general conference related matters.
                        Professor Wolfgang Bibel
                        Computer Science, University of British Columbia
                        6356 Agricultural Road, Vancouver, B.C.
                        V6T 1W5  Canada
                        (604) 228-3061          bibel@ubc.csnet

IJCAII Secretary-Treasurer:
                        Dr. Donald Walker
                        AI and Information Science Research
                        Bell Communications Research
                        445 South Street, MRE 2A379
                        Morristown, NJ 07960-1961 USA
                        (201) 829-4312          walker@flash.bellcore.com

Paper Track Submission:

Authors should submit five (5) copies of their papers in hard copy
form.  Papers should be a minimum of 2000 words (about four pages
single spaced) to a maximum of 5000 words (about 10 pages single
spaced).  Papers should be printed on 8.5"x11.0" or European A4
sized paper, with 1.5" margins, using 12 pt type and be of letter-
quality print (no dot matrix printouts).  Each full page figure counts
for 500 words.

Each paper should contain the following information:
 o  Title of paper
 o  Full names of all authors and complete addresses
 o  Abstract of 100-200 words
 o  Length of the paper in words
 o  The area/subarea in which the paper should be reviewed
 o  Declaration of multiple submissions.

If the paper submitted to IJCAI-89 is similar in substance or form to
another paper submitted to other major conferences in 1989, this
must be declared by the author.

Papers will be uniformly subject to peer review.  Selection criteria
include accuracy and originality of ideas, clarity and significance of
results and the quality of the presentation.  Late submissions will be
automatically rejected without review.  The decision of the Program
Committee will be final and cannot be appealed.  Papers selected will
be scheduled for presentation and will be printed in the Proceedings.

Videotape Track Submission:

Authors should submit one (1) copy of a videotape of 15 minutes
maximum duration, of applied research, accompanied by a
submission letter  that includes
 o  Title
 o  Full names of authors and complete addresses
 o  Tape format (indicate one of NTSC, PAL or SECAM; and one of VHS
    or .75" U-matic)
 o  Duration of tape in minutes
 o  An abstract not to exceed 100 words.

Late submissions will be automatically rejected without review.
Tapes will not be returned; authors must retain extra copies for
making revisions.  All submissions will be converted to NTSC format
before review.  Permission to copy for review purposes is
required  and authors should indicate this in the submission letter.

This track is reserved for displaying interesting applications to real-
world problems arising in industrial, commercial, space, defense and
educational arenas.  This track is designed to demonstrate the
current level of usefulness of AI tools, techniques and methods.

Tapes will be reviewed and selected for presentation during the
conference.  The following criteria will guide the selection:
 o  Level of interest to the conference audience
 o  Clarity of goals, methods and results
 o  Presentation quality  (including audio, video and pace)

Preference will be given to applications that show a good level of
maturity.  Tapes which are deemed to be advertising commercial
products, propaganda, purely expository materials, merely taped
lectures or other material not of scientific or technical value will be
rejected.

------------------------------

End of AIList Digest
********************

∂24-Mar-88  2352	LAWS@KL.SRI.COM 	AIList V6 #55 - Queries
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 24 Mar 88  23:51:47 PST
Date: Thu 24 Mar 1988 21:25-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #55 - Queries
To: AIList@SRI.COM


AIList Digest            Friday, 25 Mar 1988       Volume 6 : Issue 55

Today's Topics:
  Queries - Robot Arm Simulator & Drawing Conversion and Expert Systems &
    POPLOG Availability in the US & Parallel Approaches to VLSI Routing &
    Logic/Control Applications & Sandia/Parallel Processing &
    PC Guru Expert System Application &
    PC Tools for Developing Expert Systems &
    Portable CommonLoops & Automatic Knowledge Extraction &
    Mathematical Work Station For Computer Illiterate &
    Work on AM Since the Original?

----------------------------------------------------------------------

Date: 18 Mar 88 15:18:44 GMT
From: paul.rutgers.edu!masticol@rutgers.edu  (Steve Masticola)
Subject: Needed: Robot Arm Simulator Software


Hi,

The computer science department here at Rutgers is interested in
finding a supported graphic robot arm simulator package for use on Sun
workstations (preferably) or as a standalone box. We'd be using it for
a grad class in robotics.

It would be nice if small, simple objects (blocks) could be put into
the robot's environment and manipulated by the robot. If the package
included some kind of vision output, and warning outputs when the arm
intersected itself or anything else in the environment, that would be
of great help also.

If you are aware of such a package, please reply by email and let me
know what you know. (Including vendor name/address/telno, if you have
them.)

Thanks for your help!

Steve Masticola
masticol@paul.rutgers.edu

------------------------------

Date: 20 Mar 88 04:19:56 GMT
From: pacbell!pbhyg!paw@AMES.ARC.NASA.GOV  (Pat Weldon)
Subject: Drawing Conversion and Expert Systems

Hi out there in netland!  This is my first posting to the net, so
please bear with me.  I am interested in hearing from folks out
there that might be doing something that involves drawing conversion
and the use of expert systems and PROLOG.  Please send any responses
via email.
Thanks in advance.
--
Pat A. Weldon   * Pacific Bell *     uucp: {ihnp4,dual}!ptsfa!pbhyg!paw
                2600 Camino Ramon, 2S500, San Ramon, CA 94583
                (415) 823-7277

------------------------------

Date: Sun, 20 Mar 88 22:43:08 PST
From: uazchem!dolata@arizona.edu (Dolata)
Subject: POPLOG availability in the US


Can someone give me a pointer to the party who distributes POPLOG in the
US?   Since my net connections are a bit rocky,  could you send me both
email and US Snail mail addresses?? (Phone number?)   Thanks for the help.

------------------------------

Date: 22 Mar 88 13:03:38 GMT
From: hubcap!jem97@gatech.edu  (Jim Mower)
Subject: parallel approaches to VLSI routing

Does anyone know of research in automated VLSI routing that uses a
parallel approach?  A book by Rostam Joobbani, _An Artificial
Intelligence Approach to VLSI Routing_ (1986), suggests parallelism as a
possibility because of the heavy reliance of human designers on visual
interaction.  Thanks in advance.

Jim Mower, Dept. of Geography and Planning
University at Albany
jem97@leah.albany.edu (internet)
jem97@albny1vx        (bitnet)

------------------------------

Date: 22 Mar 88 10:48:44 GMT
From: otter!cwp@hplabs.hp.com  (Chris Preist)
Subject: Logic/control applications wanted please.

I am looking for applications of Logic Programming to AI/ES problems, and
would appreciate any references you can give me. I am particularly interested
in work which investigates logic/control separation, though not necessarily
in a positive fashion (i.e. a paper which describes a problem which cannot
be solved using logic/control separation would be equally useful).

Please email any references you think may be of use,

                Thanks in advance,

                        Chris Preist.

cwp@otter.hple.hp.com
cwp@hplb.csnet
cwp%hplb.csnet@csnet-relay.arpa
..!hplabs!otter!cwp

------------------------------

Date: Tue, 22 Mar 88 12:24:54 pst
From: George Cross <cross%cs1.wsu.edu@RELAY.CS.NET>
Subject: Sandia/Parallel Processing


Anybody know what this is?

Business Week, March 28, 1988 P 75, Developments to Watch

"The Speed of a Cray at a Tenth of the Price"

... [paragraph explaining parallel processing omitted]

Now computer researchers at Sandia National Laboratories in
Albuquerque have developed a formula, or algorithm, that does the
trick [to divide up a program so that parallel processors don't get
in each other's way].  Using a $2.2 million computer with 1,024
processors from Ncube in Beaverton, Ore., Sandia has solved certain
real-life problems up to 1,020 times quicker than a single processor
and, in one case, even faster than a $20 million Cray Supercomputer.
Sandia says the algorithm should be adaptable to similar computers
designed by Intel Systems, Floating Point Systems, and Bolt Beranek &
Newman.  "We've found a way to tailor problems for parallel
processing," says Edwin H. Barsis, Sandia's director of computer
science.

------------------------------

Date: 23 Mar 88 01:42:59 GMT
From: sdcrdcf!csun!polyslo!mmacfade@burdvax.prc.unisys.com  (Mike MacFaden)
Subject: EXPERT SYSTEM APPLICATION


I am currently learning the PC Guru application from MicroDataBaseAssociates
and I was hoping that some of you have had experience with it.

I am working on an application that will analyze financial statements
(using ratios) to draw conclusions regarding a company's status
within the Oil/Gas industry from an investor's point of view.

Any hints, headaches, experiences would be most appreciated.

_____________________________________________________________________________
|  Michael R. MacFaden                uucp: !sdsu!polyslo!mmacfade           |
!  Systems Support                    (805) 756-2005                         !
|  Cal Poly                                                                  |
!  San Luis Obispo, CA 93407                                                 !
|____________________________________________________________________________|

------------------------------

Date: 22 Mar 88 20:19:18 GMT
From: Namasivayam R. Alagiasundaram <psuvax1!nrast@cisunx.cs.psu.edu>
Subject: PC tools for developing expert systems.


 I am posting this for a friend, who plans to develop an expert system
(diagnostic) for some chemical methods.  He plans to develop the system
on an IBM PC.  Could anyone suggest any expert system tools which are
available for PCs?  Again, please let me know the manufacturer's name
and the price.

Please reply to my account. (nrast).

Thanx a lot,

With appreciation,
siva.

------------------------------

Date: 23 Mar 88 14:39:01 PST (Wed)
From: rajd@cel.fmc.com (Rajendra Dodhiawala)
Subject: Portable CommonLoops Query


I am trying to install Portable CommonLoops (PCL) on the Symbolics
Genera 7.1 environment. I have the March 17, 1988 version of PCL.

I have followed the instructions in the defsys.lisp file. I set the
variable *pcl-directory*, save and load defsys.lisp, execute
(pcl::compile-pcl). I get an error while trying to compile fsc.lisp.
The error occurs in the first eval-when form: some nth level call from
load-defclass is trying to append #:SLOT-UNBOUND and NIL -- the first
argument is of the wrong type... etc.

The question I have is: Is there anybody out there who has been
successful in installing PCL on the Symbolics 7.1? I have had problems
with the last two releases of PCL (never tried before that). I suspect
that I am missing something.  No such problems have been reported
on the CommonLoops mailing list, which has explicitly been
set up for this purpose. So any pointers in this direction will be
greatly appreciated. Thanx in advance.

                                - rajendra


FMC Central Engineering Labs
1205 Coleman Ave
Santa Clara, CA 95052
(408) 289-3303

ARPAnet: rajd@cel.fmc.com

------------------------------

Date: Thu, 24 Mar 88 10:22:14 +0100
From: Van Uytven Herman <SYSTHVU%BLEKUL11.BITNET@CUNYVM.CUNY.EDU>
Subject: automatic knowledge extraction


Hello,
I'm interested in systems for automating the knowledge extraction process.
In the literature there's a lot of information about these systems.
Personally, I'm looking for references to commercially available products.
Can anyone provide me with this information?
Is there anyone who uses these systems, or has some experience with them?
I'd be very grateful if you could send your comments to me.

Thanks in advance,

Chris Vanhoutte
fpaasaa@blekul11.bitnet

------------------------------

Date: 22 Mar 88 23:55:45 GMT
From: garfield!gingell@oberon.usc.edu  (Thomas Gingell)
Subject: Mathematical Work Station For Computer Illiterate Desired


 I desire software that would approach the following as closely as
possible. (Would spend under $10,000.)

 Preferably for a SUN 3 or 4, but a VAX 11/780
is okay: I would like the capability of integerating a function
of two variables numerically with a user typing in the integrand, and
the limits.  The user would then be shown a table of run time
vs. level of approximation to choose from and the option to place the
result in a file, or display it graphically at the terminal. When the
result is presented, the option to make a change in the integrand and
/or limits would be provided and the new result shown next to the
previous (if desired).
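
One simple way to generate such a run-time-versus-resolution table
is a composite midpoint rule evaluated at several grid sizes; a rough
sketch, with a hard-coded integrand standing in for the user-typed one:

    #include <stdio.h>
    #include <math.h>
    #include <time.h>

    static double f(double x, double y)          /* example integrand */
    {
        return exp(-(x * x + y * y));
    }

    /* Composite midpoint rule on [ax,bx] x [ay,by] with an n-by-n grid. */
    static double midpoint2d(double ax, double bx, double ay, double by, int n)
    {
        double hx = (bx - ax) / n, hy = (by - ay) / n, sum = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sum += f(ax + (i + 0.5) * hx, ay + (j + 0.5) * hy);
        return sum * hx * hy;
    }

    int main(void)
    {
        printf("%8s %14s %10s\n", "n", "estimate", "seconds");
        for (int n = 16; n <= 1024; n *= 4) {    /* levels of approximation */
            clock_t t0 = clock();
            double v = midpoint2d(0.0, 1.0, 0.0, 1.0, n);
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("%8d %14.8f %10.3f\n", n, v, secs);
        }
        return 0;
    }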

 Thank you very much.  Please respond via e mail.

--
Tom Gingell - Research & Development Labs
ARPA: gingell@rdlvax.RDL.COM
UUCP: ...!{psivax,csun,sdcrdcf,ttidca}!rdlvax!gingell

------------------------------

Date: Wed, 23 Mar 88 20:59:22 EST
From: PJURKAT@VAXC.STEVENS-TECH.EDU
Subject: work on AM since the original?

One of the students in my seminar on belief and uncertainty has just
rediscovered the heuristics in AM, the mathematical discovery system
which was developed by, I believe, Lenat.  In the description of the
work there were several references to questions that would be interesting
to pursue further.

Short of working forward through the literature looking for back
references to AM, I am asking people who read this for references to any
follow-on work to AM, particularly its heuristics for 'interestingness', which
people in my seminar claim was a form of 'belief'.

You are welcome to respond through the AIList to the extent that Ken Laws
welcomes it.  Alternatively, you may address your responses to pjurkat@sitvxc on
BITNET.  Thanks in advance.

Cheers - peter J.
pjurkat@sitvxc.bitnet

------------------------------

End of AIList Digest
********************

∂25-Mar-88  0234	LAWS@KL.SRI.COM 	AIList V6 #56 - Mind Simulation & Software Engineering    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 25 Mar 88  02:33:55 PST
Date: Thu 24 Mar 1988 21:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #56 - Mind Simulation & Software Engineering
To: AIList@SRI.COM


AIList Digest            Friday, 25 Mar 1988       Volume 6 : Issue 56

Today's Topics:
  Query - Software Wanted to Build a Mind,
  Programming - AI and Software Engineering References

----------------------------------------------------------------------

Date: 23-MAR-1988 23:31:42 GMT
From: POPX@VAX.OXFORD.AC.UK
Subject: Software Wanted to Build a Mind

From: Jocelyn Paine,
      Experimental Psychology Department,
      South Parks Road,
      Oxford.

Janet Address: POPX @ OX.VAX



                             SOFTWARE WANTED
                                    -
                             TO BUILD A MIND


I'm  trying to  teach  Oxford  undergraduates an  information-processing
model of psychology, by giving  them a computerised organism (named P1O,
after the course) which has a "mind" which they can watch and experiment
with. To do this, I've depicted how  a mind might be built out of units,
each performing a  simpler task than the original mind  (my depiction is
loosely   based   on   Dennett's   "Towards   a   Cognitive   Model   of
Consciousness").  Each of  my  units does  some  well-defined task:  for
example,   parsing,    edge-detection,   conversion   of    a   semantic
representation to text.

Now I have to implement each unit, and hook them together. The units are
not black  boxes, but black  boxes with windows:  i.e. I intend  that my
students can inspect and modify some of the representations in each box.
The units  will be  coded in Prolog  or Pop-11, and  run on  VAX Poplog.
Taking the parser as an example: if it is built to use a Prolog definite
clause grammar, then  my students should be able to:  print the grammar;
watch the parser generate parse trees,  and use the editor to walk round
them;  change the  grammar  and see  how this  affects  the response  to
sentences.

P1O will live in a simulated  world which it perceives by seeing objects
as sharp-edged images on a retina.  This retina is a rectangular grid of
perhaps 2000 pixels, each spot sensing either nothing, or a spot of some
particular colour. One of the images  will be that of P1O's manipulator,
which can detect whether it is touching an object. P1O can also perceive
loud noises, which direct its attention toward some non-localised region
of space. Finally, P1O can hear sentences  (stored as a list of atoms in
its "auditory  buffer"), and  can treat  them either  as commands  to be
obeyed, statements  to be  believed (if  it trusts  the speaker),  or as
questions to be answered.

P1O's  perceptual  interpreter  takes  the images  on  its  retina,  and
converts them via edge-detection  and boundary-detection into hypotheses
about the  locations of  types of objects.  The interpreter  then checks
these hypotheses for consistency with P1O's belief memory, determining in
the process which individuals of a type it is seeing. Hypotheses consistent
with  past beliefs  are  then  put into  the  belief  memory, as  Prolog
propositions.
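
As a rough illustrative sketch of that last step (hypothetical code, with
believed/1 and an explicit not/1 standing in for whatever the real belief
memory will look like), the consistency filter might be no more than:

    % Hypothetical sketch: accept a perceptual hypothesis into the belief
    % memory only if it does not contradict an existing belief.  Here a
    % contradiction is crudely modelled as an explicit not/1 belief.
    :- dynamic believed/1.

    accept_hypothesis(Hypothesis) :-
        \+ believed(not(Hypothesis)),      % consistent with past beliefs?
        assertz(believed(Hypothesis)).     % then store it as a proposition

    % Example:  ?- accept_hypothesis(object(button, at(3, 4))).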

The sentences which P1O hears are also converted into propositions, plus
a mood (question,  command, or statement). This is  done by generating a
parse tree,  and then a  propositional representation of  the sentence's
meaning. Statements are  checked for consistency with  the belief memory
before being  entered into it; questions  cause the belief memory  to be
searched; commands invoke P1O's planner,  telling it for example to plan
a sequence  of actions  with which it  can pick  up the  brown chocolate
button which it sees.
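
A similarly hypothetical sketch of how the three moods might be dispatched
(plan/2 is reduced to a one-clause stub; none of this is the intended
implementation):

    % Hypothetical mood dispatcher.  statement/1, question/1 and command/1
    % wrap the propositional meaning produced by the language analyser.
    :- dynamic believed/1.

    handle(statement(P)) :-                % believe it only if consistent
        (  believed(not(P))
        -> format("rejected: contradicts beliefs~n", [])
        ;  assertz(believed(P))
        ).
    handle(question(P)) :-                 % search the belief memory
        (  believed(P)
        -> format("yes~n", [])
        ;  format("don't know~n", [])
        ).
    handle(command(P)) :-                  % invoke the (stubbed) planner
        plan(P, Actions),
        format("plan: ~w~n", [Actions]).

    plan(pick_up(Obj), [reach(Obj), grasp(Obj), lift(Obj)]).   % stub planner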

These action sequences then go to  P1O's motor control unit, which moves
the manipulator. This  involves positional feedback - P1O  moves a small
step at a time, and has to correct after each step.

P1O's simulated environment is responsible for tracking the manipulator,
and updating the retinal image accordingly. Students can also update the
image for themselves.

At the top level,  P1O has some goals, which keep it  active even in the
absence of commands from the student.  The most important of these is to
search  for food.  The  type of  food sought  depends  on P1O's  current
feeling of hunger, which depends in  turn on what it has recently eaten.
The goals are processed by the top-level control module, which calls the
other modules as appropriate.



Above, I've described  P1O as if I've already built  it. I haven't, yet,
and  I'm seeking  Prolog or  Pop-11 software  to help.  I'd also  accept
software in other  languages which can be translated  easily. I'll enter
any software  I receive into my  Prolog library (see AILIST  V5.279, 3rd
Dec 1987; IKBS  Bulletin 87-32, 18 Dec 1987; the  Winter 1987 AISB News)
for use by others.

I think so far that I need these most:

(1) LANGUAGE ANALYSIS:

(1.1)  A grammar,  and  its parser,  for some  subset  of English,  in a
notation  similar to  DCG's  (though  it need  not  be _implemented_  as
DCG's). Preferably  with parse  trees as  output, represented  as Prolog
terms. The notation  certainly doesn't have to be Prolog,  though it may
be translatable thereto: it should be comprehensible to linguists who've
studied formal grammar. (A toy sketch of such a grammar appears after this list.)

(1.2) As above, but for the  translation from parse-trees into some kind
of  meaning (preferably  propositions, but  possibly conceptual  graphs,
Schankian CD, etc) represented as Prolog terms. I'm really not sure what
the clearest notation would be for beginners.

(1.3) For teaching reasons, I'd prefer my analyser to be two-stage: parse,
and then convert the trees to some  meaning. However, in case I can't do
this: one grammar and analyser which does both stages in one go. Perhaps
a chart parser using functional unification grammars?

(1.4) A morphological analyser, for splitting words into root, suffixes,
etc.

(2) VISION

(2.1) An edge-detector. This should take a 2-D character array as input,
and return a list of edges  with their orientation. I'm content to limit
it to vertical  and horizontal edges. It need not  deal with fuzzy data,
since the images will be drawn by  students, and not taken from the real
world.  This  can  be  in  any algorithmic  language:  speed  is  fairly
important, and I can call most other languages from Poplog. (Again, a toy
sketch appears after this list.)

(2.2) A boundary-detector. This should  take either the character array,
or the list  of edges, and return  a list of closed  polygons. Again, it
can be in any algorithmic language.

(3) SPEAKING

(3.1) A  speech planner,  which takes  some meaning  representation, and
converts into a  list of words. This  need not use the  same grammar and
other knowledge as the language analyser (though it would be nicer if it
did).

(4) WINDOWING

(4.1) Any  software for allowing the  Poplog editor VED to  display more
than two windows on the same  screen, and for making VED highlight text.
Alternatively,   Pop-11   routines  which   control   cursor-addressable
terminals directly, bypassing VED, but  still being able to do immediate
input of characters.

(5) OTHER

(5.1)  If  I  model  P1O's   mind  as  co-operating  experts,  perhaps a
blackboard shell would be useful. Does anyone have a Prolog one?
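
Two toy sketches, as promised above under (1.1) and (2.1). Both are purely
illustrative fragments, not taken from any existing library, and both are
written in Prolog for uniformity even where the request allows any
algorithmic language.

First, a hypothetical two-level DCG showing the kind of notation, and the
parse trees (as Prolog terms), that item (1.1) asks for:

    sentence(s(NP, VP))     --> noun_phrase(NP), verb_phrase(VP).
    noun_phrase(np(Det, N)) --> det(Det), noun(N).
    verb_phrase(vp(V, NP))  --> verb(V), noun_phrase(NP).
    det(det(the))           --> [the].
    noun(n(robot))          --> [robot].
    noun(n(button))         --> [button].
    verb(v(sees))           --> [sees].

    % ?- phrase(sentence(Tree), [the, robot, sees, the, button]).
    % Tree = s(np(det(the), n(robot)),
    %          vp(v(sees), np(det(the), n(button)))).

Second, a naive edge detector for item (2.1). The image is a list of rows,
each row a list of colour atoms; an edge is reported wherever two
neighbouring pixels differ:

    edges(Rows, Edges) :-
        findall(edge(horizontal, R, C), horizontal_edge(Rows, R, C), Hs),
        findall(edge(vertical,   R, C), vertical_edge(Rows, R, C),   Vs),
        append(Hs, Vs, Edges).

    horizontal_edge(Rows, R, C) :-     % pixel differs from the one below it
        nth0(R, Rows, Row),
        R1 is R + 1,
        nth0(R1, Rows, Below),
        nth0(C, Row, P1),
        nth0(C, Below, P2),
        P1 \= P2.

    vertical_edge(Rows, R, C) :-       % pixel differs from its right neighbour
        nth0(R, Rows, Row),
        nth0(C, Row, P1),
        C1 is C + 1,
        nth0(C1, Row, P2),
        P1 \= P2.

    % ?- edges([[w, w, b],
    %           [w, w, b]], E).
    % E = [edge(vertical, 0, 1), edge(vertical, 1, 1)].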



I'd also  like to  hear from anyone  who has  other software  they think
useful, or who has  done this kind of thing already -  surely I can't be
the first to  try teaching in this way? In  particular, does anyone have
ideas on  how to manage the  environment efficiently, and what  form the
knowledge in top-level control should take? I'll acknowledge any help in
the course documentation.


                            Jocelyn Paine



                  INFORMATION ON BULLETIN BOARDS WANTED



I already  belong to AILIST: from  time to time  I see in it  mention of
other boards, usually  given in the form COMP.SOURCES  or SCI.MED. Where
can I obtain a list of these boards, and how do I subscribe?

                                                            Jocelyn Paine


  [To reply, I think the following should work:
  POPX%VAX.OXFORD.AC.UK%AC.UK%UKACRL.BITNET@CUNYVM.CUNY.EDU .
  As for other bboards, different ones exist on different
  networks.  For a list of Arpanet bboards, write to
  Zellich@SRI-NIC.ARPA.  For Bitnet bboards, I think a message
  containing the command HELP will get you started; just send
  it to LISTSERV@FINHUTC (or @NDSUVM1).  I don't know how one
  gets the list of Usenet newsgroups.  -- KIL]

------------------------------

Date: Wed, 23 Mar 88 19:49 EDT
From: LEWIS%cs.umass.edu@RELAY.CS.NET
Subject: AI & SE references


Since a large number of people asked to see the responses to
my query on software engineering and AI, a list of them follows. I have
tried to edit out extraneous material and long discussions and abstracts.
Many thanks to all who replied!

I now have a new request. Does anyone out there have a design document for
an AI system they/their group built, that they would be willing to send me a
copy of? I'm interested in looking at how people approach planning for
change in AI research software. Let me say in advance: Yes, it's OK that you
didn't follow it/didn't keep it up to date/didn't finish it, or any of the
other things that happen in real life.

Thanks,

David D. Lewis                         CSNET: lewis@cs.umass.edu
COINS Dept.                            BITNET: lewis@umass
University of Massachusetts, Amherst
Amherst, MA  01003

*************************************************************************

FROM Wm. Randolph Franklin
   Preferred net address: Franklin@csv..rpi.edu
   Alternate net: wrf@RPITSMTS.BITNET
   Papermail: ECSE Dept, Rensselaer Polytechnic Institute,
                          Troy NY, USA, 12180
   Telephone: (518) 276-6077
   Telex: 6716050 RPI TROU   Fax: (518) 276-6003

W.R. Franklin et al, Debugging and Tracing Expert Systems, Twenty-First
Annual Hawaii International Conference on System Sciences, Vol III, ed.
B.R. Konsynski, Kona, Hawaii, USA, January 1988, pp. 159-167.

*************************************************************************
From: Shashi Shekhar <shekhar@ERNIE.BERKELEY.EDU>

        Maybe a survey paper titled "Development Support of AI Programs"
in the IEEE Computer magazine, Jan. 1987 issue, would be useful to you. This
paper includes lots of relevant references to environments for AI, etc.
More recent papers include one by P. Hart & Duda on SYNTEL, in the IEEE Expert
magazine, Fall 1987 issue. It has one section on software engineering issues.
You may also find the special issue of Trans. on Software Eng., on AI and
Software Eng. (Nov. 1986 ??) somewhat useful.

*************************************************************************
From: Cathleen Wharton <cwharton@boulder.colorado.EDU>

Readings in Artificial Intelligence and Software Engineering
Edited by Rich, C. and Waters, R.C.
Softcover
602 pages
Published by Morgan Kaufmann Publishers in August 1986
Average Price $20-25

*************************************************************************
From:   IN%"france@vtopus.cs.vt.EDU"  "Robert France" 23-FEB-1988 05:22

There were several very good papers in *IEEE Transactions on Software
Engineering* v. SE-11, #11 (Nov. 1985).  For my favorite solution to the
problem of coupling and cohesion within AI systems, you can do no better
than to check out Penny Nii's articles on blackboard systems in  *AI
Magazine* v. 7 #1-2 (1986).

*************************************************************************

FROM:
Nancy Sliwa
NASA Langley
nesliwa%nasamail@ames.arpa

NASA has been researching this issue. Contact Chris Culbert at NASA Johnson
(cculbert%nasamail@ames.arpa), or Sally Johnson at NASA Langley (804/865-3681).
NASA Ames is also active here, but they will likely see your posting and reply
in person.

*************************************************************************
From: Jorge Gautier <gautier@CS.WISC.EDU>

I have seen an article in the Fall Joint Conference (1977?  I know it was
in Dallas, Texas) titled "Software Engineering for Rule-Based Systems."
The authors were at a naval center (NOSC, NUSC or something like that.)
The basic idea was to group rules according to the classes that they
affect in working memory, so it became easier to keep track of classes
and the state of WM whenever rules were added, deleted or modified.
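
A minimal sketch of that grouping idea, with hypothetical rule names and
working-memory classes (written in Prolog for uniformity, not in whatever
production-system language the paper actually used):

    % Hypothetical declarations: affects(Rule, Class) records which
    % working-memory class each rule reads or writes.
    affects(route_order,  order).
    affects(price_order,  order).
    affects(check_credit, customer).

    % Collect the rules touching a given class, so that a change to that
    % class points directly at the rules that must be re-examined.
    rules_for_class(Class, Rules) :-
        findall(Rule, affects(Rule, Class), Rules).

    % ?- rules_for_class(order, Rules).
    % Rules = [route_order, price_order].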

*************************************************************************
From: munnari!basser.cs.su.oz.au!ray@uunet.uu.NET

One thing that comes to mind is the work by Tom Addis.  He has extended
relational database theory and uses it as a knowledge representation
technique.  Have a look at his book ...

Addis, T. R.  "Designing Knowledge-Based Systems", Kogan Page, London, 1985.

*************************************************************************

From:
Mark A. Whiting
(Arpa c/o: erickson@lbl-csam)
Battelle Northwest Laboratories
PO Box 999
Richland, WA 99352

A paper I liked: Partridge, D. 1986. "Engineering Artificial Intelligence
Software"  in _Artificial Intelligence Review_, Vol. 1, No. 1, 1986.

Partridge authored _Artificial Intelligence: Applications in the Future of
Software Engineering_, Horwood, 1986.


*************************************************************************

FROM
                               H A N S - L U D W I G  H A U S E N

GMD Schloss Birlinghoven       Telefax   +49-2241-14-2889
D-5205 Sankt Augustin 1        Teletex   2627-224135=GMD VV
       West  GERMANY           Telex     8 89 469 gmd d
                               E-mail    hausen@dbngmd21.BITNET
                               Telephone +49-2241-14-2440 or 2426
P.S.:GMD (Gesellschaft fuer Mathematik und Datenverarbeitung)
     German National Research Institute of Computer Science
     German Federal Ministry of Research and Technology (BMFT)


Dear colleague, we have also employed AI techniques, rule
mechanisms in particular, to model method and tool usage. Below,
you will find an abstract of our most recent paper.



KNOWLEDGE BASED HANDLING OF SOFTWARE VALIDATION METHODS AND TOOLS
        H.L.Hausen - H.J.Neusser
      GMD, Schloss Birlinghoven, 5205 Sankt Augustin 1
                 1988-01-27


*************************************************************************

From: jacob@nrl-css.arpa (Rob Jacob)

Saw your message about software engineering techniques for expert
systems on the AIList.  This may not be quite what you had in mind,
but, here at the Naval Research Laboratory Judy Froscher and I have
been working on developing a software engineering method for expert
systems.  We are interested in how rule-based systems can be built so
that they will be easier to change.  Our basic solution is to divide
the set of rules up into pieces and limit the connectivity of the
pieces.

R.J.K. Jacob and J.N. Froscher, "Facilitating Change in Rule-based
Systems," pp. 251-286 in Expert Systems: The User Interface, ed. J.A.
Hendler, Ablex Publishing Co., Norwood, N.J. (1988).

R.J.K. Jacob and J.N. Froscher, "Software Engineering for Rule-based
Systems," Proc. Fall Joint Computer Conference pp.  185-189, Dallas,
Tex. (1986).

J.N. Froscher and R.J.K. Jacob, "Designing Expert Systems for Ease of
Change," Proc. IEEE Symposium on Expert Systems in Government pp.
246-251, Washington, D.C. (1985).

R.J.K. Jacob and J.N. Froscher, "Developing a Software Engineering
Methodology for Rule-based Systems," Proc. 1985 Conference on
Intelligent Systems and Machines pp. 179-183, Oakland University
(1985).

R.J.K. Jacob and J.N. Froscher, "Developing a Software Engineering
Methodology for Knowledge-based Systems," NRL Report 9019,  Naval
Research Laboratory, Washington, D.C. (1987).

*******************************************************************

From: lewis@cs.umass.edu (a couple others I found)

Gates, K.H.; Adelman, L.; Lemmer, J. F. "Management of AI System Software
Development for Military Decision Aids" in Proc. IEEE Symposium on Expert
Systems in Government, pp. 36-42, Washington, DC (1985).

Silverman, Barry G. "Reflections on Some Next Generation of AI Tools"
Proc. 2nd IEEE Symposium on Expert Systems in Government, pp. 426-427,
McLean, Virginia (1986).

*******************************************************************

...and from Jack Wileden, our seminar's original reading list:

Balzer,R.
{\sl A 15 Year Perspective on Automatic Programming},
{\bf IEEE Transactions on Software Engineering},
Vol.SE--11, No.11, Nov.\ 1985, pp.1257--1268.

Barstow,~D.,
{\sl Domain Specific Automatic Programming},
{\bf IEEE Transactions on Software Engineering},
Vol.SE--11, No.11, Nov.\ 1985, pp.1321--1336.

Bobrow,~D.,
{\sl If Prolog is the Answer, What is the Question? or
What it Takes to Support AI Programming Paradigms},
{\bf IEEE Transactions on Software Engineering},
Vol.SE--11, No.11, Nov.\ 1985, pp.1401--1408.

Bobrow,~D. and Stefik,~M.,
{\sl Perspectives on {AI} Programming},
{\bf Science}, 231, 1986, pp.951--957.

Doyle,~J.,
{\sl Expert Systems and the ``Myth'' of Symbolic Reasoning},
{\bf IEEE Transactions on Software Engineering},
Vol.SE--11, No.11, Nov.\ 1985, pp.1386--1390.

Erman,~L. and Lesser,~V.,
{\sl System Engineering Techniques for Artificial Intelligence Systems},
in {\bf Computer Vision Systems}, Riseman and Hanson, eds., Academic Press,
1978, pp.37--45.

Green,~C., Luckham,~D., Balzer,~R., Cheatham,~T. and Rich,~C.,
{\sl Report on a Knowledge-Based Software Assistant},
Kestrel Institute Technical Report KES.U.83.2,
Palo Alto, Ca., 1983.

Houghton,~R. and Wallace,~D,
{\sl Characteristics and Functions of Software Engineering Environments:
An Overview},
{\bf Software Engineering Notes}, Jan.\ 1987, pp.64-84.

Huff,~K. and Lesser,~V.,
{\sl A Plan-Based Intelligent Assistant that Supports the Process of
Programming}, COINS Technical Report 87-09, Sept.\ 1987.

Kaiser,~G. and Feiler,~P.,
{\sl An Architecture for Intelligent Assistance in Software Development},
{\bf Proceedings Ninth International Conference on Software Engineering},
Monterey, Ca., 1987, pp.180--188,

Narain,~S., McArthur,~D. and Klahr,~P.,
{\sl Large-Scale System Development in Several {L}isp Environments},
{\bf Proceedings of the Eighth International Joint Conference on
Artificial Intelligence}, Karlsruhe, Federal Republic of Germany,
1983, pp.859--861.

Osterweil,~L.,
{\sl Software Processes are Software Too},
{\bf Proceedings Ninth International Conference on Software Engineering},
Monterey, Ca., 1987, pp.2--13.

Partridge,~D. and Wilks,~Y.,
{\sl Does AI have a methodology different from Software Engineering?},
New Mexico State University Technical Report MCCS-85-53,
Las Cruces NM, 1985.

Ramamoorthy,~C., Shekhar,~S. and Garg,~V.,
{\sl Software Development Support for {AI} Systems},
{\bf IEEE Computer}, Jan.\ 1987, pp.30--40.

Sheil,~B.,
{\sl Power Tools for Programmers},
{\bf Datamation}, Feb.\ 1983, pp.131--144.

Simon,~H.,
{\sl Whether Software Engineering Needs to be Artificially Intelligent},
{\bf IEEE Transactions on
Software Engineering}, Vol.SE--12, No.7, July 1986, pp.726--732.

Smith,~D., Kotik,~G. and Westfold,~S.
{\sl Research on Knowledge-Based Software Environments at Kestrel
Institute},
{\bf IEEE Transactions on Software Engineering},
Vol.SE--11, No.11, Nov.\ 1985, pp.1278--1295.

Subrahmanyam,~P.,
{\sl The ``Software Engineering'' of Expert Systems:
Is Prolog Appropriate?},
{\bf IEEE Transactions on Software Engineering},
Vol.SE--11, No.11, Nov.\ 1985, pp.1391--1400.

Taylor,~R., Baker,~D., Belz,~F., Boehm,~B., Clarke,~L., Fisher,~D.,
Osterweil,~L., Selby,~R., Wileden,~J., Wolf,~A. and Young,~M.,
{\sl Next Generation Software Environments: Principles, Problems
and Research Directions},
COINS Technical Report 87-63, July 1987.

Teitelman,~W., {\sl A Tour Through Cedar}, {\bf IEEE Transactions on
Software Engineering}, Vol.SE--11, No.3, March 1985, pp.285--302.

Teitelman,~W. and Masinter,~L.,
{\sl The Interlisp Programming Environment},
{\bf IEEE Computer}, Vol. 14, No.4, April 1981, pp.25--34.

Walker,~J., Moon,~D., Weinreb,~D. and McMahon,~M.,
{\sl The Symbolics Genera Programming Environment},
{\bf IEEE Software}, Nov.\ 1987, pp.36--45.

Waters, R.,
{\sl The Programmer's Apprentice: A Session with KBEmacs},
{\bf IEEE Transactions on Software Engineering},
Vol.SE--11, No.11, Nov.\ 1985, pp.1296--1320.

Waters,~R.,
{\sl KBEmacs: Where's the AI?},
{\bf The AI Magazine}, Vol. VII, No.1, Spring 1986, pp.47--56.

Wile,~D. and Allard,~D.,
{\sl Worlds: an Organizing Structure for Object-Bases},
{\bf Proceedings of the Second SIGSOFT/SIGPLAN Symposium on
Practical Development Environments},
Dec.\ 1986. (published as SIGPLAN Notices, Jan. 1987).

********************************************************************

------------------------------

End of AIList Digest
********************

∂25-Mar-88  0437	LAWS@KL.SRI.COM 	AIList V6 #57 - Theorem Prover, Models of Uncertainty
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 25 Mar 88  04:37:28 PST
Date: Thu 24 Mar 1988 21:40-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #57 - Theorem Prover, Models of Uncertainty
To: AIList@SRI.COM


AIList Digest            Friday, 25 Mar 1988       Volume 6 : Issue 57

Today's Topics:
  AI Tools - Boyer-Moore Theorem Prover in Common Lisp,
  Theory - Uncertainty and Imprecision

----------------------------------------------------------------------

Date: Sun, 20 Mar 88 08:57:29 PST
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Re: Boyer-Moore theorem prover in Common Lisp


      Boyer and Moore use a Symbolics, but I ported it over to the version
of Franz Lisp that comes with 4.3BSD several years ago, and they merged the
code back into their system.  The changes were minor.  The system
used to be available by anonymous FTP from UTEXAS-20.ARPA, and still may be
on-line there.  Bob Boyer is now at Computational Logic, Inc. in Austin,
Texas.

                                        John Nagle

------------------------------

Date: Tue 22 Mar 88 12:04:38-PST
From: Enrique Ruspini <RUSPINI@SRI-IU.ARPA>
Subject: Uncertainty and Imprecision

It is usually very difficult (as well as annoying) to engage in
another round of the "acrimonious" debate on approximate reasoning
since often the discussion deals with side-issues or supposed
"paradoxes" or "inconsistencies" of theories that require, for proper
understanding, a good deal of sophistication.

The contribution of Star to AILIST (10 Mar 88) provides, however,
reasonable grounds for discussion as Star makes reasonably succinct points
describing the bases for his subjectivist preferences. Unfortunately,
that conciseness is not matched by solid scientific arguments.

Before analyzing his four purportedly unique characteristics of the
subjectivist approach, let me just say that it is plain wrong to
consider fuzzy sets as an alternative to either Dempster-Shafer or
classical probability. It is actually an approach that complements
probabilistic reasoning by providing another type of insight on the
state of real world systems.

If, for example, we say that the probability of `"economic recession"
is 80%' we are indicating that there is either a known (if we are
thinking of probabilities in an objective sense) or believed (if we
take a subjectivist interpretation) tendency or propensity of an
economic system to evolve into a state called "recession".

If, on the other hand, we say that the system will move into a state
that has a possibility of 80% of being a recession, we are saying that
we are *certain* that the system will evolve into a state that
resembles, to a degree of at least 0.8 on a pre-agreed scale, a state
of recession (note the stress on certainty, with imprecision about the
nature of the state, as opposed to a description of a believed or
previously observed tendency).

Clearly, possibilistic and probabilistic approaches have different
epistemological bases. To try to force a unique view on uncertain
reasoning (and on all interpretations of probability) as suggested by
some is a bit similar to trying to understand the physics of light
using solely wave-based models.
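
To put the contrast in symbols (a textbook reminder, not a formulation
taken from the contribution under discussion): for disjoint events a
probability measure is additive, while a possibility measure is maxitive,
with necessity as its dual:

    \[ P(A \cup B) = P(A) + P(B), \qquad
       \Pi(A \cup B) = \max(\Pi(A), \Pi(B)), \qquad
       N(A) = 1 - \Pi(\bar{A}). \]

Thus a possibility of 0.8 for "recession" bounds how unsurprising that
state would be, while a probability of 0.8 quantifies a tendency or a
degree of belief.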

Passing now to the unique distinguishing characteristics of
"mainstream" approaches let us take them one by one:

> Subjective probability ...is the only approach that has the
> characteristics of

> 1. Being based on a few, acceptable simple axioms.

The multiple assertions contained in this first statement are all
either wrong or, at best, rather subjective matters.

Whether or not Savage's axioms (Foundations of Statistics) are
acceptable (whatever that means) or sufficient is arguable. Readers
may find it interesting to look at Savage's axioms and decide by
themselves if they are either "simple" or "acceptable". The
insufficiency of such systems to capture enough aspects of rational
behavior, for example, has been criticized (e.g., Kyburg, H., J. Phil.,
1974).  It is interesting to note also that some of these systems
(e.g., Cox) contain axioms that many find objectionable because they
appear to have been introduced solely to validate certain
characteristics of the approach (e.g., Bayes conditionalization). So
much for acceptability.

As for simplicity, it is interesting to note that if one does away with
some of the axioms in systems such as Savage's or Cox's, one immediately
gets systems (which are, necessarily, simpler) characterized by
interval- (rather than number-) valued probabilities. See, for example,
the critique of Savage's axioms in Suppes, P., The Measurement of
Belief, J. Roy. Stat. Soc., 1974.

There are also enough paradoxes around showing that rather rational
folk (including prominent subjectivists) often engage in behavior
which is inconsistent with their own prescriptions (e.g., the
well-known "Ellsberg" and "Allais" paradoxes). Of course one can
define "rational behavior" any way one wants and declare that even
oneself is not always rational but the shortcomings of these tricks
in word-play are clear: one should prescribe standards for rationality
and find out if one's recipe always assures compliance with them, rather
than define "rational behavior" solely as that which comes out of
following one's preferred procedures (which are far from being
noncontroversial) !!!

To end this point it is important to note that axiomatizations of
fuzzy sets and D/S exist (including recent development of
model-theoretic semantics for them --- more below on this) although I
must admit that I feel that it is rather silly to try to defend or
attack approaches on such bases: often systems (including those used
to reason) are very complex and it is ridiculous to expect that formal
systems will be capable of capturing such complexity using
straightforward formalisms. Under such conditions, simplicity should be
a reason for suspicion and concern.

> 2. Being able to connect directly with decision theory
> (Dempster-Shafer can't).

This is nonsense. Interval probability approaches (of which D/S is an
example) may be applied to decision analysis by simply extending the
techniques proposed by subjectivists. The difference (and I suspect
that this is what Star means by "connection") is that by producing
intervals of expected values for each decision they sometimes fail to
tell whether one decision is better or worse than another. There is
nothing wrong with this, though: all that the results tell you is that
there are neither rational bases nor factual data to assure you that A
is preferable to B, a true fact of life. Insisting that one approach
is better because it always produces a needed decision (the famous
"pragmatic necessity") even when the factual support is not there
leaves one wondering about such a choice of standards (if something
must be done, then an interval-based approach followed by coin
flipping will produce results that are as "reliable" or "rational").
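
As a toy numerical illustration (the numbers are made up): suppose the
available evidence only constrains the expected utilities of two actions to

    \[ E[u(A)] \in [3, 6], \qquad E[u(B)] \in [5, 8]. \]

The intervals overlap, so neither action dominates; declaring one of them
"the rational choice" adds information that the data do not contain.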

It is interesting to note that if the word "belief" is replaced by
"temperature" in some of these curious epistemological arguments, then
we may convince ourselves that we always know the temperature of
everything (or act as if we do) and decide to do away with
thermometers.  Readers may also wonder about the curious circularity
of support in the pairs of arguments: "We always know degrees of
belief about any proposition (or act as if we do) because in the end
we always do something" and "We do something that is best because we
always have rationally derived degrees of belief."

It is also interesting to wonder whether Bayesian decision theory
(based primarily on the notion of expected utility as the unique
performance measure for decisions) is sufficient for the variety of
complex problems found in modern applied science (e.g., "What is the
sense of choosing business strategies that are only good in the
long-run when a (single) unfortunate turn of events may leave us
broke?").

> 3. Having efficient algorithms for computation.

I do not know where Star got this notion. Efficient algorithms are
available for both D/S and fuzzy sets. Being present at the last AAAI
in Seattle, I recall that Judea Pearl mentioned the great
computational and formal similarities between some of the Bayesian and
D/S network algorithms (e.g., some of Pearl's algorithms and the
procedures for evidence combination of Dempster/Kong). It is difficult
to believe that if a prominent Bayesian makes such an assessment, one
class of procedures could be efficient while the other is not (of
course, efficiency per se is of little value if your method gives you
the wrong solution!)

As for fuzzy sets, their very purpose is to simplify the analysis of
complex systems by trading unneeded precision for increased depth of
understanding. AIListers may be interested to know more about an
operational subway system control in Japan (in the city of Sendai)
designed by Hitachi over a reported 8-year period, which uses fuzzy
logic. Works describing this effort (Proceedings IFSA 1987) indicate
also the reasons why fuzzy logic was used over other alternatives
(anybody with a background in control theory will shudder at the
complexities of handling --even if they were known-- the required
probabilities, covariance matrices, etc.  involved in a classical
stochastic control approach to large, complex control systems!).

> 4. Being well understood.

I do not know whether Star is criticizing other theories (which have
been studied only for a few years) for not being as well understood as
classical probability. Setting aside whether or not probability
(particularly in its subjective interpretation) is "well understood"
(a matter much disputed among philosophers of probability), I do not
feel that it is particularly surprising that recent technological
developments are not as well developed or understood as techniques
that have been around for a long while (one can only wonder why this
makes the old techniques more acceptable for dealing with new classes
of problems).

If one looks at the applicability and current rate of progress in new
approaches, however, one sees a different story. Both fuzzy sets and
D/S are advancing strongly at the theoretical and applied level.
Dempster/Shafer has been found to have solid foundations rooted in
classical probability and epistemic logic (Ruspini, Proc. 1987 IJCAI;
SRI Technical Note 408). Recently formal semantics have been developed
for both fuzzy sets and D/S. [Ruspini, IPMU 1988]. Readers may want to
contrast this vigorous advance with subjectivists, who, after 40 or 50
years, have failed to generate convincing arguments supporting their
conceptual reliance on algorithms that assume that we always know
probabilities of events (or act as if we do) even when rational or
empirical bases to support such knowledge are admittedly absent !

I do not know where Star is looking ("Look at what people are doing
with Dempster-Shafer belief functions or fuzzy sets.") but I, for one,
have looked enough, and with a considerable background in formal
sciences, I do not see anything that brings to mind the images that
these approaches evoke in Star's mind ("mainstream approaches" versus
"more experimental").

To repeat myself, it is ridiculous to expect to find strong formalisms
around newly evolving theories (Would anybody expect Galilean
mechanics to have been formalized before being seriously considered?).
The state of development of both D/S and fuzzy sets is neither
unfounded nor solely "experimental" (a questionable epithet) nor
non-"mainstream" (another convenient but inaccurate qualifier), however.
One should be very leery, on the other hand, of questionable practices
that purport to derive "rational" decisions in the absence of
knowledge: a miracle that I have described elsewhere (Comp.
Intelligence, February 1988) as epistemological alchemy.

I would like to say that my comments should not be construed to negate
the value of classical probability in the study of uncertainty.
Furthermore, I believe that it is important to continue to study the
concept of belief and the problems associated with its quantification
and recognize the positive contributions that subjectivists have made
(and, undoubtedly, will continue to make) to the art and science of
probabilistic reasoning.

I believe, however, that, while striving to improve existing
methodologies, we should keep an open mind towards novel
concepts while realizing that the former approaches might not be (at
least yet!) as comprehensive and efficient as some zealously purport
them to be.

------------------------------

End of AIList Digest
********************

∂29-Mar-88  0158	LAWS@KL.SRI.COM 	AIList V6 #58 - Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Mar 88  01:58:44 PST
Date: Mon 28 Mar 1988 20:40-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #58 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Tuesday, 29 Mar 1988      Volume 6 : Issue 58

Today's Topics:
  Seminars - Learning from Physical Analogies (Ames) &
    Explaining Change in Schedules and Budgets (CMU),
  Conferences - Expert Systems in Agriculture &
    Workshop on use of APL in AI &
    ARTISYST: AI for Systematics

----------------------------------------------------------------------

Date: Tue, 22 Mar 88 12:10:30 PST
From: CHIN%PLU@ames-io.ARPA
Subject: Seminar - Learning from Physical Analogies (Ames)


***************************************************************************
              National Aeronautics and Space Administration
                         Ames Research Center

                        SEMINAR ANNOUNCEMENT


SPEAKER:   Mr. Brian C. Falkenhainer
           University of Illinois

TOPIC:     Learning From Physical Analogies

ABSTRACT:

To make programs that understand and interact with the world as well as
people do, we must duplicate the kind of flexibility people exhibit when
conjecturing plausible explanations of the diverse physical phenomena they
encounter.  We view this flexibility as arising from an ability to detect
similarities, within and across domains, between the various observed
behaviors.  Interpreting an observation often requires the flexible integration
of knowledge from diverse sources and the formation of new theories about
the world.  For example, understanding processes such as heat flow and
diffusion may involve reference to known theories of liquid flow, while
explaining the behavior of an oscillating LC electric circuit may require
a knowledge of springs.

Verification-Based Analogical Learning is an approach to theory formation
and revision which relies on analogical inference to hypothesize new theories,
and gedanken experiments (i.e., simulation) to analyze their validity.
It is an iterative process of hypothesis formation, verification, and revision
which focuses on the problem of validating analogically derived models.
This talk will describe the basic elements of verification-based analogical
learning, the kinds of flexible yet constrained reasoning they enable, and
discuss its implications for analogical reasoning in general.  A number
of examples from the current implementation, PHINEAS, will be used to explain
and demonstrate the utility and diversity of this approach.


BIOGRAPHY:

Mr. Falkenhainer is a graduate student in the Ph.D. program in
Computer Science at the University of Illinois, and is a Research
Assistant in the Qualitative Reasoning Group.  His research in artificial
intelligence focuses on the general tasks of theory formation and observation
interpretation.  A paper which appeared in the journal, Machine Learning,
summarized the results of his master's thesis on the discovery of functional
relationships in numeric data.  A general tool for performing various types
of analogical mappings, called the Structure-Mapping Engine (SME), is
extensively described and analyzed in a paper recently submitted to the
journal, Artificial Intelligence.  In support of SME, a probabilistic
generalization to traditional truth-maintenance systems was developed and
is described in a paper from the 1986 workshop on uncertainty in AI.



DATE: Monday, March 28, 1988  TIME: 3:00 - 4:00 pm     BLDG. 244   Room 209

                        --------------


POINT OF CONTACT: Marlene Chin   PHONE NUMBER: (415) 694-6525
     NET ADDRESS: chin%plu@ames-io.arpa

***************************************************************************

VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.
***************************************************************************

------------------------------

Date: Mon, 28 Mar 88 10:39:04 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Explaining Change in Schedules and Budgets (CMU)


                         AI SEMINAR

TOPIC:       Explaining Change in Schedules and Budgets

SPEAKER:     Steve Roth                         (412) 268-7690
             DH 3321, The Robotics Institute
             Carnegie Mellon University, Pittsburgh, PA  15213
             roth@isl1.ri.cmu.edu

WHEN:        Tuesday, April 5, 1988   3:30pm

WHERE:       Wean Hall 5409

                        ABSTRACT

This talk will present an approach to the automatic explanation of changes
in the results generated by quantitative project scheduling and budget
systems. Previous experience with CALLISTO, an experimental project
management system, revealed that managers have difficulty isolating and
determining the relationships among changes generated by systems like these.
These systems usually involve large algebraic models with frequently
changing inputs, which managers need to analyze frequently to track the
course of their projects. Our goal was to support this task using techniques
for identifying relevant and significant changes, composing well structured
sequences of assertions for describing these and selecting and composing one
or more graphical styles or pictures of the relevant data.

Our approach accomplishes this by synthesizing three previous approaches:
@i(comparative analysis), a technique for identifying relevant causes of
change in financial models; approaches to text planning based on rhetorical
models (e.g. of descriptions of database structure or justifications of
reasoning for expert systems); and an approach to automatic selection of
displays for conveying quantitative data.  Descriptions of change are
generated using combinations of text and graphical displays, and they
therefore serve as a vehicle for exploring the interaction of the two
modes of presentation and the need for coordination.

------------------------------

Date: Wed, 23 Mar 88 13:21 EST
From: Thieme@BCO-MULTICS.ARPA
Subject: Conference - Expert Systems in Agriculture


                               CALL FOR PAPERS


 TITLE:   Integration of Expert Systems with Conventional Problem Solving
                         Techniques in Agriculture

 SPONSORED BY:  AAAI Applied Workshop Series

 DESCRIPTION:

      Problem solving techniques such as modelling, simulation, optimization,
 and network analysis have been used for several years to help agricultural
 scientists and practitioners understand and work with biological problems.
 By their nature, most of those problems are difficult to define quantita-
 tively.  In addition many of the models and simulations that have been
 developed are not "user-friendly"  enough to entice practitioners to use
 them.  As a result, several scientists are integrating expert system
 technology with conventional problem solving techniques in order to increase
 the robustness of their systems as well as to increase usability and to aid in
 result interpretation.  The goal of this workshop is to investigate the
 similarities and differences of leading scientists' approaches and to
 develop guidelines for similar work in the future.

 CONDITIONS OF PARTICIPATION:

      Primary authors (presumably primary investigators) of submitted
 manuscripts will be invited to participate in the workshop if their
 manuscript is selected.  Manuscripts will be submitted in full by JUNE 1,
 1988.  The manuscripts will be reviewed for originality and clear
 presentation of the topic of integration by a committee appointed by the
 coordinating committee.  Only 40 participants will be selected in order to
 maximize free exchange of ideas.  The manuscripts will be distributed to
 the participants prior to the workshop in order to help them prepare
 questions for other authors.  The proceedings will be published in a
 peer-reviewed journal.

 LOCATION AND TIME:

      August 10-12, 1988, at the Hyatt Hotel in San Antonio, TX

 FOR MORE INFORMATION CONTACT:

 Dr. A. Dale Whittaker                            (409) 845-8379
 Agricultural Engineering Department
 Texas A&M University                             WHITTAK at TAMAGEN.BITNET
 College Station, TX 77843-2117


 Dr. Ronald H. Thieme                             (617) 671-3772
 Honeywell Bull, Inc.
 300 Concord Road                                 THIEME at BCO-MULTICS.ARPA
 Mail Station 895A
 Billerica, MA 01821


 Dr. James McKinion                               (601) 323-2230
 Crop Science Research Laboratory
 Crop Simulation Research Unit
 P.O. Box 5367
 Mississippi State, MS  39762-5367


 Earl Kline                                       (409) 845-3693
 Agricultural Engineering Department
 Texas A&M University                             KLINE at TAMAGEN.BITNET
 College Station, TX  77843-2117

------------------------------

Date: 24 Mar 88 19:44:27 GMT
From: portal!cup.portal.com!pcb@uunet.uu.net
Subject: Conference - Workshop on use of APL in AI


                      WORKSHOP ANNOUNCEMENT

    A workshop on APL Techniques in Expert Systems will be held at Syracuse
August 16-20, 1988 under the joint sponsorship of ACM SigAPL and Syracuse
University. The co-chairs are Garth Foster (Dept. of Electrical Engineering and
Information Science) and James Kraemer (School of Business).

    The workshop will include group lectures and demonstration plus parallel
hands-on sessions on both mainframe and micro-based systems. Implementation
vehicles include IBM APL2, STSC APL*Plus, and Sharp APL under Unix(TM). Topics
include:

    Comparison of rule-based systems: Kenneth Fordyce and Gerald Sullivan, IBM,
Kingston, NY;

    Fuzzy sets: Andreas Geyer-Schulz, the Economics University of Vienna;

    Associative semantic processing: W. D. Hagamen MD, Cornell Medical College,
New York;

    Multi-mode logic: John McInturff, Boeing Advanced Systems, Seattle;

    Knowledge-representation case studies with SASNEST: Graeme Jones & David
Bonyun, I.P. Sharp Associates, Ottawa.

    The workshop fee is $600;  there are discounts for early registration and
to members of ACM or SigAPL. Registration is limited to 50; experience in APL
programming is required. For registration: Dr. Davice Chimene, University
College, Syracuse University, Syracuse, NY 13210; (315) 423-3269. Technical and
program information: Dr. James R. Kraemer, Quantitative Methods, School of
Business, Syracuse University, Syracuse, NY 13210; (315) 423-3747. E-mail:
kraemer@suvm.bitnet


                                        SigAPL Program Committee
                                        3787 Louis Road
                                        Palo Alto, CA 94303-4512
                                        Contact: Paul Berry  415 494-2031
                                        E-mail: pcb on IPSA network;
                                        pcb@cup.portal.com

_______ Unix is a trademark of AT&T - Bell Labs.

------------------------------

Date: Mon, 28 Mar 88 09:13:53 PST
From: Michael Walker <WALKER@SUMEX-AIM.Stanford.EDU>
Subject: Conference - ARTISYST: AI for Systematics


                     The ARTISYST Workshop

          (ARTificial Intelligence for SYSTematics)


Uses of Artificial Intelligence and modern computer methods for
                 systematic studies in biology.

    Contact: Renaud Fortuner at (916) 445-4521 or through
BITNET or ARPANET at rfortuner@ucdavis.

An organization committee with R. Fortuner, J. Sorensen (Calif. Dept.
of Food & Agriculture), J. Milton, J. Diederich (UC Davis), M. Walker
(Stanford), J. Woolley and N. Stone (Texas A&M) is planning to study
the uses of artificial intelligence and modern computer methods for
systematic studies in biology. This study was suggested by the
Systematic Biology panel of the National Science Foundation.

The committee will recruit a group of about twenty systematists, and
about a dozen computer scientists interested in the possible
application of modern computing techniques to a new domain. The
collaborators will meet twice, in early January 1989 and March 1989
during two three-day workshops at University of California, Davis.

During the first workshop the participants will make a list of
questions and problems in systematics that might be solved by modern
computing techniques such as applications of artificial intelligence
in expert systems, computer vision, databases, graphics, etc. After
this initial workshop, small workgroups of specialists from both
fields will collaborate to characterize the options in terms of
computing techniques, and to define the most promising approaches to
their solutions.

During a second workshop, the importance for systematic biology
of each of the problems studied, and the current, short, and long
term availability of relevant computer techniques will be
discussed. A final report will serve as guidelines for NSF
funding for future applications of computer techniques to
systematic biology. The proceedings of both workshops will be
published to serve as a review of state-of-the-art computer
methods that may be of use in systematics.

Systematic biology is the science that studies the relationships among
organisms, and that classifies these organisms according to these
relationships.  The National Science Foundation is interested in
supporting the implementation of expert systems and modern computing
techniques for systematic biology. Currently, systematics relies
heavily on statistics and algorithmic computer programming. However,
different computer methods may be used to solve some of its problems.
Expert systems can help with diagnostic identification, correct
application of the rules of nomenclature, etc. Capture of the data
requires computer vision and image analysis. Museum curation and
retrieval of published information could be helped by intelligent
access to large databases. Computer graphics can be used for
identification and teaching. Finally the systematic studies and the
definition of a classification may be helped by intelligent access to
databases and relevant statistics.

The domain of systematic biology is vast and its boundaries are ill
defined. However it is possible to define sub domains that may be
studied separately. First, the organisms to be studied must be
recovered and measured. In this preliminary phase, sub domains
include:

     - capture of the data: observation of shapes, measurement of
lengths, angles, areas, position of one feature in relation to
another, etc.

     - museum curation: finding specimens relevant to the study
in museum collections by an intelligent search of the collection
records.

     - identification of the specimens during field surveys: it
is necessary to know before studying a group of organisms
whether an organism found in the field belongs to this group.

     - information retrieval from a variety of published sources.

A second phase is represented by taxonomic analyses of the
relationships existing between organisms and their characteristics.
It includes:

     - ontogeny: the study of the development of the embryo, and
the appearance of ancestral features in the embryo that are then
used to support hypotheses on its phylogeny.

     - biogeography: geographical distribution of groups of
organisms.

     - fossil record: the study of ancestral states of characteristics
and their evolution along a fossil sequence.

     - comparative anatomy: comparison of the aspects taken by a
feature in related organisms

     - DNA and gene analysis

     - definition of apomorphies: search for evolved characters
present in all the members of a group.

     - resolution of homoplasies: search for characteristics that
appear similar in two groups, but that are the result of parallel
or convergent evolution rather than originating in a common
ancestor.

     - weighting the characteristics: giving more importance to
characteristics that supposedly are strong indicators of
phylogenetic relationships.

     - transformation of the raw data: for example, a log
transformation may restore normality.

These analyses result in the ordering of the organisms studied into
groups arranged into some sort of relational networks (trees).
Taxonomic analyses and the construction of trees are in fact
integrated processes, and they may have to be treated as a whole in
the ARTISYST Project. Several methods compete as the best way to
define a classification tree: parsimony (the shortest tree is the
best), maximum likelihood (each taxon is added in turn at the place on
the tree where it fits best), ordination (which relies on multivariate
analyses), etc. All approaches make extensive use of statistics and
algorithmic computer programming, but it has been said that most
systematic problems cannot be solved by any algorithm.  The
availability of expert systems may suggest other, non-algorithmic
approaches.

Once a tree has been accepted as a working hypothesis, the various
taxa in the tree are named according to the rules of the International
Code of Zoological Nomenclature (or its Botanical correspondent) and
the jurisprudence established over the years by the rulings of the
International Commissions of Zoological or Botanical Nomenclature. It
may be possible to include rules and jurisprudence into an
expert-system similar to legal systems currently under development.

Diagnostic identification is the process through which an unknown
specimen is allotted to its correct place in an existing
classification. This phase is the most promising for the application
of current expert system technology.

Finally, any new method must have a very friendly man-machine
interface to have a chance to be accepted by most systematists.

Each topic will be studied by a small workgroup including one or
several systematists and one or several computer scientists.
Computer scientists interested in participating in this project
should contact Renaud Fortuner at (916) 445-4521 or through
BITNET at rfortuner@ucdavis.

------------------------------

End of AIList Digest
********************

∂29-Mar-88  0519	LAWS@KL.SRI.COM 	AIList V6 #59 - POPLOG, microExplorer, Inference, Cognitive Agent   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Mar 88  05:19:24 PST
Date: Mon 28 Mar 1988 20:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #59 - POPLOG, microExplorer, Inference, Cognitive Agent
To: AIList@SRI.COM


AIList Digest            Tuesday, 29 Mar 1988      Volume 6 : Issue 59

Today's Topics:
  Queries - Control of Batch Reactors & DEFT Project &
    AI in CAD & Conversational Programs & Expert Systems,
  AI Tools - POPLOG Availability in the US &
    Parallel Inference Mechanism & TI microExplorer,
  Project - Cognitive Agent

----------------------------------------------------------------------

Date: Fri, 25 Mar 88 12:17:28 IST
From: Oren Regev <CERRLOR%TECHNION.BITNET@CUNYVM.CUNY.EDU>
Subject: Re: AI in ISRAEL


 Hello!

   We are working here at the I.I.T. (Israel Institute of Technology)
on the implementation of expert systems for the control of batch reactors.

   This is done by building a rule base and connecting it to a simulator
based on a model of such a system.

   The control of such a reactor is so hard because of internal heat
generation during the reaction (it is an exothermic reaction).

   We are interested in any information that you can send us.

    Oren Regev  Chemical Engineering Faculty  CERRLOR AT TECHNION

------------------------------

Date: 24 Mar 88 15:42:09 GMT
From: ucsdhub!hp-sdd!ncr-sd!rb-dc1!tjeff@sdcsvax.ucsd.edu  (Jeff
      Enderwick)
Subject: DEFT

Does anyone know of any ref's for IBM's DEFT project (Diagnostic
expert system for disk drives) ?

        Thanks - Jeff

        sdcsvax!ncr-sd!rb-dc1!tjeff
        ↑
        |
        sdcsvax.UCSD.EDU

------------------------------

Date: 25 Mar 88 12:28:32 GMT
From: mcvax!unido!infko!uro@uunet.uu.net  (Uwe und Roland)
Subject: need info on AI in CAD

In doing research on intelligent CAD systems, I'm looking
for literature, technical/research reports and conference
proceedings on the following topics:

- Architecture of intelligent CAD systems
- Design and implementation of AI models which enable
the user to include structural knowledge (i.e. design knowledge)
going beyond topological knowledge when describing his objects to the
CAD system. I'm looking for a kind of shell surrounding the realization
of the topological/geometrical model, but being as independent
as possible of the latter.
- Applications of results of neural-network research
in the construction of CAD systems

Does anybody know whether there are "intelligent" CAD systems
already available for "real-life" applications?

--
Roland Berling, Uni Koblenz (EWH), Informatik
               Rheinau 3-4, D-5400 Koblenz  (West Germany)
UUCP: ..!unido!infko!uro             uro@infko.UUCP

------------------------------

Date: 25 Mar 88 21:29:41 GMT
From: ok2@psuvm.bitnet
Subject: conversations?


     Hi, I'm trying to locate versions of PARRY and similar programs that
simulate conversation with a human being (such as ELIZA and SHRDLU).
     I do know that PARRY, which simulates conversing with a paranoid person,
was written by K.M. Colby, based on S.S. Tomkins' information-processing
account of paranoia.
     Also, I know that SHRDLU, which simulates talking to and giving orders
to a robot in a computer-generated simulation, was written by Terry Winograd.

     Both PARRY and ELIZA take advantage of the tactic of predefining the
context of the conversation (a conversation with a paranoiac, or a conversation
with a therapist) to lend apparent meaning to sentences the program generates
from key words picked out of the human's sentences.
     SHRDLU, on the other hand, uses an internal model of the 'world' which
the simulated 'robot' is in.  The program is limited to talking about the
actions of the robot in this simulated world....
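
As a toy illustration of that keyword trick (a few hypothetical rules in
Prolog, nothing like the real ELIZA or PARRY scripts):

    % Hypothetical keyword-response rules.  The input is a list of word
    % atoms; the first matching keyword determines the reply.
    reply(Words, 'Tell me more about your family.') :-
        member(mother, Words), !.
    reply(Words, 'Do machines worry you?') :-
        member(computer, Words), !.
    reply(_, 'Please go on.').

    % ?- reply([my, mother, hates, computers], R).
    % R = 'Tell me more about your family.'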

    Any information about where to find copies of these programs (for IBMpc,
Apple, or VMS) or about programs like these (or better than them...) will be
greatly appreciated.

                                    Steven

"To err is human, but when the eraser wears out before the pencil,
     you're overdoing it"  - from Soundingboard

------------------------------

Date: Fri, 25 Mar 88 09:40:17 EDT
From: CMSBE1%EOVUOV11.BITNET@CUNYVM.CUNY.EDU
Subject: Please, send me information about...

Date: 25 March 1988, 09:26:16 EDT
From: Juan Francisco Suarez Vicente  (KIKO)          CMSBE1   at EOVUOV11
To:   AILIST-REQUEST at KL.SRI

Hi from Spain !!!
Could anyone send me (via electronic mail) any information on these topics:
- plausible reasoning and certainty factors
- knowledge acquisition in medical diagnosis
- improving diagnoses obtained using certainty factors
- medical knowledge representations other than typical rule-based systems?
Thanks a lot in advance for all answers. If nobody has this information
in electronic form, I'm looking for bibliography too.
Please send your mail for AILIST distribution, or directly to:
                   CMSBE1@EOVUOV11      (EARN network)

------------------------------

Date: Fri, 25 Mar 88 09:15:58 GMT
From: Aaron Sloman <aarons%cvaxa.sussex.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: request for information about POPLOG

>From: uazchem!dolata@arizona.edu (Dolata)
>Subject: POPLOG availability in the US

>Can someone give me a pointer to the party who distributes POPLOG in the
>US?   Since my net connections are a bit rocky,  could you send me both
>email and US Snail mail addresses?? (Phone number?)

This news item has not reached me direct. Someone forwarded it without
the Snail mail address. So I thought I should reply both direct and
via AI Digest.

POPLOG is developed by the School of Cognitive Sciences at Sussex
University, Brighton, England and distributed world wide by Systems
Designers. However, we have an arrangement with Prof Robin Popplestone
at the University of Massachusetts, Amherst, to distribute it at a much
reduced price to academics in the US and Canada.

He can also arrange for provision of evaluation licences, provide
demonstrations, etc. It is now possible to have ML in Poplog though it
is not yet part of the official release.

US Contact addresses for POPLOG:
For Academic enquiries/sales in USA and Canada

    Prof Robin Popplestone
    Dept. of Computer and Information Science
    Lederle Graduate Research Center
    University of Massachusetts
    Amherst, MA  01003, USA

Email pop@cs.umass.edu

or
    Prof Robin Popplestone
    Computable Functions Inc.,
    35 South Orchard Drive,
    Amherst, MA 01002, USA      Phone(413) 253-7637

For non-academic enquiries/sales
    Systems Designers International Inc
    Industrial Division
    New Castle Corporate Commons,
    55 Read's Way,
    New Castle,
    Delaware 19720, USA
    Phone (302) 323 1900  (800)888-9988

I hope that helps.
Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
              aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac
As a last resort (it costs us more...)
    UUCP:     ...mcvax!ukc!cvaxa!aarons
Phone:  University +(44)-(0)273-678294  (Direct line. Diverts to secretary)

------------------------------

Date: 25 Mar 88 19:01:46 GMT
From: nau%frabjous@mimsy.umd.edu (Dana Nau)
Reply-to: nau@frabjous.UUCP (Dana Nau)
Subject: Re: POPLOG availability in the US

In article <8803210643.AA02468@uazchem.SGI> dolata@uazchem.UUCP (Dolata)
 writes:
>Can someone give me a pointer to the party who distributes POPLOG in the
>US?   Since my net connections are a bit rocky,  could you send me both
>email and US Snail mail addresses?? (Phone number?)   Thanks for the help.


Why don't you get in touch with Robin Popplestone, the author?  He's in the
department of Computer and Info. Science at UMass; I think his e-mail address
is pop@cs.umass.edu.

Dana S. Nau                             ARPA & CSNet:  nau@mimsy.umd.edu
Computer Sci. Dept., U. of Maryland     UUCP:  ...!{allegra,uunet}!mimsy!nau
College Park, MD 20742                  Telephone:  (301) 454-7932

------------------------------

Date: 24 Mar 88 20:03:32 GMT
From: maui!leon@locus.ucla.edu  (Leon Alkalaj)
Subject: Re: parallel inference mechanism


There are (at least) two reports from ICOT on the KABU-WAKE method
for parallel inference:
        TM-0131, july 1985: A New Parallel Inference Mechanism Based
        on Sequential Processing,
        Y. Sohma, K. Satoh, K. Kumon, H. Masuzawa and A. Itashiki.

        TR-150, March 1986: KABU-WAKE: A New Parallel Inference Method and
        Its Evaluation,
        K. Kumon, H. Masuzawa, A. Itashiki, K. Satoh and Y. Sohma.


Leon Alkalaj,
UCLA Computer Science Dept.

------------------------------

Date: 25 Mar 88 23:58:22 GMT
From: voder!apple!striepe@ucbvax.Berkeley.EDU  (Harald Striepe)
Subject: Re: TI microExplorer (Mac II coprocessor) ...


In article <1180@kodak.UUCP> luciw@kodak.UUCP (bill luciw) writes:
>Well, our KBS Lab is ordering a microExplorer, the coprocessor for the Mac II.

>1) What impact (if any) does the alledged lack of "true" DMA have on the
>paging performance of the microExplorer?
I do not have exact figures on this, but overall performance is 50%+ that of
TI's Explorer II; contributors to this differential are the reduced clock speed
of the CPU (to lower power consumption), a different memory organization
(cache), and disk performance. However, it is nice to have a single file system
rather than dealing with multiple partitions.

>2) Is TI's implementation of RPC available to other applications (such as those
>developed under MPW)?
Texas Instruments is publishing the RPC spec.

>3) How well integrated is the microExplorer into the rest of the Mac
>environment - (cut, copy, paste, print on an AppleTalk printer) ?
The microExplorer uses the Apple peripheral devices. Although the user
interface integration is not yet complete (you are running an Explorer window
system in one or more Macintosh windows under MultiFinder), TI is working
aggressively on deeper integration.

>4) Can you install the "load bands" on third party disks (SuperMac 150) or do
>they need to remain on the Apple hard disk (the load bands are supposed to be
>normal, finder accessible files)?
Although we have not tried this, there should be no reason why this should not
work (a Macintosh volume is a Macintosh volume).

>5) How much of a hassle is it to port applications over to the little beastie
>from a normal Explorer (what about ART, KEE, SIMKIT, etc.)?
Some vendors have installed their application in less than a day. Inference,
IntelliCorp and Carnegie Group all announced support of the microExplorer.

>6) Do any benchmarks (ala Gabriel) exist for this machine?
You might want to contact TI; they ran a whole suite. Unfortunately, I do not
have the details.

>7) How about ToolBox access from the Lisp Environment? (or am I dreaming?)
Not available in the first release, but a kit is planned. Since RPC is public,
you would have to "roll your own" in the meantime. Another approach would be
to use Coral's Allegro CL on the Macintosh side, implement the RPC, and...

>Thankyou in advance for all your comments and I will post our experiences
>(good or bad, of course) as they develop ...
Although the microExplorer is supported by TI, we would all be interested
in hearing about your experiences, and we would definitely be willing to help
you reach the right people should you run into unforeseen problems getting
support.

--
Harald Striepe
Business Development Manager, Artificial Intelligence
Apple Computer, Inc.
email: striepe@APPLE.COM       AppleLink: STRIEPE2

------------------------------

Date: 25 Mar 88 14:57:47 GMT
From: sunybcs!nobody@rutgers.edu
Reply-to: sunybcs!rapaport@rutgers.edu (William J. Rapaport)
Subject: Re: Software Wanted to Build a Mind


In article <8803250637.AA22481@ucbvax.Berkeley.EDU> POPX@VAX.OXFORD.AC.UK
writes:
>                             SOFTWARE WANTED
>                                    -
>                             TO BUILD A MIND


You might be interested in the following document, excerpts of which
follow; the full document is available by contacting us.

                                        William J. Rapaport
                                        Assistant Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {ames,boulder,decvax,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||



             DEVELOPMENT OF A COMPUTATIONAL COGNITIVE AGENT

                      Stuart C. Shapiro, Director
                William J. Rapaport, Associate Director

                          SNePS Research Group
                     Department of Computer Science
                            SUNY at Buffalo
                             226 Bell Hall
                           Buffalo, NY 14260
            shapiro@cs.buffalo.edu, rapaport@cs.buffalo.edu

OVERVIEW.

The long term goal of the SNePS Research  Group  is  to  understand  the
nature  of intelligent cognitive processes by developing and experiment-
ing with a computational cognitive agent that will be able  to  use  and
understand  natural language, and will be able to reason and solve prob-
lems in a wide variety of domains.

...

ACCOMPLISHMENTS.

In pursuit of our long term goals, we have developed:

 (1)   The  SNePS  Semantic  Network  Processing  System,  a  knowledge-
       representation/reasoning system that allows one to design, imple-
       ment, and use specific knowledge representation  constructs,  and
       which  easily  supports nested beliefs, meta-knowledge, and meta-
       reasoning.

 (2)   SNIP,  the  SNePS  Inference  Package,  which  interprets   rules
       represented in SNePS, performing bi-directional inference, a mix-
       ture of forward chaining and backward chaining which focuses  its
       attention  on the topic at hand.  SNIP can make use of universal,
       existential, and numerical quantifiers, and a  specially-designed
       set  of propositional connectives that include both true negation
       and negation-by-failure.

 (3)   Path-Based Inference, a very general method of  defining  inheri-
       tance rules by specifying that the existence of an arc in a SNePS
       network may be inferred from the existence  of  a  path  of  arcs
       specified by a sentence of a ``path language'' defined by a regu-
       lar grammar.  Path-based reasoning is fully integrated into SNIP.

 (4)   SNeBR, the SNePS Belief Revision system, based on SWM,  the  only
       extant, worked-out logic of assumption-based belief revision.

 (5)   A Generalized Augmented Transition  Network  interpreter/compiler
       that  allows  the  specification  and  use of a combined parsing-
       generation grammar, which can be used to parse a natural-language
       sentence  into  a SNePS network, generate a natural-language sen-
       tence from a SNePS network,  and  perform  any  needed  reasoning
       along the way.

 (6)   A theory of Fully Intensional Knowledge Representation, according
       to  which  we  are developing knowledge representation constructs
       and grammars for the Computational Cognitive Mind.   This  theory
       also  affects the development of successive versions of SNePS and
       SNIP.  For instance, the insight we  developed  into  the  inten-
       sional  nature  of  rule  variables led us to design a restricted
       form of unification that cuts down on the search space  generated
       by SNIP during reasoning.

 (7)   CASSIE, the Computational Cognitive Mind we  are  developing  and
       experimenting  with,  successive  versions  of which represent an
       integration of all our current work.

CURRENT RESEARCH.

Current projects being carried out  by  various  members  of  the  SNePS
Research Group, some joint with other researchers, include:

 (1)   VMES, the Versatile Maintenance Expert System:

...
 (2)   Discussing and Using Plans:

...
 (3)   Intelligent Multi-Media Interfaces:

...
 (4)   Cognitive and Computer Systems for Understanding Narrative Text:

...
 (5)   The Representation of Natural Category Systems and Their Role  in
       Natural-Language Processing:

...
 (6)   Belief Representation, Discourse Analysis, and Reference in  Nar-
       rative:

...
 (7)   Understanding Pictures with Captions:

...
BIBLIOGRAPHY.

A bibliography of over 90 published  articles,  technical  reports,  and
technical  notes  may  be obtained from Mrs. Lynda Spahr, at the address
given above, or by electronic mail to spahr@gort.cs.buffalo.edu.

------------------------------

End of AIList Digest
********************

∂29-Mar-88  0852	LAWS@KL.SRI.COM 	AIList Digest   V6 #60 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Mar 88  08:51:47 PST
Date: Mon 28 Mar 1988 21:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V6 #60
To: AIList@SRI.COM


AIList Digest            Tuesday, 29 Mar 1988      Volume 6 : Issue 60

Today's Topics:
  Opinion - The Future of AI,
  Expert Systems - Student Versions of OPS5 & Certainty Factors,
  Theory - Models of Uncertainty

----------------------------------------------------------------------

Date: 28 Mar 88 12:46:29 GMT
From: eagle!sbrunnoc@ucbvax.Berkeley.EDU  (Sean Brunnock)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?
>
>Is AI just too expensive and too complicated for practical use?  I
>
>Does AI have any advantage over conventional programming?

   Bear with me while I put this into a sociological perspective. The first
great "age" in mankind's history was the agricultural age, followed by the
industrial age, and now we are heading into the information age. The author
of "Megatrends" points out the large rise in the number of clerks as
evidence of this.

   The information age will revolutionize agriculture and industry just as
industry revolutionized agriculture one hundred years ago. Industry gave to
the farmer the reaper, cotton gin, and a myriad of other products which
made his job easier. Food production went up an order of magnitude and by
the law of supply and demand, food became less valuable and farming became
less profitable.

   The industrial age was characterized by machines that took a lot of
manual labor out of the hands of people. The information age will be
characterized by machines that will take over mental tasks now accomplished
by people.

   For example, give a machine access to knowledge of aerodynamics,
engines, materials, etc. Now tell this machine that you want it to
design a car that can go this fast, use this much fuel per mile, cost
this much to make, etc. The machine thinks about it and out pops a
design for a car that meets these specifications. It would be the
ultimate car with no room for improvement (unless some new scientific
discovery was made) because the machine looks at all of the possibilities.
These are the types of machines that I expect AI to make possible
in the future.

   I know this is an amateurish analysis, but it convinces me to study
AI.

   As for using AI in conventional programs?  Some people wondered what
was the use of opening up a transcontinental railroad when the pony
express could deliver the same letter or package wherever you wanted in just
seven days. AI may be impractical now, but we have to keep making an effort
at it.


       Sean Brunnock
       University of Lowell
       sbrunnoc@eagle.cs.ulowell.edu

------------------------------

Date: 28 Mar 88 17:08:20 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>industrial age, and now we are heading into the information age. The author
>of "Megatrends" points out the large rise in the number of clerks as
>evidence of this.

      The number of office workers in the U.S. peaked in 1985-86 and has
declined somewhat since then.  White collar employment by the Fortune 500
is down substantially over the last five years.  The commercial real estate
industry has been slow to pick up on this, which is why there are so many
new but empty office buildings.  The new trend is back toward manufacturing.
You can't export services, except in a very minor way.  (Check the numbers
on this; they've been published in various of the business magazines and
can be obtained from the Department of Commerce.)

>   For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a
>design for a car that meets these specifications. It would be the
>ultimate car with no room for improvement (unless some new scientific
>discovery was made) because the machine looks at all of the possibilities.

      Wrong.  Study some combinatorics.  Exhaustive search on a problem like
that is hopeless.  The protons would decay first.
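
A back-of-the-envelope illustration of the point (the figures below are
invented for illustration, not Nagle's): suppose the design space had only
20 parameters with 10 choices each, and each candidate design could be
evaluated in a microsecond.

    ;; 10^20 candidate designs at 10^6 evaluations per second
    (let* ((designs (expt 10 20))
           (per-sec 1e6)
           (seconds (/ designs per-sec)))
      (/ seconds (* 60 60 24 365.25)))   ; => roughly 3.2 million years

Real design spaces are far larger, so "looking at all of the possibilities"
is not an option; the search has to be pruned by heuristics or analysis.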

                                        John Nagle

------------------------------

Date: 25 Mar 88 16:23:17 GMT
From: trwrb!aero!srt@ucbvax.Berkeley.EDU  (Scott R. Turner)
Subject: Re: Student versions of OPS5

In article <8803161533.AA03130@decwrl.dec.com> barabash@midevl.dec.com
(Digital has you now!) writes:
>
>  In article <26695@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
>> I don't think the Vax version [of OPS5] uses Rete (at least, it allows
>> calculations in the LHS).
>
>  In fact, VAX OPS5 uses the high-performance compiled Rete, first used
>  by OPS83, wherein each node in the network is represented by machine
>  instructions.  This makes it easy for the compiler to support inline
>  expression evaluation and external function calls in the LHS.

My understanding of the Rete algorithm (and admittedly, I don't have
the paper at hand) was that speed was obtained by building an
immediately executable tree of database checks.  In essence, a
compiled D-Net based on the contents of WM.  So the speed increase
isn't merely because you compile the LHS of all the rules (at some
level every language represents things as machine instructions, after
all), but because the Rete algorithm builds a discrimination tree that
efficiently orders the tests and guarantees that a minimum number of
tests will be taken to determine the applicable rules.  Much of OPS5's
conflict resolution strategy falls directly out of Rete, I think, in
the order in which applicable rules are found.

So, can executable functions be included in the LHS without ruining
this scheme?  I'd say no, with reservations.  At the very least, two
rules with an identical function call in the LHS will result in the
function being executed twice (since the compiler can't guarantee that
the function doesn't have a side-effect which will change its result
between different invocation with identical arguments).  So, in the
Rete scheme, if two rules have an identical WM check in the LHS, that
check is made only once each cycle of the OPS5 machine.  If they have
an executable in the LHS, that gets executed twice.  If you allow
functions which can change WM to exist in the LHS, things get much
worse.
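
To make the sharing argument concrete, here is a toy sketch (not taken from
OPS5 or VAX OPS5; the "matching" below is just literal equality).  A pure
condition test can be cached for the duration of a recognize-act cycle, so
two rules with an identical LHS condition pay for it only once:

    (defvar *alpha-cache* (make-hash-table :test #'equal)
      "Per-cycle cache of LHS test results; cleared at the start of each cycle.")

    (defun matching-facts (pattern wm)
      "Facts in working memory WM satisfying PATTERN, computed at most once
    per cycle no matter how many rules share the identical condition."
      (multiple-value-bind (hits found) (gethash pattern *alpha-cache*)
        (if found
            hits
            (setf (gethash pattern *alpha-cache*)
                  (remove-if-not (lambda (fact) (equal fact pattern)) wm)))))

An arbitrary LHS function call cannot safely go through such a cache, since
the compiler cannot prove it free of side effects; two rules naming the same
call must each evaluate it, exactly as argued above.
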
                                        -- Scott Turner

------------------------------

Date: 25 Mar 88 18:05:25 GMT
From: mcvax!unido!tub!tmpmbx!netmbx!morus@uunet.uu.net  (Thomas M.)
Subject: Re: Student versions of OPS5

In article <27336@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
>In article <1501@netmbx.UUCP> muhrth@db0tui11.BITNET (Thomas Muhr) writes:
>>I have now available a few common-lisp
>>sources (each about 100KB big) which I will try to convert to a PC-runnable
>>version in the near future.
>
>It should be possible to write an OPS5-like language in a lot less than
>100K.  The only difficult part of OPS5 to implement is the RETE algorithm.
>Throw that out, ignore some of the rules for determining which rule out
>of all the applicable rules to use (*), and you should be able to implement
>OPS5 in a couple of days.  Of course, this version will be slow and GC
>every few minutes or so, but those problems will be present to some extent
>in any version written in Lisp.

Right, but after all the proposed deletions this code would hardly cover two
pages. Leaving the Rete match out does not just affect run time (the drop
in performance is enormous!) - it would also eliminate the features which
make OPS5 an interesting language, mainly the heuristics for selecting
rule instantiations intelligently.
>
>(*) My experience is that most OPS5 programmers (not that there are many)
                                Is this right ? ---↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
>ignore or actively counter the "pick the most specific/least recently used"
>rules anyway.
Well, it would be fine to have a little more influence on conflict-resolution
strategies - but the ones mentioned are very important:
default strategies via "specificity" and loop control via "recency" are very
powerful features.
Programmers who ignore these mechanisms have probably chosen the wrong paradigm.

-------- Thomas Muhr

Knowledge Based Systems Group
Technical University of Berlin
--
@(↑o↑)@   @(↑x↑)@   @(↑.↑)@   @(↑_↑)@   @(*o*)@   @('o`)@   @(`!')@   @(↑o↑)@
@  Thomas Muhr    Tel.: (Germany) 030 - 87 41 62    (voice!)                @
@  NET-ADRESS:    muhrth@db0tui11.bitnet  or morus@netmbx.UUCP              @
@  BTX:           030874162                                                 @

------------------------------

Date: Sun, 27 Mar 88 22:14:42 HNE
From: Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Certainty Factors

This is a reply to KIKO's message asking about the validity of using
certainty factors in an expert system.

You can use certainty factors as a way of coping with uncertainty, but
you run the risk of introducing substantial errors in the analysis
due to the simplifying assumptions that underlie CFs.  In some domains,
such as certain medical areas, the same treatment will be used to
cover several different diseases, e.g., antibiotics to cover infections of
various sorts.  In that case, the incoherence introduced by using CFs is
often covered up by the lack of need to be very precise.  I certainly
would not like to try to defend the use of CFs in a liability suit brought
against a person who made a poor diagnosis based on an expert system.  I
would recommend doing a lot of research on the various ways that uncertainty
is handled by people working on uncertainty.  An excellent starting point
is the paper, "Decision Theory in Expert Systems and Artificial Intelligence",
by Eric J. Horvitz, John S. Breese, and Max Henrion.  It will be in the
Journal of Approximate Reasoning, Special Issue on Uncertain Reasoning,
July, 1988.  Prepublication copies might be available from Horvitz,
Knowledge Systems Laboratory, Department of Computer Science, Stanford
University, Stanford, CA 94305.  This paper will become a standard
reference for people interested in using Bayesian decision theory
in AI.
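
For concreteness, the usual MYCIN-style parallel combination rule for two
positive certainty factors looks like this (a generic sketch, not code from
any particular shell):

    (defun combine-cf (cf1 cf2)
      "Combine two positive certainty factors bearing on the same hypothesis."
      (+ cf1 (* cf2 (- 1 cf1))))

    ;; (combine-cf 0.6 0.6) => 0.84

The combined value is the same whether the two rules rest on independent
evidence or on essentially the same evidence counted twice; that tacit
independence assumption is one source of the incoherence mentioned above.
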
   Putting uncertainty into an expert system using decision theory is not
as simple as using certainty factors.  But getting it right is not always
easy.
   I hope this will be of some help.  Best of luck.
                     Spencer Star  (star@lavalvm1.bitnet) or
                             arpa: (star@b.gp.cs.cmu.edu)

------------------------------

Date: Mon, 28 Mar 88 02:17:39 HNE
From: Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Re: AIList V6 #57 - Theorem Prover, Models of Uncertainty


I'll try to respond to Ruspini's comments about my reasons for
choosing the Bayesian approach to representing uncertainty.
[AIList March 22, 1988]

>If, for example, we say that the probability of "economic
>recession" is 80%, we are indicating that there is either a
>known...or believed...tendency or propensity of an economic
>system to evolve into a state called "recession".
>
>If, on the other hand, we say the system will move into a
>state that has a possibility of 80% of being a recession, we are
>saying that we are *certain* that the system will evolve into a
>state that resembles or is similar at least to a degree of 0.8
>to a state of recession (note the stress on certainty with
>imprecision about the nature of the state as opposed to a
>description of a believed or previously observed tendency).

I think this is supposed to show the difference between something
that probabilities can deal with and a "fuzzy approach" that
probabilities can't deal with.  However, probabilities can deal
with both situations, even at the same time.  The following
example demonstrates how this is done.

Suppose that there is a state Z such that from that state there
is a P(Z)=60% chance of a recession in one year.  This
corresponds to the state in the second paragraph.  Suppose that
there are two other states, X and Y, that have probabilities of
entering a recession within a year of P(X)=20% and P(Y)=40%.   My
beliefs that the system will enter those states are Bel(X)=.25
Bel(Y)=.30  Bel(Z)=.45, where beliefs are my subjective prior
probabilities conditional on all the information available to me.
Then the probability of a recession is, according to my beliefs,
P(recession)=alpha*[Bel(X)*P(X) + Bel(Y)*P(Y) + Bel(Z)*P(Z)],

where alpha is a normalization factor making

       P(recession) + P(no recession) = 1.

In this example, a 25% belief in state X occurring gives a 5%
chance of recession and a 20% chance of no recession.  Summing
the little buggers up across states gives a 44% chance of
recession and a 56% chance of no recession with alpha=1.  These
latter figures give my beliefs about there being or not being a
recession.
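
For the record, the arithmetic works out as follows (the snippet is only an
illustration, using the values given above):

    ;; Each entry is (state Bel(state) P(recession|state)).
    (let ((states '((x 0.25 0.20)
                    (y 0.30 0.40)
                    (z 0.45 0.60))))
      (loop for entry in states
            sum (* (second entry) (third entry))))
    ;; => 0.44, hence P(no recession) = 0.56 and alpha = 1.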

So far, I haven't found a need to use fuzzy logic to represent
possibilities.  Peter Cheeseman, "Probabilistic versus Fuzzy
Reasoning", in Kanal and Lemmer, Uncertainty in AI, deals with
this in more detail and comes to the same conclusion.  Bayesian
probabilities work fine for the examples I have been given.  But
that doesn't mean that you shouldn't use fuzzy logic.  If you
feel more comfortable with it, fine.

I proposed simplicity as a property that I value in choosing a
representation.  Ruspini seems to believe that a relatively
simple representation of a complex system is "reason for
suspicion and concern."  Perhaps.  It's a question of taste and
judgment.  However, my statement about the Bayesian approach
being based on a few simple axioms is more objective.
Probability theory is built on Kolmogoroff's axioms, which for
the uninitiated say that probabilities are non-negative numbers,
that the probabilities of all possible outcomes sum to 1, and that
we can add probabilities for mutually exclusive events.
Nothing very complicated there.  Decision theory adds utilities
to probabilities.  For a utility function to exist, the agent
must be able to order his preferences, prefer more of a
beneficial good rather than less, and a few other simple things.
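
In standard notation, and purely for reference (this is the textbook
statement, not a quotation from anyone in this discussion):

    \[
      P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
      P(A \cup B) = P(A) + P(B) \quad \mbox{whenever } A \cap B = \emptyset .
    \]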

Ruspini mentions that "rational" people often engage in
inconsistent behavior when viewed from a Bayesian framework.
Here we are in absolute agreement.  I use the Bayesian framework
for a normative approach to decision making.  Of course, this
assumes the goal is to make good decisions.  If your goal is to
model human decision making, you might very well do better than
to use the Bayesian model.  Most of the research I am aware of
that shows inconsistencies in human reasoning has made
comparisons with the Bayesian model.  I don't know if fuzzy sets
or D-S provides solutions for paradoxes in probability
judgments.  Perhaps someone could educate me on this.


>Being able to connect directly with decision theory.
>(Dempster-Shafer can't)
Here, I think I was too hard on D-S.  The people at SRI,
including Ruspini, have done some very interesting work using the
Dempster-Shafer approach in a variety of domains that require
decisions to be made.  Ruspini is correct in pointing out that if
no information is available, the Bayesian approach is often to
use a maximum entropy estimate (everything is equally likely),
which could also be used as the basis for a decision in D-S.  I
have been told by people in whom I trust that there are times
when D-S provides a better or more natural representation of the
situation than a strict Bayesian approach.  This is an area ripe
for cooperative research.  We need to know much more about the
comparative advantages of each approach on practical problems.

>Having efficient algorithms for computation.
When I stated that the Bayesian approach has efficient algorithms
for computation, I did not mean to state that the others did not.
Shafer and Logan published an efficient algorithm for Dempster's
rule in the case of hierarchical evidence.  Fuzzy sets are often
implemented efficiently.  This statement was much more a defence
of probability theory against the claim that there are too many
probabilities to calculate.  Kim and Pearl have provided us with
an elegant algorithm that can work on a parallel machine.  Cycles
in reasoning still present problems, although several people have
proposed solutions.  I don't know how non-Bayesian logics deal
with this problem.  I'd be happy to be informed.

>Being well understood.
This statement is based on my observations of discussions
involving uncertainty.  I have seen D-S advocates disagree
numerous times over whether or not the particular paper X, or
implementation Y is really doing Dempster-Shafer evidential
reasoning.  I have seen very bright people present a result on
D-S only to have other bright people say that result occurs only
because they don't understand D-S at all.  I asked Glen Shafer
about this and his answer was that the theory is still being
developed and is in a much more formative stage than Bayesian
theory.  I find much less of this type of argument occurring
among Bayesians.  However, there is also I.J. Good's article
detailing the 46,656 varieties of Bayesians possible, given the
major different views on 11 fundamental questions.

The Bottom Line
     There has been too much effort put into trying to show that
one approach is better or more general than the other and not
enough into some other important issues.  This message is already
too long, so let me close with what I see as the major issue for
the community of experts on uncertainty to tackle.

  The real battle is to get uncertainty introduced into basic AI
research.  Uncertainty researchers are spending too much of their
time working with expert systems, which is already a relatively
mature technology.  There are many subject areas such as machine
learning, non-monotonic reasoning, truth maintenance, planning,
etc. where uncertainty has been neglected or rejected.  The world
is inherently uncertain, and AI must introduce methods for
managing uncertainty whenever it wants to leave toy micro-worlds
to deal with the real world.  Ask not what you can do for the
theory of uncertainty; ask what the theory of uncertainty can do
for you.

               Spencer Star
               Bitnet: star@lavalvm1.bitnet
               Arpanet: star@b.gp.cs.cmu.edu

------------------------------

End of AIList Digest
********************

∂01-Apr-88  0055	LAWS@KL.SRI.COM 	AIList V6 #61 - UK Addresses, Seminars
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Apr 88  00:53:41 PST
Date: Thu 31 Mar 1988 23:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #61 - UK Addresses, Seminars
To: AIList@SRI.COM


AIList Digest             Friday, 1 Apr 1988       Volume 6 : Issue 61

Today's Topics:
  Bindings - Jocelyn Paine's Email Address & UK Addresses,
  Seminars - The Knowledge Based Specification Assistant (ISI) &
    Nonmonotonic Temporal Reasoning and Causation (CMU) &
    Architecture-Independent Parallel Programming (BBN) &
    A Comparison of Spatial and Symbolic Reference (CMU)

----------------------------------------------------------------------

Date: Tue, 29 Mar 88 10:42:29 PST
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: Jocelyn Paine's email address

For those wishing to respond to Jocelyn Paine's "Software Minds"
query, the email address listed:

popx%vax.oxford.ac.uk%ac.uk@ukacrl.bitnet

did not work from my .mil address.  An address which did work is:

popx@vax.oxford.ac.uk

Dave


David R. Lambert, PhD
Code 772 (S)
Naval Ocean Systems Center
San Diego, CA  92152

Commercial: (619) 553-1093
Autovon:     (AV) 553-1093
Email:      lambert@nosc.mil

------------------------------

Date: Tue, 29 Mar 88 09:00:47 BST
From: US x6111 <US@IB.RL.AC.UK> (US at UKACRL)
Subject: Problems with e-mail to UK

Addresses within the UK are handled in big-endian order, so that mail
to Oxford should be addressed as follows:

POPX%UK.AC.OXFORD.VAX%UKACRL.BITNET@CUNYVM.CUNY.EDU


Jonathan Wheeler
User Support and Marketing Group
Rutherford Appleton Laboratory
Didcot, Oxfordshire, England

------------------------------

Date: 28 Mar 88 21:55:21 GMT
From: luu@vaxa.isi.edu (Kim Chau Luu)
Reply-to: luu@vaxa.isi.edu (Kim Chau Luu)
Subject: Seminar - The Knowledge Based Specification Assistant (ISI)


Title    : The Knowledge Based Specification Assistant
Speaker  : Lewis Johnson
Location : USC/Information Sciences Institute
           4676 Admiralty Way
           Marina Del Rey, CA 90292-6695
           11th Floor Large Conference Room
Date     : April 6, 1988
Time     : 3:00 - 5:00PM

Abstract:

Software specification is the process of constructing a design for a
system to achieve some desired outcome in the world.  It involves
analyzing the domain of application, identifying requirements which the
software must meet, and then developing a specification of a system that
can meet these requirements.  The Knowledge-Based Specification
Assistant Project (KBSA) is building a tool to actively assist in this
process.  It supports our view of the specification process as one of
incremental model construction and transformational derivation.  A
specifier starts by developing an explicit model of the application
domain, and of the requirements to be met by the software.  This model
is developed incrementally over time, as the specifier's understanding
of the problem improves.  A specification is then developed by
transforming the requirements into an implementable form.  The KBSA
system assists this process as follows: a) by applying the
transformations necessary to develop the specification, b) by analyzing
the specification, to help identify where transformations must be
applied, and c) by paraphrasing and explaining the specification.

------------------------------

Date: Tue, 29 Mar 88 15:39:01 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Nonmonotonic Temporal Reasoning and Causation (CMU)


                         AI SEMINAR

TOPIC:       Nonmonotonic Temporal Reasoning, and Causation

SPEAKER:      Professor Yoav Shoham
              Dept. of Computer Science
              Stanford University
              Stanford, CA  94305
              shoham@score.stanford.edu

WHEN:        Tuesday, April 12, 1988   3:30pm

WHERE:       Wean Hall 5409

                        ABSTRACT

We define two problems that arise from the conflicting goals of rigor
and efficiency in temporal reasoning, called the qualification problem
and the extended prediction problem, which subsume the infamous frame
problem.  We then offer solutions to those problems.

The solution relies on making nonmonotonic inferences. We present
our very simple, semantical approach to nonmonotonic logics.

We then define a particular nonmonotonic logic, called the logic of
chronological ignorance, which combines elements of temporal logic and
the modal logic of knowledge.

We illustrate how the logic can be used to solve the qualification
problem. (In the unlikely event of time permitting, we will do the same
for the extended prediction problem).

Although the logic of chronological ignorance is, in general, badly
undecidable, we identify a restricted class of theories, called
causal theories, which have very nice properties: they each have
a model that is (in a certain sense) unique, and that is (in a certain sense)
easily computable.

We argue that the above analysis offers an attractive account of
the concept of causation, and of its central role in
common sense reasoning.

The talk presupposes only basic understanding of first-order logic.

------------------------------

Date: Wed 30 Mar 88 14:50:19-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Architecture-Independent Parallel Programming (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

                   AN ARCHITECTURE-INDEPENDENT MODEL
                        FOR PARALLEL PROGRAMMING

                             Gary W. Sabot
                         Harvard University and
                     Thinking Machines Corporation
                            (GARY@THINK.COM)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                       10:30 am, Tuesday April 5


     The paralation model consists of a new data structure and a small
number of operators.  The model has two main goals.  As a model, it must
be high-level and abstract.  It should ask programmers to describe an
algorithm, not every detail of the algorithm-to-hardware mapping.  This
leads to programming languages that are easy to use for general
application programming.  On the other hand, the constructs of the model
must be easy to compile into efficient code for a variety of
architectures (for example, MIMD or SIMD processors; bus-based,
butterfly, or grid interconnect; etc.).  An inefficient programming
language, no matter how expressive and easy-to-use, cannot gain
widespread acceptance.

     The talk describes the paralation model in detail.  Programming
examples are presented in Paralation Lisp, a language based on the
model.  A number of compilers for Paralation Lisp have been written.
Paralation Lisp code can currently be run in parallel on the 65,536
processor Connection Machine, or serially on any implementation of
Common Lisp.

------------------------------

Date: Wed, 30 Mar 88 14:25:17 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - A Comparison of Spatial and Symbolic Reference
         (CMU)


                          UNDERSTAND SEMINAR

                        Tuesday, April 5, 1988
                           12:00 - 1:20
                        Adamson Wing, Baker Hall

  Knowing What vs. Where: A Comparison of Spatial and Symbolic Reference

                        Susan Dumais, Bellcore
                        email: std@bellcore.com


The traditional name-based approach to storing and retrieving
information in computers is now being supplemented on some
systems by a spatial alternative - often driven by an office
or desktop metaphor. These systems attempt to take advantage
of the important role that location plays in retrieving objects
in the real world (i.e. we must know where things are in order
to retrieve them). Several experiments examined the usefulness
of location-based and name-based methods for representing,
organizing and retrieving information in computerized databases.
Accuracy of location reference in a Location-only condition
was initially comparable to that in a Name-only condition,
but declined much more rapidly with increases in the number of
objects and delay between initial storage and subsequent retrieval.
Adding Location to Name information did not substantially improve
retrieval accuracy. These results call into question the
generality of spatial metaphors for information retrieval
applications.

------------------------------

End of AIList Digest
********************

∂01-Apr-88  0248	LAWS@KL.SRI.COM 	AIList V6 #62 - RACTER, Expert Systems, Circuit Design    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Apr 88  02:48:26 PST
Date: Thu 31 Mar 1988 23:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #62 - RACTER, Expert Systems, Circuit Design
To: AIList@SRI.COM


AIList Digest             Friday, 1 Apr 1988       Volume 6 : Issue 62

Today's Topics:
  Queries - LISP on VMS & AI in CAD & Expert Systems in Education &
    Expert System Project Management Course &
    Commercial distribution of AutoLISP Applications,
  Bindings - McRobbie & Ralph London,
  Application - Conversational Programs: RACTER,
  Philosophy - Formal Semantics and Computational Models,
  Expert Systems - Automatic Knowledge Extraction,
  AI Tools - Circuit-Design Translator in Prolog/Lisp &
    Student Versions of OPS5

----------------------------------------------------------------------

Date: 29 Mar 88 19:05:55 GMT
From: "Eric T. Freeman" <nasa@e.ms.uky.edu>
Subject: LISP on VMS


NOTE: The following is posted for a friend, you may respond to either his
address or mine, I will forward responses to him.

******************************************************************************
WANTED--A public domain (or very inexpensive) copy of LISP for the VAX/VMS
(not Unix).  Must have compiler.  Must have someone to answer questions.
Franz Lisp would be fine.  Send mail to jones@dftnic.gsfc.nasa.gov

Thanks,

Tom Jones                                          jones@dftnic.gsfc.nasa.gov
******************************************************************************

Eric Freeman
University of Kentucky Computer Science
nasa@g.ms.uky.edu
freeman@dftnic.gsfc.nasa.gov

------------------------------

Date: 30 Mar 88 10:15:45 GMT
From: csli!rustcat@labrea.stanford.edu  (Vallury Prabhakar)
Subject: Re: need info on AI in CAD

In article <332@infko.UUCP> uro@infko.UUCP (Uwe und Roland) writes:

# In doing research on intelligent CAD-systems, I'm looking
# for literature, technical/research reports and conference
# proceedings on following topics:
#
# - Architecture of intelligent CAD-systems
# - Design and Implementation of AI-models which enable
# the user to include structural knowledge (i.e. design knowledge),
# exceeding topological knowledge while describing his objects to the
# CAD-system. I'm looking for a kind of shell, surrounding the realization
# of the topological/geometrical model, but being as independent
# as possible from the latter one.

I don't know if it's a bit late, but could you kindly forward pertinent
responses to the above portions of your original query to me?

On a related note, I've been looking for literature/publications relevant
to topics such as pattern recognition in a CAD "drawing", such as determining
line connectivies, and figuring out whether a series of lines form a closed
path, etc.  I would appreciate any information/pointers on this.

If this is the wrong newsgroup for such a question, I apologize.  I've
tried comp.graphics but wasn't able to get much from there.

Enjoy,

                                                -- Vallury Prabhakar
                                                   rustcat@cnc-sun.stanford.edu

PS: Please reply directly to my e-mail address.  I don't come here very often.


  [You might want to look into the work of Deborah Walters (of SUNY)
  on perceptual groupings of edge elements and lines.  Check the
  computer vision conferences for her papers.  The work finds lines
  that seem to belong together, rather than those that match some
  stored shape template.  -- KIL]

------------------------------

Date: Thu, 31 Mar 88 09:26 EST
From: ARMAND%BCVMS.BITNET@MITVMA.MIT.EDU
Subject: STATUS OF EXPERT SYSTEMS?


A committee here at Boston College is presently investigating the state of the
art of Expert Systems in higher education.  We have determined several
situations on campus which would benefit by this technology.

We would very much appreciate any information and/or direction regarding the
following queries.

1) What is the current interest level of faculty, staff, and students in
   applications of this technology?

2) What instructional applications of expert systems are already in existence?
   What type of development is going on in this area?

3) What level of success, if any, has resulted from internally developed expert
   systems?

4) What hardware, software, and other resources are currently in use?  Future
   planning?

5) What problems should be foreseeable and addressed in planning and developing
   an Expert Systems laboratory?


Please feel free to share your experiences/stories as we have very little to go
on at this point.  We would appreciate very much ANY response to these
questions.  If you could spare a few moments for a phone call, that would be
great - call collect if you wish.


                                    Armand H. Doucette
                                    VAX Systems Programmer
                                    Boston College Computer Center
                                    Chestnut Hill, MA 02167
                                    (617) 552-4463
                                    BITNET: ARMAND@BCVMS

------------------------------

Date: Thu, 31 Mar 88 10:48:40 PST
From: bienk@spam.istc.sri.com (Marie Bienkowski)
Subject: Expert System Project Management Course

Can anyone point me to a course on managing the development
of an expert system?   Something that an AI company or such
would offer, introducing expert systems and suggesting ways
for efficient management of projects involving their development.
Thanks in advance,
Marie Bienkowski
(bienk@istc.sri.com)

------------------------------

Date: Thu 31 Mar 88 18:00:02-EST
From: Ben Olasov <G.OLASOV@CS.COLUMBIA.EDU>
Subject: Commercial distribution of AutoLISP applications

Hello,

I'd like to hear from any readers who know of commercial distributors/
clearinghouses of AutoLISP code (code for the integrated LISP interpreter
in the AutoCAD CAD package).

Please send responses to this account: G.OLASOV@CS.COLUMBIA.EDU

Thank you,

Ben

------------------------------

Date: Thu, 31 Mar 88 07:23:42 CST
From: lusk%antares@anl-mcs.arpa
Subject: address needed


Here is everything I know about Zorba: (well, not quite everything)

alias mcrobbie mam%arp.anu.oz@uunet.uu.net

Michael McRobbie 61-62-492035  \
  Dr. Michael McRobbie, Automated Reasoning Project, RSSS, \
  Australian National University, G.P.O. Box 4, Canberra, 2601, A.C.T.

------------------------------

Date: 30 Mar 88 11:54:52 GMT
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Reply-to: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: Request for Reference on Auto User Interface Pgm


In article <8803100728.AA07906@ucbvax.Berkeley.EDU> TAYLOR%PLU@ames-io.ARPA
writes:
>Sometime during the summer of 1987, there was a seminar given at either
>Stanford or SRI by someone maybe named Jack London concerning his
>PhD thesis on the subject of automatic generation of user interface
>specifications and/or code based on the specifications/contents of
>a given data base. Obviously this is vague - it is second hand
>information. Can someone give me an accurate reference to the
>work presented in the seminar?

Sounds like Ralph London from Tektronix.  See London and Duisberg in
ACM Trans on Graphics 1986. Sorry no more details, moving office and
can't reach for the paper.
--
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert

------------------------------

Date: 30 Mar 88 10:54 +0100
From: Kai Quale <quale%si.uninett@TOR.nta.no>
Subject: Re: conversations?

ok2@psuvm.bitnet writes:
>     Hi, I'm trying to locate versions of PARRY and similar programs that
>simulate conversation with a human being (such as ELIZA and SHRDLU).
>     I do know that PARRY, which simulates conversing with a Paranoid person
>was written by  K.M. Colby, based on the information processing approach
>(of a paranoiac) of S.S. Tomkins.
>     Also, I know that SHRDLU, which simulates talking to and giving orders
>to a robot in a computer generated simulation, was written by Terry Winograd.
>[Rest deleted]

RACTER is another conversation program which can be run in
monologue (Essay, Poetry, Free prose, ...) or dialogue mode. I don't know
what kind of hardware it runs on; I have only read a book written by the
program : "The Policeman's Beard is Half Constructed". The book is packed
with poems and philosophical discussions between RACTER and its programmers,
and RACTER and itself. The real gem is a conversation between RACTER and ELIZA.
You wouldn't believe how quickly ELIZA is pressed into a corner, and starts
responding to everything with "Interesting" and "Please go on". The following
passage does not exist in the book (it's a long time since I read it), but may
serve as a taste of the general atmosphere :

R : I wonder intensely about the meaningfulness of a hamburger.

E : Have you felt this for a long time ?

R : Yes, it inhabits my mind like a clawed hippopotamus, never letting go.

E : I see.

R : Have you ever thought about the fundamental similarity of Bach and the
    color pink ?

E : Is this really what you want to see me about ?

R : Why do you ask such a prosaic question ? Don't you feel the organic
    wholeness of the world ? The singing of the celestial spheres ? Ah,
    what a state the world is coming to.

E : Please go on.

etc.

Kai Quale <quale%si.uninett@tor.nta.no>

------------------------------

Date: 29 Mar 88 16:17:32 GMT
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Reply-to: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: Question on arguments against AI


In article <8803051150.AA05897@ucbvax.Berkeley.EDU> ADLASSNI@AWIIMC11.BITNET
("Adlassnig, Peter") writes:
>
>Is it true that there are two main arguments against the feasibility
>of AI?
> ....
Forget categories for the moment and come bottom up.
Within formal semantics there are a whole set of problems which reduce
confidence in the comprehensiveness of computational models of human
beliefs and behaviour.

Formal semantics is largely AI off-line, and has an intellectual and
scholarly tradition which pre-dates the LISP bar of AI.  I suggest you
pick up the Cambridge University Press catalogue and chase up any
Linguistics text with 'semantics' in the title.  Most of these monographs
and texts have consensus examples of problems for mathematical accounts
of meaning, especially ones based on two-valued logics.  Everyone in
NLP should know about them.  Basically, AI won't succeed until it
cracks these problems, and there is no reason to believe that it
will ever get anywhere near cracking them.  The gap between
mathematical accounts and reality remains too large.
--
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert

------------------------------

Date: 29 Mar 88 18:10:52 GMT
From: mcvax!unido!tub!tmpmbx!netmbx!morus@uunet.uu.net  (Thomas M.)
Subject: Re: automatic knowledge extraction


In article <8803250634.AA22441@ucbvax.Berkeley.EDU> SYSTHVU@BLEKUL11.BITNET
(Van Uytven Herman) writes:
>
>I'm interested in systems for automating the knowledge extraction process.
>In the literature there's a lot of information about these systems.
>I'm looking for some references of commercially available products.
>Can anyone provide me this information ?
>Is there anyone who uses these systems, or has some experience with them ?
>I'd be very grateful if you could send your comments to me.
>Thanks in advance,
>
>Chris Vanhoutte

I think that this topic might be of interest to some in the AI community.
Please post responses to one of the relevant newsgroups. I am currently doing
a study of knowledge elicitation methods, mainly using the articles from the
Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada,
1986 and 1987. My main interests lie in the psychological aspects: the
"rebirthing" of established theories and tools that emerged from psychotherapy
and diagnostics, such as structured interviews, repertory grids, MDS and the like.

My personal opinion is that the ideal knowledge engineer has to be a kind of
psychotherapist, well trained in general and social psychology, more than
a technician trained in representation methods and the use of appropriate tools.
A lot of commercially available tools for building expert systems (XPS) cannot
account for the psychological and philosophical problems that emerge with so
delicate a topic as knowledge elicitation. If you read adverts for those tools
you could get the impression that these problems are already solved - just fill
in a couple of so-called examples and the tool will create a knowledge base for
you. Experts - beware!

  --- Thomas Muhr

PS.: What about elicitation by hypnosis ? !-)

--
@(↑o↑)@   @(↑x↑)@   @(↑.↑)@   @(↑_↑)@   @(*o*)@   @('o`)@   @(`!')@   @(↑o↑)@
@  Thomas Muhr    Tel.: (Germany) 030 - 87 41 62    (voice!)                @
@  NET-ADRESS:    muhrth@db0tui11.bitnet  or morus@netmbx.UUCP              @
@  BTX:           030874162                                                 @

------------------------------

Date: Thu, 31 Mar 88 07:28:53 CST
From: lusk%antares@anl-mcs.arpa
Subject: translators in Prolog/Lisp

You might try Peter Rentjes (sp?) somewhere in North Carolina.  (Sorry, I can't
be more specific.)  He has a large circuit-design language translator written
in Prolog, parts of which were released into the public domain at the recent
Prolog benchmarking workshop in Los Angeles.  For Peter's address you might
try Rick Stevens (stevens@anl-mcs.arpa).

------------------------------

Date: 30 Mar 88 16:08:05 GMT
From: trwrb!aero!srt@ucbvax.Berkeley.EDU  (Scott R. Turner)
Subject: Re: Student versions of OPS5

In article <1580@netmbx.UUCP> morus@netmbx.UUCP (Thomas Muhr) writes:
>In article <27336@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
>>It should be possible to write an OPS5-like language in a lot less than
>>100K.  The only difficult part of OPS5 to implement is the RETE algorithm.
>
>Right, but after all the proposed deletions this code would hardly cover 2
>pages. Leaving Rete-match out is not just affecting run-time (the decrease
>in performance is incredible!) - but it would eliminate all features which
>make OPS5 an interesting language - mainly the heuristics for selecting
>rule-instantiations intelligently.

Actually, I think specificity and recency are fairly easy to
implement.  (Without using Rete, that is...)  Specificity simply means
you have to keep your rule database ordered.  That's simple enough to
do.  Recency is a little bit harder.  The "obvious" execution cycle is
to fire the first available rule.  To do recency, you have to keep the
fact database ordered by recency (easy) and then find the first rule
that fires on the first fact (and so on down the fact database).
That's not particularly hard to do.  I suspect OPS5 (with specificity
and recency) could be written in a couple of pages of Prolog.

The point is, if you just want a student version of OPS5 - something
to play around with - then you don't need all the Rete trappings.
They can be easily (and inefficiently) duplicated.
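
For concreteness, here is a minimal sketch of that scheme in Common Lisp rather
than Prolog (a toy sketch, not taken from any OPS5 implementation; the rule and
fact names are invented, and "matching" is literal membership with no
variables):

    (defstruct rule name conditions action)

    (defun satisfied-p (rule wm)
      "True if every condition of RULE is literally present in working memory WM."
      (every (lambda (c) (member c wm :test #'equal)) (rule-conditions rule)))

    (defun select-rule (rules wm)
      "RULES are ordered most-specific first; WM is ordered newest fact first.
    Return the first rule that matches WM and mentions the newest matchable
    fact, approximating recency-then-specificity conflict resolution."
      (loop for fact in wm
            thereis (find-if (lambda (r)
                               (and (member fact (rule-conditions r) :test #'equal)
                                    (satisfied-p r wm)))
                             rules)))

    ;; With WM = '((has egg) (goal cook)), a rule conditioned on both facts is
    ;; chosen ahead of one conditioned only on (goal cook), because (has egg)
    ;; is the newest fact.

Slow, certainly, but it behaves roughly the way the "student version" described
above would.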

>>(*) My experience is that most OPS5 programmers (not that there are many)
>                                Is this right ? ---↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
>>ignore or actively counter the "pick the most specific/least recently used"
>>rules anyway.

My guess is that there are very few active OPS5 programmers out there.
For the most part I would say it is a dead language.  It is years out
of date (in terms of representation power, etc.), has an awkward
syntax, and promotes a rather strained coding style.  The fact that
there are only two or three people contributing to this topic should
give you some idea of how popular it is on the net.

                                                -- Scott Turner

------------------------------

End of AIList Digest
********************

∂01-Apr-88  0445	LAWS@KL.SRI.COM 	AIList V6 #63 - Future of AI, Evidential Reasoning   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 1 Apr 88  04:45:11 PST
Date: Thu 31 Mar 1988 23:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #63 - Future of AI, Evidential Reasoning
To: AIList@SRI.COM


AIList Digest             Friday, 1 Apr 1988       Volume 6 : Issue 63

Today's Topics:
  Administrivia - Slight Delay
  Review - The Ecology of Computation,
  Opinion - The Future of AI,
  Theory - On the D/S Theory of Evidence

----------------------------------------------------------------------

Date: Thu 31 Mar 88 22:50:53-PST
From: Ken Laws <LAWS@KL.SRI.COM>
Reply-to: AIList-Request@SRI.COM
Subject: Administrivia - Slight Delay

There will be a delay of about a week before the next AIList
issue is posted.

                                        -- Ken

------------------------------

Date: Tue, 29 Mar 88 11:51:02 PST
From: Ken Kahn <Kahn.pa@Xerox.COM>
Subject: The Ecology of Computation

A new book entitled "The Ecology of Computation", edited by B.A. Huberman, has
just been published by North-Holland.  The collection includes papers by
Huberman, Hewitt, Lenat, Brown, Miller, Drexler, Hogg, Rosenschein, Genesereth,
Malone, Fikes, Grant, Howard, Rashid, Liskov, Scheifler, Kahn, and Stefik.  It is
the first collection of papers about open systems (very large scale distributed
systems), and several of the papers make important connections to AI.

------------------------------

Date: 29 Mar 88 09:55:31 GMT
From: otter!cwp@hplabs.hp.com  (Chris Preist)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]


Whatever the future of AI is, it's almost certainly COMPANY CONFIDENTIAL!

:-)                       Chris


Disclaimer:
In this case, the opinion expressed probably IS the opinion of my employer!

------------------------------

Date: 29 Mar 88 17:02:39 GMT
From: ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu  (Rick Wojcik)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>
>Is AI just too expensive and too complicated for practical use?  I
>spent 3 years in the field and I'm beginning to think the answer is
>mostly yes.  In my opinion, all working AI programs are either toys or
>could have been developed much more cheaply using conventional
>techniques.
>
Your posting was clearly intended to provoke, but I'll try to keep the
flames low :-).  Please try to remember that AI is a vast subject area.
It is expensive because it requires a great deal of expertise in language,
psychology, philosophy, etc.--not just programming skills.  It is also a
very high risk area, as anyone can see.  But the payoff can be
tremendous.  Moreover, your opinion that conventional techniques can
replace AI is ludicrous.  Consider the area of natural language.  What
conventional techniques that you know of can extract information from
natural language text or translate a passage from English to French?
Maybe you believe that we should stop all research on robotics.  If not,
would you like to explain how conventional programming can be used to get
robots to see objects in the real world?  But maybe we should give up on
the whole idea.  We can replace robots with humans.  Would you like to
volunteer for the bomb squad :-)?  In the development stage, AI is expensive,
but in the long term it is cost effective.  Your pessimism about the field
seems to be based on the failure of expert systems to live up to the hype.
The future of AI is going to be full of unrealistic hype and disappointing
failures.  But the demand for AI is so great that we have no choice but to
push on.
--
Rick Wojcik   csnet:  rwojcik@boeing.com
              uucp:  {uw-june  uw-beaver!ssc-vax}!bcsaic!rwojcik
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844

------------------------------

Date: 29 Mar 88 14:24:11 GMT
From: otter!cdfk@hplabs.hp.com  (Caroline Knight)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

Whatever the far future uses of AI are we can try to make the
current uses as humane and as ethical as possible. I actually
believe that AI in its current form should complement humans
not make them redundant. It should increase the skill of the
person doing the job by doing those things which are boring
or impractical for humans but possible for computers.

This is mostly the responsibility of people doing applications,
but it can also form the focus of research. When sharing a job
with a computer, which tasks are best automated and which are best
given to the human - not just which is it possible to automate?
Then the research can move on to how to automate those that it
is desirable to have automated, instead of simply trying to show
how clever we all are at mimicking "intelligence".

Perhaps computers will free people up so that they can go back
to doing some of the tasks that we currently have machines do
- has anyone thought of it that way?

And if we are going to do people out of jobs then we'd better
start understanding that a person is still valuable even if
they do not do "regular work".  How can AI actually improve life
for those who are made jobless by it? Can we improve on previous
revolutions by NOT riding roughshod over the people who
are displaced?

Either that or prepare to give up our world to the machines -
perhaps that's why we are not looking after it very carefully!

Caroline Knight

What I say is said on my own behalf - it is not a statement of
company policy.

------------------------------

Date: Wed, 30 Mar 88 20:51:06 EST
From: Bob Hummel <hummel@relaxation.nyu.edu>
Subject: On the D/S Theory of Evidence


     Reading the renewed discussion on the meaning of  various  calculi  of
uncertainty  reasoning  prompts  me to inject a note on the Dempster/Shafer
formalism.  I will summarize a few observations here, but direct interested
readers  to  a joint paper with Michael Landy, which appeared this month in
IEEE Pattern Analysis and Machine Intelligence [1].

     These observations were inspired, oddly enough, by reading  Dempster's
original  paper,  in which he introduces the now-famous combination formula
[2].  It seems logical that the original source should contain the  motiva-
tion  and  interpretation  of  the  formula.  But  what  is odd is that the
interpretation migrated over the years, and that the clear, logical founda-
tions  became obscure.  There have been numerous attempts to reconstruct an
explanation of the true meaning of the belief values and the  normalization
terms  and  the  combination  method  in the Dempster/Shafer work.  Some of
these attempts succeed reasonably well.  Along these  lines,  I  think  the
best  work  is  represented  by Kyburg's treatment in terms of extrema over
collections of opinions [3], and Ruspini's work connecting  Dempster/Shafer
formalism to Bayesian analysis [4].  Shafer also constructs what he calls a
"canonical example" which is supposed to be  used  as  a  scale  to  invoke
degrees  of  belief into general situations, based on "coded messages." The
idea, described for example in [5], is isomorphic to the observations  made
here and in [1] based on the foundations laid by Dempster [2].  The problem
is that none of these interpretations lead to generalizations  and  explain
the precise intent of the original formulation.

     Before giving the succinct interpretation, which, it turns out,  is  a
statistical  formulation,  I should comment briefly on the compatibility of
the various interpretations.  When lecturing on the  topic,  I  have  often
encountered  the  attitude  that  the  statistical  viewpoint is simply one
interpretation, limited in scope, and not very  helpful.   The  feeling  is
that  we  should  be willing to view the constructs in some sort of general
way, so as to be able to map the formalism onto more general  applications.
Here, I believe, is one source of the stridency:  that if I would only per-
mit myself to view certain values as subjective degrees of belief in  anal-
ogy  with  some  mystical  frame  of reference, then I will see why certain
arguments make perfectly logical sense.  Accordingly, our treatment of  the
statistical  viewpoint  is  introduced in the framework of algebraic struc-
tures, and our results are based  on  proving  an  isomorphism  between  an
easily  interpreted  algebraic  structure  and the structure induced by the
Dempster rule of combination acting on states of belief.  So when I use the
terms  "experts"  and  "opinions"  and  related  terms  below, an alternate
interpretation might easily use different concepts.  However, any interpre-
tation that truly captures the Dempster/Shafer calculus must necessarily be
isomorphic, under some mapping identifying corresponding concepts,  to  the
interpretation given here.


     Here is the  formulation.   Consider  a  frame  of  discernment,  here
denoted  S.   Instead  of giving a probability distribution over S, we con-
sider a collection of experts, say E, where each expert e  in  E  gives  an
opinion.   The opinions are boolean, which is to say that expert e declares
which labels in S are possible, and which are ruled out.

     For the combination  formula,  suppose  we  have  two  collections  of
experts,  E1  and E2.  Each expert in E1 and each expert in E2 expresses an
opinion, in a boolean fashion, over the labels in S.  (An  important  point
is  that  the  frame  of  discernment  S is the same for all collections of
experts).  We now wish to combine the two collections of experts.  We  con-
sider the cross product E1 X E2, which is the set of all committees of two,
with a pair of experts comprised of one expert from E1 and one expert  from
E2.  For any such committee, say (e1,e2), we define the committee's boolean
opinion to be the logical intersection of the two composing opinions.  Thus
the  committee says that a label is possible only if both committee members
say that the label is possible.  We regard the collection of all such  com-
mittees  and their opinions to be a new collection of experts E, with their
boolean opinions.

     We have defined an algebraic structure.   Call  it  the  "Boolean
opinions  of  Experts."  The elements of this space consist of pairs (E,f),
where E is a collection of experts, and f is their opinions,  formed  as  a
map  from  E  to a vector of boolean statements about the labels in S.  Now
define an equivalence relation.  We will say that  two  such  elements  are
equivalent  if  the statistics over the collections of experts, among those
experts giving at least one possibility, are the same.  By the  statistics,
we  mean  the  following.  Let E' be the subset of experts in E for whom at
least one label is possible.  For any given subset A of S, let m(A) be  the
percentage  of  experts  in  E' that designate precisely A as the subset of
possible labels.  Note that m of the empty set is 0, by the  definition  of
E', and that m forms a probability distribution over the power set of S.

     It turns out that m is a mass function, used to define a belief  state
on  S.   Further, when sets of experts combine, the statistics, represented
by the corresponding m functions, combine in exactly the Dempster  rule  of
combination.  (This is no accident.  This is the way Dempster defined it.)

     Accordingly, the set of equivalence classes in the space  of  "Boolean
opinions  of  Experts"  is  isomorphic  to  the  Dempster/Shafer formalism,
represented as a space of belief states formed by mass distributions m.


     Some people express disappointment in the Dempster/Shafer theory, when
it  is viewed this way.  For example, it should be noted that nowhere does
the theory make use of probabilities of the labels.  The  theory  makes  no
distinction  between  an expert's opinion that a label is likely or that it
is remotely possible.  This is despite the fact that the belief values seem
to give weighted results.  Instead, the belief in a particular subset A, it
can be shown, corresponds to the fraction of experts in E' who  state  that
every  label outside of A is impossible.  The weighted values come about by
maintaining multiple boolean opinions, instead of one single weighted opin-
ion.
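
     As a small concrete illustration of the construction above, here is a
short Python sketch (the toy frame {a,b,c}, the sample expert pools, and all
function names are my own illustrative assumptions, not anything taken from
[1] or [2]).  It computes m as the statistics over the non-empty boolean
opinions, combines two bodies of experts by forming committees and
intersecting their opinions, and reads Bel(A) off as the mass on subsets
of A:

from collections import Counter
from itertools import product

def mass(opinions):
    # Statistics over E': experts whose opinion admits at least one label.
    useful = [frozenset(o) for o in opinions if o]
    counts = Counter(useful)
    return {A: n / len(useful) for A, n in counts.items()}

def combine(opinions1, opinions2):
    # Committees of two (cross product); a committee's opinion is the
    # intersection of its members' opinions.
    return [set(o1) & set(o2) for o1, o2 in product(opinions1, opinions2)]

def belief(m, A):
    # Bel(A): total mass on subsets of A, i.e. the fraction of experts in
    # E' who declare every label outside A impossible.
    A = frozenset(A)
    return sum(v for B, v in m.items() if B <= A)

E1 = [{'a'}, {'a', 'b'}, {'b', 'c'}]   # one body of experts, frame S = {a,b,c}
E2 = [{'a', 'b'}, {'b'}]               # a second body of experts
m12 = mass(combine(E1, E2))            # statistics of the combined committees
print(m12)                             # mass 0.2 on {a}, 0.2 on {a,b}, 0.6 on {b}
print(belief(m12, {'b'}))              # 0.6

Because committees with empty opinions are dropped before the statistics are
taken, the combined m above is exactly what the Dempster rule of combination
(including its normalization term) produces from the two bodies' mass
functions.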

     In the PAMI paper, Mike Landy and I suggest an extension [1], where we
track  the  statistics  of probabilistic opinions.  In this formulation, we
track the mean and covariance  of  the  logs  of  probabilistic  opinions.
Details are in Section 5 of the paper.

     In a follow-on paper [6], presented at the 1987 IJCAI, Larry  Manevitz
and I extend the formulation to weaken the necessary notion of independence
of information.  It is always true that  some  independence  assumption  is
necessary.   Larry  and  I  defined  a one-parameter measure of a degree of
dependence, and show how the formulas tracking means  and  covariances  are
transformed.  We also consider a case where we combine bodies of experts by
union, as opposed to cross product.

     To those who study these extensions, it will  become  clear  that  the
formulas bear some resemblance to treatments of uncertainty based on Kalman
filtering.  For specific applications involving the observation of data and
the  estimation  of parameters, the Kalman theory is certainly to be recom-
mended if it can be applied.
                                                Robert Hummel
                                                Courant Institute
                                                New York University

References

1.   Hummel, Robert A. and Michael S. Landy, "A  statistical  viewpoint  on
     the  theory  of  evidence,"  IEEE Transactions on Pattern Analysis and
     Machine Intelligence,  pp. 235-247 (1988).

2.   Dempster, A. P., "Upper and lower  probabilities  induced  by  a  mul-
     tivalued  mapping," Annals of Mathematical Statistics Vol. 38 pp. 325-
     339 (1967).

3.   Kyburg, Jr., Henry E., "Bayesian and  non-bayesian  evidential  updat-
     ing," University of Rochester Dept. of Computer Science Tech. Rep. 139
     (July, 1985).

4.   Ruspini, E., Proceedings of  the  International  Joint  Conference  on
     Artificial Intelligence, (August, 1987).  Also SRI Technical Note 408.

5.   Shafer, Glenn, "Belief functions and parametric  models,"  Journal  of
     the Royal Statistical Society B Vol. 44 pp. 322-352 (1982).  (Includes
     commentaries).

6.   Hummel, Robert and Larry  Manevitz,  "Combining  bodies  of  dependent
     information,"  Tenth  International  Joint  Conference  on  Artificial
     Intelligence, (August, 1987).

------------------------------

End of AIList Digest
********************

∂12-Apr-88  0212	LAWS@KL.SRI.COM 	AIList V6 #64 - Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 12 Apr 88  02:12:09 PDT
Date: Mon 11 Apr 1988 23:26-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #64 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Tuesday, 12 Apr 1988      Volume 6 : Issue 64

Today's Topics:
  Seminars - Modules and Lexical Scoping (Unisys) &
    Visual Indexing (SUNY) &
    Representation Design for Problem Solving (BBN) &
    Aquarius Multiprocessor Architectures (SRI),
  Conferences - ICNN-88 Deadline &
    Machine Learning &
    Workshop on Explanation &
    AAAI-88 Workshop on AI and Hypertext &
    Workshop on Case-Based Reasoning at AAAI-88

----------------------------------------------------------------------

Date: Mon, 4 Apr 88 20:40:07 EDT
From: finin@PRC.Unisys.COM
Subject: Seminar - Modules and Lexical Scoping (Unisys)


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER


      Providing Modules and Lexical Scoping in Logic Programming

                                Dale Miller
                      Computer and Information Science
                         University of Pennsylvania
                          Philadelphia, PA  19104

A first-order extension of Horn clauses, called first-order hereditary
Harrop formulas, possesses a meta theory which suggests that it would
make a suitable foundation for logic programming.  Hereditary Harrop
formulas extend the syntax of Horn clauses by permitting
conjunctions, disjunctions, implications, and both existential and
universal quantifiers in queries and the bodies of program clauses.
A simple non-deterministic theorem prover for these formulas is known
to be complete with respect to intuitionistic logic.  This theorem
prover can also be viewed as an interpreter.  We shall outline how this
extended language provides the logic programming paradigm with a
natural notion of module and lexical scoping of constants.
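
As a very rough illustration of the scoping idea (a propositional toy of my
own, with made-up names and encoding -- not Miller's language or syntax): in
an ordinary Horn-clause interpreter the program is fixed for the whole
computation, but once goals may contain implications, proving D => G means
proving G with the clause D added only for that subproof, which gives a
scoped, module-like extension of the program.

def solve(program, goal):
    # program: list of propositional clauses (head, [body goals]).
    # goal: an atom (str), ('and', g1, g2), or ('implies', clause, g).
    if isinstance(goal, str):                       # atomic goal
        return any(all(solve(program, b) for b in body)
                   for head, body in program if head == goal)
    if goal[0] == 'and':                            # conjunction
        return solve(program, goal[1]) and solve(program, goal[2])
    if goal[0] == 'implies':                        # locally scoped clause
        clause, subgoal = goal[1], goal[2]
        return solve(program + [clause], subgoal)   # visible only here
    raise ValueError('unknown goal form')

# 'helper' is available while proving main's body, but not globally:
prog = [('main', [('implies', ('helper', []), 'helper')])]
print(solve(prog, 'main'))      # True
print(solve(prog, 'helper'))    # False

The universal quantifiers mentioned in the abstract, which give lexical
scoping of constants, are omitted here; they would require introducing a
fresh constant for the scope of the quantified goal.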


                      2:00 pm Wednesday, April 6
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: Fri, 8 Apr 88 08:46:18 EDT
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Seminar - Visual Indexing (SUNY)


                STATE UNIVERSITY OF NEW YORK AT BUFFALO

                     The Steering Committee of the
              GRADUATE STUDIES AND RESEARCH INITIATIVE IN

                   COGNITIVE AND LINGUISTIC SCIENCES

                                PRESENTS

                             ZENON PYLYSHYN

                      Center for Cognitive Science
                     University of Western Ontario

            ENCODING "HERE" AND "THERE" IN THE VISUAL FIELD:
     A Sketch of the FINST Indexing Hypothesis and Its Implications

I introduce a distinction between encoding the  location  of  a  feature
within  some frame of reference, and individuating or indexing a feature
so later processes can refer  to  and  access  it.   A  resource-limited
indexing  mechanism  called a FINST is posited for this purpose.  FINSTs
have the property that they index features in a way  that  is  (in  most
cases) transparent to their retinal location, and hence "point to" scene
locations.  The basic assumption is that  no  operations  upon  sets  of
features can occur unless all the features are first FINSTed.

A number of implications of this hypothesis will  be  explored  in  this
talk, including its relevance to phenomena such as the spatial stability
of visual percepts, the ability to track  several  independently  moving
targets  in parallel, the ability to detect a class of spatial relations
requiring the use of  "visual  routines",  and  various  mental  imagery
phenomena.   I will also discuss one of the main reasons for postulating
FINSTs:  the possibility that such indexes might be used  to  bind  per-
ceived  locations  to  arguments in motor commands, thereby serving as a
step towards perceptual-motor coordination.

                         Monday, April 25, 1988
                               4:00 P.M.
                        280 Park, Amherst Campus

There will also be an informal evening discussion at a place and time to
be  announced.   Call Bill Rapaport (Dept. of Computer Science, 636-3193
or 3180) for further information.

------------------------------

Date: Fri 8 Apr 88 14:25:22-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Representation Design for Problem Solving (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

               REPRESENTATION DESIGN FOR PROBLEM SOLVING

                          Jeffrey Van Baalen
                           MIT AI Laboratory
                          (jvb@HT.AI.MIT.EDU)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Thursday April 14


It has long been acknowledged that having a good representation is key
in effective problem solving.  But what is a ``good'' representation?
In this talk, I overview a theory of representation design for problem
solving that answers this question for a class of problems called
analytical reasoning problems.  These problems are typically very
difficult for general problem solvers, like theorem provers, to solve.
Yet people solve them comparatively easily by designing a specialized
representation for each problem and using it to aid the solution
process.  The theory is motivated, in large part, by observations of the
problem solving behavior of people.

The implementation based on this theory takes as input a straightforward
predicate calculus translation of the problem, gathers any necessary
additional information, decides what to represent and how, designs the
representations, creates a LISP program that uses those representations,
and runs the program to produce a solution.  The specialized representation
created is a structure whose syntax captures the semantics of the problem
domain and whose behavior enforces those semantics.

------------------------------

Date: Tue, 5 Apr 88 09:43:31 PDT
From: lunt@csl.sri.com (Teresa Lunt)
Subject: Seminar - Aquarius Multiprocessor Architectures (SRI)


SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


                     THE AQUARIUS PROJECT

                        Alvin M. Despain
                 Professor EECS, U.C. Berkeley

                 Wednesday, April 6 at 2:00 pm
                           Room IS109


The Aquarius Project has, as the fundamental goal of its research,
to establish the principles by which very large improvements in
performance can be achieved in machines specialized for calculating
difficult problems in design automation, expert systems, and
symbolic components.  We are committed to the eventual design of a
very high performance heterogeneous MIMD multiprocessor tailored to
the execution of both numeric and logic calculations.  Currently we
are focusing on an experimental multiprocessor architecture for the
execution of Prolog that will contain 12 processors specialized for
Prolog, and four others, for a total of 16 processors.

------------------------------

Date: Mon 11 Apr 88 23:16:28-PDT
From: Ken Laws <LAWS@KL.SRI.COM>
Reply-to: AIList-Request@SRI.COM
Subject: Conference - ICNN-88 Deadline

I have been told that the deadline for submitting full papers
to the IEEE Conference on Neural Networks has been extended a
few days, to Wednesday, April 13, 1988.

                                        -- Ken

------------------------------

Date: Fri, 1 Apr 88 09:04:53 EST
From: laird@caen.engin.umich.edu (John Laird)
Subject: Conference - Machine Learning

The Fifth International Conference on Machine Learning

June 12-14, 1988
The University of Michigan
Ann Arbor, Michigan


Sponsored by:
    the Cognitive Science and Machine Intelligence Laboratory of
    The University of Michigan

With support from
   American Association of Artificial Intelligence,
   the ONR Computer Sciences Division and the ONR Cognitive Science Program.

In cooperation with
   ACM/SIGART.


The Fifth International Conference on Machine Learning will be held at
the University of Michigan, Ann Arbor, June 12-14 1988. The goal of
the conference is to bring together researchers from all areas of
machine learning in an open forum.  This will be the first Machine
Learning Conference with open attendance.

The main focus of the conference will be the Technical Program.
Papers will be presented from all areas of machine learning,
including: empirical methods, explanation-based learning, genetic
algorithms, connectionist learning, discovery, and formal models of
learning.  During the three days of the conference there will be 20
papers presented in plenary sessions.  These talks will be based on
the papers accepted by the program committee following a stringent
review of 150 submitted papers.  A poster session will be held during
the evening of June 12 to allow the attendees to discuss the 30 short
papers that were accepted for the conference.  Plenty of free time is
reserved for informal meetings and discussions.

Three invited talks will be presented by experts in subareas of
machine learning:
     David Haussler, University of California, Santa Cruz
       Theoretical results in Machine Learning

     Geoffery Hinton, University of Toronto
       Connectionist Learning

     John Holland, University of Michigan
       Genetic Algorithms
These speakers will review the state-of-the-art in each subarea,
emphasizing current research topics and their relation to the broader
field of machine learning.

Registration material for the conference will be mailed out later this
month to all those that submitted papers as well as subscribers of the
Machine Learning journal.  If you wish to receive registration
material and are not a submitter or a subscriber, send your address
via e-mail to
 laird@umix.cc.umich.edu
or via US mail to
 Machine Learning Conference
 Cognitive Science and Machine Intelligence Laboratory
 The University of Michigan
 904 Monroe Street
 Ann Arbor, MI 48109-1234

------------------------------

Date: Fri, 1 Apr 88 09:12:28 CST
From: wick@umn-cs.cs.umn.edu (Michael Wick)
Subject: Conference - Workshop on Explanation


Here is a call for participation I would like to post on AIList.  As the
deadline for submission to the announced workshop is May 1, I am hopeful
that the call can be posted as soon as possible.  Thank you.



                *******   CALL FOR PARTICIPATION   *******

The first Workshop on Explanation will be held on Monday, August 22 prior
to the Seventh National Conference on Artificial Intelligence (AAAI-88) in
Minneapolis, Minnesota.  The one day workshop will bring together active
researchers in expert system explanation.  The major focus of the
workshop will be to explicitly outline the goals of explanation systems
and possible architectures for achieving these goals.  Potential participants
are invited to submit an extended abstract on issues relating to the
workshop's focus, including but not limited to: categorizations of
explanation, specific explanation designs, goals of explanation,
assumptions of explanation, and properties of explanation that influence
system design.

Abstracts are limited to 1000 words (LaTeX 11 point article type
preferred) and should include all key figures and references.  Potential
participants should send four (4) copies of their abstract to the
Workshop Chairman listed below no later than May 1, 1988.  Each abstract
must be marked with the author's name, address, net address (if
available), and telephone number.  Contributors will be notified of
acceptance or rejection no later than June 1, 1988.

Workshop Chairman: Michael R. Wick
                   Computer Science Department
                   University of Minnesota
                   Minneapolis, MN 55455

------------------------------

Date: Wed 6 Apr 88 09:38:29-EDT
From: Steve Feiner <Feiner@CS.COLUMBIA.EDU>
Subject: Conference - AAAI-88 Workshop on AI and Hypertext

AAAI-88 WORKSHOP
AI AND HYPERTEXT: ISSUES AND DIRECTIONS

Tuesday, August 23, 1988
St. Paul, MN

OBJECTIVES

The development of practical hypertext systems has evoked new interest in
hypermedia through much of the AI community.  Concurrently, progress in
knowledge representation, user models, natural language synthesis and
understanding, and in qualitative reasoning all promise to enhance the scope
and utility of hypertext documents.  Research into development and utilization
of massive knowledge bases is of intense interest to both disciplines.

The AAAI-88 Workshop on Artificial Intelligence and Hypertext will explore
novel and controversial issues at the frontier of AI and hypertext research.
Suitable topics include, but are not limited to:

        * automatic creation of hypertext from linear documents
        * user models and adaptive documents
        * integrating hypertext and heuristic systems
        * truth maintenance, argumentation, and collaborative writing of text
          and of programs
        * design, development and utilization of large knowledge bases and
          docuverses

This half-day workshop is intended to promote interaction among
leading researchers and practitioners.  Several brief position statements will
introduce central issues, to be followed by extensive general discussion.

ATTENDANCE

To promote lively and candid interchange, workshop attendance will be limited
to 35 participants.  Invitations to participate in the workshop will be
extended on the basis of a position paper, outlining the writer's relevant
work in, and positions on, the hypertext/AI frontier.

ORGANIZING COMMITTEE

Mark Bernstein, Eastgate Systems, Inc.
K Eric Drexler, Stanford University
Steven Feiner, Columbia University

REQUIREMENTS

The deadline for submitting position papers is May 1, 1988.  Online
submissions will not be accepted; hard copy only, please.  Position papers
should not exceed four pages.  Send position papers to:

        Mark Bernstein
        Eastgate Systems, Inc.
        PO Box 1307
        Cambridge, MA  02238,  USA
        (617) 782-9044

Invitations to participate will be extended in early June.

------------------------------

Date: 7 Apr 88 18:41:13 GMT
From: king@rd1632.Dayton.NCR.COM (James King)
Reply-to: king@rd1632.Dayton.NCR.COM (James King)
Subject: Conference - Workshop on Case-Based Reasoning at AAAI-88


                       Call for Participation

                             Workshop on

                        Case-Based Reasoning

           August 23, 1988, Radisson - St. Paul, Minnesota
                          Sponsored by AAAI


      Case-Based Reasoning  (CBR) involves  the use  of  past
cases to  analyze and  solve a new situation.  In precedent-
based CBR,  for example,  the objective  is to  construct an
analysis  and   argument  using  past  cases  as  supportive
justifications. In  other types  of CBR, the objective is to
construct  a   new  solution   (e.g.,  a   plan)  based   on
transformations of those already existing in case memory.

      Objective:   The goal  of this  workshop  is  to  bring
together both  active researchers  in CBR  as well as those
with potential  interest in  using CBR  in their own problem
domains. Through  overview  lectures,  paper  presentations,
panels, and  informal discussions, participants will explore
central issues  and current  results in CBR, discuss domains
and problems  where CBR  might prove  helpful, and establish
new contacts within the CBR community.  Areas for discussion
include:

      -  Representation of prior experiences and cases;
      -  Methods for indexing and retrieval of cases;
      -  Assessment of relevancy of past cases to a new case;
      -  Transformation of solutions from past cases;
      -  Comparison, explanation and justification using cases;
      -  Using hypothetical cases to test implications of a
         new analysis or solution;
      -  Use of cases as a knowledge acquisition strategy;
      -  Generic architectures for CBR systems.

      Attendance:   Limited to  approximately 35 participants
chosen by  the program  committee on  the basis of submitted
materials.

      Submission:  Six copies of an abstract of approximately
1500 words  and a short biographical sketch including a list
of a  few representative prior publications, particularly on
CBR. Those new to CBR who wish to attend should submit, instead
of an abstract, a statement of interest in CBR - in particular,
a description of the submitter's problem domain and its
case-based aspects.

     Send abstracts,  etc. to  Program Chair  by  April  20,
1988. Acceptances will be sent by May 20, 1988. Final papers
will be due by July 1. They will be bound and distributed at
the workshop.

     Program Chair:    Edwina  L.  Rissland,  Department  of
Computer & Information Science, University of Massachusetts,
Amherst, MA 01003

     Program Committee:   Kevin  Ashley  (UMASS),  James  A.
King, (NCR  Corporation),  Janet  Kolodner
(Georgia  Institute  of  Technology),  Christopher  Riesbeck
(Yale), Robert Simpson (DARPA/ISTO)

------------------------------

End of AIList Digest
********************

∂13-Apr-88  0054	LAWS@KL.SRI.COM 	AIList V6 #65 - Applications, Racter, Modal Logic, OPS5, microExplorer   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 13 Apr 88  00:54:19 PDT
Date: Tue 12 Apr 1988 22:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #65 - Applications, Racter, Modal Logic, OPS5, microExplorer
To: AIList@SRI.COM


AIList Digest           Wednesday, 13 Apr 1988     Volume 6 : Issue 65

Today's Topics:
  Applications - Circuit-Design Translators in Prolog/Lisp &
    Automatic Knowledge Extraction & Racter,
    Logic - Modal Logic References,
  AI Tools - Student Versions of OPS5 & TI microExplorer

----------------------------------------------------------------------

Date: Thu, 31 Mar 88 07:28:53 CST
From: lusk%antares@anl-mcs.arpa
Subject: translators in Prolog/Lisp

You might try Peter Rentjes (sp?) somewhere in North Carolina.  (Sorry I can't
be more specific)  He has a large circuit design language translator written
in Prolog, parts of which were released into the public domain at the recent
Prolog benchmarking workshop in Los Angeles.  For Peter's address you might
try Rick Stevens (stevens@anl-mcs.arpa).

------------------------------

Date: 3 Apr 88 00:31:00 GMT
From: portal!cup.portal.com!fiorentino1@uunet.uu.net
Subject: Re: automatic knowledge extraction

In response to Thomas Muhr: I refer you to Expert Systems for Experts
by Kamran Parsaye and Mark Chignell; John Wiley 1988 which covers the area
discussed quite well. I am specifically interested in repertory grids and have
toyed with the idea of purchasing Finn Tschudi's Flexigrid to use in learning
to apply grids in the psychotherapeutic process. I have a degree in Philosophy
and am starting a dissertation in Counseling Psychology for a PhD. I have been
an investigator for twenty years doing thousands of interviews. I realized
recently how everything I have done may have prepared me to try being
a knowledge engineer. I would be interested in knowing what literature
you recommend covering the use of grids and what available software is best.
I find much validity in what you say and would appreciate hearing your
advice.

------------------------------

Date: 5 Apr 88 04:52:59 GMT
From: portal!cup.portal.com!tony_mak_makonnen@uunet.uu.net
Subject: automated knowledge

I contacted Kamran Parsaye co-author of "Expert Systems For Experts"
and am considering buying a software package put out by his company
called Auto-Intelligence which incorporates repertory grids in which
I have special interest. I am torn between getting his package ( cost
$ 490.00) and getting Finn Tschudi's Flexigrid which I have seen demonstrated
(cost $ 400.00). Anyone out there with some acquaintance with either or
both of these programs? I can use some advice before I spend the
money.

------------------------------

Date: 5 Apr 88 15:20 PDT
From: JJD.MDC; Jacob J. L. Dickinson / McDonnell Douglas 
      <JJD.MDC@OFFICE-1.ARPA>
Subject: Re: AIList V6 #62 - RACTER, Expert Systems, Circuit Design

Racter (AIList V6 #62) is available for the Macintosh (I think about $30
retail), and possibly for the IBM PC.

--------------------------------

Date: 30 Mar 88 01:22:23 GMT
From: killer!usl!cal@ames.arpa  (Craig Anthony Leger)
Subject: Re: Modal Logic and AI -- References Needed

In article <1988Feb27.021115.11206@gpu.utcs.toronto.edu>
kurfurst@gpu.utcs.toronto.edu (Thomas Kurfurst) writes:
>I am seeking references to seminal works relating modal logic to artifical
>intelligence research, especially more theoretical (philosophical)
>papers rather than applications per se.

This is a list that I sent to a friend a couple of months ago.
These works do not represent current research in modal logic,
but are very useful as starting points and as standard reference works.
The comments are highly subjective, but provide some indication
as to whether the work has a philosophical or mathematical perspective.

%H BC 51 B64
%A Raymond Bradley
%A Norman Swartz
%T Possible Worlds:  An Introduction to Logic and Its Philosophy
%I Hackett
%C Indianapolis, Indiana
%D 1979
%X
This is a very enjoyable work that looks at modal logic
from the perspective of the philosopher.  Numerous sections
dealing with the relation between symbolic logic and
epistemology and the philosophy of science.  Sections 4.5
and 4.6 (pp. 205-245), together with a table (pp. 327-28),
are the most valuable parts of the book.

%H BC 135 L43
%A Clarence Irving Lewis
%A Cooper Harold Langford
%T Symbolic Logic
%I The Century Company
%C New York
%D 1932
%S The Century Philosophy Series
%E Sterling P. Lamprecht
%X
The classic work on modal logic.  Good essays on the notions
of logical implication and deduction.

%H BC 135 W7
%A Georg Henrik von Wright
%T An Essay in Modal Logic
%I North-Holland
%C Amsterdam
%D 1951
%S Studies in Logic and the Foundations of Mathematics
%E L. E. J. Brouwer, E. W. Beth, A. Heyting
%X
This is a very short work (90 pp.), yet perhaps the most illuminating.
Modal logic is treated almost entirely on the symbolic level;
very little discussion of conflicting interpretations.
It is my major source for those (relatively) undisputed results
in modal logic.

Bradley & Swartz  --  Lewis  --  von Wright
<== most philosophical most mathematical ==>

Good reading to you,
Craig Anthony Leger
cal@usl.usl.edu

------------------------------

Date: 10 Apr 88 15:17:19 GMT
From: dailey@tcgould.tn.cornell.edu  (John H. Dailey)
Subject: Re: Modal Logic and AI -- References Needed


  Though it is somewhat mathematically sophisticated, I think that the best
recent book on modal logic is: Modal Logic and Classical Logic, by Johan van
Benthem, Bibliopolis, 1983. A mathematically easier text is Hughes and
Cresswell's Companion to Modal Logic -- I don't have it here for the publishing
data, but it came out only a couple of years ago. Another, more specialized book
is: The Unprovability of Consistency, by George Boolos, Cambridge U. Press.
 For a more philosophical look at possible worlds you should read: Inquiry, by
Robert Stalnaker, MIT Press, 1984. Though none of these books deal with AI, they
are some of the best books recently done on modal logic. For work closer to AI
(actually, natural language processing) you might want to look at Montague
Semantics, which incorporates a possible worlds approach (see also Gallin's
Intensional Mathematics--(North Holland?) which gives some completeness results
for Montague systems). For criticisms of this approach see, e.g. the first
chapter of Topics in the Syntax and Semantics of Infinitives and Gerunds,
by Gennaro Chierchia, Ph.D. dissertation, UMass, Amherst.
  The list of articles on modal logic is endless, especially for natural
language semantics, but the above books should give you a good feel for the
subject.

|----------------------------------------------------------------------------|
|                                     John H. Dailey                         |
|                                     Center for Applied Math.               |
|                                     Cornell U.                             |
|                                     Ithaca, N.Y. 14853                     |
|                            dailey@CRNLCAM (Bitnet)                         |
|                            dailey@amvax.tn.cornell.edu   (ARPANET)         |
|----------------------------------------------------------------------------|

------------------------------

Date: 11 Apr 88 04:25:39 GMT
From: dailey@tcgould.tn.cornell.edu  (John H. Dailey)
Subject: Re: Modal Logic and AI -- References Needed


 Oops. In a previous article I credited Stewart Shapiro's Intensional Math. to
 D. Gallin. I meant to recommend D. Gallin, Intensional and Higher Order Logic,
 North Holland, 1975. Perhaps a good starting reference for various aspects of
 modal logic is the Handbook of Philosophical Logic, Vol. II, ed. D. Gabbay
 and F. Guenthner, D. Reidel, 1984.

                                 -John H. Dailey

--------------------------------

Date: 5 Apr 88 19:04:25 GMT
From: mtunx!lzaz!nitro!prophet@rutgers.edu  (Mike Brooks)
Subject: Re: Student versions of OPS5

In article <28259@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
>In article <1580@netmbx.UUCP> morus@netmbx.UUCP (Thomas Muhr) writes:
>>In article <27336@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
>>>(*) My experience is that most OPS5 programmers (not that there are many)
>>                                Is this right ? ---↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
>>>ignore or actively counter the "pick the most specific/least recently used"
>>>rules anyway.
>
>My guess is that there are very few active OPS5 programmers out there.
>For the most part I would say it is a dead language.  It is years out
>of date (in terms of representation power, etc.), has an awkward
>syntax, and promotes a rather strained coding style.  The fact that
>there are only two or three people contributing to this topic should
>give you some idea of how popular it is in regards to the net.

I can't give an exact number for OPS5 programmers (active) who are out here
but I personally have found OPS5 to be a stable and instructive
rule-based programming environment (though not the only one!).
I am working on a prototype system to handle resource and
activity planning  within a test lab.
Although at first I was a little annoyed by the shortcomings I initially
perceived, I discovered that OPS5, as an environment for learning rule-based
programming, is ideal because it doesn't have nifty full-screen user
interfaces or tons of libraries; it focuses your attention on the real
beef: the innards of the system or project at hand.
I find that at times, having so much to choose from confuses the issue
of what needs to be done.
When a need arises for functionality not terribly well handled in OPS5,
it's simply another call to an external procedure that tests for some
measure of success.

I want to stress that I am not advocating OPS5 as a do-all, end-all
tool, just that it is still useful, and if there are any OPS5 or OPS83
programmers out there I would love email from you detailing your
experiences with these *dinosaurs*.



Michael P. Brooks
E-mail: {mtuxo,ihnp4}!attunix!nitro!prophet

--------------------------------

Date: Fri, 8 Apr 88  17:21:44 CDT
From: Paul Fuqua <pf@ti-csl.csc.ti.com>
Subject: Re: TI microExplorer (Mac II coprocessor) ... [AIList V6 #52]

[I forwarded the message from Bill Luciw (V6 #52) to an internal mailing list
that deals with the microExplorer, and received a reply from the project
manager, Mike Field.  With his permission, and only formatting changes, here
are his comments.  I hope they prove useful.  - pf]


Date: Thursday, March 17, 1988  8:30am (CST)
From: rochester!kodak!luciw at louie.udel.edu  (bill luciw)
Subject: TI microExplorer (Mac II coprocessor) ... [AIList V6 #52]

2) Is TI's implementation of RPC available to other applications (such as those
   developed under MPW)?

RPC availability to other applications - we plan to address this in future
releases.


3) How well integrated is the microExplorer into the rest of the Mac
   environment - (cut, copy, paste, print on an AppleTalk printer) ?

Integration with Mac environment - the desk top, window system, and file
system integration is excellent; however, coupling the Lisp kill ring with
cut/copy/paste, and a direct interface to Apple printers are features to be
addressed in future releases.


4) Can you install the "load bands" on third party disks (SuperMac 150) or do
   they need to remain on the Apple hard disk (the load bands are supposed to
   be normal, finder accessible files)?

Installing "load bands" on 3rd party disks - no problem, as long as they
work with the Mac II.


5) How much of a hassle is it to port applications over to the little beastie
   from a normal Explorer (what about ART, KEE, SIMKIT, etc.)?

Porting applications - most ports we've looked at so far are fairly
trivial, or will be by first release. There are special requirements
related to screen updates and lack of mouse warping that must be covered,
however.


6) Do any benchmarks (ala Gabriel) exist for this machine?

Benchmarks - yes, we run Explorer benchmarks on the microExplorer.  It runs
at about 50% of the speed of an Explorer II.


7) How about ToolBox access from the Lisp Environment? (or am I dreaming?)

Toolbox access from Lisp - future release.


 Our group is responsible for testing this type of technology and developing a
 "delivery vehicle strategy."  Ideally, said delivery vehicle should be under
 $10K, but it looks like we'll be around $20K before we're through. This puts
 the microExplorer in the same price range as a "reasonably" equipped Sun
 3/60FC.

Pricing vs Sun 3/60FC - this is so far superior in performance and
environment to the Sun, it should not be an issue.


Paul Fuqua
Texas Instruments Computer Science Center, Dallas, Texas
CSNet:  pf@csc.ti.com (ARPA too, eventually)
UUCP:   {smu, texsun, im4u, rice}!ti-csl!pf

--------------------------------

End of AIList Digest
********************

∂13-Apr-88  0321	LAWS@KL.SRI.COM 	AIList V6 #66 - Probability, Intelligence, Ethics, Future of AI
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 13 Apr 88  03:21:24 PDT
Date: Tue 12 Apr 1988 22:41-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #66 - Probability, Intelligence, Ethics, Future of AI
To: AIList@SRI.COM


AIList Digest           Wednesday, 13 Apr 1988     Volume 6 : Issue 66

Today's Topics:
  Theory - Probability,
  Opinion - Simulated Intelligence & Ethics of AI & The Future of AI

----------------------------------------------------------------------

Date: Thu, 7 Apr 88 00:13:57 HAE
From: Spencer Star <STAR@LAVALVM1>
Reply-to: <AIList@Stripe.SRI.Com>
Subject: Probability: is it appropriate, necessary, or practical

> a uniquely probabilistic approach to uncertainty may be
> inappropriate, unnecessary and impractical.  D. J. Spiegelhalter

  For the record, it was this quote from Spiegelhalter by Paul Creelman
  that prompted my questioning the accuracy of the quote.  I have been
  able to track down the quote.  A few words were dropped from what
  Spiegelhalter actually wrote.  "Deviations from the methodological
  rigor and coherence of probability theory have been criticized,
  but in return it has been argued that a uniquely probabilistic
  approach to uncertainty may be inappropriate, unnecessary, and
  impractical."  Spiegelhalter goes on to discuss these points and then
  reply to them point by point.  Roses  to Paul for sending me the source
  and brickbats to him for the liberties he took with it.

------------------------------

Date: 5 Apr 88 01:26:17 GMT
From: mind!eliot@princeton.edu  (Eliot Handelman)
Subject: Simulated Intelligence

Intelligence draws upon the resources of what Dostoevsky, in the "Notes from
Underground", called the "advantageous advantage" of the individual who found
his life circumscribed by "logarithms", or some form of computational
determinism: the ability to veto reason. My present inclination is to believe
that AI, in the long run, may only be a test for underlying mechanical
constraints of theories of intelligence, and therefore inapplicable
to the simulation of human intelligence.


I'd like to hear this argued for or against.

Best wishes to all,

Eliot Handelman

------------------------------

Date: Mon, 4 Apr 88 12:18 EST
From: Jeffy <JCOGGSHALL%HAMPVMS.BITNET@MITVMA.MIT.EDU>
Subject: submission: Ethics of AI

From:<ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu  (Rick Wojcik)>
  (I have deleted much text from in between the following lines)
>But the payoff can be tremendous.
>In the development stage, AI is expensive,
>but in the long term it is cost effective.
>the demand for AI is so great that we have no choice but to
>push on.

        I would question the doctrine of "what is most _cost-effective_
(in the long term of course) is best." I think that, as Caroline Knight
said,
        "Whatever the far future uses of AI are
        we can try to make the current uses as
        humane and as ethical as possible."
        I mean, what are we developing it for anyway? It often seems that
AI is being developed for a specific purpose, but nobody seems to want to
be explicit about what it is. Technology is not neutral. If you develop AI
mainly as a war technology, then you will have a science that is most
easily suited for war (as far as I know, DARPA is _the_ main funder for AI
projects).
        Here is a quote from a book by Marcus Raskin and Herbert Bernstein:
        (they are talking about the Einstein-Bohr debate here, and how the
results of Quantum Mechanics show us an observer created universe):
        "Bohr's position puts man, or at least his machines, at the center
of scientific inquiry. If he is correct, science's style and purpose has to
change. The problem has been that the physicists have not wanted to make
any critical evaluation of their scientific work, an evaluation which their
research cried out for just because of their belief that human beings
remain at the center of inquiry, and man cannot know fundamental laws of
nature. They rejected Einstein's conception of a Kantian reality and
without saying it, his view of scientific purpose. Even though no
fundamental laws can be found independent of man's beliefs and machines he
constructs, scientists abjure making moral judgements as part of their
work, even though they know - and knew - that the very character of science
had changed.
        Standards for rational inquiry demand that moral judgements should
be added as an integral part of any particular experiment. Unless shown
otherwise, I do not see how transformations of social systems, or the
generation of a new consciousness can occur if we hold on to narrow
conceptions of rational inquiry. Inquiry must now focus on relationships.
The reason is that rational inquiry is not, cannot, and should not be
sealed from everyday life, institutional setting or the struggles which are
carried on throughout the world. How rational inquiry is carried on, who we
do it for, what we think about and _what we choose to see_ is never
insulated or antiseptic. Once we communicate it through the medium of
language, the symbols of mathematics, the metaphors and cliches of everyday
life, we call forth in the minds of readers or fellow analysts other issues
and considerations that may be outside of the four corners of the
experiment or inquiry. What they bring to what they see, read, or replicate
is related to their purpose or agenda or their unconscious interpretations"
        - from: "NEW WAYS OF KNOWING", page 115
        In any case,

        >But the payoff can be tremendous.
        >In the development stage, AI is expensive,
        >but in the long term it is cost effective.
        >the demand for AI is so great that we have no choice but to
        >push on.

        is a wrong way to view what one is doing, when one is doing AI.

                                                - Jeff Coggshall
                                        (JCOGGSHALL@HAMPVMS.BITNET)


        (This article is also in response to:
                From: ARMAND%BCVMS.BITNET@MITVMA.MIT.EDU
                Subject: STATUS OF EXPERT SYSTEMS?)

------------------------------

Date: 31 Mar 88 01:26:50 GMT
From: jsnyder@june.cs.washington.edu  (John Snyder)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>...  But the demand for AI is so great that we have no choice but to
>push on.

We always have the choice not to develop a technology; what may be lacking
are reasons or will.

jsnyder@june.cs.washington.edu              John R. Snyder
{ihnp4,decvax,ucbvax}!uw-beaver!jsnyder     Dept. of Computer Science, FR-35
                                            University of Washington
206/543-7798                                Seattle, WA 98195

------------------------------

Date: 1 Apr 88 22:26:06 GMT
From: arti@vax1.acs.udel.edu  (Arti Nigam)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]


In article <4565@june.cs.washington.edu> you write:
>
>We always have the choice not to develop a technology; what may be lacking
>are reasons or will.

I heard this from one of the greats in computer-hardware-evolution, only
I don't remember his name.  What he said, and I say, is essentially this:
if you are part of an effort towards progress, in whatever field or
domain, you should have some understanding of WHERE you are going and
WHY you want to get there.

Arti Nigam

------------------------------

Date: 31 Mar 88 15:10:30 GMT
From: ulysses!sfmag!sfsup!saal@ucbvax.Berkeley.EDU  (S.Saal)
Subject: Re: The future of AI - my opinion

I think the pessimism about AI is a bit more subtle.  Whenever
something is still only vaguely understood, it is considered a
part of AI.  Once we start understanding the `what,' `how,' and
(sometimes) `why' we no longer consider it a part of AI.  For
example, all robotics used to be part of AI.  Now robotics is a
field unto itself and only the more difficult aspects (certain
manipulations, object recognition, etc.) are within AI anymore.
Similarly so for expert systems.  It used to be that ES were
entirely within the purview of AI.  That was when the AI folks
had no real idea how to do ESs and were trying all sorts of
methods.  Now they understand them and two things have happened:
expert systems are an independent branch of computer science and
people have found that they no longer need to rely on the
(advanced) AI type languages (lisp, etc) to get the job done.

Ironically, this makes AI a field that must make itself obsolete.
As more areas become understood, they will break off and become
their own field.  If not for finding new areas, AI would run out
of things for it to address.

Does this mean it isn't worth while to study AI?  Certainly not.
If for no other reason than AI is the think tank, problem
_finder_ of computer science.  So what if no problem in AI itself
is ever solved?  Many problems that used to be in AI have been,
or are well on their way to being, solved.  Yes, the costs are
high, and it may not look as though much is actually coming out
of AI research except for more questions, but asking the
questions and looking for the answers in the way that AI does,
is a valid and useful approach.
--
Sam Saal         ..!attunix!saal
Vayiphtach HaShem et Peah HaAtone

------------------------------

Date: 3 Apr 88 11:03:30 GMT
From: bloom-beacon.mit.edu!boris@bloom-beacon.mit.edu  (Boris N
      Goldowsky)
Subject: Re: The future of AI - my opinion


In article <2979@sfsup.UUCP> saal@sfsup.UUCP (S.Saal) writes:

   Ironically, this makes AI a field that must make itself obsolete.
   As more areas become understood, they will break off and become
   their own field.  If not for finding new areas, AI would run out
   of things for it to address.

Isn't that true of all sciences, though?  If something is understood,
then you don't need to study it anymore.

I realize this is oversimplifying your point, so let me be more
precise.  If you are doing some research and come up with results that
are useful, people will start using those results for their own
purposes.  If the results are central to your field, you will also
keep expanding on them and so forth.  But if they are not really of
central interest, the only people who will keep them alive are these
others... and if, as in the case of robotics, they are really useful
results they will be very visibly and profitably kept alive.  But I
think this can really happen in any field, and in no way makes AI
"obsolete."

Isn't finding new areas what science is all about?

Bng


--
Boris Goldowsky     boris@athena.mit.edu or @adam.pika.mit.edu
                         %athena@eddie.UUCP
                         @69 Chestnut St.Cambridge.MA.02139
                         @6983.492.(617)

------------------------------

Date: 4 Apr 88 16:25:49 GMT
From: hubcap!mrspock@gatech.edu  (Steve Benz)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

From article <1134@its63b.ed.ac.uk>, by gvw@its63b.ed.ac.uk (G Wilson):
> In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>>         Moreover, your opinion that conventional techniques can
>>replace AI is ludicrous.  Consider the area of natural language.  What
>>conventional techniques that you know of can extract information from
>>natural language text or translate a passage from English to French?
>
> Errmmm...show me *any* program which can do these things?  To date,
> AI has been successful in these areas only when used in toy domains.

In a real world (real world at least as far as real money will carry you...)
project here, we developed a nearly-natural-language system that deals
with the "toy domain" of reading mail, querying databases, and some other stuff.

It may be a toy, but some folks were willing to lay out a significant
number of dollars to get it.  These applications are based on
a lazy-evaluation, functional language (I wouldn't call that a "conventional
technique.")

But the best part about the whole thing (as far as our contract monitor is
concerned) is that it really wasn't all that expensive to do--less than
20 man-months went into the development of the language and fitting out
the old menu-driven software with the new technique.  Overall, it was a
highly successful venture, allowing us to create high-quality user-interfaces
very quickly, and develop them semi-independently of the application itself.
None of the "conventional techniques" we had used before allowed us this.

So you see, AI has applications; I think the problem is that AI techniques
like expert systems and functional/logic programming simply haven't
filtered out of the University in sufficient quantity to make an impact on
the marketplace.  The average BS-in-CS-graduate probably has had a very
limited exposure to these techniques, hence he/she will be afraid of the
unknown and will prefer to stick with "conventional techniques."

To say that AI will never catch on is like saying that high-level languages
should never have caught on.  At one point it looked unlikely that HLLs
would gain wide acceptance; better equipment and better understanding by
the programming community made them practical.

                                        - Steve
                                        mrspock@hubcap.clemson.edu
                                        ...!gatech!hubcap!mrspock

------------------------------

Date: 3 Apr 88 08:51:59 GMT
From: mcvax!ukc!its63b!gvw@uunet.uu.net  (G Wilson)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>         Moreover, your opinion that conventional techniques can
>replace AI is ludicrous.  Consider the area of natural language.  What
>conventional techniques that you know of can extract information from
>natural language text or translate a passage from English to French?

Errmmm...show me *any* program which can do these things?  To date,
AI has been successful in these areas only when used in toy domains.

>The future of AI is going to be full of unrealistic hype and disappointing
>failures.

Just like its past, and present.  Does anyone think AI would be as prominent
as it is today without (a) the unrealistic expectations of Star Wars,
and (b) America's initial nervousness about the Japanese Fifth Generation
project?

>           But the demand for AI is so great that we have no choice but to
>push on.

Manifest destiny??  A century ago, one could have justified
continued research in phrenology by its popularity.  Judge science
by its results, not its fashionability.

I think AI can be summed up by Terry Winograd's defection.  His
SHRDLU program is still quoted in *every* AI textbook (at least all
the ones I've seen), but he is no longer a believer in the AI
research programme (see "Understanding Computers and Cognition",
by Winograd and Flores).

Greg Wilson

------------------------------

Date: 31 Mar 88 20:02:22 GMT
From: necntc!linus!philabs!ttidca!hollombe@AMES.ARC.NASA.GOV  (The
      Polymath)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?

My employers just sponsored a week-long in-house series of seminars,
films, vendor presentations and demonstrations of expert systems
technology.  I attended all of it, so I think I can reasonably respond to
this.

Apparently, the expert systems/knowledge engineering branch of so-called
AI (of which, more later) has made great strides in the last few years.
There are many (some vendors claim thousands of) expert-system-based
commercial applications running in large and small corporations all over
the country.

In the past week we saw presentations by Gold Hill Computers (GOLDWORKS),
Aion Corp. (ADS), Texas Instruments (Personal Consultant Plus) and Neuron
Data (Nexpert Object).  The presentations were impressive, even taking
into account their sales nature.  None of the vendors is in any financial
trouble, to say the least.  All claimed many delivered, working systems.

A speaker from DEC explained that their Vax configurator system couldn't
have been developed without an expert system (they tried and failed); it
is now one of the oldest and most famous expert systems running.

It was pointed out by some of the speakers that companies using expert
systems tend to keep a low profile about it.  They consider their systems
as company secrets, proprietary information that gives them an edge in
their market.

Personal Impressions:

The single greatest advantage of expert systems seems to be their rapid
prototyping capability.  They can produce a working system in days or
weeks that would require months or years, if it could be done at all, with
conventional languages.  That system can subsequently be modified very
easily and rapidly to meet changing conditions or include new rules as
they're discovered.  Once a given algorithm has stabilized over time, it
can be re-written in a more conventional language, but still accessed by
the expert system.  The point is that the algorithm might never have been
determined at all but for the adaptable rapid-prototyping environment.
(The DEC Vax configurator, mentioned above, is an example of this.  Much of
it, but not all, has been converted to conventional languages.)
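
As a rough sketch of that division of labor (the rules, numbers and names
below are hypothetical, not DEC's system), here is a tiny forward-chaining
engine in Python in which one stabilized calculation has been moved into
ordinary procedural code but is still invoked from the rules:

def required_power(facts):
    # "Conventional" code: a calculation that stabilized over time and was
    # rewritten procedurally, but is still called from the rule system.
    return 50 + 15 * facts["disk_drives"]

RULES = [
    # (condition, action) pairs; each action contributes new facts.
    (lambda f: f["disk_drives"] > 4 and "extra_cabinet" not in f,
     lambda f: {"extra_cabinet": True}),
    (lambda f: "power_watts" not in f,
     lambda f: {"power_watts": required_power(f)}),
    (lambda f: f.get("power_watts", 0) > 100 and "big_supply" not in f,
     lambda f: {"big_supply": True}),
]

def run(facts):
    # Forward-chain: keep firing applicable rules until nothing changes.
    changed = True
    while changed:
        changed = False
        for condition, action in RULES:
            if condition(facts):
                facts.update(action(facts))
                changed = True
    return facts

print(run({"disk_drives": 6}))

New rules can be appended without touching the procedural part, which is
the rapid-prototyping advantage described above.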

As for expense, prices of systems vary widely, but are coming down.  TI
offers a board with a LISP mainframe-on-a-chip (their term) that will turn
a MAC-II into a LISP machine for as little as $7500.  Other systems went
as high as an order of magnitude over that.  I personally think these
won't really take off 'til the price drops another order of magnitude to
put them in the hands of the average home hacker.

Over all, I'd have to say that expert systems, at least, are alive and
well with a bright future ahead of them.

About Artificial Intelligence:

I maintain this is a contradiction in terms, and likely to be so for the
foreseeable future.  If we take "intelligence" to mean more than expert
knowledge of a very narrow domain, there's nothing in existence that can
equal the performance of any mammal, let alone a human being.  We're just
beginning to explore the types of machine architectures whose great↑n-
grandchildren might, someday, be able to support something approaching
true AI.  I'll be quite amazed to see it in my lifetime (but the world has
amazed me before (-: ).

--
The Polymath (aka: Jerry Hollombe, hollombe@TTI.COM)   Illegitimati Nil
Citicorp(+)TTI                                           Carborundum
3100 Ocean Park Blvd.   (213) 452-9191, x2483
Santa Monica, CA  90405 {csun|philabs|psivax|trwrb}!ttidca!hollombe

------------------------------

End of AIList Digest
********************

∂13-Apr-88  0546	LAWS@KL.SRI.COM 	AIList V6 #67 - Future of AI
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 13 Apr 88  05:46:25 PDT
Date: Tue 12 Apr 1988 22:48-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #67 - Future of AI
To: AIList@SRI.COM


AIList Digest           Wednesday, 13 Apr 1988     Volume 6 : Issue 67

Today's Topics:
  Opinion - The Future Of AI

----------------------------------------------------------------------

Date: 31 Mar 88 09:18:18 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net  (Simon Brooke)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>>What do people think of the PRACTICAL future of artificial intelligence?
>>
>>Is AI just too expensive and too complicated for practical use?  I
>>
>>Does AI have any advantage over conventional programming?
>
>   Bear with me while I put this into a sociological perspective. The first
>great "age" in mankind's history was the agricultural age, followed by the
>industrial age, and now we are heading into the information age. The author

Oh God! I suppose the advantage of the net is that it allows us to betray
our ignorance in public, now and again. This is 'sociology'? Dear God!

>   For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a
>design for a car that meets these specifications.

And here we really do have God - the General Omnicompetent Device - which
can search an infinite space in finite time. (Remember that Deep Thought
took 7 1/2 million years to calculate the answer to the ultimate question
of life, the universe, and everything - and at the end of that time could
not say what the question was).

Seriously, if this is why you are studying AI, throw it in and study some
philosophy. There *are* good reasons for studying AI: some people do it in
order to 'find out how people work' - I have no idea whether this project
is well directed, but it is certain to raise a lot of interesting
problems. Another is to use it as a tool for exploring our understanding
of such concepts as 'understanding', 'knowledge', 'intelligence' - or, in
my case, 'explanation'. Obviously I believe this project is well directed,
and I know it raises lots of interesting problems...

And occasionally these interesting problems will spin off technologies
which can be applied to real world tasks. But to see AI research as driven
by the need to produce spin-offs seems to me to be turning the whole
enterprise on its head.


** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*************************************************************************

------------------------------

Date: 7 Apr 88 18:35:41 GMT
From: trwrb!aero!srt@ucbvax.Berkeley.EDU  (Scott R. Turner)
Subject: Re: The future of AI - my opinion

I think the important point is that as soon as AI figures something out,
it is not only no longer considered to be AI, it is also no longer considered
to be intelligence.

Expert systems are a good example.  The early theory was, let's try to
build programs like experts, and that will give us some idea of why
those experts are intelligent.  Nowadays, people say "expert
systems - oh, that's just rule application."  There's some truth to
that viewpoint - I don't think expert systems have a lot to say about
intelligence - but it's a bad trap to fall into.

Eventually we'll build a computer that can pass the Turing Test and
people will still be saying "That's not intelligence, that's just a
machine."
                                                -- Scott Turner

------------------------------

Date: 7 Apr 88 18:13:10 GMT
From: bloom-beacon.mit.edu!boris@bloom-beacon.mit.edu  (Boris N
      Goldowsky)
Subject: Re: The future of AI - my opinion


In article <28619@aero.ARPA> srt@aero.ARPA (Scott R. Turner) writes:

   Eventually we'll build a computer that can pass the Turing Test and
   people will still be saying "That's not intelligence, that's just a
   machine."
                                   -- Scott Turner
This may be true, but at the same time the notion that a machine could
never think is slowly being eroded away.  Perhaps by the time such a
"Turing Machine"* could be built, "just a machine" will no longer
imply non-intelligence, because there'll be too many semi-intelligent
machines around.

But I think it is a good point that every time we do begin to understand
some subdomain of intelligence, it becomes clear that there is much
more left to be understood...

                                        ->Boris G.

(*sorry.)
--
Boris Goldowsky     boris@athena.mit.edu or @adam.pika.mit.edu
                         %athena@eddie.UUCP
                         @69 Chestnut St.Cambridge.MA.02139
                         @6983.492.(617)

------------------------------

Date: 6 Apr 88 18:27:25 GMT
From: ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu  (Rick Wojcik)
Subject: Re: The future of AI

In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:
>[re: my reference to natural language programs]
>Errmmm...show me *any* program which can do these things?  To date,
>AI has been successful in these areas only when used in toy domains.
>
NLI's Datatalker, translation programs marketed by Logos, ALPs, WCC, &
other companies, LUNAR, the LIFER programs, CLOUT, Q&A, ASK, INTELLECT,
etc.  There are plenty.  All have flaws.  Some are more "toys" than
others.  Some are more commercially successful than others.  (The goal
of machine translation, at present, is to increase the efficiency of
translators--not to produce polished translations.)

>...  Does anyone think AI would be as prominent
>as it is today without (a) the unrealistic expectations of Star Wars,
>and (b) America's initial nervousness about the Japanese Fifth Generation
>project?
>
I do.  The Japanese are overly optimistic.  But they have shown greater
persistence of vision than Americans in many commercial areas.  Maybe
they are attracted by the enormous potential of AI.  While it is true
that Star Wars needs AI, AI doesn't need Star Wars.  It is difficult to
think of a scientific project that wouldn't benefit by computers that
behave more intelligently.

>Manifest destiny??  A century ago, one could have justified
>continued research in phrenology by its popularity.  Judge science
>by its results, not its fashionability.
>
Right.  And in the early 1960's a lot of people believed that we
couldn't land people on the moon.  When Sputnik I was launched my 5th
grade teacher told the class that they would never orbit a man around
the earth.  I don't know if phrenology ever had a respectable following
in the scientific community.  AI does, and we ought to pursue it whether
it is popular or not.

>I think AI can be summed up by Terry Winograd's defection.  His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores).

Weizenbaum's defection is even better known, and his Eliza program is
cited (but not quoted :-) in every AI textbook too.  Winograd took us a
quantum leap beyond Weizenbaum.  Let's hope that there will be people to take
us a quantum leap beyond Winograd.  But if our generation lacks the will
to tackle the problems, you can be sure that the problems will wait
around for some other generation.  They won't get solved by pessimists.
Henry Ford had a good way of putting it:  "If you believe you can, or if
you believe you can't, you're right."
--
Rick Wojcik   csnet:  rwojcik@boeing.com
              uucp:  {uw-june  uw-beaver!ssc-vax}!bcsaic!rwojcik
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844

------------------------------

Date: 8 Apr 88 12:24:51 GMT
From: otter!cdfk@hplabs.hp.com  (Caroline Knight)
Subject: Re: Re: The future of AI - my opinion

The Turing Test is hardly adequate - I'm surprised that people
still bring it up.  Indeed, it is exactly because people's
expectations change with what they have already seen on a computer
that this is a test with continuously changing criteria.

For instance, take someone who has never heard of computers
and show them any competent game: the technically
unsophisticated may well believe the machine is playing
intelligently (I have trouble with my computer beating
me at Scrabble), but those who have become familiar with
such phenomena "know better" - it's "just programmed".

The day when we have won is the inverse of the Turing Test - someone
will say this has to be a human not a computer - a computer
couldn't have made such a crass mistake  - but then maybe
the computer just wanted to win and looked like a human...

I realise that this sounds a little flippant but I think that
there is a serious point in it - I rely on your abilities
as intelligent readers to read past my own crassness and
understand my point.

Caroline Knight

------------------------------

Date: 7 Apr 88 18:47:28 GMT
From: hpcea!hpnmd!hpsrla!hpmwtla!garyb@hplabs.hp.com  (Gary
      Bringhurst)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

>    Some people wondered what
> was the use of opening up a trans-continental railroad when the pony
> express could send the same letter or package to where you wanted in just
> seven days....
>
>       Sean Brunnock
>       University of Lowell
>       sbrunnoc@eagle.cs.ulowell.edu

I have to agree with Sean here.  So let's analyze his analogy more closely.
AI is to the railroad as conventional CS wisdom is to the pony express.
Railroads can move mail close to three times faster than ponies; therefore
AI programs perform proportionately better than the alternatives, and are not
sluggish or resource gluttons.  Trains are MUCH larger than ponies, so AI
programs must be larger as well.  Trains travel only on well-defined tracks,
while ponies have no such limitations...

Hey, don't trains blow a lot of smoke?

Gary L. Bringhurst

------------------------------

Date: 3 Apr 88 18:11:49 GMT
From: pur-phy!mrstve!mdbs!kbc@ee.ecn.purdue.edu  (Kevin Castleberry)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

> It should increase the skill of the
>person doing the job by doing those things which are boring
>or impractical for humans but possible for computers.
>...
> When sharing a job
>with a computer which tasks are best automated and which best
>given to the human - not just which is it possible to automate!

For the most part, this is what I see happening in the truly successful
ES applications I see implemented.  Occasionally there is one that provides
a solution to a problem so complex that humans did not try.  Most of
the time it is just providing the human a quicker and more reliable way
to get the job done so s/he can move on to more interesting tasks.

>Perhaps computers will free people up so that they can go back
>to doing some of the tasks that we currently have machines do
>- has anyone thought of it that way?

I certainly have observed this.  Often the human starts out doing interesting
designing, problem solving etc., but then gets bogged down in the necessities
of keeping the *system* running.  I have observed such automation giving
humans back the job they enjoy.

>And if we are going to do people out of jobs then we'd better
>start understanding that a person is still valuable even if
>they do not do "regular work".

My own belief is that if systems aren't developed to help us work smarter,
then the jobs will disappear anyway to the company that does develop such
systems.


        support@mdbs.uucp
                or
        {rutgers,ihnp4,decvax,ucbvax}!pur-ee!mdbs!support

        The mdbs BBS can be reached at: (317) 447-6685
        300/1200/2400 baud, 8 bits, 1 stop bit, no parity

Kevin Castleberry (kbc)
Director of Customer Services

Micro Data Base Systems Inc.
P.O. Box 248
Lafayette, IN  47902
(317) 448-6187

------------------------------

Date: 11 Apr 88 01:56:48 GMT
From: hubcap!mrspock@gatech.edu  (Steve Benz)
Subject: Re: The future of AI - my opinion

From article <2070012@otter.hple.hp.com>, by cdfk@otter.hple.hp.com
(Caroline Knight):
> The Turing Test is hardly adequate - I'm surprised that people
> still bring it up...
>
> The day when we have won is the inverse of the Turing Test - someone
> will say this has to be a human not a computer - a computer
> couldn't have made such a crass mistake...
>
> ...Caroline Knight

  Isn't this exactly the Turing test (rather than the inverse)?
A computer being just as human as a human?  Well, either way,
the point is taken.

  In fact, I agree with it.  I think that in order for a machine to be
convincing as a human, it would need to have the bad qualities of a human
as well as the good ones, i.e.  it would have to be occasionally stupid,
arrogant, ignorant, etc.&soforth.

  So, who needs that?  Who is going to sit down and (intentionally)
write a program that has the capacity to be stupid, arrogant, or ignorant?

  I think the goal of AI is somewhat askew of the Turing test.
If a rational human develops an intelligent computer, it will
almost certainly have a personality quite distinct from any human.

                                - Steve
                                mrspock@hubcap.clemson.edu
                                ...!gatech!hubcap!mrspock

------------------------------

Date: 11 Apr 88 07:46:11 GMT
From: cca.ucsf.edu!daedalus!brianc@cgl.ucsf.edu  (Brian Colfer)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

Douglas Hofstadter says in Godel, Escher, Bach that we are probably
too dumb to understand ourselves at the level needed to make an intelligence
comparable to our own.  He uses the analogy of giraffes, which just
don't have the bio-hardware to contemplate their own existence.

We too may just not have the bio-hardware to organize a true
intelligence.  Now there are many significant things to be done short
of this goal.  The real question for AI is, "Can there really be an
alternative paradigm to the Turing test which will guide and inspire
the field in significant areas?"

Well...that's my $0.02


===============================================================================
             : UC San Francisco       : brianc@daedalus.ucsf.edu
Brian Colfer : Dept. of Lab. Medicine : ...!ucbvax!daedalus.ucsf.edu!brianc
             :  PH. 415-476-2325      : brianc@ucsfcca.bitnet
===============================================================================

------------------------------

Date: 12 Apr 88 04:33:54 GMT
From: phoenix!pucc!RLWALD@princeton.edu  (Robert Wald)
Subject: Re: The future of AI - my opinion

In article <1348@hubcap.UUCP>, mrspock@hubcap.UUCP (Steve Benz) writes:

>  Isn't this exactly the Turing test (rather than the inverse?)
>A computer being just as human as a human?  Well, either way,
>the point is taken.
>
>  In fact, I agree with it.  I think that in order for a machine to be
>convincing as a human, it would need to have the bad qualities of a human
>as well as the good ones, i.e.  it would have to be occasionally stupid,
>arrogant, ignorant, etc.&soforth.
>
>  So, who needs that?  Who is going to sit down and (intentionally)
>write a program that has the capacity to be stupid, arrogant, or ignorant?


  I think that you are missing the point.  It's because you're using charged
words to describe humans.

Ignorant: Well, I would certainly expect an AI to be ignorant of things
or combinations of things it hasn't been told about.

Stupid: People are stupid either because they don't have proper procedures
to deal with information, or because they are ignorant of the real meaning
of the information they do possess and thus use it wrongly. I don't see
any practical computer having some method of always using the right procedure,
and I've already said that I think it would be ignorant of certain things.
People think and operate by using a lot of heuristics on an incredible
amount of information, so much that it is probably hopeless to develop
perfect algorithms, even with a very fast computer. So I think that computers
will have to use these heuristics also.
  Eventually, we may develop methods that are more powerful and reliable
than humans. Computers are not subject to the hardware limitations of the
brain. But meanwhile I don't think that what you have mentioned are
'bad' qualities of the brain, nor inapplicable to computers.

Arrogance: It is unlikely that people will attempt to give computers
emotions for some time. On the other hand, I try not (perhaps
failing at times) to be arrogant or nasty. But as far as the turing
test is concerned, a computer which can parse real language could
conceivably parse for emotional content and be programmed to
respond. There may even be some application for this, so it may
be done. The only application for simulating arrogance production
might be if you are really trying to fool workers into thinking
their boss is a human, or at least trying to make them forget it
is a computer.

I'm not really that concerned with arrogance, but I think that an
AI could be very 'stupid' and 'ignorant'. Not ones that deal with limited
domains, but ones that are going to operate in the real world.
-Rob Wald                Bitnet: RLWALD@PUCC.BITNET
                         Uucp: {ihnp4|allegra}!psuvax1!PUCC.BITNET!RLWALD
                         Arpa: RLWALD@PUCC.Princeton.Edu
"Why are they all trying to kill me?"
     "They don't realize that you're already dead."     -The Prisoner

------------------------------

End of AIList Digest
********************

∂13-Apr-88  0827	LAWS@KL.SRI.COM 	AIList V6 #68 - AI News, Supercomputing, Seminars    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 13 Apr 88  08:27:06 PDT
Date: Tue 12 Apr 1988 22:59-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #68 - AI News, Supercomputing, Seminars
To: AIList@SRI.COM


AIList Digest           Wednesday, 13 Apr 1988     Volume 6 : Issue 68

Today's Topics:
  Opinion - Justification of AI,
  Reviews - Spang Robinson Report, V4 N2 &
    Spang Robinson Supercomputing, V2 N2,
  Seminars - Adaptive Knowledge for Genetic Algorithms (BBN) &
    Automated Inductive Reasoning about Logic Programs (SU)

----------------------------------------------------------------------

Date: 06 Apr 88  2341 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: Revenge at last!

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>
>Is AI just too expensive and too complicated for practical use?  I
>spent 3 years in the field and I'm beginning to think the answer is
>mostly yes.  In my opinion, all working AI programs are either toys or
>could have been developed much more cheaply using conventional
>techniques.

At last I get to use a retort that I thought of a half hour too late
almost 30 years ago.  After one of my first public lectures on LISP
in about 1960 in which I gave examples of algebraic computations,
someone in the back of the audience, I think his name might have
been Carl Peterson, said scornfully, "I could easily have programmed all
that in assembly language".  The retort should have been, "Well then,
why didn't you?"

------------------------------

Date: Tue, 12 Apr 88 19:53:48 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Review - Spang Robinson Report, V4 N2

Summary of Spang Robinson Report on Artificial Intelligence
Volume 4, No. 2, February 1988

Lead article is on "Who's Buying AI in 1988"

AI users at the National Institutes of Health and the National Library of
Medicine have "not had their AI efforts substantially affected by
the cuts so far.  In fact, NLM is actively recruiting AI programmers."

The Aerospace Daily (12/21/87) said "Artificial Intelligence may turn out
to be the most pivotal technology of this century."

"Equitable Life has disbanded its entire R&D group, including AI.  The
trader's workstation, the hot topic of yesteryear, appears to be a taboo
subject these days.  And a large number of resumes are circulating from
financial services AI programmers."

"The number of insurance companies in AI grows almost daily.  Price
Waterhouse, for example, has opened an AI research center in Menlo
Park, CA.  Arthur ADL intends to double its AI staff by the end of
1988.  Coopers and Lybrand will shortly open two more AI field offices."
&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑&↑
Applications

Westinghouse Electric Corporation has developed an on-line system for
monitoring plant chemistry in nuclear power plants.

General Electric implemented PHASEID, in Exsys, to identify phases in the
nickel-based superalloy Inconel 718.  It uses Rockwell
hardness, optical metallography and energy-dispersive spectroscopy.
(D. J. Parker, J. M. Arde, Jr. and S. T. Wlodek)

Canadian Pacific developed an expert system to analyze oil samples
from a diesel locomotive.  It interprets the data from a spectrometer.
The system contains 490 rules, has analyzed 10,000 samples and
is now deployed at five sites.  The system is being marketed to other
railroads.  A mechanic decided to disregard the recommendations of the
system, causing a $250,000 failure.

Texas Instruments developed a technician's assistant to handle epi reactors,
used in semiconductor manufacturing.  The system has saved
at least $80,000 per year
by improving mean-time-to-repair by 34 percent and mean-time-between-failures
by 44 percent.  It handles 95 percent of the problems.  It uses a
database of failures.  The success of the project led
to new projects for probe station repair, sputtering
stations, dry etchers and a compression nitride deposition system.

50,000-plus PC-based expert system shells of various types
have been sold.

((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((
Review of DEC's expert system seminar.

This seminar approaches "cultural planning"  It has case study approach
and discusses eight different models of organizational changes.
A consulting failure was described where AI was brought in to
fix a failing business unit using expert system technology.

DEC believes that since expert systems distribute knowledge, they tend
to decentralize the organization and distribute power.  The course costs
$2,000 and, in the opinion of the reviewer, is "well worth the price of
admission."

U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_U_

Discussion of Teknowledge.  It has laid off 30% of its employees
and will become an AI services company.  It lost eight million
due to costs from "tool products."

It will not do the database integration and application
packages of Copernicus and will not sell it through its direct
sales force.  It will continue to maintain M.1 and S.1.


#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@

Shorts:

Inference is founding a consortium including current
customers to develop expert systems for IBM mainframes.

They will be porting ART to Hewlett-Packard 9000 workstations.

Prophecy will market Contessa, a knowledge-based application generator
for financial services, on Sun-3s and Sun-4s.

IFPS/plus is a version of the famous financial modelling system IFPS
which has artificial intelligence language capability.  It will be
available on Apollo computers.

DEC's internal ROI on AI applications is 200 to 300 percent.

Intellicorp announced a $972,000 loss for the quarter ending December 31,
1987.  Revenues were five million.

Russell Notsker and Brian Sear (CEO and COO, respectively) have
resigned from Symbolics.  For the second quarter, the company lost
fifteen million dollars, including a restructuring charge of 12 million,
on 23 million in revenue.

Carnegie Group has added tools to Knowledge Craft for displaying
dial meters and thermometers and for maintaining a calendar
of events.

UNISYS is setting up an AI systems family so it can be a one-source
vendor for AI applications.

The Commerce Department reports that there are 2,000 to 3,000 LISP
programmers in the United States.  They make between $50,000 and $100,000
and continue to be in short supply.

------------------------------

Date: Sat, 2 Apr 88 21:02:17 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Review - Spang Robinson Supercomputing, V2 N2

The Spang Robinson Report on Supercomputing and Parallel Processing
February, 1988  Volume 2, Number 2

Lead Article is on "The Parallel Software Picture"

This article discusses various "granularities" of processing.

25 of Cray's customers have converted to UNIX, with 50 installations
intending to convert to UNIX.  Half of Cray's new orders are UNIX.

The article reports that for fine-grained parallelism (vector
processing), pre-compilers and compilers are extracting most of the available
parallelism.

Companies providing parallelizing tools for vector computers include:

COMPASS, Pacific Sierra Research, Scientific Computer Associates

Coarse-grained optimization is not in good shape (the article has
quotes from many sources to support this claim).

A "language triangle" is shown where the three viewpoints are
"prescriptive," e. g. machine language, logic programming and
"denotative" e. g. pure lisp or FP. Various languages are put in
the triangle at various places.
************************************************************
ETA has a contest in which the prize is an ETA 10P for a high school.
ETA will pay the costs, including electricity, for two years.

The high schools participating will be submitting a project done
by a three-student team.
&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*&*
Cray Announcements

Cray announced a top of the line Y-MP 832, a DS-40 disk system and
the FEI-3 interface.  The computer is 30 times the speed of the original CRAY-1.
The price is twenty million.

The FEI-3 interfaces Ethernet to the Cray.
>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)>)
Shorts

Floating Point Systems appointed Howard Thrailkill as its new CEO.

James R. Newcomb, who headed PDA International (mechanical
engineering CADCAM), is now vice president for strategic software
business development for Ardent Computers.

Celerity has dropped its vector-scalar 6000 lines and laid off 70 people
out of 100.

Sequent reported a 92 percent increase in revenues.  It had $38.5 million
in revenue and $4 million in profits.

Cray Research made $687 million and installed a total of 55 computer systems.

BMW has bought a Cray X-MP/28.

Multiflow now has an installed base of 19.

The United Kingdom's Meteorological Office has bought an ETA10-E.

Supertek, a Cray-compatible manufacturer, has raised at least four
million in its second financing round.

The National Science Foundation has selected MERIT to manage the
implementation of the NSFNet backbone.  IBM will contribute
packet-switching hardware and MCI will provide T1 circuits.

Oregon State has sponsored formation of the Oregon Institute for
Advanced Computing which is affiliated with the Oregon Graduate Center.

Cydrome's dataflow unit has achieved 10.4 megaflops on the Linpack 100x100
test and 3.7 on the Livermore Fortran Kernels.  The system costs $575,000.

Encore has announced a 4-MIPS entry-level system for $89,000.
Alliant introduced the FX/40 and FX/80.
The FX/80 is rated at 65 MFLOPS for the 1000 x 1000
Linpack measure.

San Diego Scientific Computing System has announced the SCS-30 XM, a machine
that delivers 75 percent of the performance of an SCS-40 at 60 percent of the
price.

------------------------------

Date: Tue 12 Apr 88 08:33:03-EDT
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Seminar - Adaptive Knowledge for Genetic Algorithms (BBN)


                       BBN Science Development Program
                             AI Seminar Series

            ADAPTIVE KNOWLEDGE REPRESENTATION:  A CONTENT SENSITIVE
                 RECOMBINATION MECHANISM FOR GENETIC ALGORITHMS

                           J. David Schaffer
                          Philips Laboratories
                   North American Philips Corporation
                      Briarcliff Manor, New York


                          BBN Laboratories Inc.
                           10 Moulton Street
                    Large Conference Room, 2nd Floor

                10:30 a.m., Tuesday, April 19, 1988

Abstract: This paper describes ongoing research on content sensitive
recombination operators for genetic algorithms. A motivation behind this
line of inquiry stems from the observation that biological chromosomes appear
to contain special nucleotide sequences whose job is to influence the
recombination of the expressible genes. We think of these as punctuation marks
telling the recombination operators how to do their job. Furthermore, we
assume that the distribution of these marks (part of the representation) in
a gene pool is determined by the same survival-of-the-fittest and genetic
recombination mechanisms that account for the distribution of the expressible
genes (the knowledge). A goal of this project is to devise such mechanisms
for genetic algorithms and thereby to link the adaptation of a representation
to the adaptation of its contents. We hope to do so in a way that capitalizes
on the intrinsically parallel behavior of the traditional genetic algorithm.
We anticipate benefits of this for machine learning.

We describe one mechanism we have devised and present some empirical evidence
that suggests it may be as good as or better than a traditional genetic
algorithm across a range of search problems. We attempt to show that its
action does successfully adapt the search mechanics to the problem space
and provide the beginnings of a theory to explain its good performance.
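
A minimal sketch of this kind of mechanism, in Python (all details below
are hypothetical and are not Schaffer's actual operator): each chromosome
carries, alongside its expressible bits, a per-locus punctuation bit saying
"crossover may switch parents here."  Both parts are inherited and mutated,
so the punctuation (the representation) faces the same selection pressure
as the genes (the knowledge).

import random

L = 16                       # chromosome length

def new_individual():
    genes = [random.randint(0, 1) for _ in range(L)]
    marks = [random.randint(0, 1) for _ in range(L)]   # punctuation bits
    return genes, marks

def fitness(ind):
    genes, _ = ind
    return sum(genes)        # toy objective: maximize the number of 1s

def recombine(p1, p2):
    g1, m1 = p1
    g2, m2 = p2
    child_g, child_m, source = [], [], 0
    for i in range(L):
        # Switch parents only at loci where either parent carries a mark.
        if m1[i] or m2[i]:
            source = random.randint(0, 1)
        child_g.append((g1, g2)[source][i])
        child_m.append((m1, m2)[source][i])
    return child_g, child_m

def mutate(ind, rate=0.02):
    genes, marks = ind
    genes = [g ^ (random.random() < rate) for g in genes]
    marks = [m ^ (random.random() < rate) for m in marks]
    return genes, marks

pop = [new_individual() for _ in range(40)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]
    pop = [mutate(recombine(*random.sample(parents, 2))) for _ in range(40)]
print(max(fitness(ind) for ind in pop))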

------------------------------

Date: 08 Apr 88  1430 PDT
From: Vladimir Lifschitz <VAL@SAIL.Stanford.EDU>
Subject: Seminar - Automated Inductive Reasoning about Logic Programs
         (SU)


        AUTOMATED INDUCTIVE REASONING ABOUT LOGIC PROGRAMS

            Charles Elkan (elkan@iving.cs.cornell.edu)
                  Department of Computer Science
                        Cornell University


                      Friday, April 1, 3:15pm
                              MJH 252

David McAllester and I have developed a prototype theorem prover
that applies induction in a new way to prove properties of logic
programs.  The soundness of the proof rules of our system follows
directly from the standard minimal model semantics of logic programs.
I shall describe the perspective on inductive theorem proving that
gave rise to our system, and then its architecture and proof rules,
using some varied examples of what it can prove.  Then I shall raise
for discussion various plans for future work, both theoretical and
practical.
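
For flavor only, here is the shape of inductive property such a prover
might establish, written as a structural induction in Lean 4; this is
merely an illustration, not the Elkan/McAllester system or its proof rules.

-- A property of list append proved by structural induction; a prover for
-- logic programs would apply the same shape of argument to the
-- corresponding append/3 predicate under its minimal model semantics.
theorem append_nil (xs : List Nat) : xs ++ [] = xs := by
  induction xs with
  | nil => rfl
  | cons x xs ih => rw [List.cons_append, ih]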

------------------------------

End of AIList Digest
********************

∂14-Apr-88  0112	LAWS@KL.SRI.COM 	AIList V6 #69 - Queries
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 14 Apr 88  01:12:34 PDT
Date: Wed 13 Apr 1988 22:48-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #69 - Queries
To: AIList@SRI.COM


AIList Digest           Thursday, 14 Apr 1988      Volume 6 : Issue 69

Today's Topics:
  Queries - Terminal Selection in Deep Diagnosis &
    comp.ai vs. comp.ai.digest & VMS LISP &
    Expert Systems in Clinical Psychology &
    Qualitative Process Programs &
    Test Engineering and AI & Functions in Expert Systems &
    Exciting Work in AI & Fuzzy Logic &
    Bibliography of Machine Learning & Kyoto Common Lisp

----------------------------------------------------------------------

Date: 11 Apr 88 20:34:11 GMT
From: techunix.BITNET!ameen@ucbvax.Berkeley.EDU  (Ameen Abu_Hanna)
Subject: Heuristics for terminal selection in deep diagnosis


    In model-based troubleshooting, probing into the diagnosed
system to examine some terminal's output is _one way_ to
discriminate between suspect components (competing hypotheses).

    Clearly, *choosing* a "good" terminal/port for examination is
vital for efficiency.  I need suggestions for heuristics to
estimate how "good" a terminal examination is (i.e. how much
discriminatory power it might yield in case such a test
succeeds/fails).

    The diagnosed system in my case is in the electrical/digital
domain and is modeled (structurally) by a hierarchical
representation in which a component might be either a primitive
or a module consisting of other (sub)components.

     Some criteria to be considered are the number of pins a chip
has (more pins on a suspected component raise the probability of
its "failure belief", hence a terminal affected by such a
component might be worth considering), the price of
"observability" of the expected output at some terminal, the
number of possible contributing suspect components for the
terminal, and terminal accessibility.  Any suggestions?
(Partial/conceptual ones are welcome.)
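
    As a rough illustration of the kind of heuristic being asked about,
here is a Python sketch (weights and field names are hypothetical) that
scores each candidate terminal and probes the highest-scoring one first; a
more principled variant would estimate the expected reduction of the
suspect set from each possible probe outcome.

def probe_score(terminal, weights=(2.0, 0.5, 1.0, 1.5)):
    w_suspects, w_pins, w_cost, w_access = weights
    return (w_suspects * len(terminal["contributing_suspects"])
            + w_pins * max(s["pins"] for s in terminal["contributing_suspects"])
            - w_cost * terminal["observation_cost"]
            + w_access * terminal["accessibility"])  # 0 (hard) .. 1 (easy)

candidates = [
    {"name": "U7.out", "observation_cost": 1.0, "accessibility": 0.9,
     "contributing_suspects": [{"pins": 40}, {"pins": 14}]},
    {"name": "U2.q3",  "observation_cost": 3.0, "accessibility": 0.3,
     "contributing_suspects": [{"pins": 14}]},
]

best = max(candidates, key=probe_score)
print(best["name"])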

              Thanks,
              Ameen Abu-Hanna,

Domain: ameen@techunix.technion.ac.il UUCP:    ...!ucbvax!ameen@techunix.bitnet
BITNET: ameen@techunix                ARPANET: ameen%techunix.bitnet@wiscvm.arpa

------------------------------

Date: Fri, 1 Apr 88 08:38:07 PST
From: nakashim@russell.stanford.edu (Hideyuki Nakashima)
Reply-to: nakashim@russell.UUCP ()
Subject: What is the difference between comp.ai and comp.ai.digest?


I have difficulty in distinguishing comp.ai.digest from comp.ai.
What is the difference between these two?

--
Hideyuki Nakashima
CSLI and ETL
nakashima@csli.stanford.edu (until Aug. 1988)
nakashima%etl.jp@relay.cs.net (afterwards)


  [Comp.ai is an unmoderated Usenet stream.  I merge its contents
  with messages from other networks to form the AIList digest.
  I then post these other messages to comp.ai.digest so that
  Usenet readers don't miss anything.  -- KIL]

------------------------------

Date: 1 Apr 88 15:41:01 GMT
From: ukma!nasa@TUT.CIS.OHIO-STATE.EDU  (Eric T. Freeman)
Subject: VMS LISP


NOTE: The following is posted for a friend, you may respond to either his
address or mine, I will forward responses to him.

******************************************************************************
WANTED--A public domain (or very inexpensive) copy of LISP for the VAX/VMS
(not Unix).  Must have compiler.  Must have someone to answer questions.
franz lisp would be fine.  Send mail to jones@dftnic.gsfc.nasa.gov

Thanks,

Tom Jones
******************************************************************************

Eric Freeman
University of Kentucky Computer Science
nasa@g.ms.uky.edu
freeman@dftnic.gsfc.nasa.gov

------------------------------

Date: Mon, 04 Apr 88 10:16:36 CDT
From: "Daniel J. Uetrecht"
      <C0013%UMRVMB.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Expert systems in Clinical Psychology


I am doing research into the use of expert systems in clinical psychology.
Specifically, I am interested in the role expert systems can play in
DSM diagnosis of clinical disorders, related areas in education and
teaching diagnostic procedures, and current software offerings in these
areas.  Can anyone point me to relevant sources of information?  All
responses will be greatly appreciated.

Thanks in advance....

Daniel J. Uetrecht
University of Missouri-Rolla
Acknowledge-To: <C0013@UMRVMB>

------------------------------

Date: 4 Apr 88 19:20:42 GMT
From: jas@cadre.dsl.pittsburgh.edu  (Jeffrey A. Sullivan)
Subject: Qualitative Process Programs

I am interested in obtaining a current version of a program running
simulations using qualitative process theoretic modes of knowledge
representation.  I have recently read a paper by Benjamin Kuipers in
which the system Q (a specialization of QSIM) is mentioned.  I would
like to get a pointer to Mr. Kuipers on the net or to anyone having
current versions of QPT programs.

I am also interested in references to articles where the limitations of QPT
are discussed.


--
..........................................................................
Jeffrey Sullivan                          | University of Pittsburgh
jas@cadre.dsl.pittsburgh.edu              | Intelligent Systems Studies Program
jasper@PittVMS.BITNET, jasst3@cisunx.UUCP | Graduate Student

------------------------------

Date: 4 Apr 88 19:38:15 GMT
From: hubcap!ncrcae!ncr-sd!ncrlnk!rd1632!king@gatech.edu  (James King)
Subject: Test Engineering and AI


I am looking for knowledgeable researchers involved in the application
of knowledge-based system (AI) approaches to test engineering.  If
anyone knows of people involved in this field could you please let me
know.

Test engineering --> board level testing, platform level testing, IC's, etc.

I would appreciate any assistance.

Jim King
NCR Corporation
513-445-1090

j.a.king@dayton.ncr.com

------------------------------

Date: 4 Apr 88 12:11:00 GMT
From: eliot.cs.uiuc.edu!riedesel@a.cs.uiuc.edu
Subject: functions in expert systems


I am interested in finding out the relative occurrence
of simple statements, arithmetic operators, and arbitrary
functions in expert system knowledge bases.

Specifically, what percentage of your expert system has
statements that are simple tests and assignments;
what percentage of statements use arithmetic operators
(e.g. +, -, *, etc.) to compute a value for an assignment;
and what percentage use arbitrary functions (e.g. external
LISP function calls)?

I am interested in understanding the importance of the
function in expert systems.  From an analysis point of view
functions complicate expert systems quite a bit.
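
A minimal Python sketch of how such a tally might be computed over a
knowledge base (the rule syntax and the regular expressions here are
hypothetical):

import re

rules = [
    "status := ok",
    "pressure := flow * resistance",
    "risk := lisp_call(assess_risk, patient)",
    "alarm := on",
]

def classify(stmt):
    # Arbitrary function call, arithmetic expression, or simple assignment?
    if re.search(r"\w+\s*\(", stmt):
        return "function"
    if re.search(r"[-+*/]", stmt):
        return "arithmetic"
    return "simple"

counts = {"simple": 0, "arithmetic": 0, "function": 0}
for r in rules:
    counts[classify(r)] += 1

for kind, n in counts.items():
    print(f"{kind}: {100.0 * n / len(rules):.0f}%")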

Please send replies to:
riedesel@aisunj.cs.uiuc.edu

thanks, Joel

------------------------------

Date: Thu, 7 Apr 88 09:50:06 EDT
From: reiter@harvard.harvard.edu (Ehud Reiter)
Subject: Exciting Work in AI

I was recently asked (by a psychology graduate student) if there was
any work being done in AI which was widely thought to be exciting and
pointing the way to further progress.  Specifically, I was asked for work
which:
        1) Was highly thought of by at least 50% of the researchers in
the field.
        2) Was a positive contribution, not an analysis showing problems
in previous work.
        3) Was in AI as narrowly defined (i.e. not in robotics or vision)

I must admit that I was (somewhat embarrassingly) unable to think of
any such work.  All the things I could think of which have people excited
(ranging from non-monotonic logic to connectionism) seemed controversial
enough so that they could not be said to have the support of half of all
active AI researchers.

Any suggestions?  Please remember that I need things which are widely approved
of, not things which excite you personally.
                                                Ehud Reiter
                                                reiter@harvard.harvard.edu
                                                reiter@harvard  (BITNET,UUCP)

------------------------------

Date: Thu, 7 Apr 88 10:21 CDT
From: LMASON%MCOPN1%eg.ti.com@RELAY.CS.NET
Subject: @ailist >> REQUEST FOR INFO OF FUZZY LOGIC

This month's IEEE Computer magazine has an article by Lotfi Zadeh on Fuzzy
Logic.  He mentions two of the 'several' expert system shells based on fuzzy
logic: Reveal and Flops.  Does anyone have experience with either of these
or know of other available systems?

Thanks in advance,
  Larry Mason

lmason%mcopn1%ti-eg@csnet-relay.arpa

------------------------------

Date: 6 Apr 88 13:47:37 GMT
From: stefan@gmu90x.gmu.edu  (Pawel Stefanski)
Subject: Bibliography of Machine Learning Research.

Together with K. Bridgeman and P. Scarbrough, we are preparing the update
to the bibliography of machine learning research 1984-1988.  We have
examined all major sources, such as the proceedings of IJCAI, AAAI, the ML
Workshops, etc.  It is still possible, however, that some important work was
omitted, either because it was published somewhere else or not published
at all.  Therefore, I would appreciate it if the author of such a work would
send me a short note about it (or possibly the article) ASAP.
The detailed list of all examined sources follows.
Thanks in advance, and send info to: stefan@gmu90x.gmu.edu.
List of examined sources:
Proceedings of IJCAI 85,87
Proceedings of AAAI 84,86,87
Proceedings of Int.Conf.Genetic Algorithms 85,87.
Proceedings of the ACM SIGART Int.Symp. on Methodologies for Intell.
Systems 86
MLJournal 84-87
Artificial Intelligence 84-87
AI Magazine 84-87
IEEE Trans. on Patt.Anal.and Mach.Intell. 84-87
IEEE Trans. on Systems,Man and Cyber. 84-87
IEEE Expert 84-87
MLWorkshop 85 Rutgers, 87 Irvine.
Proceed. of European Work. Sess. on Learning, Bled, Yugoslavia 87

------------------------------

Date: 7 Apr 88 07:10:11 GMT
From: mcvax!unido!laura!atoenne@uunet.uu.net  (Andreas Toenne)
Subject: Kyoto Common Lisp


        Someone posted an article about Kyoto Common Lisp about a year ago.

How much does it cost (I think there was a handling charge only)?
Where can I get a tape/streamer copy?

Are there other *cheap* full Common Lisp implementations?

        Thanks in advance,

        Andreas Toenne
        atoenne@unido.uucp

------------------------------

End of AIList Digest
********************

∂14-Apr-88  0322	LAWS@KL.SRI.COM 	AIList V6 #70 - Queries
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 14 Apr 88  03:22:50 PDT
Date: Wed 13 Apr 1988 22:54-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #70 - Queries
To: AIList@SRI.COM


AIList Digest           Thursday, 14 Apr 1988      Volume 6 : Issue 70

Today's Topics:
  Queries - Proposal for Nanotechnology Group &
    Macsyma & Connectionist Models Paper &
    Structured Methodologies for Expert Systems &
    Can you name this project? & AI Journal Subscription &
    Semantic Networks & Rulebases &
    Object-Oriented Techniques & Human Face Recognition  &
    Genetic Algorithms Paper & ART and Knowledge Craft

----------------------------------------------------------------------

Date: 8 Apr 88 04:27:13 GMT
From: aramis.rutgers.edu!klaatu.rutgers.edu!josh@rutgers.edu  (J Storrs Hall)
Subject: Proposal for Nanotechnology group

I wish to start a new group to discuss nanotechnology.  I have searched high
and low for such a group amongst the existing ones with no success.

Nanotechnology is the as-yet-nascent engineering of molecular-scale machines.
The word "nanotechnology" was introduced by Eric Drexler in "The Engines
of Creation".

A nanotech group would take into its purview somewhat disparate areas,
such as
        -- molecular modelling
        -- physics of computation (as relates to atomic-sized gates...)
        -- hyper-parallel fine-grain architectures (ditto)
        -- theory of self-reproducing machines
        -- molecular machine design (gears, pulleys, levers...)
        -- Higher-level organizational issues, ie, how do you build
                a house using submicroscopic machines
        -- "profiles of the future" (although limited to what things
                nanotechnology will or will not make possible, and
                leave the "social implications" to other groups.)

There is an ARPANET nanotech list, which I would try to gateway.

I propose this to be a moderated list, and I volunteer to be the
moderator.  As a technical group, I don't expect there to be any
significant arguments on the list, so the moderator's function would
be primarily to eliminate redundancy and to insert directly information
from various outside sources, which I already collect.

Because this is to be a technical group, the proper Usenet
classification appears to be sci.nanotech.

Usenet backbone rules require a vote on new groups, so I hereby
solicit your vote:  Please mail to me, and don't post to this newsgroup:

   josh@klaatu.rutgers.edu
or ...!rutgers!klaatu.rutgers.edu!josh

 a message saying that you vote YES for the sci.nanotech newsgroup.
(If you use the "r" command, save off a copy before sending as the
"r" command often misroutes mail;  if the "r" message bounces you can
mail to the appropriate address above.)

After the standard 30-day period, the votes will be counted.  That works
out to be May 8 as I write this.  That leaves you long enough to go out
and buy "Engines of Creation" and read it (if you haven't already).

Then send me a yes before seven days in May have elapsed...

--JoSH

Inventor, n.:  One who assembles an ingenious collection of gears,
               pulleys, and levers, and believes it Civilization.
  -- Ambrose Bierce

------------------------------

Date: 9 Apr 88 06:46:37 GMT
From: stride!tahoe!unsvax!jimi!johnny!robert@gr.utah.edu  (Robert
      Cray)
Subject: Macsyma


  I need to get Macsyma (for VMS).  As far as I can tell, there are two
  choices, the Argonne version for 3.1k and the Symbolics version for 10k.
  Can anyone tell me what, if anything, I get from Symbolics for that
  extra 7k?  Argonne says "I have no idea", and I don't trust Symbolics
  to give me a straight answer...Thanks.

                                --robert

--
robert@jimi.cs.unlv.edu
cray%lvva.span@sds.sdsc.edu

------------------------------

Date: Mon, 11 Apr 88 13:19:33 MET
From: TNPHHBU%HDETUD1.BITNET@CUNYVM.CUNY.EDU
Subject: Connectionist Models Paper Wanted


Can anyone tell us how we can obtain a copy of:
"Connectionist Models: Not Just a Notational Variant, Not a Panacea,"
D.Waltz. (TINLAP-3 Proceedings, NM State University, Jan. 87) ?

It is mentioned in a Thinking Machines Corp. 'technical papers overview',
where it says that no copies of this paper are presently available
from them.

Thanks in advance,

Hans Buurman, Martin Kraaijveld
Pattern Recognition Group
Department of Applied Physics
Delft University of Technology
Delft
the Netherlands

------------------------------

Date: Fri, 1 Apr 88 11:30:43 PST
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: Query - Structured Methodologies for Expert Systems

A friend who is not (yet) receiving AIList has asked me to post a
request for information on STRUCTURED METHODOLOGIES FOR EXPERT
SYSTEMS.  I recall at least one recent posting but no longer have it
available.  Please send responses to Robert Briggs, email:
briggs@calstate.bitnet (not to me).  Thanks.

David R. Lambert (lambert@nosc.mil)

------------------------------

Date: Tue, 05 Apr 88 13:48:15 EDT
From: Paul.Birkel@K.GP.CS.CMU.EDU
Subject: Can you name this project?


In a recent exposition on parallel computing in the popular press the
following paragraph appeared. Can anyone name this project, its investigators,
a contact, a publication, or provide any further information?

        "Another Yale program - to monitor the equipment in an
        intensive-care unit - is more flexible still. Each processor
        in this system runs a different program, which monitors the
        equipment for signs of a particular problem or ailment. Because
        each program has its own processor, it is ever-vigilant for
        signs of its disease."

Thank you,

Paul A. Birkel
Dept. of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213

(412) 268-8893

------------------------------

Date: Wed, 6 Apr 88 12:14:17 BST
From: Graeme Smith BBN Labs <gsmith%scotland.bbn.com@RELAY.CS.NET>
Subject: AI Journal subscription

Can anybody out there help me with getting the addresses
and subscription details for the AI Journal and AAAI magazine?

Thanks in advance

Graeme Smith

  [The details are somewhere in the first few pages of the
  magazines, of course, but I believe you can get information
  about AAAI from OFFICE@SUMEX-AIM.STANFORD.EDU.  -- KIL]

------------------------------

Date: 7 Apr 1988 14:29:55-WET DST
From: rich <rich%EASBY.DURHAM.AC.UK@CUNYVM.CUNY.EDU>
Subject: Semantic Networks : HELP !!!

  I am investigating search techniques over realistically sized
semantic net structures as part of the work for my Ph.D. thesis.
I would be very glad to hear from anyone who can provide a copy of a
large semantic network on which to test my techniques, as the time
overhead in building my own semantic net is prohibitive. The language
of implementation of the network and its content is immaterial - any
size of network would be most welcome.

  I would also appreciate hearing from anyone who is undertaking work
on semantic network search strategies and the use of semantic network
structures as the basis for commercial databases. Hopefully we may be
able to help each other.

  Thanks very much.........

                    Richard.


E-MAIL:  UUCP  ...!mcvax!ukc!easby!rich
     ARPA  rich%EASBY.DUR.AC.UK@CUNYVM.CUNY.EDU
     BITNET rich%DUR.EASBY@AC.UK

FROM:  Mr. R.M.Piercy,
        Computing Dept,
     S.E.A.S,
      Science Laboratories,
       South Road,
        DURHAM DH1 3LE,
         ENGLAND.

------------------------------

Date: Sat, 9 Apr 88 09:57:58 est
From: WRM%WPI.BITNET@husc6.harvard.edu
Subject: Looking for rulebases

I am looking for a selection of rulebases (20-200 rules) to assist in
some parallel systems research.  The type or content of the rulebase
doesn't really matter.  What I'm out to determine is whether there are any
'typical' characteristics which apply to a wide variety of applications.
A simple ASCII-readable list is fine.  A brief explanation of any
non-obvious syntax would be appreciated.   Thanks.

Bill Michalson  wrm@wpi.bitnet

------------------------------

Date: Tue, 12 Apr 88 10:31:32 EDT
From: CMSBE1%EOVUOV11.BITNET@CUNYVM.CUNY.EDU
Subject: Request about Object-Oriented techniques.  Thanks for
         previous

Date: 12 April 1988, 10:21:56 EDT
From: Juan Francisco Suarez Vicente  (KIKO)          CMSBE1   at EOVUOV11
To:   AILIST-REQUEST at KL.SRI

Hello from Spain!!!
I've sent several requests to AIList about uncertainty and medical
diagnosis, and I've received only a few answers.  Thanks to all anyway.
(Thanks to Robert Hummel, Donald Mitchell, Ben Bloom, Gerald Quirchmayr,
and others who answered me.)
Because we are beginning with expert systems applications at this node
(we are not restricted to medical applications; that is only one of our
expert system projects, because we are in a Data Process & Computer Center),
we're trying to obtain all possible information about them.  And now we often
hear things about "object-oriented programming and languages".
Could anyone enlighten me about the main concepts of object-oriented
techniques, languages, available shells, etc.?
We also hear things about "OO databases".  What's the main difference
between "OO-DBs" and "relational DBs"?
We'd like to learn about this subject, and we'd be grateful
if anyone could send us some bibliographic references on OOP and OO languages,
and tell us where to obtain these references.
By the way, we'd like to contact any users who can help us in OO
subjects.  THANK YOU VERY MUCH.
         Best Wishes from Spain.

My postal address is:
         Juan Francisco Suarez Vicente
         C/ Santa Teresa de Jesus, 20, sexto derecha, escalera B
         33007 - OVIEDO
         ASTURIAS
         SPAIN

THANK YOU ALL.
              CMSBE1@EOVUOV11.BITNET   (KIKO)


  [Two articles comparing database approaches are Gio Wiederhold's
  "Views, Objects, and Databases", Computer, December 1986, pp. 37-44,
  and Bic and Gilbert's "Learning from AI: New Trends in Database
  Technology", Computer, March 1986, pp. 44-54.  -- KIL]

------------------------------

Date: 12 Apr 88 23:37:41 GMT
From: skyi@june.cs.washington.edu (Seungku Yi)
Subject: human face recognition


I am doing my research on human face recognition, but unfortunately I
don't have enough references. I have Kelly's (1970) and Kanade's (1973)
theses and nothing else. They are pretty old. I would like to know where
I can get good references on face recognition. If you know something about
this, please share the information with me. I prefer e-mail. My mail address
is "skyi@june.cs.washington.edu".  Thanks in advance.

        -SeungKu

------------------------------

Date: Wed, 13 Apr 88 11:03:37 EST
From: Manoel Fernando Tenorio <tenorio@ee.ecn.purdue.edu>
Subject: Re: AIList V6 #68 - Genetic Algorithms

How can I get a copy of BBN's seminar paper on Genetic Algorithms? --ft.

------------------------------

Date: Wed, 13 Apr 88 21:05:50 PDT
From: Mark Richer <RICHER@SUMEX-AIM.Stanford.EDU>
Subject: Need info on ART & Knowledge Craft

I am updating some information in an article I wrote 2 years ago
that included descriptions of ART 2.0 and Knowledge Craft 3.0. If
you are familiar with the current versions, I'd appreciate exchanging
a few brief messages to help update me on some aspects of the products.
All I can offer is an acknowledgement in the book I am editing and my
heartfelt thanks. Please respond to me directly rather than to this
list (as admittedly I haven't had time to read the digest).

thanks,

Mark

P.S. I have tried repeatedly to contact someone at Inference and
Carnegie Group to help me with this, but time is running out and I
don't feel like trying again.

------------------------------

End of AIList Digest
********************

∂18-Apr-88  0158	LAWS@KL.SRI.COM 	AIList V6 #71 - Moderator Needed, AI Goals Discussion
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 18 Apr 88  01:58:37 PDT
Date: Sun 17 Apr 1988 23:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #71 - Moderator Needed, AI Goals Discussion
To: AIList@SRI.COM


AIList Digest            Monday, 18 Apr 1988       Volume 6 : Issue 71

Today's Topics:
  Administrivia - New AIList Moderator(s) Needed,
  Opinion - AI Goals

----------------------------------------------------------------------

Date: Fri 26 Feb 88 09:54:36-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: New AIList Moderator(s) Needed

It seems that I will be taking leave of absence from SRI to
work at NSF for a couple of years, starting this July.  (I
will report more details later.)  AIList will thus need a
new moderator by mid June.  Any volunteers?

I suspect that AIList has become too much for an inexperienced
moderator to handle alone.  About half my effort has been spent
editing seminar and conference announcements, so I suggest
dropping those or spinning them off to another list.  (Usenet
has a widely read conference list; perhaps it should be used
for AI notices also.  I have had positive feedback about the
seminar notices, but they seem out of place in a discussion
list -- and provide little that one cannot get by scanning the
latest conference proceedings.)

Separate lists might also be created to handle AI-related
hardware/software queries and discussions, expert systems,
AI in business and engineering, logics, commonsense
reasoning, philosophy, psychology, cognitive science, etc.
We should try to follow the Usenet list structure where
possible, but the most important ingredient is an enthusiastic
moderator or administrator.

Setting up a discussion list is not terribly difficult.  All
you need is the ability to remail messages to a bunch of people,
preferably in batch form to reduce the number of bounce messages
from broken connections.  I can help with such mechanics as
determining return paths for messages having nonstandard header
syntax.  The postmasters on the net have always been very
helpful, particularly Erik Fair at UCBVAX.

If no moderator is found, AIList will continue on the
unmoderated Usenet comp.ai stream.  There are advantages to
this format, including fast turnaround and ease of saving or
replying to individual messages.  (Disadvantages include lack
of thematic grouping and of editorial screening.)  Someone
on the Arpanet (or other network?) could redistribute the
messages much as I have done.  Submissions could be sent
directly to the comp.ai gateway, or to the Arpanet redistributor
if the gateway manager preferred it so.  I am not sure whether
gatewaying to BITNET must go through the Arpanet, but something
can be worked out.

I have enjoyed being the moderator, and will continue to
participate in discussions.  Thanks to all of you for making
this effort such a success.

                                        -- Ken Laws

------------------------------

Date: Wed, 13 Apr 88 13:28:23 -0400
From: koomen@cs.rochester.edu
Subject: Re: Review - Spang Robinson Report, V4 N2  [AIList V6 #68]

> Canadian Pacific developed an expert system to analyze oil samples
> from a diesel locomotive.  [...]  A mechanic decided to disregard the
> recommendations of the system, causing a $250,000 failure.

A dangerous bias.  What about the times that the disregard was correct,
or the acceptance incorrect?  If we do not allow for mistakes on the
part of the human operator, the "recommendations" will no longer be
recommendations but injunctions.  And then where does that leave us?
Without further particulars about the case, the following comment would
have done equally well: "However, the system explained its reasons for a
recommendation so poorly that a mechanic decided to disregard it,
causing a $250,000 failure."

-- Hans

EMail:  Koomen@CS.Rochester.Edu         Paper:  Johannes A. G. M. Koomen
                                                Dept. of Computer Science
Phone:  (716) 275-9499 [work]                   University of Rochester
        (716) 442-4836 [home]                   Rochester, NY  14627

------------------------------

Date: Wed, 13 Apr 88 09:30 EST
From: INS_ATGE%JHUVMS.BITNET@CUNYVM.CUNY.EDU
Subject: AI -- Will we be "programming"?

   Something which has recently struck me is how little programming
is actually done in neural networks.
   An experiment was recently done by Dr. Sejnowski of JHU regarding the
interneurons, or "hidden units", that develop in a neural network
trained to recognize concave vs. convex features by sight.
   Now, although I'm sure the researchers had some guesses as to the
"receptive" and "projective" areas of the hidden units, they never
"programmed" them.  The neural network was trained (using back propagation,
or possibly a local-minima-avoiding algorithm), and the hidden units
ended up looking like neurons found in the cat visual pathway that
correspond to a subclass of what were originally thought to be
edge-detection neurons.  (Note: the concave vs. convex neurons are a subset
of the so-called "complex" cells thought to be edge detectors -- not all the
data on complex cells fits all the concave vs. convex hidden units, so
it appears they are a subclass, particularly the subclass that does not
respond well in the center of the field.)
   In other words, although the experimenters tried to create something,
they did not "program" the entire system--it organized -itself-.
   Not only that, but it added a new hypothesis to the area of neuroscience.
Thus this new area of science is labeled "Computational Neuroscience."
  -Thomas Edwards
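
  [To make the self-organization concrete, here is a minimal back-
  propagation sketch in Python/NumPy -- an editorial illustration of
  the general technique, not the Sejnowski experiment.  The toy task
  (XOR), network size, and all parameters are made up; the point is
  that the hidden units' weights are never specified, only adjusted
  by the error gradient.  -- Ed.]

  import numpy as np

  rng = np.random.default_rng(0)
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

  W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
  W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
  sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
  lr = 0.5

  for step in range(20000):
      h = sigmoid(X @ W1 + b1)                 # hidden-unit activations
      out = sigmoid(h @ W2 + b2)               # network output
      d_out = (out - y) * out * (1 - out)      # output-layer error signal
      d_hid = (d_out @ W2.T) * h * (1 - h)     # error backpropagated to hidden layer
      W2 -= lr * (h.T @ d_out)
      b2 -= lr * d_out.sum(0)
      W1 -= lr * (X.T @ d_hid)
      b1 -= lr * d_hid.sum(0)

  # After training, inspect W1 to see what features the hidden units
  # have organized themselves to detect.
  print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))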

------------------------------

Date: Wed, 13 Apr 88 15:36 CDT
From: SHERRARD%CODSD1%eg.ti.com@RELAY.CS.NET
Subject: Silly Discussion


        It seems to me to be silly to discuss things like "when a computer
passes the Turing test it will be intelligent."  Intelligence is not
a binary, have-it-or-you-don't thing.  It is all a matter of degree.

        I agree with Caroline Knight on her comment about a machine
having the human qualities of arrogance, ignorance and the like.  For
the rest of you (Rob Wald) who believe a 'test' is required to settle
has-intelligence vs. does-not-have-intelligence, I suggest you ask
a third grader if it takes any intelligence to multiply and divide.

        If you look at intelligence the way I do you will see how
far we have come in creating machine intelligence.

-jeff sherrard

------------------------------

Date: 14 Apr 88 14:03:10 GMT
From: eniac.seas.upenn.edu!lloyd@super.upenn.edu  (Lloyd Greenwald)
Subject: Re: Free Will

In Article 1646  channic@uiucdcsm.cs.uiuc.edu writes:

>                                 I think AI by and large ignores
>the issue of free will as well as other long-standing philosophical problems

I don't want to start a philosophical argument in this newsgroup, but
I would like to know why AI should be concerned with the issue of
"free will" considering that it is still an open question.  It may
turn out that the complexities involved in what we call "free will"
are similar to that of natural language understanding or other AI
areas, which would reduce the problem to one of design.  Admittedly,
this may never be accomplished; I just don't think we should draw
such a strong conclusion at this point.

====> Hi Elaine
                        Lloyd Greenwald
                        lloyd@eniac.seas.upenn.edu

------------------------------

Date: 13 Apr 88 03:20:00 GMT
From: channic@m.cs.uiuc.edu
Subject: Re: The future of AI - my opinion


In article <1348@hubcap.UUCP>, mrspock@hubcap.UUCP (Steve Benz) writes:

>  In fact, I agree with it.  I think that in order for a machine to be
>convincing as a human, it would need to have the bad qualities of a human
>as well as the good ones, i.e.  it would have to be occasionally stupid,
>arrogant, ignorant, etc.&soforth.
>
>  So, who needs that?  Who is going to sit down and (intentionally)
>write a program that has the capacity to be stupid, arrogant, or ignorant?

Another way of expressing the apparent necessity for bad qualities "for a
machine to be convincing as a human" is to say that free will is fundamental
to human intelligence.  I believe this is why the reaction to any "breakthrough"
in intelligent machine behavior is always "but it's not REALLY intelligent,
it was just programmed to do that."  Choosing among alternative problem
solutions is an entirely different matter than justifying or explaining
an apparently intelligent solution.  In complex problems of politics, economics,
computer science, and I would even venture to say physics, there are no right
or wrong answers, only opinions (which are choices), which are judged
on the basis of creativity and how much they agree with the choices of those
considered expert in the field.  I think AI by and large ignores
the issue of free will as well as other long-standing philosophical problems
(such as the mind/brain problem) which lie at the crux of developing machine
intelligence.  Of course there is not much grant money available for addressing
old philosophy.  This view is jaded, I admit, but five years of experience in
the field has led me to believe that AI is not the endeavor to make machines
that think, but rather the endeavor to make people think that machines can
think.


tom channic
uiucdcs.uiuc.dcs.edu
{ihnp4|decvax}!pur-ee!uiucdcs!channic

------------------------------

Date: Fri Apr 15 16:15:49 EDT 1988
From: sas@BBN.COM
Subject: AIList V6 #67 - Future of AI


- Watch those Pony Express arguments.  Remember, the Pony Express was
a temporary hack.  It ran for a bit under two years before it was
replaced by the transcontinental telegraph.

- If you want a humorous account of the problems of really, really
understanding things I'll recommend Morris Zap's Semantics as Strip
Tease talk in David Lodge's novel Small World.  Granted, he was
talking about the problems with deconstruction, but it's marvelously
applicable to AI.

- Phrenology was largely considered hokum in the last century, but
craniometry was highly regarded.  Check out Gould's The Mismeasure of
Man.

- I think AI has already proven its worth by attacking and to some
extent solving certain classes of problems.  Don't expect the
solutions to look as magical as the problems.  Chess programs,
symbolic math programs, expert systems, robotics and the like are all
real technologies now.  Materials science may not have all the glamor
of QCD but it's a pretty exciting field none the less.

- I finally figured out what bothers me about nanotechnology.  In a
sense it is irrelevant.  If there is a way to make machines that can
translate Dickens into Turkish it doesn't really matter if they are as
big as a carwash, twice as ugly, and are limited by the speed of soapy
water in a vacuum. Once we know how to translate, we can always make
the machine smaller and faster.  Nanotechnology seems to ignore the
hard part of the problem in favor of the easy part.

                                        Seth

---- What do they call these things down at the bottom anyway?
                                Letterfeet?                     ----

------------------------------

Date: 16 Apr 88 16:49:46 GMT
From: uflorida!codas!novavax!maddoxt@gatech.edu  (Thomas Maddox)
Subject: Re: Simulated Intelligence

In article <2051@mind.UUCP> eliot@mind.UUCP (Eliot Handelman) writes:
>Intelligence draws upon the resources of what Dostoevsky, in the "Notes from
>Underground", called the "advantageous advantage" of the individual who found
>his life circumscribed by "logarithms", or some form of computational
>determinism: the ability to veto reason. My present inclination is to believe
>that AI, in the long run, may only be a test for underlying mechanical
>constraints of theories of intelligence, and therefore inapplicable
>to the simulation of human intelligence.

        If it's AI, it will incorporate irrationality.  As you, after
Dostoevsky, imply, intelligence is a superset of reason.  Think
of the human organism as a bag of perceptions and hormonal
interactions with the mind as the way station for the whole
exceedingly tangled perceptual/emotional/intellectual circus.
        At present, we have only begun to understand the complexities
of the brain's neurotransmitter interactions, so we are only beginning
to know the brain, but we have already grasped that
the mind's complexity far exceeds earlier estimates.
        If you're interested in seeing my best shot at portraying
these ideas, look for a few sf stories:  "The Mind like a Strange
Balloon" in the April, 1985 _Omni_, "Snake Eyes" in the April, 1986
_Omni_ (and in the _Mirrorshades_ anthology, coming out in paperback
almost instantly), and "The Robot and the One You Love" in the March,
1988 _Omni_.  They are my attempts at thinking through these
problems.

------------------------------

Date: Wed, 13 Apr 88 09:17:53 EDT
From: prem@research.att.com
Subject: Prof. McCarthy's retort


This is a very cute and compact retort, but not very convincing; it admits
of very many similar cute and compact retorts, one of which is given below
as an example:

"Why would I want to write a program in assembly language that figured out
how to stack colored blocks on a table, and very very slowly at that ?"

or,

Prem Devanbu

(A diehard lisp fan who would like to see a better argument for lisp,
even if it is less cute or compact)

------------------------------

Date: 16 Apr 88 16:33:18 GMT
From: steinmetz!ge-dab!codas!novavax!maddoxt@uunet.uu.net  (Thomas
      Maddox)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:

>I think AI can be summed up by Terry Winograd's defection.  His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores).
>
        Using this same reasoning, one might give up quantum
mechanics because of Einstein's "defection."  Whether a particular
researcher continues his research is an interesting historical
question (and indeed many physicists lamented the loss of Einstein),
but it does not call into question the research program itself, which
must stand or fall on its own merits.
        AI will continue to produce results and remain a viable
enterprise, or it won't and will degenerate.  However, so long as it
continues to feed powerful ideas and techniques into the various
fields it connects with, to dismiss it seems remarkably premature.  If
you are one of the pro- or anti-AI heavyweights, i.e., someone with
power, prestige, or money riding on society's evaluation of AI
research, then you join the polemic with all guns firing.
        The rest of us can continue to enjoy both the practical and
intellectual fruits of the research and the debate.

------------------------------

End of AIList Digest
********************

∂18-Apr-88  0502	LAWS@KL.SRI.COM 	AIList V6 #72 - Queries
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 18 Apr 88  05:01:50 PDT
Date: Sun 17 Apr 1988 23:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #72 - Queries
To: AIList@SRI.COM


AIList Digest            Monday, 18 Apr 1988       Volume 6 : Issue 72

Today's Topics:
  Queries - AI Texts & Cognitive Science &
    Software Engineering vs. Knowledge Engineering &
    Undergraduate AI Curriculum &
    Explorer (vs. Sun) Experience &
    Expert System for Scheduling &
    Emycin & Holographic Memory &
    AI and Self-Awareness &
    Expert Systems in the Railroad Industry

----------------------------------------------------------------------

Date: 13 Apr 88 22:41:53 GMT
From: stride!tahoe!wheeler!greg@gr.utah.edu  (Greg Sharp)
Subject: References needed


I am looking for introductory references (books, articles...) concerning AI.
My background is primarily engineering, with a small amount of computer
science experience.  Replies via email are fine.

                                Greg Sharp  (greg@wheeler.wrc.unr.edu)

------------------------------

Date: Fri, 15 Apr 88 09:37 CST
From: PMACLIN%UTMEM1.BITNET@CUNYVM.CUNY.EDU
Subject: CONTRIBUTION TO AILIST

For my first time, I'm teaching a college undergraduate course on
Introduction to Artificial Intelligence. In a few weeks, we will be
discussing Cognitive Science, an area in which I am comparatively
weak. I would like some current thinking and new ideas in this field.

If you or an associate have written any papers in the field of
Cognitive Science or related topics in the past two or three years,
I would appreciate it very much if you would mail me a copy.
Thanks in advance.

Philip Maclin, AI Specialist
University of Tennessee at Memphis
Computer Science Dept.
877 Madison Ave., Suite 330
Memphis, TN 38163

PMACLIN@UTMEM1

------------------------------

Date: Thu, 14 Apr 88 12:14:39 EDT
From: CMSBE1%EOVUOV11.BITNET@CUNYVM.CUNY.EDU
Subject: Software Engineering VERSUS Knowledge Engineering. What do
         you think?

Date: 14 April 1988, 12:10:31 EDT
From: Juan Francisco Suarez Vicente  (KIKO)          CMSBE1   at EOVUOV11
To:   AILIST at SRI

Hello from Spain!!!
This request is very precise, and I need to hear all possible expert
opinions from you (if you want to help me, of course).
  - What are the main points of view, in your opinion, on the
    subject "Software Engineering VERSUS Knowledge Engineering"?
  - How should I explain it at a conference?
  - How could I relate software and knowledge engineering in a non-conflicting
    way?  Or, on the contrary, should I present these concepts as opposites?
                        -----o------
         It's a very important question to me. THANKS A LOT.
    Answers to:     AILIST   or   CMSBE1@EOVUOV11  (spanish userid)

------------------------------

Date: 15 Apr 88 12:26:00 CST
From: "HENRY::TSATSOUL" <tsatsoul%henry.decnet@space-tech.arpa>
Reply-to: "HENRY::TSATSOUL" <tsatsoul%henry.decnet@space-tech.arpa>
Subject: Undergraduate AI curriculum


        The recent discussion on this list about AI has touched upon a subject
that I thought might be interesting to pursue. Specifically, what is the
state of undergraduate AI education? It is my impression that AI is
rapidly reaching the point that Computer Science had reached about 30 years
ago. At that time lots of people started discussing and writing about
curriculum requirements for CS degrees at all levels, resulting in today's CS
departments and degrees.

        I am very  interested in compiling a list of courses, textbooks and
requirements that various Universities and Departments offer on the under-
graduate level and which are considered (or can be considered) AI-oriented.
Especially interesting are curricula that offer BS or BA degrees with AI
specialization, or, even more, ``pure'' AI degrees.

        It would also be interesting to start compiling everybody's thoughts
on a bachelor's in AI. What should be included? How much CS, how much
engineering, how much psychology, philosophy, mathematics? What kind of
different specializations and emphases?

        I hope there are enough people out there with enough interest to
start a discussion. I volunteer to gather and distribute information from
and to anyone interested. As a first step people can send me descriptions
of the undergraduate AI courses and curricula in their departments. If you
have the information on file, please e-mail it. Otherwise, just US-mail
me catalogs, brochures, etc.

        Cheers,
        Costas Tsatsoulis

--------------------------------------------------------------------------------
| Costas Tsatsoulis                            |  tsatsoul @ space-tech.arpa   |
| Dept. of Electrical and Computer Engineering |-------------------------------|
| Nichols Hall                                 |                               |
| The University of Kansas                     |         H  A  I  K  U         |
| Lawrence, KS 66045                           |                               |

------------------------------

Date: 12 Apr 88 21:29:16 GMT
From: mikeb@ford-wdl1.arpa  (Michael H. Bender)
Subject: Explorer (vs. Sun) Experience ?

PLEASE - if you have any experience with the TI Explorer environment,
or have made any comparisons between it and the SUN environment,
please help us by letting us know ....

An associate of mine is debating between the purchase of a Mac-II with
the TI Explorer board and a Sun workstation.  Currently, he has a Sun,
and he wants to buy 2 Macs and link them together (NFS? TCP/IP?).  He
will be running Knowledge Craft primarily.

QUESTION 1)
How hard is it to learn to use the Lisp environment on the Explorer?
Is it as difficult as the Symbolics used to be?

In the past - people have told me that it takes close to a year to
become expert on the Symbolics (much less on the Sun) ... is this true
for the Explorer also?

QUESTION 2)
How hard is it to maintain the software and environment?  He is afraid
that if he gets a Sun he will need to hire a Unix guru.... Will he
have to hire an Explorer/Zeta-Lisp expert if he gets a MacII with the
TI board?

QUESTION 3)
Does the TI environment (which I assume will completely run on the
Mac-II) provide a large number of libraries that would otherwise have
to be developed on the SUN workstations?


Please share your experiences with us...



Mike Bender  mhb@ford-wdl1

------------------------------

Date: Fri, 15 Apr 88 15:50 H
From: ANANDA%NUSDISCS.BITNET@CUNYVM.CUNY.EDU
Subject: Expert System for scheduling

We at the Department of Information Systems & Computer Science, National
University of Singapore are in the process of developing an expert system
for crew scheduling for a simple subway train network. We would appreciate
information on any similar project, pointers to literature on any expert
system project involving scheduling, and tools that may be useful in the
development work.  At present we are planning to use TI's PCPLUS shell.

Please send your response directly to me.

Thanks in advance.

A.L.Ananda

Bitnet address: ananda@NUSDISCS

------------------------------

Date: 14 Apr 88 21:37:55 GMT
From: mnetor!spectrix!yunexus!oz@uunet.uu.net  (Ozan Yigit)
Subject: Emycin... where can I find it ??

I am looking for the source (common lisp and/or franz) of emycin, to
study + play. All pointers would be appreciated.

oz
--
... and they will all           Usenet: [decvax|ihnp4]!utzoo!yunexus!oz
bite the dust ...                       .......!uunet!mnetor!yunexus!oz
        comprehensively. ...    Bitnet: oz@[yusol|yulibra|yuyetti]
Archbishop Tutu                 Phonet: +1 416 736-5257 x 3976

------------------------------

Date: 15 Apr 88 09:20:43 GMT
From: munnari!basser.cs.su.oz.au!ray@uunet.UU.NET
Subject: holographic memory and pattern recognition


>From John Haugeland, "The Nature and Plausibility of Cognitivism",
Behavioural and Brain Sciences, Vol. 1, No. 2 (1978), pp 215-260 ...

  >  ... if a hologram of an arbitrary scene is suitably illuminated with
  >  the light from a reference object, bright spots will appear
  >  indicating (virtually instantaneously) the presence and location of
  >  any occurrences of the reference object in the scene (and dimmer
  >  spots indicate "similar" objects).  So some neurophysiological
  >  holographic encoding might account for a number of perplexing
  >  features of visual recall and recognition ...

This stuff is part of AI folklore.  There are many papers that discuss
the philosophical implications for AI of this phenomenon (as does
Haugeland's paper), or propose neural implementations of this sort of
process. But what I want is a paper by somebody who has ACTUALLY PERFORMED
THIS EXPERIMENT.  Can anyone point me to such a paper?

Raymond Lister
Basser Department of Computer Science
University of Sydney
NSW  2006
AUSTRALIA

ACSnet:   ray@basser.cs.su.oz
Internet: ray%basser.cs.su.oz.au@uunet.uu.net
CSNET:    ray%basser.cs.su.oz@csnet-relay
UUCP:     {uunet,hplabs,pyramid,mcvax,ukc,nttlab}!munnari!basser.cs.su.oz!ray
JANET:    munnari!basser.cs.su.oz!ray@ukc

  [Fourier-based template matching is trivial to set up on an
  optical workbench, and is not considered experimental AI.  In
  character recognition, for instance, one can project a text letter
  through a font mask and choose the mask position corresponding to
  the greatest response in the Fourier plane.  (Holograms can be
  used, but are not required when you have the optics generating
  real-time Fourier planes.  Computer vision research usually
  substitutes digital FFT transforms for the optics.  Fielded
  target-recognition systems are likely to use holograms or acoustic-
  wave devices because they are faster than digital techniques and
  more robust than complex lens systems.)  Such template matching
  works great if the text characters are complete, isolated, and
  not distorted.  Holographic systems storing dozens of different
  views of tanks and aircraft have been demonstrated.  -- KIL]
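
  [A digital sketch of the FFT substitution mentioned above, offered
  as an editorial illustration only (Python/NumPy; the scene and
  template contents are made up): zero-pad the template to the scene
  size, cross-correlate via the FFT, and read the match location off
  the correlation peak.  As noted, this works well only for complete,
  isolated, undistorted patterns.  -- Ed.]

  import numpy as np

  def fft_match(scene, template):
      # Zero-pad the template to the scene size, then compute the
      # circular cross-correlation via the FFT.  The brightest point
      # of the correlation surface marks the best-matching position.
      t = np.zeros_like(scene)
      t[:template.shape[0], :template.shape[1]] = template
      corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(t))).real
      return np.unravel_index(np.argmax(corr), corr.shape)

  scene = np.zeros((64, 64))
  scene[20:25, 30:35] = 1.0          # a bright 5x5 "character" at row 20, col 30
  template = np.ones((5, 5))
  print(fft_match(scene, template))  # -> (20, 30)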

------------------------------

Date: Sat, 16 Apr 88 13:54:02 est
From: Mr. David Smith <dsmith@gelac.arpa>
Subject: AI and Self-Awareness

I have an interesting piece of software which has caused me to think at
various levels about self-awareness.  First, I will describe the software.  It
is an aid for learning to type correctly.  It types a word randomly selected
from a large dictionary, and waits for the user to type the word, checking
that each letter is correct and requiring the use of the backspace key to
correct any errors.  So far, a useful tool.
Unbeknownst to the user, it is also measuring his proficiency in typing each
letter combination and adapting its dictionary selections to emphasize those he
is weak at typing.

The first question is this: "Is this AI?" to which the most likely answer might
be: "You have not given me enough information."

The second question then arises: "What would you need to know about this
program to decide whether it is AI or not? "  Its size? language? The method
for determining proficiency? The adaptation technique? The size of the
dictionary?  Where the 'expertise' came from?

The third question: "Why are we in the AI community always asking the first two
questions?"  Can you measure the "AI-ness" of something by its behavior, its
structure or its source of wisdom?  Should not AI as a scientific discipline
be content, as the manufacturers of pencils, compilers etc. are, to become an
integral useful part of engineering or computer science?


David Smith.

David Smith: dsmith@gelac.arpa
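
  [The adaptation described above can be sketched in a few lines of
  Python; the fragment below is an editorial illustration of the idea
  only, not the actual program, and the word list and helper names
  are invented.  It keeps an error rate per letter pair and biases
  the random word selection toward words containing the pairs the
  user types worst.  -- Ed.]

  import random
  from collections import defaultdict

  attempts = defaultdict(int)    # letter pair -> times presented
  errors   = defaultdict(int)    # letter pair -> times mistyped

  def record(pair, correct):
      # Update the tallies after the user types one letter pair.
      attempts[pair] += 1
      if not correct:
          errors[pair] += 1

  def weakness(word):
      # Mean (smoothed) error rate over the word's letter pairs.
      pairs = [word[i:i+2] for i in range(len(word) - 1)]
      if not pairs:
          return 0.5
      return sum((errors[p] + 1) / (attempts[p] + 2) for p in pairs) / len(pairs)

  def pick_word(dictionary):
      # Words full of weak letter pairs get proportionally more weight.
      return random.choices(dictionary, weights=[weakness(w) for w in dictionary])[0]

  words = ["their", "quartz", "minimum", "rhythm", "lazy"]
  record("qu", False)         # the user fumbled "qu" once
  print(pick_word(words))     # "quartz" is now more likely to be drawn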

------------------------------

Date: 16 Apr 88 18:00:17 GMT
From: lagache@violet.Berkeley.EDU  (Edouard Lagache)
Subject: Expert Systems in the Railroad Industry.


          I attended a lecture by Hubert Dreyfus on the problems in
     Artificial Intelligence, and he mentioned that he was aware of only 2
     Expert systems that work as well as or better than the human experts that
     they were based on.  What does this have to do with trains?  Well, one
     of the systems (called ALPS) is designed to optimally load cargo
     planes, which is a problem that looks isomorphic to the problem of
     loading a railroad switch yard.

          That raises an interesting question for those interested in
     computers and trains: what sort of expert systems have been developed for
     the railroad industry?  It seems to me that there are a number of
     promising areas:

     1.)  Scheduling.

     2.)  Optimal switching moves and train assembly.

     3.)  Cargo routing and loading.

     4.)  Equipment Maintenance.

          Does anyone know what work (if any) has been done by railroads
     or A.I. outfits in this area?  Interestingly enough, Dreyfus would
     probably claim that the first 3 areas would be very promising domains
     for expert systems.


                                             Edouard Lagache
                                             School of Education
                                             U.C. Berkeley
                                             lagache@violet.berkeley.edu



     P.S. I have posted this to both 'rec.railroad' and 'comp.ai'.  Please
          don't reply to both groups unless it is truly of general interest.

------------------------------

End of AIList Digest
********************

∂21-Apr-88  0156	LAWS@KL.SRI.COM 	AIList V6 #73 - Fraud, Theorem Prover, New Lists, Bibliographies    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 21 Apr 88  01:56:21 PDT
Date: Wed 20 Apr 1988 22:14-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #73 - Fraud, Theorem Prover, New Lists, Bibliographies
To: AIList@SRI.COM


AIList Digest           Thursday, 21 Apr 1988      Volume 6 : Issue 73

Today's Topics:
  News - Investigation of Fraud in Government-Funded Science,
  AI Tools - Boyer and Moore's Theorem Prover,
  Bindings - Tech-Concepts & Ag-Exp-L,
  Bibliographies - Monthly Abstracts in AI & Search Bibliography

----------------------------------------------------------------------

Date: Mon, 18 Apr 88 11:05:50 MST
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Investigation of fraud in Government-funded science underway


      The Subcommittee on Human Resources and Intergovernmental Relations
of the U.S. House of Representatives is investigating fraud in government-
supported research.  Public hearings are being held, and are being broadcast
on C-SPAN.  After hearing testimony concerning scientists who'd allegedly
doctored their research data, Rep. John Conyers proposed that criminal
penalties be imposed for such offences where Federal funds are involved.
Rep. Ted Weiss commented "It is not a question of too many watchdogs.  We
seem to have no watchdogs at all in this situation."  Further details
should be available when the hearing transcripts are printed, in a few
weeks.  It is not known if the AI field was mentioned in the hearings.
However, it is something that people in the field should be aware of.

                                        John Nagle

------------------------------

Date: Mon, 18 Apr 88 16:22:13 CDT
From: Robert S. Boyer <boyer@CLI.COM>
Subject: Availability of Boyer and Moore's Theorem Prover

A Common Lisp version of our theorem-prover is now available under the
usual conditions: no license, no copyright, no fee, no support.  The
system runs well in three Common Lisps:  KCL, Symbolics, and Lucid.
There are no operating system or dialect conditionals, so the code may
well run in other implementations of Common Lisp.

Included as sample input is the work of Hunt on the FM8501
microprocessor and of Shankar on Goedel's incompleteness theorem and
the Church-Rosser theorem.

To get a copy follow these instructions:

1.   ftp to Arpanet/Internet host cli.com.
     (cli.com currently has Internet numbers
     10.8.0.62 and 192.31.85.1)
2.   log in as ftp, password guest
3.   get the file /pub/nqthm/README
4.   read the file README and follow the directions it gives.

Inquiries concerning tapes may be sent to:

    Computational Logic, Inc., Suite 290
    1717 W. 6th St.
    Austin, Texas 78703

A comprehensive manual is available.  For information on obtaining a
copy, write to the address above.

Bob Boyer         J Moore
boyer@cli.com     moore@cli.com

It seems possible that on May 1, 1988 all of Austin, Texas will be
disconnected from the Internet/Arpanet, for a while anyway.  So
connections to cli.com may be very difficult starting May 1.

------------------------------

Date: Mon, 18 Apr 88 12:12:00 EST
From: Brendan Reilly <reilly@wharton.upenn.edu>
Subject: New mailing list

There is a new mailing list at the Franklin Institute
(tech-concepts@fi.edu) discussing the concepts of modern technology.
The purpose of the list is to collect all of the concepts which the
public should learn about in dealing with computing, robotics, and
artificial intelligence in the coming years.

Any suggestions ranging from concepts to exhibits are welcome.  The
Institute is opening up exhibits on energy, space, and health as well.
Comments and suggestions are welcome on all of these topics.

Mail requests for address changes to:

        tech-concepts-request@fi.edu


                        Brendan Reilly

------------------------------

Date: Tue, 19 Apr 88 10:00:38 CDT
From: ND HECN E-mail Postmaster <INFO%NDSUVM1.BITNET@CUNYVM.CUNY.EDU>
Subject: AG-EXP-L@NDSUVM1 for Expert Systems in Agriculture

   We have set up a new list as described below.  We would appreciate it
if you could include at least the brief description in your various lists
of lists, etc.  Thank you.

Description of AG-EXP-L:

AG-EXP-L - Discusses the use of Expert Systems in Agricultural
production and management.  Primary emphasis is for practitioners,
Extension personnel and Experiment Station researchers in the land
grant system.

Procedures for AG-EXP-L:

BITNET, EARN, or NetNorth subscribers can join by sending the SUB
command with your name:
                                         <- Listserv Command ->
   For example, SEND LISTSERV@NDSUVM1    SUB AG-EXP-L Jon Doe
       or       TELL LISTSERV AT NDSUVM1 SUB AG-EXP-L Joan Doe

To be removed from the list,             <- Listserv Command ->
                SEND LISTSERV@NDSUVM1    SIGNOFF AG-EXP-L
                TELL LISTSERV AT NDSUVM1 SIGNOFF AG-EXP-L

Those without interactive access may send the Listserv Command
portion of the above lines as the first TEXT line of RFC822 standard
mail (after the blank line - NOT in the subject). For example:
       SUB AG-EXP-L Joan Doe
would be the only line in the body (text) of mail to LISTSERV@NDSUVM1.

Monthly public logs of mail to AG-EXP-L are kept on LISTSERV for a
few months.  For a list of files send the 'Index AG-EXP-L' command
to LISTSERV@NDSUVM1.

NOTE VERY WELL!!!  All commands for listserv should be sent to
LISTSERV@NDSUVM1  and NOT to the list itself!

To MAKE CONTRIBUTIONS to the list, BitNet, EARN, and Netnorth users
should send mail to AG-EXP-L@NDSUVM1.  Others may send via the
appropriate BITNET gateway (for example, from the Internet, mail
to the LIST would go to AG-EXP-L%NDSUVM1.Bitnet@CUNYVM.CUNY.EDU).

For more information on LISTSERV you may send it the INFO command
(eg. TELL LISTSERV AT NDSUVM1 INFO or whatever).

 Coordinator: Sandy Sprafka - NU020746@NDSUVM1
                           or NU020746%NDSUVM1.Bitnet@CUNYVM.CUNY.EDU
                                | |  Note the two ZERO digits!

------------------------------

Date: Mon, 18 Apr 88 12:21:55 bst
From: Robin Boswell <robin%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Monthly Abstracts in AI

TURING INSTITUTE PRODUCT ANNOUNCEMENT - MONTHLY ABSTRACTS IN AI

  Each issue contains 200 items selected from the latest conference
proceedings, research reports, journals and books on AI and related
topics, divided into 16 categories:

            Expert Systems
            Applications
            Logic Programming
            Advanced Computer Vision
            Advanced Robotics
            Pattern-Recognition
            Programming Languages and Software
            Automatic Programming
            Human-Computer Interaction
            Hardware
            Machine Learning
            Natural Language
            Cognitive Modelling
            Knowledge Representation
            Search control and Planning
            General


   For a free sample copy and further information, please contact:

   robin@turing.ac.uk

   or

   Jon Ritchie
   Turing Institute
   George House
   36 North Hanover St.
   Glasgow G1 2AD
   U.K.

   Tel:   (041) 552-6400

------------------------------

Date: Wed, 13 Apr 88 17:18:22 PDT
From: rshu%meridian@ads.com (Richard Shu)
Subject: Search bibliography


Dickson,

I just discovered that a colleague of mine had this bibliography on search.


BACHANT, J. AND McDERMOTT, J.  R1 revisited: four years in the trenches.
AI Magazine 5#3 (1984).

BANNERJI, R.B.   A comparison of three problem solving methods. Proc 5th IJCAI
(1977), 442ff.

BANNERJI, R.B. AND ERNST, C.W.   A theory for the complete mechanization of
a GPS-type problem solver. Proc 5th IJCAI (1977), 450ff.

BARNETT, J.A.   How much is control knowledge worth? a primitive
example. Artif Intel 22 (1984), 77-89.

BARTON, G.E.   A multiple-context equality-based reasoning system.
AI Lab TR-715, MIT (1983).

BENOIT, J.W.   A use of ATMS in hierarchical planning.  Proc DARPA Knowledge
Based Planning Workshop, Austin TX (Dec 1987).

BERLINER, H.A.   Search vs. knowledge: an analysis from the domain of games.
Tech Report CMU-CS-82-104, Department of Computer Science, Carnegie-Mellon
University. { search reduction vs goal recognition }

BOBROW, D.G. AND WEGBREIT, B.   A model and stack implementation of multiple
environments. Comm ACM 16 (1973), 591-603.

BORNING, A.   ThingLab -- an object-oriented system for building simulations
using constraints. Proc 5th IJCAI (1977), 497-498.

BOTVINNIK, M.M.   Decision making and computers. Computer Chess 3 (198X), 169ff.

CHURCH, K.W.   Coordinate squares: a solution to many chess pawn endgames.
Proc 6th IJCAI (1979), 149ff. { back-tracking, knowl util, frame problem }

DAVIS, M.   The mathematics of non-monotonic reasoning. Artificial
Intelligence 13 (1980), 73ff.

DECHTER, R.   Learning while searching in constraint-satisfaction problems.
Proc AAAI (1986), 178ff.

DECHTER, R. AND PEARL, J.   Network-based heuristics for constraint
satisfaction problems. Artif Intel 34 (1988), 1ff.

DeKLEER, J., DOYLE, J., STEELE, G.L. AND SUSSMAN, G.J.   AMORD: explicit
control of reasoning. SIGART Newsletter 64 (Aug 1977), 116-125.

DeKLEER, J., DOYLE, J., STEELE, G.L.Jr. AND SUSSMAN, G.J.   Explicit control
of reasoning. ACM SIGPLAN Notices Vol 12, Proc Symp on Artificial Intelligence
and Programming Languages (Aug 1977), 116ff.

DeKLEER, J., DOYLE, J., RICH, C., STEELE, G.L.Jr. AND SUSSMAN, G.J.  AMORD, a
deductive procedure system. MIT AI Memo 435 (Jan 1978).

DeKLEER, J. AND SUSSMAN, G.J.   Propagation of constraints applied
to circuit synthesis. Circuit Theory and Applications 8 (1980).

DeKLEER, J.   Choices without backtracking. Proc AAAI (1984).

DeKLEER, J.   An assumption-based TMS.  Artif Intel 28 (1986), 127ff.

DeKLEER, J.   Extending the ATMS. Artif Intel 28 (1986), 163ff.

DeKLEER, J.   Problem solving with the ATMS. Artif Intel 28 (1986), 197ff.

DeKLEER, J. AND WILLIAMS, B.C.   Back to backtracking: controlling the ATMS.
Proc AAAI (1986), 910ff.

DHAR, V.   An approach to dependency directed backtracking using domain
specific knowledge. Proc 9th IJCAI (1985), 188ff.

DOYLE, J.   Truth maintenance systems for problem solving. Proc 5th IJCAI
(1977), 247.

DOYLE, J.   Truth maintenance systems for problem solving. Tech Rept
419, AI Lab, MIT (Jan 1978).

DOYLE, J.   A glimpse of truth maintenance. Proc 6th IJCAI (1979), 232ff.

DOYLE, J.   A truth maintenance system. Artificial Intelligence 12 (1979),
231ff.

DOYLE, J. AND LONDON, P.   A selected descriptor-indexed bibliography to the
literature on belief revision. Memo 568, AI Lab, MIT (1980).

DOYLE, J.   Methodological simplicity in expert system construction -- the
case of judgments and reasoned assumptions. Tech Rept CMU-CS-83-114, Dept
of CS, CMU (1983).

EAVARONE, D. AND ERNST, G.    A program that generates good
difference orderings and tables of connections for GPS. Proc IEEE
Systems Science and Cybernetics Conf, Pittsburgh, PA (Oct 1970), 226ff.

ERNST, G.    Sufficient conditions for the success of GPS. J ACM 16
(Oct 1969), 517ff.

FEIGENBAUM, E.A.   The art of artificial intelligence: 1. themes and case
studies of knowledge engineering. Proc 5th IJCAI (1977), 1014ff. { generate &
test in expert systems }

FOX, M.S.   Constraint-directed search: a case study of job-shop
scheduling. PhD thesis, Rept CMU-CS-83-161, Carnegie Mellon Univ
(Dec 1983).

FOX, M.S., ALLEN, B. AND STROHM, G.   Job-shop scheduling: an
investigation of constraint-directed reasoning. Proc AAAI (1982), 155ff.

GASCHNIG, J.   A general backtrack algorithm that eliminates most redundant
tests. Proc 5th IJCAI (1977), 457ff.

HARALICK, R.M. AND ELLIOTT, G.L.   Increasing tree search efficiency
for constraint satisfaction problems. Proc 6th IJCAI (1979), 356ff.

HARRIS, D.   A hybrid structured-object and constraint representation
language. Proc AAAI (1986), 986ff.

HAYES, P.J.  A representation for robot plans. Proc 4th IJCAI (1975).

HEWITT, C.   How to use what you know. Proc 4th IJCAI (1975), 189ff.

KASIF, S.   On the parallel complexity of some constraint satisfaction
problems. Proc AAAI (1986), 349ff.

KELLY, V.  The CRITTER system: automated critiquing of digital hardware
designs. TR WP-13, AI/VLSI project, Rutgers (1983). Also in Proc Design
Automation Conf (1984).

KELLY, V.  The CRITTER system: an AI approach to digital circuit design
critiquing. PhD thesis, Rutgers University, New Brunswick, NJ (Jan 1985).

KIBLER, D. AND MORRIS, P.   Don't be stupid. Proc 7th IJCAI (1981), 345ff.
{ use of negative heuristics (checks?) }

KLINE, P.  The superiority of relative criteria in partial matching and
generalization. Proc 7th IJCAI (1981).

KORNFELD, W.A.   The use of parallelism to implement a heuristic
search. Proc 7th IJCAI (1981), 575ff.

LAIRD, J.E. AND NEWELL, A.   A universal weak method. TR 83-141, Dept of
Computer Science, CMU (1983).

LAIRD, J.E. AND NEWELL, A.   A universal weak method: summary of results.
Proc 8th IJCAI (1983).

LAIRD, J.E.   Universal subgoaling. PhD thesis, Dept of Computer Science,
CMU (1983).

LONDON, P.   A dependency-based modelling mechanism for problem solving.
TR-589, Dept of CS, U of Maryland, College Park (Sep 1978).

LONDON, P.   Dependency networks as a representation for modelling in
general problem solvers. PhD thesis, Tech Rept 698, Dept of CS, U of
Maryland at College Park (1978).

MACKWORTH, A.K.   Consistency in networks of relations.
Artificial Intelligence 8 (1977), 99ff.

MARTINS, J.P. AND SHAPIRO, S.C.   Reasoning in multiple belief
spaces. Proc 8th IJCAI (1983).

MATWIN, S. AND PIETRZYKOWSKI, T.  Intelligent backtracking in plan-based
deduction. IEEE Trans PAMI 7 #6 (Nov 1985), 682ff.

MATWIN, S. AND PIETRZYKOWSKI, T.  Exponential improvement of efficient
backtracking: a strategy for plan-based deduction. Proc 7th Conf on Auto
Deduction.

McALLESTER, D.A.   A three-valued truth maintenance system. MIT Artificial
Intelligence Laboratory, Memo 473 (1978).

McALLESTER, D.   An outlook on truth maintenance. AI Lab Memo 551, MIT (1980).

McALLESTER, D.   Reasoning Utility Package users' manual. AIM-667, AI Lab,
MIT (1982).

McDERMOTT, D.   Contexts and data dependencies: a synthesis. IEEE Trans PAMI
5/3 (1983), 237ff.

MITTAL, S. AND FRAYMAN, F. Making partial choices in constraint reasoning
problems. Proc AAAI (1987), 631ff.

MORRIS, P.H. AND NADO, R.   Representing actions with an ATMS. Proc AAAI
(1986), 13ff.

NEVINS, A.J.   A human-oriented logic for automatic theorem proving.
J ACM 21 (1974), 606ff. { case analysis }

O'RORKE, P.  Constraint posting and propagation in explanation-based
learning. Working Paper 70, AI Group, Coordinated Science Lab, U of Illinois
at Urbana-Champaign (1986).

PIETRZYKOWSKI, T. AND MATWIN, S.   Exponential improvement of exhaustive
backtracking: data structure and implementation. Proc 7th Conf on Auto
Deduction.

POST, E.L.    Formal reductions of the general combinatorial decision
problem. Amer J of Mathematics 65 (1943), 197ff.

PURDOM, P.W.Jr.   Solving satisfiability with less searching. IEEE Trans
PAMI 6 #4 (July 1984), 510ff.

RIVEST, R.   On self-organizing sequential search heuristics. CACM 19/2
(1976), 63ff.

SEIDEL, R.   A new method for solving constraint satisfaction problems.
Proc 7th IJCAI (1981), 338ff.

SMITH, R.G.   Applications of the contract net: search. Proc 3rd
CSCSI Conf (May 1980).

STALLMAN, R.M. AND SUSSMAN, G.J.   Forward reasoning and dependency-directed
backtracking in a system for computer-aided circuit analysis. Artificial
Intelligence 9 (1978), 135ff.

STEELE, G.L.   The definition and implementation of a computer
programming language based on constraints. AI Lab TR-595, MIT (1979).

STEINBERG, L.I.   Design as refinement plus constraint propagation: the VEXED
experience. Proc AAAI (1987), 830ff.

SUSSMAN, G.J. AND STALLMAN, R.M.   Heuristic techniques in computer-aided
circuit analysis. IEEE Trans Circuits & Systems CAS-22 (Nov 1975).

SUSSMAN, G.J.   SLICES: at the boundary between analysis and synthesis.
Memo 433, AI Lab, MIT (July 1977).

SUSSMAN, G.J. AND STEELE, G.L.   CONSTRAINTS: a language for
expressing almost-hierarchical descriptions. Artif Intel 14 (1980).

WILLIAMS, C.   ART - the advanced reasoning tool - conceptual
overview. Inference Corp (1984).

Foo, Norman Y., & Anand S. Rao (1987) "Open world and closed world
negations," Report RC 13122, IBM T. J. Watson Research Center.

Foo, Norman Y., & Anand S. Rao (in preparation) "Semantics of
dynamic belief systems."

Foo, Norman Y., & Anand S. Rao (in preparation) "Belief and ontology
revision in a microworld."

Rao, Anand S., & Norman Y. Foo (1987) "Evolving knowledge and logical
omniscience," Report RC 13155, IBM T. J. Watson Research Center.

Rao, Anand S., & Norman Y. Foo (1987) "Evolving knowledge and
autoepistemic reasoning," Report RC 13155, IBM T. J. Watson Research
Center.

Rao, Anand S., & Norman Y. Foo (1986) "Modal horn graph resolution,"
Proceedings of the First Australian AI Congress, Melbourne.

Rao, Anand S., & Norman Y. Foo (1986) "DYNABELS -- A dynamic belief
revision system," Report 301, Basser Dept. of Computer Science,
University of Sydney.

Sowa, John F. (1984) Conceptual Structures:  Information Processing in
Mind and Machine, Addison-Wesley, Reading, MA.

Way, Eileen C. (1987) Dynamic Type Hierarchies:  An Approach to
Knowledge Representation through Metaphor, PhD dissertation,
Systems Science Dept., SUNY at Binghamton.


For copies of the IBM reports, write to Distribution Services 73-F11;
IBM T. J. Watson Research Center; P.O. Box 218; Yorktown Heights,
NY 10598.

For the report from Sydney, write to Basser Dept. of Computer Science;
University of Sydney; Sydney, NSW 2006; Australia.

For the dissertation by Eileen Way, write to her at the Department
of Philosophy; State University of New York; Binghamton, NY 13901.



Richard Shu
RSHU@ADS.COM

------------------------------

End of AIList Digest
********************

∂21-Apr-88  0507	LAWS@KL.SRI.COM 	AIList V6 #74 - Queries, CLOS, ELIZA, Planner, Face Recognition
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 21 Apr 88  05:07:37 PDT
Date: Wed 20 Apr 1988 22:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #74 - Queries, CLOS, ELIZA, Planner, Face Recognition
To: AIList@SRI.COM


AIList Digest           Thursday, 21 Apr 1988      Volume 6 : Issue 74

Today's Topics:
  Queries - Introductory Literature & Robotics &
    CommonLoops & C++ & Credit Assignment Problem &
    Demons: Was FRL First? & Common Lisp Software,
  Administrivia - Strange Bitnet Messages,
  AI Tools - CLOS Specification & ELIZA & Planner Available,
  Bibliography - Human Face Recognition

----------------------------------------------------------------------

Date: Mon, 18 Apr 88 07:36:18 PDT
From: hawk%goldie@Sun.COM (Rick Wiemholt)
Subject: newcomer request


Fellow AI enthusiasts,

        I am a mechanical engineering tech and have been monitoring
the AI newslist for several months now. Could someone guide me to
some introductory articles on AI, how it came about, and what it is
supposed to accomplish for man? My main area of interest is robotics,
but general info in all fields is welcome.

                        Rick Wiemholt
                        Manufacturing Engineering
                        Sun Microsystems

------------------------------

Date: 19 Apr 88 18:15:47 GMT
From: csli!rustcat@labrea.stanford.edu  (Vallury Prabhakar)
Subject: Expert system introductory literature

Hello,

        Could any of you suggest some books/literature that provide a good
introduction to what expert systems (and AI if possible) are all about?
I have had absolutely no background whatsoever in these areas, so I'm really
looking for the basic, trivial stuff.

        Respond in e-mail if possible.  Thank you very much.

                                                -- Vallury Prabhakar
                                                -- rustcat@cnc-sun.stanford.edu

------------------------------

Date: Tue, 19 Apr 88 16:37:31 SET
From: Faruk KOCABIYIK <FARUK%TRITU.BITNET@CNUCE-VM.ARPA>
Subject: About Robotics

We are researchers at the Technical University of Istanbul, Computer Eng.
Dept.  Currently we are involved in a project aimed at constructing a robot
designed for painting applications.  We have run into difficulties
when designing the robot controller (the computer and associated
hardware).  We would be very happy if you were able to put us in contact
with people competent on the subject.
P.S.: If you are yourself interested, please let us know.

------------------------------

Date: 18 April 1988 1515-PST (Monday)
From: trumble@nprdc.arpa (Andy Trumble)
Reply-to: trumble@nprdc.arpa
Subject: CommonLoops

I would like to hear people's experiences
in porting software developed in Flavors
to CommonLoops. I am wondering if there
are any automatic translators and if there
are any major drawbacks or advantages to
using CommonLoops in comparison to Flavors.

All help is greatly appreciated,
Andy Trumble
Trumble@NPRDC

------------------------------

Date: 18 April 1988 1532-PST (Monday)
From: trumble@nprdc.arpa (Andy Trumble)
Reply-to: trumble@nprdc.arpa
Subject: C++

I am trying to generate a list of
DOS C++ vendors and I would be happy to pass
on whatever I find to interested parties.

For software all I have is:

Lifeboat: Advantage C++


For Vaporware:

Microsoft: object-oriented extension to C
           (maybe c++)



Andy Trumble
Trumble@NPRDC

------------------------------

Date: 21 Apr 88 02:20:24 GMT
From: bc@media-lab.media.mit.edu  (bill coderre)
Subject: Credit Assignment Problem


I'm looking for some good ideas on the Credit Assignment Problem.

Anybody got any good papers or references?

------------------------------

Date: 20 Apr 88 22:27:58 GMT
From: mcvax!inria!crin!napoli@uunet.uu.net  (Amedeo NAPOLI)
Subject: demons: was FRL first ?

Hi everybody,

can somebody tell me if FRL was the first language to introduce the
if-needed, require, if-added and if-remove demons?
In KRL, I only know of To-establish which is equivalent to if-needed.

Thanks in advance,
--
--- Amedeo Napoli @ CRIN / Centre de Recherche en Informatique de Nancy
EMAIL : napoli@crin.crin.fr - POST : BP 239, 54506 VANDOEUVRE CEDEX, France
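
  [For readers unfamiliar with the terminology: these "demons" are
  procedures attached to frame slots.  The Python fragment below is
  an editorial illustration of the general idea only -- it is not FRL
  or KRL syntax, and the frame and slot names are invented.  An
  if-needed demon computes a missing value when the slot is read; an
  if-added demon fires when a value is stored.  -- Ed.]

  class Frame:
      def __init__(self, name):
          self.name = name
          self.slots = {}
          self.if_needed = {}   # slot -> procedure run when the value is missing
          self.if_added = {}    # slot -> procedure run when a value is stored

      def get(self, slot):
          # Reading an empty slot triggers its if-needed demon, if any.
          if slot not in self.slots and slot in self.if_needed:
              self.slots[slot] = self.if_needed[slot](self)
          return self.slots.get(slot)

      def put(self, slot, value):
          # Storing a value triggers the slot's if-added demon, if any.
          self.slots[slot] = value
          if slot in self.if_added:
              self.if_added[slot](self, value)

  person = Frame("person-1")
  person.if_needed["age"] = lambda f: 1988 - f.get("birth-year")
  person.if_added["birth-year"] = lambda f, v: print(f.name, "birth-year set to", v)
  person.put("birth-year", 1960)    # the if-added demon fires here
  print(person.get("age"))          # the if-needed demon computes 28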

------------------------------

Date: 20 Apr 88 16:12:38 GMT
From: aramis.rutgers.edu!porthos.rutgers.edu!wes@rutgers.edu  (Wes
      Braudaway)
Subject: Common Lisp Software


I'm searching for a common lisp version of Tony Hearn's
REDUCE system.  Is there one available?  How can I get
a copy of it?

Thanks

Wes Braudaway
wes@aramis.rutgers.edu

------------------------------

Date: Mon, 18 Apr 88 17:26:04 EDT
From: DOC%VTVM1.BITNET@CUNYVM.CUNY.EDU
Subject: Strange Messages

Why am I getting sub ailist messages?  (This is the second one I've
gotten -- got one and discarded it a couple of days ago.)
I sometimes get people's messages here because they have "DOC" in
a NAMES file as someone's nickname, but it happens to be my userid
so I get these notes intended for someone else.  Is that happening
now?  (I'm a fairly new AILIST subscriber myself, so I don't know
what to expect.)

  [Sometimes people send a "sub AIList" message to the list,
  AIList%LISTSERV, rather than to the BITNET LISTSERV itself.
  The bounce message then gets sent to the readership.  -- KIL]

------------------------------

Date: Wed, 20 Apr 88 12:34:20 GMT
From: Francis LOWENTHAL <PLOWEN%BMSUEM11.BITNET@CUNYVM.CUNY.EDU>
Subject: Re: subscription.

What does your message mean ?
                             F. Lowenthal
Acknowledge-To: <PLOWEN@BMSUEM11>

  [There have been several messages lately that were sent to
  Bitnet AIList readers directly rather than through AIList@SRI.COM.
  Some of these were signup or retrieval messages that should have
  been sent to the LISTSERV rather than to AIList@LISTSERV.  Others
  were legitimate messages, with the authors apparently unaware
  that they were limiting distribution to Bitnet readers alone.
  I only find out about such traffic if an obsolete net address
  bounces an error message back to the list moderator.  -- KIL]

------------------------------

Date: Wed, 20 Apr 88 11:35 EDT
From: Sonya E. Keene <skeene@STONY-BROOK.SCRC.Symbolics.COM>
Subject: CLOS Specification Completion Date?

    Date: 12 Feb 88 21:59:37 GMT
    From: pitt!cisunx!jasst3@cadre.dsl.pittsburgh.edu  (Jeffrey A.
          Sullivan)

    Does anyone know when the CLOS standard will be frozen so that language
    developers will be willing to support it in commercial CL packages?

    --
    ..........................................................................
    Jeff Sullivan                           University of Pittsburgh
    pitt!cisunx!jasst3                      Intelligent Systems Studies Program
    jasper@PittVMS (BITNET)                 Graduate Student


The CLOS standard has two parts.   The Programmer Interface is finished,
and will be voted on in June.    The Metaobject Protocol (the underlying
layer) is in progress, and will be voted on somewhat later.

------------------------------

Date: 18 Apr 88 10:29:00 EST
From: wilsonjb@afwal-aaa.arpa
Reply-to: <wilsonjb@afwal-aaa.arpa>
Subject: Reply To: Looking For An ELIZA-like Program

                                        From:      Lt James B Wilson
                                        Dept:      AFWAL/AAI
                                        Tel No:    51491/55800

Several months ago there was a request for an ELIZA-like program.  I have
located one, if anyone is interested.  It is written in BASIC and is more
powerful than the original ELIZA program.

Send Replies To:
Arpanet:        WILSONJB@AFWAL-AAA.ARPA
US Mail:        Maj Johnson
                AFWAL/AAOR
                Wright Patterson AFB, OH 45433
Phone:          (513) 255-6453

------------------------------

Date: Monday, 18 April 1988 16:15:56 EST
From: Steven.Minton@cad.cs.cmu.edu
Subject: planner available

Ken,
        Can you post the following on AIList? Thanks.



I recently saw a note on AIList (v6 #52) from Vibhu Mittal
asking whether there are any planning systems written
in vanilla CommonLisp that are available
for educational purposes. I think that the PRODIGY system built here at
CMU would probably be well-suited to his purposes. PRODIGY is a
domain-independent problem solving system based on the STRIPS architecture
(with some interesting improvements). We use it as a testbed
for machine learning research (for example, see articles
in IJCAI-87 and the '85 and '87 machine learning workshops).

There are several hot issues in planning that the current PRODIGY implementation
does not deal with (such as reasoning about uncertainty, temporal constraints,
and conditional plans).  So if you are looking for a planner that is state of
the art in all respects, PRODIGY is probably not it.  On the other hand, it is
a relatively elegant and powerful system that extends the well-understood
STRIPS approach to planning.  So it's just fine if you're primarily interested
in ML issues (especially if you want to be able to compare your work to
previous work in AI).  I think it would also be perfect for educational
purposes.  Although PRODIGY allows you to specify surprisingly complex domains,
you can, if you want, simply load in the old blocksworld or STRIPS domains.
Students can get their hands on a running system and see the advantages
and limitations of this approach to planning.

The vanilla PRODIGY 1.0 problem solver was recently released for external
use, and comes with a manual and a few example task domains.
An interactive trace facility and graphics capabilities are included.
The system's description language is based on predicate calculus
(and includes conjunction, disjunction, and existential and universal
quantification). The user can write explicit search control rules to guide the
problem solver's search. The system is free and can be FTPed from CMU.
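
For readers who have never written a STRIPS-style domain, the following toy
Common Lisp sketch shows the general shape of such a specification: an
operator with parameters, preconditions, an add list and a delete list, and
a function that applies it to a state.  The names and syntax here are
invented for illustration only; PRODIGY's actual description language and
control-rule syntax are documented in the manual that comes with the release.

;;; Toy STRIPS-style operator (illustrative only; NOT PRODIGY syntax).
(defstruct op name params preconds add-list del-list)

(defparameter *stack-op*
  (make-op :name 'stack
           :params '(?x ?y)
           :preconds '((holding ?x) (clear ?y))
           :add-list '((on ?x ?y) (clear ?x) (arm-empty))
           :del-list '((holding ?x) (clear ?y))))

;; Apply OP to STATE (a list of ground literals) under BINDINGS, an alist
;; mapping variables to constants.  Returns the successor state, or NIL if
;; the preconditions do not hold.
(defun apply-op (op bindings state)
  (flet ((ground (lits) (sublis bindings lits)))
    (when (subsetp (ground (op-preconds op)) state :test #'equal)
      (union (ground (op-add-list op))
             (set-difference state (ground (op-del-list op)) :test #'equal)
             :test #'equal))))

;; Example (element order in the result may vary):
;;   (apply-op *stack-op* '((?x . a) (?y . b))
;;             '((holding a) (clear b) (on b table)))
;;   =>  ((ON A B) (CLEAR A) (ARM-EMPTY) (ON B TABLE))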


PRODIGY 2.0 is expected to be released this summer (no guarantees;
I have yet to complete the port from Franz Lisp), and will
include an advanced EBL learning system (as described in my thesis).
Later versions are expected to include an automatic abstraction mechanism,
a learning-by-experimentation module, a derivational analogy module,
an interface to the CMU World modelers system,
and a full extension of the learning apprentice interface. These
are all current research projects at CMU.

Send requests to snm@cs.cmu.edu.  Allow some time for me to get back to you;
I'm trying to get some real work done too.  (I can't promise anything in
the way of support.)

- Steve Minton
  (& Craig Knoblock)

------------------------------

Date: Fri 15 Apr 88 09:33:28-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Re: human face recognition

I haven't done a full survey, but the following references on face
recognition may give you some leads.  I believe Dr. Scott Cannon
at Utah State University is active in this area.

                                        -- Ken

A.L.~Allen,
{\it Personal Descriptions,}
Butterworth, London
(1950).

A.~Bertillon,
{\it Signaletic Instructions,}
Werner, Chicago
(1893).

C.L.~Bisson,
{\it Preliminary Investigation on Measurements by Computer of the Distances
  On and About the Eyes,}
Report PRI:18,
Panoramic Research Inc., Palo Alto, California
(April 1965).

C.L.~Bisson,
{\it Location of Some Facial Features by Computer,}
Report PRI:20,
Panoramic Research Inc., Palo Alto, California
(June 1965).

W.W.~Bledsoe,
{\it The Model Method in Facial Recognition,}
Report PRI:15,
Panoramic Research Inc., Palo Alto, California
(August 1964).

W.W.~Bledsoe,
{\it Man-Machine Facial Recognition,}
Report PRI:22,
Panoramic Research Inc., Palo Alto, California
(August 1966).

J.L.~Bradshaw and G.~Wallace,
``Models for the Processing and Identification of Faces,''
{\it Perception and Psychophysics,}
Vol.~9, No.~5,
pp.~443--448
(1971).

S.R.~Cannon, G.W.~Jones, R.~Campbell, and N.W.~Morgan,
``A Computer Vision System for Identification of Individuals,''
{\it Proc.\ IECON '86 Conf.,}
Milwaukee, Wisconsin
(September 1986).

R.E.~El'bur,
``Utilization of the Apparatus of Projective Geometry in the Process
  of the Identification of Individuals by Their Photographs,''
in
V.N.~Kudryavtsev (ed.),
{\it Problems of Cybernetics and Law,}
Nauka Publishing House, Moscow,
pp.~321--348
(1967).
Available through U.S.~Dept.\ of Commerce, Joint Publications Research Service,
No.~JPRS:~43,954
(January 10, 1968).

M.A.~Fischler and R.A.~Elschlager,
``The Representation and Matching of Pictorial Structures,''
{\it IEEE Trans.\ on Computers,}
Vol.~C--22, No.~1,
pp.~67--92
(January 1973).
Also published as a technical report by
Lockheed Missiles and Space Co.\
(September 1971).

A.J.~Goldstein and E.J.~Mackenberg,
``Recognition of Human Faces from Isolated Facial Features:
  A Developmental Study,''
{\it Psychonomic Science,}
Vol.~6, No.~4,
pp.~149--150
(1966).

A.J.~Goldstein, L.D.~Harmon, and A.B.~Lesk,
``Identification of Human Faces,''
{\it Proc.\ IEEE,}
Vol.~59,
pp.~748--760
(May 1971).

A.J.~Goldstein and L.D.~Harmon,
``Man-Machine Interaction in Human Face Identification,''
{\it Bell Syst.\ Tech.\ J.,}
Vol.~51,
pp.~399--427
(February 1972).

L.D.~Harmon,
``Some Aspects of Recognition of Human Faces,''
{\it Pattern Recognition in Biological and Technical Systems,}
Springer-Verlag, New York
(1971).

L.D.~Harmon,
``The Recognition of Faces,''
{\it Scientific American,}
Vol.~229,
pp.~71--82
(November 1973).

L.D.~Harmon and W.F.~Hunt,
``Automatic Recognition of Human Face Profiles,''
{\it Computer Graphics and Image Processing,}
Vol.~6,
pp.~135--156
(1978).

L.D.~Harmon, M.K.~Khan, R.~Lasch, and P.F.~Ramig,
``Machine Identification of Human Faces,''
{\it Pattern Recognition,}
Vol.~13, No.~2,
pp.~97--110
(1981).

J.~Hochberg and R.E.~Galper,
``Recognition of Faces: 1.\ An Exploratory Study,''
{\it Psychonomic Science,}
Vol.~9, No.~12,
pp.~619--620
(1967).

Y.~Kaya and K.~Kobayashi,
``A Basic Study on Human Face Recognition,''
{\it Proc.\ Int.\ Conf.\ on Frontiers of Pattern Recognition,}
Hawaii,
Academic Press, New York,
pp.~265--289
(1971).

M.D.~Kelly,
{\it Visual Identification of People by Computer,}
Memo AI-130,
Stanford Artificial Intelligence Project,
Stanford University, Stanford, California
(July 1970).

T.~Sakai, M.~Nagao, and M.~Kidode,
``Processing of Multilevel Pictures by Computer---The Case of
  Photographs of Human Faces,''
{\it Systems, Computers, Controls,}
Vol.~2, No.~3,
pp.~47--54
(1971).
Reprinted in
O.~Firschein (ed.),
{\it Artificial Intelligence,}
AFIPS Press, Reston, Virginia,
pp.~219--226
(1984).

T.~Sakai, M.~Nagao, and T.~Kanade,
``Computer Analysis and Classification of Photographs of Human Faces,''
{\it Proc.\ 1st USA-Japan Computer Conf.,}
pp.~55--62
(October 1972).

------------------------------

End of AIList Digest
********************

∂21-Apr-88  0901	LAWS@KL.SRI.COM 	AIList V6 #75 - Functions, Modal Logic, Explorer, MACSYMA 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 21 Apr 88  09:00:41 PDT
Date: Wed 20 Apr 1988 22:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #75 - Functions, Modal Logic, Explorer, MACSYMA
To: AIList@SRI.COM


AIList Digest           Thursday, 21 Apr 1988      Volume 6 : Issue 75

Today's Topics:
  Expert Systems - Functions in Expert Systems,
  References - Introductory Text & AM Follow-on & Modal Logic &
    Railroad Application,
  AI Tools - Explorer (vs. Sun) Experience &
    Realtime Knowledge Daemon Project & MACSYMA Information

----------------------------------------------------------------------

Date: Thu, 14 Apr 88 10:36:03 EDT
From: aboulang@WILMA.BBN.COM
Reply-to: aboulanger@bbn.com
Subject: functions in expert systems

  I am interested in understanding the importance of the
  function in expert systems.  From an analysis point of view
  functions complicate expert systems quite a bit.

Check out my master's thesis:
"The Expert System PLANT/CD:
A Case Study in Applying The General Purpose Inference System Advise
to Predicting Black Cutworm Damage in Corn." This has some discussion
of the roles of function invocation in expert systems.

This is Report # UIUCDCS-R-83-1134 (July 83) which should be no
problem for you to get.


Albert Boulanger
Aboulanger@bbn.com

------------------------------

Date: 14 Apr 88 15:15:35 GMT
From: sunybcs!rapaport@boulder.colorado.edu  (William J. Rapaport)
Subject: Re: References needed

In article <1201@tahoe.unr.edu> greg@wheeler (Greg Sharp) writes:
>
>I am looking for introductory references (books, articles...) concerning ai.

I strongly recommend as a reference the following:

Shapiro, Stuart C. (ed.) (1987), Encyclopedia of Artificial Intelligence
(New York:  John Wiley & Sons).

                                        William J. Rapaport
                                        Assistant Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {ames,boulder,decvax,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||

------------------------------

Date: Thu, 14 Apr 88 21:31:04 EST
From: PJURKAT@VAXC.STEVENS-TECH.EDU
Subject: Thanks for the references to AM follow on work!

This is to thank all the people who responded to my request for ideas and
references on any work that had been done, since the original, on the kind of
heuristics that were developed by Lenat in his AM.  Several of you not only
provided references but also suggested ideas and comments that proved
interesting to both my student and me.  Thanks again.

Several responses, without providing any other thoughts, merely suggested that
my student look in the "Science Citation Index".  My student, of course, had
done so, but at Stevens the use of the Index is not free.  The search, when
finally completed, cost over $60.  Not all the students who attend Stevens can
afford that, and neither the school nor the Department is in a position to fund
searches for all such queries.  Not all of us work at affluent organizations,
and I was hoping that AIList members consider themselves enough of a
community to provide such help.  I was not wrong!!!

cheers - peter jurkat

------------------------------

Date: Wed, 13 Apr 88 14:16:04 PDT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: modal logic references

Others have mentioned the books by Hughes and Cresswell (there are two,
the Introduction and the Companion), the Handbook of Philosophical
Logic volume 2 (articles by many) and Johan van Benthem's monograph
on Modal Logic and Correspondence Theory (Bibliopolis, Naples, available
through Humanities Press here, I think).

There are other important and helpful works. Brian Chellas's book
Modal Logic (Cambridge) is widely available and easy to read.
Lemmon and Scott's monograph on Modal Logic (Blackwell) is a classic,
but may not be in print. Kripke's original articles are well worth reading.
Johan van Benthem has another monograph, A Manual of Intensional Logic,
in the CSLI lecture note series (U. Chicago), and Goldblatt
has a volume on Logics of Time and Computation in the same series.
Segerberg's thesis is unfortunately not widely available.
Gabbay has a book on his work with modal logics (Reidel), containing
a good number of his highly technical results, but it is not really an
introduction.

Since temporal logics are a form of modal logic, I also recommend
van Benthem's monograph (yes, he is prolific) on The Logic of Time (Reidel).
For the provability logic, Boolos's book was mentioned, and there is
another by Craig Smorynski, Self Reference and Modal Logic (Springer),
which studies the provability logic in detail.

There is also substantial literature on the algebraic approach to
modal logics - just as propositional logic and Boolean algebras
correspond, so normal modal propositional logics correspond to
Boolean algebras with an extra unary operator. But that is another
story.
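
For concreteness, the standard Jonsson-Tarski statement of that
correspondence (summarized here from the textbook literature, not from
Ladkin's message) is that the possibility operator of a normal modal logic
becomes a unary operation $f$ on a Boolean algebra that is normal and
additive, with necessity as its dual:

$$ f(0) = 0, \qquad f(x \lor y) = f(x) \lor f(y), \qquad
   \Box x = \lnot\,\Diamond\,\lnot x . $$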

peter ladkin
ladkin@kestrel.arpa

------------------------------

Date: 18 Apr 88 17:22:55 GMT
From: mtunx!mtuxo!mtgzy!jaw@rutgers.edu  (XMRN60000[bsm]-j.a.welsh)
Subject: Re: Expert Systems in the Railroad Industry.

>      What sort of expert systems have developed for the railroad
>      industry?

Strangely enough, the one that I know of is a General Electric locomotive
maintenance expert system.  It was mentioned in a computer magazine and in
one of the railfanning magazines last year.

------------------------------

Date: 14 Apr 88 17:20:03 GMT
From: frabjous!nau@mimsy.umd.edu  (Dana Nau)
Subject: Re: Explorer (vs. Sun) Experience ?

In article <3470003@wdl1.UUCP> mikeb@wdl1.UUCP (Michael H. Bender) writes:
>PLEASE - if you have any experience with the TI Explorer environment,
>or have made any comparisons between it and the SUN environment,
>please help us by letting us know ....

I've had extensive experience with Suns, Symbolics machines, and Explorers.
Currently, my research group has two Explorers and three Suns.  We use the
Explorers for Lisp programming, and the Suns for other stuff.  I haven't had
any experience with a Mac-II, so I can't comment on that.

>An associate of mine is debating between the purchase of a Mac-II with
>the TI Explorer board, or a Sun workstation.  Currently, he has a Sun,
>and he wants to buy 2 Macs and link them together (NFS? IP/TCP?).  He
>will be running Knowledge Craft primarily.
>
>QUESTION 1)
>How hard is it to learn to use the Lisp environment on the Explorer?
>Is it as difficult as the Symbolics used to be?
>
>In the past - people have told me that it takes close to a year to
>become expert on the Symbolics (much less on the Sun) ... is this true
>for the Explorer also?

The operating systems for both the Explorer and the Symbolics are based on
some code which was originally developed at MIT.  Thus, at one time, the
operating systems for the Explorer and Symbolics were nearly the same.
Lately, TI and Symbolics have diverged a bit in the enhancements and
modifications they've made to the operating systems, but there are still a
lot of similarities.

The operating system is complex, and when I was first trying to learn it, I
got pretty frustrated.  However, it certainly didn't take me as long as you
indicate above; I was pretty proficient after using the machines for only a
few months.  Furthermore, it was well worth the effort, because once I
became proficient, I found Lisp programming on the Lisp machine to be much
easier than it had ever been on a Sun.

>QUESTION 2)
>How hard is it to maintain the software and environment?  He is afraid
>that if he gets a Sun he will need to hire a Unix guru.... Will he
>have to hire an Explorer/Zeta-Lisp expert if he gets a MacII with the
>TI board?

I don't know anything about the Mac, but we're doing pretty well with the
Explorers on our own.

We do have a maintenance staff for the Suns, but that's because my
department has several dozen Suns and has made a commitment to maintaining
them for everyone in the department.  Our staff has made a lot of
modifications and enhancements to the Sun operating system--and what it
would be like to use a Sun without our maintenance staff, I don't know.

>QUESTION 3)
>Does the TI environment (which I assume will completely run on the
>Mac-II) provide a large number of libraries that would otherwise have
>to be developed on the SUN workstations?

For Lisp programming, I much prefer an Explorer or Symbolics rather than a
Sun; for text processing and such, I use the Sun.  On the Lisp machines,
Lisp is thoroughly integrated with the operating system, and as a result,
you can quite easily do things with windows, menus, editing, debugging,
etc., that would be pretty painful to do in Lisp on the Sun.  For example,
if I want to pop up a menu on the Explorer, I simply call a built-in Lisp
function, giving it the menu title and menu entries, and telling it what should
be done for each menu entry.  That kind of thing is substantially more
difficult on the Sun.
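
The built-in call itself is system specific, but the style of interface Nau
describes can be suggested in a few lines of portable Common Lisp: a
function that takes a menu title and a list of (label . action) entries and
runs the chosen entry's action.  This is purely an illustration of the
programming style; it is not the Explorer's (or Symbolics') actual
window-system menu facility.

;; Minimal sketch of a "menu" call: title, entries, and an action per entry.
(defun choose-from-menu (title entries)
  (format t "~&~A~%" title)
  (loop for i from 1
        for entry in entries
        do (format t "  ~D) ~A~%" i (car entry)))
  (format t "Choice: ")
  (finish-output)
  (let* ((n (parse-integer (read-line) :junk-allowed t))
         (entry (and n (<= 1 n (length entries)) (nth (1- n) entries))))
    (when entry
      (funcall (cdr entry)))))

;; Example:
;;   (choose-from-menu "Compile which system?"
;;                     (list (cons "Planner"  (lambda () (print :planner)))
;;                           (cons "Compiler" (lambda () (print :compiler)))))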

If the Mac II has the same kind of Lisp/Operating System integration that
the Explorer has, then there might be some advantages to it since it can do
other general-purpose programming too.  However, I'd want to check it out
carefully first.  The Mac operating system and window environment are
substantially different from those on the Explorer and Symbolics, and I have
no idea how they've integrated Lisp with all this.

Dana S. Nau                             ARPA & CSNet:  nau@mimsy.umd.edu
Computer Sci. Dept., U. of Maryland     UUCP:  ...!{allegra,uunet}!mimsy!nau
College Park, MD 20742                  Telephone:  (301) 454-7932

------------------------------

Date: Thu, 14 Apr 88 14:41:22 EDT
From: Michael Factor <factor-michael@YALE.ARPA>
Subject: [gelernter-david: "Another Yale program"]

Date: Thu, 14 Apr 88 12:49:50 EDT
From: David Gelernter <gelernter-david>
To: Paul.Birkel@K.GP.CS.CMU.EDU
Subject: "Another Yale program"
Cc: factor, leichter


   Subject: Can you name this project?


   In a recent exposition on parallel computing in the popular press the
   following paragraph appeared. Can anyone name this project, its
   investigators, a contact, a publication, or provide any further information?

        "Another Yale program - to monitor the equipment in an
        intensive-care unit - is more flexible still. Each processor
        in this system runs a different program, which monitors the
        equipment for signs of a particular problem or ailment. Because
        each program has its own processor, it is ever-vigilant for
        signs of its disease."


This program is a so-called "Realtime Knowledge Daemon" written
by Mike Factor of the Linda group here, in collaboration with
some anaesthesiologists.  The Economist guy got it wrong: each
PROCESS (not processor) runs a different decision procedure.
I'm appending abstracts from a couple of recent reports on the system.
I'll send you copies of the papers if you want them.  (The program
is also discussed in a paper forthcoming in the SIGPLAN PPEALS conference
this summer.)



-------------------------------------------------------

 The Parallel Process Lattice as an
 Organizing Scheme for Realtime Knowledge Daemons

 Michael Factor and David Gelernter

{\it Yale University \\
Department of Computer Science \\
P.O. Box 2158 Yale Station \\
New Haven, Connecticut 06520-2158} \\

{\bf Abstract.} A {\it realtime knowledge daemon} is a program that
fastens itself to a collection of data streams and monitors them,
generating appropriate comments and responding to queries, in
realtime.  This type of program is called for whenever data-collection
capacity outstrips realtime data {\it monitoring} and {\it
understanding} capacity.  A {\it parallel process lattice} is an
organizing structure for realtime knowledge daemons (and more broadly
for expert systems in general).  A process lattice is a network of
communicating concurrent processes arranged in a series of ranks or
layers; data values flow upward through the lattice and queries may
flow downward.  The intent of the process lattices is to ``waste''
processing power (an ever-cheaper commodity) by constantly monitoring
the likelihood of rare events, and eagerly computing the answers to
questions rarely asked, so that the system can respond rapidly and
gracefully to unusual circumstances.  Further, the application's
character as a collection of heterogeneous, communicating expert
processes means that concurrency leads to a far simpler program than
would have been the case given a conventional, sequential
organization.  We explain the process lattice and discuss its
suitability by describing a simple but fairly realistic prototype
designed for monitoring in an ICU.
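
As a rough picture of the structure being described (not the authors' Linda
code; it is sequential rather than concurrent, and the node names and the
blood-pressure rule are invented for illustration), a lattice node can be
modelled in Common Lisp as a record holding its input nodes, its own
decision procedure, and a current value that higher-ranked nodes read in
turn:

;;; Sequential sketch of one process-lattice node.
(defstruct lnode name inputs decision value)

(defun update-lnode (node)
  "Recompute NODE's value from the current values of its input nodes."
  (setf (lnode-value node)
        (funcall (lnode-decision node)
                 (mapcar #'lnode-value (lnode-inputs node)))))

;; Hypothetical example: a bottom-rank node for one data stream, and a
;; higher-rank node that watches it for low blood pressure.
(defparameter *systolic*
  (make-lnode :name 'systolic :inputs '() :value 92
              :decision (lambda (inputs) (declare (ignore inputs)) 92)))

(defparameter *hypotension*
  (make-lnode :name 'hypotension
              :inputs (list *systolic*)
              :decision (lambda (vals) (if (< (first vals) 100) :likely :unlikely))))

;; (update-lnode *hypotension*)  =>  :LIKELY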




  A Prototype Realtime Knowledge Daemon
  for ICU Monitoring

  Michael Factor*, David Gelernter*, Perry Miller\dag and Stanley Rosenbaum\dag

{\it *Yale University \\
Department of Computer Science \\
%P.O. Box 2158 Yale Station \\
New Haven, Connecticut} \\

{\it \dag Yale University School of Medicine \\
Department of Anaesthesiology \\
New Haven, Connecticut} \\


A {\it realtime knowledge daemon} is a program that fastens itself to
a collection of data streams and monitors them, generating appropriate
comments and responding to queries, in realtime.  We describe a
prototype designed for monitoring patients in a post-operative ICU.
The prototype is of interest because ($a$) its performance seems
reasonable and correct, and (our experience suggests) should continue
to be reasonable as the system grows (the current prototype isn't
comprehensive enough for clinical testing, but it continues to
expand); ($b$) the program is written using a novel ``process
lattice'' organization that holds a number of advantages for building
large expert systems.  The process lattice structure results in a
program with an easily-visualizable logical structure that reflects
the structure of the domain; it imposes a regular organization on an
arbitrarily-heterogeneous set of decision procedures; it's well suited
to a parallel implementation.  The prototype we discuss is written in
the parallel language Linda and runs on a commercial parallel
processor.

------------------------------

Date: Thu, 14 Apr 88 13:00 EDT
From: Richard Petti <petti@ALLEGHENY.SCRC.Symbolics.COM>
Subject: MACSYMA information

Robert,

Thank you for your interest in MACSYMA. Here is some information on our new
release of MACSYMA for VAX VMS systems. We would be happy to answer any
questions you have.

In addition to these product features, our software is supported by a staff
of 14 people, including seven technical staff. Service, training,
installation guides and release notes are available.

Dick Petti
Director, Computer Aided Mathematics Group
Symbolics, Inc.
Eleven Cambridge Center
Cambridge, MA 02142

tel: (617) 621-7770
     (800) MACSYMA
email: petti@scrc-stony-brook.arpa
       petti@symbolics.com


                                                                   March 1988


   HIGHLIGHTS OF COMMON LISP MACSYMA 412.61 FOR VAX/VMS USERS


Since January 1985, we have poured most of our development effort into a new
generation of MACSYMA(R) software based on Common Lisp.  Versions of this
software are now available on Symbolics(TM) and Apollo(R) workstations, and we
plan to deliver it on VAX(TM)/VMS(TM) systems in April or May of this year.
Versions of this product for SUN(TM) workstations and VAX UNIX(TM) systems will
be available at a later date.

The following enhancements are planned for VAX/VMS MACSYMA release 412.61.
Some of these are old packages which have not been delivered in recent VAX/VMS
releases, some are improvements to existing packages, and some are entirely
new.  While we expect all of these enhancements to be included in the product,
we will not hold up the release if some of them are not ready on schedule.

 o Symbolic Algebra:
   - GROBNER: The Grobner algorithm enables MACSYMA to find more solutions
     to systems of polynomial equations.
   - JORDAN_FORM, a command for computing the Jordan form of matrices, has
     been added.

 o Symbolic Calculus:
   - Ordinary Differential Equations (O.D.E.'s):
     . ODEFI finds first integrals of first order O.D.E.'s, using the powerful
       Prelle-Singer algorithm.
     . ODE, the main solution package for O.D.E.'s, has been made more
       reliable.
   - INTEQN: The integral equation package has been repaired and extended.
   - Tensor analysis:
     . CTENSOR, the component tensor package, has been extended to include
       frame fields, affine torsion and conformal nonmetricity.
     . ITENSR, the indicial tensor analysis package, has been repaired and
       is fully functional for VAX users for the first time.
     . CARTAN, a package for performing exterior calculus, is repaired.
   - OPTVAR, a package for solving variational problems, is available.

 o Symbolic Approximation Methods:
   - Taylor methods:
     . TAYLOR_SOLVE: Solves algebraic and transcendental equations in Taylor
       series.  Very useful for equations which do not have closed-form
       solutions, or whose exact solutions are very complicated.
     . TAYLOR_ODE: finds Taylor series solutions of systems of simultaneous
        ordinary differential equations which satisfy Lipschitz conditions.
       Useful for studying local behavior of complicated systems of O.D.E.'s.
   - Perturbation theory methods for O.D.E.'s:
     . LINDSTEDT: Finds periodic series solutions for perturbed oscillator
       equations using Lindstedt's method.
     . AVERAGE_PERIODIC_ODE: Implements the method of averaging for periodic
       O.D.E.'s.  This is the most popular method for finding qualitative
       information about the family of solutions of an ordinary differential
       equation.

 o Numerical analysis:
   - Runge-Kutta numerical integration of systems of O.D.E.'s.
   - Newton-Cotes numerical integration.
   - Interpolation of numerical roots of equations.
   - FFT: Fast Fourier transforms.
   - LSQ: Least squares polynomial fit to scattered data.

 o Fortran Links:
   - GENTRAN, a very powerful Fortran generator, has been installed.  It can
     translate mathematical expressions, iteration statements, if-then
     statements, data type declaration information and much more into Fortran,
     `C' or Ratfor.  In its `template mode', Gentran enables users to write
     "mixed Fortran-MACSYMA code".

 o Graphics: VAX/VMS users will have access to full MACSYMA plotting
   capabilities in two and three dimensions.

 o Pattern Matching: MACSYMA's capabilities were extended in 1986, and these
   improvements will be included in the new VAX/VMS version of MACSYMA.

 o Compilation: Thanks to the VAX LISP runtime version we are shipping under
   MACSYMA, users can for the first time compile their own MACSYMA code.  This
   results in a 2-10 times speed improvement in execution of large MACSYMA
   programs.

 o Documentation:
   - User's Guide: In July 1987 we made available the MACSYMA User's Guide,
     which is much more accessible than the MACSYMA Reference Manual.
   - Reference Manual: In the summer of 1988 we will deliver version 13 of the
     MACSYMA Reference Manual, which will be reorganized, and much easier to
     use.

 o Reliability: Improving the reliability of MACSYMA has been our top
   priority over the past two years.  Many minor improvements have been made.


--
MACSYMA(R) is a registered trademark of Symbolics, Inc.
Symbolics is a trademark of Symbolics, Inc.
Apollo(R) is a registered trademark of Apollo Computer Inc.
VAX and VMS are trademarks of the Digital Equipment Corporation.
SUN is a trademark of Sun Microsystems, Inc.
UNIX is a trademark of AT&T Bell Laboratories.

(C) Copyright 1988 Symbolics, Inc.
All rights reserved.

------------------------------

End of AIList Digest
********************

∂21-Apr-88  1233	LAWS@KL.SRI.COM 	AIList V6 #76 - Hot Topics, Goals of AI, Free Will   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 21 Apr 88  12:33:04 PDT
Date: Wed 20 Apr 1988 22:48-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #76 - Hot Topics, Goals of AI, Free Will
To: AIList@SRI.COM


AIList Digest           Thursday, 21 Apr 1988      Volume 6 : Issue 76

Today's Topics:
  Opinion - Hot Research Topics & Goals of AI & Free Will

----------------------------------------------------------------------

Date: Thu, 14 Apr 88 11:11:35 HAE
From: Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Re: AIList V6 #69 - Queries

     Exciting work in AI.  There's lots of it.  The three criteria
are: 1. Highly thought of by at least 50% in the field.
     2. Positive contribution
     3. Real AI

Machine learning is a real AI field; there is general agreement
that learning is central to real AI.  Machine learning is perhaps
the major subfield devoted to AI learning, although most other
subfields also touch upon learning.  Some, like neural networks,
are centered on learning.

Surprisingly, I don't see that much controversy in machine learning.  There
is solid progress being made on several fronts.  Recent controversies have
been on esoteric issues like whether there is a tradeoff between generalization
and efficiency, whether facts in the deductive closure of a system can be
said to be learned, etc.  No big battles with rival camps raging at each
other.

There is, however, solid research.  Leslie Valiant and David Haussler have
made good theoretical progress at defining a certain type of learning.
Explanation-based learning is a very exciting, hot area for research
right now.  At the Stanford Symposium many people gave progress reports
on hybrid systems that use a deductive inference engine based on
PROLOG-EBG or EGGS or some variant, and then include inductive techniques
to do learning on both the deductive and inductive levels.  Another area
involves classification trees of the sort generated by Quinlan's ID3
program.  There is wide agreement that this is a positive contribution,
and it is not a controversial technique.
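
For readers who have not seen ID3: the standard formulation (stated here
from the textbook literature, not quoted from Quinlan) is that each node of
the tree splits on the attribute $A$ with the greatest information gain,

$$ \mathrm{Gain}(S,A) = H(S) - \sum_{v \in \mathrm{Values}(A)}
   \frac{|S_v|}{|S|}\, H(S_v), \qquad
   H(S) = - \sum_{c} p_c \log_2 p_c , $$

where $S$ is the example set at the node, $S_v$ is the subset with $A = v$,
and $p_c$ is the fraction of examples in class $c$.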

SOAR is an architecture being worked on by people at several universities.
Although the claims of the group have been controversial, the actual
work they are doing is well thought of.  And copies of the program
are available to researchers for their own experimentation.

Take your pick.  There is lots to choose from.

              Spencer Star

------------------------------

Date: 18 Apr 88 14:08:45 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (rolandi)
Subject: consensus in ai and psychology

In response to Ehud Reiter's:
>I was recently asked (by a psychology graduate student) if there was
>any work being done in AI which was widely thought to be exciting and
>pointing the way to further progress.  Specifically, I was asked for work
>which:
>       1) Was highly thought of by at least 50% of the researchers in
>the field.
>       2) Was a positive contribution, not an analysis showing problems
>in previous work.
>       3) Was in AI as narrowly defined (i.e. not in robotics or vision)

>I must admit that I was (somewhat embarrassingly) unable to think of
>any such work.  All the things I could think of which have people excited
>(ranging from non-monotonic logic to connectionism) seemed controversial
>enough so that they could not be said to have the support of half of all
>active AI researchers.

Psychology itself would look pretty bad if asked the same sort of questions.
No discipline is more factionalized than psychology.  Its representatives
range from scientific materialists to existential philosophers.  I don't
think you could get 50% of psychologists to agree even on the proper
subject matter of their discipline.

Walter Rolandi
rolandi@gollum.UUCP
NCR Advanced Systems Development, Columbia, SC
University of South Carolina Departments of Psychology and Linguistics

------------------------------

Date: 18 Apr 88 14:19:46 GMT
From: mit-amt!mob%mit-amt.MEDIA.MIT.EDU@EDDIE.MIT.EDU (Mario O
      Bourgoin)
Reply-to: mit-amt!mob%media-lab.MEDIA.MIT.EDU@EDDIE.MIT.EDU (Mario O
          Bourgoin)
Subject: Re: Prof. McCarthy's retort


A better, cute and compact argument for Lisp: Scheme.

------------------------------

Date: Mon Apr 18 10:48:41 EDT 1988
From: sas@BBN.COM
Subject: Why AI? ---- Slightly humorous ----

I wonder if we can get the same arguments going about "business
programming".  For example,

-----

- How can you tell if a program is a business program?

        They deal with dollars (or other monetary units).
        Hmm, so an employee assignment program which only deals with
        people's names, times and office numbers is not a business
        program.

        How about, it deals with time or money?
        So if we write a program to see who is eligible for class IIB
        promotions, but we don't actually check existing salaries,
        then it isn't a business program, but if it checks salaries
        then it is.

        So how about, people, time or money?
        I see, so a program that can read Victorian novels and answer
        questions about them is a business program.

- Why do we program in COBOL?  After all, COBOL is a pretty
specialized language that can barely run on a SUN 3/50.

        You can obviously rewrite anything written in COBOL in C.

        But C doesn't support pictures.
        Good grief, you can write a library routine.  Besides, who
        makes enough money to really need commas.

        And it isn't file I/O oriented.
        You can use structures and call UNIX subroutines.

        And it doesn't support decimal arithmetic.
        Call a subroutine and call FASB and explain why the C way of
        computing interest payments is better than their way.

        And where's the report generator?  I can get more done in 10
        lines of RPG than 10 pages of hairy C code ....
        Serves me right for arguing with a COBOL programmer..

Now that I've sundered every DP department to its philosophical roots,
they'll all start hacking 68020 machine code and deactivate my cash
machine card.

---- Sound familiar? ----

I'll make the argument that business people use languages like COBOL
(or 1-2-3) and AI people use languages like LISP (or LOOPS) because
these languages make it easier to write down the concepts they are dealing
with, so that a computer (or another programmer) can understand them.  You
don't write a loop to slide the dollar sign next to the amount payable, and
you don't shift bits around to see if an item is a fixnum or a rational.

Business programmers write programs that deal with the structures of
business.  This means they deal with people, resources, time and money
or whatever else it takes to keep the business running.

AI programmers write programs that deal with the structures of
knowledge and reasoning.  This means they deal with ontologies,
relationships, searching, recognition and transformation or whatever
else it takes to make a computer perform the tasks that are associated
with human intelligence.

I won't try to define "intelligence" here, and I won't try to define
"business" here.

                                        Seth

                        ---- Letter Foot ----

------------------------------

Date: 18 Apr 88 16:52:09 GMT
From: govett@avsd.uucp (David Govett)
Subject: Re: AIList V6 #67 - Future of AI


>
> - Watch those Pony Express arguments.  Remember, the Pony Express was
> a temporary hack.  It ran for a bit under two years before it was
> replaced by the transcontinental railroad.
>

Not true.  The PE was obviated by the transcontinental telegraph in
1861, I believe.  The transcontinental RR wasn't completed until
May 1869.

------------------------------

Date: 18 Apr 88  1745 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: re: AIList V6 #72 - Queries

[In reply to message sent Sun 17 Apr 1988 23:35-PDT.]

McCarthy, John and P.J. Hayes (1969):  ``Some Philosophical Problems from
the Standpoint of Artificial Intelligence'', in D. Michie (ed), Machine
Intelligence 4, American Elsevier, New York, NY, discusses the problem of
free will for machines.  I never got any reaction to that discussion,
pro or con, in the 19 years since it was published and would be grateful
for some.

------------------------------

Date: 18 Apr 88 19:32:00 GMT
From: channic@m.cs.uiuc.edu
Subject: Re: Free Will


I can't justify the proposition that scientific endeavors grouped
under the name "AI" SHOULD NOT IGNORE issues of free wil, mind-brain,
other minds, etc.  If these issues are ignored, however, I would
strongly oppose the use of "intelligence" as being descriptive
of the work.  Is it fair to claim work in that direction when
fundamental issues regarding such a goal are unresolved (if not
unresolvable)?  If this is the name of the field, shouldn't the
field at least be able to define what it is working towards?
I personally cannot talk about intelligence without concepts such
as mind, thoughts, free will, consciousness, etc.  If we, as AI
researchers make no progress whatsoever in clarifying these issues,
then we should at least be honest with ourselves and society, and find a
new title for our efforts.  Actually the slight modification,
"Not Really Intelligence" would be more than suitable.


Tom Channic
Dept. of CS
Univ. of Illinois
channic@uiucdcs.uiuc.edu
{ihnp4|decvax}!pur-ee!uiucdcs!channic

------------------------------

Date: Mon, 18 Apr 88 20:09:47 EST
From: carole hafner <hafner%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: Future of AI


In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:

>I think AI can be summed up by Terry Winograd's defection.  His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores).
>
I'm glad to see "Understanding Computers and Cognition" mentioned in this
discussion.  It includes a lengthy section listing all the "justifications"
for AI, and then refutes them one by one.  Conspicuously absent from this
list is "curiosity".

I think AI is the expression of the species' curiosity about this new artifact
(the computer) that it has invented.  We want to find out what else can it do,
how smart can we make it, can we find a way to make it improve itself?  Of
course, we have to pretend that we have socially relevant goals like helping
people or national defense in order to get the money to pursue these inquiries.
And sometimes the two (curiosity and socially relevant goals) are compatible,
since we need some tasks to focus our attention and test our theories.

Is this heresy?  Or merely stating the obvious?

--Carole Hafner

------------------------------

End of AIList Digest
********************

∂22-Apr-88  0500	LAWS@KL.SRI.COM 	AIList V6 #77 - Learning, Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 22 Apr 88  05:00:19 PDT
Date: Thu 21 Apr 1988 21:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #77 - Learning, Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Friday, 22 Apr 1988       Volume 6 : Issue 77

Today's Topics:
  Article - Associative Learning,
  Seminars - Constrained Reformulation (Rutgers) &
    Bayesian Spectrum Analysis (NASA),
  Conferences - AAAI88 Workshop on Knowledge Acquisition &
    ACL 1988 Annual Meeting Program and Registration &
    AI and Simulation

----------------------------------------------------------------------

Date: 18 Apr 88 20:13:16 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Associative learning: Call for Commentators


The following is the abstract of a target article to appear in
Behavioral and Brain Sciences (BBS).  All BBS articles are accompanied
by "open peer commentary" from across disciplines and around the
world. For information about serving as a commentator on this article,
send email to harnad@mind.princeton.edu or write to BBS, 20 Nassau
Street, #240, Princeton NJ 08540 [tel: 609-921-7771]. Specialists in
the following areas are encouraged to contribute: connectionism/PDP,
neural modeling, associative modeling, classical conditioning, operant
conditioning, cognitive psychology, behavioral biology, neuroethology.


CLASSICAL CONDITIONING: THE NEW HEGEMONY

Jaylan Sheila Turkkan
Division of Behavioral Biology
Department of Psychiatry and Behavioral Sciences
The Johns Hopkins University School of Medicine

Converging data from different disciplines are showing that the role
of classical [associative] conditioning processes in the elaboration
of human and animal behavior is larger than previously supposed. Older
restrictive views of classically conditioned responses as merely secretory,
reflexive or emotional are giving way to a broader conception that includes
problem-solving and other rule-governed behavior thought to be under
the exclusive province of either operant conditioning or cognitive
psychology. There have also been changes in the way conditioning is
conducted and evaluated. Data from a number of seemingly unrelated
phenomena such as postaddictive drug relapse, the placebo
effect and immune system conditioning turn out to be related to
classical conditioning. Classical conditioning has also been found in
simpler and simpler organisms and has recently been demonstrated in
brain slices in utero. This target article will integrate the diverse
areas of classical conditioning research and theory; it will also
challenge teleological interpretations of classically conditioned
responses and will offer some basic principles to guide experimental
testing in diverse areas.
--

Stevan Harnad            harnad@mind.princeton.edu       (609)-921-7771

------------------------------

Date: 15 Apr 88 00:56:26 GMT
From: gauss.rutgers.edu!aramis.rutgers.edu!lightning.rutgers.edu!ctong
      @rutgers.edu  (Chris Tong)
Subject: Seminar - Constrained Reformulation (Rutgers)

The following thesis proposal defense will be held at 10am, Mar. 29,
in Hill Center, room 423, Busch Campus, Rutgers University, New
Brunswick, NJ., and will be chaired by Chris Tong.

                            CONSTRAINT INCORPORATION
                      USING CONSTRAINED REFORMULATION

                               Wesley Braudaway
                               wes@aramis.rutgers.edu

ABSTRACT. The goal of this research is to develop knowledge
compilation techniques to produce a problem-solving system from a
declarative solution description.  It has been shown that a
Generate-and-Test problem-solver can be compiled from a declarative
language that represents solutions as instances of a (hierarchically
organized) solution frame; the generator systematically constructs
instances of the solution frame, until one is found that meets all the
tests.  However, this Generate-and-Test architecture is
computationally infeasible as a problem-solver for all but trivial
problems.  Optimization techniques must be used to improve the
efficiency of the resulting problem-solving system.  Test
Incorporation is one such optimization technique that moves testers,
which test the satisfaction of the problem constraints, back into the
generator sequence to provide early pruning.

This proposal defines a special kind of test incorporation called
Constraint Incorporation.  This technique modifies the generators so
they enumerate only those generator values that satisfy the problem
constraints defined by the tests.  Because of this complete
incorporation, the tests defining the incorporated constraints can be
removed from the Generate-and-Test architecture.  This results in a
significant increase of problem-solving efficiency over test
incorporation when the test cannot be partitioned into subtests that
affect a single generator.  These cases seem to occur when a mismatch
exists between the language used to represent (and construct)
solutions and the language used to define the problem constraints.  To
incorporate these constraints, the representations of solutions and
problem constraints should be shifted (i.e., reformulated) so as to
bridge the gap between them.
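
The distinction can be made concrete with a minimal sketch: an invented
two-room toy problem, rendered in Python rather than in the compiled
problem-solver described above.

    from itertools import product

    # Hypothetical toy problem: choose integer widths for two rooms so
    # that the total width is 10 and room A is wider than room B.

    def generate_and_test():
        # Plain Generate-and-Test: enumerate every instance of the
        # solution frame, then apply the tests afterwards.
        for a, b in product(range(1, 10), repeat=2):
            if a + b == 10 and a > b:    # tests applied after generation
                yield (a, b)

    def constraint_incorporated():
        # Constraint Incorporation: the generator enumerates only values
        # that satisfy the constraints, so the tests can be removed.
        for a in range(6, 10):           # a > b and a + b == 10 force a >= 6
            yield (a, 10 - a)            # b is computed rather than searched

    assert set(generate_and_test()) == set(constraint_incorporated())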

One method for bridging the gap is to search the space of solution and
problem representations until incorporation is enabled. However,
because of the difficulties encountered (e.g., the space is large and
difficult to generate), an alternative method is proposed that will
constrain the reformulation process.  This method incorporates
constraints by compiling an abstract solution description into a
problem-solver.  By using an abstract solution description, the system
does not commit prematurely to a detailed and biased representation of
the solution description.  The problem constraints are refined into
procedural specifications and merged to form a partial specification
of the problem-solver. The problem-solver is partial in that it only
generates those solution details mentioned in the constraints.  In
this way, the compiler is focusing on just those details of the
solution language that are relevant to incorporating the constraints.
The partial problem-solver is then extended into a complete one by
adding generators for the remaining details. Any such extension is
guaranteed to have successfully incorporated all the constraints.

This method has been applied to a house floorplanning domain, using
extensive paper traces. It is currently being implemented, and will be
applied to a second domain.

------------------------------

Date: Tue, 19 Apr 88 13:45:34 PST
From: CHIN%PLU@ames-io.ARPA
Subject: Seminar - Bayesian Spectrum Analysis (NASA)


              National Aeronautics and Space Administration
                         Ames Research Center

                        SEMINAR ANNOUNCEMENT


SPEAKER:   Dr. George L. Bretthorst
           Washington University, St. Louis

TOPIC:     Bayesian Spectrum Analysis and Parameter Estimation

ABSTRACT:

Bayesian spectrum analysis is still in its infancy.  It was born when E.
T. Jaynes derived the periodogram as a sufficient statistic for determining
the spectrum of a time sampled data set containing a single stationary
frequency.  Here we extend that analysis and explicitly calculate the joint
posterior probability that multiple frequencies are present, independent
of their amplitude and phase and the noise level.  This is then generalized
to include other parameters such as decay and chirp.  Results are given
for computer simulated data and for real data ranging from magnetic resonance
to astronomy to economic cycles.  We find substantial improvements in
resolution over previous Fourier transform methods.
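
A minimal numerical sketch of the single-frequency case (simulated data and
Python code, not taken from the talk): with Gaussian noise of known standard
deviation sigma, the posterior for the frequency is, up to slowly varying
factors, proportional to exp(C(w)/sigma**2), where C(w) is the Schuster
periodogram of the data.

    import numpy as np

    def periodogram(t, d, omegas):
        # C(w) = |sum_j d_j * exp(i*w*t_j)|**2 / N at each trial frequency
        return np.abs(np.exp(1j * np.outer(omegas, t)) @ d) ** 2 / len(d)

    rng = np.random.default_rng(0)
    t = np.arange(512)
    sigma = 1.0
    d = np.cos(0.3 * t + 0.7) + rng.normal(0.0, sigma, t.size)
    d = d - d.mean()                      # remove any constant offset

    omegas = np.linspace(0.01, np.pi, 2000)
    log_post = periodogram(t, d, omegas) / sigma**2   # log posterior + const
    print("posterior peaks near w =", omegas[np.argmax(log_post)])  # ~0.3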



DATE: Friday        TIME: 10:30 - 11:30 a.m.     BLDG. 244   Room 103
      May 6, 1988
                        --------------


POINT OF CONTACT: Marlene Chin   PHONE NUMBER: (415) 694-6525
     NET ADDRESS: chin@pluto.arc.nasa.gov

***************************************************************************

VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.
***************************************************************************

------------------------------

Date: 16 Apr 1988 1742-EDT
From: Alain Rappaport <RAPPAPORT@C.CS.CMU.EDU>
Subject: Conference - AAAI88 Workshop on Knowledge Acquisition

From: Brian Gaines <gaines%calgary.cdn%ean.ubc.ca@relay.cs.net>
Subject: AAAI KAW

AAAI-88 WORKSHOP

INTEGRATION OF KNOWLEDGE ACQUISITION AND PERFORMANCE SYSTEMS

Sunday August 21 1988
St Paul, Minnesota

CALL FOR PAPERS & PARTICIPANTS

One-week Knowledge Acquisition Workshops were held in North
America in 1986 and 1987 under AAAI sponsorship, and a third
was held in Europe in September 1987.  In 1988 the AAAI
Workshop will be held at Banff in November and the European
one in June.  These Workshops have attracted large-scale
interest and involvement from those involved in knowledge
acquisition studies.  However, there are issues of
integration at these Workshops that involve other research
communities.  In particular, the integration of knowledge
acquisition and performance tools involves major problems and
issues.  Expert  system shells and knowledge acquisition
systems have been developed by different groups with
different approaches to knowledge representation, user
interfaces and other critical factors. There are also
fundamental problems in transforming acquired knowledge
into forms appropriate to existing shells.

This Workshop will address the theoretical and practical
issues of integrating knowledge acquisition and performance
systems.

STRUCTURE

The one-day Workshop is intended for active participants.  It
will be based on a number of short position, experience and
survey papers leading into group discussion.

Contributions on all aspects of the integration of
acquisition and performance systems are welcome.  In
particular, we are looking for some short case histories of
experience, both positive and negative, in transfer between
acquisition tools and shells.

SUBMISSIONS

Papers: send 6 copies of a long abstract (at least 6 pages)
or a draft paper.

Participants: send 6 copies of a short bio, including
relevant publications, and a short
description of your relevant experience and projects.

Submissions should be sent to Alain Rappaport by 1st May 1988.

Please send a note or e-mail about the intention to submit
and a provisional title as soon as possible.  Notification
about acceptance of papers and participation will be
sent out by the end of May.  Final papers and project
synopses will be due by the end of June for the Workshop
Proceedings.

ORGANIZING COMMITTEE

Alain Rappaport, Neuron Data, (Alain.Rappaport@c.cs.cmu.edu)
Brian R. Gaines, University of Calgary, (gaines@calgary.cdn)
John H. Boose, Boeing Computer Services (john@boeing.com)

SUBMISSIONS TO

Alain Rappaport
Neuron Data
444 High Street
Palo Alto
CA 94301, USA
(415) 321 4488

------------------------------

Date: 19 Apr 88 02:20:15 GMT
From: FLASH.BELLCORE.COM!walker@ucbvax.Berkeley.EDU  (Don Walker)
Subject: Conference - ACL 1988 Annual Meeting Program and Registration

The printed version of the following program and registration information will
be mailed to ACL members by the end of the week.  Others are encouraged to use
the attached form or write for a program flier to the following address:
                Dr. D.E. Walker (ACL)
                Bellcore - MRE 2A379
                445 South Street - Box 1910
                Morristown, NJ 07960-1910, USA
or send net mail to walker@flash.bellcore.com or bellcore!walker@uunet.uu.net,
specifying "ACL Annual Meeting Information" on the subject line.

                 ASSOCIATION FOR COMPUTATIONAL LINGUISTICS
                            26th Annual Meeting

                               7-10 June 1988
        Knox 20, State University of New York at Buffalo (Amherst Campus)
                           Buffalo, New York, USA

  [This went out on the NL-KR list, so I won't rebroadcast it here.
  Contact the author if you need a copy.  -- KIL]

------------------------------

Date: Tue, 19 Apr 88 23:30:15 WUT
From: ADELSBER%AWIWUW11.BITNET@CUNYVM.CUNY.EDU
Subject: Conference - AI and Simulation

The first meeting of the working group (Arbeitskreis) "AI and
Simulation" of the German speaking ASIM (Arbeitsgemeinschaft Simulation)
will be held 20 - 21 July, 1988 in Vienna at the Technical University.

The conference language is German.

For further information please contact:
Heimo H. Adelsberger
ADELSBER at AWIWUW11.bitnet (EARN)

------------------------------

End of AIList Digest
********************

∂22-Apr-88  0759	LAWS@KL.SRI.COM 	AIList V6 #78 - Expert Database Systems    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 22 Apr 88  07:59:34 PDT
Date: Thu 21 Apr 1988 21:53-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #78 - Expert Database Systems
To: AIList@SRI.COM


AIList Digest            Friday, 22 Apr 1988       Volume 6 : Issue 78

Today's Topics:
  Conference - 2nd Intl Expert Database Systems

----------------------------------------------------------------------

Date: 10 Apr 88 02:01:21 GMT
From: mykines!timos@mimsy.umd.edu  (Timos Sellis)
Subject: Conference - 2nd Intl Expert Database Systems


ADVANCE PROGRAM

The Second International Conference on
Expert Database Systems
April 25-27, 1988

Sheraton Premiere at Tysons Corner, Virginia

Sponsored by:           George Mason University

In Cooperation With:    American Association for Artificial Intelligence
                        Association for Computing Machinery - SIGART and SIGMOD
                        IEEE Computer Society - T. C. on Data Engineering


Conference Objectives

The International Conference on Expert Database Systems has
established itself as a leading edge forum that explores the
theoretical and practical issues in making database systems more
intelligent and supportive of Artificial Intelligence (AI)
applications.  Expert Database Systems represent the confluence of R&D
activities in Artificial Intelligence, Database Management, Logic and
Logic Programming, Information Retrieval, and Fuzzy Systems Theory.
It is precisely this synergism among disciplines which makes the
Conference both stimulating and unique.


Organizing Committee

Conference Chairman
Edgar H. Sibley, George Mason University

Program Chairman
Larry Kerschberg, George Mason University


Program Committee

Robert Abarbanel, IntelliCorp
Hideo Aiso, Keio University
Antonio Albano, Univ. di Pisa
Stephen Andriole, GMU
Robert Balzer, USC/ISI
Francois Bancilhon, GIP Altair, France
Don Batory, Univ. of Texas
Alex Borgida, Rutgers University
Michael Brodie, GTE Labs, Inc.
Janis Bubenko, Univ. of Stockholm
Peter Buneman, Univ. of Pennsylvania
Stefano Ceri, Politecnico di Milano
Umesh Dayal, Computer Corp. of America
Mark Fox, Carnegie-Mellon University
Antonio L. Furtado, IBM do Brasil
Herve Gallaire, ECRC, FRG
Barbara Hayes-Roth, Stanford University
Yannis Ioannidis, Univ. of Wisconsin
Sushil Jajodia, National Science Foundation
Matthias Jarke, Univ. of Passau
Jonathan King, Teknowledge, Inc.
Roger King, Univ. of Colorado
Robert Meersman, Tilburg University
Tim Merrett, McGill University
Matthew Morgenstern, SRI International
John Mylopoulos, Univ. of Toronto
Sham Navathe, Univ. of Florida
Erich Neuhold, GMD, FRG
Setsuo Ohsuga, Univ. of Tokyo
Stott Parker, UCLA
Alain Pirotte, Philips Research Lab
Don Potter, Univ. of Georgia
Larry Reeker, BDM Corporation
Nick Roussopoulos, Univ. of Maryland
Erik Sandewall, Linkoping University
Timos Sellis, Univ. of Maryland
John Smith, Kendall Square Research
Reid Smith, Schlumberger Palo Alto Res.
Arne Solvberg, Univ. of Trondheim
John Sowa, IBM SRI
Jacob Stein, Servio Logic Dev. Corp.
Michael Stonebraker, UC - Berkeley
Adrian Walker, IBM TJ Watson Center
Andrew Whinston, Purdue University
Gio Wiederhold, Stanford University
Eugene Wong, UC - Berkeley
Carlo Zaniolo, MCC


Tutorial and Panel Coordinator
Lucian Russell, Computer Sciences Corp.

Conference Coordinators
Juliette Gregory and Barbara Framer, GMU

Exhibit Coordinators
Diane Tosh Entner, RAMCOR, REassociates
Carolyn  Komada, E-Systems, Melpar

Publicity Chairman
Jorge Diaz-Herrera, GMU

[...]


============================
Conference Technical Program
============================

---------------------
Monday, April 25, 1988
----------------------
8:45-9:00 am    Opening Remarks
Chairman:  Edgar H. Sibley, George Mason University, USA

9:00-10:00 am   Keynote Address
Chairman:  Larry Kerschberg, George Mason University, USA

Future Directions in Expert Database Systems
Michael Stonebraker, Univ. of California at Berkeley, USA

10:00-10:30 am  Coffee Break

10:30-12:00 am  Object-Oriented Systems
Chairman:  Jacob Stein, Servio Logic, USA

Abstract Objects in an Object-Oriented Data Model
J. Zhu and D. Maier, Oregon Graduate Center, USA

KIVIEW:  An Object-Oriented Browser
A. Motro, Univ. of Southern California, USA,
A. D'Atri and L. Tarantino, Univ. of Rome, Italy

Towards a Unified View of Design Data and Knowledge Representation
B. Mitschang, Universitat Kaiserslautern, FRG

12:00-1:30 pm   Luncheon

1:30- 3:00 pm   Constraint Management
Chairmen:  Herve Gallaire, ECRC, FRG  and Alain Pirotte, Philips Labs, Belgium

Implementing Constraints in a Knowledge-Base
J.A. Wald, Schlumberger-Doll Research, USA

Update-Oriented Database Structures
L. Tucherman and A.L. Furtado, IBM Rio Scientific Center,  Brazil

Distribution Design of Integrity Constraints
X. Qian, Stanford University, USA

3:00-3:30 pm    Coffee Break

3:30-5:00 pm  Panel:  Constraint-Based Systems:  Knowledge about Data
Chairman:  Matthew Morgenstern, SRI International, USA

5:30-6:30 pm    Hospitality Hour

7:00-10:00 pm   Campaign Capers

-----------------------
Tuesday, April 26, 1988
-----------------------

8:30-10:00 am   Expert Database System Architectures
Chairmen:  Robert Meersman, Tilburg University, and Sushil Jajodia, NSF

BERMUDA - An Architectural Perspective on Interfacing Prolog to a
Database Machine
Y.E. Ioannidis, J. Chen, M.A. Friedman and M.M. Tsangaris, U. of Wisconsin

A Look at Loosely-Coupled Prolog/Database Systems
B. Napheys and D. Herkimer, Martin Marietta, USA

Combining Top Down and Bottom Up Computation in Knowledge Based Systems
M. Nussbaum, ETH, Switzerland

10:00-10:30 am  Coffee Break

10:30-12:00 am  Morning Parallel Sessions

IA:     Knowledge/Data System Architectures
Chairmen:  Roger King, Univ. of Colorado  and Robert Abarbanel, IntelliCorp

A Distributed Knowledge Model for Multiple Intelligent Agents
Y.P. Li, Jet Propulsion Laboratory, USA

The Relational Production Language:  A Production Language for
Relational Databases
L.M.L. Delcambre and J.N. Etheredge, U. of Southwestern Louisiana, USA

A Transaction Oriented Mechanism to Control Processing in a Knowledge
Base Management System
L. Raschid, Univ. of Maryland, USA

IB:     Recursive Query Processing
Chairman:  Tim H. Merrett, McGill University

Transitive Closure of Transitively Closed Relations
P. Valduriez and S. Khoshafian, MCC, USA

Transforming Nonlinear Recursion to Linear Recursion
Y.E. Ioannidis, Univ. of Wisconsin  and E. Wong, UC-Berkeley, USA

A Compressed Transitive Closure Technique for Efficient Fixed-Point
Query Processing
H.V. Jagadish, AT&T Bell Laboratories, USA

12:00-1:30 pm   Luncheon

1:30-3:00 pm    Afternoon Parallel Sessions

IIA:    Learning and Adaptation in Expert Databases
Chairmen:  Alex Borgida, Rutgers University  and Don Potter, Univ. of Georgia

An Automatic Improvement Processor for an Information Retrieval System
K.P. Brunner, Merit Technology, Inc. and R.R. Korfhage, Univ. of
Pittsburgh, USA

Supporting Object Flavor Evolution through Learning in an
Object-Oriented Database System
Q. Li and D. McLeod, Univ. of Southern California, USA

Implicit Representation of Extensional Answers
C.D. Shum and R. Muntz, UCLA, USA

IIB:    Knowledge Management in Deductive Databases
Chairmen:  Sham Navathe, U. of Florida  and Francois Bancilhon, GIP Altair

Deep Compilation of Large Rule Bases
T.K. Sellis and N. Roussopoulos, Univ. of Maryland, USA

Handling Knowledge by its Representative
C. Sakama and H. Itoh, ICOT, Japan

Integrity Constraint Checking in Deductive Databases using a Rule/Goal Graph
B. Martens and M. Bruynooghe, Katholieke Universiteit Leuven, Belgium

3:00-3:30 pm    Coffee Break

3:30-5:00 pm  Panel:  Knowledge Distribution and Interoperability
Chairman:  Michael Brodie, GTE Labs, USA

6:00-11:00 pm   Spirit of Washington Cruise

-------------------------
Wednesday, April 27, 1988
-------------------------

9:00-10:30 am   Intelligent Database Interfaces
Chairmen:  Erich Neuhold, GMD, FRG  and Larry Reeker, BDM Corp.

Musing in an Expert Database
S. Fertig and D. Gelernter, Yale University, USA

Cooperative Answering:   A Methodology to Provide Intelligent Access
to Databases
F. Cuppens and R. Demolombe, ONERA-CERT, France

G+:  Recursive Queries without Recursion
I.F. Cruz, A.O. Mendelzon and P.T. Wood, Univ. of Toronto, Canada

10:30-11:00 am  Coffee Break

11:00-12:30 pm  Semantic Query Optimization
Chairman:  Matthias Jarke, Univ. of Passau, FRG

Automatic Rule Derivation for Semantic Query Optimization
M.D. Siegel, Boston University, USA

A Metainterpreter to Semantically Optimize Queries in Deductive Databases
J. Lobo and J. Minker, Univ. of Maryland, USA

From QSQ towards QoSaQ:  Global Optimization of Recursive Queries
L. Vieille, ECRC, FRG

12:30-2:00 pm   Luncheon

2:00-3:30 pm  Panel:  Knowledge Management
Chairman:  Adrian Walker, IBM T.J. Watson Research Center,  USA

Panelists:       R. Kowalski, Imperial College, London, D. Lenat, MCC,
Austin, E. Soloway, Yale University  and M. Stonebraker, UC - Berkeley
=========================
Tutorial Program
=========================

Tutorial I - Monday Afternoon, April 25, 1:30 pm - 5:00 pm

Logic and Databases
Instructor:  Dr. Carlo Zaniolo, MCC, Austin, Texas

Dr. Zaniolo heads a group at MCC performing research on deductive
databases and logic programming.  He has held positions at Sperry
Research and Bell Laboratories.  He is the author of over 40 technical
papers, a member of numerous Program Committees, and edited the
December 1987 Data Engineering special issue on Databases and Logic.

Course Description:  There is a growing demand for supporting
knowledge-based applications by means of Knowledge Management Systems;
these will have to combine the inference mechanisms of Logic with the
efficient and secure management of data provided by Database
Management Systems(DBMS).  The major topics are:  Logic and relational
query languages; Semantics of Horn Clauses; Prolog and DBMSs; Coupling
Prolog with a DBMS; Making Prolog a database language; Integrating
Logic and Database Systems:  Sets, Negation and Updates; Choosing an
Execution Model; Compilation: magic sets to support recursive
predicates; Optimization and Safety; Overview of selected R&D
projects.

-------------------------------------------------------------------

Tutorial II - Tuesday Morning, April 26, 8:30 am - 12:00 am

Distributed Problem Solving in Knowledge/Data Environments
Instructor:  Prof. Victor Lesser, University of Massachusetts,
Amherst, MA

Dr. Lesser is Professor of Computer and Information Science at UMASS,
where he heads research groups in Distributed Artificial Intelligence
and Intelligent User Interfaces.  Prior to joining UMASS in 1977, he
was on the faculty of Carnegie-Mellon University, where he was a
Principal in the development of the HEARSAY Speech Understanding
System and responsible for the system architecture.

Course Description:  This tutorial will explore the major concepts and
systems for cooperative knowledge-based problem solving.  The major
topics include:  Connectionist, Actor and Cooperating ES paradigms;
Conceptual Issues including:  examples of distributed search,
interpretation, planning and cooperation, global coherence, dealing
with inconsistency and incompleteness, sharing world views, and design
rules for a cooperating ES; System Architectures for satisficing,
negotiation, tolerance of inconsistency in problem-solving,
organizational structuring, integration of local and network control,
and expectation-driven communication; Discussion of working systems
including Contract Nets, Partial Global Planning, AGORA, MACE, ABE,
DPS, and MINDS; and Future Directions.
-----------------------------------------------------------------------

Tutorial III - Tuesday Afternoon, April 26, 1:30 pm - 5:00 pm

Knowledge Representation and Data Semantics
Instructor:  Prof. John Mylopoulos, University of Toronto, Canada

Dr. John Mylopoulos is Professor of Computer Science at the University
of Toronto and research fellow of the Canadian Institute for Advanced
Research. His research interests include knowledge representation and
its applications to Databases and Software Engineering.  Dr.
Mylopoulos has edited three books on the general topic of AI and
Databases. He received his Ph.D degree from Princeton University.

Course Description:  Knowledge Representation including history, basic
paradigms such as semantic nets, logic-based representations,
productions, frames, role of uncertainty, and inference mechanisms,
examples such as KL-ONE and OMEGA; Semantic Data Models including
historical models such as Abrial's Binary Model, Entity/Relationship,
RM/T and SDM, detailed study of ADAPLEX, TAXIS, and GALILEO,
implementation techniques; Comparison of SDMs to Object-Oriented model
such as POSTGRES and GEM as well as Deductive Databases.

------------------------------------------------------------------------
Tutorial IV - Wednesday Morning, April 27, 9:00 am - 12:30 pm

Acquisition of Knowledge from Data
Instructor:  Prof. Gio Wiederhold, Stanford University, Stanford,
California

Dr. Gio Wiederhold is Associate Professor of Medicine and Computer
Science (Research) at Stanford University.  His research involves
knowledge-based approaches to medicine, design, and planning.  He is
the Editor-in-Chief of ACM's Transactions on Database Systems and
associate editor of M.D. Computing and IEEE Expert magazine.
Wiederhold has over 130 publications, including a widely used textbook
on Database Design.  In 1987, McGraw-Hill published his new book, File
Organization for Database Design.

Course Description:  The architecture of an operational system, RX, is
presented which uses knowledge-based techniques to extract new
knowledge from a large clinical database.  RX exploits both
frame-based knowledge and rules, as well as a database.  Frames are
used to store deep and interconnected knowledge about disease states
and medical actions.   Definitional and causal knowledge is
represented by inter-connections between frames that go across the
hierarchies, sideways as well as up and down, so that the aggregate
knowledge is represented by a network.  Rules select the appropriate
statistical methods used to reduce the volume of data into
information.  The database contains observations on rheumatic
diseases, collected over a dozen years.

[...]

===============================================================
Timos Sellis
CS Dept, University of Maryland, College Park, MD 20742
ARPA:timos@mimsy.umd.edu  UUCP:{decvax,allegra,...}!mimsy!timos

------------------------------

End of AIList Digest
********************

∂22-Apr-88  1706	LAWS@KL.SRI.COM 	AIList V6 #79 - Conferences: Automated Deduction, Productivity 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 22 Apr 88  17:06:15 PDT
Date: Thu 21 Apr 1988 21:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #79 - Conferences: Automated Deduction, Productivity
To: AIList@SRI.COM


AIList Digest            Friday, 22 Apr 1988       Volume 6 : Issue 79

Today's Topics:
  Conferences - CADE-9 Automated Deduction &
    Productivity (Preliminary Program)

----------------------------------------------------------------------

Date: Thu, 14 Apr 88 12:02:01 -0500
From: stevens%antares@anl-mcs.arpa
Subject: Conference - CADE-9 Automated Deduction


                                 CADE - 9

            9th International Conference on Automated Deduction

                              May 23-26, 1988

             Preliminary Schedule and Registration Information

CADE-9 will be held at Argonne National Laboratory (near Chicago) in  cele-
bration  of the 25th anniversary of the discovery of the resolution princi-
ple at Argonne in the summer of 1963.

Dates
        Tutorials: Monday, May 23
        Conference:  Tuesday, May 24 - Thursday, May 26

        Main Attractions:

1.   Presentation of more than sixty papers related to aspects of automated
     deduction.  (A detailed listing of the papers is attached.)

2.   Invited talks from

             Bill Miller, president, SRI International
             J. A. Robinson, Syracuse University
             Larry Wos, Argonne National Laboratory

     all of whom were at Argonne 25 years ago when the resolution principle
     was discovered.

3.   Organized dinners  every  night,  including  the  Conference  banquet,
     "Dinner with the Dinosaurs", at Chicago's Field Museum of Natural His-
     tory.

4.   Facilities for demonstration of  and  experimentation  with  automated
     deduction systems.

5.   Tutorials in a number of special areas within automated deduction.

Tutorials:

We have tried to make the tutorials relatively short and inexpensive.  It is
hoped that many researchers who are skilled in specialized areas of automated
deduction will take the opportunity to get an overview of related research
areas.  Some of the details (like titles, and exactly which member of a team
will give the tutorial) have not yet been finalized.  The following reflects
our current information; it may change slightly, but we expect no major
changes.

Tutorial 1:  Constraint Logic Programming

     This will be a tutorial on the Constraint  Logic  Programming  Scheme,
     and  systems that have implemented the concepts (see "Constraint Logic
     Programming", J. Jaffar and J-L Lassez, Proc. POPL87, Munich,  January
     1987).

Tutorial 2:  Verification and Synthesis

     This will be a tutorial by Zohar Manna and Richard Waldinger on  their
     work in verification and synthesis of algorithms.

Tutorial 3:  Rewrite Systems

     This will be a tutorial given by Mark Stickel covering the basic ideas
     of equality rewrite systems.

Tutorial 4:  Theorem Proving in Non-Standard Logics

     This tutorial will be given by Michael  McRobbie.   It  will  cover  a
     number of topics from his new book.

Tutorial 5:  Implementation I: Connection Graphs

     This tutorial will be given by members of the SEKI project.   It  will
     cover  issues  concerning why connection graphs are used and how they
     can be implemented.

Tutorial 6:  Implementation II: an Argonne Perspective

     This tutorial will present the central implementation issues from  the
     perspective  of  a  number  of  members of the Argonne group.  It will
     cover topics like choice of language, indexing, basic algorithms,  and
     utilization of multiprocessors.

Tutorial 7:  Open questions for Research

     This tutorial will be given by Larry Wos.  It will focus on  two  col-
     lections  of  open questions.  One set consists of questions about the
     field of automated theorem proving  itself,  questions  whose  answers
     will  materially  increase the power of theorem-proving programs.  The
     other set consists of questions taken from various fields of mathemat-
     ics  and logic, questions that one might attack with the assistance of
     a theorem-proving program.  Both sets of questions provide  intriguing
     challenges for possible research.

How to Register

Fill out the following  registration form (the part of this message between
the rows of ='s) and return as soon as possible to:

        Mrs. Miriam L. Holden, Director
        Conference Services
        Argonne National Laboratory
        9700 South Cass Avenue
        Argonne, IL 60439
        U. S. A.

Questions relating to registration and accommodations can  be  directed  to
Ms. Miriam Holden or Joan Brunsvold at (312) 972-5587.

[...]

                               Session Schedule

                                  Session 1

First-Order Theorem Proving Using Conditional Rewriting
    Hantao Zhang
    Deepak Kapur

Elements of Z-Module Reasoning
    T.C. Wang

                                  Session 2

Flexible Application of Generalised Solutions Using Higher Order Resolution
    Michael R. Donat
    Lincoln A. Wallen

Specifying Theorem Provers in a Higher-Order Logic Programming Language
    Amy Felty
    Dale Miller

Query Processing in Quantitative Logic Programming
    V. S. Subrahmanian

                                  Session 3

An Environment for Automated Reasoning About Partial Functions
    David A. Basin

The Use of Explicit Plans to Guide Inductive Proofs
    Alan Bundy

LOGICALC: an environment for interactive proof development
    D. Duchier
    D. McDermott

                                  Session 4

Implementing Verification Strategies in the KIV-System
    M. Heisel
    W. Reif
    W. Stephan

Checking Natural Language Proofs
    Donald Simon

Consistency of Rule-based Expert Systems
    Marc Bezem

                                  Session 5

A Mechanizable Induction Principle for Equational Specifications
    Hantao Zhang
    Deepak Kapur
    Mukkai S. Krishnamoorthy

Finding Canonical Rewriting Systems Equivalent to a Finite Set of
 Ground Equations in Polynomial Time
    Jean Gallier
    Paliath Narendran
    David Plaisted
    Stan Raatz
    Wayne Snyder

                                  Session 6

Towards Efficient Knowledge-Based Automated Theorem Proving for
 Non-Standard Logics
    Michael A. McRobbie
    Robert K. Meyer
    Paul B. Thistlewaite

Propositional Temporal Interval Logic is PSPACE
    A. A. Aaby
    K. T. Narayana

                                  Session 7

Computational Metatheory in Nuprl
    Douglas J. Howe

Type Inference and Its Applications in Prolog
    H. Azzoune

                                  Session 8

Procedural Interpretation of Non-Horn Logic Programs
    Arcot Rajasekar
    Jack Minker

Recursive Query Answering with Non-Horn Clauses
    Shan Chi
    Lawrence J. Henschen

                                  Session 9

Case Inference in Resolution-Based Languages
    T. Wakayama
    T.H. Payne

Notes on Prolog Program Transformations, Prolog Style, and Efficient
 Compilation to the Warren Abstract Machine
    Ralph M. Butler
    Rasiah Loganantharaj

Exploitation of Parallelism in Prototypical Deduction Problems
    Ralph M. Butler
    Nicholas T. Karonis

                                  Session 10

A Decision Procedure for Unquantified Formulas of Graph Theory
    Louise E. Moser

Adventures in Associative-Commutative Unification (A Summary)
    Patrick Lincoln
    Jim Christian

Unification in Finite Algebras is Unitary(?)
    Wolfram Buttner

                                  Session 11

Unification in a Combination of Arbitrary Disjoint Equational Theories
    Manfred Schmidt-Schauss

Partial Unification for Graph Based Equational Reasoning
    Karl Hans Blasius
    Jorg Siekmann

                                  Session 12

SATCHMO:  A theorem prover implemented in Prolog
    Rainer Manthey
    Francois Bry

Term Rewriting: Some Experimental Results
    Richard C. Potter
    David Plaisted

                                  Session 13

Analogical Reasoning and Proof Discovery
    Bishop Brock
    Shaun Cooper
    William Pierce

Hyper-Chaining and Knowledge-Based Theorem Proving
    Larry Hines

                                  Session 14

Linear Modal Deductions
    L. Farinas del Cerro
    A. Herzig

A Resolution Calculus for Modal Logics
    Hans Jurgen Ohlbach

                                  Session 15

Solving Disequations in Equational Theories
    Hans-Jurgen Burckert

On Word Problems in Horn Theories
    Emmanuel Kounalis
    Michael Rusinowitch

Canonical Conditional Rewrite Systems
    Nachum Dershowitz
    Mitsuhiro Okada
    G. Sivakumar

Program Synthesis by Completion with Dependent Subtypes
    Paul Jacquet

                                  Session 16

Reasoning about Systems of Linear Inequalities
    Thomas Kaufl

A Subsumption Algorithm Based on Characteristic Matrices
    Rolf Socher

A Restriction of Factoring in Binary Resolution
    Arkady Rabinov

Supposition-Based Logic for Automated Nonmonotonic Reasoning
    Philippe Besnard
    Pierre Siegal

                                  Session 17

Argument-Bounded Algorithms as a Basis for Automated Termination Proofs
    Christoph Walther

Automated Aids in Implementation Proofs
    Leo Marcus
    Timothy Redmond

                                  Session 18

A New Approach to Universal Unification and Its Application to AC-Unification
    Mark Franzen
    Lawrence J. Henschen

An Implementation of a Dissolution-Based System Employing Theory Links
    Neil V. Murray
    Erik Rosenthal

                                  Session 19

Decision Procedure for Autoepistemic Logic
    Ilkka Niemela

Logical Matrix Generation and Testing
    Peter K. Malkin
    Errol P. Martin

Optimal Time Parallel Algorithms for Term Matching
    Rakesh M. Verma
    I.V. Ramakrishnan

                                  Session 20

Challenge Equality Problems in Lattice Theory
    William McCune

Single Axioms in the Implicational Propositional Calculus
    Frank Pfenning

Challenge Problems Focusing on Equality and Combinatory Logic:
 Evaluating Automated Theorem-Proving Programs
    Larry Wos
    William McCune

Challenge Problems from Nonassociative Rings for Theorem Provers
    Rick Stevens

------------------------------

Date: Mon, 18 Apr 88 15:39:46 EST
From: Charles Youman <m14817@mitre.arpa> (youman@mitre.arpa)
Subject: Conference - Productivity (Preliminary Program)

I think the following conference announcement will be of
interest to this group because there are a number of papers being presented
on expert systems.

Preliminary Program -- PRODUCTIVITY:  PROGRESS, PROSPECTS, AND PAYOFF
    27th Annual Technical Symposium of the Washington DC Chapter of ACM
    Gaithersburg, Maryland  June 9, 1988
Sponsors:
     Washington DC Chapter, Association for Computing Machinery;
     Institute for Computer Sciences & Technology, National Bureau
            of Standards

Key Dates:
     Register by June 1, 1988 and save over 10% off the at-door rate
     Register by May 1, 1988 and save an additional 15%
     Special rate for full time students

Productivity is a key issue in the information industry.  Information
technology must provide the means to maintain and enhance productivity.
The symposium "Productivity:  Progress, Prospects, and Payoff" will
explore theoretical and practical issues in developing and applying
technology in an information-based society.

Keynote address:  "Near Term Improvements in Productivity"
     Howard Yudkin, President and CEO, Software Productivity Consortium

Plenary panel:  "What Are the Impediments to Improving Productivity?"
     Walter Doherty, IBM
     Phil Kiviat, SAGE Federal Systems
     Marshall Potter, U.S. Navy
     Al Scherr, IBM

Parallel sessions:
     Processes and Tools for Higher Software Productivity
     Software Economics and Reuse
     Software Specification Tools
     Uncertainty in Software Requirements Development
     Panel -- Data Management Standards: A Key to Enhanced Productivity
     Expert Systems and Knowledge Engineering in Software Engineering

For more information, REPLY to this message -OR- contact the Symposium
General Chairman: Charles E. Youman
                  DC Chapter ACM             (703) 883-6349
                  P.O. Box 12953             youman@mitre.arpa
                  Arlington, VA 22209-8953

                 27th Annual Technical Symposium
                        Program Schedule

8:00 -- 9:00: Registration

9:00 -- 9:15: Introduction

Welcoming Remarks

Richard L. Muller, DC ACM Chapter Chairman
James Burrows, Director, Institute for Computer Sciences and
Technology, NBS

Introduction of the Candidates for Chapter Office

Presentation of Awards

9:15 -- 10:00: Keynote Address

How Near-Term Productivity Gains Will Be Achieved
Howard L. Yudkin, President and CEO, Software Productivity
Consortium

Dr. Yudkin received his BSEE from the University of Pennsylvania
and both MSEE and PhD degrees from the Massachusetts Institute of
Technology.  He has 30 years of experience in management,
engineering, research, and teaching.

Dr. Yudkin is President and Chief Executive Officer of the
Software Productivity Consortium, an organization established by
14 leading aerospace firms to develop tools and methods to
improve the efficiency of software development and the quality of
the product.  The Consortium focuses on prototyping and
reusability, exploiting the technologies of systems engineering
and measurement.  The organization is developing the components
and configuration techniques by which its sponsor companies can
create advanced development environments for software.

He was formerly with Booz, Allen, and Hamilton, Inc., and the
Computer Sciences Corporation.  In government, Dr. Yudkin served
as Deputy Assistant Secretary of Defense with responsibilities
for defense communications systems and many of DoD's command and
control and data processing systems.  He also served as an
Assistant Director, Defense Research and Engineering.

10:15 -- 11:00: Plenary Panel

What Are the Impediments to Improving Productivity?
Moderator:  Wilma Osborne, Institute for Computer Sciences and
Technology, NBS

Philip J. Kiviat, Vice President, Business Operations, SAGE
Federal Systems, Inc.
Walter Doherty, Technical Consultant for Computing Systems, IBM
T. J. Watson Research Center
Al Scherr, Director of Integrated Applications, IBM Information
Systems Software Development
Marshall R. Potter, Technology Assessment Division, Office of the
Assistant Secretary of the Navy for Financial Management

Mr. Kiviat is Vice President, Business Operations of SAGE Federal
Systems, Inc.  He is responsible for the acquisition and
management of application system development projects and the
marketing and sale of SAGE's product line for computer-aided
software engineering.  Mr. Kiviat was the first Technical
Director of the Federal Computer Performance Evaluation and
Simulation Center (FEDSIM).  He has held previous management
positions with Simulation Associates, Inc., CTEC, Inc., and SEI
Information Technology, and technical positions with United
States Steel Corporation and the RAND Corporation.

Mr. Doherty is currently Technical Consultant for Computing
Systems at IBM's Research Division, T. J. Watson Research Center
in Computing Systems.  He also manages the Scientific Systems
Support Laboratory there.  He has been Manager of Productivity
and Technology Transfer at IBM Research, an adjunct faculty
member at IBM's SRI, a Distinguished Visitor for the IEEE
Computer Society, and a National Lecturer for ACM.  He developed
instrumentation and performance measurement technology and used
those tools to study the human-machine interface and productivity
resulting from the use of computers for the past 20 years.

Dr. Scherr is Director of Integrated Applications, IBM
Information Systems Software Development.  He manages the
architecture for application programs to achieve consistency,
portability, and extendability.  He is the focal point for
creating and supporting SolutionPac offerings.  Earlier in his
career with IBM, Dr. Scherr managed the overall system design,
project office, and system test organizations that coordinated
the efforts of 18 development groups in producing 1.8 million
lines of code for the first release of MVS.  Dr. Scherr is an IBM
Fellow.

Mr. Potter directs the Technology Assessment Division within the
Office of the Assistant Secretary of the Navy for Financial
Management.  He provides the Department of Navy direction for
joint programs and Navy research and development initiatives
covering the entire information resources area.  Prior to this
position, Mr. Potter worked for the Naval Electronic Systems
Command, the Defense Communications Engineering Center, the David
Taylor Naval R&D Center, and the National Institutes of Health.

11:10 -- 1:00: Parallel Technical Sessions

Session IA:  Processes and Tools for Higher Software Productivity

Session Chairman:  Ronald Giusti, The MITRE Corporation

How to Lose Productivity with Productivity Tools
Elliot J. Chikofsky, Index Technology Corporation

Integrating Data and Process for Productive Systems Analysis
Robert Lambert, American Management Systems

What Productivity Increases to Expect from a CASE Environment  --
Results of a User Survey
Peter Lempp, SPS Software Products and Services, Inc.

A Program a Day:  Software Productivity's Four-Minute Mile
Bruce I. Blum, Johns Hopkins University/Applied Physics Laboratory

Session IB:  Software Economics and Reuse

Session Chairman:  William Wong, National Bureau of Standards

Software Production Economics:  Theoretical Models and Practical
Tools
Chris F. Kemerer, MIT Sloan School of Management

Measuring the Software Development Process
Glen Winemiller, Booz, Allen, and Hamilton, Inc.

Software Reuse -- Key to Enhanced Productivity
John Gaffney, Software Productivity Consortium

Improving Small Systems Software Development Productivity Through
the Management Process
Emily Beaton, The MITRE Corporation

1:00 -- 2:00: Lunch

2:00 -- 3:15: Parallel Technical Sessions

Session IIA:  Software Specification Tools

Session Chairman:  Walter Ellis, IBM Corporation

Program Visualization as a Technique for Improving Software
Productivity
B. Kjell and P. Wang, George Mason University

Specifying Syntax-Directed Tools and Automating Their
Implementation
Larry Morell and Keith Miller, The College of William and Mary

The Visible Tools Shop:  Increasing Programmer Productivity
Through Visual Displays
P. David Stotts, Richard Furuta, and Jefferson Ogata, University
of Maryland

Session IIB:  Uncertainty in Software Requirements Development

Session Chairman:  James D. Palmer, George Mason University

Impact of Requirements Uncertainty on Software Productivity
James D. Palmer, George Mason University

A Knowledge-Based Requirements System
Dolly Samson, George Mason University

A Knowledge-Based System to Reduce Software Requirements
Volatility
Margaret E. Myers, George Mason University

3:30 -- 4:45: Parallel Technical Sessions

Session IIIA:  Panel -- Data Management Standards:  A Key to
Enhanced Productivity

Moderator:  Elizabeth Fong, National Bureau of Standards

Bruce Bargmeyer, Lawrence Berkeley Laboratory
John Campbell, National Computer Security Center
Joe Leahy, UNISYS
Melody Rood, The MITRE Corporation
Edward Stull, GTE Government Systems

Session IIIB:  Expert Systems and Knowledge Engineering in
Software Engineering

Session Chairman:  Jon Weyland, Boeing Computer Services

Applying Software Engineering to Knowledge Engineering (and
Vice-Versa)
L. H. Reeker, T. A. Blaxton, and Christopher R. Westphal, The BDM
Corporation

Typed Functional Programming for Rapid Development of Reliable
Software
Val Breazu-Tannen, O. Peter Buneman, and Carl A. Gunter,
University of Pennsylvania

An Experimental Expert System for Improving the Productivity of
Nautical Chart Cartographers
G. F. Swetnam and E. J. Dombroski, The MITRE Corporation

5:00 -- 5:15: Washington DC Chapter ACM Business Meeting


  [Contact the message author for registration info.  -- KIL]

------------------------------

End of AIList Digest
********************

∂25-Apr-88  0219	LAWS@KL.SRI.COM 	AIList V6 #80 - Moderator Needed, Credit, PatRec, AI Goals
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 25 Apr 88  02:19:08 PDT
Date: Sun 24 Apr 1988 22:40-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #80 - Moderator Needed, Credit, PatRec, AI Goals
To: AIList@SRI.COM


AIList Digest            Monday, 25 Apr 1988       Volume 6 : Issue 80

Today's Topics:
  Administrivia - AIList Going, Going, ...,
  History - Demons,
  AI Tools - Credit Assignment Problem & Conversation Programs &
    Holographic Pattern Recognition,
  Opinion - Expert Systems vs. Operations Research &
    Need for AI and AI Languages

----------------------------------------------------------------------

Date: Sun 24 Apr 88 22:31:23-PDT
From: Ken Laws <LAWS@KL.SRI.COM>
Subject: AIList Going, Going, ...

As I mentioned previously, I will not be able to continue
moderating the AIList Digest much longer.  I have accepted
the position of Program Manager, Robotics and Machine
Intelligence, at the National Science Foundation (under
Y.T. Chien, Division of Information, Robotics, and Intelligent
Systems, Directorate for Computer and Information Science
and Engineering).  This two-year appointment begins at the
end of June, and I have a lot to finish up before then.

So far there has been exactly one offer of help -- and that
was an offer of relaying services if no one volunteered as
moderator.  So, if anyone wants to take all or part of the
AIList stream, the position is still open.

If the situation doesn't change, my recommendation is that
AIList cease to exist as a digest and that Usenet comp.ai
messages be forwarded to the current AIList readers.
Submissions can be sent to the gateway address, which will
be announced later.  (The gateway maintainer has expressed
no objection to making it public.)

One problem remains.  Nearly every digest I send out results
in about ten bounce messages (due to mailer problems and
people who have abandoned their mailboxes without telling
me).  If undigested messages are distributed, each message
will produce a similar number of error returns -- for a total
of perhaps one hundred messages per day!  There are two ways
to prevent this: digesting and local redistribution.

Digesting works, obviously, but puts quite a burden on the
new administrator -- especially if it leads to editing and
full moderation.  The digesting software is also a problem
since I use a version written in SAIL, an obsolete language.
(There are lists using other digesters, but obtaining one
and modifying it would be a bit of a hassle.)  Anyway, I
have come to favor undigested streams -- we just have to
get Arpanet to solve the distribution problem as Bitnet and
Usenet have done.

Local redistribution means that we should build a tree of relay
sites rather than have most hosts connect directly to the new
comp.ai relay.  Already most AIList addresses are bboards or
alias lists, but we need to go further; hosts need to drop
from the direct distribution and reconnect to other hosts.  The
new AIList administrator will then have to tell anyone wanting
to sign up to contact his own postmaster, who can contact a
postmaster at a secondary relay site if necessary.  All this
is a hassle to set up and maintain (with no central map of all
the connections), but if done properly it can keep the bounce
messages from all propagating back to the central administrator.

Well, it's up to you.  I'm ready to abdicate as soon as we
settle on an heir.  I'll be around to help out, of course,
but AIList will not continue long in its current form unless
someone wants to take over the digesting and administrative
duties.  Meanwhile, I'd appreciate it if some of the host
administrators who get this message would offer to take over
distribution and signup/drop duties for their principal cliques.

                                        -- Ken

------------------------------

Date: 21 Apr 88 02:41:10 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: demons: was FRL first ?


     Conniver, circa 1972 (McDermott, MIT) contained a database with similar
daemon mechanisms.  The Conniver manual appears as an ancient MIT AI lab memo.

                                        John Nagle

------------------------------

Date: 21 Apr 88 10:01 PDT
From: hayes.pa@Xerox.COM
Subject: Re: AIList V6 #74 - Queries, CLOS, ELIZA, Planner, Face
         Recognition

Subject: demons: was FRL first ?

I believe that Planner, whose first partial implementation was MicroPlanner,
was the first language to use if-needed, if-added and if-removed demons,
called THCONSE, THANTE and THERASE.  This is certainly where the concepts
originate.
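
A minimal sketch of the if-added/if-removed idea, rendered in Python rather
than MicroPlanner (only rough analogues of THANTE and THERASE are shown;
if-needed demons are omitted):

    class Database:
        def __init__(self):
            self.facts = set()
            self.if_added = []       # demons run when a fact is asserted
            self.if_removed = []     # demons run when a fact is erased

        def assert_fact(self, fact):
            self.facts.add(fact)
            for demon in self.if_added:
                demon(self, fact)

        def erase_fact(self, fact):
            self.facts.discard(fact)
            for demon in self.if_removed:
                demon(self, fact)

    def support_demon(db, fact):
        # when (on X table) is asserted, also assert (supported X)
        if len(fact) == 3 and fact[0] == "on" and fact[2] == "table":
            db.assert_fact(("supported", fact[1]))

    db = Database()
    db.if_added.append(support_demon)
    db.assert_fact(("on", "block-a", "table"))
    print(db.facts)   # the asserted fact plus the demon's consequence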

Pat Hayes

------------------------------

Date: Thu, 21 Apr 88 09:47:02 -0400 (EDT)
From: David Greene <dg1v+@andrew.cmu.edu>
Subject: Re: Credit Assignment Problem


Obviously, it will depend on the technique and domain you are involved with,
but Holland and Smith both offer some interesting insights into some of the
issues... especially with regard to genetic algorithms and classifier systems.


Holland,J.H.  "Escaping Brittleness: the Possibilities of General Purpose
Learning Algorithms Applied to Parallel Rule-Based Systems" in Machine
Learning: An Artificial Intelligence Approach, volume II,  R. Michalski, J.
Carbonell, and T. Mitchell (Eds.), Morgan Kaufmann, 1986.

Smith, S.F.  "Adaptive Learning Systems" in Expert Systems, Principles and Case
Studies, R. Forsyth (Ed.),  Chapman and Hall, Ltd., 1984, chpt. 11.


Hope these are useful.

-David
dg1v@andrew.cmu.edu

------------------------------

Date: Fri, 22 Apr 88 15:42 EST
From: PGOETZ%LOYVAX.BITNET@CUNYVM.CUNY.EDU
Subject: Conversation programs

Someone asked for sources on programs like ELIZA & SHRDLU:

R.C. Parkinson, K.M. Colby, W.S. Faught.  "Conversational Language
   Comprehension Using Integrated Pattern-Matching & Parsing."  Artificial
   Intelligence 9, 1977, p. 111-134.  Also found in a recent (1986?) collection
   from Morgan Kaufmann, Understanding Natural Language (or the same words in
   some other order).  Parry: a simulation of a paranoid patient.  Program
   outline.

Michael Dyer. In-Depth Understanding. 1983.  Boris: A system to
   summarize & answer questions about narratives.  About 400 pages.
   Talks about emotional scripts (ACEs or AFFECTs, I forget), memory
   organization, extensive use of demons.

Joseph Weizenbaum.  "ELIZA - A Computer Program for the Study of Natural
   Language Communication Between Man and Machine."  Communications of the ACM,
   Jan 1966 V9 #1 p. 36-45.

Weizenbaum.  "Contextual Understanding by Computers."  CACM, Aug 67 V10 #8,
   p. 474-480.  Note that Weizenbaum's extensions to ELIZA let it do much
   more than the sample ELIZAs you see popping up in magazines every now &
   then, including learning & answering queries.

Terry Winograd.  Understanding Natural Language. 1971.  SHRDLU.


Whoever asked about Racter - it was originally written on, surprise, the
   Apple IIe.

------------------------------

Date: Thu 21 Apr 88 08:57:36-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Re: holographic pattern recognition


  Thanks for the comments you tacked onto my comp.ai.digest query about
  holographic pattern recognition.

  >  ... Field target-recognition systems are likely to use holograms or
  >  acoustic-wave devices because they are faster than digital techniques
  >  and more robust than complex lens systems ...  Holographic systems
  >  storing dozens of different views of tanks and aircraft have been
  >  demonstrated.

  Can you point me to any reference on this stuff, or is it all classified?
  Raymond Lister


Most of it isn't classified, but it's so widely distributed that
I hardly know where to begin looking.  We're talking about entire
fields of 2-D matched filtering, optical target queueing, correlation
matching, character recognition, etc.  I remember seeing conference
papers on these tank/aircraft recognizers, but would need about a
day to track them down.  The SPIE and CVPR conferences would be good
places to start.

You might like the March '87 Scientific American article by
Abu-Mostafa and Psaltis on Optical Neural Computers, although
they emphasize associative memory rather than recognition.
(Recognition simply taps a different plane in the optical system.)

David Casasent at CMU is active in this field.  I have one of his
papers that's relevant: Optical Word Recognition: Case Study in
Coherent Optical Pattern Recognition, SPIE Optical Engineering,
Vol. 19, No. 5, Sep/Oct 1980, pp. 716-721.

Another paper that comes to hand, although not a great illustration,
is Mendelsohn, Wohlers, and Leib, Digital Analysis of the Effects
of Terrain Clutter on the Performance of Matched Filters for Target
Identification and Location, SPIE Vol. 186, Digital Processing of
Aerial Images, 1979, pp. 190-196.

Some early papers on correlation matching and Fourier signatures
can be found in Computer Methods in Image Analysis, a book of
reprints edited by Aggarwal, Duda, and Rosenfeld.  Two examples are
Horwitz and Shelton, Pattern Recognition using Autocorrelation,
and Lendaris and Stanley, Diffraction-Pattern Sampling for
Automatic Pattern Recognition.

For somewhat more recent work see Agrawala's book of reprints,
Machine Recognition of Patterns.  Examples are Preston's
A Comparison of Analog and Digital Techniques for Pattern
Recognition, and Holt's Comparative Religion in Character
Recognition Machines.

I think I should emphasize that these correlation-based matching
methods are rather fragile.  Casasent has done a lot of work on
recognizing patterns that may be rotated or scaled, but most of
these techniques require exact matches of standard, isolated
characters against uniform backgrounds.  They will not recognize
handwritten characters, for instance.

                                        -- Ken

------------------------------

Date: 21 Apr 88 22:54:47 GMT
From: mcvax!ukc!its63b!epistemi!edai!ceb@uunet.uu.net  (Colin
      Bridgewater)
Subject: Re: Expert Systems in the Railroad Industry.

In article <8816@agate.BERKELEY.EDU> lagache@violet.berkeley.edu
(Edouard Lagache) writes:
                                            ....for those interested in
>     computers and trains: what sort of expert systems have developed for
>     the railroad industry?  It seems to me that there are a number of
>     promising areas:
>
>     1.)  Scheduling.
>
>     2.)  Optimal switching moves and train assembly.
>
>     3.)  Cargo routing and loading.
>
>     4.)  Equipment Maintenance.
>
>          Does anyone know of what work (if any) has been done by railroads
>     or A.I. outfits in this area?  Interestingly enough, Dreyfus would
>     probably claim that the first 3 areas would be very promising domains
>     for expert systems.


  Just to get my two penn'orth in, whatever happened to dynamic programming
for scheduling, cargo-space optimisation and inventory control etc ?  This
well-worn technique is quite adequate for the majority of purposes envisaged
by EL. I mention this to raise a wider issue which was possibly not in the
mind of the original sender, namely that of the desire to throw ever more
complex solution procedures at the simplest of problems....
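
For instance, the cargo-space case is essentially a 0/1 knapsack problem,
which a few lines of textbook dynamic programming solve exactly (a generic
sketch, not tied to any railroad system):

    def knapsack(weights, values, capacity):
        # best[c] = maximum value achievable within remaining capacity c
        best = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            # scan capacities downward so each item is used at most once
            for c in range(capacity, w - 1, -1):
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    # weights, values, capacity -> best achievable value (prints 15)
    print(knapsack([3, 4, 5, 8], [4, 5, 6, 10], 12))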

  Why should we want to implement an expert system, when adequate techniques
exist already ? That is, is the application of expert system technology
appropriate to the magnitude and complexity of the problem ? Should we be
advocating the application of such 'high-tech' solutions to all and sundry ?
I have no doubt that such systems could be made to work, don't get me wrong
on that, I just question whether the level of technology required in order to
do so is justified. Surely it is better to apply the simplest solutions
whenever possible.

  Having said that, I too, would be interested to hear of any research, actual
implementations etc that are around. As an engineer involved in AI, I look for
simple solutions, in the (vain ?) hope of being able to debug them when things
go wrong..........

                                                Colin Bridgewater
                                                Univ of Edinburgh


P.S. there is an expert system around that diagnoses faults and discusses
     repair strategies on diesel-electric locomotives. Unfortunately, I don't
     have any references to hand, but I hope that this jogs someone's memory.


   The Happy Hacker loves to go a-wandering, it's legal in the UK (official).

------------------------------

Date: Thu, 21 Apr 88 19:55 EST
From: INS_ATGE%JHUVMS.BITNET@CUNYVM.CUNY.EDU
Subject: AI -- Reasons


     Carole Hafner pointed out that one reason why we pursue AI is
curiosity about what computers can do.  Another equally valid reason
is the possibility of finding out what -we- as intelligent systems can
do, and possibly -how- we do it.
    Not all of AI is directly relevant to psychological and neurological
study, but some parts of it are.  It definitely provides a way to determine
the relative complexity of problems using certain AI algorithms, and thus
when we find that the computer has trouble doing what we easily and
quickly do, we know that the brain isn't thinking in that manner.
(That is, AI provides both positive and negative evidence to psychological
theories).
    Computational neuroscience has already had an effect on modern
physiological psychology.  In the future, with neural networks and other
"natural-like" AI systems, we might learn even more.

-Thomas Edwards
 from the positivist school for good technology

------------------------------

Date: 21 Apr 88 19:58:56 GMT
From: moss!ihlpa!tracy@att.arpa (Tracy)
Subject: Re: Prof. McCarthy's retort


In article <8804180635.AA09224@ucbvax.Berkeley.EDU>, prem@RESEARCH.ATT.COM
writes:
>
> This is a very cute, and compact retort, but not very convincing; it admits
> of very many similar cute and compact retorts...

        The essence of JMC's retort was not to be convincing, but
rather to show that they missed the point of why AI (or LISP, for
that matter) is useful. Clearly, you could not convince someone that
the problem could not be solved in assembly language, because in
theory it could be done. It just is not easy.

        --Kim Tracy

AT&T Bell Laboratories, Naperville, IL, ..ihnp4!ihlpa!tracy
But of course, it's only my opinion!

------------------------------

End of AIList Digest
********************

∂25-Apr-88  0426	LAWS@KL.SRI.COM 	AIList V6 #81 - Queries, BITNET Instructions    
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 25 Apr 88  04:25:59 PDT
Date: Sun 24 Apr 1988 22:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #81 - Queries, BITNET Instructions
To: AIList@SRI.COM


AIList Digest            Monday, 25 Apr 1988       Volume 6 : Issue 81

Today's Topics:
  Queries - Avionics Architectures for AI & AI Texts &
    Expert Systems for Graphic Design & Users of "Steamer" &
    T.G. Evans & John Mylopoulos,
  Administrivia - Stray BITNET Messages & AILIST Sub Requests

----------------------------------------------------------------------

Date: Fri, 22 Apr 88 11:14:53 est
From: Mr. David Smith <dsmith@gelac.arpa>
Subject: Avionics Architectures for AI

Ken,

First of all,  a hearty thanks for the job you have done in keeping the wheels
turning for so long.  Our group will miss your work, and will fully support your
successor(s).

I have a plea for guidance.  The field of processors for AI is moving so fast
that this may be the best place for on-line references.  We are looking for
a processor with demonstrated capability in the following areas:

1. Proven hardware with power, weight etc. which would not prevent it from
being flown.

2. Language support - preferably Ada, but will accept C as the baseline
language.

3. Multi-tasking executive.

4. Architecture suited to hosting some type of Expert System tools with
access back to the Ada/C environment.

Replies to me may be on the net,  DSMITH@gelac.arpa,  by phone (404)494-3345
or by mail:

                David Smith
                Dept 72-64, Zone 410
                LASC-Georgia
                Marietta, Ga 30068

I will post a summary of the replies for interested parties.

                                DMS

------------------------------

Date: 23 Apr 88   00:37 EDT
From: AL148859%TECMTYVM.BITNET@CORNELLC.CCS.CORNELL.EDU
Subject: A new AIer.

Date: 23 April 1988, 00:34:05 EDT
From: AL148859 at TECMTYVM
To:   AILIST at STRIPE.SRI.COM
Subject:  A new AIer.


    I'm new in the AI world.  Who can help me to get started in this discipline?

Please send me your suggestions.

                                       Thanks
                                        ___
Juan Gabriel Ruiz Pinto                /__/   __      __
I.S.E.                                /    / /_  /-/ /_/
AL148859@TECMTYVM                     ________________________

------------------------------

Date: 24 Apr 88 00:14:52 GMT
From: g-zeiden@gumby.cs.wisc.edu  (Matthew Zeidenberg)
Subject: AI texts

I'm teaching intro AI here at the Univ. of Wisconsin this coming
summer, and I'm trying to choose a text. I'm considering Rich,
Winston, Nilsson and Tanimoto's books. Any opinions?

Thanks in advance.

------------------------------

Date: 20 Apr 88 01:29:01 GMT
From: pyramid!prls!philabs!sbcs!dji@decwrl.dec.com  (the dirty vicar)
Subject: Book Rec Wanted (Thm Proving)

I need recommendations for a good, fundamental text in resolution-based
automated theorem proving.  Something a beginner in this area can get
through.  Please respond by e-mail only, as I don't read this group.

                                        Thanks in advance
                                                the vic

Dave Iannucci \ Dept of Computer Science \ SUNY at Stony Brook, Long Island, NY
ARPA-Internet: dji@sbcs.sunysb.edu / CSNet: dji@suny-sb / ICBM: 40 55 N 73 08 W
UUCP: {allegra, philabs, pyramid, research}!sbcs!dji or ....bpa!sjuvax!iannucci

------------------------------

Date: 24 Apr 88 17:33:13 GMT
From: sunybcs!dmark@boulder.colorado.edu  (David Mark)
Subject: expert systems for graphic design?

REQUEST:  Does anyone out there know of any expert systems (or other
kinds of software systems) for evaluating the graphic design of a
display?  I am thinking of something which takes either an object-
oriented description of a graphic, or a bit-map of an image, and
evaluates such graphic concepts as balance, figure-ground, contrast,
etc.
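
Purely to illustrate the bit-map variant, here is a toy sketch that scores
one such concept, "balance", as the evenness of ink between the halves of
the image (a made-up metric in Python, not a claim about any existing
system):

    import numpy as np

    def balance_score(bitmap):
        """Toy measure of visual balance for a binary bitmap (1 = ink).
        Returns a value in [0, 1]; 1.0 means perfectly balanced halves."""
        img = np.asarray(bitmap, dtype=float)
        h, w = img.shape
        total = max(float(img.sum()), 1.0)
        left, right = img[:, : w // 2].sum(), img[:, (w + 1) // 2 :].sum()
        top, bottom = img[: h // 2, :].sum(), img[(h + 1) // 2 :, :].sum()
        horizontal = 1.0 - abs(left - right) / total
        vertical = 1.0 - abs(top - bottom) / total
        return (horizontal + vertical) / 2.0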

Some of us in Geography in Buffalo are working on a cartographic
expert system, and it seems to us that (at least some) general
principles of graphic design will apply and be useful in the map
domain.  So if such a system already exists, we would be wasting
time trying to re-invent it.

If people reply to me via email, I will summarize responses to the net.

David Mark, Professor, Geography
dmark@joey.cs.buffalo.edu
dmark@sunybcs.BITNET

------------------------------

Date: 21 April 1988 1407-PST (Thursday)
From: thode@nprdc.arpa (Walt Thode)
Reply-to: thode@nprdc.arpa
Subject: Users of "Steamer"

Hello--

I recently inherited the "Steamer" project after the departure from my
organization of the former researchers.  (In case you're not familiar
with it, Steamer is a Lisp-based simulation, based on a mathematical
model, of a 1200-psi steam propulsion plant like those found on a number
of the Navy's ships; it incorporates an icon-based graphics editor that
helps build views of the plant that can't be seen otherwise.)  The
basic software that constitutes Steamer was placed in the public domain
several years ago, at the National Technical Information Service
(currently part of the Department of Commerce).

I have heard that NTIS sent numerous copies of Steamer to requesters.
I would like to find out how many people, organizations, etc. have
requested the Steamer software from NTIS, how many have made some use
of the software, and what (if any) modifications have been made to it
by users.  If you fit in any of these categories, or know of anyone who
does, I'd like to hear from you.

Please reply directly to me.  If there is enough interest, I will
summarize for the net.

--Walt Thode, Navy Personnel R&D Center
  ARPANET: thode@nprdc.arpa
  MILNET:  thode@pacific.nprdc.mil
  uucp:    ihnp4  \
           akgua   \
           decvax   >-- !ucsd!nprdc!thode
           dcdwest /
           ucbvax /

------------------------------

Date: Thu 21 Apr 88 08:09:34-PST
From: Rand Waltzman <WALTZMAN@IU.AI.SRI.COM>
Subject: TG Evans


Does anyone out there know where I can find T. G. Evans, the guy
who did the work on geometric analogy problems in the early 60's?

Thanks.

Rand Waltzman

------------------------------

Date: Fri, 22 Apr 88 18:48:17 EDT
From: Deeptendu Majumder <MEIBMDM@GITVM2>
Subject: Query - Dr. John Mylopoulos

Can anybody give an address (virtual or real) where I can reach
Dr. John Mylopoulos?

Thanx
Deeptendu Majumder
BOX 30963, Ga Tech
Atlanta, GA 30332

------------------------------

Date: 22 Apr 88 10:13:00 EDT
From: Walter Roberson <WCSWR%CARLETON.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: administrivia/ stray BITNET messages

[...]
  A small correction to the note you wrote in AIList regarding
this problem: I don't think the problem is with people sending
messages to AILIST%LISTSERV. The problem seems to be with people
sending messages to AILIST@NDSUVM1. NDSUVM1 is the BITNET host
running the LISTSERV program which supports the AILIST address.
Messages sent to AILIST@NDSUVM1 are immediately redistributed
to BITNET sites. Control messages should, in theory, be sent
to LISTSERV@NDSUVM1. Submissions should, of course, be sent
to AIList@Stripe.Sri.COM . In other words, where you wrote
'AILIST%LISTSERV', I would write 'AILIST@NDSUVM1'.

  Walter Roberson <WCSWR@CARLETON.BITNET>

------------------------------

Date: Fri, 22 Apr 88 09:31:04 CDT
From: ND HECN E-mail Postmaster <INFO%NDSUVM1.BITNET@CUNYVM.CUNY.EDU>
Reply-to: AILIST-REQUEST%STRIPE.SRI.COM,
Subject: AILIST Sub Requests

A clarification of AILIST Procedures for BITNET/NETNORTH/EARN Subscribers:

   There have apparently been some subscribe requests going out to the
AILIST mailing list.  The problem is that users do not read the instructions
on signing up (or miss the point) in the Arpa (Internet) List of Lists
(or SIGLISTs), or those instructions are not clear.  I will include some
suggested revisions below.

   ALL requests which are LISTSERV commands MUST be addressed to LISTSERV
and NOT to the list (ie. NOT to AILIST).  I realize that this is going
to the people who are already members of the list, but maybe you can
help spread the word...

   One option would be to set the list up as edited.  Then if someone
sent something to the BITNET/EARN/NETNORTH (BNEnet) part of the list it would
be sent to AILIST-REQUEST@Stripe.SRI.COM (or whatever) and NOT to the
whole list.  (The place it gets sent MUST be a human's mailbox, not a
list exploder...).

   On some lists we have also had problems with bad mailers sending
error messages to the Reply-To field instead of the Sender (as required
by RFC822).  But most of the BNEnet subscribers are protected from that.

   Let Ken Laws (AILIST-REQUEST) know if the frequency of these mis-mailings
is enough to warrant the change.  (Or, maybe we need an "AI" module to figure
out if the mail is meant as a command or a contribution...  :-).

        Marty

From Rich Zellich's Internet List of Lists:    (Note Revisions... **)

   All requests to be added to or deleted from this list, problems, questions,
   etc., should be sent to AIList-Request@SRI.COM.

   BITNET or NetNorth subscribers can join by sending the SUB command with
   your name.
** your name to LISTSERV@NDSUVM1.
   EARN subscribers should send their requests in a similar format
   to LISTSERV AT FINHUTC:
      SEND LISTSERV@NDSUVM1 SUB AILIST Jon Doe
   or TELL LISTSERV AT NDSUVM1 SUB AILIST Jon Doe
   or TELL LISTSERV AT FINHUTC SUB AILIST Johan Doe
** or you may send mail to LISTSERV@NDSUVM1 or FINHUTC with the first line
**    of the body of the mail being   SUB AILIST Joan Doe

   To be removed from the list:
      SEND LISTSERV@NDSUVM1 SIGNOFF AILIST
   or TELL LISTSERV AT NDSUVM1 SIGNOFF AILIST
   or TELL LISTSERV AT FINHUTC SIGNOFF AILIST

** PLEASE NOTE CAREFULLY:  In ALL cases, LISTSERV commands are addressed to
** LISTSERV and NOT to AILIST!

------------------------------

End of AIList Digest
********************

∂29-Apr-88  0330	LAWS@KL.SRI.COM 	AIList V6 #82 - Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Apr 88  03:30:17 PDT
Date: Fri 29 Apr 1988 01:09-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #82 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Friday, 29 Apr 1988       Volume 6 : Issue 82

Today's Topics:
  Seminars - AURORA -- An Or-Parallel Prolog System (Unisys) &
    Nonmonotonic Parallel Inheritance Networks (AT&T) &
    Internet Self-Organization (SU) &
    ACM SIGART (LA) &
    Explanation-Based Search Control Learning (BBN) &
    Triangular Scheduling for Depletion-Process Control (SRI),
  Conference - Society for Philosophy and Psychology, Annual Meeting &
    Int. Neural Network Society '89 &
    Robotics and Intelligent Machine Automation &
    AAAI Workshop on Plan Recognition

----------------------------------------------------------------------

Date: 23 Apr 88 16:40:03 GMT
From: antares!finin@burdvax.prc.unisys.com  (Tim Finin)
Subject: Seminar - AURORA -- An Or-Parallel Prolog System (Unisys)


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER

                AURORA - An Or-parallel Prolog System

                         Andrzej Ciepelewski
             Swedish Institute of Computer Science (SICS)


A parallel Prolog system has been constructed in a cooperative effort
among Argonne National Lab, the University of Manchester and SICS.  The
system is based on a state-of-the-art sequential Prolog.  It runs on
shared-memory multiprocessors, e.g. the Sequent Symmetry, and is
expected to perform better there than the commercial Prolog systems
available today.  The system executes ordinary Prolog programs with
cuts and side effects while keeping the semantics of sequential
execution.  Programs written in Prolog extended with parallel
primitives, such as "cavalier" commit and unordered side effects, can
also be executed.  The system has been designed for portability and
modifiability.  Its main parts, the engine and the scheduler, are
nicely interfaced.  Two quite different schedulers have already been
tried.  Some preliminary performance data have already been collected,
mostly on small search and parsing problems.  The largest programs run
so far have been the parallelised SICStus Prolog compiler and Chat-80.
The figures from a Sequent Balance 8000 show about 20% parallel
overhead in the one-processor case and close to linear speed-ups.  We
are waiting with excitement for figures from the Sequent Symmetry, to
which the system has recently been ported.  In my talk I will mainly
discuss implementation decisions and performance figures.


                      2:00 pm Tuesday, April 26
                           Paoli Auditorium
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --
Tim Finin                       finin@prc.unisys.com
Paoli Research Center           ..!{psuvax1,sdcrdcf,cbmvax,bpa}!burdvax!finin
Unisys Corporation              215-648-7446 (o)
PO Box 517, Paoli PA 19301      215-386-1749 (h)

------------------------------

Date: Mon, 25 Apr 88 12:30:25 EDT
From: dlm%research.att.com@RELAY.CS.NET
Subject: Seminar - Nonmonotonic Parallel Inheritance Networks (AT&T)


Speaker: Chiaki Sakama
         ICOT, Japan.

Time:  10:30, May 2nd, 1988
Room:  AT&T Bell Laboratories- Murray Hill 3D-436

Title: Nonmonotonic Parallel Inheritance Networks

              This paper discusses a formalization of nonmonotonic inheritance
reasoning in semantic networks using Reiter's default theory. It enables us to
define inheritance rules apart from the data in a network, and improves the
readability and maintainability of a network compared with other approaches.
We also present a parallel inheritance algorithm based on this method,
which generates a set of properties for an input class. This algorithm is
easily realized in the parallel logic programming language GHC (Guarded
Horn Clauses), which is being developed as the kernel language of the
fifth-generation project at ICOT.


Sponsor:  David Etherington
          ether@research.att.com

------------------------------

Date: 25 Apr 1988 15:01-PDT
From: Steve Deering <deering@pescadero.stanford.edu>
Subject: Seminar - Internet Self-Organization (SU)


    INTERNET CONGESTION, SELF-ORGANIZATION AND SELF-DESTRUCTION

                            Van Jacobson
                    Lawrence Berkeley Laboratory

         DSRS: Distributed Systems Research Seminar (CS548)
                    Thursday, April 28, 4:15 pm
                      Margaret Jacks Hall 352

  A typical internet contains a large number of interdependent
  pieces, each with many degrees of freedom.  A congested internet
  forces these pieces to operate `far from equilibrium' (where
  `equilibrium' is the absence of queues).  In the natural world,
  these characteristics (coupling, choice and a non-equilibrium
  environment) usually herald a `self-organizing system' whose
  behavior becomes more than the sum of its parts.  I speculate
  that a congested internet is such a system and offer some
  evidence to support the conjecture.  Left to itself, an internet
  seems to evolve into a system bent on self-destruction.
  Although I currently know of no way to prevent this, I'll try to
  point out promising research directions.

------------------------------

Date: 27 Apr 88 00:20:43 PDT (Wednesday)
From: Bruce Hamilton <Hamilton.osbuSouth@Xerox.COM>
Reply-to: Hamilton.osbuSouth@Xerox.COM
Subject: Seminar - LA ACM SIGART

Wednesday May 4 at the Ramada Hotel, 6333 Bristol Parkway, Culver City (just
off the 405 freeway at Sepulveda) will be the kickoff event for the Artificial
Intelligence Special Interest Group of the Los Angeles chapter of the ACM.
Professional gambler Mike Caro will discuss "ORAC", his computer program which
plays world-class poker.

Dinner (optional) is at 7 PM, program at 8 PM.  Dinner is approx. $16.50;
program is free.

The attendance at this event will determine if LA SIGART is a viable idea, so
please attend and bring a friend!

--Bruce for Kim Goldsworthy

------------------------------

Date: Wed 27 Apr 88 14:23:02-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Explanation-Based Search Control Learning (BBN)

                    BBN Science Development Program
                       AI Seminar Series Lecture

              LEARNING EFFECTIVE SEARCH CONTROL KNOWLEDGE:
                     AN EXPLANATION-BASED APPROACH

                             Steven Minton
                       Carnegie-Mellon University
                     (Steven.Minton@cad.cs.cmu.edu)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                        10:30 am, Tuesday May 3


In order to solve problems more effectively with accumulating
experience, a problem solver must be able to learn and exploit search
control knowledge. In this talk, I will discuss the use of
explanation-based learning (EBL) for acquiring domain-specific control
knowledge. Although previous research has demonstrated that EBL is a
viable approach for acquiring control knowledge, in practice EBL may not
always generate useful control knowledge. For control knowledge to be
effective, the cumulative benefits of applying the knowledge must
outweigh the cumulative costs of testing whether the knowledge is
applicable. Generating effective control knowledge may be difficult, as
evidenced by the complexities often encountered by human knowledge
engineers. In general, control knowledge cannot be indiscriminately
added to a system; its costs and benefits must be carefully taken into
account.

To produce effective control knowledge, an explanation-based learner
must generate "good" explanations -- explanations that can be profitably
employed to control problem solving.  In this talk, I will discuss the
utility of EBL and describe the PRODIGY system, a problem solver that
learns by searching for good explanations. Extensive experiments testing
the PRODIGY/EBL architecture in several task domains will be discussed.
I will also briefly describe a formal model of EBL and a proof that
PRODIGY's generalization algorithm is correct with respect to this model.

------------------------------

Date: Tue 26 Apr 88 14:59:07-PDT
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Seminar - Triangular Scheduling for Depletion-Process
         Control (SRI)

         Triangular Scheduling for Depletion-Process Control

                           Kenneth Laws
                         SRI International

There exist phenomena in which resource depletion rate is proportional
to the amount of material available.  Control of such processes can be
exerted through replenishment of consumable resources or through
manipulation of the proportionality function.  I propose an application
of triangular numbers to such control in a simple discrete system.

Consider the ingestion of m&m's.  Unless the consumer process employs
appropriate feedback, the supply of these expensive units will be
exhausted before full psychogustatory satisfaction has been achieved.
This side-effect of the well-known greedy algorithm can be overcome
by a strict discipline of triangularization.  The steps are as follows:

1) Arrange the units in a triangular pattern, with distribution
   of colors optional.  Start with one element at the top, then
   increase the number in each successive row by one.  Any leftover
   units may be scheduled for immediate consumption.

2) Queue rows for removal in inverse order of their creation.
   A row may be deleted right to left, left to right, or in random
   order, but not from the middle outward.  The rate of consumption
   will depend on numerous parameters, including attentional factors,
   but is typically limited by sequential transport and processing--
   the so-called Laws bottleneck.

3) A delay of approximately one minute should separate removal of any
   two rows.  This enhances perception of the imminent depletion crisis,
   with possible dynamic replanning to mitigate its effects.  The
   active agent may wish to increase the delay in inverse proportion
   to the number of units remaining, possibly selecting such delays
   from the set of triangular numbers.

This simple algorithm admits many obvious variations, such as hierarchical
control systems with triangular arrangements substituting for the rows
(or even units) described above.  The demonstrated efficacy of the
technique leads to speculation about related depletion processes --
e.g., peanuts, peppermints, and Chex party mix -- but extension to
these domains has not yet been attempted.  There may be difficulties
in transferring the triangular scheduling approach to the real world,
particularly for area-intensive elements such as potato chips.
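
A throwaway sketch of steps 1-3 (Python, with the inter-row delay shortened
and every parameter, naturally, made up):

    import time

    def triangular_schedule(units, delay=1.0):
        """Consume `units` by the triangular discipline: build rows of
        size 1, 2, 3, ... and remove them in inverse order of creation."""
        rows, size = [], 1
        while len(units) >= size:
            rows.append([units.pop() for _ in range(size)])
            size += 1
        print("leftovers scheduled for immediate consumption:", units)
        for row in reversed(rows):        # inverse order of creation
            print("consuming row:", row)
            time.sleep(delay)             # savor the imminent depletion crisis

    triangular_schedule(list(range(12)), delay=0.1)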

------------------------------

Date: 22 Apr 88 05:19:48 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Conference - Society for Philosophy and Psychology, Annual
         Meeting


        Society for Philosophy and Psychology: 14th Annual Meeting
                     Thursday May 19 - Sunday May 22
                University of North Carolina, Chapel Hill

Contributors will include Jerry Fodor, Ruth Millikan, Colin Beer,
Robert Stalnaker, Paul Churchland, Lynn Nadel, Michael McCloskey, James
Anderson, Alan Prince, Paul Smolensky, John Perry, William Lycan, Alvin Goldman

        Papers (PS) and Symposia (SS) on:

Naturalism and Intentional Content (SS)
Animal Communication (SS)
The Child's Conception of Mind (PS)
Cognitive Science and Mental State, Wide and Narrow (PS)
Logic and Language (PS)
Folk Psychology (PS)
Current Controversies: Categorization and Connectionism (PS)
Current Controversies: Rationality and Reflexivity (PS)
Neuroscience and Philosophy of Mind (SS)
Connectionism and Psychological Explanation (SS)
Embodied vs Disembodied Approaches to Cognition (SS)
Emotions, Cognition and Culture (SS)
Naturalistic Semantics and Naturalized Epistemology (PS)

Registration is $30 for SPP members and $40 for nonmembers. Write to

Extension and Continuing Education
CB # 3420, Abernethy Hall
UNC-Chapel Hill
Chapel Hill NC 27599-3420

Membership Information ($15 regular; $5 students):

Professor Patricia Kitcher
        email: ir205%sdcc6@sdcsvax.ucsd.edu
Department of Philosophy B002
University of California - San Diego
La Jolla CA 92093

--

Stevan Harnad            harnad@mind.princeton.edu       (609)-921-7771

------------------------------

Date: Tue, 26 Apr 88 18:50:55 EDT
From: mike%bucasb.bu.edu@bu-it.BU.EDU (Michael Cohen)
Subject: Conference - Int. Neural Network Society '89

April 26, 1988

GOOD NEWS FOR THE NEURAL NETWORK COMMUNITY!

There are now over 2000 members of the International Neural Network Society
from 34 countries and 47 states of the U.S.A.

The INNS is thus beginning to fulfill its purpose of offering our
community an intellectual home of its own.

In particular, over 500 abstracts were submitted to the 1988 First Annual
INNS meeting in Boston, to be held on September 6--10, 1988, at the Park Plaza
Hotel. The abstracts cover the full spectrum of topics in the neural network
field.

While many are working hard on the final program and plans for the 1988
meeting, we also needed to plan further ahead. Accordingly, the INNS
Governing Board approved holding the Second Annual INNS Meeting in
Washington, DC, on September 5--9, 1989, and we have negotiated a contract
with the Omni Shoreham Hotel.

See you in Boston in '88 and Washington in '89!

Steve Grossberg, President, INNS
Demetri Psaltis, Vice President, INNS
Harold Szu, Secretary-Treasurer, INNS


----
Michael Cohen ---- Center for Adaptive Systems
Boston University (617-353-7857)
Email: mike@bucasb.bu.edu
Smail: Michael Cohen
       Center for Adaptive System
       Department of Mathematics, Boston University
       111 Cummington Street
       Boston, Mass 02215

------------------------------

Date: Wed, 27 Apr 88 09:45:59 PDT
From: meyer@tetra.nosc.mil (Kathi L. Meyer)
Subject: Conference - Robotics and Intelligent Machine Automation


         POTENTIAL 88
         ============
         Robotics and Intelligent Machine Automation Conference
         ======================================================

         Date:      24 - 25 May 1988
         Location:  McLean Hilton, McLean, VA
         POC:       Robotics International (313) 271-1500 x374


         Robotics International is sponsoring a conference on 24-
         25  May  to discuss DoD and Industrial  requirements  in
         robotics   and  intelligent  machine   automation.    In
         addition  to  the  projection  of  needs  by  Flag-level
         speakers from Army,  Navy,  Air Force,  and DARPA, VP/GM
         level  presentations  will  be  made  by  speakers  from
         Boeing,  Lockheed,  General Motors, and Martin Marietta.
         There  are also a number of projections of technological
         capabilities  being made by scientists  from  CMU,  MIT,
         UCLA, U of Md, etc.

         Whether  you're  providing research for DoD sponsors  or
         are the recipient of such technology developments,  this
         looks   like  the  conference  to  attend.

------------------------------

Date: 27 Apr 88 18:29:57 GMT
From: dvm@YALE-BULLDOG.ARPA  (Drew Mcdermott)
Subject: Conference - AAAI Workshop on Plan Recognition


                      CALL FOR PARTICIPATION
                    WORKSHOP ON PLAN RECOGNITION

AAAI-88, Minneapolis, Minnesota, Wednesday, August 24.
Radisson-St. Paul Hotel

Plan recognition is a touchstone issue for Artificial Intelligence,
which has generated thorny problems and theoretical results for years.
The class of problems we have in mind is to infer a goal-based
explanation of the behavior of one or more actors.  This class can
be extended to closely related problems like inferring an author's
plans from a text, inferring a programmer's plans from his code, or
inferring explanations of new bug types from case histories.

Problems of this sort often seem to lie at the heart of intelligence,
because people can apparently select just the right explanatory
principles from large knowledge bases.  For that reason, this problem
area has encouraged interest in nontraditional control structures such
as marker passing, parallelism, and connectionism.  To date, however,
no decisive solutions have been obtained.

The workshop will aim at bringing together individuals working in all
the active areas related to plan recognition, as well as individuals
trying to exploit research results for practical applications.  This
interaction should prove fruitful for both groups.

Contributors interested in participating in this workshop are requested
to submit a 1000-2000 word extended abstract of their work, describing
its relevance to the topic of plan recognition.  The workshop attendance
will be limited to 35, and all participants will present their work,
either in an oral presentation, or in a poster session.  Abstracts will
be refereed by the organizing committee.  Copies of the chosen abstracts
will be sent to each participant prior to the workshop.  Presenters shall
have the opportunity to expand their abstracts for inclusion in a workshop
proceedings to be published later.

Extended abstracts should be received prior to June 3, 1988.  Mail them
to:
               Jeff Maier
               TASC
               2555 University Blvd.
               Fairborn, Ohio 45324
               (513)426-1040

Authors will be notified of the status of their papers by July 8, 1988.

Organizing Committee:

             Larry Birnbaum, Yale University
             Doug Chubb, US Army, Center for Signals Warfare
             Jeff Maier, TASC
             Drew McDermott, Yale University
             Bob Wilensky, UC Berkeley
             Steve Williams, TASC

------------------------------

End of AIList Digest
********************

∂29-Apr-88  0539	LAWS@KL.SRI.COM 	AIList V6 #83 - Texts, Theorem Prover, Graphic Design, Demons  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Apr 88  05:39:02 PDT
Date: Fri 29 Apr 1988 01:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #83 - Texts, Theorem Prover, Graphic Design, Demons
To: AIList@SRI.COM


AIList Digest            Friday, 29 Apr 1988       Volume 6 : Issue 83

Today's Topics:
  Education - AI Texts & Lisp Machine Mailing List,
  AI Tools - Boyer and Moore's Prover,
  Opinion - Exciting Work in AI & Expert Systems for Graphic Design,
  History - Demons and Other AI Constructs

----------------------------------------------------------------------

Date: 25 Apr 88 04:57:12 GMT
From: beowulf!demers@sdcsvax.ucsd.edu  (David E Demers)
Subject: Re: AI texts

In article <1516@gumby.cs.wisc.edu> g-zeiden@gumby.cs.wisc.edu
(Matthew Zeidenberg) writes:
>I'm teaching intro AI here at the Univ. of Wisconsin this coming
>summer, and I'm trying to choose a text. I'm considering Rich,
>Winston, Nilsson and Tanimoto's books. Any opinions?
>
For an intro course, the above are all good; also, Charniak &
McDermott.  I suppose it depends on whether you want a broad
overview, and on what YOU think AI really is.  Some LISP should
be a prerequisite, PROLOG may be helpful.  And of course the new
hot topic is connectionism/neural networks.  (I am still a grad
student, & speak from the point of view of having studied rather
than taught...)

Dave DeMers    UCSD

------------------------------

Date: 26 Apr 88 05:23:50 GMT
From: portal!cup.portal.com!tony_mak_makonnen@uunet.uu.net
Subject: Re: AI texts

I'm teaching intro AI here at the Univ. of Wisconsin this coming
summer, and I'm trying to choose a text. I'm considering Rich,
Winston, Nilsson and Tanimoto's books. Any opinions?

Thanks in advance.


I would recommend you take a look at Parsaye and Chignell's book
"Expert Systems for Experts", John Wiley & Sons.  It is more
oriented to shells and less to Lisp.

------------------------------

Date: 25 Apr 88 21:30:00 GMT
From: ong@p.cs.uiuc.edu
Subject: Re: AI texts


How about Nilsson and Genesereth's Logical Foundations of AI?  Some of
the chapters are definitely not introductory stuff, but it is written
in a very clear and concise manner.

Students might find it interesting to be exposed to neural networks in
an introductory AI course, too.

------------------------------

Date: 27 Apr 88 18:11:22 GMT
From: rochester!daemon@bbn.com  (Brad Miller)
Subject: Re: Lisp Machines mailing list sought


    Date: 25 Apr 88 20:44:14 GMT
    From: mendozag@pur-ee.UUCP (Grado)

    Are there any mailing lists concerned with the Symbolics
    Lisp Machines?
     I remember I read about one in the Arpanet some time ago.

Yes: SLUG@ai.sri.com; sign up via SLUG-Request@ai.sri.com

(SLUG stands for Symbolics Lisp Users Group)

----
Brad Miller             U. Rochester Comp Sci Dept.
miller@cs.rochester.edu {...allegra!rochester!miller}

------------------------------

Date: Fri, 22 Apr 88 19:21:13 CDT
From: Robert S. Boyer <boyer@CLI.COM>
Subject: Availability of Boyer and Moore's Prover

A Common Lisp version of our theorem-prover is now available under the
usual conditions: no license, no copyright, no fee, no support.  The
system runs well in three Common Lisps:  KCL, Symbolics, and Lucid.
There are no operating system or dialect conditionals, so the code may
well run in other implementations of Common Lisp.

Included as sample input is the work of Hunt on the FM8501
microprocessor and of Shankar on Goedel's incompleteness theorem and
the Church-Rosser theorem.

To get a copy follow these instructions:

1.   ftp to Arpanet/Internet host cli.com.
     (cli.com currently has Internet numbers
     10.8.0.62 and 192.31.85.1)
2.   log in as ftp, password guest
3.   get the file /pub/nqthm/README
4.   read the file README and follow the directions it gives.

Inquiries concerning tapes may be sent to:

    Computational Logic, Inc., Suite 290
    1717 W. 6th St.
    Austin, Texas 78703

A comprehensive manual is available.  For information on obtaining a
copy, write to the address above.

Bob Boyer         J Moore
boyer@cli.com     moore@cli.com

Due to major changes in the Arpanet, getting through to cli.com may be
difficult starting May 1 until one of the alternative Internet options
is solidly in place.

------------------------------

Date: 24 Apr 1988 18:52 (Sunday)
From: munnari!nswitgould.oz.au!wray@uunet.UU.NET (Wray Buntine)
Subject: Re: Exciting work in AI


Ehud Reiter (V6#69) was eliciting the following (summarised by Spencer Star)
>  Exciting work in AI.  The three criteria are:
>     1. Highly thought of by at least 50% in the field.
>     2. Positive contribution
>     3. Real AI

Spencer Star made a number of suggestions of "exciting" work.
I disagree on some of them.  I mention only 1 below.

>  Another area involves classification trees of the sort generated by
>  Quinlan's ID3 program.
Ross's original ID3 work (and the stuff usually reported in Machine Learning
overviews) and much subsequent work by him and others (e.g. pruning)
actually fail the "real AI" test.  The approach was independently developed by
a group of applied statisticians in the 70's and is well known:
        Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984)
        "Classification and Regression Trees", Wadsworth
Ross's more recent work does significantly improve on Breiman et al.'s stuff.
To my knowledge, however, it is not yet widely known.  Try looking in IJCAI-87.
His latest program is actually called C4 (heard of it?), has been for years,
and I think it is closer to real AI (e.g. concern for comprehensibility),
though it still has an applied statistics flavour.  Perhaps this fails the
"highly thought of by 50%" test.  Another year maybe.

--------------
Wray Buntine
wray@nswitgould.oz
University of Technology, Sydney

------------------------------

Date: 26 Apr 88 09:45:09 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: expert systems for graphic design?

In article <10494@sunybcs.UUCP> dmark@sunybcs.UUCP (David Mark) writes:
>REQUEST:  Does anyone out there know of any expert systems (or other
>kinds of software systems) for evaluating the graphic design of a
>display?
>If people reply to me via email, I will summarize responses to the net.
Sorry, email can be flakey from Europe.

Main references I know are

%A D.J. Streveler
%A A.I. Wasserman
%T Quantitative Measures of the Spatial Properties of Screen Designs
%J INTERACT'84
%V 1
%I Elsevier/IEE
%P 124-133 (participants edition)
%D 1984
%O 1985 edition pub. North Holland

%A T. Tullis
%T Designing a menu-based interface to an operating system
%J CHI '85
%P 79-84
%D 1985

%A T.S. Tullis
%T Optimising the Usability of Computer-Generated Displays
%B People and Computers: Designing for Usability
%E M.D. Harrison and A. Monk
%I Cambridge University Press
%C Cambridge
%P 604-613
%D 1986

The measures covered are useful, but very crude.  Graphic designers
are not ones for writing things down, nor can I see them rushing to
have their knowledge elicited by production rule hackers. It is far
more efficient to find a NUMBER of graphic designers locally and ask
them to evaluate your display layouts.  Then use your judgement to
decide on which advice to take.

If you've got a long time, you could implement some alternative
designs for aspects of the display and get a human factors expert (not
a theoretical psychologist) to help you to compare human performance
effects of the design alternatives.  You could also discuss design
alternatives in the first place with such an expert.  Use paper here,
as it's more efficient in early design than most screen generators -
MacDraw is a good early prototyping tool.

Finally, try things out on representative end-users, who may like
neither the aesthetics of your preferred designer nor the optimum
performance of the human factors experiment.

This all seems harder than just running a program, but I can assure
you that it is all a lot easier than trying to design one to do an
equivalent job.  We do not have a computational account of good
graphical design, are unlikely to gain one in the near future, and
probably never will reduce aesthetics to something as ugly as a
Turing-equivalent formalism.  Of course, the phenomenological aspects
of end-user preferences can NEVER be automated or simulated, because
such phenomenology is defined to be human experiences and
categorically absolutely nothing else. It is as computable as a daffodil!

------------------------------

Date: 27 Apr 88 18:11:46 EDT
From: John Sowa <SOWA@ibm.com>
Subject: Historical remarks on demons and other AI constructs

In response to some recent questions, I thought that it might be useful
to cite a few historical references:

 1. The first use of the term demon in AI was for the system Pandemonium
    by Oliver Selfridge (1958).  He developed it as a system for
    learning to recognize human-keyed Morse code.  It consisted of
    low-level demons that looked for patterns.  When a demon found
    its pattern, it would "shout".  Higher-level demons listened for
    shouts from lower-level demons.  They, in turn, would shout when
    they heard a characteristic pattern of shouts.

 2. The term "demon" was introduced into physics by Maxwell, who
    used it in thought experiments in thermodynamics; e.g. imagine
    a demon who watched molecules bouncing around and opened a trap
    door to allow only the fast ones to pass through.  In principle,
    it could reduce entropy by separating hot gas from cool gas.
    However, the entropy of the demon itself would increase.  For
    a discussion of demons in physics, see von Neumann (1951), who
    contributed to physics as well as logic, set theory, and even
    computers.

 3. While we're mentioning von Neumann, I have heard some people
    distinguish highly parallel computers from "von Neumann machines."
    However, von Neumann (1958) wrote one of the first books about
    parallel computation and the possibility of simulating the brain.
    So the term "von Neumann machine" could refer either to conventional,
    single-CPU machines or to highly parallel connectionist machines.

 4. A previous note mentioned Carl Hewitt's PLANNER as a source for the
    three-way distinction between if-needed, if-added, and if-deleted
    demons.  The MIT reports may not be easy to find, but there is a
    paper by Hewitt (1969) in the first IJCAI.  That paper is confusing
    and hard to read, but you can find the three-way distinction in it.
    Although Hewitt did not invent if-needed or if-added demons, I do
    not know of any earlier version of an if-deleted demon.

 5. Goal-directed or if-needed patterns were well developed in the
    General Problem Solver.  The most definitive reference to GPS is
    the book by Ernst & Newell (1969), but there are papers on early
    versions dating back to 1959.

 6. The 1969 version of GPS also had a well developed use of "schemas,"
    which were frame-like structures that predated frames by at least
    6 or 7 years.  A schema always had unbound variables.  When all
    its variables were instantiated, it was called a "model."

 7. The term schema was introduced to Newell & Simon by Adriaan de Groot,
    who visited Carnegie in the 1960s.  De Groot (1965) wrote a highly
    influential book on thinking processes in chess, in which he applied
    the theories of the psychologist Otto Selz (1913, 1922).  Selz had
    a theory of "schematic anticipation" in which a schema served as a
    goal towards which the thinking processes were directed.  Selz even
    described backtracking search procedures as a way of satisfying
    the goal and used a network notation for his schemas.  Quillian,
    who studied with Newell & Simon, cited Selz in his thesis (1966),
    but the abridged version reprinted in Minsky (1968) doesn't
    mention Selz.

John Sowa

References:

O. G. Selfridge (1968) "Pandemonium:  a paradigm for learning," in
Mechanisation of Thought Processes, Proceedings of a symposium held
at the National Physical Laboratories, Nov. 1958, Her Majesty's
Stationery Office, London, pp. 511-531.

J. von Neumann (1951) Mathematical Foundations of Quantum Mechanics,
Princeton University Press, Princeton, NJ.

J. von Neumann (1958) The Computer and the Brain, Yale University Press,
New Haven.

C. Hewitt (1969) "PLANNER: a language for proving theorems in robots,"
Proceedings of IJCAI, pp. 295-301.

G. W. Ernst & A. Newell (1969) GPS:  A Case Study in Generality and
Problem Solving, Academic Press, New York.

A. de Groot (1965) Thought and Choice in Chess, Mouton, The Hague.

O. Selz (1913) Ueber die Gesetze des geordneten Denkverlaufs,
Spemann, Stuttgart.

O. Selz (1922) Zur Psychologie des produktiven Denkens und des Irrtums,
Friedrich Cohen, Bonn.

M. R. Quillian (1966) Semantic Memory, Report AD-641671, Clearinghouse
for Federal Scientific and Technical Information.

M. Minsky (1968) Semantic Information Processing, MIT Press, Cambridge, MA.

------------------------------

End of AIList Digest
********************

∂29-Apr-88  2235	LAWS@KL.SRI.COM 	AIList V6 #84 - Queries
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 29 Apr 88  22:35:06 PDT
Date: Fri 29 Apr 1988 20:31-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #84 - Queries
To: AIList@SRI.COM


AIList Digest           Saturday, 30 Apr 1988      Volume 6 : Issue 84

Today's Topics:
  Queries - Design Methods to Develop Rule-Based Expert Systems &
    Simulation & Bacon & Decision Theory & Designing for Diagnosis &
    Lisp Machines Mailing List & Construction Industry &
    Causal Models and Diagnosis & CMU Sidewalk Rover &
    Software Engineering & AI Courses & ES for Graphic Design &
    Chinese Character Recognition

----------------------------------------------------------------------

Date: 25 Apr 88 13:52 +0100
From: fred moerman <f_moerman%avh.unit.uninett@TOR.nta.no>
Subject: Design Methods to Develop Rule-Based Expert Systems


I am trying to develop a few simple examples to illustrate the use of
expert systems to our students.
The point is that, in contrast to programming and system design,
I lack general-purpose tools and methods to design my rule base.

I'm not necessarily looking for theory; I want a useful, practical
method that both I and our students can use at an early stage in
the development of a rule-based expert system.

Is there someone out there who can help me?
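
As a concrete starting point for such classroom examples, one common style
is to keep the rules as plain data, separate from a small inference loop.
A minimal forward-chaining sketch (Python purely for illustration; the toy
rules are invented):

    # Each rule: (list of condition facts, fact to conclude).  Toy domain.
    RULES = [
        (["has_feathers", "lays_eggs"], "is_bird"),
        (["is_bird", "cannot_fly"], "is_penguin_candidate"),
    ]

    def forward_chain(facts, rules):
        """Fire any rule whose conditions all hold until nothing new is added."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conclusion not in facts and all(c in facts for c in conditions):
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain(["has_feathers", "lays_eggs", "cannot_fly"], RULES))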


Thanks,
Fred Moerman

f_moerman%avh.unit.uninett@tor.nta.no
------------------- :-)

==================

------------------------------

Date: 25 Apr 88 10:33:00 CST
From: "Perry Alexander" <alexander@space-tech.arpa>
Reply-to: "Perry Alexander" <alexander@space-tech.arpa>
Subject: Looking for simulation papers...

Can anyone give me any pointers towards papers in the area of using expert
systems in simulation?  I am specifically looking for ideas concerning using an
expert system to manage data and choose simulation techniques for a given
system, although any papers concerning the use of expert systems (for that
matter, AI in general) in simulation would be helpful.

Thanks,
Perry

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Perry Alexander               | ARPANET : alexander@space-tech.arpa
The University of Kansas      | CSNET   : alexander%coeds@kuhub.cc.ukans.edu
Center for Research/TISL      |
2291 Irving Hill Dr.          |                 H A I K U
Lawrence, KS. 66045           |
913-864-7753                  |
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

------------------------------

Date: Tue, 26 Apr 88 09:07:52 EST
From: Kevin Barnhill <KEVIN%UCF1VM.BITNET@CUNYVM.CUNY.EDU>
Subject: Re: In search of Bacon


I am searching for a source of the Bacon programs (.1 - .7) by Langley and
Simon.  We would like to run Bacon against large amounts of imperical data
that we now have.  Thank you in advance.


Kevin Barnhill
University of Central Florida

Barnhill%UCF1VM.bitnet@wiscvm.arpa

------------------------------

Date: Tue, 26 Apr 1988 12:28-EDT
From: Oren.Etzioni@B.GP.CS.CMU.EDU
Subject: Decision Theory in AI.

I would be grateful for references on the use of ideas from decision theory
in AI theories/programs.

oren

(ETZI@CS.CMU.EDU)

------------------------------

Date: Tue, 26 Apr 88 15:03:53 edt
From: nancy@grasp.cis.upenn.edu (Nancy Orlando)
Subject: Designing for diagnosis

  I'm aware of some very good work done lately in diagnosis of
physical systems. I wonder tho', has anything been done in
*designing* systems to be diagnosed/monitored by intelligent
systems?  Or is this a problem that's been solved by engineers long ago?

Nancy Sliwa
nesliwa%nasamail@ames.arpa

------------------------------

Date: 25 Apr 88 20:44:14 GMT
From: mendozag@ee.ecn.purdue.edu  (Grado)
Subject: Lisp Machines Mailing List

   Are there any mailing lists concerned with the Symbolics
  Lisp Machines?

   I remember I read about one in the Arpanet some time ago.

   Thanks in advance,

   Victor M. Mendoza-Grado
   School of EE, Box 62
   Purdue University
   W. Lafayette, IN 47907
   mendozag@ecn.purdue.edu


  [The Symbolics Users Group is SLUG@R20.UTEXAS.EDU.  -- KIL]

------------------------------

Date: 25 Apr 88 11:25:04 GMT
From: mcvax!tnosel!hin@uunet.uu.net  (Hin Oey)
Subject: construction industry

L.S.

I am wondering if there is enough interest in a mailing list
concerning applications and research programs directed
towards AI and Expert Systems for the Building/Construction
industry.

And, combined with the above or more generally, for applications
concerning law, codes of practice and regulations.

If you are interested, please email.

Regards,
Hin Oey (hin@tnosel) Netherlands

------------------------------

Date: Wed, 27 Apr 88 10:26-0800
From: LRMC5128%BCIT.BITNET@CORNELLC.CCS.CORNELL.EDU
Subject: Causal Models and Diagnosis

 From: Craig Larman
 Subject: Causal Models and Diagnosis

 This is my first note - hope it gets through OK.

 The IntelliCorp (KEE) folks talk enthusiastically about "model based
 reasoning". They describe this as building a "causal model" of a system (e.g.
 a gas engine, a liver) and then extending it for various needs (simulation,
 diagnosis, training).

 As a programmer, I am interested in programming/technical answers or
 programming references to these questions:

 1. I think of a software "causal model" as defining relationships between
 components and their dependent inputs/outputs. Is this it? What programming
 hints or references can folks suggest that will help me learn to build such
 models? SMALLTALK, LISP, or PROLOG are the preferred languages.

 2. Assuming I've built a causal model, HOW do I build a DIAGNOSIS system from it?
 I can see that simulation follows naturally, but how would diagnosis work?
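
One common textbook reading of the diagnosis question (not necessarily
IntelliCorp's): represent each component by its expected input/output
behaviour, simulate the model forward, and suspect any component whose
prediction disagrees with what is observed.  A minimal sketch (Python
rather than the languages mentioned above; the two-adder device is
invented):

    # Toy causal model: each component maps named inputs to one named output.
    class Component:
        def __init__(self, name, inputs, output, behaviour):
            self.name, self.inputs, self.output, self.behaviour = \
                name, inputs, output, behaviour

        def predict(self, values):
            return self.behaviour(*[values[i] for i in self.inputs])

    # An invented two-stage device: m1 = a + b, out = m1 + c
    MODEL = [
        Component("adder1", ["a", "b"], "m1", lambda a, b: a + b),
        Component("adder2", ["m1", "c"], "out", lambda m1, c: m1 + c),
    ]

    def simulate(model, values):
        """Forward-propagate known input values through the causal model."""
        values = dict(values)
        for comp in model:          # assumes components are listed in causal order
            values[comp.output] = comp.predict(values)
        return values

    def diagnose(model, inputs, observations):
        """Suspect any component whose prediction conflicts with an observation."""
        predicted = simulate(model, inputs)
        return [c.name for c in model
                if c.output in observations
                and predicted[c.output] != observations[c.output]]

    # Observed "out" of 7 disagrees with the prediction 6, so adder2 is reported.
    # (A fuller diagnoser would also propagate suspicion upstream to adder1.)
    print(diagnose(MODEL, {"a": 1, "b": 2, "c": 3}, {"out": 7}))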

 e-mail address: LRMC5128@BCIT on the NetNorth network (the Canadian arm
 of BitNet).

 Virtually,
 Craig Larman, 432-8629
 MIS/Development Centre

------------------------------

Date: Wed, 27 Apr 88 19:39:38 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Need info on new CMU sidewalk rover


      I hear that CMU has a new autonomous vehicle, a sidewalk rover
like the Terregator, but improved.  Who is doing the work, and are there
any papers yet?

                                        John Nagle

------------------------------

Date: Thu, 28 Apr 88 09:06:01 EDT
From: CMSBE1%EOVUOV11.BITNET@CUNYVM.CUNY.EDU
Subject: S.E. VS. K.E.: bad focussed title, I think...

Date: 28 April 1988, 09:02:38 EDT
From: Juan Francisco Suarez Vicente  (KIKO)          CMSBE1   at EOVUOV11
To:   AILIST at SRI

Hello !!!
I've received a few answers about "Software Eng. VS. Knowledge Eng.", and
I agree with you.  I had submitted the problem to AILIST because a few
months ago a conference with this title was presented in Spain, and my
opinion was (and still is) against this viewpoint.
Spain is an absolute beginner country in AI and K.E. subjects and, as
I suppose occurred in other countries, some people tried to demonstrate
that AI techniques and K.E. techniques are "pure invention"; they
preferred to attack AI rather than criticize their own methods.
My personal opinion: I agree with K.E. methodologies, and I'd like to
give "good performance" reasons to their detractors, to demonstrate to
them that Knowledge Engineering is not a "pure invention"... it's real
and very useful !!!!
I also think that K.E. and Soft. Eng. can survive in a perfect
symbiosis.  Because of this, "VERSUS" isn't an appropriate word to relate
them.  Some connectives are more suitable: WITH, AND, etc...
Do you think the same?
                   Kiko   (CMSBE1@EOVUOV11)   SPAIN
P.S.: Ahh...thank you too for answers about O-O techniques...

------------------------------

Date: Wed, 27 Apr 88 19:09:25 HOE
From: Esteban Vegas Lozano <ZCCBEVL%EB0UB011.BITNET@CUNYVM.CUNY.EDU>
Subject: Information of courses


     I would be grateful to receive information (course program,
place, admission date, duration, financial aid, ......) about courses
in general, or summer courses in particular, on the following subjects:

          - Artificial Intelligence & Biology.
          - Cybernetics & Biology.
          - Computers & Biology.

    Any message or mail about this note should be sent to

         <ZCCBEVL@EB0UB011.BITNET> for electronic mail.

    My address is:

                    Esteban Vegas Lozano
                    Centre d'Informatica de l'Universitat de Barcelona
                    Diagonal 645.
                    08028-Barcelona, Spain.

    Thanks for the help.

                                 Esteban

------------------------------

Date: 29 Apr 88 02:52:59 GMT
From: sunybcs!dmark@boulder.colorado.edu  (David Mark)
Subject: Re: expert systems for graphic design?

In article <1027@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
        (Gilbert\ Cockton) writes:
    (in response to my earlier posting)

        (several useful references deleted)

>These measures covered are useful, but very crude.  Graphic designers
>are not ones for writing things down, nor can I see them rushing to
>have their knowledge elicited by production rule hackers. It is far
>more efficient to find a NUMBER of graphic designers locally and ask
>them to evaluate your display layouts.  Then use your judgement to
>decide on which advice to take.

It is my opinion that the content and components of maps in a geographic
information system (GIS) application vary so much that we cannot simply have
"our layouts" evaluated by expert designers.  Does anyone know of work
which confirms or contradicts this, or of a graphics domain as complicated
and variable as map production that has been 'solved' in a design sense?

David M. Mark, Professor of Geography
dmark@joey.cs.buffalo.edu

------------------------------

Date: 27 Apr 88 21:45:15 GMT
From: hubcap!cchang@gatech.edu  (Chin Hui Chang)
Subject: Re: how to recognize a chinese character

In article <527@vmucnam.UUCP>, daniel@vmucnam.UUCP (Daniel Lippmann) writes:
> Is there anybody who knows of a computer method to analyze
> a Chinese character to find its place in a dictionary?

Please post answers to this.  Better still, would Mr. Lippmann
compile all the responses he has received via e-mail and post
the collection?

                                --Zhang Jinhui

------------------------------

Date: 25 Apr 88 18:14:01 GMT
From: mcvax!inria!vmucnam!daniel@uunet.uu.net  (Daniel Lippmann)
Subject: how to recognize a chinese character

Is there anybody who knows of a computer method to analyze
a Chinese character to find its place in a dictionary?
There are at least 2 problems:
- how to input the character? by graphical means, with a
  question-and-answer system to describe it, .....
- once input, how to analyze it to find the page or set of pages
  where its translation can be found? counting the number of strokes,
  identifying a key (radical), using the four-corners method, ...
The problem to solve is: given an unknown character, how to determine
its rank in a dictionary from its graphical appearance.
daniel (post or mail to ...!seismo!mcvax!inria!vmucnam!daniel)

------------------------------

End of AIList Digest
********************

∂30-Apr-88  0051	LAWS@KL.SRI.COM 	AIList Digest   V6 #85 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 30 Apr 88  00:51:39 PDT
Date: Fri 29 Apr 1988 20:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V6 #85
To: AIList@SRI.COM


AIList Digest           Saturday, 30 Apr 1988      Volume 6 : Issue 85

Today's Topics:
  Administrivia - AIList Lives!,
  AI Tools - NETtalk Database,
  Education - AI Texts & PARRY,
  Applications - Graphic Design & Railroad

----------------------------------------------------------------------

Date: Fri 29 Apr 88 00:52:12-PDT
From: Ken Laws <LAWS@KL.SRI.COM>
Reply-to: AIList-Request@SRI.COM
Subject: AIList Lives!

Several people have now volunteered to take over all or part
of the AIList administrative duties.  Some of the volunteers
are on Bitnet and CSNet, some are at commercial companies on
the Arpanet.  I thank you all (Todd Ogaswara, Scott Corzine,
Fareed Asad-Harooni, Tracy Wells, David Smith, Kevin Whiting,
David Mittman) for the offers, but will leave any delegation
of duties to the new moderator, Nick Papadakis at AI.AI.MIT.EDU.
Nick has offered to host the list after his system wizard gets
over the chicken pox.  I'm sure Nick will do a great job, and
MIT offers an excellent environment for continuing the list.

                                        -- Ken

------------------------------

Date: Wed, 27 Apr 88 09:30:36 edt
From: terry@cs.jhu.edu (Terry Sejnowski <terry@cs.jhu.edu>)
Subject: NETtalk Database

There have been many requests for the NETtalk database.  A training
dictionary of 20,000 words marked with phonemes and stresses is
now available from:

Kathy Yantis
Cognitive Science Center
Johns Hopkins University
34th and Charles Streets
Baltimore, MD 21218

Please specify the media you want:

1/2" tape, 9 track
        1600, 3200 or 6250 bpi
        UNIX or ANSI labelled (VMS compatible)

1/4" Sun cartridge (Quick-11, TAR)

5 1/4" 1.2 MB floppy (MS-DOS)

Enclose a check or money order for $50, made out to the Johns Hopkins
Cognitive Science Center, to cover costs.

Terry Sejnowski

------------------------------

Date: 28 Apr 88 02:27:05 GMT
From: mcvax!unido!tub!tmpmbx!netmbx!morus@uunet.uu.net  (Thomas M.)
Subject: Re: AI texts

In article <4894@sdcsvax.UCSD.EDU> demers@beowulf.UUCP (David E Demers) writes:
>In article <1516@gumby.cs.wisc.edu> g-zeiden@gumby.cs.wisc.edu
(Matthew Zeidenberg) writes:
>>I'm teaching intro AI here at the Univ. of Wisconsin this coming
>>summer, and I'm trying to choose a text. I'm considering Rich,
>>Winston, Nilsson and Tanimoto's books. Any opinions?
>>
From a didactic point of view you might consider the TIME-LIFE book
"Artificial Intelligence", which visualizes some of the main topics such as
rule-based reasoning, learning, knowledge representation, and pattern
recognition.  It is a good source for overhead transparencies, and it has
some anecdotes about the "authorities" in the field too.
For the same reason - good visualization - have a look at HARMON/KING,
"Expert Systems" (John Wiley & Sons 1985).
For a critical look at the ideas of AI there is DREYFUS/DREYFUS,
"Mind over Machine" (The Free Press 1986).

-- Thomas Muhr

--
! Thomas Muhr    Knowledge-Based Systems Dept. Technical University of Berlin !
! BITNET/EARN:   muhrth@db0tui11.bitnet                                       !
! UUCP:          morus@netmbx.UUCP (Please don't use from outside Germany)    !
! BTX:           030874162  Tel.: (Germany 0049) (Berlin 030) 87 41 62        !

------------------------------

Date: 11 Apr 88 13:05:41 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: conversations?

In article <37291OK2@PSUVMB> OK2@PSUVMB.BITNET writes:
>     Both PARRY and ELIZA take advantage of the tactic of predefining the
>context of the conversation (a conversation with a paranoiac, or conversation
>with a therapist) to imply real meaning to sentences the program generates from
>key words picked from the human's sentences.

If my memory is correct, PARRY was not just a program: it required a
human operator to mediate the input from the psychiatrists
and others to whom PARRY was demonstrated, presumably doing a simple
translation from English to S-expressions :-)

------------------------------

Date: Fri Apr 29 10:24:50 EDT 1988
From: sas@BBN.COM
Subject: expert systems for graphic design?

The closest I have seen is an expert system to design business cards.
Get in touch with Ron MacNeil at the MIT Arts and Media Technology
Center for more info.

                                        Seth

------------------------------

Date: 29 Apr 88 17:16:45 GMT
From: pattis@june.cs.washington.edu  (Richard Pattis)
Subject: Re: expert systems for graphic design?

Jock Mackinlay (sp?) wrote a thesis under Mike Genesereth at Stanford,
about a year or two ago, that described a program that decided how
to display such information graphically.

------------------------------

Date: 26 Apr 88 03:15:43 GMT
From: ed298-ak@violet.Berkeley.EDU  (Edouard Lagache)
Subject: Re: Expert Systems in the Railroad Industry (is AI needed?).

In article <73@edai.ed.ac.uk> ceb@edai (Colin Bridgewater) writes:
>  Just to get my two penn'orth in, whatever happened to dynamic programming
>for scheduling, cargo-space optimisation and inventory control etc ?  This
>well-worn technique is quite adequate for the majority of purposes envisaged
>by EL. I mention this to raise a wider issue which was possibly not in the
>mind of the original sender, namely that of the desire to throw ever more
>complex solution procedures at the simplest of problems....
>
>  Why should we want to implement an expert system, when adequate techniques
>exist already ? That is, is the application of expert system technology
>appropriate to the magnitude and complexity of the problem ? Should we be
>advocating the application of such 'high-tech' solutions to all and sundry ?
>I have no doubt that such systems could be made to work, don't get me wrong
>on that, I just question whether the level of technology required in order to
>do so is justified. Surely it is better to apply the simplest solutions when-
>ever possible.
>

        Dr. Bridgewater's comments are not completely off the mark.
        One reason I posted the question was to see whether expert-system
        methodologies might be useful in improving the performance
        of conventional programming techniques by providing useful
        heuristics for tasks such as switching.  While the areas
        mentioned can clearly be solved by brute-force methods, it is
        unlikely that human experts employ only those sorts of
        strategies (since human cognition doesn't support large active
        data structures); thus, there may be some interesting
        enhancements possible to conventional programming techniques
        by learning how human experts perform the tasks involved.

        At the same time, it is very much in keeping with Hubert
        Dreyfus's comments that railroad tasks of just this kind are
        very promising areas for expert systems that will outperform
        human experts.

        Any comments?

                                                Edouard Lagache
                                                The PROLOG Forum
                                                lagache@violet.berkeley.edu

------------------------------

Date: Fri Apr 29 10:30:55 EDT 1988
From: sas@BBN.COM
Subject: Re: Expert Systems in the Railroad Industry.

I know Francis Lynch worked on this system at General Electric in (I
think) upstate New York.  At last count he was working for DEC out in
Marlborough, Mass.  I am not 100% sure, but he might have presented a
paper at the 1983 AAAI.

                                                Seth

------------------------------

Date: 27 Apr 88 18:48:30 GMT
From: mnetor!utzoo!dciem!nrcaer!cognos!roberts@uunet.uu.net  (Robert
      Stanley)
Subject: Re: Expert Systems in the Railroad Industry.

In article <4643@cup.portal.com> tony_mak_makonnen@cup.portal.com writes:
>>      What sort of expert systems have been developed for the railroad
>>      industry?
>
>Strangely enough, the one that I know of is a General Electric locomotive
>maintenance expert system.  It was mentioned in a computer magazine and
>one of the railfanning mags. last year.
>yes and it was finally coded in Forth.

I missed the original post because our feed has been down for a while, so
can't e-mail a reply.  The system referred to was D.E.L.T.A. (Diesel-
Electric Locomotive Troubleshooting Assistant), and not only was it
re-coded in FORTH, it was delivered on a pretty basic IBM-PC.  An
interesting feature was that the system as installed in G.E.'s workshops
included a full set of locomotive schematics stored on optical disk, and
the user was pointed to the correct area by D.E.L.T.A.  The system was not
developed in FORTH, but once the knowledge base had been completed (they
don't introduce new designs of diesel-electric locos very often) it became
feasible to build a "conventional" re-implementation.  FORTH was chosen for
various technical reasons, including applicability of the stack approach,
and excellent run-time performance.  The development was done in LISP on
something like a PDP-11/23.  The original reference was:

  Bonissone and Johnson "Expert System for Diesel Electric Locomotive
  Repair" - IJCAI-83

I know that CN (Canadian National) have a small but active expert system
group, who have produced several small systems.  One was a diagnostic
system for walkie-talkies (they use tens of thousands), and another was
some kind of locomotive fuel usage monitor/advisor.  I haven't been in
touch with the group recently, so am not up on current work.

CAIP Co (Canadian Artificial Intelligence Products Corp) has a joint
agreement with CP Rail (Canadian Pacific) for the marketing of a lube oil
expert system.

The Transportation Development Centre (TDC) of Transport Canada in Montreal
is active in the expert system field, but I am not sure whether they have
any projects specifically for the rail sector.

The Japanese have been extremely active in this area, but I'd need to look
out a bibliography from a couple of years back for direct references.

A number of interesting systems have been developed in France, which has an
advanced railway system.  The same is true in Britain, but I would have to
do some paper file searching for details.  The only system which springs to
mind is British Telecomm's amazing British-Rail timetable advisor, which
used speech recognition and voice synthesis for unrestricted access via
telephone.

If the original poster wants more detailed information, please would (s)he
contact me via e-mail.  My apologies for cluttering the net.

Robert_S
--
Robert Stanley - Cognos Incorporated: P.O. Box 9707, 3755 Riverside Drive
Compuserve: 76174,3024                Ottawa, Ontario  K1G 3Z4, CANADA
uucp: decvax!utzoo!dciem!nrcaer!cognos!roberts  Voice: (613)738-1440(Research)
arpa/internet: roberts%cognos.uucp@uunet.uu.net   FAX: (613)738-0002

------------------------------

End of AIList Digest
********************

∂30-Apr-88  0239	LAWS@KL.SRI.COM 	AIList V6 #86 - Philosophy  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 30 Apr 88  02:39:40 PDT
Date: Fri 29 Apr 1988 21:13-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #86 - Philosophy
To: AIList@SRI.COM


AIList Digest           Saturday, 30 Apr 1988      Volume 6 : Issue 86

Today's Topics:
  Philosophy - Sociology & Free Will and Self-Awareness

----------------------------------------------------------------------

Date: 26 Apr 88 18:05:43 GMT
From: uflorida!novavax!maddoxt@gatech.edu  (Thomas Maddox)
Subject: Re: The future of AI

In article <978@crete.cs.glasgow.ac.uk> gilbert@crete.UUCP (Gilbert
Cockton) writes:
>
>Sociologists study the present, not the future.
>I presume the "Megatrends" books
>cited is Toffler style futurology, and this sort of railway journey light
>reading has no connection with rigorous sociology/contemporary anthropology.
>
>The only convincing statements about the future which competent sociologists
>generally make are related to the likely effects of social policy.  Such
>statements are firmly rooted in a defensible analysis of the present.
>
>This ignorance of the proper practices of historians, anthropologists,
>sociologists etc. reinforces my belief that as long as AI research is
>conducted in philistine technical vacuums, the whole research area
>will just chase one dead end after another.

        "Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
ha ha ha ha, &c.  While much work in AI from its inception has
consisted of handwaving and wishful thinking, the field has produced
and continues to produce ideas that are useful.  And some of the most
interesting investigations of topics once dominated by the humanities,
such as theory of mind, are taking place in AI labs.  By comparison,
sociologists produce a great deal of nonsense, and indeed the social
"sciences" in toto are afflicted by conceptual confusion at every
level.  Ideologues, special interest groups, purveyors of outworn
dogma (Marxists, Freudians, et alia) continue to plague the social
sciences in a way that would be almost unimaginable in the sciences,
even in a field as slippery, ill-defined, and protean as AI.
        So talk about "philistine technical vacuums" if you wish, but
remember that by and large people know which emperor has no clothes.
Also, if you want to say "one dead end after another," you might
adduce actual dead ends pursued by AI research and contrast them
with non-dead ends so that the innocent who stumbles across your
remark won't be utterly misled by your unsupported assertions.

------------------------------

Date: 21 Apr 88 03:58:53 GMT
From: SPEECH2.CS.CMU.EDU!yamauchi@pt.cs.cmu.edu  (Brian Yamauchi)
Subject: Free Will & Self-Awareness

In article <3200014@uiucdcsm>, channic@uiucdcsm.cs.uiuc.edu writes:
>
> I can't justify the proposition that scientific endeavors grouped
> under the name "AI" SHOULD NOT IGNORE issues of free will, mind-brain,
> other minds, etc.  If these issues are ignored, however, I would
> strongly oppose the use of "intelligence" as being descriptive
> of the work.  Is it fair to claim work in that direction when
> fundamental issues regarding such a goal are unresolved (if not
> unresolvable)?  If this is the name of the field, shouldn't the
> field at least be able to define what it is working towards?
> I personally cannot talk about intelligence without concepts such
> as mind, thoughts, free will, consciousness, etc.  If we, as AI
> researchers make no progress whatsoever in clarifying these issues,
> then we should at least be honest with ourselves and society, and find a
> new title for our efforts.  Actually the slight modification,
> "Not Really Intelligence" would be more than suitable.
>
>
> Tom Channic

I agree that AI researchers should not ignore the questions of free will,
consciousness, etc., but I think it is rather unreasonable to criticise AI
people for not coming up with definitive answers (in a few decades) to
questions that have stymied philosophers for millennia.

How about the following as a working definition of free will?  The
interaction of an individual's values (as developed over the long term) and
his/her/its immediate mental state (emotions, senses, etc.) to produce some
sort of decision.

I don't see any reason why this could not be incorporated into an AI
program.   My personal preference would be for a connectionist
implementation because I believe this would be more likely to produce
human-like behavior (it would be easy to make it unpredictable, just
introduce a small amount of random noise into the connections).

Another related issue is self-awareness.  I would be interested in hearing
about any research into having AI programs represent information about
themselves and their "self-interest".  Some special cases of this might
include game-playing programs and autonomous robots / vehicles.

By the way, I would highly recommend the book "Vehicles: Experiments
in Synthetic Psychology" by Valentino Braitenberg to anyone who doesn't
believe that machines could ever behave like living organisms.

______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 26 Apr 88 11:41:52 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Free Will & Self-Awareness

In article <1484@pt.cs.cmu.edu> yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi)
writes:
>I agree that AI researchers should not ignore the questions of free will,
>consciousness, etc, but I think it is rather unreasonable to criticise AI
>people for not coming up with definitive answers (in a few decades) to
>questions that have stymied philosophers for millennia.

It is not the lack of answers that is criticised - it is the ignorance
of candidate answers and their problems which leads to the charge of
self-perpetuating incompetence.  There are philosophers who would
provide arguments in defence of AI, so the 'free-will' issue is not
one where the materialists, logicians and mechanical/biological
determinists will find themselves isolated without an intellectual tradition.
>
>I don't see any reason why this could not be incorporated into an AI program
So what? This is standard silly AI, and implies that what is true has
anything to do with the quality of your imagination.  If people make
personal statements like this, unfortunately the rebuttals can only be
personal too, however much the rebutter would like to avoid this position.
>
>By the way, I would highly recommend the book "Vehicles: Experiments
>in Synthetic Psychology" by Valentino Braitenberg to anyone who doesn't
>believe that machines could ever behave like living organisms.

There are few idealists or romantics who believe that NO part of an organism
can be modelled as a mechanical process.  Such a position would require that a
heart-lung machine be at one with the patient's geist, soul or psyche!  The
logical fallacy beloved in AI is that if SOME aspects of an organism can be
modelled mechanically, then ALL can.  This extension is utterly flawed.  It may
be the case, but the case must be proven, and there are substantial arguments
as to why this cannot be the case.

For AI workers (not AI developers/exploiters who are just raiding the
programming abstractions), the main problem they should recognise is
that a rule-based or other mechanical account of cognition and decision
making is at odds with the doctrine of free will which underpins most Western
morality.  It is in no way virtuous to ignore such a social force in
the name of Science.  Scientists who seek moral, ethical, epistemological
or methodological vacuums are only marginalising themselves into
positions where social forces will rightly constrain their work.

------------------------------

Date: 28 Apr 88 15:42:18 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Free Will & Self-Awareness


      Could the philosophical discussion be moved to "talk.philosophy"?
Ken Laws is retiring as the editor of the Internet AILIST, and with him
gone and no replacement on the horizon, the Internet AILIST (which shows
on USENET as "comp.ai.digest") is to be merged with this one, unmoderated.
If the combined list is to keep its present readership, which includes some
of the major names in AI (both Minsky and McCarthy read AILIST), the content
of this one must be improved a bit.

                                        John Nagle

------------------------------

Date: 29 Apr 88 13:04:43 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

One of the problems with the English Language is that most of the
words are already taken.

Rather than argue over whether AI should or should not include
investigations into consciousness, awareness, free will, etc.,
why not just make up a new label for this activity?

I would like to learn how to imbue silicon with consciousness,
awareness, free will, and a value system.  Maybe this is not
considered a legitimate branch of AI, and maybe it is a bit
futuristic, but it does need a name that people can live with.

So what can we call it?  Artificial Consciousness?  Artificial
Awareness?  Artificial Value Systems?  Artificial Agency?

Suppose I were able to inculcate a Value System into silicon.
And in the event of a tie among competing choices, I use a
random mechanism to force a decision.  Would the behavior of
my system be very much different from a sentient being with
free will?
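
A minimal sketch of that idea in Python (hypothetical code, not Kort's
system): score the competing choices against a value table and let a random
mechanism force the decision when the values tie exactly.

    import random

    def choose(options, values):
        """Pick the highest-valued option; break exact ties at random."""
        best = max(values[o] for o in options)
        tied = [o for o in options if values[o] == best]
        return random.choice(tied)   # the "random mechanism" forcing a decision

    print(choose(["tell the truth", "stay silent"],
                 {"tell the truth": 1.0, "stay silent": 1.0}))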

--Barry Kort

------------------------------

Date: 29 Apr 88 01:26:02 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Free Will & Self-Awareness

In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> For AI workers (not AI developers/exploiters who are just raiding the
> programming abstractions), the main problem they should recognise is
> that a rule-based or other mechanical account of cognition and decision
> making is at odds with the doctrine of free will which underpins most Western
> morality.

What about compatibilism?  There are a lot of arguments that free will is
compatible with strong determinism.  (The ones I've seen are riddled with
logical errors, but most philosophical arguments I've seen are.)
When I see how a decision I have made is consistent with my personality,
so that someone else could have predicted what I'd do, I don't _feel_
that this means my choice wasn't free.

------------------------------

End of AIList Digest
********************

∂02-May-88  0051	LAWS@KL.SRI.COM 	AIList V6 #87 - Queries, Causal Modeling, Texts 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 2 May 88  00:51:44 PDT
Date: Sun  1 May 1988 20:51-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #87 - Queries, Causal Modeling, Texts
To: AIList@SRI.COM


AIList Digest             Monday, 2 May 1988       Volume 6 : Issue 87

Today's Topics:
  Queries - Scenario Machines and Training Wheels &
    The Researchers' Bible from Edinburgh Dept. of AI &
    Free Will for Machines & NEXPERT-OBJECT,
  Binding - Lisp Machines Mailing List,
  Literature - Causal Modeling & Expert System Introduction

----------------------------------------------------------------------

Date: 30 Apr 88 00:54:01 GMT
From: lagache@violet.Berkeley.EDU  (Edouard Lagache)
Subject: HELP!, need references on Scenario machines and Training
         Wheels.


          Can anyone give me a pointer to papers on application
          software help systems known as "Scenario Machines" and
          "Training wheels"  I am looking for references on work that I
          believe was done at the IBM Thomas J. Watson research center
          by John M. Carroll, probably in the 1983-86 time frame.

          A Training Wheels system is a software package that has been
          stripped of its more dangerous commands and embellished with
          additional help (sort of like 'edit' versus 'ex' in Berkeley
          UNIX).

          A Scenario Machine is a curriculum on using the software that
          is latched on top of the software, so that the user can see a
          scenario of how the software can be used.

          Any help in locating such papers would be *greatly*
          appreciated!

          Thanks in advance!


                                        Edouard Lagache
                                        School of Education
                                        U.C. Berkeley
                                        lagache@violet.berkeley.edu



          P.S. In case it wasn't obvious, please reply to me directly
               unless the net powers that be block your reply.

------------------------------

Date: 29 Apr 88 19:34:49 GMT
From: paul.rutgers.edu!clash.rutgers.edu!masticol@rutgers.edu  (Steve
      Masticola)
Subject: "The Researchers' Bible" - Edinburgh Dept. of AI


Hi,

        One of my professors distributed a very useful document titled
"The Researchers' Bible"*, which originated in 1982 in the Edinburgh
Department of AI. I'd like to distribute it here, but the copy we have
is in terrible shape (ninth-generation Xerox, dot-matrix printed with
wires missing from the printer, etc.) Does anyone have it on-line? If
so, could I get a copy?

Thanks,
- Steve Masticola
  masticol@paul.rutgers.edu

* A. Bundy, B. du Boulay, J. Howe, G. Plotkin, "The Researchers'
Bible", DAI Occasional Paper No. 10, University of Edinburgh, revised
September 1982.

------------------------------

Date: 30 Apr 88  1302 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: missed message

I sent you the following on the 18th.  Since it wasn't included in your
recent digest #86, I assume it got lost due to the incorrect
address having been generated by some reply macro.  This seems like
a good occasion to solicit reactions to our 1969 notions.  If you like,
I'll send a longer message summarizing the ideas, but I probably won't
have time to do it before I go on a two week trip starting May 4.

  [This was received, and was sent out in V6 N76 on April 21.
  I have no objection to running it again, but perhaps a few
  details about your position might draw more response.  Most
  of the list members weren't following philosophical AI
  discussions in 1969.  -- KIL]

18-Apr-88  1745 JMC     re: AIList V6 #72 - Queries
To:   AIList@KL.SRI.COM
[In reply to message sent Sun 17 Apr 1988 23:35-PDT.]

McCarthy, John and P.J. Hayes (1969):  ``Some Philosophical Problems from
the Standpoint of Artificial Intelligence'', in D. Michie (ed), Machine
Intelligence 4, American Elsevier, New York, NY discusses the problem of
free will for machines.  I never got any reaction to that discussion,
pro or con, in the 19 years since it was published and would be grateful
for some.

------------------------------

Date: 28 Apr 88 13:54:14 GMT
From: mcvax!tnosel!hin@uunet.uu.net  (Hin Oey)
Subject: info request NEXPERT-OBJECT

L.S.

From my colleagues at the TNO Institute for Applied Computer
Science I was asked to put the following questions
concerning NEXPERT-OBJECT on the net:

- What is your experience with the implementations for the 386,
  Mac and/or Apollo?
- In general, what are the advantages and disadvantages?
- Performance?
- Compatibility between the above-mentioned implementations?
- The possibility of running other programs from Nexpert, e.g.
  activating a graphics program that can visualize some
  of the input data.

Regards,

Hin Oey (hin@tnosel) Netherlands
(IBBC-TNO  -  P.O.Box 49  -  2600 AA Delft)

------------------------------

Date: 1 May 88 00:38:43 GMT
From: barmar@think.com  (Barry Margolin)
Subject: Re: Lisp Machines mailing list sought


The only mailing list I know of that is specifically related to
Symbolics is SLUG@Warbucks.AI.SRI.COM.  This is the mailing list for
the Symbolics Lisp Users' Group.  If you're a Symbolics customer you
should join the users' group, and you should get on the mailing list.

Barry Margolin
Thinking Machines Corp.

barmar@think.com
uunet!think!barmar

------------------------------

Date: 1 May 88 00:49:35 GMT
From: uflorida!fish.cis.ufl.edu!fishwick@gatech.edu  (Paul Fishwick)
Subject: Causal Modeling


With regards to causal models, you may wish to check the following
references:

 (1) H. Blalock's 2 books on causal modeling - the first one is
     called "Causal Models in the Social Sciences" (or something
     close to that).

 (2) Glymour et al. at Carnegie Mellon - I forget the book name.
     (Look up Glymour in your card catalog). A PC Program called
     TETRAD comes with the book.

+------------------------------------------------------------------------+
| Paul A. Fishwick.......... INTERNET: fishwick@uflorida.cis.ufl.edu     |
| Dept. of Computer Science. UUCP: {gatech|ihnp4}!codas!uflorida!fishwick|
| Univ. of Florida.......... PHONE: (904)-335-8036                       |
| Bldg. CSE, Room 301....... FACS is available                           |
| Gainesville, FL 32611.....                                             |
+------------------------------------------------------------------------+




--
I am doing fine

------------------------------

Date: 25 Apr 88 22:38:32 GMT
From: pur-phy!mrstve!mdbs!greg@ee.ecn.purdue.edu  (Greg Feldman)
Subject: Re: Expert system introductory literature

In article <3531@csli.STANFORD.EDU> rustcat@csli.UUCP (Vallury Prabhakar)
writes:
>Hello,
>
>       Could any of you suggest some books/literature that provide a good
>introduction to what expert systems (and AI if possible) are all about?
>I have had absolutely no background whatsoever in these areas, so I'm really
>looking for the basic, trivial stuff.
>
I just e-mailed someone a reference to this text, so I gather there
is enough interest for a posting.

"Expert Systems Using Guru" is a text that seems up your alley.  It
begins with a discussion of AI, then Expert systems.  Then an example
is taken (determing sales quotas) and discusses how expert systems
may be applied to serve this application.

It uses Guru, an expert systems environment, as the expert system
for development.

"Expert Systems Using Guru", by Clyde Holsapple and Andrew Whinston,
published by Dow Jones Irwin, 1986.  Price ~$35.00.

Available from bookstores or by calling (800) 323-3629.

A good text to see how AI & Expert Systems can be applied in the
"real" world.
>                                               -- Vallury Prabhakar
>                                               -- rustcat@cnc-sun.stanford.edu


#include ".signature"
Greg Feldman--MDBS  (317) 448-6187

UUCP:   {rutgers,ihnp4,decvax,ucbvax}!pur-ee!mdbs!support

Note:  "These are my opinions, so if anyone asks, I didn't do it!"

------------------------------

End of AIList Digest
********************

∂02-May-88  0334	LAWS@KL.SRI.COM 	AIList V6 #88 - Philosophy  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 2 May 88  03:34:00 PDT
Date: Sun  1 May 1988 20:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #88 - Philosophy
To: AIList@SRI.COM


AIList Digest             Monday, 2 May 1988       Volume 6 : Issue 88

Today's Topics:
  Opinion - AI Goals & Free Will & Sociology

----------------------------------------------------------------------

Date: 26 Apr 88 15:06:48 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen Smoliar)
Subject: Re: Expert Systems in the Railroad Industry.

In article <73@edai.ed.ac.uk> ceb@edai (Colin Bridgewater) writes:
>  Just to get my two penn'orth in, whatever happened to dynamic programming
>for scheduling, cargo-space optimisation and inventory control etc ?  This
>well-worn technique is quite adequate for the majority of purposes envisaged
>by EL. I mention this to raise a wider issue which was possibly not in the
>mind of the original sender, namely that of the desire to throw ever more
>complex solution procedures at the simplest of problems....
>
>  Why should we want to implement an expert system, when adequate techniques
>exist already ? That is, is the application of expert system technology
>appropriate to the magnitude and complexity of the problem ? Should we be
>advocating the application of such 'high-tech' solutions to all and sundry ?
>I have no doubt that such systems could be made to work, don't get me wrong
>on that, I just question whether the level of technology required in order to
>do so is justified. Surely it is better to apply the simplest solutions when-
>ever possible.

There is one issue of "appropriate technology" which appears to have been
overlooked in Colin's argument;  and that is the matter of computational
tractability.  In many practical domains, while it is certainly possible
to build mathematical models which may then be processed by dynamic
programming, those models are too unwieldy to yield much useful information
in any reasonable period of time.  Often what makes an expert an expert is
the ability to recognize that a complex general-purpose model may be
considerably simplified through abstraction without significantly sacrificing
fidelity.  The mathematical nature of the model, in and of itself, cannot
provide us with information about how to perform such abstractions.  That is
often why we need experts;  and, in such cases, if that expertise can be
properly modeled by an expert system, a computationally intractable approach
can be turned into a practical one.

------------------------------

Date: 28 Apr 88 09:15:47 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net  (Simon Brooke)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
(flaming against an article submitted by Gilbert Cockton)

>       "Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
>ha ha ha ha, &c.

What do the third and subsequent iterations of the symbol 'ha' add to the
meaning of this statement? Are we to assume the author doubts the rigour
of Sociology, or the contemporary nature of anthropology?

>And some of the most interesting investigations of topics once dominated
>by the humanities, such as theory of mind, are taking place in AI labs.

This is, of course, true - some of it is. Just as some of the most
interesting advances in Artificial Intelligence take place in Philosophy
and Linguistics departments. This is what one would expect, after all; for
what is AI but an experimental branch of Philosophy?

>sociologists produce a great deal of nonsense, and indeed the social
>"sciences" in toto are afflicted by conceptual confusion at every
>level.  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,

Gosh! Isn't it nice, now and again, to read the words of someone whose
knowledge of a field is so deep and thorough that they can some it up in
one short paragraph!

It is, of course, true that some embarassingly poor work is published in
Sociology, just as in any other discipline; perhaps indeed there is more
poor sociology, simply because sociology is more difficult to do well than
any other type of study - most of the phenomena of sociology occurs in the
interaction between individuals, and this interaction cannot readily be
accessed by an observer who is not party to the interaction. Yet if you
are part of the interaction, it will not proceed as it would with someone
else...

Again, sociological investigation, because it looks at us in a
rigorous way which we are not used to, often leads to conclusions which
seem counter-intuitive - they cut through our self-deceits and hypocrisies.
So we prefer to abuse the messenger rather than listen to the message.

For the rest:

He who knows not and knows not he knows not......

A dictum which I will conveniently forget next time I feel like shooting
my mouth off.

** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Thought for today: Most prologs chew everything very slowly anyway,  *
***just being polite I guess*********************************************

------------------------------

Date: 28 Apr 88 10:53:57 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>By comparison, sociologists produce a great deal of nonsense, and indeed the
>social "sciences" in toto are afflicted by conceptual confusion at every
>level.  Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,
>even in a field as slippery, ill-defined, and protean as AI.
There are more of them :-)  But if you looked at the work of U.K. sociologists
like Townsend and Halsey on Age, Poverty, Health and social mobility, you might
find something less concerned with theory and more with rigorous investigation.

I find the conflict in the humanities and behavioural "sciences" far more healthy
than the uncritical following of fashions of paradigms in science.  Whilst the
former areas encourage an understanding of methodology and epistemology, the
sciences assume their core methods are correct and get on with it.  A lot boils
down to personality (Liam Hudson, Contrary Imaginations).  The reason that
ideology and methodological pluralism would be unimaginable in the sciences may
have something to do with the nature (and please, not the LACK) of the
scientific imagination compared to the humanist imagination.  Note that
materialism, determinism, statistical inference and positivism are no less
outworn dogmas and ideologies than are Marxism, Freudianism, etc.  My
experience is that someone from a humanist critical tradition will have a better
understanding of the assumptions behind methodologies than will scientists and
even more so, engineers.  Out of such understandings came the rejection of first
Medieval Catholicism, then Seventeenth Century materialism, Twentieth Century
Behaviourism and Systems Theory, and now the "pure" AI position.  Assumptions
behind AI are similar to many which have been around since the warm humility of
Renaissance Humanism cooled into the mechanical fascination of the Baroque.

>So talk about "philistine technical vacuums" if you wish, but
>remember that by and large people know which emperor has no clothes.
So who is it who is deciding strategy for most Western social programmes?
Clothes or no clothes, social administrators have an empire which extends
beyond academia and many of them draw on sociological concepts and results in
their work.  It is in their complete ignorance of socialisation that AI workers
fall down in their study of machine learning.  Most human learning always takes
place in a social context, with only the private interests of marginal
adolescents and adults taking place in isolation - but here they draw on problem
solving capabilities which were nurtured in a social context.  The starkest
examples of the nature and role of primary socialisation come from those few
unfortunate children who had been isolated from birth.  They are savage animals.
If parents had to interact with their children in FOPC or connectionist inputs,
the same would be true, until the children were taken into care.

>Also, if you want to say "one dead end after another," you might adduce actual
>dead ends pursued by AI research and contrast them with non-dead ends.

DEAD ENDS
Computational Linguistics, continuous speech understanding, intelligent vision,
reliable expert systems which do not require endless maintenance, human
problem solving, the physical symbol system hypothesis, knowledge representation
formalisms using computable models.  Largely areas where some other paradigm
within another discipline can make progress as the lead weight of computability
is not suffocating research.  Generally due to knowledge representation problems
- even the Novel has problems here :-)  If you can't write it in a text-book
(e.g. clinical diagnosis, teaching techniques, advocacy), you'll never get it
on a machine - impossible in superset (NL) => impossible in subset (FOPC,
computationally denotable/constructable).  A problem in AI is trying to solve
other people's problems, where those other people know more about the problem
than you ever will - they live it day in day out.

NON-DEAD ENDS
Much work done under the name of AI is good - low-to-medium level vision,
restricted natural language, knowledge-based programming formalisms,
theorem-proving and highly-constrained technical planning problems.  Indeed,
most technical knowledge, being artificial and symbolic from the outset, is an
obvious candidate for AI modelling and there is nothing in the humanist
tradition which would doubt the viability of this work.  Here knowledge
representation is easy, because the domain will generally be so boring (but
economically/environmentally/security critical) that no-one wants to argue
about it.  Much technical expertise executed by humans is best suited to
machines.  In HCI research, sensible work on intelligent (=supportive) user
interfaces is getting somewhere, but then coming up with a computer model of a
computer system is hardly a major challenge in knowledge representation
techniques.  Coming up with a computer model of a user is also possible, as long
as we don't try to model anything controversial, but stick to observable
behaviour and user-negotiated input.

The main objection to AI is when it claims to approach our humanity.

                        It cannot.

------------------------------

Date: Sat, 30 Apr 88 23:39:49 EDT
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: AIList V6 #86 - Philosophy

Yamauchi, Cockton, and others on AILIST have been discussing freedom
of will as though no AI researchers have discussed it seriously.  May
I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
claim to have a good explanation of the free-will phenomenon.  I agree
with Gilbert Cockton that it is not the lack of answers that should be
criticised, but the contemporary ignorance of the subject.  (As for
why my own answer evaded philosophers for millennia, my hypothesis is
that philosophers have not been very insightful about actual
psychological phenomena - which is why it had to wait for Freud - or,
perhaps, Poincare - to produce convincing discussions about the
importance of unconscious thinking.)

Cockton also sagely points out that
 a rule-based or other mechanical account of cognition and decision
 making is at odds with the doctrine of free will which underpins most
 Western morality. ...  Scientists who seek moral, ethical,
 epistemological or methodological vacuums are only marginalising
 themselves into positions where social forces will rightly constrain
 their work.

I only disagree with Cockton's insertion of "rightly".  Like
E.O.Wilson, I prefer to follow ideas even where they lead to potentially
unpopular conclusions.  Indeed, I feel it is only proper for those
social forces to try to constrain my work.  When the researchers feel
constrained to censor their own work, then everyone may end up
the poorer.

I'm not even sure this is a disagreement.  A closer look might show that
this is what Cockton is actually saying, too.

------------------------------

Date: 1 May 88 06:50:41 GMT
From: TAURUS.BITNET!shani@ucbvax.Berkeley.EDU
Subject: Re: Free Will & Self-Awareness

In article <912@cresswell.quintus.UUCP>, ok@quintus.BITNET writes:

> compatible with strong determinism.  (The ones I've seen are riddled with
> logical errors, but most philosophical arguments I've seen are.)

That is correct, but there are a few which aren't, and that is mainly because
they managed to avoid self-contradictions and the mixing of concepts...

O.S.

------------------------------

Date: 30 Apr 88 16:37:20 GMT
From: uflorida!novavax!maddoxt@gatech.edu  (Thomas Maddox)
Subject: Social science gibber [Was Re: Various Future of AI]

Summary:  Here we have a prime specimen of the species
Keywords: AI, Sociology, manners.

In article <502@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk
(Simon Brooke) writes:
>In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>
>>      "Rigorous sociology/contemporary anthropology"?  Ha ha ha ha
>>ha ha ha ha, &c.
>
>What do the third and subsequent iterations of the symbol 'ha' add to the
>meaning of this statement? Are we to assume the author doubts the rigour
>of Sociology, or the contemporary nature of anthropology?

        Yeah, I think you could assume both, pal.  Repeated "ha"s
added for emphasis, in case some lamebrain (sociologist?  if the shoe
fits . . . ) wandered through and needed help.

>>And some of the most interesting investigations of topics once dominated
>>by the humanities, such as theory of mind, are taking place in AI labs.
>
>This is, of course, true - some of it is. Just as some of the most
>interesting advances in Artificial Intelligence take place in Philosophy
>and Linguistics departments. This is what one would expect, after all; for
>what is AI but an experimental branch of Philosophy?

        "AI but an experimental branch of Philosophy," eh?  Let's see,
now:  according to that view, I believe *every* branch of what we
usually call science could be construed in this way . . . or not.  In
short, the statement is almost perfectly empty.  Or maybe the secret
is in the use of the word "Philosophy."  That must be a special variant
of common or run-of-the-mill "philosophy," capitalized for occult reasons
known only to its initiates.
        Also, I have no quarrel with these "most interesting advances"
that are coming out of philosophy and linguistic departments.
Philosophy and linguistics, you might notice, *not* sociology.

        Let's read on. He's quoting me now:

>>sociologists produce a great deal of nonsense, and indeed the social
>>"sciences" in toto are afflicted by conceptual confusion at every
>>level.  Ideologues, special interest groups, purveyors of outworn
>>dogma (Marxists, Freudians, et alia) continue to plague the social
>>sciences in a way that would be almost unimaginable in the sciences,

        Then he returns to his own lovely prose:

>Gosh! Isn't it nice, now and again, to read the words of someone whose
>knowledge of a field is so deep and thorough that they can some it up in
>one short paragraph!

        "Some it up in one short paragraph"?  No, really, I can't
"some" it up; don't even know what doing so means.  However, if you
are trying in your inept fashion to say, "sum it up," thanks.  I
thought it was a pretty good paragraph myself.

>It is, of course, true that some embarassingly poor work is published in
>Sociology, just as in any other discipline; perhaps indeed there is more
>poor sociology, simply because sociology is more difficult to do well than
>any other type of study - most of the phenomena of sociology occurs in the
>interaction between individuals, and this interaction cannot readily be
>accessed by an observer who is not party to the interaction. Yet if you
>are part of the interaction, it will not proceed as it would with someone
>else...

        We're told "most of the phenomena . . . occurs" [subject-verb
agreement], further that "this interaction cannot readily be accessed
by an observer" [unnecessary jargon borrowed from another field and
used for the appearance of scientific rigor].  I guarantee it, this
guy *must* be a social scientist, sociologist or not.

>Again, sociological investigation, because it looks at us in a
>rigorous way which we are not used to, often leads to conclusions which
>seem counter-intuitive - they cut through our self-deceits and hypocrisies.
>So we prefer to abuse the messenger rather than listen to the message.

        "Sociological investigation . . . looks at us in a rigorous way
which we are not used to," the man says.  On his evidence, it's
through a glass darkly, which, alas, we are all quite used to.  The
notion of sociology as a bringer of ugly truths is particularly
amusing, though, and I thank him for it.
        I should add that I felt some remorse for my slap at
sociology, because the essential plight of the social sciences is
quite desperate.  However, when I read the message quoted above, my
remorse evaporated.  I would simply add that many sociologists,
whatever the ultimate value of their work, *can* read, write, and
think.
        Also, present polemics aside, my original diatribe came as a
response to a particularly self-satisfied posting from (apparently) a
sociologist attacking AI research as uninformed, puerile, &c.  It
seemed (and seems) to me that anyone in such an inherently weak field
should be rather careful in his criticism:  he's in the position of a
man throwing bricks at passers-by through his own front window.
        So let me reiterate:  AI research produces valuable and
interesting work; sociology produces much, much less.

------------------------------

End of AIList Digest
********************

∂05-May-88  0238	LAWS@KL.SRI.COM 	AIList V6 #89 - Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 May 88  02:38:10 PDT
Date: Wed  4 May 1988 22:49-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #89 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest            Thursday, 5 May 1988      Volume 6 : Issue 89

Today's Topics:
  Seminars - Butterfly Lisp (NASA) &
    Joshua Database System (SRI) &
    Concept Acquisition in Noisy Environments (CMU) &
    Computational Models in AI (CMU) &
    Expert Database Systems (SRI),
  Conferences - 22nd Carnegie Symposium on Cognition (CMU) &
    WESTEX-88 Conference on Expert Systems

----------------------------------------------------------------------

Date: Fri, 29 Apr 88 11:14:49 PDT
From: CHIN%PLU@ames-io.ARPA
Subject: Seminar - Butterfly Lisp (NASA)


              National Aeronautics and Space Administration
                         Ames Research Center

                        SEMINAR ANNOUNCEMENT

SPEAKER:   Seth Steinberg

TOPIC:     Butterfly Lisp

The BBN Butterfly is a shared memory multiprocessor which can be
configured with up to 256 processors.  Over the last several years we
have developed a Lisp system which takes advantage of this hardware,
by providing for the parallel execution of lightweight tasks in a
shared Lisp world.  Tasks are created using the future mechanism which
automatically provides for data directed task synchronization.  Other
tasking and synchronizing techniques may be used as well.
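
For readers unfamiliar with futures, here is a loose modern analogue in
Python (a sketch of the general idea, not Butterfly Lisp itself): each
submitted task becomes a future that runs in parallel, and the caller
synchronizes only at the point where the value is actually needed.

    from concurrent.futures import ThreadPoolExecutor

    def simulate(n):
        return n * n          # stand-in for a real lightweight task

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(simulate, n) for n in range(8)]   # tasks start eagerly
        total = sum(f.result() for f in futures)                 # data-directed sync point
    print(total)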

In an attempt to understand how parallel programs execute, we installed
a tracing system to record the fine details of tasking and
synchronization.  This data is then used to produce a picture showing
both the task creation tree and the task synchronization tree.  These
pictures have shed light on the behavior of a number of parallel
programs.

Butterfly Lisp can be described as locally serial, since, whenever
possible, sections of code which contain no parallel constructs will
execute as they would on a serial processor.   Preserving this
property while combining such features as tasking, unwind protection
and special variables has required us to make a number of changes in
the way these features are implemented.

The system is now being ported to run on the BBN GP-1000 processor
under the Mach operating system and a new compiler is being tested
which takes advantage of Common Lisp type declarations.


BIOGRAPHY:

Seth Steinberg has been working with computers since the late '60's
and has an M.S. in Computer Science from MIT.  He spent seven years
working for Nicholas Negroponte doing systems programming, language
development and graphics at the Architecture Machine Group (now the
nucleus of the Arts and Media Technology Center).  In 1979, he joined
Software Arts where he designed and developed TK!Solver, a constraint
relaxation system for personal computers.  More recently he has been
working on a parallel Lisp implementation for the BBN Butterfly
multiprocessor.

================================================================
DATE: Monday,      TIME: 2:00 - 3:00 pm     BLDG. 244   Room 103
      May 2, 1988       --------------


POINT OF CONTACT: Marlene Chin   PHONE NUMBER: (415) 694-6525
     NET ADDRESS: chin%plu@ames-io.arpa

***************************************************************************

VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.
***************************************************************************

------------------------------

Date: Fri 29 Apr 88 15:18:06-PDT
From: NIELSEN@KL.SRI.COM (Norm R. Nielsen)
Subject: Seminar - Joshua Database System (SRI)

       Information Industries Divisional Seminar

            Joshua:  A System That Provides
    Syntactically Uniform Access to Heterogeneously
                Implemented Data Bases

                     Steve Anthony
                       Symbolics

                May 11 at 10 AM, BS-208


Joshua is a system developed at Symbolics that provides
syntactically uniform access to heterogeneously implemented
knowledge bases.  Its power comes from the observation that
there is a "protocol of inference" consisting of a small set
of abstract actions, each of which can be implemented in many
ways.  The object-oriented programming facilities of Flavors
have been used to control the choice of implementation.  Each
statement in the language represents an instance of a class
identified with that statement's predicate.  Steps of the
protocol are implemented by methods inherited from the
classes.  Since inheritance of protocol methods is a compile-
time operation, very fine-grained control can be achieved
with little run-time cost.
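
By way of loose illustration only (a hypothetical Python sketch, not Joshua
or Flavors), the protocol-of-inference idea amounts to dispatching a small
set of abstract actions -- say, tell and ask -- through the class associated
with each statement's predicate, so that differently implemented knowledge
bases answer the same query syntax.

    class InMemoryPredicate:
        """Facts stored explicitly in a set."""
        def __init__(self):
            self.facts = set()
        def tell(self, *args):
            self.facts.add(args)
        def ask(self, *args):
            return args in self.facts

    class ComputedPredicate:
        """Facts computed on demand rather than stored."""
        def tell(self, *args):
            raise TypeError("computed predicate is read-only")
        def ask(self, x, y):
            return x + 1 == y

    KNOWLEDGE_BASE = {"parent": InMemoryPredicate(), "successor": ComputedPredicate()}

    def tell(predicate, *args):
        KNOWLEDGE_BASE[predicate].tell(*args)      # same syntax, per-class method
    def ask(predicate, *args):
        return KNOWLEDGE_BASE[predicate].ask(*args)

    tell("parent", "alice", "bob")
    print(ask("parent", "alice", "bob"), ask("successor", 3, 4))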

Joshua offers two major advantages.  First, a Joshua
programmer can easily change his or her program to use more
efficient data structures without changing the rule set or
other knowledge-level structures.  Second, it is easy to
build interfaces which incorporate existing tools into
Joshua, without having to modify those tools.

Steve Anthony will discuss the capabilities and design of
Joshua, followed by some demonstrations.  We will be meeting
in the Intelligent System Laboratory's computer room so that
the demonstrations can be run live on the Symbolics 3670.

------------------------------

Date: Sun, 01 May 88 00:49:10 EDT
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Concept Acquisition in Noisy Environments (CMU)

Dates:    9-May-88
Time:     11:30 am
Place:    WeH 7220
Type:     Machine Learning Seminar
Duration: 1 hr.
Who:      Francesco Bergadano (CS Dept., U. di Torino, Italy)


 Automated Concept Acquisition in Noisy Environments

          F. Bergadano, A. Giordana, L. Saitta

                  Computer Science Dpt
           Universita di Torino, Torino, Italy

    A learning method for concept acquisition from examples
in noisy environments is presented.  The learned knowledge
is expressed in the form of production rules, organized into
separate clusters and linked together in a graph structure.
A continuous-valued semantics is associated
with the description language, and each rule is affected
by a certainty factor.
        The learning process is guided by a top-down control
strategy, through successive specialization steps.  Search
is strongly focused by task-oriented heuristics and
by the available domain knowledge.  The methodology has
been tested on a problem in the field of speech recognition,
and the results obtained are presented and discussed.
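
As a generic illustration of rules carrying certainty factors (a
hypothetical Python sketch using a MYCIN-style combination formula, which
may well differ from the semantics of the system described above; the
feature names are invented for the example), evidence from several matching
rules for the same concept can be pooled like this:

    # Each rule: (set of required features, concept, certainty factor in (0, 1]).
    RULES = [
        ({"high_pitch", "short_burst"}, "plosive", 0.7),
        ({"short_burst"},               "plosive", 0.4),
        ({"steady_tone"},               "vowel",   0.8),
    ]

    def classify(features):
        """Combine certainty factors of all rules whose conditions are satisfied."""
        combined = {}
        for condition, concept, cf in RULES:
            if condition <= features:                            # condition is a subset
                prior = combined.get(concept, 0.0)
                combined[concept] = prior + cf * (1.0 - prior)   # MYCIN-style combination
        return combined

    print(classify({"high_pitch", "short_burst"}))   # plosive, combined CF about 0.82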


Mr. Bergadano is also interested in the following topics:
inductive learning, explanation based learning, integration of inductive
and deductive approaches to Machine Learning, and applications of learning
systems to Pattern Recognition, and would like to meet people working
in these areas.

------------------------------

Date: Sun, 01 May 88 00:54:53 EDT
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Computational Models in AI (CMU)


                            THEORY/AI SEMINAR

                    Jiawei Hong, Courant Institute
                            Friday, May 6
                              2:00 p.m.
                              4605 WEH

             Connectionist and Other Computational Models in AI


This talk addresses three problems:

1. The well-known connectionist models defined by an nxn real weight matrix
(notice that an arbitrary real number may carry infinite information)
can be simulated by a non-uniform circuit of O(n↑3 log n) Boolean gates
(so the total information is finite) with a time slowdown of O(log n).  Therefore
the connectionist models do not have more computational power than other
parallel computation hardware.

2. In the future, computers may consist of digital elements as well as analogue
elements.  Which kinds of analogue element help?  We prove that a kind of
analogue element helps only if
    (1) the analogue element has very high precision (exponentially many
significant bits), or
    (2) the analogue element can very efficiently compute a problem which is
NOT in NC.  Both are unlikely to be true.

3. Can human brains be simulated by computers in the future?  Under some
reasonable assumptions, we proved that this is possible (there is no intrinsic
difficulty, such as NP-hardness).  We are optimistic.

------------------------------

Date: Mon, 2 May 88 09:52:23 PDT
From: seminars@csl.sri.com (contact lunt@csl.sri.com)
Subject: Seminar - Expert Database Systems (SRI)


SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


           A FLARE FOR EXPERT DATABASE SYSTEMS IN STARBURST

                          Shel Finkelstein
                     IBM Almaden Research Center

                    Monday, May 9 at 4:00 pm
            SRI International, Conference Room B, Building A


Expert systems technology is being integrated into standard data
processing environments.  These environments have requirements not
typically emphasized by expert systems, including storage management,
shared access/update of data and knowledge, efficient access to data and
knowledge, persistent data and authorization.  Database systems meet
these requirements.  On the other hand, database systems do not have the
representation and search capabilities required for certain
applications.  Expert systems have these capabilities.

The goal of the Starburst project at IBM Almaden Research Center is to
do exploratory systems research towards building a portable, extensible,
distributed relational database management system.  One extension that
is a new direction (called "Flare") in Starburst is for expert database
systems.  In this talk we discuss the relationship between expert
systems and database systems and describe some areas for research in
expert database systems in Starburst.


NOTE FOR VISITORS TO SRI:

Please arrive at least 10 minutes early in order to sign in and
be shown to the conference room.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors
may park in the visitors lot in front of Building A (red brick
building at 333 Ravenswood Ave) or in the conference parking area
at the corner of Ravenswood and Middlefield.  The seminar room is in
Building A.  Visitors should sign in at the reception desk in the
Building A lobby.

IMPORTANT: Visitors from Communist Bloc countries should make the
necessary arrangements with Liz Luntzel (415) 859-3285 as soon as possible.

------------------------------

Date: Sun, 01 May 88 00:53:38 EDT
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Conference - 22nd Carnegie Symposium on Cognition (CMU)


The 22nd Carnegie Symposium on Cognition will be on the topic of
Architectures for Intelligence.  Talks will be in the Adamson wing,
as usual.  Enclosed is a schedule.  Contact Kurt VanLehn or Elaine
Benjamin (x4964, VanLehn@psy.cmu.edu) for more information.

--------------------------------------------------------------

Monday, May 16

8:30  Coffee

8:45  Welcome -- David Klahr, Head, Psychology Department

9:00  John Laird (U. Michigan), Allen Newell (CMU), Paul Rosenbloom (ISI)
      "Towards completing the Soar architecture"

10:00 Coffee

10:15 Michael Genesereth (Stanford) -- "Deliberate agents"

11:15 Barbara Hayes-Roth (Stanford) -- "Making intelligent systems adapt"

12:15 Lunch

1:30  Tom Mitchell (CMU) -- "Theo: A framework for constructing
      self-improving systems."

2:30  Rodney Brooks (MIT) -- "How to build creatures rather than
      isolated cognitive imitators."

3:30  Coffee

3:45  Jamie Carbonell (CMU) -- TBA

4:45  Bill Clancey (Xerox IRL) -- "Intelligent architectures and knowledge
engineering: A commentary"

5:45  Adjourn

-------------------------------------------------------------------------
Tuesday, May 17

8:45  Coffee

9:00  John Anderson (CMU) "The status of cognitive architectures
      in a rational analysis"

10:00 Coffee

10:15 Kurt VanLehn (CMU) "Flexibility and robustness in
      the execution of cognitive procedures"

11:15 Geoff Hinton (Toronto) "The transition from serial to
      parallel processing in a connectionist network"

12:15 Lunch

1:30  Walter Schneider and William Oliver (LRDC) -- "An instructable
      connectionist/control architecture: Using rule-based instructions to
      accomplish connectionist learning in a human time scale."


2:30  Jay McClelland (CMU) -- "Nature, nurture and connections:
      Explorations in network architecture"

3:30  Coffee

3:45  Zenon Pylyshyn (W. Ontario) -- "Architectures and strong equivalence:
      A commentary"

4:45  Adjourn

------------------------------

Date: Tue, 3 May 88 15:04:30 PDT
From: Greg Jordan <gjordan@cirm.northrop.com>
Subject: Conference - WESTEX-88 Conference on Expert Systems

The third annual WESTEX conference, sponsored by the Western Committee
of the Computer Society of the IEEE and the IEEE Los Angeles Council,
will be held June 28-30, 1988 at the Anaheim Marriott Hotel, Anaheim,
California.

This year special emphasis will be given to management issues associated
with fielding successful applications.  Over the past two years a great
deal of experience has been gained in the basic technology and early
development of expert systems.  As a result, many systems have been
successfully prototyped for a broad range of applications.  A common
problem facing the field today is transitioning from prototypes to
fielded solutions.  WESTEX-88 will focus on the issues and problems
that must be solved in this transition.

The program will feature Professor Edward Feigenbaum of Stanford University,
an internationally prominent speaker, who will present new observations
and predictions regarding expert systems drawn from his decades of leadership
and experience.

On Tuesday, June 28, three tracks of tutorials will be presented by
leading expert system practitioners:  "Basic Concepts", "Advanced
Concepts" and "Special Topics".

A two-day program featuring well-known invited speakers and submitted
papers will be presented on Wednesday and Thursday, June 29 and 30.

Technical exhibits featuring expert system products, tools and
environments from commercial artificial intelligence hardware and
software development firms are an important part of the conference.  The
exhibits provide potential users the opportunity to see and compare
first hand what is available and to make contact with interested vendors.
For exhibit information contact Martha B. Wolf, Electronics Conventions
Management, 8110 Airport Blvd., Los Angeles, CA 90045, or call
1-800-421-6816 (US) or 1-800-262-4208 (CA only).

------------------------------

End of AIList Digest
********************

∂05-May-88  0507	LAWS@KL.SRI.COM 	AIList V6 #90 - Decision Theory, Training Wheels, Fellowship   
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 May 88  05:05:06 PDT
Date: Wed  4 May 1988 22:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #90 - Decision Theory, Training Wheels, Fellowship
To: AIList@SRI.COM


AIList Digest            Thursday, 5 May 1988      Volume 6 : Issue 90

Today's Topics:
  Queries - Robustly Implemented Parser & Boyer and Moore's Prover &
    Connectionist Sidewalk Rover & Causal Modeling &
    Unification Recipes & KB Update Info,
  Opinion - Exciting work in AI & Decision Theory in AI,
  AI Tools - Training Wheels,
  Academia - Leverhulme Research Fellowship

----------------------------------------------------------------------

Date: Sun, 1 May 88 22:08 EDT
From: LEWIS%cs.umass.edu@RELAY.CS.NET
Subject: robustly implemented parser wanted

     This is a basic, run-of-the-mill, "We want tools" message.

     In particular, what we're looking for is a robust implementation
of a syntactic parser.  We're much more interested in a reliable
piece of Common LISP software that implements a basic chart parser
or something similar, than in a cutting edge piece of research
software.  Plusses would be large existing grammars or lexicons,
hooks for semantic interpretation, or good documentation.  The
parser would be used for research on document retrieval by
the Intelligent Information Retrieval group here at U Mass, and we
would be happy to provide the authors with plenty of data on
its performance on large quantities of real world text, as well
as on our experiences with using the on-line Longman dictionary
with it.  I'd be interested in hearing from both users and
authors/maintainers of such software.  If there's sufficient
interest, I'll summarize to the net.

Thanks,

David D. Lewis                         INTERNET: lewis@cs.umass.edu
COINS Dept.                            BITNET: lewis@umass
University of Massachusetts, Amherst
Amherst, MA  01003
ph. 413-545-0728

------------------------------

Date: 2 May 88 20:19:09 GMT
From: unido!laura!atoenne@uunet.UU.NET (Andreas Toenne)
Reply-to: unido!atoenne@uunet.UU.NET (Andreas Toenne)
Subject: Re: Availability of Boyer and Moore's Prover


In article <8804230021.AA05197@CLI.COM> boyer@CLI.COM (Robert S. Boyer) writes:
>A Common Lisp version of our theorem-prover is now available under the
>usual conditions: no license, no copyright, no fee, no support.  The
>To get a copy follow these instructions:
>
>1.   ftp to Arpanet/Internet host cli.com.

Ouch!

Can anyone send me this theorem-prover by e-mail please?
I have no ftp access :-(

Please send it to the following BITNET Address, as UUCP (and Arpa) Mail
costs me real $$.

        atoenne@ddoinf6.bitnet

        Thank you in advance

        Andreas Toenne

------------------------------

Date: 3 May 88 18:48:55 GMT
From: gary%desi@ucsd.edu (Gary Cottrell)
Reply-to: desi!gary@ucsd.edu (Gary Cottrell)
Subject: Re: Need info on new CMU sidewalk rover


In article <8804301216.AA23619@ucbvax.Berkeley.EDU> jbn@GLACIER.STANFORD.EDU
(John B. Nagle) writes:
>
>      I hear that CMU has a new autonomous vehicle, a sidewalk rover
>like the Terregator, but improved.  Who is doing the work, and are there
>any papers yet?
>
>                                       John Nagle

I don't know who is working on it, but I heard from a usually reliable
source that Dean Pomerleau (grad student at CMU, inventor of meta-connection
networks) has a 3-layer back-prop network driving the thing better than the
CMU vision group's system. Anyone care to confirm or deny this information?

gary cottrell
Computer Science and Engineering C-014
UCSD,
La Jolla, Ca. 92093
gary@sdcsvax.ucsd.edu (ARPA)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!sdcsvax!gary (USENET)
gwcottrell@ucsd.edu (BITNET)

------------------------------

Date: Tue 3 May 88 17:44:47-PDT
From: SCHWARTZ@PLUTO.ARC.NASA.GOV
Subject: Causal modeling

From:   PLU::SCHWARTZ     29-APR-1988 14:43
To:     RI
Subj:   Causal Modeling

For a survey white paper for HQ-OAST, I am trying to gather information on
all of the causal modeling efforts being pursued at ARC.  Specifically, the
information I am seeking is:

Who is doing work which concerns causal modeling and AI?

What domain/application is being used to pursue this work?

Why are Causal Modeling techniques being used?

What is the level of effort associated with the work (manpower/resources)?

Is the work being done in-house (NASA) or on grant/contract outside of NASA?

Are there any papers ( a research proposal will suffice) that are available
concerning your work?

Please help me in collecting this information.  One-liners as answers are fine.
It is imperative that I have this information by May 6th.  Thanks for your
support!
                                                Mary Rudokas

------------------------------

Date: Mon, 2 May 88 13:06:17 EDT
From: spike%mwcamis@mitre.arpa
Subject: unification recipes wanted

--------

     If you happen to have a reference list handy on the topic of
      unification algorithms in pattern-matching systems, I'd be
      grateful to receive it.

     [On a different note, I send my gratitude to our AIList moderator
     Kenneth Laws for his impressive and sustained effort in providing
     an incredibly useful service.]

     Thanks, Ben Bloom (spike@mitre.arpa)

------------------------------

Date: 3 May 88 10:45:00 CST
From: "HENRY::TSATSOUL" <tsatsoul%henry.decnet@space-tech.arpa>
Reply-to: "HENRY::TSATSOUL" <tsatsoul%henry.decnet@space-tech.arpa>
Subject: KB Update Info Request


        I am interested in investigating the updating of KBs. In other words,
when a system makes inferences to solve a goal or answer a question, it
generates many intermediate facts. The question is which of these to keep,
how, when, where, for how long, etc. Do you know of any work related to
this? (I am not interested in truth maintenance!!)

Thank you,

--------------------------------------------------------------------------------
| Costas Tsatsoulis                            |  tsatsoul @ space-tech.arpa   |
| Dept. of Electrical and Computer Engineering |-------------------------------|
| Nichols Hall                                 |                               |
| The University of Kansas                     |         H  A  I  K  U         |
| Lawrence, KS 66045                           |                               |

------------------------------

Date: 2 May 88 20:42:46 GMT
From: stuart%warhol@ads.com (Stuart Crawford)
Reply-to: stuart@ads.com (Stuart Crawford)
Subject: Re: Exciting work in AI


Wray Buntine (wray@nswitgould.oz)  writes:
> Ross's original ID3 work (and the stuff usually reported in Machine Learning
> overviews) and much subsequent work by him and others (e.g. pruning)
> actually fails the "real AI" test.  It was independently developed by
> a group of applied statisticians in the 70's and is well known
>       Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C,J. (1984)
>       "Classification and Regression Trees", Wadsworth
> Ross's more recent work does significantly improve on Breiman et al.s stuff.
                          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

How?  If you mean his stuff on generating production rules from decision trees,
I think you're missing the point of CART.  It seems to me that simply
transforming decision trees into production rules is a rather uninteresting
exercise.  Quinlan tries to motivate the idea by suggesting that the generated
rules are an "improvement" over the induced tree because they are both easier
to interpret and more parsimonious.  I disagree that they are easier to
interpret, and they are more parsimonious only if your original induction
algorithm has not already pruned the tree.  Using production rule generation as
an alternative to tree pruning strikes me as the wrong approach.  I still feel
that CART is the induction procedure of choice because of the following:

1. generates parsimonious trees
2. handles noisy, incomplete data
3. strong, well understood, asymptotic properties
4. allows user-defined priors and cost-functions
5. delivers attribute-importance diagnostics
6. can induce rules incrementally
7. delivers low bias, low variance estimates of misclassification rate

For references on 1-5, see Breiman et al. (1984), and for 6,7 see Crawford, S.
"Extensions to the CART Algorithm", proceedings Knowledge Acquisition for
Knowledge-Based Systems workshop (1987).
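
Since the tree-to-rules transformation keeps coming up, here is a minimal
sketch in Common Lisp of the naive version: each root-to-leaf path of a
decision tree becomes one production rule whose conditions are the tests
along the path.  The tiny tree and attribute names are invented for
illustration; this is not Quinlan's procedure (which also simplifies and
re-ranks the rules) and not CART.

    ;; A tree is either (:leaf class)
    ;; or (:test attribute value yes-subtree no-subtree).

    (defun tree-to-rules (tree &optional conditions)
      "Return rules of the form (IF conditions THEN class)."
      (if (eq (first tree) :leaf)
          (list (list 'if (reverse conditions) 'then (second tree)))
          (destructuring-bind (op attr value yes no) tree
            (declare (ignore op))
            (append
             (tree-to-rules yes (cons (list attr '= value) conditions))
             (tree-to-rules no  (cons (list attr '/= value) conditions))))))

    ;; Example:
    ;; (tree-to-rules '(:test outlook sunny
    ;;                    (:test humidity high (:leaf dont-play) (:leaf play))
    ;;                    (:leaf play)))
    ;; => ((IF ((OUTLOOK = SUNNY) (HUMIDITY = HIGH)) THEN DONT-PLAY)
    ;;     (IF ((OUTLOOK = SUNNY) (HUMIDITY /= HIGH)) THEN PLAY)
    ;;     (IF ((OUTLOOK /= SUNNY)) THEN PLAY))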

I also find somewhat curious Buntine's suggestion that Quinlan's most recent
work,

> is closer to real AI (e.g. concern for comprehensibility),
> though it still has an applied statistics flavour.

I would suggest that the work has an applied statistics flavor because it is
attempting to solve an applied statistics problem.

--------------------------------------
Stuart Crawford
stuart@ads.com
Advanced Decision Systems
1500 Plymouth Street
Mountain View, CA 94043


Stuart

------------------------------

Date: 2 May 88 21:27:51 GMT
From: mikeb@wdl1.UUCP (Michael H. Bender)
Subject: Re: Decision Theory in AI.

You might want to take a look at Judea Pearl's (UCLA) work on the use
of Influence Diagrams for uncertainty reasoning.  It is closely
associated with work by Ron Howard (Stanford) and others.

Mike Bender (mikeb@ford-wdl1)

------------------------------

Date: 3 May 88 01:37:30 PDT (Tuesday)
From: "Elizabeth_Lloyd.WGCERX"@Xerox.COM
Subject: Training wheels


I do have some firm references for the above work, but they're not with me now.
However a few pointers - your time frame is correct - I think you should search
the following journals - HUMAN FACTORS; IBM SYSTEMS JOURNAL; BEHAVIOUR AND
INFORMATION TECHNOLOGY; INTERNATIONAL JOURNAL OF MAN-MACHINE STUDIES.  I can't
remember which of these contains the information you want, but if you don't have
any sucess, then contact me directly and I'll search out and send you the exact
references.

Elizabeth

------------------------------

Date: 3-MAY-1988 13:54:01 GMT
From: HANCOXPJ%ASTON.AC.UK@CUNYVM.CUNY.EDU
Subject: Leverhulme research fellowship

From:    Dr P J Hancox <HANCOXPJ@uk.ac.aston>
Dept:    Computer Science
Tel No:  021 359 3611 X4652


Leverhulme Fellowship
The Computer Science Department of Aston University would welcome enquiries
concerning a Leverhulme Fellowship (about 1 year) from active researchers
who have been awarded a PhD in the last five years.  The other restriction is
that the Fellow must be from the US or the UK Commonwealth.

Aston University is a technological institution in the centre of Birmingham
(UK). As an institution, it is dedicated to the application of information
technology. The Department has strong research interests in NLP (esp LFG),
image processing, information systems engineering and software engineering.

Those interested should initially contact Peter Hancox
(hancoxpj@mail.aston.ac.uk / hancoxpj@aston.uucp). All messages will be
acknowledged.

Peter Hancox

------------------------------

End of AIList Digest
********************

∂05-May-88  0811	LAWS@KL.SRI.COM 	AIList V6 #91 - Philosophy  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 5 May 88  08:11:40 PDT
Date: Wed  4 May 1988 23:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #91 - Philosophy
To: AIList@SRI.COM


AIList Digest            Thursday, 5 May 1988      Volume 6 : Issue 91

Today's Topics:
  Philosophy - Free Will & Marxism & Intelligence & Social Science

----------------------------------------------------------------------

Date: 1 May 88 06:47:17 GMT
From: TAURUS.BITNET!shani@ucbvax.Berkeley.EDU
Subject: Re: Free Will & Self-Awareness

In article <30502@linus.UUCP>, bwk@mbunix.BITNET writes:
>
> I would like to learn how to imbue silicon with consciousness,
> awareness, free will, and a value system.

  First, by requesting that, you are underestimating yourself as a free-willing
creature, and second, your request is self-contradictory and shows little
understanding of matters like free will and value systems - such things cannot
be 'given', they simply exist.  (Something to bear in mind for other purposes
besides AI...)  You can write 'moral' programs, even in BASIC, if you want,
because they will have YOUR value system....

O.S.

------------------------------

Date: 2 May 88 00:28:40 GMT
From: phoenix!pucc!RLWALD@princeton.edu  (Robert Wald)
Subject: Re: Free Will & Self-Awareness

In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:

>In article <1484@pt.cs.cmu.edu> yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi)
writes:
>For AI workers (not AI developers/exploiters who are just raiding the
>programming abstractions), the main problem they should recognise is
>that a rule-based or other mechanical account of cognition and decision
>making is at odds with the doctrine of free will which underpins most Western
>morality.  It is in no way virtuous to ignore such a social force in
>the name of Science.  Scientists who seek moral, ethical, epistemological
>or methodological vacuums are only marginalising themselves into
>positions where social forces will rightly constrain their work.




    Are you saying that AI research will be stopped because when it ignores
free will, it is immoral and people will take action against it?

    When has a 'doctrine' (which, by the way, is nothing of the sort with
respect to free will) ever had any such relationship to what is possible?





-Rob Wald                Bitnet: RLWALD@PUCC.BITNET
                         Uucp: {ihnp4|allegra}!psuvax1!PUCC.BITNET!RLWALD
                         Arpa: RLWALD@PUCC.Princeton.Edu
"Why are they all trying to kill me?"
     "They don't realize that you're already dead."     -The Prisoner

------------------------------

Date: Mon, 02 May 88 11:43:55 EDT
From: Thanasis Kehagias <ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU>
Subject: Marxism is not outworn!!!


Well, this sociology vs. AI debate is getting nasty in many ways, but
here is one point that I find particularly interesting.  Marxism may be a
dogma, but it is not outworn at all, in that

a. it shapes a whole lot of contemporary politics (I hope most AI
researchers know what politics is).

b. even though what Marx said 140 years ago is not necessarily 100%
correct, he was very right in making a big deal out of the fact that
social and particularly economic forces are what determine the history
of the species at this point.  If you doubt this is very true even today,
please consider where your research would be without the mega$ that DOD
spends.  Scientific enquiry does not happen in vacuo; it is
determined by the social process (pretty obvious, but good to remember;
it also goes in the opposite direction).  Whether sociology captures the
social process is another story, and far be it from me to play the
advocatus sociologiae!

Thanasis Kehagias

------------------------------

Date: Mon, 02 May 88 14:41:40 HAE
From: Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Re: AIList V6 #87 - Queries, Causal Modeling, Texts

     This discussion about free will doesn't seem to want to go away, so
let me ask a question or two.  Let's agree that a deterministic world
is one in which a given state of the world at time 0 is sufficient to
determine the world state at time T, where T is after 0.  An indeterministic
world is one in which that proposition is false.  These are propositions
about the nature of the world not about what we know about the world.  In
many situations we do not have enough knowledge about the world to be able
to predict without error state T from the available information about
state 0.
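       (A symbolic gloss on the above definition, added here and not part of
the original posting: writing s(t) for the complete world state at time t,
the world is deterministic iff there is some fixed function F with
s(T) = F(s(0), T) for every T > 0, and indeterministic iff no such F exists.
Our inability to compute F says something about our knowledge, not about
whether F exists.)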
       Free will seems to me to mean that regardless of state 0 the agent
can choose which one of the possible states it will be in at time T.  A
necessary precondition for free will is that the world be indeterministic.
This does not, however, seem to be a sufficient condition since radioactive
decay is indeterministic but the particles do not have free will.
       Free will should certainly be more than just our inability to
predict an outcome, since that is consistent with limited knowledge in
a deterministic world.  And it must be more than indeterminism.
      My questions:

Given these definitions, (1) What is free will for a machine?
                         (2) Please provide a test that will determine
                             if a machine has free will. The test should
                              be quantitative, repeatable, and unambiguous.

Perhaps McCarthy could summarize how he would answer those questions
based on his article.
                       --Spencer Star

------------------------------

Date: Mon, 2 May 88 18:12:24 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Intelligence is a Many-factored Thing

-- It seems to me to be silly to (say) things like "when a computer
-- passes the Turing test it will be intelligent."  Intelligence is
-- not...binary....                                  -jeff sherrard

I agree.  All too often I see people treat questions as if each had a
single binary answer.  This is worse than idiocy -- it's sub-idiocy,
demoting oneself to the intelligence of a flip-flop.  Intelligence has many
dimensions, and most seem to be either continuous or have many quantum
steps.  IQ tests (crude though they are) typically include a half dozen to a
dozen factors, including spatial and verbal reasoning.  Some psychologists
have sought to add creativity, social ability, and other qualities.

(This isn't to say that all dimensions of a field are linear or curvilinear.
Clearly some metrics have transition points, or "catastrophes," where a
small change in one dimension is accompanied by a tremendous change in a
companion dimension.  I suspect consciousness is like this; at some point
in the evolution of life it suddenly appeared -- though I suspect it was at
a much earlier point than most people are willing to credit.)

                                        Larry @ JPL-VLSI

------------------------------

Date: Mon, 2 May 88 18:13:18 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Unfree Will

--
Gilbert Cockton & others:  Please send me a few references to critiques of
System Theory.  Also, are you referring to General Systems Theory?
--
I'm surprised that no one has brought up the distinction between will and
free will.  The latter (in the philosophy courses I took) implies complete
freedom to make choices, which for humans seems debatable.  For instance,
I don't see how anyone can choose an alternative that they do not know
exists.

There might be several reasons for this that cybernetics or computer or
information science can illuminate.  (1) The data needed to gain some
knowledge cannot be input by the chooser's perception as, for example, there
has not yet been (conclusive) proof that any human can karoo therms.
(2) The knowledge is lacking in the chooser's memory.  Few humans know, for
example, that in addition to moving up-down/forward-backward/left-right, we
can also move oolward-choward & uptime-downtime--though once explained most
find it easy to do.  (3) The knowledge may be literally unthinkable because
humans don't have (e.g.) irtsle logic circuitry.  In these cases no amount of
explanation or observation (even with machine-aided perception) will supply
the needed understanding.

The above examples can be multiplied by more ordinary instances by reading
human-pathology or animal-cognition reports.
                                                   Larry @ jpl-vlsi

------------------------------

Date: 2 May 88 17:01:39 GMT
From: yamauchi@SPEECH2.CS.CMU.EDU  (Brian Yamauchi)
Subject: Re: AIList V6 #86 - Philosophy

In article <368693.880430.MINSKY@AI.AI.MIT.EDU>, MINSKY@AI.AI.MIT.EDU
(Marvin Minsky) writes:
> Yamauchi, Cockton, and others on AILIST have been discussing freedom
> of will as though no AI researchers have discussed it seriously.  May
> I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
> claim to have a good explanation of the free-will phenomenon.

Actually, I have read The Society of Mind, where Minsky writes:

| Everything that happens in our universe is either completely determined
| by what's already happened in the past or else depends, in part, on
| random chance.  Everything, including that which happens in our brains,
| depends on these and only on these :
|
| A set of fixed, deterministic laws.   A purely random set of accidents.
|
| There is no room on either side for any third alternative.

I would agree with this.  In fact, unless one believes in some form of
supernatural forces, this seems like the only rational alternative.

My point is that it is reasonable to define free will, not as some mystical
third alternative, but as the decision making process that results from
the interaction of an individual's values, memories, emotional state, and
sensory input.

As to whether this is "free" or not, it depends on your definition of
freedom.  If freedom requires some force independent of genetics,
experience, and chance, then I suppose this is not free.  If freedom
consists of allowing an individual to make his own decisions without
coercion from others, then this definition is just as compatible with
freedom as any other.

If I am interpreting Minsky's book correctly, I think we agree that it is
possible (in the long run) for AIs to have the same level of decision-making
ability / self-awareness as humans.  The only difference is that he
would say that this means that neither humans nor AIs have free will, while
I would say (using the above definition) that this means that humans do
have free will and AIs have the potential for having free will.

On the other hand, Cockton writes:

>The main objection to AI is when it claims to approach our humanity.
>
>                       It cannot.

Cockton seems to be saying that humans do have free will, but that it is
totally impossible for AIs ever to have free will.  I am curious as to what
he bases this belief upon, other than "conflict with traditional Western
values".
______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 3 May 88 12:41:39 EDT
From: John Sowa <SOWA@ibm.com>
Subject: Gibber in AI, social sciences, etc.

I agree with the following comment by Thomas Maddox:

>                           ...anyone in such an inherently weak field
> should be rather careful in his criticism:  he's in the position of a
> man throwing bricks at passers-by through his own front window.

But I wish he would apply that remark to himself.  Just scan through
back issues of AI List to see the controversies, polemics, fads, and
fallacies.  The arguments between connectionists and representationalists
convey just as much heat and as little light as any argument between
Marxists and Freudians.  The dialog between LISPers and Prologers is no
more meaningful than the dialog between Catholics and Protestants in
Northern Ireland.

My position:  Every field has good people, dummies, charlatans, and
religious fanatics.  AI certainly has its share of all four types (and
sometimes the same person shifts position from one type to another).
The sociologist who was bashing AI was wrong, and so are the AI people
who bash the social scientists.  The human mind is the most difficult
subject of all, and we'll all learn more by approaching each other's
disciplines with a little sympathy than with a lot of loud polemics.

John Sowa

------------------------------

Date: 28 Apr 88 20:29:07 GMT
From: olivier@boulder.colorado.edu  (Olivier Brousse)
Subject: Re: Free Will & Self-Awareness

In article <17424@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>      Could the philosophical discussion be moved to "talk.philosophy"?
>Ken Laws is retiring as the editor of the Internet AILIST, and with him
>gone and no replacement on the horizon, the Internet AILIST (which shows
>on USENET as "comp.ai.digest") is to be merged with this one, unmoderated.
>If the combined list is to keep its present readership, which includes some
>of the major names in AI (both Minsky and McCarthy read AILIST), the content
>of this one must be improved a bit.
>
>                                       John Nagle

"The content of this one must be improved a little bit."
What is this?  I believe the recent discussions were both interesting and of
interest to the newsgroup.
AI, as far as I know, is concerned with all issues pertaining to intelligence
and how it could be artificially created.
The questions raised are indeed important ones
to consider, especially with regard to the recent success of connectionism.

Keep the debate going ...
Olivier Brousse                       |
Department of Computer Science        |  olivier@boulder.colorado.EDU
U. of Colorado, Boulder               |

------------------------------

Date: 3 May 88 00:13:58 GMT
From: dartvax!eleazar!lantz@decvax.dec.com  (Bob Lantz)
Subject: Re: Free Will & Self-Awareness

John Nagle writes (17424@glacier.stanford.edu)

>     Could the philosophical discussion be moved to "talk.philosophy"?
>... the major names in AI (both Minsky and McCarthy read AILIST), the content
>of this one must be improved a bit.

One could just as easily abstract the articles on cognitive psychology,
programming, algorithms, or several other topics equally relevant to AI.
Considering that AI is an interdisciplinary endeavor combining philosophy,
psychology, and computer science (for example), it seems unwise to
artificially narrow the scope of discussion.

I expect most readers of comp.AI (and Minsky, McCarthy, McDermott...
other AI people whose names start with 'M') have interests in multiple
facets of this fascinating discipline.

-Bob

Bob_Lantz@dartmouth.EDU

------------------------------

End of AIList Digest
********************

∂09-May-88  0224	LAWS@KL.SRI.COM 	AIList V6 #92 - Seminars, Conferences 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 May 88  02:23:58 PDT
Date: Sun  8 May 1988 23:31-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #92 - Seminars, Conferences
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 92

Today's Topics:
  Seminars - Tutor for Electrostatics (CMU) &
    Fuzzy Inference for Robot Control (CMU) &
    Dataflow Semantics (SRI) &
    Formulating Concepts and Analogies (NASA),
  Conference - Automatic Deduction &
    AI and Discrete Event Control Systems &
    Automating Software Design -- AAAI-88 Workshop &
    1st Int. Symp. on AI

----------------------------------------------------------------------

Date: Thu, 05 May 88 11:59:43 EDT
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Tutor for Electrostatics (CMU)


Date:     May 11, 1988
Time:     3:00pm
Place:    Wean 5409
Who:      Blake D. Ward
Title:    A Soar-Based Tutor for Electrostatics


                                ABSTRACT

     Intelligent Tutoring Systems (ITS) are computer-based instructional
systems that attempt to exhibit some of the intelligent capabilities of
human tutors.  In addition to being important for obvious educational
reasons, they are also a valuable testbed for theories of cognition and
represent a difficult real-world task for artificial intelligence research
in general.  Theories of cognition and learning are important to the
development of effective tutoring systems, yet all but a few tutoring
systems lack any underlying cognitive theory.  Within high
school and post-secondary education, science is an important field and this
is reflected by a fairly large body of research concerning the differences
between novice and expert scientific problem solvers.  However, with very
few exceptions, most of that research has yet to be applied to science
education in a substantial way.  The research proposed here will explore the
implications of using the Soar architecture (which is both a theory of
general cognition and a powerful AI problem solver) to develop an
intelligent tutoring system framework.  To focus and evaluate this effort,
an electrostatics tutor (ET-Soar) will be developed, making use of recent
novice/expert scientific problem solving research with an emphasis on
getting students to adopt the internal representations used by expert
problem solvers.

        A copy of the proposal will be left in the CS lounge.

------------------------------

Date: Thu, 05 May 88 12:05:19 EDT
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Fuzzy Inference for Robot Control (CMU)


Time: Thursday, May 5, 3:30-4:30.
Place:  4605 WeH

FUZZY INFERENCE, ITS APPLICATION TO ROBOT CONTROL, AND FUZZY MICROPROCESSOR

Kaoru Hirota, Hosei University, and Takeshi Yamakawa, Kumamoto University


Fuzzy inference (or approximate reasoning) is an important theoretical tool in
real industrial applications of fuzzy theory, e.g. automatic control of subway
trains, automatic speed control of cars, water purification systems, ...  When
we use fuzzy inference, a knowledge base consists of a small number of fuzzy
production rules.  In this short lecture, the foundations of fuzzy inference
techniques are introduced first, followed by an outline of robot control
applications.  Also presented is a microprocessor that operates in linear mode
(not in binary digital mode) on a semiconductor chip for real-time processing
of fuzzy information.

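To make the flavour of this concrete, here is a minimal sketch in Common Lisp
of a toy fuzzy controller: each rule fires to the degree that its antecedent
membership function is satisfied, and the crisp outputs are combined by a
weighted average.  The membership functions, the two rules (distance -> speed),
and the defuzzification scheme are illustrative assumptions, not the speakers'
system or their fuzzy chip.

    (defun triangle (a b c)
      "Return a triangular membership function peaking at B over [A, C]."
      (lambda (x)
        (cond ((or (<= x a) (>= x c)) 0.0)
              ((<= x b) (/ (- x a) (- b a)))
              (t (/ (- c x) (- c b))))))

    (defparameter *near* (triangle -1.0 0.0 2.0))
    (defparameter *far*  (triangle  1.0 3.0 4.0))

    ;; Each rule: (antecedent-membership-fn . crisp-output),
    ;; e.g. IF distance is NEAR THEN speed = 0.2.
    (defparameter *rules* (list (cons *near* 0.2)
                                (cons *far*  0.9)))

    (defun infer (distance rules)
      "Fire every rule to its degree and take the weighted average of outputs."
      (let ((num 0.0) (den 0.0))
        (dolist (r rules)
          (let ((degree (funcall (car r) distance)))
            (incf num (* degree (cdr r)))
            (incf den degree)))
        (if (zerop den) 0.0 (/ num den))))

    ;; (infer 1.5 *rules*)  ; => 0.55, i.e. between the NEAR and FAR actions
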
Contact : Takeo.Kanade@IUS3.IUS.CS.CMU.EDU

------------------------------

Date: Thu, 5 May 88 13:36:19 PDT
From: seminars@csl.sri.com (contact lunt@csl.sri.com)
Subject: Seminar - Dataflow Semantics (SRI)


SRI COMPUTER SCIENCE LAB SEMINAR ANNOUNCEMENT:


        A DESCRIPTIVE AND PRESCRIPTIVE MODEL FOR DATAFLOW SEMANTICS

                              R. Jagannathan
                             SRI International

                         Monday, May 23 at 4:00 pm
              SRI International, Conference Room B, Building A


        A general model is proposed that allows the dataflow (operational)
        semantics embodied by various dataflow computers to be described and
        analyzed.  The model assumes that a program is represented as a
        network of operators which can be given meaning mathematically.
        Various dataflow semantics are differentiated in terms of which data
        items of a program instance are ``desired to be computed'' and when
        such ``desires for computation'' occur under a particular
        semantics.  The model allows various properties of dataflow
        semantics, such as correctness and efficiency, to be studied.  The
        model is also used to prescribe new dataflow semantics such as
        eazyflow.  Implementation of the eazyflow semantics using mechanisms
        such as data-driven and demand-driven execution is considered.

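As a toy illustration of the difference between data-driven and demand-driven
execution, here is a memoizing demand-driven evaluator over a small operator
network, written in Common Lisp.  The network representation and the
evaluator are illustrative assumptions only; they are not the model or the
eazyflow semantics of the talk.

    ;; A network maps node names to (:const value) or (:op fn inputs...).
    (defparameter *net*
      '((x    . (:const 3))
        (y    . (:const 4))
        (sum  . (:op + x y))
        (prod . (:op * x y))))

    (defun demand (name net &optional (cache (make-hash-table)))
      "Compute node NAME, demanding the values of its inputs only as needed."
      (multiple-value-bind (val found) (gethash name cache)
        (if found
            val
            (setf (gethash name cache)
                  (let ((node (cdr (assoc name net))))
                    (ecase (first node)
                      (:const (second node))
                      (:op (apply (symbol-function (second node))
                                  (mapcar (lambda (in) (demand in net cache))
                                          (cddr node))))))))))

    ;; (demand 'prod *net*) => 12, and SUM is never computed;
    ;; a purely data-driven semantics would compute SUM as well.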


NOTE FOR VISITORS TO SRI:

Please arrive at least 10 minutes early in order to sign in and be shown to
the conference room.

SRI is located at 333 Ravenswood Avenue in Menlo Park.  Visitors may park in
the visitors lot in front of Building A (red brick building at 333 Ravenswood
Ave) or in the conference parking area at the corner of Ravenswood and
Middlefield.  The seminar room is in Building A.  Visitors should sign in at
the reception desk in the Building A lobby.

IMPORTANT: Visitors from Communist Bloc countries should make the
necessary arrangements with Liz Luntzel (415) 859-3285 as soon as possible.

------------------------------

Date: Fri, 6 May 88 11:52:39 PDT
From: CHIN%PLU@ames-io.ARPA
Subject: Seminar - Formulating Concepts and Analogies (NASA)


***************************************************************************
              National Aeronautics and Space Administration
                         Ames Research Center

                        SEMINAR ANNOUNCEMENT


SPEAKER:   Smadar Kedar-Cabelli
           Rutger University

TOPIC:     Formulating Concepts and Analogies According to Purpose

ABSTRACT:

This talk describes research within the {\it explanation-based
generalization} (EBG) framework, a framework for producing deductive
generalizations from single examples.  Despite recent progress, EBG
methods exhibit an important limitation: they are incapable of
determining which concepts are useful ones to acquire.  More robust
generalizers must be able to automatically determine which concepts to
acquire based on the {\it purpose} of the learning, since concepts
acquired for one purpose may not be appropriate for another.

Our notion of the purpose of the learning is to acquire concepts which
will benefit an associated performance system.  Two open issues become
apparent once EBG is associated with a performance system: How can EBG
acquire target concepts and definitions appropriate for the
performance system?  Further, could the acquired target concept
definitions be used to improve subsequent performance?

The research focuses on and investigates these issues in the context of a
specific type of performance system -- a state-space planner.  The
approach is to provide EBG with explicit knowledge of the planner and
specific planning task.  The {\it purposive concept formulation} and
{\it purposive explanation replay} methods, respectively, provide
solutions to the open problems.

We describe the prototype systems (PurForm and REPeat) which provide
experimental support for these methods.  The results confirm that a
learning system {\it can} formulate concepts and analogies sensitive
to the purpose of the learning in restricted planning situations.  We
discuss further extensions suggested by these results.


BIOGRAPHY:

Smadar Kedar-Cabelli has recently received her Ph.D. from the
Department of Computer Science at Rutgers University.  She is
currently a research assistant at Rutgers, and a consultant for the
Learning Systems Group at Siemens Research and Technology Laboratories
in Princeton.  Her research in machine learning focuses on open
problems within the explanation-based generalization (EBG) framework.
She has published a number of recent papers describing the
dissertation results.  A paper presented at the 1987 National
Conference for Artificial Intelligence describes results on
formulating concepts according to purpose.  A paper describing the
close relationship of EBG and resolution-theorem proving was presented
at the Fourth International Machine Learning Workshop in 1987.
Earlier papers include a journal paper in Machine Learning, published
jointly with Mitchell and Keller, introducing the explanation-based
generalization framework.

---------------------
DATE: Thursday     TIME: 2:00 - 3:00 pm     BLDG. 244   Room 209
      May 19, 1988       --------------


POINT OF CONTACT: Marlene Chin   PHONE NUMBER: (415) 694-6525
     NET ADDRESS: chin%plu@ames-io.arpa

***************************************************************************

VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.
***************************************************************************

------------------------------

Date: Thu, 5 May 88 16:24 CDT
From: GRAHAM%UMKCVAX2.BITNET@CORNELLC.CCS.CORNELL.EDU
Subject: Conference - Automatic Deduction

Mini-Conference on Automatic Deduction

        A mini-conference on automated deduction is being organized.  The focus
is to be on new, non-traditional approaches to automatic deduction - topics
not well represented at such conferences as CADE.  These
non-traditional frameworks might include both theoretical and experimental
research on proof theory and deduction systems for the following
non-classical logics and extensions to first order logic:

        modal, nonmonotonic, default, tense, and action logics
        circumscription
        the frame problem
        intentional reasoning
        metatheory
        reflective reasoning
        fixed points
        closed world assumption
        non-Cantorean set theories
        quantifier elimination
        possibility, probability, and ontology

        Those interested in helping to organize or participate in such a
mini-conference are invited to contact:
        Dr. Frank Brown,
        Dept. of Computer Science, University of Kansas,
        Lawrence, Kansas 66045,
        (913)-864-4482.


Steven Graham, GRAHAM@UMKCVAX1.BITNET
Computer Science Program, University of Missouri Kansas-City,
5100 Rockhill Road
Kansas City, MO 64110
(816) 276-2365

------------------------------

Date: 6 May 88 15:34:43 GMT
From: BOSCO.BERKELEY.EDU!grossman@ucbvax.Berkeley.EDU
Subject: Conference - AI and Discrete Event Control Systems


                           Workshop On

                AI and Discrete Event Control Systems
                        July 7 and 8, 1988
                     NASA-Ames Research Center
                     Moffett Field, California


Hamid Berenji, NASA-Ames Research Center
The Role of Approximate Reasoning in AI-based Control

Peter Caines, McGill University
Dynamical Logic Observers for Finite Automata, Part 1

James Demmel, Courant Institute
Hierarchical Control  Studies in Dextrous Manipulation
Using the Utah/MIT Hand

Russel Greiner, University of Toronto
Dynamical Logic Observers for Finite Automata, Part 2

Robert Hermann, NASA-Ames Research Center and Boston University
The Scott Theory of Fixed Points Symbolic Control

Michael Heymann, Israel Institute of Technology
Real-Time Discrete Event Processes

Peter Ramadge, Princeton University
Discrete Event Systems, Modeling and Complexity

Stan Rosenschein, CSLI, Stanford University
Real Time AI Systems

Gerry Sussman, Massachusetts Institute of Technology
Automatic Extraction of Features From Dynamical Systems


For more information, please contact:

Robert Grossman                                 (415) 642-8196
Department of Mathematics                       (415) 642-6526 (messages)
University of California, Berkeley              grossman@cartan.berkeley.edu
Berkeley, CA 94720                              grossman@ucbcarta.bitnet

------------------------------

Date: Tue, 3 May 88 13:22:13 EDT
From: Robert McCartney <rdm%cs.brown.edu@RELAY.CS.NET>
Subject: Conference - Automating Software Design -- AAAI-88 Workshop


------------------------
                        CALL FOR PARTICIPATION

            Automating software design: current directions
                       (a workshop at AAAI-88)

                Radisson-St. Paul Hotel, St. Paul, Minnesota
                       Thursday, 25 August 1988


In this workshop, we intend to discuss current approaches to automated
software design and how these approaches deal with: 1) acquiring
specifications, 2) acquiring and representing domain and design
knowledge, and 3) controlling search during design.  Among the issues
that might be addressed are the interaction of domain and design
knowledge, comparing automatic and interactive systems, the use of
general vs.  specific control mechanisms, and software environments
appropriate for design systems.

This is intended to be a forum for the presentation and discussion of
current ideas and approaches.  The format will consist of individual
presentations followed by adequate time for interaction with peers.
To maximize such interaction, participation will be limited to a small
number of  active researchers.

Participation: Those interested in attending should submit a short
description of their research interests and current work to one of the
organizing committee (preferably electronically) by June 15.  At the
same time, those interested in making a presentation should submit a
short abstract (around 500 words) of their intended topic.
Notification of acceptance or rejection will be given after July 15.
All participants may submit an extended abstract or position paper by
August 1; these will be reproduced and distributed at the workshop.

Organizing Committee:

 Michael Lowry            Robert McCartney           Douglas R. Smith
Stanford/Kestrel         Univ. of Connecticut        Kestrel Institute
 (415) 325-3105           (203) 486-2428             (415) 493-6871
(lowry@kestrel.arpa)     (robert@uconn.csnet)      (smith@kestrel.arpa)

Hard-copy submissions may be sent to:

                          Douglas R. Smith
                          Kestrel Institute
                         1801 Page Mill Road
                        Palo Alto, CA 94304-1216

------------------------------

Date: 5 May 88   20:43 EDT
From: PL233270%TECMTYVM.BITNET@CUNYVM.CUNY.EDU
Subject: Conference - 1st Int. Symp. on AI

Date: 5 May 1988, 20:43:23 EDT
From: Teresa Lucio Nieto        Mexico (83) 58 56 49 PL233270 at TECMTYVM
To:   AILIST-REQUEST at SRI.COM


***********************************************************************

                 1ST INTERNATIONAL SYMPOSIUM ON
                    ARTIFICIAL INTELLIGENCE
                     MONTERREY, N.L. MEXICO

***********************************************************************

             THE INFORMATION RESEARCH CENTER OF
           THE INSTITUTO TECNOLOGICO Y DE ESTUDIOS
                  SUPERIORES DE MONTERREY


IS ORGANIZING THE FIRST INTERNATIONAL SYMPOSIUM ON ARTIFICIAL
INTELLIGENCE TO PROMOTE ARTIFICIAL INTELLIGENCE TECHNOLOGY AMONG
PROFESSIONALS AS AN APPROACH TO PROBLEM SOLVING AND TO PROMOTE THE USE
OF THE KNOWLEDGE-BASED PARADIGM IN SOLVING PROBLEMS IN INDUSTRY AND
BUSINESS; TO MAKE PROFESSIONALS AWARE OF THE ARTIFICIAL INTELLIGENCE
TECHNIQUES THAT EXIST AND TO DEMONSTRATE THEIR USE IN SOLVING REAL
PROBLEMS; AND TO SHOW CURRENT ARTIFICIAL INTELLIGENCE AND EXPERT
SYSTEMS APPLICATIONS IN MEXICO, THE USA, AND OTHER COUNTRIES.



Tentative Program:
------------------
October 24th, 25th, 1988
Knowledge-Based Systems Tutorial.

October 26th, 27th, 28th 1988
CONFERENCES AND HARDWARE & SOFTWARE EXPOSITION.



                           T O P I C S
                         ----------------


          * KNOWLEDGE-BASED SYSTEMS
          * KNOWLEDGE ACQUISITION
          * KNOWLEDGE REPRESENTATION
          * INFERENCE ENGINE
          * CERTAINTY FACTORS
          * VISION
          * ROBOTICS
          * EXPERT SYSTEMS APPLICATIONS IN INDUSTRY
          * NATURAL LANGUAGE PROCESSING
          * LEARNING
          * SPEECH RECOGNITION
          * ARTIFICIAL INTELLIGENCE IN MEXICO
          * FIFTH GENERATION COMPUTERS


Conference Participants
-----------------------
Speakers from the following Universities and Research Centers will
participate:
Stanford, Texas at Austin, MIT, Colorado, Waterloo, Alberta, Rice,
IBM Center and Microelectronics and Computer Technology Corp.


SOFTWARE AND HARDWARE EXPOSITION
--------------------------------
DURING THE SYMPOSIUM THERE WILL BE AN EXPOSITION OF COMPUTER HARDWARE
AND SOFTWARE INCLUDING PRODUCTS AND SYSTEMS FROM COMPANIES AND
INSTITUTIONS IN MEXICO, USA AND ABROAD.
WE ARE INVITING SOFTWARE AND HARDWARE BUSINESSES TO PARTICIPATE IN
THIS EXPOSITION WITH THEIR PRODUCTS.


SOCIAL EVENTS
-------------
In order to encourage an atmosphere of friendship and exchange among
participants, some social events will be held after the conferences.


Fees
----
TUTORIAL:
                  Before August 31st,88   After August 31st,88
PROFESSIONALS        $150 US DOLLARS            $170
STUDENTS             $75                        $85

SYMPOSIUM:

PROFESSIONALS        $100                       $120
STUDENTS             $50                        $60


ACCOMMODATIONS
-------------
CONTACT US FOR FURTHER INFORMATION ABOUT THIS.



******************************************************************

            1ST INTERNATIONAL SYMPOSIUM ON
               ARTIFICIAL INTELLIGENCE
                MONTERREY, N.L. MEXICO

WE WOULD LIKE TO INVITE ALL THE PROFESSORS AND RESEARCHERS TO
SEND PAPERS FOR THE FIRST INTERNATIONAL SYMPOSIUM ON ARTIFICIAL
INTELLIGENCE TO BE HELD ON OCTOBER 24-28, 1988
IN MONTERREY, MEXICO AT THE INSTITUTO TECNOLOGICO Y DE ESTUDIOS
SUPERIORES DE MONTERREY (ITESM).

            C A L L     F O R    P A P E R S
           ----------------------------------

TOPICS INCLUDE KNOWLEDGE REPRESENTATION, KNOWLEDGE ACQUISITION,
NATURAL LANGUAGE PROCESSING, KNOWLEDGE BASED SYSTEMS, INFERENCE
ENGINE, MACHINE LEARNING, SPEECH RECOGNITION, PATTERN RECOGNITION,
VISION AND THEOREM PROVING.

SEND SUMMARIES OF FOUR TO FIVE PAGES MAXIMUM, IN FOUR COPIES, WITH A RESUME, TO
I T E S M . CENTRO DE INVESTIGACION EN INFORMATICA.
DAVID GARZA SALAZAR.  SUCURSAL DE CORREOS J.  64849 MONTERREY, N.L.
MEXICO.  (83) 59 57 47, (83) 59 59 43, (83) 59 57 50.

Deadline for submissions: August 31st,88


BITNET ADDRESS: SIIACII AT TECMTYVM
TELEX:  0382975 ITEMSE
TELEFAX: (83) 58 59 31
APPLELINK ADDRESS: IT0023
P.S. FOR ANY INFORMATION FEEL FREE TO CONTACT US; WE WOULD BE GLAD TO SEND YOU
MORE INFORMATION ABOUT OUR SYMPOSIUM.

------------------------------

End of AIList Digest
********************

∂09-May-88  0447	LAWS@KL.SRI.COM 	AIList V6 #93 - Philosophy, Free Will 
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 May 88  04:46:46 PDT
Date: Sun  8 May 1988 23:55-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #93 - Philosophy, Free Will
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 93

Today's Topics:
  Philosophy - Free Will

----------------------------------------------------------------------

Date: 2 May 88 10:42:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert Cockton)
Subject: Sorry, no philosophy allowed here.

> Could the philosophical discussion be moved to "talk.philosophy"? (John Nagle)
I am always suspicious of any academic activity which has to request that it
becomes a philosophical no-go area.  I know of no other area of activity which
is so dependent on such a wide range of unwarranted assumptions.  Perhaps this
has something to do with the axiomatic preferences of its founders, who came
from mathematical traditions where you could believe anything as long as it was
logically consistent.  Before the 5th Generation scare, AI in the UK had been
sat on for dodging too many methodological issues.  Whilst, like the AI
pioneers, they "could see no reasons WHY NOT [add list of major controversial
positions]", Lighthill could see no reasons WHY in their work.

> What about compatibilism?  There are a lot of arguments that free will is
> compatible with strong determinism.  (The ones I've seen are riddled with
> logical errors, but most philosophical arguments I've seen are.) (R. O'Keefe)
I would not deny the plausibility of this approach.  However, detection of
logical errors in an argument is not enough to sensibly dismiss it, otherwise
we would have to become resigned to widespread ignorance.  My concern over AI
is that, like some psychology, it has no integration with social theories,
especially those which see 'reality' as a negotiated outcome of social
processes rather than logically consistent rules.  If the latter approach to
'reality', 'truth' etc. were feasible, why have we needed judges to deliver
equity?  For some AI enthusiasts, the answer, of course, is that we don't.  In
the brave new world, machines will interpret the law unequivocally, making the
world a much fairer place :-)  Anyway, everyone knows that mathematicians are
much smarter than lawyers and can catch up with them in a few months.  Hey
presto, rule-base!

> One of the problems with the English Language is that most of the
> words are already taken.  ( --Barry Kort)
ALL the words that exist are taken! And if the West Coast had managed to take
more of them, we wouldn't have needed that silly Beach Boys talk ('far out' ->
'how extraordinary/well, fancy that; etc. :-))  AI was a natural term in the
late 50's before the whole concept of definable and measurable intelligence was
shot through in the 1960s on statistical, methodological and sociological
grounds.  Given the changed intellectual climate, it would be sensible if the
mathematical naivety of the late 1950s were replaced by the more sophisticated
positions of at least the 1970s.  There's no need to change names, just absorb
AI into computer science, linguistics, psychology, management etc.  That would
leave workers in advanced computer applications free to get on with pragmatic
issues with no pressure to address the pottier postures of 'pure' AI.

> I would like to learn how to imbue silicon with consciousness,
> awareness, free will, and a value system.  (--Barry Kort)
But why!  Surely you can't have been bullied that much at school to
have developed such a materialist view of human nature? :-) :-)

> Suppose I were able to inculcate a Value System into silicon.   And in the
> event of a tie among competing choices, I use a  random mechanism to force
> a decision.  Would the behavior of  my system be very much different from a
> sentient being with free will? (--Barry Kort)
Oh brave Science! One minute it's Mind on silicon, the next it's a little
randomness to explain the inexplicable.  Random what? Which domain? Does it
close? Is it enumerable?  Will it type out Shakespeare? More seriously 'forcing
decisions' is a feature of Western capitalist society (a historical point
please, not a political one).  There are consensus-based (small) cultures where
decisions are never forced and the 'must do something now' phenomena is
mercifully rare.  Your system should prevaricate, stall, duck the
issue, deny there's a problem, pray, write to an agony aunt, ask its
mum, wait a while, get its friends to ring it up and ask it out ...

Before you put your value system on Silicon, put it on paper.  That's hard
enough, so why should a dumb bit of constrained plastic and metal promise any
advance over the frail technology of paper? If you can't write it down, you
cannot possibly program it.

So come on you AI types, let's see your *DECLARATIVE TESTIMONIALS* on
this news group by the end of the month. Lay out your value systems
in good technical English. If you can't manage it, or even a little of it,
should you really keep believing that it will fit onto silicon?

------------------------------

Date: 4 May 88 21:10:50 GMT
From: dvm@yale-bulldog.arpa  (Drew Mcdermott)
Subject: Free Will


My contribution to the free-will discussion:

Suppose we have a robot that models the world temporally, and uses
its model to predict what will happen (and possibly for other purposes).
It uses Qualitative Physics or circumscription, or, most likely, various
undiscovered methods, to generate predictions.  Now suppose it is in a
situation that includes various objects, including an object it calls R,
which it knows denotes itself.  For concreteness, assume it believes
a situation to obtain in which R is standing next to B, a bomb with a
lit fuse.  It runs its causal model, and predicts that B will explode,
and destroy R.

Well, actually it should not make this prediction, because R will be
destroyed only if it doesn't roll away quickly.  So, what will R do?  The
robot could apply various devices for making causal prediction, but they
will all come up against the fact that some of the causal antecedents of R's
behavior *are situated in the very causal analysis box* that is trying to
analyze them.  The robot might believe that R is a robot, and hence that
a good way to predict R's behavior is to simulate it on a faster CPU, but
this strategy will be in vain, because this particular robot is itself.
No matter how fast it simulates R, at some point it will reach the point
where R looks for a faster CPU, and it won't be able to do that simulation
fast enough.  Or it might try inspecting R's listing, but eventually it
will come to the part of the listing that says "inspect R's listing."
The strongest conclusion it can reach is that "If R doesn't roll away,
it will be destroyed; if it does roll away, it won't be."  And then of
course this conclusion causes R to roll away.

Hence any system that is sophisticated enough to model situations that its own
physical realization takes part in must flag the symbol describing that
realization as a singularity with respect to causality.  There is simply
no point in trying to think about that part of the universe using causal
models.  The part so infected actually has fuzzy boundaries.  If R is
standing next to a precious art object, the art object's motion is also
subject to the singularity (since R might decide to pick it up before
fleeing).  For that matter, B might be involved (R could throw it), or
it might not be, if the reasoner can convince itself that attempts to
move B would not work.  But all this is a digression.  The basic point
is that robots with this kind of structure simply can't help but think of
themselves as immune from causality in this sense.  I don't mean that they
must understand this argument, but that evolution must make sure that their
causal-modeling systems include the "exempt" flag on the symbols denoting
themselves.  Even after a reasoner has become sophisticated about physical
causality, his model of situations involving himself continues to have this
feature.  That's why the idea of free will is so compelling.  It has nothing
to do with the sort of defense mechanism that Minsky has proposed.

I would rather not phrase the conclusion as "People don't really have
free will," but rather as "Free will has turned out to be possession of
this kind of causal modeler."  So people and some mammals really do have
free will.  It's just not as mysterious as one might think.

                       -- Drew McDermott
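
As a rough illustration of this "causal singularity" -- a toy sketch rather
than McDermott's own formulation; the class and rule format below are
invented for the example -- consider a predictor that refuses to assign a
determinate fate to the symbol it knows denotes itself:

    # Toy Python sketch: a causal predictor that carries an "exempt" flag
    # (here, the self symbol) and never predicts a fixed fate for it.
    class CausalModel:
        def __init__(self, self_symbol):
            self.self_symbol = self_symbol      # e.g. "R"
            self.rules = []                     # (preconditions, outcomes) pairs

        def add_rule(self, preconditions, outcomes):
            self.rules.append((set(preconditions), set(outcomes)))

        def predict(self, facts):
            predictions = []
            for preconditions, outcomes in self.rules:
                if preconditions <= facts:      # the rule applies
                    if any(self.self_symbol in o for o in outcomes):
                        # Singularity: report only a conditional, never a fate.
                        predictions.append("if %s does not act, then %s"
                                           % (self.self_symbol, sorted(outcomes)))
                    else:
                        predictions.append(str(sorted(outcomes)))
            return predictions

    model = CausalModel("R")
    model.add_rule({"B has lit fuse", "R next to B"},
                   {"B explodes", "R destroyed"})
    print(model.predict({"B has lit fuse", "R next to B"}))
    # -> ["if R does not act, then ['B explodes', 'R destroyed']"]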

------------------------------

Date: 6 May 88 12:23:46 GMT
From: sunybcs!rapaport@boulder.colorado.edu  (William J. Rapaport)
Subject: Free Will and Self-Reference

In article <28437@yale-celray.yale.UUCP> dvm@yale.UUCP (Drew Mcdermott) writes:
>
>Suppose we have a robot that models the world temporally, and uses
>its model to predict what will happen... Now suppose it is in a
>situation that includes various objects, including an object it calls R,
>which it knows denotes itself.
>The robot could apply various devices for making causal prediction, but they
>will all come up against the fact that some of the causal antecedents of R's
>behavior *are situated in the very causal analysis box* that is trying to
>analyze them.  The robot might believe that R is a robot, and hence that
>a good way to predict R's behavior is to simulate it on a faster CPU, but
>this strategy will be in vain, because this particular robot is itself.
>...
>Hence any system that is sophisticated enough to model situations that its own
>physical realization takes part in must flag the symbol describing that
>realization as a singularity with respect to causality.

Followers of this debate should, at this point, familiarize themselves
with the literature on "essential indexicals" and "quasi-indexicality",
philosophical analyses designed for precisely such issues about
self-reference.  Here are some pertinent references, each with pointers to
the literature:

Castaneda, Hector-Neri (1966), " `He':  A Study in the Logic of
Self-Consciousness," Ratio 8:  130-157.

Castaneda, Hector-Neri (1967), "On the Logic of Self-Knowledge,"
Nous 1:  9-21.

Castaneda, Hector-Neri (1967), "Indicators and Quasi-Indicators,"
American Philosophical Quarterly 4:  85-100.

Castaneda, Hector-Neri (1968), "On the Logic of Attributions of
Self-Knowledge to Others," Journal of Philosophy 64:  439-456.

Perry, John (1979), "The Problem of the Essential Indexical," Nous 13:
3-21.

Rapaport, William J. (1986), "Logical Foundations for Belief Representation,"
Cognitive Science 10:  371-422.
                                        William J. Rapaport
                                        Assistant Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {decvax,watmath,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||

------------------------------

Date: 3 May 88 15:36:40 GMT
From: bwk@MITRE-BEDFORD.ARPA  (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

I also went back to reread Professor Minsky's theory of Free Will in the
concluding chapters of _Society of Mind_.  I am impressed with the
succinctness with which Minsky captures the essential idea that
individual behavior is generated by a mix of causal elements (agency
motivated by awareness of the state-of-affairs vis-a-vis one's value system)
and chance (random selection among equal-valued alternatives).

The only other treatises on Free Will that I resonated with were the
ones by Raymond Smullyan ("Is God a Taoist?" in _The Tao Is Silent_ and
reprinted in Hofstadter and Dennett's _The Mind's I_) and the book by
Daniel Dennett (_Elbow Room: The Varieties of Free Will Worth Wanting_).

My own contribution to this discussion is summarized in the only free
verse I ever composed in my otherwise prosaic career:


                             Free Will

                                or

                         Self Determination



          I was what I was.

                     I am what I am.

                               I will be what I will be.


--Barry Kort

------------------------------

Date: Thu, 5 May 88 08:49 EDT
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: Re: Free Will

$0.02: "The Mysterious Stranger" by Mark Twain is a novella dealing with
free will and determinism that the readers of this list may find interesting.

------------------------------

Date: Tue, 3 May 88 01:38 EST
From: Jeffy <JCOGGSHALL%HAMPVMS.BITNET@MITVMA.MIT.EDU>
Subject: Re: Proper subject matter of AILIST

______________________________________________________________________________

>Date: 28 Apr 88 15:42:18 GMT
>From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
>If the combined list is to keep its present readership, which includes some
>of the major names in AI (both Minsky and McCarthy read AILIST), the content
>of this one must be improved a bit.

     - As to questions about what is the appropriate content of AIlist -
Presumably it is coextensive with the field "AI" - which I would find
difficult to put a lid on - if you could do it, I would be happy. I
suggest, however, that you cannot, for all research field boundaries are,
by their nature, arbitrary (or, if not arbitrary, then _always_ extremely
fuzzy at the edges).
     Quality - I cannot speak for; however, if you suggest limiting the
group of people who can make contributions to AIList to, perhaps, PHD's in
computer science, or perhaps, the membership of AAAI....
     Or perhaps, limiting the content to specific technical issues (as
opposed to the "philosophical" debates about AI & Freewill and AI & Ethics,
or AI & Mimicking "human" consciousness....
     Well, there are several reasons why this is just plain bad (and you
must understand that as I argue my position - I am a person who is
profoundly interested in AI & Ethics, and in hearing what people currently
working in "AI" think about ethics, as it relates to the work they are
doing).
     Reason 1: What would be the point of making such regulations? Why not
respond to it case by case? Since, as I note, there are fuzzy boundaries,
why not allow the readership (ever heard of representative democracy?) to be
the ones who put the social pressure on the contributors to contribute what
they want to hear about?
     Anti-your-argument 3: So, we don't want to lose our present
readership.... (which includes Minsky & McCarthy) - and we should tailor
the "quality" (please tell me what this is) and the "subject matter" of our
submissions to "keep" our "major names"? Why? Because they give the list a
"good atmosphere"? Because they can hear what "lesser figures" have to say
and perhaps drop a few pearls of wisdom into our midst? Why?
     Reason 2: I don't know about you, but I have this bias against the
sectarianism and ivory-towerism of the scientific community at large such
that the common society is excluded from decisions that are being made
about the direction of research, etc., that are going to have major effects
in years to come. Often, there is an unspoken "we are better because we
_really_ know what we're doing" attitude among the scientific community,
and I would like you to tell me why your message isn't representative of
it.
                                            Jeff Coggshall

------------------------------

Date: 3 May 88 19:05:06 GMT
From: centro.soar.cs.cmu.edu!acha@PT.CS.CMU.EDU  (Anurag Acharya)
Subject: this is philosophy ??!!?

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
>I am always suspicious of any academic activity which has to request that it
>becomes a philosophical no-go area.  I know of no other area of activity which
>is so dependent on such a wide range of unwarranted assumptions.  Perhaps this
>has something to do with the axiomatic preferences of its founders, who came
>from mathematical traditions where you could believe anything as long as it
>was logically consistent.

Just when is an assumption warranted ? By your yardstick (it seems ),
'logically inconsistent' assumptions are more likely to be warranted than
the logically consistent ones. Am I parsing you wrong or do you really claim
that ?!

>logical errors in an argument is not enough to sensibly dismiss it, otherwise
>we would have to become resigned to widespread ignorance.

Same problem again ( sigh ! ). Just how do you propose to argue with logically
inconsistent arguments ? Come on Mr. Cockton, what gives ?

> My concern over AI
>is that, like some psychology, it has no integration with social theories,
>especially those which see 'reality' as a negotiated outcome of social
>processes, and not logically consistent rules.

You wish to assert that reality is a negotiated outcome of social processes ???
Imagine Mr. Cockton, you are standing on the 36th floor of a building and you
and your mates decide that you are Superman and can jump out without getting
hurt.  By the 'negotiated outcome of social processes' claptrap, you really
are Superman.  Would you then jump out and have fun ?

> Your system should prevaricate, stall, duck the
>issue, deny there's a problem, pray, write to an agony aunt, ask its
>mum, wait a while, get its friends to ring it up and ask it out ...

Whatever does all that stuff have to do with intelligence per se ?

Mr. Cockton, what constitutes a proof among you and your
"philosopher/sociologist/.." colleagues ? Since logical consistency is taboo,
logical errors are acceptable, reality and truth are functions of the current
whim of the largest organized gang around ( oh! I am sorry, they are the
'negotiated ( who by ? ) outcomes of social processes ( what processes ? )')
how do you guys conduct research ? Get together and vote on motions or what ?

-- anurag

--
Anurag Acharya                Arpanet: acharya@centro.soar.cs.cmu.edu

"There's no sense in being precise when you don't even know what you're
 talking about"   -- John von Neumann

------------------------------

End of AIList Digest
********************

∂09-May-88  0723	LAWS@KL.SRI.COM 	AIList V6 #94 - Philosophy  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 May 88  07:22:47 PDT
Date: Sun  8 May 1988 23:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #94 - Philosophy
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 94

Today's Topics:
  Philosophy - Free Will, Self-Awareness

----------------------------------------------------------------------

Date: 5 May 88 14:40:56 GMT
From: planting@speedy.cs.wisc.edu  (W. Harry Plantinga)
Subject: Free Will & Mcdermott's argument

In article <28437@yale-celray.yale.UUCP> dvm@yale.UUCP (Drew Mcdermott) writes:

>. . . any system that is sophisticated enough to model situations that its own
>physical realization takes part in must flag the symbol describing that
>realization as a singularity with respect to causality.
> . . . robots with this kind of structure simply can't help but think of
>themselves as immune from causality in this sense.
>Even after a reasoner has become sophisticated about physical
>causality, his model of situations involving himself continues to have this
>feature.  That's why the idea of free will is so compelling.
>I would rather not phrase the conclusion as "People don't really have
>free will," but rather as "Free will has turned out to be possession of
>this kind of causal modeler."  So people and some mammals really do have
>free will.  It's just not as mysterious as one might think.
>
>                       -- Drew McDermott

Taking this as an argument that people don't have free will in the
common sense, let's see what we have.  Is this a fair restatement
of the argument?

    (1)  Systems that reason about causality and that reason about
    themselves have a singularity.  They must consider themselves
    exempt from causality.

    (2)  Therefore people are subject to causality and do not have
    free will (in the common sense).

As it stands, this is not a sound argument.  Clearly there are a couple
of unwritten assumptions here.  Perhaps these:

    (1.1)  People are "systems that reason about causality" (assuming
    materialism)

    (1.2)  The feeling of free will is the same thing as the feeling
    of not being able to reason about causality with respect to self.

If we accept (1), (1.1), and (1.2) we still can't conclude that people
don't have free will.  The best result that can be argued is

    (2')  If people didn't have free will, they would still feel that
    they did.

Note that this is very much distinct from (2), and even this argument
is based on some highly disputable premisses.  (1.1) is surely not
commonly agreed upon outside of AI, and (1.2) is also dubious.  My free
will doesn't feel like the simple inability to figure out what I am
going to do in the future by reasoning about causality.  My free will
feels like the ability to choose among alternatives no matter what my
reason, emotion, conscience etc. say is more "reasonable."

As an argument that people don't have free will in the common sense,
this would only be convincing to someone who holds (1.1) and (1.2) and
that (2') implies (2), i.e. someone who already thinks people don't
have free will.


Harry Plantinga
planting@cs.wisc.edu

------------------------------

Date: 3 May 88 08:45:49 GMT
From: mcvax!ukc!reading!onion!henry!jadwa@uunet.uu.net  (James
      Anderson)
Subject: Re: Free Will & Self Awareness

In article 1380 of comp.ai: bwk@mitre-bedford.ARPA (Barry W.
Kort) says:

> Suppose I were able to inculcate a Value System into silicon.
> And in the event of a tie among competing choices, I use a
> random mechanism to force a decision.  Would the behavior of
> my system be very much different from a sentient being with
> free will?

Well ...

I take, "free will" to mean that an agent can choose what action
to take despite the choices that other agents might make and
despite unchosen events in the world.

There are a number of corollaries to this definition.

1)  I am a strong minded person, so I often exercise free will,
    but you can deny me free will, say, by killing me.

2a) You might exercise your free will by making an oath with
    yourself never to deny me free will, say, by never applying
    irresistible force to me.

2b) Making an oath does not deny your own free will. You can
    choose to break the oath.

3a) Random events in the choice mechanism deny free will to the
    extent that they prevent the agent from determining the
    outcome of a decision.

3b) I know of no way to discover, by observing the behaviour of an
    agent, that exception (3a) applies to it. So the answer to
    your question is "yes": I can not tell apart the behaviour of
    a random system and one with free will. A free-will system
    could, for example, choose to behave randomly.

4) An event which is not of any agent's choosing might deny me
   free will. The event may be random, as in losing a game of
   Russian roulette, or deterministic, like being drowned
   in a rising tide.

So far, so good. But here comes the free will paradox, with knobs
on!

If the world is deterministic I am denied free will because I can
not determine the outcome of a decision. On the other hand, if
the world is random, I am denied free will because I can not
determine the outcome of a decision. Either element, determinacy
or randomness, denies me free will, so no mixture of a
deterministic and a non-deterministic world will allow me
free will.

I am going to think about that for a little while before I post
again.

James

(JANET) James.Anderson@reading.ac.uk

------------------------------

Date: 5 May 88 15:36:34 GMT
From: sunybcs!sher@boulder.colorado.edu  (David Sher)
Subject: Re: Free Will & Self Awareness

It seems that people are discussing free will and determinism by
trying to distinguish true free will from random behavior.  There is a
fundamental problem with this topic.  Randomness itself is not well
understood.  If you could get a good definition of random behavior you
may have a better handle on free will.

Consider this definition of random behavior:
X is random iff its value is unknown.

This is I believe a valid definition of randomness.  But in this case
free will may be a subset of random behaviors.  Other more
sophisticated definitions may be proposed for randomness.

On a similar note, to decide if we can allow machines to take
responsibility (which seems to be bothering our English contingent), we
must decide just what responsibility is.  We already entrust machines
with our lives and have for thousands of years (since we invented
boats).
-David Sher
ARPA: sher@cs.buffalo.edu       BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

------------------------------

Date: 4 May 88 12:55:36 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Re: Free Will & Self-Awareness

In article <30800@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>The essential idea that
>individual behavior is generated by a mix of causal elements (agency
>motivated by awareness of the state-of-affairs vis-a-vis one's value system)
>and chance (random selection among equal-valued alternatives).

This is a central tenet of Systems Science.  Absolute determinism is
possible and relatively common; absolute freedom is impossible; relative
freedom is possible and relatively common.  Most (all?) real systems
involve mixes of relatively free and determined elements operating at
multiple levels of interaction/complexity.

It should be emphasized that this is not just true of mental systems,
but also of biological and even physical systems.  As one moves from the
physical to the biological and finally to the mental, the relative
importance of the free components grows.  Intelligent organisms are more
free than unintelligent organisms; which are more free than
non-organisms.

None of the above are absolutely free.  No-one even knows what it might
mean to be absolutely free.

>--Barry Kort


--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 3 May 88 16:04:00 GMT
From: channic@m.cs.uiuc.edu
Subject: Re: AIList V6 #86 - Philosophy


In his article Brian Yamauchi (yamauchi@speech2.cs.cmu.edu) writes:
> /* ---------- "Re: AIList V6 #86 - Philosophy" ---------- */
> In article <368693.880430.MINSKY@AI.AI.MIT.EDU>, MINSKY@AI.AI.MIT.EDU
> (Marvin Minsky) writes:
> > Yamauchi, Cockton, and others on AILIST have been discussing freedom
> > of will as though no AI researchers have discussed it seriously.  May
> > I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
> > claim to have a good explanation of the free-will phenomenon.
>
> Actually, I have read The Society of Mind, where Minsky writes:
>
> | Everything that happens in our universe is either completely determined
> | by what's already happened in the past or else depends, in part, on
> | random chance.  Everything, including that which happens in our brains,
> | depends on these and only on these :
> |
> | A set of fixed, deterministic laws. A purely random set of accidents.
> |
> | There is no room on either side for any third alternative.
>
I see plenty of room -- my own subjective experience.  I make mental
decisions which are not random and are not completely determined (although
certainly influenced) by the past.  Minsky wondered why his
explanation seems to have eluded philosophers of the past.  I am not
surprised because evidently he is just being swept away out of control
in an entirely random or totally determined universe.  The philosophers
of the past were probably like me -- intelligent, sentient beings with
free will.  Are these philosophers and myself in the minority?
I think not, but I am surprised that such views constitute RESPECTED,
let alone, PUBLISHED material on the subject.  Certainly this objectivist
viewpoint helps the discipline of AI:  people (i.e. funding agencies) will
be more likely to believe that a machine can be intelligent if intelligence
can be reduced to a set of purely deterministic laws.  But this BEGS THE
QUESTION of intelligent machines in the worst way.  Show me the deterministic
laws that create mind, Dr. Minsky, then I will believe there is no free will.
Otherwise, you are trying to refute an undeniable human experience.
Do you believe your career was merely the result of some bizarre genetic
combination or pure chance?

The attack is over.  The following is a plea to all AI researchers.  Please
do not try to persuade anyone, especially impressionable students, that s/he
does not have free will.  Everyone has the ability to choose to bring peace
to his or her own life and to the rest of society, and has the ability to
MAKE A DIFFERENCE in the world.  Free will should not be compromised for the
mere prospect of creating an intelligent machine.

Tom Channic
University of Illinois
channic@uiucdcs.cs.uiuc.edu
{decvax|ihnp4}!pur-ee!uiucdcs!channic

------------------------------

Date: 5 May 88 06:12:45 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (vu0112)
Subject: Re: Free Will & Self Awareness

In article <770@onion.cs.reading.ac.uk> jadwa@henry.cs.reading.ac.uk
(James Anderson) writes:
>3a) Random events in the choice mechanism deny free will to the
>    extent that they prevent the agent from determining the
>    outcome of a decision.

Interesting.  So you understand freedom to be the ability for me to
determine my own actions, as opposed to them being determined by
external sources.  Thus you're defining freedom in terms of a different
kind of determinism, a stance which seems problematic.

Why not instead freedom as simply the *lack* of *complete* external
control?  This way you allow *degrees of freedom*.  That is, I am free
if I am not determined.  This in no way implies that I in turn must
determine something else.

This also helps us sort out the difference between subjective and
objective uncertainty (freedom).  Objective freedom is understood as the
(e.g.  quantum) inherent uncertainty in processes, whereas subjective
uncertainty is the premise that I lack information about a possibly
determinate process.  On my definition, both conditions indicate freedom
to me, but to you in the latter case we are still determined.

>So far, so good. But here comes the free will paradox, with knobs
>on!

I think my definition knocks your knobs off.

>If the world is deterministic I am denied free will because I can
>not determine the outcome of a decision.

For me, if the world is deterministic I am granted freedom when I cannot
determine the outcome of a decision.

>On the other hand, if
>the world is random, I am denied free will because I can not
>determine the outcome of a decision.

Yes, no matter if the world is determined or not, I can never determine
the outcome of a decision.  This is an inherent epistemic limitation,
independent of the state of the world.

I think that's obviously correct.  Further, by showing us that
determinism is impossible, I think you've just demonstrated that freedom
is necessary.

>I am going to think about that for a little while before I post
>again.

What I find fascinating is the implicit assumption that determinism is
the "normal" general case, and that freedom is come kind of strange
property rarely seen.  I think that just the opposite is true, that
deterministic processes of any kind are very rare.  Demonstrating the
freedom of *any* organism, let alone humans, is trivial.  Demonstrating
our determinism is a silly philosophical waste of time.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 5 May 88 07:00:47 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Sorry, no philosophy allowed here.

In article <1069@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> > What about compatibilism?  There are a lot of arguments that free will is
> > compatible with strong determinism.  (The ones I've seen are riddled with
> > logical errors, but most philosophical arguments I've seen are.) (O'Keefe)
> I would not deny the plausibility of this approach.  However, detection of
> logical errors in an argument is not enough to sensibly dismiss it, otherwise
> we would have to become resigned to widespread ignorance.  My concern over AI
> is that, like some psychology, it has no integration with social theories,
> especially those which see 'reality' as a negotiated outcome of social
> processes, and not logically consistent rules.  If the latter approach to
> 'reality', 'truth' etc. were feasible, why have we needed judges to deliver
> equity? For some AI enthusiasts, the answer of course, is that we don't.  In
> the brave new world, machines will interpret the law unequivocally, making
> the world a much fairer place:-) Anyway, everyone knows that mathematicians
> are much smarter than lawyers and can catch up with them in a few months.
> Hey presto, rule-base!

The rider was intended to indicate that I neither endorse nor reject
compatibilism.  In philosophy, I fear that I _have_ become resigned to
ignorance (though some recent moral philosophers have raised my hopes).

Concerning 'reality' as a negotiated outcome of social processes, do you
remember the old days when we were assured that colour terms were a social
construct, and that biological species were merely a theoretical construct
imposed on a world with no boundaries?  An archaeologist once commented in
a lecture that I attended that a number of ancient peoples had the practice
of "killing" the deceased's possessions (breaking bowls, bending knives,
tearing cloth &c), and that by and large the artifacts that had been
damaged least (e.g. just a "token" chip out of the rim of a bowl) were the
ones the archaeologists found most beautiful (comparing reconstruction with
reconstruction).  Negotiated social processes between people 5,000 years
apart?

Why have we needed judges to deliver equity?
(a) Because anyone who does that we _call_ a judge.
(b) They don't.
(c) The use of judges has nothing to do with the nature of reality.
    The facts may be known to both parties, yet either or both may
    simply be pushing to see what it/they can get away with.
    Ever heard of "the Kerry alibi?"
(d) This is something of a loaded example, insofar as law _is_ a socially
    negotiated field (particularly in the USA).  But none of the admittedly
    few books on jurisprudence I've read makes any claim that the law is an
    instrument for reaching 'truth' or 'reality', only a mechanism for
    reducing the level of dispute in a society to a workable degree.  (It
    doesn't even matter too much if a law is and is known to be unjust,
    so long as you know what it _is_ and can rely on it well enough to avoid
    its consequences.)

For an example of someone who _has_ looked at AI with an eye to sociology, see:
        Plans and Situated Actions : The problem of human/machine communication
        Lucy A. Suchman
        Cambridge University Press 1987
        ISBN 0-521-33739-9

------------------------------

Date: 3 May 88 15:46:10 GMT
From: otter!cwp@hplabs.hp.com  (Chris Preist)
Subject: Re: Free Will & Self-Awareness

O.S. writes...

>> I would like to learn how to imbue silicon with consciousness,
>> awareness, free will, and a value system.
>
>  First, by requesting that, you are underestimating yourself as a
> free-willing creature, and second, your request is self-contradicting and
> shows little understanding of matters, like free will and value systems -
> such things cannot be 'given', they simply exist. (Something to bear in
> mind for other purposes, besides to AI...). You can write 'moral' programs,
> even in BASIC, if you want, because they will have YOUR value system....

Sorry, but not correct. While it is quite possible that the goal of 'imbuing
silicon with a value system' may never be fulfilled, it is NOT correct to
say that values simply exist.

Did my value system exist before my conception? I doubt it. I learnt it,
through interaction with the environment and other people. Similarly, a
(possibly deterministic) program MAY be able to learn a value system, as
well as what an arch looks like. Simply because we have values does not
mean we are free.

On the question of Free Will - simply because someone denies that we are
truly free does not mean they have little understanding of the matter.
As Sartre pointed out, we have an overwhelming SUBJECTIVE sensation of
freedom. Questioning this sensation is a major step, but a step which
has to be made. Up to now, the problem has been purely metaphysical. An
answer was impossible. But now, AI provides an investigation into
deterministic intelligence. I believe it IS important for AI researchers
to make an effort to understand the philosophical arguments on both sides.
Maybe your heart will lie on one of those sides, but the mind must remain
as open as possible.

Chris Preist


P.S. Note that AI only provides a semi-decision procedure for the problem
of free will. Determinism would be proven (though even this is debatable)
if an 'intelligent' deterministic system were created. However, if objective
free will exists, then we could hack away with the infinite monkeys, all
to no avail.

------------------------------

End of AIList Digest
********************

∂09-May-88  1044	LAWS@KL.SRI.COM 	AIList V6 #95 - Philosophy  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 May 88  10:44:33 PDT
Date: Mon  9 May 1988 00:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #95 - Philosophy
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 95

Today's Topics:
  Philosophy - Free Will & Responsibility

----------------------------------------------------------------------

Date: 5 May 88 21:56:31 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

I was intrigued by David Sher's comments about "machine responsibility".

It is not uncommon for a child to "spank" a machine which misbehaves.
But as adults, we know that when a machine fails to carry out its
function, it needs to be repaired or possibly redesigned.  But we
do not punish the machine or incarcerate it.

Why then, when a human engages in undesirable behavior, do we resort
to such unenlightened corrective measures as yelling, hitting, or
deprivation of life-affirming resources?

--Barry Kort

------------------------------

Date: 5 May 88 20:09:02 GMT
From: sunybcs!bettingr@boulder.colorado.edu  (Keith E. Bettinger)
Subject: Re: AIList V6 #86 - Philosophy

In article <3200016@uiucdcsm> channic@uiucdcsm.cs.uiuc.edu writes:
>In his article Brian Yamauchi (yamauchi@speech2.cs.cmu.edu) writes:
>> /* ---------- "Re: AIList V6 #86 - Philosophy" ---------- */
>> In article <368693.880430.MINSKY@AI.AI.MIT.EDU>, MINSKY@AI.AI.MIT.EDU
>> (Marvin Minsky) writes:
>> > Yamauchi, Cockton, and others on AILIST have been discussing freedom
>> > of will as though no AI researchers have discussed it seriously.  May
>> > I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
>> > claim to have a good explanation of the free-will phenomenon.
>>
>> Actually, I have read The Society of Mind, where Minsky writes:
>>
>> | Everything that happens in our universe is either completely determined
>> | by what's already happened in the past or else depends, in part, on
>> | random chance.  Everything, including that which happens in our brains,
>> | depends on these and only on these :
>> |
>> | A set of fixed, deterministic laws.   A purely random set of accidents.
>> |
>> | There is no room on either side for any third alternative.
>>
>I see plenty of room -- my own subjective experience.  I make mental
>decisions which are not random and are not completely determined (although
>certainly influenced) by the past.

How do you know that?  Do you think that your mind is powerful enough to
comprehend the immense combination of effects of determinism and chance?
No one's is.

> [...] But this BEGS THE
>QUESTION of intelligent machines in the worst way.  Show me the deterministic
>laws that create mind, Dr. Minsky, then I will believe there is no free will.
>Otherwise, you are trying to refute an undeniable human experience.

No one denies that we humans experience free will.  But that experience says
nothing about its nature; at least, nothing ruling out determinism and chance.

>Do you believe your career was merely the result of some bizarre genetic
>combination or pure chance?
             ↑↑
The answer can be "yes" here, if the conjunction is changed to "and".

>
>The attack is over.  The following is a plea to all AI researchers.  Please
>do not try to persuade anyone, especially impressionable students, that s/he
>does not have free will.  Everyone has the ability to choose to bring peace
>to his or her own life and to the rest of society, and has the ability to
>MAKE A DIFFERENCE in the world.  Free will should not be compromised for the
>mere prospect of creating an intelligent machine.

Believe it or not, Minsky makes a similar plea in his discussion of free will
in _The Society of Mind_.  He says that we may not be able to figure out where
free will comes from, but it is so deeply ingrained in us that we cannot deny
it or ignore it.

>
>Tom Channic
>University of Illinois
>channic@uiucdcs.cs.uiuc.edu
>{decvax|ihnp4}!pur-ee!uiucdcs!channic

-------------------------------------------------------------------------
Keith E. Bettinger                  "Perhaps this final act was meant
SUNY at Buffalo Computer Science     To clinch a lifetime's argument
                                     That nothing comes from violence
CSNET:    bettingr@Buffalo.CSNET     And nothing ever could..."
BITNET:   bettingr@sunybcs.BITNET               - Sting, "Fragile"
INTERNET: bettingr@cs.buffalo.edu
UUCP:     ..{bbncca,decvax,dual,rocksvax,watmath,sbcs}!sunybcs!bettingr

------------------------------

Date: 5 May 88 15:49:12 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Sorry, no philosophy allowed here.

In an earlier article, I wrote:
>> Suppose I were able to inculcate a Value System into silicon.   And in the
>> event of a tie among competing choices, I use a  random mechanism to force
>> a decision.  Would the behavior of  my system be very much different from a
>> sentient being with free will? (--Barry Kort)

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) responds:
>Oh brave Science! One minute it's Mind on silicon, the next it's a little
>randomness to explain the inexplicable.  Random what? Which domain? Does it
>close? Is it enumerable?  Will it type out Shakespeare? More seriously
>'forcing decisions' is a feature of Western capitalist society
>(a historical point please, not a political one).
>There are consensus-based (small) cultures where
>decisions are never forced and the 'must do something now' phenomenon is
>mercifully rare.  Your system should prevaricate, stall, duck the
>issue, deny there's a problem, pray, write to an agony aunt, ask its
>mum, wait a while, get its friends to ring it up and ask it out ...

Random selection from a set of equal-value alternatives known to
the system.  The domain is just the domain of knowledge possessed by
the decision maker.  The domain is finite, incomplete, possibly
inconsistent, and evolving over time.  It might type out Shakespeare,
especially if 1) it were asked to, 2) it had no other pressing
priorities, and 3) it knew some Shakespeare or knew where to find some.

One of the alternative decisions that the system could take is to emit
the following message:

        "I am at a dilemma such that I am not aware of a good action
         to take, other than to emit this message."

The above response is not particularly Western.  (Well, I suppose it
could be translated into Western-style behavior:  "[Panic mode]:  I
don't know what to do!!!")

>Before you put your value system on Silicon, put it on paper.  That's hard
>enough, so why should a dumb bit of constrained plastic and metal promise any
>advance over the frail technology of paper? If you can't write it down, you
>cannot possibly program it.

I actually tried to do this a few years ago.  I ended up with a book-length
document which very few people were interested in reading.

>So come on you AI types, let's see your *DECLARATIVE TESTIMONIALS* on
>this news group by the end of the month. Lay out your value systems
>in good technical English. If you can't manage it, or even a little of it,
>should you really keep believing that it will fit onto silicon?

Gee Gilbert, I'm still trying to discover whether a good value system
will fit onto protoplasmic neural networks.  But seriously, I agree
that we haven't a prayer of communicating our value systems to silicon
if we can't communicate them to ourselves.

--Barry Kort
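
A minimal sketch of the mechanism described above -- not Kort's code, and
the function and names are illustrative only -- is random choice among the
equal-valued alternatives the system knows about, with the "dilemma"
message as the fallback when it knows of no action at all:

    import random

    DILEMMA = ("I am at a dilemma such that I am not aware of a good action "
               "to take, other than to emit this message.")

    def decide(alternatives):
        """alternatives: dict mapping action names to estimated values."""
        if not alternatives:
            return DILEMMA
        best = max(alternatives.values())
        ties = [a for a, v in alternatives.items() if v == best]
        return random.choice(ties)    # random pick among equal-valued options

    print(decide({"roll away": 0.9, "throw B": 0.9, "pray": 0.1}))
    print(decide({}))                 # nothing known -> the dilemma message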

------------------------------

Date: 6 May 88 04:01:53 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Punishment of machines

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>I was intrigued by David Sher's comments about "machine responsibility".
>
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned.  But we
>do not punish the machine or incarcerate it.
>
      The concept of a machine which could be productively punished is
not totally unreasonable.  It is, in fact, a useful property for some robots
to have.  Robots that operate in the real world need mechanisms that implement
fear and pain to survive.  Such machines will respond positively to punishment.

      I am working toward this end, am constructing suitable hardware and
software, and expect to demonstrate such robots in about a year.


                                        John Nagle

------------------------------

Date: 4 May 88 16:55:42 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net  (Simon Brooke)
Subject: Re: Social science gibber [Was Re:  Various Future of AI

People, this is not a polite message; if polemic offends you, hit <n> now.
It is, however, a serious message, which reflects on attitudes which have
become rather too common on this mailing list. Some time ago, Thomas
Maddox, of Fort Lauderdale, Florida responded to a mailing by Gilbert
Cockton (Scottish HCI Centre, Glasgow), in a way which showed he was both
ignorant and contemptuous of Sociology. Now it isn't a crime to be
ignorant, or contemptuous - I am frequently both, about Unix for example.
But when I am, I'm not surprised to be corrected.

In any case, Tom's posting annoyed me, and I replied sharply. And he, in
turn, has replied to me. Whilst obviously there's no point in spinning
this out ad infinitum, there are a few points to be responded to. In his
first message, Tom wrote:

        "Rigorous sociology/contemporary anthropology"? Ha ha....

I responded:

        Are we to assume that the author doubts the rigour of Sociology,
        or the contemporary nature of anthropology?

and Tom has clarified:

        Yes, I think you could assume both, pal.

That anthropology is contemporary is a matter of fact, not debate.
Anthropologists are contemporarily studying contemporary cultures. If you
doubt that, you obviously are not reading any contemporary anthropology.

Tom's claim that he doubts the rigour of sociology, whilst more believable,
displays equal lack of knowledge of the field. What is more disturbing is
his apparent view that the 'dogma' which 'plagues the social sciences' is
less prevalent in the sciences. Has he read Thomas Kuhn's work on
scientific revolutions?

Tom also takes issue with my assertion that:

        AI (is) an experimental branch of Philosophy

AI has two major concerns: the nature of knowledge, and the nature of
mind. These have been the central subject matter of philosophy since
Aristotle, at any rate. The methods used by AI workers to address these
problems include logic - again drawn from Philosophy. So to summarise:
AI addresses philosophical problems using (among other things)
philosophers tools. Or to put it differently, Philosophy plus hardware -
plus a little computer science - equals what we now know as AI. The fact
that some workers in the field don't know this is a shameful indictment of
the standards of teaching in AI departments.

Too many AI workers - or, to be more accurate, too many of those now
posting to this newsgroup - think they can get away with arrogant
ignorance of the work on which their field depends.

Finally, with regard to manners, Tom writes:

        My original diatribe came as a response to a particularly
        self-satisfied posting by (apparently) a sociologist attacking
        AI research as uninformed, puerile, &c.

If Tom doesn't know who Gilbert Cockton is, then perhaps he'd better not
waste time reading up sociology, anthropology, and so on. He's got enough
to do keeping up with the computing journals.

** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*************************************************************************

------------------------------

Date: 6 May 88 12:55:00 EDT
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: replies on free will


larry@VLSI.JPL.NASA.GOV writes:

> I'm surprised that no one has brought up the distinction between will and
> free will.  The latter (in the philosophy courses I took) implies complete
> freedom to make choices, which for humans seems debatable.  For instance,
> I don't see how anyone can choose an alternative that they do not know
> exists.

Not in the philosophy courses I took.  I think we all mean by "free will"
*some* freedom - it's not clear what "complete" freedom even means.
If I can freely (non-deterministically) choose between buying
vanilla or chocolate, then I have (some) freedom of the will.


yamauchi@SPEECH2.CS.CMU.EDU  (Brian Yamauchi) writes:

> As to whether this is "free" or not, it depends on your definition of
> freedom.  If freedom requires some force independent of genetics,
> experience, and chance, then I suppose this is not free.  If freedom
> consists of allowing an individual to make his own decisions without
> coercion from others, then this definition is just as compatible with
> freedom as any other.

This confuses freedom of action with freedom of the will.  No one doubts
that in the ordinary situation, there are no *external* constraints
forcing me to choose vanilla or chocolate.  If "free will" is defined
as the mere absence of such constraints, then, trivially, I have free
will;  but that is not the significant question.  We all agree that
*if* I choose to buy chocolate, I will succeed; but this is better
called "effective will" not "free will".  The issue is whether
indeed there are subtle internal constraints that make my choice
itself causally inevitable.


Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU> writes:

>        Free will seems to me to mean that regardless of state 0 the agent
> can choose which one of the possible states it will be in at time T.  A
> necessary precondition for free will is that the world be indeterministic.
> This does not, however, seem to be a sufficient condition since radioactive
> decay is indeterministic but the particles do not have free will.
>        Free will should certainly be more than just our inability to
> predict an outcome, since that is consistent with limited knowledge in
> a deterministic world.  And it must be more than indeterminism.
>       My questions:
>
> Given these definitions, (1) What is free will for a machine?
>                          (2) Please provide a test that will determine
>                              if a machine has free will. The test should
>                              be quantitative, repeatable, and unambiguous.

So far, so good ... Note that an assertion of free will is (at least)
a denial of causality, whose presence itself can be confirmed only
inferentially (we never directly SEE the causing, as D. Hume reminds
us).  Well, in general, suppose we wanted to show that some event E
had no cause - what would constitute a test?  We're in a funny
epistemological position because the usual scientific assumption
is that events do have causes and it's up to us to find them.
If we haven't found one yet, it's ascribed to our lack of cleverness,
not lack of causality.  So one can *disprove* free will by exhibiting
the causal scheme in which our choice is embedded, just as one
disproves "freedom of the tides".  But it doesn't seem that one has
any guarantee of being able to prove the absence of causality in a
positive way.  The indeterminism of electrons, etc, is accepted
because the indeterminism is itself a consequence of an elaborate and
well-confirmed theory; but we should not expect that such a
theory-based rationale will always be available.

Note in general that testing for absences is a tricky business -
"there are no white crows".  Assuming an exhaustive search is out of
the question, all you can do is keep looking - after a while, if you
don't find any, you just have to (freely?) decide what you want to
believe (cf. "there are no causes of my decision to choose
chocolate").

It's also worth pointing out that macro-indeterminism is not
sufficient (though necessary) for free will.  If we rigged up a
robot to turn left or right depending on some microscopically
indeterministic event (btw, this "magnification" of micro- to
macro-indeterminism goes on all the time - nothing unusual),
most of us would hardly credit such a robot as having free will.


John Cugini  <Cugini@icst-ecf.arpa>

------------------------------

Date: 6 May 88 01:12:12 GMT
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Reply-to: bwk@mbunix (Barry Kort)
Subject: Re: AIList V6 #87 - Queries, Causal Modeling, Texts

Spencer Star asks:

        (1) What is free will for a machine?
        (2) Please provide a test that will determine
            if a machine has free will. The test should
            be quantitative, repeatable, and unambiguous.

I suggest the following implementation of Free Will, which I believe
would engender behavior indistinguishable from a sentient being
with Free Will.

1)  Imbue the machine with a Value System.  This will enable the machine
    to rank by preference or utility the desirability of the anticipated
    outcomes of pursuing alternative courses of action.

2)  Provide a random choice mechanism for selecting among equal-valued
    alternatives.

3)  Allow the Value System to learn from experience.

4)  Seed the Value System with a) the desire to survive and b) the desire
    to construct accurate maps of the state-of-affairs of the world
    and accurate models for predicting future states-of-affairs from a
    given state as a function of possible actions open to the machine.

--Barry Kort
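
Read concretely, the four points above could look something like the
following sketch (again illustrative only, not Kort's implementation; the
seed values and the learning rate are arbitrary assumptions):

    import random

    class ValueSystemAgent:
        """Toy agent: a value system, random tie-breaking, learning from
        experience, and seeded initial desires (points 1-4 above)."""

        def __init__(self):
            # (4) Seed the value system with survival and map-building.
            self.values = {"preserve self": 1.0,
                           "improve world model": 1.0}

        def choose(self, available_actions):
            # (1) Rank the alternatives by the value system.
            ranked = {a: self.values.get(a, 0.0) for a in available_actions}
            best = max(ranked.values())
            ties = [a for a, v in ranked.items() if v == best]
            # (2) Random choice among equal-valued alternatives.
            return random.choice(ties)

        def learn(self, action, outcome_utility):
            # (3) Let experience adjust the value attached to an action.
            old = self.values.get(action, 0.0)
            self.values[action] = old + 0.1 * (outcome_utility - old)

    agent = ValueSystemAgent()
    action = agent.choose(["preserve self", "explore", "idle"])
    agent.learn(action, outcome_utility=0.5)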

------------------------------

End of AIList Digest
********************

∂09-May-88  1421	LAWS@KL.SRI.COM 	AIList V6 #96 - Philosophy  
Received: from KL.SRI.COM by SAIL.Stanford.EDU with TCP; 9 May 88  14:21:05 PDT
Date: Mon  9 May 1988 00:07-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #96 - Philosophy
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 96

Today's Topics:
  Philosophy - Free Will & Randomness

----------------------------------------------------------------------

Date: 3 May 88 19:23:33 GMT
From: ulysses!sfmag!sfsup!glg@ucbvax.Berkeley.EDU  (G.Gleason)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
         the Future]

In article <1053@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:

>The main objection to AI is when it claims to approach our humanity.

>                       It cannot.

That's a pretty strong claim to make without backing it up.

I'm not saying that I disagree with you, and I also object to all the
hype which makes this claim for current AI, or anything that is likely
to come out of current research.  I'm also not saying your claim is
wrong, only that it is unjustified; there is more to learn before we
can really say.

There are new ideas in biology that build upon "systems theory," and
probably can be tied in with the physical symbol systems theory (I
hope I got that right) that suggest that information or "linguistic
interaction" is fundamental to living organisms.

In the May/June issue of "The Sciences," I found an article called
"The Life of Meaning." It was in a regular column (The Information Age).
I won't summarize the whole article, but it does present some compelling
examples, and arguments for extending the language of language to talking
about cellular mechanisms.  One is how cyclic AMP acts as an internal
message in E. coli.  When an E. coli lands in an environment without
food, cyclic AMP binds to the DNA, and switches the cell over to a
"motion" program.  Cyclic AMP in this role has all the attributes of
a symbolic (or linguistic) message: the choice of symbol is arbitrary,
and the "meaning" is context dependant.  This becomes even more clear
with the example of human adrenaline response in liver cells.  The
hormone binds to sites on the outside of the cell which causes an
internal message to be generated, which just happens to be cyclic AMP.
The cell responds to the cyclic AMP (not by a DNA based mechanism as
in E. coli) by producing more glucose.  The composition of the message
has nothing to do with the trigger or the response; it is symbolic.

So, how is this relevant to the original discussion?  I don't see any
fundamental difference between exchanging chemical messages and electronic
ones.  Although this does not imply that configurations of electronic and
electromechanical components that we would call "alive" are possible or
that it is possible to design and build one, it doesn't rule it out, and
more importantly it suggests a fundamental similarity between living
organisms and "information processors."  The only difference is how they
arise.  Possibly an important difference, but we have no way to prove this
now.

Gerry Gleason

------------------------------

Date: 2 May 88 16:11:11 GMT
From: clyde!mtunx!whuts!homxb!houdi!marty1@bellcore.com  (M.BRILLIANT)
Subject: Re: Free Will & Self-Awareness

In article <717@taurus.BITNET>, shani@TAURUS.BITNET writes:
> In article <30502@linus.UUCP>, bwk@mbunix.BITNET writes:
> >
> > I would like to learn how to imbue silicon with consciousness,
> > awareness, free will, and a value system.
>
> .... free will and value systems - such things cannot
> be 'given', they simply exist.....
> .... You can write 'moral' programs, even in BASIC, if you want,
> because they will have YOUR value system....

It has been suggested that intelligence cannot be "given" to a machine
either.  That is, an "expert system" using only expertise "given" to it
out of the experience of human experts is not exhibiting full
"artificial intelligence."

BWK suggested "artificial awareness" as a complement to "artificial
intelligence,"  but apparently that is not enough.  You need artificial
learning.  My value system was not "given" to me, nor was my
professional expertise; both were learned.  At its ultimate, AI
research is really devoted to the invention of artificial learning.

For full artificial intelligence, the machine must derive its expertise
from its own experience.  For full artificial awareness, the machine
must derive its values from its own experience.  Not much different.
Achieve artificial learning, and you will get both.

I hate to rehash the old "Turing test" again, but a machine cannot pass
for human longer than a few hours, or days at most, unless it has the
capacity for "agonizing reappraisal": the ability to "reevalueate its
basic assumptions."  That would be learning as humans do it.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
            explicitly claims them; then I lose all rights to them.

------------------------------

Date: 6 May 88 13:31:25 GMT
From: eniac.seas.upenn.edu!lloyd@super.upenn.edu  (Lloyd Greenwald)
Subject: Re: Free Will & Self Awareness

In article <10942@sunybcs.UUCP> sher@wolf.UUCP (David Sher) writes:
>It seems that people are discussing free will and determinism by
>trying to distinguish true free will from random behavior.  There is a
>fundamental problem with this topic.  Randomness itself is not well
>understood.  If you could get a good definition of random behavior you
>may have a better handle on free will.
>

This is a good point.  It seems that some people are associating free will
closely with randomness.  To me true randomness is as difficult to comprehend
as true free will.  We can't demonstrate true randomness in present day
computers; the closest we can come (to my knowledge) is to generate a string
of numbers which does not repeat itself.  Can anyone give us a better view
of randomness than this?  I've heard some mention of true randomness at the
quantum level.  Does anyone have any information on this?  Given that current
theories of free will tie it so closely to randomness, it seems necessary to
get a handle on true randomness.


                        Lloyd Greenwald
                        lloyd@eniac.seas.upenn.edu

------------------------------

Date: 4 May 88 07:19:52 GMT
From: TAURUS.BITNET!shani@ucbvax.Berkeley.EDU
Subject: Re: AIList V6 #86 - Philosophy

In article <1579@pt.cs.cmu.edu>, yamauchi@speech2.cs.cmu.edu.BITNET writes:
> Actually, I have read The Society of Mind, where Minsky writes:
>[A quote of Minsky]
> I would agree with this.  In fact, unless one believes in some form of
> supernatural forces, this seems like the only rational alternative.

   You are touching the very core of the problem.  The point at which this
'only randomness and determinism exist' view runs into problems is the
question of responsibility, i.e., if everything is pre-determined or random,
how can you assume responsibility for what you are doing?  And if responsibility
does not exist, the whole matter of free will and value systems has no content.
So, if free will and a value system can be given to a machine, it is meaningless;
and if it has meaning, it depends on a third, irrational factor (free will),
which cannot (meanwhile?) be given to a machine...

O.S.

------------------------------

Date: 6 May 88 09:20:18 GMT
From: otter!cwp@hplabs.hp.com  (Chris Preist)
Subject: Re: Free Will & Self-Awareness


R. O'Keefe replies to me...

> > Did my value system exist before my conception? I doubt it.
>This is rather like asking whether some specific number existed before
>anyone calculated.  Numbers and value systems are symbolic/abstract
>things, not material objects.  I have often wondered what philosophy
>would have been like if it had arisen in a Polynesian community rather
>than an Indo-European one (in Polynesian languages, numbers are _verbs_).
>----------

Oh no! Looks like my intuitionist sympathies are creeping out!!!

Seriously though, there IS a big difference between numbers and value
systems - Empirical evidence for this is given by the fact that (most of)
society agrees on a number system, but the debate about which value system
is 'correct' leads to factionalism, terrorism, war, etc.  Value systems
are unique to each individual, a product of his/her nature and nurture.
While they may be expressible abstractly, this does not mean
they 'exist' in abstraction (Intuitionist aside: The same could be said of
numbers). They are obviously not material objects, but this does not mean
they have Platonic Ideal existence.  We are not imbued with them at birth,
but acquire them.  This acquisition is perfectly compatible with determinism.


So what does this mean for AI?  Earlier, in my reply to O.S., I was arguing
that our SUBJECTIVE experience of freedom is perfectly compatible with our
existence within a deterministic system, hence AI is not necessarily
fruitless. You have drawn me out on another metaphysical point - I believe
that our intelligence (rather than our capacity for intelligence), our
value systems, and also our 'semantics' stem from our existence within the
world, rather than from our essential nature.  Sensation and experience are
primary.  The brain is a product of the spinal cord, rather than vice versa.
For this reason, I believe that the goals of strong AI can only be
accomplished by techniques which accept the importance of sensation.
Connectionism is the only such technique I know of at the moment.


Chris Preist

------------------------------

Date: 6 May 88 17:06:34 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Re: Free Will & Self Awareness

In article <4543@super.upenn.edu> lloyd@eniac.seas.upenn.edu.UUCP
(Lloyd Greenwald) writes:
>This is a good point.  It seems that some people are associating free will
>closely with randomness.

Yes, I do so.  I think this is a necessary definition.

Consider the concept of Freedom in the most general sense.  It is
opposed by the concept of Determinism.  We can say of anything, either
it is absolutely determined (it will always do one thing and only one
thing), or it is somewhat free (sometimes it will do one thing, other
times another).  This is so whether we talk of molecules in a box or the
actions of an organism.

>To me true randomness is as difficult to comprehend
>as true free will.

I agree.  That's because both psychological free will and randomness are
cases of my general sense of Freedom.  Freedom is a very difficult
thing to understand.

>We can't demonstrate true randomness in present day
>computers;

von Neumann machines are highly Determined systems.  They possess so
little Freedom that it is essentially null.  This is what they have been
designed to do.  However, it is easy to demonstrate that von Neumann
machines are slightly free.  Consider the distribution of bit errors in
a cpu or RAM, or of read errors on a disk drive.  These are random
events.  To that extent the computer is Free.  This is not especially
useful or interesting Freedom, but nevertheless it is there.

>the closest we can come (to my knowledge) is to generate a string
>of numbers which does not repeat itself.

This is not possible in a von Neumann machine.
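 
To make that concrete (an illustrative sketch only, in modern Python, with
invented parameters rather than any real machine's routine): a deterministic
generator with finite state can do no better than cycle.

    # Toy linear congruential generator with deliberately tiny parameters,
    # so the inevitable cycle is easy to see.
    def lcg(seed, a=21, c=7, m=64):
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    seen = []
    for value in lcg(seed=5):
        if value in seen:
            break                 # the sequence has started to repeat
        seen.append(value)
    print(len(seen))              # at most m = 64 distinct values appear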

>I've heard some mention of true randomness at the
>quantum level.

See recent (last two years) articles in _Scientific American_ concerning
hidden variables theories in QM.  As I described in a previous article,
we can think of two cases of randomness, subjective and objective.
Subjective randomness is usually equated with ignorance.  For example,
in Newtonian physics if I had sufficient information about initial
conditions I could predict the roll of a die.  Objective randomness is
your "true", or irreducible, or inherent, or unavoidable randomness.

There has been a great debate as to whether quantum uncertainty was
subjective or objective.  The subjectivists espoused "hidden variables"
theories (i.e.: there are determining factors going on, we just don't
know them yet, the variables are hidden).  These theories can be tested.
Recently they have been shown to be false.

>Given that current
>theories of free will tie it so closely to randomness, it seems necessary to
>get a handle on true randomness.

In my mind, the critical thing to understand about Freedom is that
Freedom is always relative; Determinism is always absolute.

What I mean is that when we talk about something being Free, we can
always talk about degrees of freedom.  A six-sided die is more Free than
a four-sided one, a twelve-sided more than a six-sided.  Or consider a
probability distribution: its Freedom is generally measured by its entropy,
which takes values in the interval [ 0, inf ).  In order for the
distribution to reach the infinite limit, it would have to be uniformly
distributed over the whole positive real line.  Such a distribution is
not well defined.
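 
For concreteness, the measure alluded to is the Shannon entropy; as a gloss
(standard notation, not anything from the original posting):

    H(X) = - \sum_i p_i \log_2 p_i

For a fair n-sided die, p_i = 1/n and H = \log_2 n, so a four-sided die
carries 2 bits, a six-sided about 2.58, and a twelve-sided about 3.58 -
one way of making "a twelve is more Free than a six" precise.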

In other words, we know what it means for something to be completely
Determined.  I submit that it is not possible for something to be
completely Free.  Absolute Freedom is an infinite limit; absolute
determinism is a zero limit.

This is obviously true in the realm of human affairs as well.  It is
easy for me to completely determine your actions: put you in a Skinner
box, or a straitjacket, or just kill you.  And while I espouse Free
Will, I do so only in this relative way.  In no way can you tell me that
you are absolutely free: drug delusions, dreams, illness, epilepsy, all
kinds of physical/biological factors come into play which somewhat limit
the Freedom of your mind.


--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 5 May 88 09:37:54 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Free Will & Self-Awareness

In article <5100@pucc.Princeton.EDU> RLWALD@pucc.Princeton.EDU writes:
>    Are you saying that AI research will be stopped because when it ignores
>free will, it is immoral and people will take action against it?
Research IS stopped for ethical reasons, especially in Medicine and
Psychology.  I could envisage pressure on institutions to limit their AI
work to something which squares with our ideals of humanity.  If the
US military were not using technology which was way beyond the
capability of its not-too-bright recruits, then most of the funding
would dry up anyway.  With the Pentagon's reported concentration on
more short-term research, they may no longer be able to indulge their
belief in the possibility of intelligent weaponry.

>    When has a 'doctrine' (which, by the way, is nothing of the sort with
>respect to free will) any such relationship to what is possible?
From this, I can only conclude that your understanding of social
processes is non-existent.  Behaviour is not classified as deviant
because it is impossible, but because it is undesirable.  I know of NO
rational theory of society, so arguments that a computational model of
human behaviour MAY be possible are utterly irrelevant.  This is a
typical academic argument, and as you know, academics have a limited
influence on society.

The question is, do most people WANT a computational model of human
behaviour?  In these days of near 100% public funding of research,
this is no longer a question that can be ducked in the name of
academic freedom.  Everyone is free to study what they want, but public
funding of a distasteful and dubious activity does not follow from
this freedom.   If funding were reduced, AI would join fringe areas such as
astrology, futurology and palmistry.  Public funding and institutional support
for departments implies a legitimacy to AI which is not deserved.

------------------------------

Date: 5 May 88 19:35:39 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Free Will & Self-Awareness

In article <717@taurus.BITNET> <shani%TAURUS.BITNET@CUNYVM.CUNY.EDU>
writes:
- you are underestimating yourself as a free-willing
- creature, and second, your request is self-contradicting and shows
- little understanding of matters like free will and value systems -
- such things cannot be 'given', they simply exist.

Is this an Ayn Rand point?  It sure sounds like one.  Note the use
of `self-contradicting'.

- You can write 'moral' programs, even in BASIC, if you want,
- because they will have YOUR value system....

It is hard to see how this makes any sense whatsoever.

------------------------------

Date: 5 May 88 20:19:25 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Free Will & Self Awareness

In article <770@onion.cs.reading.ac.uk> jadwa@henry.cs.reading.ac.uk
(James Anderson) writes:
>If the world is deterministic I am denied free will because I can
>not determine the outcome of a decision. On the other hand, if
>the world is random, I am denied free will because I can not
>determine the outcome of a decision. Either element, determinacy
>or randomness, denies me free will, so no mixture of a
>deterministic world or a non-deterministic world will allow me
>free will.

Just so.  Having one's actions determined randomly isn't much help.

One of the problems with discussing free will here is that it's too
easy to simply rehash arguments that have been handled in the
philosophical literature.  I thought it best to make this point
in response to an article that I agreed with, though, because I'm
not claiming that no one ever says anything valuable or that I am
some kind of expert in these matters with no time to listen to the
rest of you.

Nonetheless, anyone who is seriously interested in such topics should
be willing to do some reading.  I would recommend Dennett's Elbow Room:
The Varieties of Free Will Worth Wanting for its discussion of free
will, for its relevance to AI, and for the interesting things that
come up along the way.

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

------------------------------

End of AIList Digest
********************

∂09-May-88  1725	LAWS@KL.sri.com 	AIList V6 #97 - Philosophy  
Received: from KL.sri.com by SAIL.Stanford.EDU with TCP; 9 May 88  17:24:50 PDT
Date: Mon  9 May 1988 00:19-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #97 - Philosophy
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 97

Today's Topics:
  Philosophy - Free Will

----------------------------------------------------------------------

Date: 6 May 88 22:48:09 GMT
From: paul.rutgers.edu!cars.rutgers.edu!byerly@rutgers.edu  (Boyce Byerly )
Subject: Re: this is philosophy ??!!?

|In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
|(Gilbert Cockton) writes:
|>logical errors in an argument is not enough to sensibly dismiss it, otherwise
|>we would have to become resigned to widespread ignorance.
|
|To which acha@centro.soar.cs.cmu.edu (Anurag Acharya) replies: Just
|when is an assumption warranted ? By your yardstick (it seems ),
|'logically inconsistent' assumptions are more likely to be warranted
|than the logically consistent ones. Am I parsing you wrong or do you
|really claim that ?!

My feelings on this are that "hard logic", as perfected in first-order
predicate calculus, is a wonderful and very powerful form of
reasoning.  However, it seems to have a number of drawbacks as a
rigorous standard for AI systems, from both the cognitive modeling and
engineering standpoints.

1) It is not a natural or easy way to represent probabilistic or
intuitive knowledge.

2) In representing human knowledge and discourse, it fails because it
does not recognize or deal with contradiction.  In a rigorously
logical system, if

  P ==> Q
  ~Q
   P
then we can derive both Q and ~Q - a contradiction, from which classical
logic lets us derive absolutely anything.
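 
A toy mechanisation of that point, sketched in Python purely for
illustration (the atoms P and Q are made up, and this is naive forward
chaining, not any particular theorem prover):

    facts = {"P", "~Q"}
    rules = [("P", "Q")]              # each rule is (antecedent, consequent)

    changed = True
    while changed:                    # forward chaining by modus ponens
        changed = False
        for ante, cons in rules:
            if ante in facts and cons not in facts:
                facts.add(cons)
                changed = True

    # a knowledge base is inconsistent if it holds both X and ~X
    inconsistent = any("~" + f in facts for f in facts if not f.startswith("~"))
    # prints: ['P', 'Q', '~Q'] inconsistent: True
    print(sorted(facts), "inconsistent:", inconsistent)

Once the contradiction is derived, an unrestricted classical prover would
happily conclude anything at all from it.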

If you don't believe human beings can have the above derivably
contradictory structures in their logical environments, I suggest you
spend a few hours listening to some of our great political leaders :-)
Mr. Reagan's statements on dealing with terrorists shortly before
Iranscam/Contragate leap to mind, but I am sure you can find equally
good examples in any political party.  People normally keep a lot of
contradictory information in their minds, and not from dishonesty -
you simply can't tear out a premise because it causes a contradiction
after exhaustive derivation.

3) Logic also falls down in manipulating "belief-structures" about the
world.  The gap between belief and reality ( whatever THAT is) is
often large.  I am aware of this problem from reading texts on natural
language, but I think the problem occurs elsewhere, too.

Perhaps the logical deduction of Western philosophy needs to take a
back seat for a bit and let less sensitive, more probabilistic
rationalities drive for a while.

        Boyce
        Rutgers University DCS

------------------------------

Date: 5 May 88 10:54:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Arguments against AI are arguments against human formalisms

In article <1579@pt.cs.cmu.edu> yamauchi@speech2.cs.cmu.edu (Brian Yamauchi)
writes:
>Cockton seems to be saying that humans do have free will, but that it is
>totally impossible for AIs to ever have free will.  I am curious as to what he bases
>this belief upon other than "conflict with traditional Western values".
Isn't that enough?  What's so special about academia that it should be
allowed to support any intellectual activity without criticism from
the society which supports it?  Surely it is the duty of all academics
to look to the social implications of their work?  Having free will,
they are not obliged to pursue lines of enquiry which are so controversial.

I have other arguments, which have popped up now and again in postings
over the last few years:

        1) Rule-based systems require fully formalised knowledge-bases.

          Rule-based systems are impossible in areas where no written
          formalisation exists.  Note how scholars like John Anderson
          restrict themselves to proper psychological data.  I regard Anderson
          as a psychologist, not as an AI worker.  He is investigating
          computational accounts of known phenomena.  As such, his research
          is a respectable confrontation with the boundaries of the
          computational paradigm.  His writing is candid and I have yet to
          see him proceed confidently from assumptions, though he often has
          to live with some.

          Conclusion, AI as a collection of mathematicians and computer
          scientists playing with machines, cannot formalise psychology where
          no convincing written account exists.  Advances here will come from
          non-computational psychology first, as computational psychology has
          to follow in the wake of the real thing.

          The real thing unfortunately cuts a slow and shallow bow-wave.

          [yes, I know about connectionism, but then you have to formalise the
           inputs.   Furthermore, you don't know what a PDP network does know]

        2) Formal accounts of nearly every area of human activity are rare.

          I have a degree in Education. For it I studied Philosophy, Psychology
          and Sociology.  My undergraduate dissertation was on Curriculum design
           -  an interdisciplinary topic which has to draw on inputs from a
          number of disciplines.  What I learnt here was which horse was best
          suited for which course, and thus when not to use mathematics, which
          was most of the time.  I did philosophy with an (ex-)mathematician, BTW.

          I know of few areas in psychology where there is a WRITTEN account of
          human decision making which is convincing.  If no written account
          exists, no computational account, a more restrictive representation,
          is possible.  Computability adds nothing to 'writability', and many
          things in this world have not been well represented using written
          language.  Academics are often seduced by the word, and forget that
          the real decisions in life are rarely written down, and when they are
          (laws, treaties) they seem worlds apart from what was originally said.

          AI depends on being able to use written language (physical symbol
          hypothesis) to represent the whole human and physical universe.  AI
          and any degree of literate-ignorance are incompatible.  Humans, by
          contrast, may be ignorant in a literate sense, but knowledgeable in
          their activities.  AI fails as this unformalised knowledge is
          violated in formalisation, just as the Mona Lisa is indescribable.

          Philosophically, this is a brand of scepticism.  I'm not arguing that
          nothing is knowable, just that public, formal knowledge accounts for
          a small part of our effective everyday knowledge (see Heider).

          So, AI person, you say you can compute it.  Let's forget the Turing
          Test and replace it with the Touring Test.  Write down what you did
          on your holidays, in English, then come up with a computational model
          to account for everything you did.  There is a warm-up problem which
          involves the first 10 minutes as you step out of bed in the morning.
          After 10 minutes, write down EVERYTHING you did (from video?).  Then
          elaborate what happened.  This writing will be hard enough.

          Get my point?  The world's just too big for your head.  The arrogance
          of AI lies in its not grasping this.  AI needs everything formalised
          (world-knowledge problem).  BTW, Robots aren't AI. Robots are robots.

        3) The real world is social, not printed.

          Because so little of our effective knowledge is formalised, we learn
          in social contexts, not from books.  I presume AI is full of relative
          loners who have learnt more of what they publicly interact with from
          books than from people.  Well I didn't, and I prefer
          interaction to reading.

          Learning in a social context is the root of our humanity.  It is
          observations of this social context that reveal our free will in
          action.  Note that we become convinced of our free will, we do not
          formalise accounts of it.  This is the humanity which is beyond AI.
          Feigenbaum & McCorduck (5th Gen) mention this 'socialisation'
          objection to AI in passing, but produce no argument for rejecting it.

          It is the strongest argument against AI.  Look at language
          acquisition in its social context.  AI people cannot program a system
          at the same rate as humans acquire language.  OK, perhaps 'n'
          generations of AI workers could slowly program a NLP system up to
          competence. But as more gets added, there is more to learn, and there
          would come a point at which the programmers wouldn't understand the
          system until they were a few years from retirement.

          We spend so much of our time growing in this world that we never have
          time to formalise it.  The moment we grasp ourselves, we are already out
          of date, for this grasping is now part of the self that was grasped.

Anyway, you did ask.  Hope this makes sense.

------------------------------

Date: 6 May 88 18:00:42 GMT
From: bwk@MITRE-BEDFORD.ARPA  (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

James Anderson writes:

>If the world is deterministic I am denied free will because I can
>not determine the outcome of a decision. On the other hand, if
>the world is random, I am denied free will because I can not
>determine the outcome of a decision. Either element, determinacy
>or randomness, denies me free will, so no mixture of a
>deterministic world or a non-deterministic world will allow me
>free will.

It is not clear to me that a mixture of determinism and randomness
could not jointly create free will.

A Thermostat with no Furnace cannot control the room temperature.
A Furnace with no Thermostat cannot control the room temperature.
But join the two in a feedback loop, and together they give rise
to an emergent property: the ability to control the room temperature
to a desired value, notwithstanding unpredicted changes in the outside
weather.
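 
A minimal simulation of that loop (purely illustrative Python, with invented
constants rather than any real controller) shows the emergent regulation:

    import random

    # Bang-bang control: the thermostat only switches, the furnace only heats,
    # yet together they hold the room near the setpoint despite random weather.
    temp, setpoint = 15.0, 20.0
    for minute in range(120):
        furnace_on = temp < setpoint             # the thermostat's whole job
        heat = 1.5 if furnace_on else 0.0        # the furnace's whole job
        leak = 0.1 * (temp - 10.0)               # loss to a 10-degree outside
        disturbance = random.uniform(-0.2, 0.2)  # "unpredicted changes" outside
        temp += heat - leak + disturbance
    print(round(temp, 1))   # ends within a degree or so of the 20-degree setpoint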

Similarly, could it not be the case that Free Will emerges from
a balanced mixture of determinism (which permits us to predict the
likely outcome of our choices) and freedom (which allows us to
make arbitrary choices)?  Just as the Furnace+Thermostat can drive
the room temperature to a desired value, Cause+Chance gives us
the power to drive the future state-of-affairs toward a desired
goal.

If you buy this line of reasoning, then perhaps we can get on to
the next level, which is: How do we select goal states which we
imagine to be desirable?

--Barry Kort

------------------------------

Date: 7 May 88 02:31:52 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Free Will & Self Awareness

In article <10942@sunybcs.UUCP>, sher@sunybcs (David Sher) writes:
> It seems that people are discussing free will and determinism by
> trying to distinguish true free will from random behavior.  There is a
> fundamental problem with this topic.  Randomness itself is not well
> understood.  If you could get a good definition of random behavior you
> may have a better handle on free will.

In particular, consider the difference between _random_ behaviour
and _chaotic_ behaviour.  A physical system may be completely described
by simple deterministic laws and yet be unpredictable in principle
(unpredictable by bounded computational mechanisms, that is).
[Pseudo-random numbers are really pseudo-chaotic.]
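 
The standard toy example is the logistic map: a one-line deterministic rule
whose trajectories from all-but-identical starting points soon part company.
An illustrative Python sketch (the constants are the conventional ones; the
example is mine, not O'Keefe's):

    # Logistic map at r = 4, a deterministic but chaotic recurrence.
    def step(x, r=4.0):
        return r * x * (1.0 - x)

    x, y, gap = 0.4, 0.4 + 1e-7, 0.0     # two nearly identical starting points
    for _ in range(40):
        x, y = step(x), step(y)
        gap = max(gap, abs(x - y))
    print(gap)     # has grown from 1e-7 to order one within a few dozen steps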

> Consider this definition of random behavior:
> X is random iff its value is unknown.

I do not know David Sher's telephone number, but I do not find it useful
to regard it as random (nor as chaotic).  Conversely, when listening to a
Geiger counter, I am quite sure whether or not I have heard a click, but
I believe that the clicks are random events.

------------------------------

Date: 7 May 88 02:57:46 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Free Will & Self Awareness

In article <1179@bingvaxu.cc.binghamton.edu>,
vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
> In article <4543@super.upenn.edu> lloyd@eniac.seas.upenn.edu.UUCP
(Lloyd Greenwald) writes:
> >This is a good point.  It seems that some people are associating free will
> >closely with randomness.
>
> Yes, I do so.  I think this is a necessary definition.
>
> Consider the concept of Freedom in the most general sense.  It is
> opposed by the concept of Determinism.

For what it is worth, my "feeling" of "free will" is strongest when I
act in accord with my own character / value system &c. That is, when I
act in a relatively predictable way.  There is a strong philosophical
and theological tradition of regarding free will and some sort of
determinism as compatible.  If I find myself acting in "random" or
unpredictable ways, I look for causes outside myself ("oh, I brought the
wrong book because someone has shuffled the books on my shelf").
Randomness is *NOT* freedom; it is the antithesis of freedom.
If something else controls my behaviour, the randomness of the
something else cannot make _me_ free.

I suppose the philosophical position I can accept most readily is the
one which identifies "free will" with being SELF-determined.  That is,
an agent possesses free will to the extent that its actions are
explicable in terms of the agent's own beliefs and values.
For example, I give money to beggars.  This is not at all random; it is
quite predictable.  But I don't do it because someone else makes me do it,
but because my own values and beliefs make it appropriate to do so.
A perfectly good person, someone who always does the morally appropriate
thing because that's what he likes best, might well be both predictable
and as free as it is possible for a human being to be.

> There has been a great debate as to whether quantum uncertainty was
> subjective or objective.  The subjectivists espoused "hidden variables"
> theories (i.e.: there are determining factors going on, we just don't
> know them yet, the variables are hidden).  These theories can be tested.
> Recently they have been shown to be false.

Hidden variables theories are not "subjectivist" in the usual meaning of
the term; they ascribe quantum uncertainty to objective physical processes.
They haven't been shown false.  It has been shown that **LOCAL** hidden
variables theories are not consistent with observation, but NON-local
theories (I believe there are at least two current) have not been falsified.
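 
For readers who want the quantitative form, the test in question is the
Bell/CHSH inequality (stated here as background, not as part of O'Keefe's
posting): with correlations E measured at detector settings a, a', b, b',

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b')

any local hidden-variable theory obeys |S| \le 2, while quantum mechanics
predicts values up to 2\sqrt{2} for suitable settings - and experiments of
the early 1980s found the quantum value.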

------------------------------

Date: 6 May 88 23:05:54 GMT
From: oliveb!tymix!calvin!baba@sun.com  (Duane Hentrich)
Subject: Re: Free Will & Self Awareness

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned.  But we
>do not punish the machine or incarcerate it.

Have you not bashed or kicked a vending machine until it gave up the
junk food you paid for?  It is my experience that swatting a non-solid-state
TV tuner sometimes results in clearing up the picture.  Indeed we
do "punish" our machines.

Sometimes with results which clearly outweigh the damage done by
such punishment.

>Why then, when a human engages in undesirable behavior, do we resort
>to such unenlightened corrective measures as yelling, hitting, or
>deprivation of life-affirming resources?

For the same reason that the Enter/Carriage Return key on many keyboards
is hit repeatedly and with great force, i.e. frustration with an
inefficient/ineffective interface which doesn't produce the desired results.

No value judgements re results or punishments here.

If this doesn't belong here, please forgive.

d'baba Duane Hentrich           ...!hplabs!oliveb!tymix!baba

Claimer: These are only opinions since everything I know is wrong.
Copyright notice: If you're going to copy it, copy it right.

------------------------------

Date: 7 May 88 06:05:46 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: Re: Free Will & Self-Awareness

In article <2070015@otter.hple.hp.com>, cwp@otter.hple.hp.com (Chris Preist)
writes:
> The brain is a product of the spinal cord, rather than vice versa.

I'm rather interested in biology; if this is a statement about human
ontogeny I'd be interested in having a reference.  If it's a statement
about phylogeny, it isn't strictly true.  In neither case do I see the
implications for AI or philosophy.  It is not clear that "develops
late" is incompatible with "is fundamental".  For example, the
sociologists hold that our social nature is the most important thing
about us.  In any case, not all sensation passes through the spinal
cord.  The optic nerve comes from the brain, not the spinal cord.
Or isn't vision "sensation"?

> For this reason, I believe that the goals of strong AI can only be
> accomplished by techniques which accept the importance of sensation.
> Connectionism is the only such technique I know of at the moment.

Eh?  Now we're really getting to the AI meat.  Connectionism is about
computation; how does a connectionist network treat "sensation" any
differently from a Marr-style vision program?  Nets are interesting
machines, but there's still no ghost in them.

------------------------------

Date: Sat, 07 May 88 18:50:03 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Hapgood.

Intelligence comes in both continuous and discrete forms
(cf. Vol 6 #91).
Light is both waves and particles; anybody see a problem?

Gordon Joly,
Department of Statistical Sciences,
University College London,
Gower Street,
LONDON WC1E 6BT,
U.K.

------------------------------

End of AIList Digest
********************

∂09-May-88  2039	LAWS@KL.sri.com 	AIList V6 #98 - Philosophy  
Received: from KL.sri.com by SAIL.Stanford.EDU with TCP; 9 May 88  20:39:17 PDT
Date: Mon  9 May 1988 00:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #98 - Philosophy
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 98

Today's Topics:
  Philosophy - Free Will

----------------------------------------------------------------------

Date: 7 May 88 02:14:26 GMT
From: yamauchi@SPEECH2.CS.CMU.EDU  (Brian Yamauchi)
Subject: Re: Arguments against AI are arguments against human formalisms

In article <1103@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> In article <1579@pt.cs.cmu.edu> yamauchi@speech2.cs.cmu.edu
(Brian Yamauchi) writes:
> >Cockton seems to be saying that humans do have free will, but that it is
> >totally impossible for AIs to ever have free will.  I am curious as to what he bases
> >this belief upon other than "conflict with traditional Western values".
> Isn't that enough?  What's so special about academia that it should be
> allowed to support any intellectual activity without criticism from
> the society which supports it?  Surely it is the duty of all academics
> to look to the social implications of their work?  Having free will,
> they are not obliged to pursue lines of enquiry which are so controversial.

These are two completely separate issues.  Sure, it's worthwhile to consider
the social consequences of having intelligent machines around, and of
course, the funding for AI research depends on what benefits are anticipated
by the government and the private sector.

This has nothing to do with whether it is possible for machines to have
free will.  Reality does not depend on social consensus.
            --------------------------------------------

Or do you believe that the sun revolved around the earth before Copernicus?
After all, the heliocentric view was both controversial and in conflict
with the social consensus.

In any case, since when is controversy a good reason for not doing
something?  Do you also condemn any political or social scientist who has
espoused controversial views?

> I have other arguments, which have popped up now and again in postings
> over the last few years:
>
>       1) Rule-based systems require fully formalised knowledge-bases.

This is a reasonable criticism of rule-based systems, but not necessarily
a fatal flaw.

>          Conclusion, AI as a collection of mathematicians and computer
>          scientists playing with machines, cannot formalise psychology where
>          no convincing written account exists.  Advances here will come from
>          non-computational psychology first, as computational psychology has
>          to follow in the wake of the real thing.

I am curious what sort of non-computational psychology you see as having had
great advances in recent years.

>   [yes, I know about connectionism, but then you have to formalise the
>    inputs.

For an intelligent robot (see below), you can take inputs directly from the
sensors.

>   Furthermore, you don't know what a PDP network does know]

This is a broad overgeneralization.  I would recommend reading Rumelhart &
McClelland's book.  You can indeed discover what a PDP network has learned,
but for very large networks, the process of examining all of the weights
and activations becomes impractical - which, at least to me, is
suggestive of an analogy with human/animal brains with regard to the
complexity of the synapse/neuron interconnections (just suggestive, not
conclusive, by any means).

>          AI depends on being able to use written language (physical symbol
>          hypothesis) to represent the whole human and physical universe.

Depends on which variety of AI.....

>  BTW, Robots aren't AI. Robots are robots.

And artificially intelligent robots are artificially intelligent robots.

>       3) The real world is social, not printed.

The real world is physical -- not social, not printed.  Unless you consider
it to be subjective, in which case if the physical world doesn't objectively
exist, then neither do the other people who inhabit it.

> Anyway, you did ask.  Hope this makes sense.

Well, you raise some valid criticisms of rule-based/logic-based/etc systems,
but these don't preclude the idea of intelligent machines, per se.  Consider
Hans Moravec's idea of building intelligence from the bottom up (starting
with simple robotic animals and working your way up to humans).

After all, suppose you could replace every neuron in a person's brain with
an electronic circuit that served exactly the same function, and afterwards,
the individual acted like exactly the same person.  Wouldn't you still
consider him to be intelligent?

So, if it is possible -- or at least conceivable -- in theory to build an
intelligent being of some type, the real question is how.

______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 7 May 88 02:46:11 GMT
From: yamauchi@speech2.cs.cmu.edu  (Brian Yamauchi)
Subject: Re: Free Will & Self-Awareness

In article <1099@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> In article <5100@pucc.Princeton.EDU> RLWALD@pucc.Princeton.EDU writes:
> >    Are you saying that AI research will be stopped because when it ignores
> >free will, it is immoral and people will take action against it?
> Research IS stopped for ethical reasons, especially in Medicine and
> Psychology.  I could envisage pressure on institutions to limit their AI
> work to something which squares with our ideals of humanity.

I can envisage pressure on institutions to limit work on sociology and
psychology to that which is compatible with orthodox
Christianity.  That doesn't mean that this is a good idea.

> If the
> US military were not using technology which was way beyond the
> capability of its not-too-bright recruits, then most of the funding
> would dry up anyway.  With the Pentagon's reported concentration on
> more short-term research, they may no longer be able to indulge their
> belief in the possibility of intelligent weaponry.

Weapons are getting smarter all the time.  Maybe soon we won't need the
not-too-bright recruits.....

> >    When has a 'doctrine' (which, by the way, is nothing of the sort with
> >respect to free will) any such relationship to what is possible?
> From this, I can only conclude that your understanding of social
> processes is non-existent.  Behaviour is not classified as deviant
> because it is impossible, but because it is undesirable.

From this, I can only conclude that either you didn't understand the
question or I didn't understand the answer.  What do the labels that society
places on certain actions have to do with whether any action is
theoretically possible?  Anti-nuke activists may make it practically
impossible to build nuclear power plants -- they cannot make it physically
impossible to split atoms.

> The question is, do most people WANT a computational model of human
> behaviour?  In these days of near 100% public funding of research,
> this is no longer a question that can be ducked in the name of
> academic freedom.

100% public funding?????  Haven't you ever heard of Bell Labs, IBM Watson
Research Center, etc?  I don't know how it is in the U.K., but in the U.S.
the major CS research universities are actively funded by large grants from
corporate sponsors.  I suppose there is a more cooperative atmosphere here --
in fact, many of the universities here pride themselves on their close
interactions with the private research community.

Admittedly, too much of all research is dependent on government funds, but
that's another issue....

>  Everyone is free to study what they want, but public
> funding of a distasteful and dubious activity does not follow from
> this freedom.   If funding were reduced, AI would join fringe areas such as
> astrology, futurology and palmistry.  Public funding and institutional support
> for departments implies a legitimacy to AI which is not deserved.

A modest proposal: how about a cease-fire in the name-calling war?  The
social scientists can stop calling AI researchers crackpots, and the AI
researchers can stop calling social scientists idiots.

______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 7 May 88 05:54:20 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Formal Systems and AI


I have recently been thinking some about formal systems and AI, and have
been prompted by our recent conversations, as well as by an *excellent*
article by Chris Cherniak ("Undebuggability and Cognitive Science",
_Comm. ACM_, 4/88) to make some comments.

It seems patently obvious to me at this point that the following
statements are false:
1) The mind is a formal system.
2) Attempts to construct AI as a formal system can succeed.

These ideas seem to me to be the heart of "classical Cartesian Cognitive
Science" (e.g.  Fodor, Chomsky).  I assert that these positions are
based on an old, false view of a deterministic, deductive, reducible
world.  The relative failure of strong, theoretical AI seems, in
hindsight, terribly obvious.

Cherniak takes the following stance: "A complete computational
approximation of the mind would be a huge, 'branchy,' holistically
structured, quick-and-dirty (i.e.  computationally tractable, but
formally incorrect/incomplete), kludge . . . [as opposed to] a small set
of elegant, powerful, general principles, on the model of classical
mechanics."

This view is not only common-sensical, but is well-motivated by some
gross approximations about *real* intelligent systems and *real* physics
of information systems.  For example, let's say that I have a computer so
small that it could calculate a line in a truth-table in the time it
takes for light to cross the diameter of the proton.  Cherniak concludes
that there is then an upper bound of ~ 138 independent logical
propositions that can be solved by the truth-table method.  A tiny
number! More quotations: "Our basic methodological instinct . . . seems
to be to work out a model of the mind for 'typical' cases - most
importantly, very small cases - and then implicitly to suppose a grand
induction to the full-scale case of a complete human mind."
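 
A back-of-envelope check of that ~138 figure, with rough constants of my own
choosing (Python, purely illustrative):

    import math

    # How many truth-table lines fit into the age of the universe if each line
    # takes only the light-crossing time of a proton?
    proton_diameter = 1.7e-15            # metres, roughly
    c = 3.0e8                            # speed of light, m/s
    line_time = proton_diameter / c      # ~6e-24 seconds per line
    universe_age = 1.4e10 * 3.15e7       # ~4.4e17 seconds
    lines = universe_age / line_time
    print(math.log2(lines))              # about 136, so a table over ~138
                                         # propositions is already out of reach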

Instead, we see large software systems (e.g.  Star Wars), rather than
being elegant, correct/complete/verifiable formal systems, as being huge
unintelligible bug-ridden masses.  It is well known that programmers
quickly lose the ability to understand their own code, let alone verify
it.  Visualization past three dimensions is practically impossible, yet
real information systems have thousands of dimensions.

This move away from formalism as a valid paradigm for AI seems perfectly
in step with non-von Neumann architecture (i.e.  connectionism), as well
as other academic trends away from deterministic, deductive, reducible
theories towards the science of fuzzy, uncertain, multi-dimensional
information systems.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 2 May 88 14:33:05 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen
      Smoliar)
Subject: Re: Free Will & Self-Awareness

In article <912@cresswell.quintus.UUCP> ok@quintus.UUCP
(Richard A. O'Keefe) writes:
>In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert
>Cockton) writes:
>> For AI workers (not AI developers/exploiters who are just raiding the
>> programming abstractions), the main problem they should recognise is
>> that a rule-based or other mechanical account of cognition and decision
>> making is at odds with the doctrine of free will which underpins most
>>Western morality.
>
>What about compatibilism?  There are a lot of arguments that free will is
>compatible with strong determinism.  (The ones I've seen are riddled with
>logical errors, but most philosophical arguments I've seen are.)
>When I see how a decision I have made is consistent with my personality,
>so that someone else could have predicted what I'd do, I don't _feel_
>that this means my choice wasn't free.


Hear, hear!  Cockton's statement is the sort of doctrinaire proclamation which
is guaranteed to muddy the waters of any possible dialogue between those who
practice AI and those who practice the study of philosophy.  He should either
prepare a brief substantiation or relegate it to the cellar of outrageous
vacuities crafted solely to attract attention!

------------------------------

Date: Sat,  7 May 88 23:39:41 EDT
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: AIList V6 #91 - Philosophy

Brian Yamauchi correctly paraphrases 30.6 of Society of Mind:

|Everything, including that which happens in our brains,
|depends on these and only fixed, deterministic laws or random accidents.
| There is no room on either side for any third alternative.

He agrees, and goes on to suggest that free will is a decision making
process. But that doesn't explain why we feel that we're free.  I
claim that we feel free when we decide to not try further to
understand how we make the decisions: the sense of freedom comes from
a particular act - in which one part of the mind STOPs deciding, and
accepts what another part has done.  I think the "mystery" of free will
is clarified only when we realize that it is not a form of decision making
at all - but another kind of action or attitude entirely, namely, of how
we stop deciding.

------------------------------

Date: 5 May 88 16:23:18 GMT
From: hpda!hp-sde!hpfcdc!hpfclp!nancyk@ucbvax.Berkeley.EDU  (Nancy
      Kirkwood)
Subject: Re: this is philosophy ??!!?


nancyk@hpfclp.sde.hp.com                 Nancy Kirkwood at HP, Fort Collins

Come now!!! Don't try to defend "logical consistency" with exaggeration
and personal attack.

> Just when is an assumption warranted ? By your yardstick (it seems ),
> 'logically inconsistent' assumptions are more likely to be warranted than
                                           ↑↑↑↑
> the logically consistent ones.

It's important to remember that the rules of logic we are discussing come
from Western (European) cultural traditions, and derive much of their power
"from the consent of the governed," so to speak.  We have agreed that if
we present arguments which satisfy the rules of this system, the
arguments are correct, and we are speaking "truth."  This is a very useful
protocol, but we should not be so narrow as to believe that it is the only
yardstick for truth.

The "laws" of physics certainly preclude jumping off a 36 story building
and expecting not to get hurt, but physicists would be the first to admit
that these laws are incomplete, and the natural processes involved are
*not* completely known, and possibly never will be.  Nor can we be sure,
being fallible humans who don't know all the facts, that our supposed
logical arguments are useful or even correct.

"Reality" in the area of human social interactions is largely if not
completely the "negotiated outcome of social processes."  It has been a
topic of debate for thousands of years at least as to whether morality
has an abstract truth unrelated to the social milieu it is found in.

> Since logical consistency is taboo, logical errors are acceptable,
> reality and truth are functions of the current whim of the largest organized
> gang around ( oh! I am sorry, they are the 'negotiated ( who by ? ) outcomes
> of social processes ( what processes ? )') how do you guys conduct research ?

Distorting someone's statements and then attacking the distortions is
not an effective means of carrying on a productive discussion (though
it does stir up interest :-)).

                                                -nancyk
*   *   *   ***********************************************   *   *   *
*       "There are more things in heaven and earth, Horatio,          *
*                than are dreamt of in your philosophy."              *
*                              -Shakespeare                           *
*   *   *   ***********************************************   *   *   *

------------------------------

Date: 7 May 88 16:12:55 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Re: Free Will & Self Awareness

In article <940@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe)
writes:
>In article <1179@bingvaxu.cc.binghamton.edu>,
vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>> In article <4543@super.upenn.edu> lloyd@eniac.seas.upenn.edu.UUCP
(Lloyd Greenwald) writes:
>> >This is a good point.  It seems that some people are associating free will
>> >closely with randomness.
>>
>> Yes, I do so.  I think this is a necessary definition.
>>
>> Consider the concept of Freedom in the most general sense.  It is
>> opposed by the concept of Determinism.
>
>For what it is worth, my "feeling" of "free will" is strongest when I
>act in accord with my own character / value system &c. That is, when I
>act in a relatively predictable way.

Yes, it is more complicated, isn't it? What if, instead of having a
"value system", you were rather in the grip of some hideous, controlling
"ideology." Let's say you're Ronald Raegan, or Botha, or Gorbachev.  I
wouldn't want to deny them free will, but would say that their
ideologies are highly determining (at least on political issues).  On
the other hand, let's say that I am an intelligent, impressionable child
in an "ideas bazarre," say a super-progressive, highly integrated school
in New York.  Then my value system will be in constant flux.

Also, don't confuse predictability with determinism.  There are degrees
of predictability.  If I know the distribution of a random variable, I
can make some degree of prediction.

>Randomness is *NOT* freedom; it is the antithesis of freedom.
>If something else controls my behaviour, the randomness of the
>something else cannot make _me_ free.

Yes, it is critical to keep the levels of analysis clear.  If something
external *determines* your behavior (for example, a value system), then
your behavior is *determined* no matter what.  The *cause* of your
behavior being free in no way implies you are free.  But we aren't
talking about something controlling you, we are talking about whether
you are controlled.  My assertion is that if you are completely
controlled, then you cannot act randomly.  If you are free, then you
can.

Try this: freedom implies the possibility of randomness, not its
necessity?

>That is,
>an agent possesses free will to the extent that its actions are
>explicable in terms of the agent's own beliefs and values.
>For example, I give money to beggars.  This is not at all random; it is
>quite predictable.  But I don't do it because someone else makes me do it,
>but because my own values and beliefs make it appropriate to do so.

In order for something to be not at all random, it must not just be
quite predictable, but rather completely predictable.  To that extent,
it is determined.

Again, we're talking at different levels (probably a
subjective/objective problem).  Let's try this: if you are free, that
means it is possible for you to make a choice.  That is, you are free to
scrap your value system.  At each choice you make, there is a small
chance that you will do something different, something unpredictable
given your past behavior/current value system.  If, on the other hand,
you *always* adhere to that value system, then from my perspective, that
value system (as an *external cause*) is determining your behavior, and
you are not free.  The problem here may be one of observation: if a coin
"chooses" to come up heads each time, I will say that it's necessary
that it does, as an inductive inference.

There are a lot of issues here.  I don't think either of us has thought
them through very clearly.

>A perfectly good person, someone who always does the morally appropriate
>thing because that's what he likes best, might well be both predictable
>and as free as it is possible for a human being to be.

He is not free so long as it is not possible for him to act immorally.  I
say it is then impossible to distinguish between someone who is free to
act immorally, and chooses not to, and someone who is determined to act
morally.

>It has been shown that **LOCAL** hidden
>variables theories are not consistent with observation, but NON-local
>theories (I believe there are at least two current) have not been falsified.

Thanks for the clarification.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: Sun, 8 May 88 21:52:59 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@CUNYVM.CUNY.EDU>
Subject: discussion involving free will and self-awareness

I've had a superficial look at the discussion involving free will and
self-awareness in AIList.

In my opinion, what we mean by the English word "will" depends upon
the philosophical viewpoint of the person in whose world the noun
exists.

"Will" might be a collection of neurological and psychological mechanisms
(ah ... like the frontal lobes being the central controller of the brain).

The noun "will" gets another meaning if we consider one's
responsibility of his deeds, to mankind, and ... for those who are
religious ... to God.  CIA is said to hypnotize people to commit
murders.  The questions arise whether they are evil ... whether they
committed a sin ... whether they did their crimes of their "free will".

And "self-awareness" ... in my opinion, simply, the mental processes of
humans are so complicated and developed we can form good models of
ourselves.

Antti Ylikoski

------------------------------

Date: Sun 8 May 88 10:25:51-PDT
From: Paul Roberts <PMR@Score.Stanford.EDU>
Subject: Philosophy, free will


If anyone is actually interested in reading something intelligent about
these topics, particularly as they apply to AI, I recommend they read
`Brainstorms' by Daniel Dennett. One chapter is called, I believe,
`The kind of free-will worth having'.

Paul Roberts

------------------------------

End of AIList Digest
********************

∂09-May-88  2302	LAWS@KL.sri.com 	AIList V6 #99 - Applications, Road Follower, Explorer vs. Sun  
Received: from KL.sri.com by SAIL.Stanford.EDU with TCP; 9 May 88  23:01:51 PDT
Date: Mon  9 May 1988 00:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #99 - Applications, Road Follower, Explorer vs. Sun
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 99

Today's Topics:
  Applications - Graphic Design & Chinese Character Cataloging,
  Project Report - CMU Connectionist Road Follower,
  AI Tools - Explorer (vs. Sun) Experience & Prolog vs. Lisp

----------------------------------------------------------------------

Date: 2 May 88 09:46:29 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert Cockton)
Subject: Re: expert systems for graphic design?

In article <10712@sunybcs.UUCP> dmark@sunybcs.UUCP (David Mark) writes:
>It is my opinion that the content and components of maps in a geographic
>information systems (GIS) application vary so much that we cannot have
>"our layouts" to be evaluated by expert designers.  Does anyone know of work
>which confirms or contradicts this, or knowledge of a graphics domain
>as complicated and variable as map production that has been 'solved' in a
>design sense?

There will never be a final 'solution' to many graphic design
problems, but as it is graphic designers who:
        a) work with these problems all day
        b) largely define (versions of) what aesthetics IS in the popular
           consciousness

then I'm sure that good graphic designers could improve the
subjective response to some layouts.

As far as cartography is concerned, the British Ordnance Survey
redesigned their 1:50000 maps over 10 years ago and I'm sure that
there will be publications about this revision process and the
principles involved.  Experienced map-users, as would be expected,
intensely disliked the changes to their user interfaces! I was a teenager
at the time, adjusted fairly quickly and now find it difficult to navigate
with the old maps.  I'd say the U.K. O.S. had an improved 'solution' here.

For other forms of information, there is a substantial psychological
literature on information presentation.  The results are piecemeal,
but they could be integrated into an interactive design assistant
(which some might even call an expert system!).

P.S. This is probably no longer a comp.ai discussion (unlike free-will :-))
     Follow-ups to comp.cog-eng?

------------------------------

Date: 3 May 88 16:14:19 GMT
From: sunybcs!rapaport@boulder.colorado.edu  (William J. Rapaport)
Subject: Re: how to recognize a chinese character

In article <527@vmucnam.UUCP> daniel@vmucnam.UUCP (Daniel Lippmann) writes:
>is there anybody knowing some computer-method to analize
>a chinese character to find his place in a dictionnary ?

You might want to implement the strategy used by James McCawley, in his
Guide to Chinese Characters for Eating in Chinese Restaurants (or some
such title--my copy of the book is at home), published by UChicago
Press.

------------------------------

Date: Thu, 5 May 1988 16:44-EDT
From: Dean.Pomerleau@F.GP.CS.CMU.EDU
Subject: Re: Need info on new CMU sidewalk rover

In a recent post concerning autonomous land vehicle work at CMU, Gary
Cottrell writes:
>I don't know who is working on it, but I heard from a usually reliable
>source that Dean Pomerleau (grad student at CMU, inventor of meta-connection
>networks) has a 3-layer back-prop network driving the thing better than the
>CMU vision group's system. Anyone care to confirm or deny this information?
>
>gary cottrell

I am working on a project called ALVINN (for Autonomous Land Vehicle In a
Neural Network).  Currently ALVINN is quite proficient at driving on
simulated road sequences and a limited number of real roads stored on disk.
It hasn't been tested on the actual vehicle yet, primarily because the van's
hardware is currently being upgraded, but we are cautiously optimistic about
its ability to drive under field conditions.  However, it is too early to make
a comparison between ALVINN and the current ALV implementation here at CMU.
A paper discussing the project is being submitted to the November Conference
on Neural Information Processing Systems in Denver.
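
For readers who have not seen one, a generic three-layer network trained
with back-propagation can be sketched in a few dozen lines.  The sketch
below is purely illustrative (a toy steering task on a synthetic 1-D
intensity profile) and says nothing about ALVINN's actual architecture,
input representation, training data, or code.

    import numpy as np

    rng = np.random.default_rng(0)

    # Layer sizes are arbitrary illustration choices, not ALVINN's.
    n_in, n_hid, n_out = 30, 8, 1
    W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
    W2 = rng.normal(0.0, 0.1, (n_hid, n_out))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def make_example():
        """Synthetic 'road image': a bright band whose position encodes steering."""
        pos = int(rng.integers(3, n_in - 3))
        x = np.zeros(n_in)
        x[pos - 2:pos + 3] = 1.0
        y = np.array([pos / (n_in - 1)])      # target steering in [0, 1]
        return x, y

    lr = 0.5
    for step in range(20000):
        x, y = make_example()
        h = sigmoid(x @ W1)                   # hidden layer
        o = sigmoid(h @ W2)                   # output (steering estimate)
        # Back-propagate the squared error through both weight layers.
        d_o = (o - y) * o * (1.0 - o)
        d_h = (d_o @ W2.T) * h * (1.0 - h)
        W2 -= lr * np.outer(h, d_o)
        W1 -= lr * np.outer(x, d_h)

    x, y = make_example()
    print("target", y, "network output", sigmoid(sigmoid(x @ W1) @ W2))

Real road following of course involves 2-D camera images and far more data;
the sketch only shows the training rule being referred to.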

Dean Pomerleau
Computer Science Dept.
Carnegie Mellon University
Pittsburgh, PA 15213-3890

pomerlea@f.gp.cs.cmu.edu (ARPAnet)

------------------------------

Date: Thu 5 May 88 21:10:37-EDT
From: Dave.Touretzky@C.CS.CMU.EDU
Subject: addendum to Pomerleau's message

For the record:  Dean's preliminary experiments on *simulated* road images
give very good results.  His experiments on 16 *real* road images also
gave good results, but that's way too few images to say anything conclusive
about how his net will compare with the CMU vision group's software.

While I personally think that the neural net approach can eventually outperform
other approaches on this simple road following task, and will do so shortly,
we don't have the data in hand to make public claims like that.  Also, the
ALV task involves more than just road tracking.  One must deal with things like
forks and intersections in roads; one must be able to use a map to keep track
of current position and navigate from starting points to arbitrary
destinations.  Neural nets have not yet even been applied to these more
complex tasks.  It's anybody's guess how well they'll perform.

-- Dave Touretzky

------------------------------

Date: Thu, 5 May 1988 12:15-EDT
From: Douglas.Reece@IUS1.CS.CMU.EDU
Subject: CMU Robots

Re Request by John Nagle and Gary Cottrell for information about the CMU
Terregator successor:

1. The current robot testbed is a vehicle called the Navlab.  It was built
from a large Chevrolet van/truck and is completely self-contained, including
propulsion, electrical power, computers and sensors.  It is being driven
on paved paths (it is too big for sidewalks).  Current research is addressing
vision, control, and architecture.  More details can be found from documents
like
Shafer, Stentz, and Thorpe, "An Architecture for Sensor Fusion in a Mobile
Robot," in Proc. IEEE International Conference on Robotics and Automation,
1986

CMU Robotics Institute Tech Reports on Strategic Computing Project, Road
Following, Navlab, etc.

Thorpe et al, "Vision and Navigation for the Carnegie-Mellon Navlab,"
Annual Review of Computer Science, Vol 2, 1987, pp 521-56

2. Dean Pomerleau's network did a nice job of identifying and locating a road
in synthesized noisy images.  He has not yet tried it on the range of (real)
images that the vision/Navlab group has used. I suspect that with some more
work he will be able to find roads in more difficult images.  He has not
tried to identify intersections. Although his network produces some very
simple control outputs, he has not tried to actually control a vehicle.

Doug Reece      dreece@ius1.cs.cmu.edu

------------------------------

Date: 5 May 88 19:13:15 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Explorer (vs. Sun) Experience ?

In article <11061@mimsy.UUCP> nau@frabjous.UUCP (Dana Nau) writes:
>On the Lisp machines, Lisp is thoroughly integrated with the operating
>system, and as a result, you can quite easily do things with windows,
>menus, editing, debugging, etc., that would be pretty painful to do in
>Lisp on the Sun.  For example, if I want a pop-up a menu on the explorer,
>I simply call a built-in Lisp function, giving it the menu title and menu
>entries, and telling what should be done for each menu entry.  That kind
>of thing is substantially more difficult on the Sun.

I would think you could just call a built-in function, etc.  This seems
more a question of what libraries are available than an inherent advantage
of Lisp machines.

Nonetheless, it is true that such things are easier at present on Lisp
machines.
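
As an illustration of the "it's a library call" point, here is roughly what
such a call can look like outside the Lisp machine world, using Python's
standard Tk binding purely as a stand-in (this is not the Explorer or Sun
facility under discussion):

    import tkinter as tk

    def popup_menu(parent, title, entries):
        """entries: list of (label, zero-argument callable) pairs."""
        menu = tk.Menu(parent, tearoff=0)
        menu.add_command(label=title, state="disabled")   # menu title
        menu.add_separator()
        for label, action in entries:
            menu.add_command(label=label, command=action)
        return menu

    root = tk.Tk()
    menu = popup_menu(root, "Choose one",
                      [("Say hello", lambda: print("hello")),
                       ("Quit", root.destroy)])
    # Pop the menu up wherever the user presses mouse button 3.
    root.bind("<Button-3>", lambda event: menu.tk_popup(event.x_root, event.y_root))
    root.mainloop()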

Jeff Dalton,                      JANET: J.Dalton@uk.ac.ed
AI Applications Institute,        ARPA:  J.Dalton%uk.ac.ed@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton

------------------------------

Date: 7 May 88 01:43:28 GMT
From: rochester!daemon@bbn.com  (Brad Miller)
Subject: Re: Explorer (vs. Sun) Experience ?


    Date: 5 May 88 19:13:15 GMT
    From: jeff@aiva.ed.ac.uk (Jeff Dalton)

        [...]

    I would think you could just call a built-in function, etc.  This seems
    more a question of what libraries are available than an inherent advantage
    of Lisp machines.

I think the more telling difference is your ability to change under a lispm
environment what is taken as immutable under UNIX. For example, if I want to
modify the scheduler slightly, I can do that *at runtime*; I don't have to
compile a whole new system to run. If I want to change a definition being
used in another process, again, I can change it *at runtime*. Thus, I can
write new modes for my editor, and test them out in the same session, not
reload and relink a new version of the editor and then test *that* out.

In general, one is provided with the source to the entire system, and any
function may be changed or advised (advice is a piece of code to be run
before, after, or around some definition).  Thus if I don't want to
actually change some part of the compiler because it will change between
releases, but the interface will remain constant, I can advise it instead.
Since advice can be compiled, there is really no performance penalty to
doing this; it is a function of working on an object-oriented system.
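
To make the idea concrete for readers outside the Lisp machine world, here
is a rough analogue in another language (purely illustrative; it is not the
lispm advice facility itself): wrap an existing definition at run time with
code that runs around it, without editing its source.

    import functools

    def advise_around(owner, name, around):
        """Replace owner.name so calls go through around(original, *args, **kw)."""
        original = getattr(owner, name)
        @functools.wraps(original)
        def advised(*args, **kwargs):
            return around(original, *args, **kwargs)
        setattr(owner, name, advised)
        return original            # keep a handle so the advice could be removed

    # Example: advise a "compiler" entry point instead of editing its definition.
    class Compiler:
        def compile(self, source):
            return "object code for " + repr(source)

    c = Compiler()
    advise_around(Compiler, "compile",
                  lambda original, self, source:
                      original(self, "preprocessed " + source))   # 'around' advice
    print(c.compile("foo.lisp"))   # the advice runs; Compiler itself is unchanged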

Most importantly, a lispm does not distinguish between the 'user' and the
'kernel'. Everyone is one big happy address space. This has the advantage of
allowing you to reuse software as you see fit, not as some UNIX designer has
decreed your interface to the kernel must be. You are free to call directly
or modify any functions that would normally be inside of the kernel, e.g.
the scheduler example I brought up. Why write your own scheduler to run as a
single UNIX process when you can just modify your system's scheduler to
suit?

There are many other advantages to the lispm environment, but I'm just
attempting to address this issue of libraries. Several papers have been
published on the lispm programming environment(s), the more current of which
I'm sure e.g. Symbolics will be happy to provide you with. As a quick
starter, look at _Interactive Programming Environments_ by Barstow, Shrobe,
and Sandewall, but realize that the book was published 4 years ago, and all
of Xerox, TI and Symbolics have done much to advance the state of the art
since then.

----
Brad Miller             U. Rochester Comp Sci Dept.
miller@cs.rochester.edu {...allegra!rochester!miller}

------------------------------

Date: 7 May 88 15:47:10 GMT
From: bbn.com!aboulang@bbn.com  (Albert Boulanger)
Subject: Re: Explorer (vs. Sun) Experience ?


  There are many other advantages to the lispm environment, but I'm just
  attempting to address this issue of libraries. Several papers have been
  published on the lispm programming environment(s), the more current of which
  I'm sure e.g. Symbolics will be happy to provide you with. As a quick
  starter, look at _Interactive Programming Environments_ by Barstow, Shrobe,
  and Sandewall, but realize that the book was published 4 years ago, and all
  of Xerox, TI and Symbolics have done much to advance the state of the art
  since then.


Also, for a non-lispm oriented discussion of the advantages of single
address environments, see the article:

"Towards Monolingual Programming Environments" Jay Heering & Paul Klint
ACM Trans. on Prog. Lang. & Systems Vol7 No. 2 April 1985. 183-213.

Personally, I feel the house of cards that is multiple-address-space
programming environments collapses when it comes to error handling.
While it is possible to fix this, it is VERY VERY hard.  Question: what
do you do when you get an error in somebody else's foreign-language
(non-Lisp) window system that you are using within Lisp on, say, a UNIX box?
Can you debug the code within a Lisp stack trace?  Can you build an
interface to mix the stack traces together?



Albert Boulanger
BBN Labs Inc.
ABoulanger@bbn.com (arpa)
Phone: (617)873-3891

------------------------------

Date: 6 May 88 02:53:44 GMT
From: quintus!ok@sun.com (Richard A. O'Keefe)
Subject: Re: Gibber in AI, social sciences, etc.


In article <050388.124141.sowa@ibm.com>, SOWA@IBM.COM (John Sowa) writes:
> The dialog between LISPers and Prologers is no
> more meaningful than the dialog between Catholics and Protestants in
> Northern Ireland.
> John Sowa

Er, just what dialogue between LISPers and Prologers are you talking about?
Here at Quintus (makers of the finest Prolog system in the known Universe)
our attitude to Lisp is "what good ideas can we steal".  I refer to my copy
of CLtL about once a day.  ZYX like Lisp so much they've even imitated its
syntax.  Sussex (makers of PopLog) think the great thing about their product
is _very_ close coupling between Lisp, Prolog, and Pop.  I suspect that
someone who argues for (Lisp|Prolog) on the grounds that (Prolog|Lisp) is
bad doesn't understand either.

------------------------------

End of AIList Digest
********************

∂10-May-88  0130	LAWS@KL.sri.com 	AIList V6 #100 - New Moderator, Queries, AI News, AI-ED SIG    
Received: from KL.sri.com by SAIL.Stanford.EDU with TCP; 10 May 88  01:29:50 PDT
Date: Mon  9 May 1988 01:31-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI.COM>
Reply-to: AIList@SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V6 #100 - New Moderator, Queries, AI News, AI-ED SIG
To: AIList@SRI.COM


AIList Digest             Monday, 9 May 1988      Volume 6 : Issue 100

Today's Topics:
  Administrivia - Matriculation,
  Queries - Looking back on "The Fifth Generation" &
    Neural Network Capabilities & Proof Checker &
    AI and Cytology & Explanation Generation &
    Knowledge Engineering Courses & Reasoning by Analogy,
  Review - Spang Robinson V4 N4,
  Education - New AI-ED SIG

----------------------------------------------------------------------

Date: Mon, 9 May 88 01:30:20 PDT
From: Ken Laws <LAWS@KL.SRI.COM>
Reply-to: AIList-Request@SRI.COM
Subject: Matriculation

This issue, the 100th of this year, is the last AIList Digest
I will edit.  After almost exactly five years, I am abandoning
my career in volunteer publishing and giving the chair to
Nick Papadakis.  I'm sure all of you are as glad as I that
Nick stepped forward, whether or not it was of his own free will.
Future submissions should be mailed to AIList@AI.AI.MIT.EDU,
with administrative mail to AIList-Request@AI.AI.MIT.EDU.
(I will set up aliases so that mail to the old addresses will
be rerouted to MIT.)  It may take a while to develop new mailing
procedures, so be patient.  If you want to help Nick, add
meaningful Subject fields to your messages and be sure to
notify him if you are going to drop off the net.

I will be leaving for NSF in just over a month, arriving about
June 30.  Sorry I haven't time to visit you all on the drive over.
Lily and I have to get to Washington with our three kids --
Brandon, 8; Kelsey, 6; and Devon, 1 -- to look for a house.
(If you know of a bargain 4-bedroom rental in Bethesda, Falls
Church, or any really great school district there, we'd like to
hear about it.)

I look forward to meeting many of you during my stay at NSF.
Part of my job, as Program Manager for Robotics and Machine
Intelligence, will be to send out proposals for peer review.
I have started to build a database of addresses and professional
interests of potential reviewers (and people able to suggest
good reviewers).  If you wish to volunteer, just send me a note
with keywords for your areas of interest.  (Otherwise I'll
take names from conference announcements and journal articles.)

Good luck to all of you, and especially to Nick!

                                        -- Ken

------------------------------

Date: Thu, 5 May 88 08:11:16 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Looking back on "The Fifth Generation"


      It has been five years since Feigenbaum and McCorduck's "The Fifth
Generation", and over five years since the five-year fifth generation
program began in Japan.  It is thus time for a serious retrospective.
Is someone writing one?

                                        John Nagle

------------------------------

Date: Thu, 5 May 88 12:11:31 edt
From: drb@cscadm.ncsu.edu (Dennis R. Bahler)
Subject: Neural Network Capabilities

There is a new edition of the Minsky/Papert Perceptrons book out
which contains new material, including, I am told, a theorem/proof on the
(in)adequacies of the newer neural net models.

Unfortunately, the book is back-ordered and I can't get my hands on a copy
locally.  Would someone (Prof. Minsky himself, perhaps?) care to offer
a precis of the new chapter while I wait for my copy to arrive?

Dennis Bahler
Dept. of Computer Science          INTERNET - drb@cscadm.ncsu.edu
North Carolina State University    CSNET    - drb%cscadm.ncsu.edu@relay.cs.net
Raleigh, NC   27695-8206           UUCP     - ...!decvax!mcnc!ncsu!cscadm!drb

------------------------------

Date: 5 May 88 22:39:58 GMT
From: abbott@aerospace.aero.org (Russell J. Abbott)
Reply-to: abbott@aero.uucp (Russell J. Abbott)
Subject: Proof Checker


Does anyone have or know of a public domain (or cheap) proof checker
that can be used by undergraduates to write and check simple proofs?  I'm
teaching an automata theory and formal languages course and the students
are having a hard time formalizing their thinking.  It would be nice if
they could practice with an automated proof checker.

A simple example problem is: prove that all strings in the set denoted
by the regular expression (01 + 10)* have the same number of 0's as 1's.
The proof is straightforward by induction on the length of the string.
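
(The induction is on the number of blocks: each block, 01 or 10, contributes
exactly one 0 and one 1.  The following brute-force check of the claim is not
a proof checker, just a possible sanity test for students to play with.)

    from itertools import product

    def language_up_to(n_blocks):
        """All strings built from at most n_blocks blocks drawn from {'01', '10'}."""
        strings = {""}
        for k in range(1, n_blocks + 1):
            for blocks in product(("01", "10"), repeat=k):
                strings.add("".join(blocks))
        return strings

    sample = language_up_to(8)
    assert all(s.count("0") == s.count("1") for s in sample)
    print("checked", len(sample), "strings; property holds")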

The proof checker should have built into it knowledge of set notation,
i.e., {X | p(X)} and of inductive proofs.  It should also have a basic
knowledge of simple arithmetic.  Of course it also needs to be able to
use results that are proved earlier or given to it as axioms.

Thanks,

-- Russ Abbott

------------------------------

Date: 4 May 88 17:56:46 GMT
From: paul.rutgers.edu!eagles.rutgers.edu!lubinsky@rutgers.edu 
      (David Lubinsky)
Subject: Request for references on AI and Cytology

I would very much appreciate any references on the automatic analysis
of cytological samples, especially cervical smears.  I would also like
to know about current work in robot microscopes.

If anyone else is interested, send me your address and I will mail a
summary.

Thanks
David

------------------------------

Date: 7 May 88 00:18:46 GMT
From: munnari!moncsbruce.oz.au!pps@uunet.UU.NET (Peter Sember)
Subject: explanation generation bib


I would greatly appreciate any information on the availability of a
bibliography on Explanation Generation Systems.

                                Thanks in Advance.

Peter Sember,           |  ARPA: munnari!moncsbruce.oz!pps@uunet.uu.net
Computer Science Dept., |  CSNET and ACSNET: pps@moncsbruce.oz
Monash University,      |
Victoria 3168,          |
Australia               |

------------------------------

Date: 6 May 88 15:32:38 GMT
From: mcvax!philmds!philtis!grant@uunet.uu.net  (Joe Grant)
Subject: Knowledge Engineering Courses


I am posting this in the hope that people out there can help me. I have just
begun working in a Knowledge Based Systems group, and am hoping to train as
a knowledge engineer. My background is as a software engineer, and the only
exposure that I have had to knowledge engineering to date is a brief
introduction to the area of expert systems in college, plus this and the other
Comp.ai.* newsgroups, which I have been reading for the last 6 weeks.

What I would like to do is take some time to find out what courses are
available, and plan out a training schedule that will at least give me a good
grounding in the current knowledge engineering techniques. These may take the
form of College, Company, or correspondence training, whichever suits the
situation best, though the College option may very well not prove viable --
I can't see my employers letting me wander off to anything but a 9--5 course,
and all the college courses I took were far from 9--5 on one topic. I have
heard that there are a number of in-house courses run by the likes of DEC to
train their own prospective knowledge engineers; any information on such
courses would be greatly appreciated. I don't know how well such firms take to
applications from non-employees, but if I can get the information at least I
can test the water.

As I am pretty green in this area I feel that the best way to approach this
is to look at all the options. So if you know of any training course or aid,
I would be indebted to you if you could let me know about it, and if you
have any information as to its usefulness that too would be greatly
appreciated.

I am not so familiar with the network, so I don't know what sort of
addressing format will be required to reach me. However I do know that the
route from here to mcvax -- the European gateway I believe -- is via a
system called philmds; if I were to send to there I'd enter

        philmds!mcvax!userid

Hopefully that will be enough information for you to reach me. I hope to
hear from some of you out there. Many thanks in advance.

                        Joe Grant.

------------------------------

Date: 6 May 88 16:54:53 GMT
From: mcvax!ukc!mupsy!liv-cs!stian@uunet.uu.net
Subject: Reasoning by Analogy

Does anyone know of any work done on reasoning by analogy?  Any references
would be gratefully received.  Thanks in advance,

Ian Finch
---------
Postal: Dept. of Computer Science, Chadwick Tower, University of Liverpool,
        P.O. Box 147, Liverpool L69 3BX
 Janet: stian@uk.ac.liv.csvax
  uucp: ...!mcvax!ukc!mupsy!liv-cs!stian
  arpa: stian%csvax.liv.ac.uk@nss.cs.ucl.ac.uk

------------------------------

Date: Sat, 7 May 88 20:41:21 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Review - Spang Robinson V4 N4

     Spang Robinson Report on Artificial Intelligence
        Volume 4, No. 4, April 1988

Lead article is on Expert system applications and MIS.

There is a discussion of how MIS people can avoid the high costs of
"knowledge engineers," CASE tools, the use of low-end systems, and the
availability of tools for mainframes.

* Expert Systems never eliminated employees
* Cultural issues
* Problems in using COBOL to do the job

Blue Cross's claim analysis system reduced the time to evaluate a claim
from two weeks to fifteen to thirty minutes.  145 out of 155 claims were
handled by a medical review system.

Also provided is a table of some applications, with information such as
whether the systems paid for themselves, the barriers overcome, and the
software used:

Blue Cross - claims analysis and medical review
Data General - MIS department
DEC - order handling
McDermott Corporation - used Bachman Re-engineering tool set
Northern Telecom - Engineering Change Manager
Provident Life - Credit Union loan analysis
Shearson Lehman - Bachman Re-engineering tools
Oil Company - Two Engineering Applications

________________________________________

IEEE Conference on Artificial Intelligence Applications Review.

There were 620 attendees, consisting of people developing commercial
AI applications.  The main complaint was a lack of technical content.

A session on expert system project failures showed that "cultural planning"
and a "focus on real business planning rather than on technical
experimentation" are necessary to avoid failure.

________________________________________

Carnegie Group of Pittsburgh and Texas Instruments have announced an expert
system shell for troubleshooting complex machine failures and for process
planners.  The development engine runs on the Explorer, with the capability
to translate applications to C or to the IBM PC/AT.

________________________________________
Shorts:

Integrated Analytics sells Market Mind which alerts traders to
significant trading patterns.  It is written in C.

AI Corp has announced "KBMS" available by June 1.  It supports IBM
mainframes including MVS, VM, CICS, TSO, IMS, CMS, DB/2 and SQL/DS.

Cooperative Technology is a new start-up that offers management consulting
to custom software development.  It is headed by William Turpin who
developed the Personal Consultant Series from TI.

DEC announced that in its network and communications products division,
AI integration has generated a 300 percent increase in revenues, a 20 percent
rise in gross profit margins and a 285 percent increase in product
reliability.

Lucid will be bundled with KEE on 80386 and Wisdom Systems' Concept Modeller.

Nihon will be giving Symbolics a million dollars in funding for
the development of a system based on the Ivory chip.

Anza is now selling a diskette-based bibliographic reference list
for neural networks.

John McDermott has left Carnegie Mellon University to go to DEC.

------------------------------

Date: 29 Apr 88 2144-PDT
From: Moderator Steve Barnhouse... 
      <AI-ED-REQUEST@SUMEX-AIM.STANFORD.EDU>
Subject: New AI-ED SIG

                 [Forwarded from the IRList Digest.]

AI-ED Digest            Saturday, 30 Apr 1988      Volume 3 : Issue 16


   Date: 15 Apr 88 13:05:00 EST
   From: "ARTIC::PSOTKA" <psotka%artic.decnet@ari-hq1.arpa>
   Subject: New AERA  AI & ED  SIG

               ARTIFICIAL INTELLIGENCE  &  EDUCATION

     A NEW Special Interest Group was formed at the annual meeting of AERA in
New Orleans in April.  I am pleased to announce that Wallace Feurzeig of
BBN Laboratories is the first Chair of the SIG on AI&ED.  Cathie Norris
of the University of North Texas is the Newsletter Editor, and Joe
Psotka of the U.S. Army Research Institute is the Secretary/Treasurer.


     It was the consensus of the SIG organizational meeting that it is
time to put AI to use in schools by making a broader range of researchers
aware of its potential.  Artificial Intelligence is advancing rapidly on a
broad front of research issues.  Many of these issues are directly relevant
to the interests and needs of teachers and researchers at all levels of
instruction, and in many different settings.  This SIG will provide an
overview of the current issues that may have the strongest effect on
education.

     Artificial Intelligence is becoming more applicable to practical use
in education.  In part, this is because the technology of AI, based as it
is on specific algorithms and understanding derived from areas of computer
science and cognitive science somewhat remote from the mainstream of
educational research, is maturing steadily and becoming less arcane and
more generally useful for instruction.  The other main reason for this
increasing practicality of AI technology is the continuing increase in the
power of personal computer technology available at a price that schools
and workplaces can afford.  Outstanding examples of this are the Hypercard
environment on MACs and the powerful Lisp environments on new PCs.  Both
of these factors make it more important that educational researchers
understand and become familiar with AI technology.

This SIG will offer us an opportunity to introduce other educational
researchers to these topics.  The SIG on AI & ED will be mainly concerned
with the use of AI and cognitive science technologies for education.
Primary areas for reporting research in these technologies will be
authoring systems for CBI and ICAI; intelligent microworlds; machine
learning; complex environments for instruction; knowledge representation;
qualitative modelling techniques; structures of declarative knowledge;
computer thinking tools; rule systems for procedural knowledge; student
modelling; student diagnosis; teacher amplifiers; hypertext systems;
natural language processing; and other important outgrowths of AI that
offer significant potential for improving education in the schools, the
workplace, and at home.

For more information,  inquiries, suggestions for symposia, and  other
offers of support,  please contact :


Wallace Feurzeig,
BBN Laboratories
10 Moulton St
Cambridge, MA  02238   at (617)873-3448  or  Feurzeig@g.bbn.com.arpa

Send newsletter contributions, announcements, and other information to:

Cathie Norris
University of North Texas
P. O. Box 5155
Denton, TX  76203  at (817)565-4189



Joseph Psotka, Ph.D.
Army Research Institute
5001 Eisenhower Avenue
Alexandria, Virginia  22333-5600

OR CALL:   (202)274-5540   or Psotka@ARI-HQ1.Arpa


If you would like to join, please send this application.

  [Contact the message author.  -- KIL]

------------------------------

End of AIList Digest
********************

∂14-May-88  1955	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #1   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 May 88  19:54:53 PDT
Received: from FORD.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 14 MAY 88  21:52:03 EDT
Date: Saturday, 14 May 1988, 21:50-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Sender: nick@MIT-ARTHUR
Reply-to: AIList@AI.AI.MIT.EDU
Subject: AIList Digest   V7 #1
To: ailist-outgoing@mc



AIList Digest           Saturday, 14 May 1988       Volume 7 : Issue 1

Today's Topics:

  Administrivia (New address)
  Queries (Lots and lots) 


----------------------------------------------------------------------

Date: Sat, 14 May 88 12:12:00 -0700
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Subject: New address

	Just a reminder - the correct address to use for postings to
AIList is now: AILIST@AI.AI.MIT.EDU

	Note the *repetition* of the .AI part! Administrative requests
should go to AILIST-REQUEST@AI.AI.MIT.EDU

	Please bear with me these first few weeks, as the changeover is
not likely to be painless ...

		- nick

------------------------------

Date: Mon, 09 May 88 12:12:00 -0700
From: "Karl B. Schwamb" <Schwamb@ics.UCI.EDU>
Subject: Query: 3K Regularities of Human Cognition?

In a recent article, Allen Newell stated that there are 3000 regularities
of human cognition.  Does anyone know a reference where a list of these
may be found?

-Karl

------------------------------

Date: 9 May 88 02:02:29 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Explorer (vs. Sun) Experience ?

In article <9457@sol.ARPA>, miller@ACORN.CS.ROCHESTER.EDU (Brad Miller) writes:
> Most importantly, a lispm does not distinguish between the 'user' and the
> 'kernel'. Everyone is one big happy address space.

Which is to say:  one fall down, ALL fall down.
Don't lispm users ever make mistakes?

------------------------------

Date: 9 May 88 02:02:29 GMT
From: hall@alpha.ece.jhu.edu
Subject: Re: Explorer (vs. Sun) Experience ?


Nah, we just call it a feature.  :-)
Seriously though, remember that lispm's are single user machines, so even if
you REALLY mess up and the system's debugging facilities can't save you (they
usually can - my Symbolics debugging aids are many times more powerful/helpful
than those on Lucid CL on my Sun) - even if you have to reboot, you don't kill
anyone else.
                        - Marty Hall
--
ARPA (preferred) - hall@alpha.ece.jhu.edu [hopkins-eecs-alpha.arpa]
UUCP   - ..seismo!umcp-cs!jhunix!apl_aimh | Bitnet  - apl_aimh@jhunix.bitnet
Artificial Intelligence Laboratory, MS 100/601,  AAI Corp, PO Box 126,
Hunt Valley, MD  21030   (301) 683-6455

------------------------------

Date: 9 May 88 13:40:35 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU 
      (Stephen Smoliar)
Subject: Re: Reasoning by Analogy

In article <1533@csvax.liv.ac.uk> stian@csvax.liv.ac.uk writes:
>Does anyone know of any work done on reasoning by analogy. Any references
>received gratefully.
>
Dedre Gentner
Mechanisms of Analogical Learning
Report No. UIUCDCS-R-87-1381
Department of Computer Science
University of Illinois at Urbana-Champaign
Urbana, Illinois

Author's address:
Dr. Dedre Gentner
Department of Psychology
University of Illinois
603 E. Daniel
Champaign, Illinois  61820

------------------------------

Date: 10 May 88 01:52:38 GMT
From: news@galaxy.rutgers.edu (News)
Reply-to: andromeda!subraman@rutgers.edu (Ramesh Subramanian)
Subject: Re: Decision Theory in AI. (Judea Pearl's Influence Diag)


I wonder if somebody could mail me some info on where I could get
literature on Judea Pearl's influence diagrams?  Thanks.
From: subraman@andromeda.rutgers.edu (Ramesh Subramanian)


(Ramesh Subramanian               email (uucp):...!rutgers!andromeda!subraman
101 Bleeker St. Box#85           voice: (201) 565-9290.
Newark, NJ 07102.)

------------------------------

Date: Mon, 9 May 88 20:21:51 PDT
From: trwrb!smpvax1!sdl@ucbvax.Berkeley.EDU

I am trying to compare Gemstone and Vbase, which are object-oriented
databases.  Does anyone have experience with both systems?

I would also like to know if there is a mailing list for either
object oriented systems or databases.

Thanks.

Daniel Lee
Inference Corporation
ucbvax!trwrb!smpvax1!sdl

------------------------------

Date: 10 May 88 17:27:27 GMT
From: vrdxhq!daitc!viusys!gabe@umd5.umd.edu  (Gabe Nault)
Subject: ai languages on unix wanted

I am starting a Master's thesis and am interested in finding
an artificial intelligence language that either runs under Unix or
can be ported to a Unix system. I hope to be able to find something
more than lisp or xlisp. I have heard of a language called STAR, which
is originally from NASA. The problem is that they want $2000 for this
software, (and you thought that all government sponsored software was
public domain). If anyone knows of any languages such as this, or perhaps
a Prolog that runs on UNIX, please let me know.
        Thanks in advance
        Gabe Nault

------------------------------

Date: 10 May 88 17:27:27 GMT
From: osu-cis!dsacg1!mgiven
Subject: ai languages on unix wanted



One language that you could consider is CLIPS, a forward-chaining language
implemented in C, which is available from COSMIC (at the Univ. of Georgia,
404-542-3265).  It was developed for NASA.
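
For anyone new to the term, "forward chaining" just means that rules fire
whenever all of their conditions are present in working memory, adding new
facts until nothing more can be derived.  The toy sketch below illustrates
the idea only; it is not CLIPS syntax and says nothing about CLIPS itself.

    rules = [
        ({"bird"}, "has_feathers"),
        ({"bird", "healthy"}, "can_fly"),
        ({"can_fly", "migratory"}, "flies_south"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)      # rule fires, asserting a new fact
                    changed = True
        return facts

    print(forward_chain({"bird", "healthy", "migratory"}, rules))
    # -> includes has_feathers, can_fly, flies_south
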
--
Mott Given @ Defense Logistics Agency ,DSAC-TMP, P.O. Box 1605,
            Systems Automation Center, Columbus, OH 43216-5002
UUCP:        {cbosgd,gould,cbatt!osu-cis}!dsacg1!mgiven
Phone:       614-238-9431

------------------------------

Date: Tue, 10 May 88 15:39 EDT
From: John Watkins <jwatkins@STONY-BROOK.SCRC.Symbolics.COM>
Subject: Re: how to recognize a chinese character

    In article <527@vmucnam.UUCP> daniel@vmucnam.UUCP (Daniel Lippmann) writes:
    >is there anybody knowing some computer-method to analize
    >a chinese character to find his place in a dictionnary ?

There are several standard indexing techniques used in Chinese
dictionaries to look up the meaning of a Chinese character. The four I
am familiar with are:

1. Pronunciation - Not useful here.

2. Stroke count - Each "brush stroke" is counted.  One refers to a table,
ordered by the total number of strokes used to write the character, that
gives an index value for the location of the character in the dictionary.
The table is divided by total number of strokes and then further divided
by one of the other techniques mentioned here.  Aside from total number
of brush strokes, there is one variation of this in which the first
stroke forms the first index value and the remaining strokes form a
second index value.

3. Radical/residual - I believe this is the most common for frequent
usage.  Each character is composed of a radical and a residual.  The
radical is composed of several strokes.  Generally the radicals are
arranged in ascending order of the number of strokes used in writing
them.  Characters are ordered within each radical grouping according to
the count of the remaining strokes used to write the character (see the
sketch after this list).

4"boxes" - This is the technique with which I am least familiar. I
believe it was used in one of the Yale dictionaries. As I recall it was
based upon dividing up the character into four boxes. The starting
stroke in each box was used as a partial index to the character.
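
A sketch of the radical/residual scheme in item 3, with made-up placeholder
entries rather than real dictionary data: characters are grouped by radical,
radicals are ordered by their stroke count, and within a group characters
are ordered by the number of remaining ("residual") strokes.

    from collections import defaultdict

    # (character, radical, radical_strokes, residual_strokes); placeholders only.
    entries = [
        ("char_A", "rad_1", 3, 7),
        ("char_B", "rad_1", 3, 4),
        ("char_C", "rad_2", 5, 2),
    ]

    index = defaultdict(list)
    for ch, rad, rad_strokes, residual in entries:
        index[(rad_strokes, rad)].append((residual, ch))

    def dictionary_order():
        """Yield characters ordered by radical stroke count, then residual strokes."""
        for key in sorted(index):
            for residual, ch in sorted(index[key]):
                yield ch

    print(list(dictionary_order()))    # -> ['char_B', 'char_A', 'char_C']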

If any of these is of interest, I suggest looking through a good library
at a university with a Chinese language department.


The real problem lies in recognizing what is and what is not a correct
"brush stroke". I do not know of any work in this area. Perhaps others
will have suggestions.

------------------------------

Date: 10 May 88 23:22:11 GMT
From: amdahl!apple!pz@ames.arpa  (Peter Zukoski)
Subject: Info wanted on commercial products started at a university

Howdy -
I'm trying to find commercially successful computer products that were begun in
university research. Some examples might be NuBus, or Ingres.
I'm interested in the success (or failure) of technology transfer between
universities and industry. If you know anything about the way the product
was transferred, that is "How did it begin?", "How did it move out of
the university?", "What made it a successful transfer?", etc. that will
be wonderful. One of the theories I'm investigating is:
"Is it the technology which makes it successful, or is it the people/researchers
involved that make it successful?" For instance, with NuBus, the research
team from MIT went to Western Digital to continue work on the product, and it
was their presence which greatly helped make NuBus a viable product.

If you have examples, history, and opinions on technology transfer issues they
will be most welcome.

Please forward this if you know of anyone expert in this area.

Thanks in advance.
Please mail responses. If you're interested in any results, let me know, and
I'll forward them to you, or post if enough interest is shown.


peter "does a dogma have the buddhist nature" z

I demand rigidly defined areas of doubt and uncertainty!

CSNET:   pz@apple.COM
UUCP :   {sun,amdahl,nsc,dual}!apple!pz
SNAIL:   20600 Mariani MS/22C Cupertino CA 95014
BELL :   (408)973-2920 / (408)356-9133

------------------------------

Date: Tue, 10 May 88 17:13:05 PDT
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: analogical reasoning


See books and reports (1980's) by Dr. Dedre Gentner, who is doing current
research in analogical reasoning at University of Illinois, Dept of
Psychology, 603 E. Daniel St., Champaign, IL 61820.

David R. Lambert, PhD
Email:  lambert@nosc.mil

------------------------------

Date: 11 May 88 04:46:38 GMT
From: munnari!phadfa.adfa.oz.au!lee@uunet.UU.NET (Bill Lee )
Subject: wanted expert program "AM" source


I was after the source code for a program called "AM"
This program was written by Lenat & Davis.  Can anyone help
me by either sending the source code my email, or telling me where I may
get a copy?

Please send all replies to

shaw@eeadfa.ee.adfa.oz   ACSNET Address

or

Brian Shaw
GPO Box 2389
Canberra 2601
Australia

(rather than the owner of this account)
--
Mail: Bill Lee, Dept. Electrical & Electronic Engineering, University College,
UNSW, ADFA, Canberra. 2600.  Phone:  (062) 68 8193,  Telex:  ADFADM AA62030,
ACSNET: "bill@eeadfa.ee.adfa.oz"

------------------------------

Date: Wed, 11 May 88 21:02:48 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: TV systems for mobile robots

      I'd like to hear about experience with various cameras and radio links
used with mobile robots.  I'm interested in units suitable for a small,
high-speed vehicle in which the vision processing is offboard.

      The ideal device, as pointed out by Russell Anderson in "A Robot
Ping-Pong Player", is a CCD frame-transfer image sensor, since with such
devices the entire frame is acquired as a unit and no artifacts of the
scanning process appear in the image.  Examples of such parts are the
Sanyo LC99xx series.  (The Fisher-Price Toy Camcorder and Lionel
Loco-Vision use the LC9943, a low-resolution part from this line.
There are higher resolution parts in the same family.)
Is a miniature TV camera using such a sensor with at least 250x250 resolution
available yet?

      Next best is a CCD line-transfer image sensor.  The better Pulnix
units have these, and many robotic groups use them.  What is the experience
with these?

      The Sony Watchcam is a low-cost alternative.  Any experience here?

      What about TV transmitters and receivers?  I've seen a few TV Genie
units around, but not only are they weak, they're illegal.  But they do
show that such a transmitter need not be large, and there are bands in
which one can obtain appropriate licences.  I do need something
about that size, though, say 4x2x2 or smaller.   Is there such a thing
as FM TV gear, to improve the noise immunity?

      Has anyone dealt with the problem of camera stabilization and vibration
isolation in a moving vehicle?  The Steadicam gyro approach seems overkill.
Sorbothane shock mounting is easy enough to do, but is it enough to get
clear single frames?  Has anyone tried using data from accelerometers
and rate gyros to stabilize an image electronically?
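
One plausible reading of "stabilize an image electronically" (my own gloss,
not a description of any particular system): integrate the measured angular
rates over the interval since the reference frame, convert the rotation to a
pixel offset via the focal length, and shift the frame the opposite way.  A
rough sketch of that arithmetic:

    import numpy as np

    def stabilize(frame, gyro_rates, dt, focal_length_px):
        """frame: 2-D array; gyro_rates: (pitch, yaw) samples in rad/s, shape (N, 2)."""
        # Integrate the sampled rates to get total rotation since the reference frame.
        pitch, yaw = np.sum(gyro_rates, axis=0) * dt
        # Small-angle approximation: pixel shift is roughly focal length * angle.
        dy = int(round(focal_length_px * pitch))
        dx = int(round(focal_length_px * yaw))
        # Shift the image the opposite way to cancel the apparent motion.
        return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

    frame = np.zeros((64, 64))
    frame[30:34, 30:34] = 1.0                      # a bright patch to watch
    rates = np.full((10, 2), 0.05)                 # ten samples of 0.05 rad/s each
    shifted = stabilize(frame, rates, dt=0.01, focal_length_px=400.0)
    print(np.argwhere(shifted == 1.0).min(axis=0)) # patch moved from (30, 30) to (28, 28)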

      Has anyone tried sending data back from a robot in the audio carrier
of a TV signal or in the vertical retrace interval?  If so, with what hardware?

      Yes, I know it's a hard, ugly problem.

                                        John Nagle

------------------------------

Date: 12 May 88 15:49:58 GMT
From: bigburd.PRC.Unisys.COM!judy@burdvax.prc.unisys.com  (Judy P.
      Clark)
Subject: Machine Design for Testability References

Several weeks ago someone asked for references on the application of
AI techniques to testability in VLSI design.  I only saw one response to
that request and would be interested in receiving a summary if there were
more.

However, what I am more interested in is information on design for
testability of entire machines or systems.  Does anyone know of any
references?

Thanks in advance,

Judy Clark                judy@prc.unisys.com
Unisys Defense Systems
PO Box 517
Paoli, PA 19301

------------------------------

Date: 12 May 88 18:55:39 GMT
From: trwrb!aero!abbott@bloom-beacon.MIT.EDU  (Russell J. Abbott)
Subject: Proof Checker Wanted

Does anyone have or know of a public domain, free, or cheap proof
checker that can be used by undergraduates to write and check simple
proofs?  I'm teaching an automata theory and formal languages course,
and the students are having a hard time formalizing their thinking.  It
would be nice if they could practice with an automated proof checker.

A simple example problem is: prove that all strings in the set denoted
by the regular expression (01 + 10)* have the same number of 0's as 1's.
The proof is straightforward by induction on the length of the string.

The proof checker should have built into it knowledge of set notation,
i.e., {X | p(X)}, strings, and of inductive proofs.  It should also have
a basic knowledge of simple arithmetic.  Of course it also needs to be
able to use results that are proved earlier or given to it as axioms.

Thanks,

-- Russ Abbott


------------------------------

Date: 12 May 88 18:55:39 GMT
From: mcnc!ecsvax!rgn
Subject: Proof Checker Wanted


I would also be interested in a proof checker for possible use in
an Intro. to Theoretical Computer Science course.

Thanks,
Rob
--
Rob Norris
Dept. of Math Sciences     UUCP:      ...!mcnc!ecsvax!rgn

------------------------------

Date: 12 May 88 18:55:39 GMT
From: rapaport@cs.buffalo.edu
Subject: Proof Checker Wanted


There are proof checkers (as well as proof givers) for both propositional
and predicate-logic natural-deduction systems in:

Schagrin, Morton L.; Rapaport, William J.; & Dipert, Randall D. (1985)
Logic:  A Computer Approach (New York:  McGraw-Hill).

Software for them is available from:

LCA Software
c/o Prof. Randall R. Dipert
Department of Philosophy
State University College
Fredonia, NY 14063
                                        William J. Rapaport
                                        Assistant Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {decvax,watmath,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||

------------------------------

Date: 12 May 88 21:23:18 GMT
From: pyramid!prls!philabs!sbcs!dji@decwrl.dec.com  (the dirty vicar)
Subject: Book Rec Wanted (Thm Proving)

Thanks to all who responded to my request for suggested books on
resolution-based theorem proving.  Sorry, but I didn't get a chance
to respond to all personally.  Anyway, here are the results, for
those who are interested.  Almost all the votes were for one or
both of the following two books (and they got about equal numbers
of votes):

_Automated Reasoning: Introduction and Applications_
        by Wos, Overbeek, Lusk, and Boyle
        Prentice-Hall 1984

_Symbolic Logic and Mechanical Theorem Proving_
        by Chang and Lee
        Academic Press 1973

A few votes also went to:

_Computer Modelling of Mathematical Reasoning_
        by Allan Bundy
        Academic Press 1983

                                Thanks again
                                        the vic

Dave Iannucci \ Dept of Computer Science \ SUNY at Stony Brook, Long Island, NY
ARPA-Internet: dji@sbcs.sunysb.edu / CSNet: dji@suny-sb / ICBM: 40 55 N 73 08 W
UUCP: {allegra, philabs, pyramid, research}!sbcs!dji or ....bpa!sjuvax!iannucci

------------------------------

Date: Thu, 12 May 88 17:59:13 PDT
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM.Stanford.EDU>
Subject: update on Fifth Generation

Re: the query about "The Fifth Generation":

In chapter 10 of a new book by McCorduck, Nii, and me (The Rise of the
Expert Company, Times Books, forthcoming in August at AAAI), we attempt to
update the status of the Japanese Fifth Generation project. The basis for this
is routine reading of 5G technical papers, and one long interview in
December of 1986 with Dr. Fuchi and other friends at ICOT, The update is
short, and is intended for the same general audience that read the book
The Fifth Generation.


Ed Feigenbaum

------------------------------

Date: Thu 12 May 88 18:19:20-PDT
From: HOSEIN@PLUTO.ARC.NASA.GOV
Subject: SOAR graphics


                Marc P. Hosein
                Intelligent Systems Technology Branch
                NASA Ames Research Center
                Mail Stop 244-4
                Moffett Field, CA.  94035
                (415) 694-6526

        TO: Neural Network and Connectionist Researchers

Thank you for responding to my previous request-for-information letter.
The response has been tremendous!  I have received many neural network
papers and have been in the process of studying them over the past few months.
I have chosen several of the connectionist models for use in my poster,
but I am now in need of some color.  That is, I am looking for

        1) Color photographs associated with your work.

        2) Videos of demos or research models being studied.

Any videos or pictures would be very helpful.  I would much appreciate
speedy correspondence as the SOAR conference will be held July 20-23.
Thank you for your time and consideration.  Please feel free to
call me at (415) 694-6526 or send mail on the arpanet to
HOSEIN@AMES-PLUTO.ARPA if you have any questions.

Again, I can not thank you enough for the papers I have already received.


                                        Thank you, Marc P. Hosein

------------------------------

End of AIList Digest
********************

∂23-May-88  2051	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #2   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 23 May 88  20:51:09 PDT
Received: from MARVIN.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 23 MAY 88  23:30:56 EDT
Date: Monday, 23 May 1988, 23:24-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Sender: nick@MIT-ARTHUR
Reply-to: AIList@AI.AI.MIT.EDU
Subject: AIList Digest   V7 #2
To: ailist-outgoing@mc


AIList Digest            Tuesday, 24 May 1988       Volume 7 : Issue 2

Today's Topics:

  Seminars, Papers, and Conferences

----------------------------------------------------------------------

Date: Mon, 16 May 88 11:15:54
From: SubbaRao Kambhampati <rao@cvl.umd.edu>
Subject: Thesis Proposal: Approach for Flexible Reuse of Plans

          An Approach for Flexible Reuse of Plans

               (Ph.D. Dissertation Proposal)

                    Subbarao Kambhampati
               Department of Computer Science
                   University of Maryland
                   College Park MD 20742

                        May 31, 1988

Abstract

     The value of enabling a planning system to remember the plans it
generates for later use was acknowledged early in planning research.  The
systems developed, however, were very inflexible as the reuse was primarily
based on simple strategies of generalization via variablization and later
unification.  We propose an approach for flexible reuse of old plans in the
presence of a generative planner.  In our approach the planner leaves
information relevant to the reuse process in the form of annotations on
every generated plan.  To reuse an old plan in solving a new problem, the
old plan along with its annotations is mapped into the new problem.  A
process of annotation verification is used to locate applicability failures
and suggest refitting strategies.  The planner is then called upon to carry
out the suggested modifications to produce an executable plan for the new
problem.  This integrated approach obviates the need for any extra domain
knowledge (other than that already known to the planner) during reuse and
thus affords a relatively domain independent framework for plan reuse.  We
will describe the realization of this approach in two disparate domains
(blocks world and process planning for automated manufacturing) and propose
extensions to the reuse framework to overcome observed limitations.  We
believe that our approach for plan reuse can be profitably employed by
generative planners in many applied domains.
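
A drastically simplified sketch of the flavor of annotation verification as
I read the abstract (the representation below is an illustration of the
general idea, not the proposal's actual formalism): each plan step carries
annotations recording the conditions it relied on, and verification
re-checks those conditions in the new problem, reporting failures as
candidates for refitting.

    # Toy effect model for the two operators used below (illustrative only).
    def effects(step_name):
        if step_name.startswith("pickup"):
            return {"holding(A)"}
        if step_name.startswith("stack"):
            return {"on(A,B)", "handempty", "clear(A)"}
        return set()

    def verify_annotations(plan, new_initial_state):
        """Return (step, missing conditions) pairs that need refitting."""
        known = set(new_initial_state)
        failures = []
        for step in plan:
            missing = step["relies_on"] - known
            if missing:
                failures.append((step["step"], missing))
            known |= effects(step["step"])      # conditions later steps may rely on
        return failures

    old_plan = [
        {"step": "pickup(A)",   "relies_on": {"clear(A)", "ontable(A)", "handempty"}},
        {"step": "stack(A, B)", "relies_on": {"holding(A)", "clear(B)"}},
    ]
    new_state = {"ontable(A)", "on(C, A)", "clear(B)", "handempty"}   # A is not clear
    print(verify_annotations(old_plan, new_state))
    # -> [('pickup(A)', {'clear(A)'})], i.e. refitting must first achieve clear(A)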

------------------------------

Date: Tue, 10 May 88 09:35:08
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Submission to AILIST

CALL FOR PAPERS: 1st AUSTRALIAN KNOWLEDGE ENGINEERING CONGRESS

           AUSTRALIAN KNOWLEDGE ENGINEERING CONGRESS

                      2-4th NOVEMBER 1988

                     MELBOURNE, AUSTRALIA

                        CALL FOR PAPERS

                        - MAJOR THEMES -

* FROM DATABASE TO IKBS (INTERACTIVE KNOWLEDGE-BASED SYSTEMS)

        INFORMATION ENGINEERING
        EXPERT SYSTEM APPLICATIONS
        KNOWLEDGE ENGINEERING MANAGEMENT

* CONVERSATIONAL ADVISORS

        NATURAL LANGUAGE INTERFACES
        KNOWLEDGE SOURCE SYSTEMS
        INTELLIGENT ASSISTANTS

* PLANNING & DECISION SUPPORT

CASE BASED REASONING
        EXPLANATION BASED LEARNING
        DISTRIBUTED KNOWLEDGE BASE SUPPORT


Submission of Papers
--------------------
Papers are invited on the above topics or on any other topic of practical
interest to knowledge engineers. Three (3) copies of the full papers must
be received by 31st July, 1988. Papers must be written in English, may not
exceed 20 double spaced pages, and should conform to the attached guidelines.

Panel and tutorial proposals are also solicited. Five (5) copies of proposals
must be received by May 31st, 1988. Proposals may not exceed 2 double spaced
pages and should include a description of the major topics and, for panels,
a tentative list of panelists.


Important dates
---------------
Papers due on July 31st 1988
Notification of acceptance on August 31st 1988
Camera ready copies due on September 30th 1988


All correspondence and enquiries should be directed to:

Professor B.J. Garner
Division of Computing and Mathematics
Deakin University
Geelong, Victoria 3217,
AUSTRALIA


Eric Tsui                               eric@aragorn.oz
Division of Computing and Mathematics
Deakin University
Geelong, Victoria 3217
Australia

------------------------------

Date: Tue, 10 May 88 15:19:18 EDT
From: finin@PRC.Unisys.COM
Subject: Seminar: Semantics of Verbal Modifiers ... (UNISYS)


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER

              Defining the Semantics of Verbal Modifiers
                    in the Domain of Cooking Tasks

                           Robin F. Karlin
                   Computer and Information Science
                      University of Pennsylvania

SEAFACT (Semantic Analysis For the Animation of Cooking Tasks) is a
natural language interface to a computer-generated animation system
operating in the domain of cooking tasks.  SEAFACT allows the user to
specify cooking tasks using a small subset of English.  The system
analyzes English input and produces a representation of the task which
can drive motion synthesis procedures.  This talk describes the
semantic analysis of verbal modifiers on which the SEAFACT
implementation is based.


                       2:00 pm Tuesday, May 19
                           Paoli Auditorium
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: 11 May 88 21:30:34 GMT
From: rochester!ur-tut!sunybcs!rapaport@cu-arpa.cs.cornell.edu
Subject: ACL-88 program & registration CLARIFICATION


                 ASSOCIATION FOR COMPUTATIONAL LINGUISTICS
                            26th Annual Meeting

                               7-10 June 1988
        Knox 20, State University of New York at Buffalo (Amherst Campus)
                           Buffalo, New York, USA


                                  PROGRAM

MONDAY EVENING, 6 JUNE
7:00-9:00       Tutorial Registration and Reception
                Rathskeller, Norton Hall

TUESDAY MORNING, 7 JUNE
9:00-12:15      Tutorial Sessions

                CONTEMPORARY SYNTACTIC THEORIES
                Peter Sells

                TEXT PROCESSING SYSTEMS
                Martha Palmer, Lynette Hirschman, and Deborah Dahl

TUESDAY AFTERNOON, 7 JUNE
1:45-5:00       Tutorial Sessions

                NATURAL LANGUAGE GENERATION
                David McDonald

                EFFICIENT PARSING ALGORITHMS
                Masaru Tomita

TUESDAY EVENING, 7 JUNE
7:00-9:00       Conference Registration and Reception
                Rathskeller, Norton Hall

REGISTRATION: Wednesday - Friday
8:00-5:00       Rathskeller, Norton Hall; until noon Friday

EXHIBITS: Wednesday - Friday
9:00-6:00       Rathskeller, Norton Hall

WEDNESDAY MORNING, 8 JUNE
9:00-9:15       Opening remarks and announcements

9:15-9:45       Adapting an English Morphological Analyzer for French
                Roy J. Byrd and Evelyne Tzoukermann

9:45-10:15      Sentence Fragments Regular Structures
                Marcia C. Linebarger, Deborah A. Dahl, Lynette Hirschman, and
                Rebecca J. Passonneau

10:45-11:10     Multi-Level Plurals and Distributivity
                Remko Scha and David Stallard

11:10-11:35     The Interpretation of Function Nouns
                Jos de Bruin

11:35-12:00     Quantifier Scoping in the SRI Core Language Engine
                Douglas B. Moran

WEDNESDAY AFTERNOON, 8 JUNE
1:30-1:55       A General Computational Treatment of Comparatives for Natural
                Language Question Answering
                Bruce W. Ballard

1:55-2:20       Parsing and Interpreting Comparatives
                Manny Rayner and Amelie Banks

2:20-2:45       Defining the Semantics of Verbal Modifiers in the Domain of
                Cooking Tasks
                Robin F. Karlin

2:45-3:10       The Interpretation of Tense and Aspect in English
                Mary Dalrymple

3:40-4:05       An Integrated Framework for Semantic and Pragmatic
                Interpretation
                Martha E. Pollack and Fernando C. N. Pereira

4:05-4:30       A Logic for Semantic Interpretation
                Eugene Charniak and Robert Goldman

4:30-4:55       Interpretation as Abduction
                Jerry R. Hobbs, Mark Stickel, Paul Martin, and Douglas Edwards

4:55-5:20       Project APRIL: A Progress Report
                Robin Haigh, Geoffrey Sampson, and Eric Atwell

7:00-9:00       Visit to Albright-Knox Art Gallery

THURSDAY MORNING, 9 JUNE
9:00-9:25       Discourse Deixis: Reference to Discourse Segments
                Bonnie Lynn Webber

9:25-9:50       Cues and Control in Expert-Client Dialogues
                Steve Whittaker and Phil Stenton

9:50-10:15      A Computational Theory of Perspective and Reference in Narrative
                Janyce M. Wiebe and William J. Rapaport

10:45-11:10     Parsing Japanese Honorifics in Unification-Based Grammar
                Hiroyuki Maeda, Susumu Kato, Kiyoshi Kogure and Hitoshi Iida

11:10-11:35     Aspects of Clause Politeness in Japanese: An Extended Inquiry
                Semantics Treatment
                John Bateman

11:35-12:00     Experiences with an On-Line Translating Dialogue System
                Seiji Miike, Koichi Hasebe, Harold Somers, and Shin-ya Amano

THURSDAY AFTERNOON, 9 JUNE
1:30-2:30       ANALOGY AND THE INTERPRETATION OF METAPHOR, Invited Talk
                Dedre Gentner

2:30-2:55       Planning Coherent Multisentential Text
                Eduard H. Hovy

3:25-3:50       A Practical Nonmonotonic Theory for Reasoning about Speech Acts
                Douglas Appelt and Kurt Konolige

3:50-4:15       Two Types of Planning in Language Generation
                Eduard H. Hovy

4:15-4:40       Assigning Intonational Features in Synthesized Spoken Directions
                James Raymond Davis and Julia Hirschberg

4:40-5:05       Atomization in Grammar Sharing
                Megumi Kameyama

7:00-8:00       RECEPTION
                Erie Community College, City Campus

8:00-10:00      BANQUET
                Erie Community College, City Campus
                Co-sponsored by Erie Community College and Barrister
                Information Systems Corporation
                Presidential Address: Alan Biermann

FRIDAY MORNING, 10 JUNE
9:00-9:25       Syntactic Approaches to Automatic Book Indexing
                Gerard Salton

9:25-9:50       Lexicon and Grammar in Probabilistic Tagging of Written English
                Andrew David Beale

9:50-10:15      Parsing vs. Text Processing in the Analysis of Dictionary
                Definitions
                Thomas Ahlswede and Martha Evens

10:45-11:10     Polynomial Learnability and Locality of Formal Grammars
                Naoki Abe

11:10-12:00     BUSINESS MEETING & ELECTIONS
                Nominations for ACL Offices for 1989
                President: Candy Sidner, BBN Laboratories
                Vice President: Jerry Hobbs, SRI International
                Secretary-Treasurer: Don Walker, Bellcore
                Executive Committee (1989-1991): Ralph Grishman, NYU
                Nominating Committee (1989-1991): Alan Biermann, Duke

FRIDAY AFTERNOON, 10 JUNE
1:30-1:55       Conditional Descriptions in Functional Unification Grammar
                Robert T. Kasper

1:55-2:20       Deductive Parsing with Multiple Levels of Representation
                Mark Johnson

2:20-2:45       Graph-Structured Stack and Natural Language Parsing
                Masaru Tomita

2:45-3:10       An Earley-Type Parsing Algorithm for Tree Adjoining Grammars
                Yves Schabes and Aravind K. Joshi

3:10-3:40       Break

3:40-4:05       A Definite Clause Version of Categorial Grammar
                Remo Pareschi

4:05-4:30       Combinatory Categorial Grammars: Generative Power and
                Relationship to Linear Context-Free Rewriting Systems
                David J. Weir and Aravind K. Joshi

4:30-4:55       Unification of Disjunctive Feature Descriptions
                Andreas Eisele and Jochen Doerre


                        PROGRAM COMMITTEE
                Jared Bernstein, SRI International
                Roy Byrd, IBM Watson Research Center
                Sandra Carberry, University of Delaware
                Eugene Charniak, Brown University
                Raymonde Guindon, MCC
                Lynette Hirschman, Unisys
                Jerry Hobbs, SRI International (Chair)
                Karen Jensen, IBM Watson Research Center
                Lauri Karttunen, Xerox PARC
                William Rounds, University of Michigan
                Ralph Weischedel, BBN Laboratories
                Robert Wilensky, UC Berkeley


                        TUTORIAL DESCRIPTIONS

CONTEMPORARY SYNTACTIC THEORIES
Peter Sells, University of California, Santa Cruz

This tutorial will examine some recent developments in theoretical syntax
centered in, or stemming from, work in Government-Binding Theory, Generalized
Phrase Structure Grammar, and Lexical-Functional Grammar.  I will try to
explain the linguistic motivations for the proposals I will discuss, and also
convergences among the theories.  Little in the way of background will be
assumed, beyond a rudimentary knowledge of phrase structure grammars and basic
transformational mechanisms (movement, deletion, etc.).

TEXT PROCESSING SYSTEMS
Martha Palmer, Lynette Hirschman, and Deborah Dahl, Paoli Research Center,
Unisys Defense Systems

This tutorial will cover issues in text processing, focusing on the current
state-of-the-art in text processing, the applications of text processing,
the architecture of a text-processing system (using the Unisys PUNDIT system
as an example), issues of portability and extensibility, and issues relating
to large-scale computational linguistics projects.  The section on system
architecture will describe a modular architecture, with components that handle
syntax, semantics and pragmatics, emphasizing the importance of segregating
domain-specific and domain-independent data.  We will then discuss, in the
context of recent experiences with the PUNDIT system, the issue of portability
across domains and the tools that support bringing up an application in a new
domain.  We will also look at the problems associated with building a large
natural language processing system: how to integrate people with a variety of
backgrounds (computer science, linguistics), how to manage and maintain a
large system, and how to do development in multiple domains simultaneously.
We will conclude with a survey of text-processing systems, comparing their
strengths and weaknesses as related to their particular goals.
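
To make the idea of segregating domain-independent machinery from
domain-specific data concrete, here is a deliberately tiny sketch in Python of
such a modular pipeline.  The class names and toy lexicon are invented for
illustration only; this is not the PUNDIT design, just the general shape of
the separation described above.

# Generic sketch of a modular text-processing pipeline in which the
# domain-specific data (a lexicon) is kept separate from the
# domain-independent processing components.  Names are invented; not PUNDIT.
class SyntaxModule:
    def parse(self, sentence):
        return sentence.split()                   # stand-in for a real parser

class SemanticsModule:
    def __init__(self, domain_lexicon):
        self.lexicon = domain_lexicon             # domain-specific data only
    def interpret(self, tokens):
        return [self.lexicon.get(t, ("unknown", t)) for t in tokens]

class PragmaticsModule:
    def resolve(self, propositions, context):
        context.extend(propositions)              # trivially accumulate context
        return propositions

class Pipeline:                                   # domain-independent control
    def __init__(self, lexicon):
        self.syntax = SyntaxModule()
        self.semantics = SemanticsModule(lexicon)
        self.pragmatics = PragmaticsModule()
        self.context = []
    def process(self, sentence):
        tokens = self.syntax.parse(sentence)
        props = self.semantics.interpret(tokens)
        return self.pragmatics.resolve(props, self.context)

# Porting the toy pipeline to a new domain means supplying a new lexicon,
# not new processing code.
maintenance_lexicon = {"pump": ("part", "pump"), "failed": ("event", "failure")}
print(Pipeline(maintenance_lexicon).process("pump failed"))

In this caricature, moving to a new domain is a matter of data, not code,
which is the point the tutorial makes about portability.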

NATURAL LANGUAGE GENERATION
David McDonald, Brattle Research Corporation

This tutorial will take participants through the workings of a complete, albeit
very simple, generation system from the underlying conceptual representation to
the surface morphology.  This mini-system, which uses a ``direct replacement''
algorithm, would be quite satisfactory for the demands of most present expert
systems; its weaknesses will be used to motivate the research that is going on
in generation today.  The major themes of that research will be surveyed,
concentrating on the rationales behind the adoption of specific frameworks,
such as systemic, unification, or tree adjoining grammar.  Illustrations will
be taken from current and historically important systems.  Emphasis will be on
generation as a planning and construction process which has markedly different
concerns and issues from language understanding, and on how this has led to
the approaches generation researchers are taking today.
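
For orientation, a ``direct replacement'' generator can be caricatured in a
few lines of Python: walk the conceptual structure and substitute a canned
word or template for each concept.  The message format and lexicon below are
invented; this is not McDonald's mini-system.

# Toy "direct replacement" generator: each concept in a (nested) conceptual
# structure is replaced by a canned word or template.  Illustration only.
LEXICON = {
    "TRANSFER": "{agent} gives {object} to {recipient}",
    "JOHN": "John",
    "MARY": "Mary",
    "BOOK": "a book",
}

def realize(concept):
    if isinstance(concept, str):                  # atomic concept -> word
        return LEXICON[concept]
    head, roles = concept                         # (head, {role: filler})
    return LEXICON[head].format(**{r: realize(f) for r, f in roles.items()})

message = ("TRANSFER", {"agent": "JOHN", "object": "BOOK", "recipient": "MARY"})
print(realize(message) + ".")                     # John gives a book to Mary.

Its limitations (fixed word order, no morphology, nothing to plan over) are
the kind of weaknesses the tutorial uses to motivate current generation
research.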

Efficient Parsing Algorithms
Masaru Tomita, Carnegie-Mellon University

Parsing efficiency is crucial when building practical natural language systems.
This is especially the case for interactive applications such as natural
language database access, interfaces to expert systems and interactive machine
translation.  This tutorial covers several efficient context-free parsing
algorithms, including chart parsing, Earley's algorithm, LR parsing and the
generalized LR algorithm.  Augmentation to the context-free parsing algorithms
is also discussed, to handle unification-based grammar formalisms such as
Lexical-Functional Grammar, Functional Unification Grammar, and Generalized
Phrase Structure Grammar.
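
As a concrete, if highly simplified, illustration of one of the algorithms
named above, the following is a minimal Earley recognizer in Python.  The
grammar and sentence are invented, and the sketch omits the augmentations for
unification-based formalisms that the tutorial covers.

# A minimal Earley recognizer.  Grammar: dict mapping each nonterminal to a
# list of right-hand sides (tuples of symbols); anything not in the dict is
# treated as a terminal.  An item is (lhs, rhs, dot, origin).
def earley_recognize(grammar, start, tokens):
    chart = [set() for _ in range(len(tokens) + 1)]
    chart[0] = {(start, rhs, 0, 0) for rhs in grammar[start]}
    for i in range(len(tokens) + 1):
        added = True
        while added:                              # close chart[i] under predict/complete
            added = False
            for item in list(chart[i]):
                lhs, rhs, dot, origin = item
                if dot < len(rhs) and rhs[dot] in grammar:
                    # Predictor: expand the nonterminal after the dot.
                    for prod in grammar[rhs[dot]]:
                        new = (rhs[dot], prod, 0, i)
                        if new not in chart[i]:
                            chart[i].add(new); added = True
                elif dot == len(rhs):
                    # Completer: advance items waiting on this nonterminal.
                    for (l2, r2, d2, o2) in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            new = (l2, r2, d2 + 1, o2)
                            if new not in chart[i]:
                                chart[i].add(new); added = True
        if i < len(tokens):
            # Scanner: match the next input token against terminals.
            for (lhs, rhs, dot, origin) in chart[i]:
                if dot < len(rhs) and rhs[dot] not in grammar and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(lhs == start and dot == len(rhs) and origin == 0
               for (lhs, rhs, dot, origin) in chart[len(tokens)])

# Tiny example grammar: S -> NP VP, NP -> 'they', VP -> 'fish'
G = {"S": [("NP", "VP")], "NP": [("they",)], "VP": [("fish",)]}
print(earley_recognize(G, "S", ["they", "fish"]))   # True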


The printed version of the program and registration information has
been mailed to ACL members.  Others are encouraged to use the attached form or
write for a program flier to the following address:
                Dr. D.E. Walker (ACL)
                Bellcore - MRE 2A379
                445 South Street - Box 1910
                Morristown, NJ 07960-1910, USA
or send net mail to walker@flash.bellcore.com or bellcore!walker@uunet.uu.net,
specifying "ACL Annual Meeting Information" on the subject line.

------------------------------

Date: Wed, 11 May 88 22:42:04 EDT
From: Rita.McCardell@NL.CS.CMU.EDU
Subject: CMU/CMT Conference Brochure

             Second International Conference on Theoretical and
      Methodological Issues in Machine Translation of Natural Languages


                          June 12 - 14, 1988

                             Hamburg Hall
                    Center for Machine Translation
                      Carnegie Mellon University
                    Pittsburgh, Pennsylvania 15213

*** Purpose ***

The field of Machine Translation (MT) has gradually regained its
importance as an academic discipline and an engineering application.  The
number of research teams in MT has grown significantly over the past five
years, and correspondingly the rate of progress, measured both in scientific
output and in technological innovation, has become increasingly steep.  The
requirements for information exchange in the field have grown accordingly.
The conference is aimed at fulfilling that requirement.


*** Topics of the Conference ***

The conference will cover a wide set of interrelated topics in machine
translation, including: parsing, generation, computational lexicons, multiple
approaches to translation (knowledge-based, interactive, pre- and post-editing,
etc.), theoretical and comparative analysis, case studies, computational
tools for the system developer or translator, and new algorithms and
architectures for natural language processing.



*** Center for Machine Translation ***

The Center for Machine Translation was established at Carnegie Mellon
University in July 1986. The center is dedicated to the development of a
new generation of machine translation systems with capabilities ranging
far beyond the current technology.  Current research initiatives include:
knowledge-based machine translation, knowledge representation and
acquisition, unification algorithms, multilingual parsing algorithms,
fluent text generation and development of computational lexicons,
grammars and knowledge bases.




*** Conference Program and Schedule ***

Saturday, June 11
Participants arrive in Pittsburgh

Sunday, June 12
--- General Session ---
8:30 am     Registration/Coffee & Donuts
8:50 am     Welcome

--- Session 1:  Issues in Analysis I ---
9:00 am     "Meaning Understanding in Machine Translation"
            Hirosato Nomura, Kyushu Institute of Technology (Japan)
9:30 am     "Coordination:  Some Problems and Solutions for Parsing
            English with an ATN"
            Lee Ann Schwartz, Pan American Health Organization (United States)
10:00 am    "A Method of Analyzing Japanese Speech Act Types"
            Kiyoshi Kogure, Hitoshi Iida, Kei Yoshimoto, Hiroyuki Maeda,
            Masako Kume, Susumu Kato, ATR (Japan)

10:30 am    COFFEE

--- Session 2:  Issues in Generation ---
11:00 am    "On Lexical Selection in MT Generation"
            Sergei Nirenburg, Rita McCardell, Eric Nyberg, Scott Huffman,
            Edward Kenschaft, Irene Nirenburg, Carnegie Mellon University
            (United States)
11:30 am    "Natural Language Generation using the Meaning Text Model"
            Richard Kittredge, A. Polguere, L. Jordanskaya
            University of Montreal (Canada)

--- Session 3:  EUROTRA Perspectives ---
Noon        "'Relaxed' Compositionality in MT"
            Doug Arnold, University of Essex (United Kingdom)
            Steven Krauwer, Louis des Tombe
            University of Utrecht (Netherlands)
            Louisa Sadler, University of Essex (United Kingdom)

12:30 pm    "CAT2 - Implementing a Formalism for Multi-Lingual MT"
            Randall Sharp, IAI (West Germany)

1:00 pm     LUNCH

--- Panel 1:  Real-Time Interpretive MT ---
2:30 pm     Masaru Tomita (Chair), Carnegie Mellon University (United States)
            Shin-ya Amano, Toshiba (Japan)
            Raj Reddy, Carnegie Mellon University (United States)
            Akira Kurematsu, ATR (Japan)

4:00 pm     DEMONSTRATIONS

5:30 pm     RECEPTION

6:30 pm     DINNER



Monday, June 13
8:30 am     Coffee & Donuts

--- Session 4:  Grammatical Issues ---
9:00 am     "Functional Descriptions as a Formalism for Linguistic Knowledge
            Representation in a Generation Oriented Approach"
            Miyo Otani, Nathalie Simonin, Cap Sogeti Innovation (France)
9:30 am     "Computational Complexity of Left-Associative Grammar"
            Roland Hausser, Universitat Munchen (West Germany)
10:00 am    "Reversible Logic Grammars for MT"
            Pierre Isabelle, Canadian Workplace Automation Research Center
            (Canada)

10:30 am    COFFEE

--- Session 5:  System Descriptions ---
11:00 am    "ETOC:  A MAHT System Using Approximate Text-Matching
            Based on Heuristic Rules"
            E. Sumita, Y. Tsutsumi, IBM (Japan)

11:30 am    "ATLAS:  A MT System by Interlingua"
            Hiroshi Uchida, Fujitsu (Japan)
Noon        "Translational Ambiguity Rephrased"
            Danit Ben-Ari, Mory Rimon, IBM (Israel)
            Daniel M. Berry, Technion (Israel)
12:30 pm    "A Principle-based Korean/Japanese MT System:  NARA"
            Hee-Sung Chung, E & I Research (Korea)

1:00 pm     LUNCH

--- Session 6:  Issues in Analysis II ---
2:30 pm     "A Comparative Study of Japanese and English Sublanguage Patterns"
            Virginia Teller, Hunter College SUNY (United States)
            Michiko Kosaka, Monmouth College (United States)
            Ralph Grishman, New York University (United States)
3:00 pm     "Noun Phrase Identification in Dialogue and its Application"
            Izuru Nogaito, Hitoshi Iida, ATR (Japan)

3:30 pm     COFFEE

--- Panel 2:  Paradigms for MT ---
4:00 pm     Jaime Carbonell (Chair), Carnegie Mellon University
            (United States)
            Harold Somers, UMIST (United Kingdom)
            Peter Brown, IBM (United States)
            Victor Raskin, Purdue University (United States)

6:00 pm     DINNER - Mt. Washington (**)


Tuesday, June 14
8:30 am     Coffee & Donuts

--- Session 7:  Methodological Considerations ---
9:00 am     "Methodological Considerations in the METAL Project"
            Winfield Bennett, University of Texas (United States)
9:30 am     "Application of a Natural Language Interface to a MT Problem"
            John S. White, Heidi M. Johnson, Yukiko Sekine
            Martin Marietta Corporation (United States)
            Gil C. Kim, Korean Advanced Institute of Science and Technology
            (Korea)
10:00 am    "Complex Procedures for MT Quality"
            Michael Zarechnak, Georgetown University (United States)

10:30 am    COFFEE

--- Panel 3:  Historical Perspectives ---
11:00 am    Makoto Nagao (Chair), Kyoto University (Japan)
            Christian Boitet, Universite de Grenoble (France)
            Rolf Stachowitz, Lockheed Artificial Intelligence Center
            (United States)

12:30 pm    LUNCH and CONCLUDING REMARKS




Requests for more information or applications contact:

MT CONFERENCE:
Center for Machine Translation
Carnegie Mellon University
Pittsburgh, PA 15213
(412) 268-6591
------------------------------
Return-Path: <@AI.AI.MIT.EDU:munnari!arp.anu.oz.au!daemon@uunet.UU.NET>
To: munnari!comp-ai-digest@uunet.UU.NET
From: munnari!TECMTYVM.BITNET.arp.anu.oz.au!PL233270@uunet.UU.NET
Newsgroups: comp.ai.digest
Subject: Conference - 1st Int. Symp. on AI
Date: 12 May 88 18:50:50 GMT
Sender: munnari!ucbvax.BERKELEY.EDU.arp.anu.oz.au!daemon@uunet.UU.NET
Organization: The Internet
Lines: 135


***********************************************************************

                 1ST INTERNATIONAL SYMPOSIUM ON
                    ARTIFICIAL INTELLIGENCE
                     MONTERREY, N.L. MEXICO

***********************************************************************

              THE INFORMATION RESEARCH CENTER OF
           THE INSTITUTO TECNOLOGICO Y DE ESTUDIOS
                  SUPERIORES DE MONTERREY


IS ORGANIZING THE FIRST INTERNATIONAL SYMPOSIUM ON ARTIFICIAL
INTELLIGENCE TO PROMOTE ARTIFICIAL INTELLIGENCE TECHNOLOGY AMONG
PROFESSIONALS AS AN APPROACH TO PROBLEM SOLVING AND THE USE OF THE
KNOWLEDGE-BASED PARADIGM IN SOLVING PROBLEMS IN INDUSTRY AND BUSINESS,
TO MAKE PROFESSIONALS AWARE OF THE ARTIFICIAL INTELLIGENCE
TECHNIQUES THAT EXIST AND TO DEMONSTRATE THEIR USE IN SOLVING REAL
PROBLEMS, AND TO SHOW CURRENT ARTIFICIAL INTELLIGENCE AND EXPERT
SYSTEMS APPLICATIONS IN MEXICO, THE USA, AND OTHER COUNTRIES.



Tentative Program:
------------------
October 24th, 25th, 1988
Knowledge-Based Systems Tutorial.

October 26th, 27th, 28th 1988
CONFERENCES AND HARDWARE & SOFTWARE EXPOSITION.



                           T O P I C S
                         ----------------


          * KNOWLEDGE-BASED SYSTEMS
          * KNOWLEDGE ACQUISITION
          * KNOWLEDGE REPRESENTATION
          * INFERENCE ENGINES
          * CERTAINTY FACTORS
          * VISION
          * ROBOTICS
          * EXPERT SYSTEMS APPLICATIONS IN INDUSTRY
          * NATURAL LANGUAGE PROCESSING
          * LEARNING
          * SPEECH RECOGNITION
          * ARTIFICIAL INTELLIGENCE IN MEXICO
          * FIFTH GENERATION COMPUTERS


Conference Participants
-----------------------
Speakers from the following Universities and Research Centers will
participate:
Stanford, Texas at Austin, MIT, Colorado, Waterloo, Alberta, Rice,
IBM Center and Microelectronics and Computer Technology Corp.


SOFTWARE AND HARDWARE EXPOSITION
--------------------------------
DURING THE SYMPOSIUM THERE WILL BE AN EXPOSITION OF COMPUTER HARDWARE
AND SOFTWARE, INCLUDING PRODUCTS AND SYSTEMS FROM COMPANIES AND
INSTITUTIONS IN MEXICO, THE USA, AND ABROAD.
WE ARE INVITING SOFTWARE AND HARDWARE BUSINESSES TO PARTICIPATE IN
THIS EXPOSITION WITH THEIR PRODUCTS.


SOCIAL EVENTS
-------------
In order to encourage an atmosphere of friendship and exchange among
participants, some social events will be held after the conferences.


Fees
----
TUTORIAL:
                  Before August 31, 1988  After August 31, 1988
PROFESSIONALS        $150 US DOLLARS            $170
STUDENTS             $75                        $85

SYMPOSIUM:

PROFESSIONALS        $100                       $120
STUDENTS             $50                        $60


ACCOMMODATIONS
-------------
CONTACT US FOR FURTHER INFORMATION ABOUT THIS.



******************************************************************

            1ST INTERNATIONAL SYMPOSIUM ON
               ARTIFICIAL INTELLIGENCE
                MONTERREY, N.L. MEXICO

WE WOULD LIKE TO INVITE ALL PROFESSORS AND RESEARCHERS TO
SEND PAPERS FOR THE FIRST INTERNATIONAL SYMPOSIUM ON ARTIFICIAL
INTELLIGENCE, TO BE HELD ON OCTOBER 24-28, 1988,
IN MONTERREY, MEXICO, AT THE INSTITUTO TECNOLOGICO Y DE ESTUDIOS
SUPERIORES DE MONTERREY (ITESM).

            C A L L     F O R    P A P E R S
           --------------------------------

TOPICS INCLUDE KNOWLEDGE REPRESENTATION, KNOWLEDGE ACQUISITION,
NATURAL LANGUAGE PROCESSING, KNOWLEDGE-BASED SYSTEMS, INFERENCE
ENGINES, MACHINE LEARNING, SPEECH RECOGNITION, PATTERN RECOGNITION,
VISION, AND THEOREM PROVING.

SEND SUMMARIES OF FOUR TO FIVE PAGES MAXIMUM (FOUR COPIES) AND A RESUME TO:
I T E S M, CENTRO DE INVESTIGACION EN INFORMATICA,
DAVID GARZA SALAZAR, SUCURSAL DE CORREOS J, 64849 MONTERREY, N.L.,
MEXICO.  (83) 59 57 47, (83) 59 59 43, (83) 59 57 50.

Deadline for submissions: August 31, 1988


BITNET ADDRESS: SIIACII AT TECMTYVM
TELEX:  0382975 ITEMSE
TELEFAX: (83) 58 59 31
APPLELINK ADDRESS: IT0023
P.S. FOR ANY INFORMATION, FEEL FREE TO CONTACT US; WE WOULD LIKE TO SEND YOU
MORE INFORMATION ABOUT OUR SYMPOSIUM.

------------------------------
Return-Path: <@AI.AI.MIT.EDU:ailist-request@ai.ai.mit.edu>
Date: 12 May 88 22:58:41 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Organization: Cognitive Science, Princeton University
Subject: Psychophysics: BBS Call for Commentators
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu


The following is the abstract of a target article to appear in
Behavioral and Brain Sciences (BBS).  All BBS articles are accompanied
by "open peer commentary" from across disciplines and around the
world. For information about serving as a commentator on this article,
send email to harnad@mind.princeton.edu or write to BBS, 20 Nassau
Street, #240, Princeton NJ 08540 [tel: 609-921-7771]. Specialists in
the following areas are encouraged to contribute: psychophysics,
sensory physiology, vision, audition, visual modeling, scaling,
philosophy of perception

                        Reconciling Fechner and Stevens:
                     Toward a Unified Psychophysical Theory

                        Lester E. Krueger
                        Human Performance Laboratory
                        Ohio State University
                        Columbus OH 43210-1285
                ts0340@ohstmvsa.ircc.ohio-state.edu

------------------------------

End of AIList Digest
********************

∂24-May-88  0107	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #3   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 May 88  01:07:05 PDT
Received: from MARVIN.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 23 MAY 88  23:42:10 EDT
Date: Monday, 23 May 1988, 23:37-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Sender: nick@MIT-ARTHUR
Reply-to: AIList@AI.AI.MIT.EDU
Subject: AIList Digest   V7 #3
To: ailist-outgoing@mc



AIList Digest            Tuesday, 24 May 1988       Volume 7 : Issue 3

Today's Topics:
  
	More Conferences, Workshops, and Seminars

----------------------------------------------------------------------

Date: 13 May 88 02:57:07 GMT
From: kersch@gmu90x.gmu.edu  (Larry Kerschberg)
Subject: Proceedings: 2nd Intl. Conf. on Expert Database Systems


Second International Conference on Expert Database Systems

A limited number of the Proceedings from the
Second International Conference on Expert Database Systems are
available at a cost of $40.  Tutorial notes are being sold for $15 each.

Please add a $5 handling charge for either the proceedings or any
combination of tutorials up to 4; add $2 handling fee for each
additional tutorial, $5 for each additional copy of the proceedings.

Tutorial Note Titles

_____   I   - Logic and Databases
        by Dr. Carlo Zaniolo of MCC, Austin, Texas

_____   II  - Distributed Problem Solving in Knowledge/Data Environments
        by Professor Victor Lesser of the University of Massachusetts at Amherst

_____   III - Knowledge Representation and Data Semantics
        by Professor John Mylopoulos of the University of Toronto

_____   IV  - Acquisition of Knowledge from Data
        by Professor Gio Wiederhold of Stanford University

Mail to :       EDS Conference
                Office of Conferences and Community Services
                George Mason University
                4400 University Drive
                Fairfax, VA 22030, USA

Table of Contents

Session 1:      Object-Oriented Systems

Chairman:  Jacob Stein, Servio Logic, USA

Abstract Objects in an Object-Oriented Data Model
J. Zhu and D. Maier, Oregon Graduate Center, USA

The Design of KIVIEW:  An Object-Oriented Browser
A. Motro, Univ. of Southern California, USA; A. D'Atri and L. Tarantino,
Univ. of Rome, Italy

Towards a Unified View of Design Data and Knowledge Representation
B. Mitschang, Universitat Kaiserslautern, FRG

Session 2:      Constraint Management

Chairman:  Herve Gallaire, ECRC, FRG

Implementing Constraints in a Knowledge-Base
J.A. Wald, Schlumberger-Doll Research, USA

Update-Oriented Database Structures
L. Tucherman and A.L. Furtado, IBM Rio Scientific Center,  Brazil

Distribution Design of Integrity Constraints
X. Qian, Stanford University, USA

Session 3:      Panel Session:  Constraint-Based Systems:  Knowledge
about Data
Chairman:  Matthew Morgenstern, SRI International, USA

Panelists:  A. Borgida, Rutgers University, C. Lassez, IBM T.J. Watson
Research, D. Maier, Oregon Graduate Center, and G. Wiederhold,
Stanford University

Session 4:      Expert Database System Architectures

Chairmen:  Robert Meersman, Tilburg University, Netherlands  and
Sushil Jajodia, NSF, USA

BERMUDA -- An Architectural Perspective on Interfacing Prolog to a
Database Machine
Y.E. Ioannidis, J. Chen, M.A. Friedman and M.M. Tsangaris, Univ. of
Wisconsin-Madison, USA

A Look at Loosely-Coupled Prolog/Database Systems
B. Napheys and D. Herkimer, Martin Marietta, USA

Combining Top Down and Bottom Up Computation in Knowledge Based
Systems
M. Nussbaum, Swiss Federal Institute of Technology (ETH),Switzerland

Session 5A:     Knowledge/Data System Architectures

Chairmen:  Roger King, Univ. of Colorado  and Robert Abarbanel, Apple
Computer, Inc.

A Distributed Knowledge Model for Multiple Intelligent Agents
Y.P. Li, Jet Propulsion Laboratory, USA

The Relational Production Language:  A Production Language for
Relational Databases
L.M.L. Delcambre and J.N. Etheredge, Univ. of Southwestern Louisiana,
USA

A Transaction Oriented Mechanism to Control Processing in a Knowledge
Base Management System
L. Raschid, Univ. of Maryland, USA  and S.Y.W. Su, Univ. of Florida,
USA

Session 5B:     Recursive Query Processing

Chairman:  Tim H. Merrett, McGill University

Transitive Closure of Transitively Closed Relations
P. Valduriez and S. Khoshafian, MCC, USA

Transforming Nonlinear Recursion to Linear Recursion
Y.E. Ioannidis, Univ. of Wisconsin-Madison  and E. Wong, UC-Berkeley,
USA

A Compressed Transitive Closure Technique for Efficient Fixed-Point
Query Processing
H.V. Jagadish, AT&T Bell Laboratories, USA

Session 6A:     Learning and Adaptation in Expert Databases

Chairmen:  Alex Borgida, Rutgers University  and Don Potter, Univ. of
Georgia

An Automatic Improvement Processor for an Information Retrieval System
K.P. Brunner, Merit Technology, Inc.  and R.R. Korfhage, Univ. of
Pittsburgh, USA

Supporting Object Flavor Evolution through Learning in an
Object-Oriented Database System
Q. Li and D. McLeod, Univ. of Southern California, USA

Implicit Representation of Extensional Answers
C.D. Shum and R. Muntz, UCLA, USA

Session 6B:     Knowledge Management in Deductive Databases

Chairman:  Sham Navathe, Univ. of Florida

Deep Compilation of Large Rule Bases
T.K. Sellis and N. Roussopoulos, Univ. of Maryland, USA

Handling Knowledge by its Representative
C. Sakama and H. Itoh, ICOT, Japan

Integrity Constraint Checking in Deductive Databases using a Rule/Goal
Graph
B. Martens and M. Bruynooghe, Katholieke Universiteit Leuven, Belgium

Session 7:      Panel Session:  Knowledge Distribution and
Interoperability

Chairman:  Michael Brodie, GTE Labs, USA

Panelists:      Danny Bobrow,  Xerox PARC, Carl Hewitt, MIT, Victor
Lesser, University of Massachusetts, Amherst, Stuart Madnick, MIT,
Dennis Tsichritzis, University of Geneva, Switzerland


Session 8:      Intelligent Database Interfaces

Chairman: Larry Reeker, BDM Corporation

Musing in an Expert Database
S. Fertig and D. Gelernter, Yale University, USA

Cooperative Answering:  A Methodology to Provide Intelligent Access to
Databases
F. Cuppens and R. Demolombe, ONERA-CERT, France

G+:  Recursive Queries without Recursion
I.F. Cruz, A.O. Mendelzon and P.T. Wood, Univ. of Toronto, Canada

Session 9:      Semantic Query Optimization

Chairman:  Matthias Jarke, Univ. of Passau, FRG

Automatic Rule Derivation for Semantic Query Optimization
M.D. Seigel, Boston University, USA

A Metainterpreter to Semantically Optimize Queries in Deductive
Databases
J. Lobo and J. Minker, Univ. of Maryland, USA

From QSQ towards QoSaQ:  Global Optimization of Recursive Queries
L. Vieille, ECRC, FRG

Session 10:     Panel Session:  Knowledge Management

Chairman:  Adrian Walker, IBM T.J. Watson Research Center,  USA

Panelists:  R. Kowalski, Imperial College, London, D. Lenat, MCC,
Austin, Texas, E. Soloway, Yale University  and M. Stonebraker, UC -
Berkeley


===================================================================
EDS'88 Tutorial Speaker Bios and Note Contents
Tutorial I
Logic and Databases
Instructor:  Dr. Carlo Zaniolo, MCC, Austin, Texas

Dr. Zaniolo heads a group at MCC performing research on deductive
databases and logic programming.  He has held positions at Sperry
Research and Bell Laboratories.  He is the author of over 40 technical
papers, a member of numerous Program Committees, and edited the
December 1987 Data Engineering special issue on Databases and Logic.

Course Description:  There is a growing demand for supporting
knowledge-based applications by means of Knowledge Management Systems;
these will have to combine the inference mechanisms of Logic with the
efficient and secure management of data provided by Database
Management Systems (DBMS).  The major topics are:  Logic and relational
query languages; Semantics of Horn Clauses; Prolog and DBMSs; Coupling
Prolog with a DBMS; Making Prolog a database language; Integrating
Logic and Database Systems:  Sets, Negation and Updates; Choosing an
Execution Model; Compilation: magic sets to support recursive
predicates; Optimization and Safety; Overview of selected R&D
projects.
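
For readers new to the area, the canonical recursive predicate is transitive
closure.  The fragment below is only a generic bottom-up (semi-naive)
evaluation sketch in Python of the rules path(X,Y) :- edge(X,Y) and
path(X,Y) :- path(X,Z), edge(Z,Y); it illustrates the fixed-point idea, not
magic sets or any system discussed in the tutorial.

# Bottom-up, semi-naive evaluation of a recursive query (transitive closure).
# Generic illustration only; not code from any system or tutorial above.
def transitive_closure(edges):
    path = set(edges)          # exit rule: every edge is a path
    delta = set(edges)         # facts derived in the previous round
    while delta:
        new = {(x, w) for (x, y) in delta for (z, w) in edges if y == z}
        delta = new - path     # only genuinely new facts feed the next round
        path |= delta
    return path

print(sorted(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")})))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]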

Tutorial II

Distributed Problem Solving in Knowledge/Data Environments
Instructor:  Prof. Victor Lesser, University of Massachusetts, Amherst


Dr. Lesser is Professor of Computer and Information Science at UMASS,
where he heads research groups in Distributed Artificial Intelligence
and Intelligent User Interfaces.  Prior to joining UMASS in 1977, he
was on the faculty of Carnegie-Mellon University, where he was a
Principal in the development of the HEARSAY Speech Understanding
System and responsible for the system architecture.

Course Description:  This tutorial will explore the major concepts and
systems for cooperative knowledge-based problem solving.  The major
topics include:  Connectionist, Actor and Cooperating ES paradigms;
Conceptual Issues including:  examples of distributed search,
interpretation, planning and cooperation, global coherence, dealing
with inconsistency and incompleteness, sharing world views, and design
rules for a cooperating ES; System Architectures for satisficing,
negotiation, tolerance of inconsistency in problem-solving,
organizational structuring, integration of local and network control,
and expectation-driven communication; Discussion of working systems
including Contract Nets, Partial Global Planning, AGORA MACE, ABE,
DPS, and MINDS.
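
As a minimal illustration of the negotiation style behind Contract Nets
(loosely in the spirit of Smith's protocol, with invented task and bid
structures, and not code from any of the systems listed above): a manager
announces a task, contractors bid or decline, and the manager awards the task
to the best bid.

# A toy contract-net round.  Requires Python 3.8+ (walrus operator).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    required_skill: str

class Contractor:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills                     # skill -> estimated cost

    def bid(self, task):
        # Decline (return None) if the contractor lacks the required skill.
        cost = self.skills.get(task.required_skill)
        return None if cost is None else (cost, self)

class Manager:
    def __init__(self, contractors):
        self.contractors = contractors

    def allocate(self, task):
        # Announce the task, collect bids, award to the cheapest bidder.
        bids = [b for c in self.contractors if (b := c.bid(task)) is not None]
        if not bids:
            return None
        cost, winner = min(bids, key=lambda b: b[0])
        return winner.name, cost

workers = [Contractor("interp-1", {"interpretation": 3.0}),
           Contractor("planner-1", {"planning": 2.0, "interpretation": 5.0})]
print(Manager(workers).allocate(Task("analyse-signal", "interpretation")))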

Tutorial III

Knowledge Representation and Data Semantics
Instructor:  Prof. John Mylopoulos, University of Toronto, Canada

Dr. John Mylopoulos is Professor of Computer Science at the University
of Toronto and research fellow of the Canadian Institute for Advanced
Research. His research interests include knowledge representation and
its applications to Databases and Software Engineering.  Dr.
Mylopoulos has edited three books on the general topic of AI and
Databases. He received his Ph.D degree from Princeton University.

Course Description:  Knowledge Representation including history, basic
paradigms such as semantic nets, logic-based representations,
productions, frames, role of uncertainty, and inference mechanisms,
examples such as KL-ONE and OMEGA; Semantic Data Models including
historical models such as Abrial's Binary Model, Entity/Relationship,
RM/T and SDM, detailed study of ADAPLEX, TAXIS, and GALILEO,
implementation techniques; Comparison of SDMs to Object-Oriented model
such as POSTGRES and GEM as well as Deductive Databases.

Tutorial IV

Acquisition of Knowledge from Data
Instructor:  Prof. Gio Wiederhold, Stanford University, California

Dr. Gio Wiederhold is Associate Professor of Medicine and Computer
Science (Research) at Stanford University.  His research involves
knowledge-based approaches to medicine, design, and planning.  He is
the Editor-in-Chief of ACM's Transactions on Database Systems  and
associate editor of M.D. Computing   and IEEE Expert  magazine.
Wiederhold has over 130 publications, including a widely used textbook
on Database Design.  In 1987, McGraw-Hill published his new book, File
Organization for Database Design.

Course Description:  The architecture of an operational system, RX, is
presented which uses knowledge-based techniques to extract new
knowledge from a large clinical database.  RX exploits both
frame-based knowledge and rules, as well as a database.  Frames are
used to store deep and interconnected knowledge about disease states
and medical actions.   Definitional and causal knowledge is
represented by inter-connections between frames that go across the
hierarchies, sideways as well as up and down, so that the aggregate
knowledge is represented by a network.  Rules select the appropriate
statistical methods used to reduce the volume of data into
information.  The database contains observations on rheumatic
diseases, collected over a dozen years.
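
As a purely generic illustration of the frame-plus-rules organisation
described above (invented names and thresholds; emphatically not the RX
implementation), a frame can be modelled as a bundle of slots with cross-links
to other frames, while a simple rule selects a statistical method from gross
properties of the data:

# Generic frames with cross-links, plus a toy method-selection rule.
class Frame:
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)
        self.links = {}                          # relation -> list of frames
    def link(self, relation, other):
        self.links.setdefault(relation, []).append(other)

drug_a = Frame("drug-A", kind="medical-action")
lab_x = Frame("lab-value-X", kind="disease-state-variable")
drug_a.link("suspected-effect-on", lab_x)        # causal cross-hierarchy link

def select_method(n_patients, repeated_measures):
    # Toy rule: choose an analysis method from gross features of the data set.
    if repeated_measures:
        return "longitudinal regression"
    return "t-test" if n_patients < 30 else "cross-sectional regression"

print(select_method(n_patients=250, repeated_measures=True))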

------------------------------

Date: 13 May 88 23:27
From: Andreas Huber <huber%ifi.unizh.ch@RELAY.CS.NET>
Subject: Conference Announcement


INTERNATIONAL CONFERENCE ANNOUNCEMENT


****************************************************************************
*                                                                          *
*           C O M M E R C I A L   E X P E R T   S Y S T E M S              *
*           I N   B A N K I N G   A N D   F I N A N C E .......            *
*                                                                          *
*           ..... A N D   H O W   T O   M A K E   T H E M   R U N          *
*                                                                          *
****************************************************************************


Lugano (Switzerland), Palazzo dei Congressi, June 6-7, 1988


Supporting Organizations:

Associazione Bancaria Italiana
Associazione Bancaria Ticinese
Associazione Ticinese Elaborazione Dati
European Center for Insurance Education and Training
European Coordinating Committee for Artificial Intelligence
Institute for Swiss Banking, University of Zurich
Instituto Dalle Molle di Studi sulla Intelligenza Artificiale
Swiss Bankers Association


What it is all about...

Knowledge based systems have come of age. Today a majority of executives and
professionals agree that expert systems represent a key technology for the
economic survival of a company. This is particularly true in banking and
finance, where daily changes in regulations and new financial instruments
have given rise to a complexity which endangers the efficiency of the
organization. With higher-quality services and the rapidly increasing
application of intelligent tools, one aims to improve performance and
competitiveness.

Prototypes have proven the feasibility and the potential of knowledge based
systems. With growing experience, however, serious problems have also become
evident, such as deficiencies in robustness or new dimensions of data and
"knowledge" security. Nevertheless, the rapid realization of their commercial
application is not fundamentally in question.

The international conference reviews the current development and documents
the state of present commercial applications. Special emphasis is given to the
critical transition between the working prototype and the successful
operational system. Case studies illustrate the essential issues and outline
specific details. A topic-oriented exhibition of leading software and hardware
houses complements and completes the event.


At this symposium you will hear answers to the questions:

What can we expect from the technology both now and in the future?

Which approaches will solve our current problems?

When will we get the solutions we can afford?


The conference is intended for:

- Executives and EDP professionals in banking, finance and insurance

- Officers in strategic planning

- EDP and communication experts

- Managers and application specialists of hard- and software suppliers



P R O G R A M


MONDAY, JUNE 6, 1988

INTRODUCTION
9.00 - 10.15
D. Shpilberg
Expert Systems in the services sector - facts and fictions

PROBLEMS AND SOLUTIONS
10.45 -12.30
M. De Marco
Towards operational systems - Problems and challenges

H. Schorr
IBM perspective on AI; essentials for the future

COMMERCIAL APPLICATIONS: THE USER PERSPECTIVE
14.00 - 15.45
Case studies in parallel sessions

APPLICATIONS CONTINUED
16.15 - 17.00
Case studies in parallel sessions

FUTURE ROLE OF TECHNOLOGY IN THE SERVICES INDUSTRY
17.10 - 18.00
M. Janssen
Expert Systems: Survival strategy for the financial corporation?

EXHIBITION
09.00 - 19.00


TUESDAY, JUNE 7, 1988

PROJECT MANAGEMENT
09.00 - 10.00
B. Bachmann
Expert Systems - Cold coffee or a new challenge?

COMMERCIAL APPLICATIONS: TECHNOLOGY IN PERSPECTIVE
10.30 - 12.15
Case studies in parallel sessions

TECHNICAL TRENDS
13.45 - 14.30
F. Gardin
The future role of advanced AI soft- and hardware tools and techniques

SUMMING UP
14.30 - 16.00
J. Campbell

Hearing

R. Pfeifer
Summary


Program committee:

G. Anastaze, IBM, Geneva, CH
T. Bernold, Gottlieb Duttweiler Institute (GDI), CH
M. De Marco, Universita Cattolica di Milano, I
A. Huber, University of Zurich, CH
S. Marioni, Banca Solari & Blum S.A., Lugano, CH
R. Pfeifer, University of Zurich, CH


Advisory Board:

B. Bachmann, Union Bank of Switzerland, Zurich, CH
K. Bauknecht, University of Zurich, CH
J. Campbell, University College London, GB
E. Kilgus, University of Zurich, CH
B. Rees, Digital Equipment Corporation (Europe), Geneva, CH
D. Shpilberg, Coopers & Lybrand, New York, USA


Organizers:

Gottlieb Duttweiler Institute, GDI, Rueschlikon

Swiss Group for Artificial Intelligence and Cognitive Science, SGAICO

University of Zurich, Department of Computer Science

Gottlieb Duttweiler Institute
The "Green Meadow" Foundation
CH-8803 Rueschlikon/Zurich
Phone: (41) 1 724 00 20
Telex: 826 510 gdi ch
Fax: (41) 1 461 37 39

University of Zurich
Department of Computer Science
Winterthurerstrasse 190, CH-8057 Zurich
Phone: (41) 1 257 43 23
Fax: (41) 1 257 40 04


Administration:

Anne-Marie Brennwald (GDI)
Phone: (41) 1 724 00 20
Telex: 826 510 gdi ch


For further information please contact:

Brigitta Scherrer (GDI)
Phone: (41) 1 461 37 16
Fax: (41) 1 461 37 39

Prof. Rolf Pfeifer
University of Zurich
Phone: (41) 1 257 43 23
Fax: (41) 1 257 40 04
Email: pfeifer@ifi.unizh.ch

Participation fee:

Registration before May 6: one day SFr. 550.--, two days SFr. 950.-- *
Registration after May 6:  one day SFr. 700.--, two days SFr. 1150.-- *

Documentation, lunches and refreshments are included. Please remit the
fee only upon receipt of invoice by GDI.

*Reduction of SFr. 50.-- per day for SI/SGAICO members and
  member organisations of FSI/SVI


A. Huber

------------------------------

Date: 16 May 88 00:08:43 GMT
From: munnari!goanna.oz.au!isaac@uunet.UU.NET (Isaac Balbin)
Subject: Call for Papers


-----------------------------------------------------------------------------

                                Call for Papers

 _____________________________________________________________________________

                 International Computer Science Conference '88

                        Hong Kong, December 19-21, 1988

                Artificial Intelligence: Theory and Applications
 _____________________________________________________________________________

                                  Sponsored by

              THE COMPUTER SOCIETY OF THE IEEE, HONG KONG CHAPTER
 _____________________________________________________________________________

International Computer Science Conference '88 is to be the first international
conference in Hong Kong devoted to computer science. The purpose of the
conference is to bring together people from academia and industry of the East
and of the West, who are interested in problems related to computer science.
The main focus of this conference will be on the Theory and Applications of
Artificial Intelligence. Our expectation is that this conference will provide a
forum for the sharing of research advances and practical experiences among
those working in computer science.

Topics of interest include, but are not limited to:

     AI Architectures       Expert Systems        Knowledge Engineering
     Logic Programming      Machine Learning      Natural Languages
     Neural Networks        Pattern Recognition   Robotics
     CAD/CAM                Chinese Computing     Distributed Systems
     Information Systems    Office Automation     Software Engineering

Paper Submissions

Submit four copies of the paper by June 15, 1988 to either of the Program
Co-Chairmen:

     Dr. Jean-Louis Lassez
     Room H1-A12
     IBM Thomas J. Watson Research Center
     P.O. Box 218
     Yorktown Heights, NY 10598
     U.S.A.
     e-mail: JLL@ibm.com

     Dr. Francis Y.L. Chin
     Centre of Computer Studies and Applications
     University of Hong Kong
     Pokfulam Road
     Hong Kong
     (For papers from the Pan-Pacific region only)
     e-mail: hkucs!chin@uunet.uu.net

The first page of the paper should contain the author's name, affiliation,
address, electronic address if available, phone number, 100 word abstract, and
key words or phrases. Papers should be no longer than 5000 words (about 20
double-spaced pages). A submission letter that contains a commitment to present
the paper at the conference if accepted should accompany the paper.

Tutorials

The day after the conference will be devoted to tutorials. Proposals for
tutorials on Artificial Intelligence topics, especially advanced topics, are
welcome. Send proposals by June 15, 1988 to the Program Co-Chairmen.

Conference Timetable and Information

     Papers due: June 15, 1988
     Tutorial proposals due: June 15, 1988
     Acceptance letters sent: September 1, 1988
     Camera-ready copy due: October 1, 1988

International Program Committee:

   J-P Adam (Paris Scientific Center)      T.Y. Chen (Melbourne & HKU)
   W.F. Clocksin (Cambridge)               A. Despain (Berkeley)
   J. Gallier (Pennsylvania)               Qingshi Gao (Academia Sinica)
   M. Georgeff (SRI)                       D. Hanson (Princeton)
   R. Hasegawa (ICOT)                      R.C.T. Lee (National Tsin Hua)
   M. Maher (IBM)                          Z. Manna (Stanford & Weizmann)
   F. Mizoguchi (Science U. of Tokyo)      U. Montanari (Pisa)
   K. Mukai (ICOT)                         H.N. Phien (AIT)
   P.C. Poole (Melbourne)                  C.K. Yuen (Singapore)
   D.S.L. Tung (CUHK)

Organizing Committee Chairman:
   Dr. K.W. Ng
   Department of Computer Science
   The Chinese University of Hong Kong
   Shatin, N.T.
   Hong Kong

Local Arrangements Chairman:
   Dr. K.P. Chow
   Centre of Computer Studies and Applications
   University of Hong Kong
   Pokfulam Road
   Hong Kong
   e-mail: hkucs!icsc@uunet.uu.net

Publicity Chairman:
   Mr. Wanbil Lee
   Department of Computer Studies
   City Polytechnic of Hong Kong
   Argyle Center, Kowloon
   Hong Kong

In Cooperation With:

     Center for Computing Studies and Services, Hong Kong Baptist College
     Centre of Computer Studies and Applications, University of Hong Kong
     Department of Computer Science, The Chinese University of Hong Kong
     Department of Computer Studies, City Polytechnic of Hong Kong
     Department of Computing Studies, Hong Kong Polytechnic

------------------------------

Date: 16 May 88 01:45:28 GMT
From: ut-sally!kumar@uunet.UU.NET (Vipin Kumar)
Subject: Workshop: Last Call


               Workshop on Parallel Algorithms
        for Machine Intelligence and Pattern Recognition

   Sponsored by the American Association of Artificial Intelligence
                       Aug.20 and 21, 1988.
                       St. Paul, Minnesota


                       Organizing Committee:

                       Prof. Laveen N. Kanal (kanal@mimsy.umd.edu)
                       Dept. of Computer Science
                       University of Maryland College Park, Md., 20742

                       Dr. P.S. Gopalakrishnan (PSG@ibm.com)
                       T.J. Watson Research Center, 39-238
                       P.O.Box 218
                       Yorktown Heights, N.Y. 10598

                       Prof. Vipin Kumar (kumar@sally.utexas.edu)
                       Computer Science Dept.
                       Univ. of Texas at Austin
                       Austin, Texas, 78712.


There is much interest in AI in parallel algorithms for exploring
higher level knowledge representations and structural relationships. Parallel
algorithms for search, combinatorial optimization, constraint satisfaction,
parallel production systems, and pattern and graph matching are expressions of
this interest. There is also considerable interest and ongoing work on
parallel algorithms for lower level analysis of data, in
particular, in vision, speech and signal processing, often based on stochastic
models. For practical applications of machine intelligence and pattern
recognition the question arises as to the extent to which parallelism
for high and low level analysis can be achieved in an integrated manner.

The workshop will aim at bringing together individuals working in each of
the above two aspects of parallel algorithms to consider the basic nature
of the procedures involved and the degree to which parallel
approaches to high and low level operations in machine intelligence,
pattern recognition, and signal processing can be integrated.

Contributors interested in participating in this workshop are requested to
submit a 1000-2000 word extended abstract of their work on parallel algorithms
in areas of Machine Intelligence and Pattern Recognition. Areas of interest
include Search Problems in A.I. and Pattern recognition, high and low level
processing in Computer Vision, Speech Recognition, Optimization Problems
in A.I., Constraint Satisfaction, and Pattern and Graph matching.
The number of participants at the workshop must be limited due to limited
room size.

Abstracts should be sent as soon as possible and must reach the organizers
no later than June 1, 1988.  Abstracts may be sent by electronic mail to all
the organizers at the e-mail addresses shown.  Hard copy versions of each
abstract should also be sent to one of the organizers.
Responses to all who submit abstracts will be sent by July 1, 1988.

------------------------------

Date: Mon, 16 May 88 11:55:08 PDT
From: CHIN%PLU@ames-io.ARPA
Subject: seminar announcement

***************************************************************************
              National Aeronautics and Space Administration
                         Ames Research Center

                        SEMINAR ANNOUNCEMENT


SPEAKER:   Balas Natarajan
           Carnegie-Mellon University

TOPIC:     "Towards Learning Algorithms"

ABSTRACT:

This talk concerns two paradigms for "learning algorithms".

First, we consider learning in the sense of acquiring new information -
specifically, extracting a good approximation to an unknown set from examples
for the set.  After formalising the problem, we give a theorem identifying
conditions necessary and sufficient for such learning to be efficient.  We also
present smooth extensions of these results to functions (as opposed to sets) on
discrete and continuous domains.

Second, we consider learning in the sense of improving computational
efficiency - specifically, constructing good heuristics for a problem
from solved examples.  After formalising the problem, we give a theorem
identifying conditions sufficient to allow the construction of provably
good heuristics for a collection of problems.
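
The abstract does not state the theorems themselves.  For orientation only,
the first paradigm is usually formalised along the lines of the
distribution-free ("probably approximately correct") model, stated roughly as
follows; this is a standard textbook-style paraphrase, not the speaker's
result.

% Standard PAC-style learnability (background paraphrase only).
A concept class $C$ over a domain $X$ is learnable if there exist an
algorithm $A$ and a polynomial $p$ such that for every target $c \in C$,
every probability distribution $D$ on $X$, and every
$\epsilon, \delta \in (0,1)$: given $m \ge p(1/\epsilon, 1/\delta)$
labelled examples $(x_i, c(x_i))$ with the $x_i$ drawn i.i.d.\ from $D$,
$A$ outputs a hypothesis $h$ such that
\[
   \Pr\bigl[\, D(\{\, x : h(x) \neq c(x) \,\}) \le \epsilon \,\bigr]
   \;\ge\; 1 - \delta .
\]
Efficient learning additionally requires $A$ to run in time polynomial in
$1/\epsilon$, $1/\delta$, and the size of the representation of $c$.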



BIOGRAPHY:

Balas K. Natarajan is a Research Scientist at the Robotics
Institute, Carnegie Mellon University. He received his Ph.D. in
computer science from Cornell University in 1986. Currently his
major research interests include both formal and applied
methods in Machine Learning and Robotics.


DATE: Monday, May 23, 1988     TIME: 3:00 - 4:00 pm     BLDG. 244, Room 103


POINT OF CONTACT: Marlene Chin   PHONE NUMBER: (415) 694-6525
     NET ADDRESS: chin%plu@ames-io.arpa

***************************************************************************

VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.
***************************************************************************

------------------------------

Date: 17 May 88 05:02:34 GMT
From: pasteur!agate!eos!millard@ames.arpa  (Millard Edgerton)
Subject: NN VIDEO TAPE COURSE

_________________________________________________
           NEURAL NETWORK VIDEOTAPE COURSE
_________________________________________________

The UCSC Extension 25 hour course, "Neural Nets --
Level One",  recently completed its 2nd semester.
A revised version will soon be videotaped in a
studio and released for distribution around Aug 88.

It will be a no-frills, 16 hour compressed version
of the original course, and includes 4 manuals, with
problem sets, answers, and literature.

NOTE --- A LITTLE PUBLICIZED OFFERING

Although the lecture series will be priced in Sept.
at around 1,800 dollars,  the pre-publication price
(until 1-JUN-88) is only 495 dollars !!  This
compares favorably to other N-N video tutorials
going for 5K to 6K dollars.  Call USA 408-738-2888
ext 4677 for more details.

I am posting this for the benefit of Mark Jurik, the
course instructor.

------------------------------

Date: Mon, 23 May 88 14:48:09 PDT
From: Ken Kahn <Kahn.pa@Xerox.COM>
Subject: Workshop on Open Systems at Xerox PARC

Xerox PARC will be hosting an AAAI-sponsored workshop on Open Systems from
June 1 through June 3.  The morning sessions will be in the PARC auditorium
and open to the public.

Open systems pose challenging problems at the level of software design and the
description of their behavior. Since they are often incrementally modified by
introducing new functionality and improving existing modules, they create
problems of coordination, commonality, and trust barriers. So far, insights,
studies, and algorithms for their design have appeared in a rather disjoint
fashion, with many researchers unaware of advances in related disciplines.
Among the topics that we plan to discuss in this workshop are: Natural and
Artificial Open Systems, Design Constraints for Open Systems, Programming
Language Issues, and Knowledge Markets.

Wednesday June 1,  10am - 11am Organizational Knowledge Processing by Carl
Hewitt, 11-12 discussion
Thursday June 2, 9am - 10am Open Systems and Software Engineering by Alan Perlis
Friday June 3, 9:30 - 11:30 Knowbots and Knowledge Markets by Mark Stefik and
Danny Bobrow

The workshop is organized by Bernardo Huberman and Mark Stefik.  For more
information contact Bernardo Huberman <Huberman.pa@Xerox.Com>

Directions to reach PARC:  The Auditorium is located at 3333 Coyote Hill Road in
Palo Alto, between Page Mill Road (west of Foothill Expressway) and Hillview
Avenue, in the Stanford Research Park.  To get here, take Page Mill Road to
Coyote Hill Road.  PARC is the only building on the left, just over the crest of
Coyote Hill.  Park in the large lower lot if visitor parking is full and enter
the auditorium at the upper level of the building.  (The auditorium is located
down the stairs to the left of the main doors).

------------------------------

End of AIList Digest
********************

∂24-May-88  1202	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #4   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 May 88  12:02:05 PDT
Received: from FORD.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 24 MAY 88  14:37:16 EDT
Date: Tuesday, 24 May 1988, 14:24-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Sender: nick@MIT-AI
Reply-to: AIList@AI.AI.MIT.EDU
Subject: AIList Digest   V7 #4
To: ailist-outgoing@mc



AIList Digest           Wednesday, 25 May 1988      Volume 7 : Issue 4

Today's Topics:
  
  Philosophy - Free Will 
   (First of at least three digests on this topic)

----------------------------------------------------------------------

Date: 7 May 88 17:51:22 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen
      Smoliar)
Subject: Re: Free Will & Self Awareness

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned.  But we
>do not punish the machine or incarcerate it.
>
>Why then, when a human engages in undesirable behavior, do we resort
>to such unenlightened corrective measures as yelling, hitting, or
>deprivation of life-affirming resources?
>
This discussion seems to be drifting from the issue of intelligence to that
of aggression.  I do not know whether or not such theses have gone out of
fashion, but I still subscribe to the hypothesis that aggression is "natural"
to almost all animal life forms, including man.  Is your adult self so
rational and mature that you have not so much as banged your fist on the
table when your software does something which particularly frustrates you
(or do you feel that adults also transcend frustration)?

------------------------------

Date: 8 May 88 09:52:18 GMT
From: TAURUS.BITNET!shani@ucbvax.Berkeley.EDU
Subject: Re: Free Will & Self-Awareness

In article <2070013@otter.hple.hp.com>, cwp@otter.BITNET writes:

> Did my value system exist before my conception? I doubt it. I learnt it,
> through interaction with the environment and other people. Similarly, a
> (possibly deterministic) program MAY be able to learn a value system, as
> well as what an arch looks like. Simply because we have values, does not
> mean we are free.

  Try to look at it this way: even assuming that you did learn your values
from other people (parents, teachers, TV, etc.) and did not make anything up,
how did you decide what to adopt from whom? Randomly? Or by some predetermined
factors? Doesn't it make values worthless if you can always blame chance,
heritage, or some teachers in your school for your decisions?

There is one mistake (in my opinion, of course) which repeats in many of the
postings on this subject. You must distinguish between values in themselves
(like which is your favorite color, or whether you believe in god or not) and
the practicing of your system of values (or alignment, as I prefer to call it)
in the given realm you 'play' on, because 'real' things (in the manner of
things that exist in the given realm) are the only things which have a common
(more or less...) meaning to all of us. Now, if you think of values as their
pure selves, and not as their practice in reality, you will see that they are
not 'real' in this manner, and therefore cannot be learnt or 'given'. Maybe
one day we will be able to create machines that will have this unique human
ability to give a personal meaning to things... In fact, we can do it already,
and have been able to for thousands of years - we create new human beings.

O.S.

-----------------------------------------------------------------------------
I think that I think, and therefore I think that I am...

------------------------------

Date: 8 May 88 20:19:03 GMT
From: oliveb!felix!dhw68k!feedme!doug@ames.arc.nasa.gov  (Doug Salot)
Subject: Re: Free Will & Self Awareness

In article <1179@bingvaxu.cc.binghamton.edu> Cliff Joslyn writes:
>In article <4543@super.upenn.edu> Lloyd Greenwald writes:
>>This is a good point.  It seems that some people are associating free will
>>closely with randomness.
>
>Yes, I do so.  I think this is a necessary definition.
>
>[good points about QM vs Classical vs Ignorance as views of Freedom deleted]

I don't believe randomness (in the quantum mechanical sense) is important
to a sense of free will.  The illusion of free will is what's important,
and when dealing with a computing machine (the brain) which makes state
changes on the time order of milliseconds, it is simply impossible for
that machine, self-aware or not, to view its state changes as
deterministic when its state changes are based on finer-grained state
changes that occur at or near the speed of light.  It seems to me
that it would be straightforward to give a computer program the ability
to monitor its *behavior* without giving it the ability to
find causal relations between its holistic state and its behavior.


--
Doug Salot || doug@feedme.UUCP || {trwrb,hplabs}!felix!dhw68k!feedme!doug
Feedme Microsystems:Inventors of the Snarf->Grok->Munge Development Cycle

------------------------------

Date: 9 May 88 01:32:40 GMT
From: pdn!ard@uunet.uu.net  (Akash Deshpande)
Subject: Re: Free Will & Self-Awareness


Consider a vending machine that for $.50 vends pepsi, coke or oj. After
inserting the money you make a selection and get it. You are happy.

Now consider a vending machine that has choices pepsi, coke and oj, but
always gives you only oj for $.50. After inserting the money you make
a selection, but irrespective of your selection you get oj. You may feel
cheated.

Thus, the willed result through exercise of freedom of choice may not be
related to the actual result. The basic question of freewill is -
"Is it enough to maintain an illusion of freedom of choice, or should
the willed results be made effective?". The latter, I suppose.

Further consider the first (good) vending machine. While it was being
built, the designer really had 5 brands, but chose (freely, for whatever
reasons) to vend only the three mentioned. As long as I (as a user of the
vending machine) don't know of my unavailable choice space, I have the
illusion of a full freedom of choice. This is where awareness comes in.
Awareness expands my choices, or equivalently, lack of awareness creates
an illusion of freewill (since you cannot choose that which you do not
know of). Note that the designer of the vending machine controls the
freewill of the user.

Indian philosophy contends that awareness (=consciousness) is fundamental.
Freewill always exists and is commensurate with awareness. But freewill
is also an illusion when examined in the perspective of greater awareness.

Akash
--
Akash Deshpande                                 Paradyne Corporation
{gatech,rutgers,attmail}!codas!pdn!ard          Mail stop LF-207
(813) 530-8307 o                                Largo, Florida 34649-2826
Like certain orifices, every one has opinions. I haven't seen my employer's!

------------------------------

Date: 9 May 88 02:36:41 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Free Will & Self Awareness

In article <1182@bingvaxu.cc.binghamton.edu>, Cliff Joslyn writes:
> Again, we're talking at different levels (probably a
> subjective/objective problem).  Let's try this: if you are free, that
> means it is possible for you to make a choice.  That is, you are free to
> scrap your value system.  At each choice you make, there is a small
> chance that you will do something different, something unpredictible
> given your past behavior/current value system.  If, on the other hand,
> you *always* adhere to that value system, then from my perspective, that
> value system (as an *external cause*) is determining your behavior, and
> you are not free.

I'm not sure that there is any point in continuing this, our basic
presuppositions seem to be so alien.

Orthodox Christianity holds that
    - God is able to do anything that is doable
    - God is not constrained by anything other than His own nature
    - it is impossible for God to sin
From my perspective, such a God is maximally free.
From Cliff Joslyn's perspective, such a God is minimally free.

I think the problem lies in my disagreement with Joslyn's definition
quoted above.  He defines freedom as the ability to make choices, and
seems to regard unmotivated (random) ``choices'' as the freest kind.
I also take exception with his view that a value system is an
external cause.  My "value system" is as much a part of me as my
memories.  Or are my memories to be regarded as an external cause too?

Note that behaving consistently according to a particular value system
does not mean that said value system is immune from revision.  There
are some interesting logical problems involved:  in order to move
_rationally_ from one state of your value system to another, you have
to believe that the new state is _better_, which is to say that your
existing value system has to endorse the new one.  I would regard
"scrapping" one's value system as irrational (and it is not clear to
me that it is possible for a human being to do it), and if being able
to do it is freedom, that's not a kind of freedom worth having.

My objection to randomness as a significant component of freedom is
that a random act is not an act that **I** have *willed*.  If I were to
randomly put my fist through this screen, it wouldn't be _my_ act any
more than Cliff Joslyn's or J. S. Bach's.  It is sheer good luck if a
random act happens to be in accord with my wishes.  Unfortunately, we
have living proof that randomness as such is not freedom: consider the
people who suffer from Tourette's syndrome.

It might be objected that behaving in a way consistent with one's goals,
beliefs, and wishes is too much like behaving in a way consistent with a
program to be counted as freedom.  It would be, if we were not capable
of revising said goals and beliefs.  To be free, you have to _check_ your
beliefs.

I propose as a tentative definition that a robot can be said to possess
free will provided that its actions are in accord with its own mental
models and provided that it has sufficient learning capacity to be able
to almost wholly replace the mental models it is initially provided with.
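
To make this tentative definition concrete, here is a minimal Python sketch
(purely illustrative; the class and the example situations are not part of
the proposal itself).  The agent acts only from its own mental model, and
learning can overwrite essentially everything it started with:

    class Robot:
        """Acts from its own mental model; the model can be largely replaced."""

        def __init__(self, initial_model):
            self.model = dict(initial_model)   # situation -> preferred action

        def act(self, situation):
            # Actions are always in accord with the current mental model.
            return self.model.get(situation, "explore")

        def learn(self, situation, better_action):
            # Sufficient learning capacity: any initially provided entry can be replaced.
            self.model[situation] = better_action

    r = Robot({"obstacle ahead": "stop"})
    print(r.act("obstacle ahead"))           # "stop" (model as initially provided)
    r.learn("obstacle ahead", "go around")
    print(r.act("obstacle ahead"))           # "go around" (model revised by learning)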

------------------------------

Date: 9 May 88 08:50:08 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Free Will & Self-Awareness

In art. <5404@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes
>a rule-based or other mechanical account of cognition and decision making
>is at odds with the doctrine of free will which underpins most Western morality

>Cockton should either prepare a brief substantiation or relegate it to the
>cellar of outrageous vacuities crafted solely to attract attention!

Hey, that vacuity's sparked off a really interesting debate, from
which I'm learning a lot.  Don't put it in the cellar yet.
Apologies to anyone who doesn't like polemic, but I've always found it a great
way of getting the ball rolling - I would use extreme statements as a classroom
teacher to get discussion going; I hope no-one's bothered by the transfer of this
behaviour to the adult USENET.

Anyway, the simplified, and thus inadequate, argument is:

        machine intelligence => determinism
        determinism => lack of responsibility
        lack of responsibility => no moral blame
        no moral blame => do whatever your rulebase says.

Now we could view morality as just another rulebase applied to output 1 of the
decision-process, a pruning operator as it were.

Unfortunately, all attempts to date to present a moral rule-base have
failed, so the chances of morality being rule-based are slim. Note that
in the study of humanity, we have few better tools now than we had in
Classical times, so there are no good reasons for expecting major advances
in our understanding of ourselves.  Hence Skinner's dismay that while Physics
had advanced much since classical times, Psychology had hardly advanced at all.
Skinner accordingly stocked his lab with high-tech rats and pigeons in an
attempt to push back the frontiers of learning theory.

At least you don't have to clean out the computer's cage :-)

------------------------------

Date: 9 May 88 10:52:56 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

I was doing fine reading Cliff's rejoinder to Lloyd's comments until
I came to this part:

>>We can't demonstrate true randomness in present day computers;
>>the closest we can come (to my knowledge) is to generate a string
>>of numbers which does not repeat itself. [Lloyd]

>This is not possible in a von Neumann machine. [Cliff]

I was under the impression that a simple recursion (or not-so-simple
if one is a fan of Ramanujan) can emit the digits of pi (or e or
SQRT(2)) and that such a string does not repeat itself.

I think what Cliff meant is that a von Neumann machine cannot emit
a string whose structure cannot be divined.

If I wanted to give my von Neumann machine a *true* random number
generator, I would connect it to an A/D converter driven by thermal
noise (i.e. a toasty resistor).
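
As an aside, the first point is easy to make concrete.  A minimal Python
sketch (illustrative only): a short program can emit the digits of SQRT(2),
a string that never repeats yet is completely determined, which is exactly
why it is not "random" in the sense that matters here:

    import math

    def sqrt2_digits(n):
        # floor(sqrt(2) * 10**n), i.e. the first n+1 digits of sqrt(2)
        return str(math.isqrt(2 * 10 ** (2 * n)))

    print(sqrt2_digits(50))   # 14142135623730950488...  non-repeating, fully determined

True randomness, as noted above, has to come from outside the program,
e.g. from a hardware noise source.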

--Barry Kort

------------------------------

Date: 9 May 88 13:11:04 GMT
From: geb@cadre.dsl.pittsburgh.edu  (Gordon E. Banks)
Subject: Re: Free Will & Self Awareness

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned.  But we
>do not punish the machine or incarcerate it.
>
>Why then, when a human engages in undesirable behavior, do we resort
>to such unenlightened corrective measures as yelling, hitting, or
>deprivation of life-affirming resources?
>


Pray tell, how do you repair or redesign
a human?  Is "A Clockwork Orange" the model we want to strive for?
If you had a machine which was running amok, and you did not know
how to repair it or redesign it, would not destroying it or isolating
it from the objects of its aggression be a prudent course?  Punishment
can serve to "redesign" the human machine.  If you have children, you
will probably know this.  Unfortunately, it doesn't work with everyone.

------------------------------

Date: 9 May 88 14:23:33 GMT
From: sunybcs!nobody@rutgers.edu
Reply-to: sunybcs!rapaport@rutgers.edu (William J. Rapaport)
Subject: Re: Philosophy, free will


Another good reference on free will is:

Dennett, Daniel, _Elbow Room:  The Varieties of Free Will Worth Wanting_
(MIT Press).

------------------------------

Date: 9 May 88 16:02:01 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Punishment of machines

I was fascinated by John Nagle's rejoinder to my remarks about
punishing a machine.  John writes:
>      The concept of a machine which could be productively punished is
>not totally unreasonable.  It is, in fact, a useful property for some robots
>to have.  Robots that operate in the real world need mechanisms that implement
>fear and pain to survive.  Such machines will respond positively to punishment.
>
>      I am working toward this end, am constructing suitable hardware and
>software, and expect to demonstrate such robots in about a year.

John's posting reminded me of the short story, "Soul of the Mark III Beast"
which appears in _The Mind's I_.  While I cannot dispute John's point
that a game of engineered darwinism might produce a race of hardy robots,
I must confess that I am troubled by the concept.  Would not the survivors
be liable to rise up against their creators in a titanic struggle
for dominance and survival?  Would we erect a new coliseum to enjoy
the spectacle of intermachine warfare?  Why am I both excited and
horrified by the thought?

--Barry Kort

------------------------------

Date: 9 May 88 16:28:40 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

I was gratified to see Marty Brilliant's entry into the discussion.
I certainly agree that an intelligent system must be able to
evolve its knowledge over time, based on information supplied partly
by others, and partly by its own direct experience.  Thomas Edison
had a particularly rich and accurate knowledge base because he was
a skeptic:  he verified every piece of scientific knowledge before
accepting it as part of his belief system.  As a result, he was able
to envision devices that actually worked when he built them.

I think Minsky would agree that our values are derived partly from
inheritance, partly from direct experience, and partly from internal
reasoning.  While the state of AI today may be closer to Competent
Systems rather than Expert Systems, I see no reason why the field
of AI cannot someday graduate to AW (Artificial Wisdom), in which an
intelligent system not only knows something useful, it senses that
which is worth knowing.

--Barry Kort

------------------------------

Date: 9 May 88 17:41:09 GMT
From: COYOTE.STANFORD.EDU!eyal@ucbvax.Berkeley.EDU  (Eyal Mozes)
Subject: Re: Free Will and Self-Awareness

In article <726@taurus.BITNET>, shani@TAURUS.BITNET writes:
>In article <402@aiva.ed.ac.uk>, jeff@aiva.BITNET writes:
>> Is this an Ayn Rand point?  It sure sounds like one.  Note the use
>> of `self-contradicting'.
>
>I bet you will not belive me, but I'm not sure I know who Ayn Rand is...

I didn't see anything in Shani's message that looks like it's based on
Ayn Rand (she certainly wasn't the only philosopher to oppose
self-contradiction), but I do agree with Jeff that Ayn Rand's writings
can shed light on the free will issue.

To those who haven't heard of her, Ayn Rand was a novelist and a
philosopher, and both her novels and her philosophy books are highly
recommended.

I think there are two main ways in which Ayn Rand was relevant to the
free will issue:

1. Her basic approach, of basing philosophical theories on observation
of facts rather than on assumptions about what the world should be
like, is an approach that all those who discuss the issue of free will
should learn. Specifically, we have to realize that free will is a
fact, perceived directly by introspection, and it is therefore wrong to
reject it just because we would like all natural processes to conform
to the model of physics.

2. Ayn Rand has identified the exact nature of free will, in a way that
is consistent with the facts, does not suffer from the philosophical
problems of other free will theories, and demonstrates why free will is
not connected to, and is actually incompatible with, randomness. Man's
free will lies in the act of focusing his consciousness, in his choice
to think or not and what to think about. This means that free will is
consistent with having reasons that determine your thoughts and your
actions, because, by your choice in focusing your consciousness, it is
you who chose those reasons.

        Eyal Mozes

        BITNET: eyal%coyote@stanford
        ARPA:   eyal@coyote.stanford.edu

------------------------------

Date: 9 May 88 18:22:01 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Re: Free Will & Self Awareness

In article <31337@linus.UUCP> bwk@mbunix (Kort) writes:
>I was doing fine reading Cliff's rejoinder to Lloyd's comments until
>I came to this part:
>
>>>We can't demonstrate true randomness in present day computers;
>>>the closest we can come (to my knowledge) is to generate a string
>>>of numbers which does not repeat itself. [Lloyd]
>
>>This is not possible in a von Neumann machine. [Cliff]
>
>I was under the impression that a simple recursion (or not-so-simple
>if one is a fan of Ramanujan) can emit the digits of pi (or e or
>SQRT(2)) and that such a string does not repeat itself.
>
>I think what Cliff meant is that a von Neumann machine cannot emit
>a string whose structure cannot be divined.

Hmm, I suppose you're right.  I was thinking of your typical
pseudo-random process whose cycle length is a function of the size of
the seed.
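
(For what it's worth, a tiny Python illustration of that point, with made-up
constants: a linear congruential generator has a finite state space, so its
output must eventually cycle, and no such scheme can yield a truly random
string.)

    def lcg(seed, a=5, c=3, m=2 ** 8):
        """Tiny linear congruential generator with a state space of size m."""
        x = seed % m
        while True:
            x = (a * x + c) % m
            yield x

    def cycle_length(seed, m=2 ** 8):
        seen = {}
        for i, x in enumerate(lcg(seed, m=m)):
            if x in seen:
                return i - seen[x]    # the output has started repeating
            seen[x] = i

    print(cycle_length(42))   # at most 256, no matter how the seed is chosen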

I forget the impact on the argument at this point: it seems it would
rest on the epistemic problem of distinguishing a truly random string
from a merely chaotic one.  My impression is that this is not always
possible.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 9 May 88 19:13:48 GMT
From: mcvax!botter!klipper!biep@uunet.uu.net  (J. A. "Biep" Durieux)
Subject: Free Will, Quantum computers, determinism, randomness,
         modelling

Warning: this is long!     Question to Muslims at the end (but they should
                                read the rest first before answering).

In several newsgroups there are (independent) discussions on the nature
of free will. It seems natural to merge these. Since these discussions
mostly turn around physical, logical or mathematical notions, the natural
place for this discussion seems to be  sci.philosophy.tech  (which newsgroup
is devoted to technical philosophy, i.e. logic, philosophical mathematics
(like intuitionism, formalism, etc.), philosophical physics (the time
problem, interpretations of quantum mechanics, etc) and the like).
Here follows a bit of the discussion in sci.math, plus some of my
comments.

- - -

In article <5673@uwmcsd1.UUCP>,
        markh@csd4.milw.wisc.edu (Mark William Hopkins) writes:
> This is probably the sharpest formulation of the widely held idea
> that our free will may be ultimately derived from the Uncertainty Principle.
> What's being argued here is that human beings are fundamentally random
> (Turing Random) ... or less euphemistically, fundamentally insane.  The
> most succinct expression of these ideas is:

>       Human intelligence = Turing machine
>                              + Quantum theoretical random signal generator

- - -

Some points (personal opinions):

(1) The philosophical meaning of free will is mainly to form a basis for
    founding responsibility. To a lesser degree it is important with respect
    to the cluster of problems surrounding consciousness, experiencing, etc.
    It is in no way clear to me how "mechanical" explanations of the
    phenomenon can be of any help here. Is there any more reason for giving
    a culprit a sentence of hard labour instead of just life-long imprisonment
    (or easier still: death by destruction) because the culprit came to its
    actions by a random decision process, or by trying in vain to model
    itself? Or does any such explanation give a hint as to the "why" of my
    sensation of existing? [I am not saying it doesn't: if you think it
    does, please explain!]
    [The same holds the other way too, of course: why take the trouble to
    give someone a "good old day" (especially if there is no motive of
    getting one yourself because of it).]

(2) In answer to Drew McDermott's robot that can only predict the world by
    seeing itself as a black box that acts intentionally: that is exactly
    what there is to it: the robot has to model itself as an intentional
    being. And it *is* an intentional being. So is a glass with water and
    lots of salt. Any system with the right sort of feedback is intentional.
    The concept of intentionality is just a tool for solving simultaneous
    equations without getting into an infinite recursion. But what does
    intentionality have to do with free will? If I understand you well, you
    say that the robot would get the *illusion* of having free will because
    of having to model itself that way. Does that mean that until you
    "understood" the working of a magnet, since you had to model it
    intentionally, you were forced into the thought that magnets had free will?
    Do you think matter has the free will to attract its like?

(3) About explaining free will by introducing randomness:
    - I think those who say randomness is just as forcing as determinism
      are right: most of you have been talking about *external* randomness, and
      *external* determinism. Compatibilists see people make this error,
      and from that conclude that, since the objections to the theory
      don't hold, the theory is true after all. I think it isn't.
      Primo, the responsibility argument still holds, and secundo, the
      "explanation" is just shifting the problem, and therefore begging
      the (or another) question:
    - Randomness is a poorly (or even non-)understood phenomenon. Events
      may be random, but still have a probability distribution, which
      makes the thing even more complicated. Now free will might be an
      (unexplained in itself) explanation for randomness (the particle
      emerges where it wants to), at least as validly as the other way
      around.
    - Determinism, on the other hand, is not much better understood.
      It can easily be described, and so it *seems* better understood,
      but as far as I know, there is no valid explanation for why certain
      things happen always exactly the same way. (This is not even true
      in physics: QM.) The problem is as perplexing in logic: why are
      certain types of inference always valid? Even if I make one for
      the 100th time, again it is valid.
    - A widely accepted answer to both questions (on randomness and on
      determinism) is: because of free will (e.g. the free will of G-d
      to let the world be as it is). Free will, or at least will, is the
      "natural" explanation: primitive man supposes lots of spirits of
      all kinds to cause both regularities and irregularities.
    - Finally, several persons have argued that there is a division
      deterministic/random, and that there is nothing between. (Martin
      Gardner, in "The whys of a philosophical scrivener", argues that
      there *is*, or at least may be, something in between.) My impression
      is that these persons are looking for free will at the wrong place.
      To me, there is a spectrum (or dichotomy, rather), with probability
      (including determinism at both limits and randomness in between)
      at one end, and free will at the other. After all, determinism is
      just a special case of probability. A priori, it seems to me that
      it is easier to explain probability in terms of free will than free
      will in terms of probability.

(4) I think free will and determinism don't have to clash, but for another
    reason than the ones I have read so far. Let me explain by way of an
    example: Suppose I am dreaming. Now I dream a world, and in that world
    someone proves that pi equals 7. What's more: he *correctly* proves so.
    He shows his proof to others, and the knowledge that pi is 7 becomes
    general knowledge in that world. That is possible. I will never be
    able to reproduce that proof as a correct proof in this world, but
    in that other world it is correct. Ask anybody in that dream-world of
    mine, and (s)he'll tell you that the laws of logic are as rigid as we
    think they are here. (By the way, what would the logical equivalent
    of the physical light speed be?) So that world is deterministic, but
    nevertheless my will made pi be 7. The only requirement is that my
    will is *transcendent* to the world in question. Now anybody can
    imagine a world that is governed by *one* will, because we all have
    the dream example. But now imagine a world that is partially governed
    by lots of wills: every will calls the piece it governs "I". That may
    be somewhat harder to imagine, but to me it seems that that would
    produce a world in which the rules of logic and physics could be rigid
    and inviolable, but in which nevertheless free will could influence
    reality.
    I realise this, too, is begging the question, for how would those
    free wills exist in their hyper-world, but one thing has changed:
    what first seemed impossible now "only" seems incomprehensible (as
    we don't have any idea of what such a hyper-world would be like, and
    we simply cannot imagine what sort of substratum they would have
    instead of logic and causality).

But let's stop now, before I introduce G-d, or even J-sus (ever been
immanently present in your own dream-world?).

One last question: how do Muslims explain the concept of responsibility,
given the fact that they can't help what they do or don't do (kismet)?
Or am I misunderstanding the concept of kismet?
--
                                                Biep.  (biep@cs.vu.nl via mcvax)
        Some mazes (especially small ones) have no solutions.
                                                        -- man 6 maze

------------------------------

Date: 9 May 88 20:27:16 GMT
From: mcvax!ukc!its63b!aiva!tw@uunet.uu.net  (Toby Walsh)
Subject: Re: Free Will

Drew McDermott proposes a "cute" example of a robot R next to a bomb B,
thinking about (thinking about (thinking about ..... its thinking) ....));
to avoid this infinite regress,  he proposes "free will" = "ability to
identify one's special status within one's model of the universe".

This example immediately suggests to me the analogy with meta-level
reasoning; reasoning about reasoning occurs at the meta-level, and
reasoning about this meta-level reasoning at the meta-meta-level, ....
To escape this infinite regress of meta-meta-.... levels, we need to
introduce the idea of (self-)reflection, where we reason about the
meta↑n-level in the meta↑n-level. The notion of identifying one's
special status within the model then becomes the analogous concept
of naming between object- and meta-levels.

But does this example/analogy tell us more about the annoying issue of free
will?  I believe not.  It has much to say about consciousness but
doesn't directly address what it is to have goals and desires, what it is
to MAKE a decision when confronted with choice. Nevertheless, meta-level
reasoning is an interesting model within which to formulate these concepts.


-------------------------------------------------------------------------------
Toby Walsh                      JANET: T.Walsh@uk.ac.edinburgh
Dept of AI                      ARPA:  T.Walsh%uk.ac.edinburgh@nss.cs.ucl.ac.uk
Edinburgh University            Tel:   (=44)-31-225-7774 ext 235
80 South Bridge, Edinburgh EH1 1HN

------------------------------

Date: 9 May 88 21:05:14 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

Perhaps it would help if I offered a straw proposal for invoking one's
free will in a specific situation.

Assume that I possess a value system which permits me to rank my
personal preferences regarding the likely outcome of the courses
of action open to me.  Suppose, also, that I have a (possibly crude)
estimate of your value system.  If I were myopic (or maybe just stupid)
I would choose my course of action to maximize my payoff without regard
to you.  But my knowledge of your value system creates an interesting
opportunity for me.  I can use my imagination to conceive a course
of action which increases both of our utility functions.  Free will
empowers me to choose a Win-Win alternative.  Without free will, I am
predestined to engage in acts that hurt others.  Since I disvalue hurting
others, I thank God that I am endowed with free will.

Is there a flaw in the above line of reasoning?  If so, I would be
grateful to someone for pointing it out to me.
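
To make the straw proposal concrete, here is a minimal Python sketch with
made-up payoffs; the numbers and action names are illustrative only:

    # Hypothetical payoffs: (my utility, your utility) for each open course of action.
    payoffs = {
        "grab everything": (10, -5),
        "ignore you":      (6, 0),
        "cooperate":       (8, 7),
    }

    # Myopic choice: maximize my payoff without regard to you.
    myopic = max(payoffs, key=lambda a: payoffs[a][0])

    # Win-Win choice: among actions that raise both utilities, take the best joint one.
    win_win = max((a for a, (mine, yours) in payoffs.items() if mine > 0 and yours > 0),
                  key=lambda a: sum(payoffs[a]), default=None)

    print(myopic)    # "grab everything"
    print(win_win)   # "cooperate"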

--Barry Kort

------------------------------

Date: 9 May 88 22:28:02 GMT
From: turing.arc.nasa.gov!nienart@icarus.riacs.edu  (john nienart)
Subject: Re: Free Will and Self-Awareness

In article <8805091739.AA27922@ucbvax.Berkeley.EDU> Eyal Mozes writes:
>
>                 Specifically, we have to realize that free will is a
>fact, perceived directly by introspection, and it is therefore wrong to
>reject it just because we would like all natural processes to conform
>to the model of physics.

What makes you so certain that _anything_ perceived by introspection is
fact? Introspection provides me with the "fact" that there is, in fact, a
"me" to do the introspecting, but there are a number of philosophies and
religions (mostly of the Eastern variety) which insist that this is _not_ a
fact, but rather simply an illusion we impose on ourselves, essentially
through habit, and that through the proper discipline, we can train
ourselves to note the absence of this self. After this process is complete
(no claim is being made here as to personal success in this), introspection
apparently confirms a hypothesis which directly contradicts our previous
introspective belief. Which introspection is correct?
>
>Man's
>free will lies in the act of focusing his consciousness, in his choice
>to think or not and what to think about. This means that free will is
>consistent with having reasons that determine your thoughts and your
>actions, because, by your choice in focusing your consciousness, it is
>you who chose those reasons.

Maybe it's just me, but I find rather frequently that I'm thinking about
something that I'm _sure_ I'd rather not think about (or humming a trashy
pop song I hate, etc.). It certainly feels at these times that I don't have
complete control over that upon which my consciousness is focussed. So would
you say that I _don't_ have free will (despite my introspective belief in
it), or that the (introspectively motivated) "fact" of my lack of control
is false?

--John

------------------------------

Date: 9 May 88 23:55:11 GMT
From: COYOTE.STANFORD.EDU!eyal@ucbvax.Berkeley.EDU  (Eyal Mozes)
Subject: Re: Free Will and Self-Awareness

In article <796@hydra.riacs.edu>, john nienart writes:
>What makes you so certain that _anything_ perceived be introspection is
>fact? Introspection provides me with the "fact" that there is, in fact, a
>"me" to do the introspecting, but there are a number of philosophies and
>religions (mostly of the Eastern variety) which insist that this is _not_ a
>fact,

And that's exactly the point. Most philosophies, and all religions, make
a lot of a priori assumptions about what the world should be like, some
of them ridiculously contrary to fact. This approach must be rejected.

>Maybe its just me, but I find rather frequently that I'm thinking about
>something that I'm _sure_ I'd rather not think about (or humming a trashy
>pop song I hate, etc.). It certainly feels at these times that I don't have
>complete control over that upon which my consciousness is focussed.

Are you saying that, at those times, you are making a deliberate,
conscious effort to turn your thoughts to something else, and this
effort fails? If so then, yes, it is just you; all the evidence I'm
familiar with points to the fact that it's always possible for a human
being to control his thoughts by a conscious effort.

        Eyal Mozes

        BITNET: eyal%coyote@stanford
        ARPA:   eyal@coyote.stanford.edu

------------------------------

Date: 10 May 88 00:56:01 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU 
      (Stephen Smoliar)
Subject: Re: Free Will & Self-Awareness

In article <1099@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>Research IS stopped for ethical reasons, especially in Medicine and
>Psychology.  I could envisage pressure on institutions to limit its AI
>work to something which squares with our ideals of humanity.

Just WHOSE ideals of humanity did you have in mind?  I would not be surprised
at the proposition that humanity, taken as a single collective, would not be
able to agree on any single ideal;  that would just strike me as another
manifestation of human nature . . . a quality for which the study of artificial
intelligence can develop great respect.  Back when I was a callow freshman, I
was taught to identify Socrates with the maxim, "Know thyself."  As an
individual who has always been concerned with matters of the mind, I can
think of no higher ideal to which I might aspire than to know what it is
that allows myself to know;  and I regard artificial intelligence as an
excellent scientific approach to the pursuit of this ideal . . . one which
enables me to test flights of my imagination with concrete experimentation.
Perhaps Gilbert Cockton would be kind enough to let us know what it is that
he sees in artificial intelligence research that does not square with his
personal ideals of humanity (whatever they may be);  and I hope he does not
confuse the sort of brute force engineering which goes into such endeavours
as "smart weapons" with scientific research.

>If the
>US military were not using technology which was way beyond the
>capability of its not-too-bright recruits, then most of the funding
>would dry up anyway.  With the Pentagon's reported concentration on
>more short-term research, they may no longer be able to indulge their
>belief in the possibility of intelligent weaponry.
>
Which do you want to debate, ethics or funding?  The two have a long history
of being immiscible.  The attitude which our Department of Defense takes
towards truly basic research is variable.  Right now, times are hard (but
then they don't appear to be prosperous in most of Europe either).  We
happen to have an administration that is more interested in guns than brains.
We have survived such periods before, and I anticipate that we shall survive
this one.  However, a whole-scale condemnation of funding on grounds of
ethics doesn't gain very much other than a lot of bad feeling.  Fortunately,
we have benefited from the fat years to the extent that the technology has
become affordable enough that some of us can pursue more abstract
studies of artificial intelligence with cheaper resources than ever before.
Anyone who REALLY doesn't want to take what he feels is "dirty" money can
function with much smaller grants from "cleaner" sources (or even, perhaps,
work out of his garage).
>
>The question is, do most people WANT a computational model of human
>behaviour?

Since when do "most people" determine the agenda of any scientific inquiry.
Did "most people" care whether or not this planet was the center of the
cosmos.  The people who cared the most were navigators, and all they cared
about was the accuracy of their charts.  The people who seemed to care the
most about Darwin were the ones who were most obsessed with the fundamental
interpretation of scripture.  This may offend sociological ideals;  but
science IS, by its very nature, an elite profession.  A scientist who lets
"most people" set the course of his inquiry might do well to consider the
law or the church as an alternative profession.

>  Everyone is free to study what they want, but public
>funding of a distasteful and dubious activity does not follow from
>this freedom.

And who is to be the arbiter of taste?  I can imagine an ardent Zionist who
might find the study of German history, literature, or music to be distasteful
to an extreme.  (I can remember when it was impossible to hear Richard Wagner
or Richard Strauss in concert in Israel.)  I can imagine political scientists
who might find the study of hunter-gatherer cultures to be distasteful for
having no impact on their personal view of the world.  I have about as much
respect for such tastes as I have for anyone who would classify artificial
intelligence research as "a distasteful and dubious activity."

>   If funding were reduced, AI would join fringe areas such as
>astrology, futorology and palmistry.  Public funding and institutional support
>for departments implies a legitimacy to AI which is not deserved.


Of course, those "fringe areas" do not get their funding from the government.
They get it through their own private enterprising, by which they convince
those "most people" cited above to part with hard-earned dollars (after the
taxman has taken his cut).  Unfortunately, scientific research doesn't "sell"
quite so well, because it is an arduous process with no quick delivery.
Gilbert Cockton still has not made it clear, on scientific grounds at any
rate, why AI does not deserve this so-called "legitimacy."  In a subsequent
article, he has attempted to fall back on what I like to call the
what-a-piece-of-work-is-man line of argument.  Unfortunately, this
approach is emotional, not scientific.  Why he has to draw upon emotions
must only be because he cannot muster scientific arguments to make his
case.  Fortunately, those of us who wish to pursue a scientific research
agenda need not be deterred by such thundering.  We can devote our attention
to the progress we make in our laboratories.

------------------------------

Date: Mon, 9 May 88 19:58 EST
From: EBARNES%HAMPVMS.BITNET@MITVMA.MIT.EDU
Subject: Free Will and Determinism


OK Folks:
        There are a few developments that this argument has undergone in the
past hundred years that you should become familiar with.  James Anderson
was the first to point out the apparent paradox here on AIList.  To
repeat what he said - 1) Determinism makes free will impossible, because
my actions were determined from before I was born, and I therefore cannot
have control over them; and 2) Free will is impossible without determinism,
because without strict determinism I do not have direct control over my
actions (some random event could prevent my doing what I wanted to do).

The first view was defended by Peter van Inwagen in "An Essay on Free Will"
and the second view was defended by Schopenhauer in "On the Freedom of the
Will".  The argument was settled by Dennett in his recent book "Elbow
Room: The Varieties of Free Will Worth Wanting".  Dennett argues that when
the idea of freedom is analyzed, there is no possible state of affairs that
we could be referring to if we want to have complete control over our actions
(i.e. - be able to do what we desire) and have our actions not be determined,
and so we must redefine what we mean by freedom.  The argument is involved,
but he uses analogies of what we mean by freedom in other senses, such as
not being in prison, to show that what we want when we want freedom is control
over our lives and actions.  What determinism claims is that our actions
are determined by way of our desires being determined, not that our actions
are controlled in spite of our desires.  His conclusion is that Schopenhauer
was right, and that free will requires determinism rather than being
prevented by it.  Yes, this means that our desires are determined, but this
is good.  Our desires are determined by our environment, which is the best
place for them to come from, since it is impossible for us to desire all
of our desires (it would result in an infinite loop).

        The problem now is that determinism is not in fact true, because of
quantum mechanics.  The probability-1 outcome of future events is a myth
that has been thoroughly disproven.  This raises the second point that
has been missing in this discussion, a definition of random.  I thank
David Sher for pointing this out, and I offer the following clarification:
there are two kinds of randomness - 1) Epistemic randomness means that we do
not know what the outcome of an event will be; 2) Ontological randomness
means that the outcome is not yet a fact of the matter (i.e. - it could
turn out either way).  Quantum mechanics has shown that ontological
randomness exists (read up on the Einstein-Podolsky-Rosen [EPR] paradox).
But the randomness occurs primarily at the microscopic level, so that
macroscopic events may still be almost totally determined.
        In conclusion, free will is compatible with determinism, but
determinism is not true.  Free will is not only compatible with determinism
but dependent on it, so it appears that we have free will only to the
extent that macroscopic events are determined.  There are therefore degrees
of freedom of the will.

------------------------------

End of AIList Digest
********************

∂24-May-88  1816	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #5   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 May 88  18:16:03 PDT
Received: from FORD.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 24 MAY 88  14:45:00 EDT
Date: Tuesday, 24 May 1988, 14:40-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Sender: nick@MIT-AI
Reply-to: AIList@AI.AI.MIT.EDU
Subject: AIList Digest   V7 #5
To: ailist-outgoing@mc


AIList Digest           Wednesday, 25 May 1988      Volume 7 : Issue 5

Today's Topics:

  Philosophy - More Free Will 
   (Second of at least three digests on this topic)

----------------------------------------------------------------------

Date: Mon  9 May 88 23:41:40-PDT
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: A BDI Approach to Free Will

I am not trained in philosophy, but the following points seem reasonable:

Let my current beliefs, desires, and intentions be called my BDI state.
It may be a fuzzy state, an algorithm, whatever.  Let the continuous set
of all such states from conception to the present be called my BDI history.
(I gather that these are the situated automata assumptions.  Fine;
I'm willing to view myself as a Markov process.  I just hope I'm not
abusing a standardized vocabulary.)

Are my actions fully determined by external context, my BDI state,
and perhaps some random variables?  Yes, of course -- what else is there?
This follows almost directly from my definition of BDI state.
I suppose there could be influence from non-BDI variables
(e.g., from my BDI history, which is not itself a belief, desire,
or intention), but I could fix that by positing a more elaborate
state vector that includes all such influences.

Is my BDI history fully determined by genetics, external context,
and perhaps some random variables?  Yes, of course -- since I'm  not
a mind/body dualist.  The dualist position seems to require
a spiritual-domain context, BDI state, and history -- but
that just bumps the problem up one level instead of solving it.

Are my actions predictable?  No.  My BDI history is chaotic and
possibly stochastic, and my BDI state is unknowable.  Even I can't
predict my actions in complete detail, although I can predict dominant
characteristics in familiar situations.

Do I have free will?  What does that mean?  It can't mean that I will
my actions to contradict my BDI state, since that intention would
itself be part of my BDI state.  It can't mean that I ignore
my BDI state and take random actions, since that surrenders will
to chance.  (The act of surrender is controlled by my BDI state, and
is separate from any random acts that later occur.  It might be an
act of free will, if we can pin down what that means.)

Free will must mean the ability to follow actions dictated by my
BDI state, if it means anything.  Yet that is only the freedom
to follow a program, the antithesis of free will.  So the term itself
is a contradiction, and the discussion is meaningless.  I do what
I do because I am what I am, and the current "I" has no control
over what I am at this moment.

There is one aspect I haven't covered.  Because my BDI states are
recursive, my current actions (including thoughts and other mental
actions) influence my future BDI states.  I can shape my own
character and destiny, although my actions in this regard are still
determined by my BDI state.  I can lock in specific goals, then
work toward changing my BDI state to achieve them.  Success may
be almost instantaneous, or may be as difficult as quitting smoking
or losing weight.  It is in these processes that the illusion of
free will is strongest, but it is still an illusion (of the sort
pointed out by Drew McDermott.)  It is also in these processes
that we most sense our lack of free will when we fail to achieve
the internal states necessary for our chosen goals.
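
(A toy Python sketch of the recursion just described, with invented structure
and names; it is only meant to show the action chosen now feeding back into
the next BDI state.)

    import random
    from dataclasses import dataclass, field

    @dataclass
    class BDIState:
        beliefs: dict = field(default_factory=dict)
        desires: list = field(default_factory=list)
        intentions: list = field(default_factory=list)

    def choose_action(state, context, rng):
        # The action is a function of context, the BDI state, and a random variable.
        options = state.intentions or ["do nothing"]
        return rng.choice(options)

    def update(state, context, action):
        # Recursive step: the action just taken becomes part of the next BDI state,
        # so current behavior shapes future character.
        new_beliefs = dict(state.beliefs, last_action=action, context=context)
        return BDIState(new_beliefs, list(state.desires), list(state.intentions))

    rng = random.Random(0)
    state = BDIState(desires=["lose weight"], intentions=["diet", "snack"])
    for t in range(3):
        action = choose_action(state, "evening", rng)
        state = update(state, "evening", action)
        print(t, action)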

I see no reason why we can't build machines with similar mental
architectures, at least in principle.  They will necessarily
experience free will, although they may or may not believe that
they have it.  We can also believe as we will, but that will is
no more free for us than for the machines.

                                        -- Ken Laws

------------------------------

Date: 10 May 88 08:42:21 GMT
From: otter!cwp@hplabs.hp.com  (Chris Preist)
Subject: Re: Re: Free Will and Self-Awareness


It is depressing to see that people are unable to accept that the problem
of determinism is a METAPHYSICAL problem, which cannot be solved purely
by philosophical debate, introspection, etc etc. It is NOT self-contradictory
to assume that the world is determined, and the freedom we perceive is
subjective. Nor is it self-contradictory to assume that we are free through
some form of mind-body dualism (as soon as you bring in 'I', 'free choices', etc.,
this is what you are opting for). However, it is self-contradictory to assume
that what you assume is necessarily correct! Some people in the debate
obviously have not understood the determinist/compatibilist argument. For a
short and lucid explanation, may I recommend the relevant chapter in Ayer's
'Problems of Philosophy'.

By the principle of Occam's Razor (i.e. discard all unnecessary assumptions),
I personally would choose to work assuming the determinist approach. However,
the fact that the universe is determined is subjectively irrelevant to our
experiences - the freedom we experience (and thus the moral responsibility,
etc) is as exciting as ever.

Chris

------------------------------

Date: 10 May 88 13:44:44 GMT
From: hpscad.dec.com!verma@decwrl.dec.com  (Virendra Verma, DTN
      297-5510, MRO1-3/E99)
Subject: RE: Free Will & Self-Awareness

>Consider a vending machine that for $.50 vends pepsi, coke or oj. After
>inserting the money you make a selection and get it. You are happy.

>Now consider a vending machine that has choices pepsi, coke and oj, but
>always gives you only oj for $.50. After inserting the money you make
>a selection, but irrespective of your selection you get oj. You may feel
>cheated.

>Thus, the willed result through exercise of freedom of choice may not be
>related to the actual result. The basic question of freewill is -
>"Is it enough to maintain an illusion of freedom of choice, or should
>the willed results be made effective?". The latter, I suppose.

>Further consider the first (good) vending machine. While it was being
>built, the designer really had 5 brands, but chose (freely, for whatever
>reasons) to vend only the three mentioned. As long as I (as a user of the
>vending machine) don't know of my unavailable choice space, I have the
>illusion of a full freedom of choice. This is where awareness comes in.
>Awareness expands my choices, or equivalently, lack of awareness creates
>an illusion of freewill (since you cannot choose that which you do not
>know of). Note that the designer of the vending machine controls the
>freewill of the user.

>Akash

        It seems to me that you are mixing "free will" and "outcome".  I think
        "free will" is probabilistically related to the "outcome".  Isn't that
        the essence of the "law of karma", when Krishna says that you are free
        to exercise your will (i.e., the act of doing something, which is the
        karma element; "insertion of coins" is an act of free will in your
        example)?  You have no control over the "results" element of your
        free will.  The "awareness" element simply improves the probability
        of the "outcome".  Even in your first example, with the good machine, you
        may not get what you want because there may be a power failure
        right after you insert the coin!!

        - Virendra

------------------------------

Date: 10 May 88 15:29:35 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU 
      (Stephen Smoliar)
Subject: control of one's thoughts

In article <8805092354.AA05852@ucbvax.Berkeley.EDU> eyal@COYOTE.STANFORD.EDU
(Eyal Mozes) writes:
>In article <796@hydra.riacs.edu>, nienart@turing.arc.nasa.gov (john nienart)
>writes:
>
>>Maybe its just me, but I find rather frequently that I'm thinking about
>>something that I'm _sure_ I'd rather not think about (or humming a trashy
>>pop song I hate, etc.). It certainly feels at these times that I don't have
>>complete control over that upon which my consciousness is focussed.
>
>Are you saying that, at those times, you are making a deliberate,
>conscious effort to turn your thoughts to something else, and this
>effort fails? If so then, yes, it is just you; all the evidence I'm
>familiar with points to the fact that it's always possible for a human
>being to control his thoughts by a conscious effort.
>
There is an old joke that may serve as a valuable counterexample here:
Try consciously NOT to think of an elephant for exactly the next five
minutes and then think of a baby elephant as soon as those five minutes
have elapsed.

------------------------------

Date: 10 May 88 16:25:58 GMT
From: krulwich-bruce@yale-zoo.arpa  (Bruce Krulwich)
Subject: Acting irrationally (was Re: Free Will & Self Awareness)

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned.  But we
>do not punish the machine or incarcerate it.
>
>Why then, when a human engages in undesirable behavior, do we resort
>to such unenlightened corrective measures as yelling, hitting, or
>deprivation of life-affirming resources?

This can be explained easily in light of AI theories of the roles of
expectations in cognition and learning.  Your example could be
explained as follows:

    1.  Yelling at and hitting a person because of something he's done
        is irrational
    2.  People have the expectation that other people act rationally
    3.  When someone yells at you, it triggers a failure of this
        expectation
    4.  So, the person being yelled at or hit tries to explain this
        expectation failure, hopefully concluding that he did
        something that the other person feels strongly about
    5.  Thus he learns that the other person feels strongly about
        something, which most of the time is the goal that the yeller
        or hitter had in the first place

Bruce Krulwich

------------------------------

Date: 10 May 88 20:40:20 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will and Self-Awareness

Eyal Mozes writes:
> ... all the evidence I'm familiar with points to the fact
>that it's always possible for a human being to control his
>thoughts by a conscious effort.

Our thoughts are at least partly influenced by information received
through our senses.  People who have witnessed a disturbing event
in their lives may have trouble getting it off their minds.  I think
most psychologists would agree that at least some portion of the
population is susceptible to unwanted thoughts.  Perhaps these
victims haven't discovered how to engage the conscious mind to
override the invasions of the nonconscious mind.  (By the way, I'm
one of the victims, so I'd be grateful for any guidance Eyal can give me.)

--Barry Kort

------------------------------

Date: 11 May 88 15:04:06 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Acting irrationally (was Re: Free Will & Self Awareness)

I appreciated Bruce Krulwich's analysis of the cognitive chain
initiated by a yelling/hitting episode.  The fifth (and last)
link in the chain of reasoning, is that the target of the verbal
abuse draws a conclusion about the abuser:

>    5.  Thus he learns that the other person feels strongly about
>        something, which most of the time is the goal that the yeller
>        or hitter had in the first place

Wouldn't it have been easier if the yeller had simply disclosed his/her
value system in the first place?  Or do I have an unrealistic expectation
that the yeller is in fact able to articulate his/her value system to an
inquiring mind?

--Barry Kort

------------------------------

Date: 11 May 88 15:21:39 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

I appreciated Richard O'Keefe's suggestion that free will is intimately
related to the freedom to learn.  This idea is consistent with the
notion that one cannot create a sentient being without free will.
Moreover, it is evidently unpredictable what a sentient being will
in fact discover and learn in his/her/its lifetime.

--Barry Kort

------------------------------

Date: 11 May 88 15:52:09 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

Stephen Smoliar seems frustrated at the drift of the discussion.
He writes:

>This discussion seems to be drifting from the issue of intelligence to that
>of aggression.  I do not know whether or not such theses have gone out of
>fashion, but I still subscribe to the hypothesis that aggression is "natural"
>to almost all animal life forms, including man.  Is your adult self so
>rational and mature that you have not so much as banged your fist on the
>table when your software does something which particularly frustrates you
>(or do you feel that adults also transcend frustration)?

It is not clear to me whether aggression is instinctive (wired-in)
behavior or learned behavior.  I think the psychological jury is
still out on this question.  It is certainly true that aggressive
and non-aggressive behaviors can be learned.  (Personally, I feel
that assertive behavior is preferable to aggressive, and tactful
behavior is preferable to assertive.  But tactful behavior is harder
to learn.)

As to your question about my personal habits when frustrated, I do
not bang my fist on the table.  Rather, I clench my teeth.  And
I do believe it is possible (though difficult) to transcend frustration.

--Barry Kort

------------------------------

Date: 11 May 88 16:32:18 GMT
From: ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu  (Rick Wojcik)
Subject: Re: Free Will

In article <28705@yale-celray.yale.UUCP> dvm@yale.UUCP (Drew Mcdermott) writes:
DM> Hence any system that is sophisticated enough to model situations that its
DM> own physical realization takes part in must flag the symbol describing that
DM> realization as a singularity with respect to causality.  There is simply
DM> no point in trying to think about that part of the universe using causal
DM> models...

I like your metaphor of 'a singularity with respect to causality'.  It
neatly captures the concept of the Agent case role in linguistic theory.
But it goes beyond modelling one's own physical realization.  Chuck
Fillmore used to teach (in the heyday of Case Grammar) that simple clause
structure only admits to two overtly marked causers--the Agent and the
Instrument.  This is a fairly universal fact about language (the only
exception being languages with 'double agent' verbs, where the verb stem
can have an affix denoting indirect causation).  Agents refer to verbal
arguments that are 'ultimate causers' and Instruments refer to those that
are 'immediate causers'.  He has always been quite explicit in his belief
that the human mind imposes a kind of filter on the way we can view chains
of causally related events--at least when we try to express them in
language.   One of the practical side effects of the belief in free will
is that it provides us with a means of chunking chains of causation up
into conceptual units.
--
Rick Wojcik   csnet:  rwojcik@boeing.com
              uucp:   uw-beaver!ssc-vax!bcsaic!rwojcik
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844

------------------------------

Date: Wed, 11 May 88 13:34:09 EDT
From: Thanasis Kehagias <ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU>
Subject: AI is about Death


here is my two bits on the free-will argument. it is highly speculative
not scientific. but with the debate being at its present level, i think
it will fit well with the rest.

1st METAQUESTION: why do people argue so vehemently about free will? not
only now in this list, but also through the centuries, free will has
been a touchy subject.

2nd METAQUESTION: why has AI been such a controversial subject, since it
became a possibly real possibility (around the mid-forties) ?

SUGGESTED ANSWER: if AI is possible, then it is possible to create
intelligence. all it takes is the know-how and the hardware. also, the
following inference is not farfetched: intelligence -> life. so if AI
is possible, it is possible to give life to a piece of hardware. no ghost
in the machine. no soul. call this the FRANKENSTEIN HYPOTHESIS, or, for
short, the FH (it's just a name, folks!).

this is where the action starts. there is a pessimistic interpretation
of FH and an optimistic interpretation of the FH. i suggest that the
AI opponents (broadly speaking, this would include people who claim
that the "hard" sciences and mathematics cannot capture the human
element) see the pessimistic interpretation and the AI proponents see
the optimistic interpretation. of course, in making these suggestions i
may be mistaken, especially since they are very broad generalizations.
i may also make lots of people on both sides angry.

what is the pessimistic interpretation? if there is no ghost in the
machine, when the machine breaks down, the intelligence disappears.
DEATH. if this holds for an AI, it is not unlikely that it holds
for Natural Intelligence as well. in short, it is a threatening
suggestion: when we die we die and nothing is left of us. no
afterlife. no reincarnation. nothing. just death. very frightening
for most of us. that is why we need to claim some special status
for humans, claim that we are different from a machine. here is
where free will comes in very handy.

now for the optimistic interpretation. the intelligence does not
really reside in the hardware. it could in fact be, to a very great
extent, independent of any type of specific hardware, and certainly
is independent from any specific instance of the hardware. the
intelligence really resides in the information that was used to "construct"
the hardware. this suggests the following program for AI:

(1) create AI.

(2) (before or after (1)?) map the correspondence between the hardware
    and the AI ( = reasoning, memories, emotions etc.).

(3) same as (2) but for a Natural Intelligence.  get the blueprints for
    (say) a human intelligence.

(4) implement the blueprints of (3) on some kind of hardware.

Step (4) gives us virtual immortality, since whenever our current
intelligence-carrying hardware (human body? computer? etc.) is about to
give up (because of a disease, old age ...) we can transfer the
intelligence to another piece of hardware. there are some more delicate
problems here, but you get the idea.

(PLEASE NOTE: i, personally, am not saying that this program can be
carried out. neither am i saying it cannot be carried out. i am not saying
that this is what AI researchers are striving for. i am simply
SPECULATING that it may be the motivation that resides in some dark,
unexplored Freudian corner of their mind.)

and so, gentle readers, this might be the explanation why people
get so heated up when they discuss free will and AI. just speculating,
of course.


                     Thanasis Kehagias

------------------------------

Date: 11 May 88 21:41:59 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov  (David
      Harvey)
Subject: Re: AIList V6 #86 - Philosophy

In article <3200016@uiucdcsm>, channic@uiucdcsm.cs.uiuc.edu writes:
>
> In his article Brian Yamuchi (yamauchi@speech2.cs.cmu.edu) writes:
> Do you believe your career was merely the result of some bizarre genetic
> combination or pure chance?
>
People like you need to watch "Being There" at least 10 times.  The fact
that I was born to a lower class family shouldn't have any effect on my
career choice vs. the ones made by young Ron Reagan, should it?  And I
can imagine that the poor starving Ethiopians have just as much a chance
of becoming a Computer Scientist as I do.  Chance has much more of an
impact than many want to admit in determining what we do.  I can also
imagine the great fame and glory that I will achieve for a great
scientific discovery since it will happen just because I will it!  Never
mind the fact that my IQ is not even close to Albert Einstein's!  Also,
genetic structure has a very significant impact on how we live our
lives.  Even a casual perusal of the studies of identical twins
separated at birth reveals an uncanny number of similarities, including
IQ levels, even when the social environments are
radically different.  You dismiss these factors as if they are
insignificant and trivial.
>
> The attack is over.  The following is a plea to all AI researchers.  Please
> do not try to persuade anyone, especially impressionable students, that s\he
> does not have free will.  Everyone has the ability to choose to bring peace
> to his or her own life and to the rest of society, and has the ability to
> MAKE A DIFFERENCE in the world.  Free will should not be compromised for the
> mere prospect of creating an intelligent machine.
>
I am a student (perhaps more depressible than impressionable) and haven't
noticed anyone persuading me in any way.  A lot have tried to convince
me that I have free will, but for some reason I always get lost in the
quagmire of linguistic semantics which makes the term almost impossible
to define clearly.  You must understand that I have read much of the
works of modern philosophers (Descartes, Spinoza, Leibniz, Berkeley,
Hume, and Kant among them) and the whole issue remains unresolved for
me.  I tend to lean toward the AI perspective, but....

        The only thing you can know for sure is

           That you can't know anything for sure!  (-:

                                        dharvey @ wsccs

        Nobody represents me, and I represent Nobody.

------------------------------

Date: 12 May 88 07:14:05 GMT
From: TAURUS.BITNET!shani@ucbvax.berkeley.edu
Subject: Re: Free Will & Self-Awareness

In article <415@aiva.ed.ac.uk>, jeff@aiva.BITNET writes:
>
> I do believe you.  But I'd still like to know how I can write moral
> programs in Basic, or even ones that have my value system.
>
Well, I said this only as a figure of speech, but still, if, for instance,
you write a video game or something like that, you may encounter some points
at which you have to decide what the program will do on a 'value' basis
(balancing difficulty, bonus points, things like that...). This is, more or
less, what I meant...

O.S.

------------------------------

Date: 12 May 88 14:31:41 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU 
      (Stephen Smoliar)
Subject: Re: Acting irrationally (was Re: Free Will & Self Awareness)

In article <31570@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>I appreciated Bruce Krulwich's analysis of the cognitive chain
>initiated by a yelling/hitting episode.  The fifth (and last)
>link in the chain of reasoning, is that the target of the verbal
>abuse draws a conclusion about the abuser:
>
>>    5.  Thus he learns that the other person feels strongly about
>>        something, which most of the time is the goal that the yeller
>>        or hitter had in the first place
>
>Wouldn't it have been easier if the yeller had simply disclosed his/her
>value system in the first place?  Or do I have an unrealistic expectation
>that the yeller is in fact able to articulate his/her value system to an
>inquiring mind?
>
As long as the agents we are talking about are "all-too-human" (as Nietzsche
put it), your expectation is quite unrealistic.  I think you are overlooking
to how great an extent we rely on implicit assumptions in any intercourse.  If
we had to articulate everything explicitly, we would probably never get around
to discussing what we really wanted to discuss.  The problem comes in deciding
WHAT needs to be explicitly articulated and what can be left in the "implicit
background."  That is a problem which we, as humans, seem to deal with rather
poorly, which is why there is so much yelling and hitting in the world.

------------------------------

Date: 12 May 88 15:18:37 GMT
From: esosun!jackson@seismo.css.gov  (Jerry Jackson)
Subject: Re: More Free Will


>If so, no one can be held responsible or need to feel responsible for his/her
>actions.  I cannot accept that.


>> }The attack is over.  The following is a plea to all AI researchers.  Please
>> }do not try to persuade anyone, especially impressionable students, that s\he
>> }does not have free will.  Everyone has the ability to choose to bring peace
>> }to his or her own life and to the rest of society, and has the ability to
>> }MAKE A DIFFERENCE in the world.  Free will should not be compromised for the
>> }mere prospect of creating an intelligent machine.
>>
>> Believe it or not, Minsky makes a similar plea in his discussion of free will
>> in _The Society of Mind_. He says that we may not be able to figure out where
>> free will comes from, but it is so deeply ingrained in us that we cannot deny
>> it or ignore it.
>
>Since it can't be denied, let's go one step further.  Free will has created
>civilization as we know it.  People, using their individual free wills,
>chose to make the world the way it is.  Minsky chose to write his book,


Is this intended to be a convincing argument?  The fact that you cannot
accept something is hardly a valid reason for me to reject it.  Saying
it's so doesn't make it so.  I agree that if free will is unreal, the
foundations of our society in terms of laws, praise, blame and responsibility
in general fall apart... This seems to me (initially anyway) to be a bad
thing. That's not, however, a good reason to ignore the problem.  I think
it is clear that within the standard causal model of the world that most
science-oriented folks have adopted, there is no room for free will.  Sure,
one can introduce "quantum uncertainty" into the picture, but I don't think
having a decision made by a sub-atomic event is really what people like to
think of as "free will"...

If events actually do have causes (What a novel idea!), then free will must
somehow come from outside the causal stream (from some *non-physical* realm?).
So, I contend, belief in free will constitutes belief in some sort of
non-physical entity interacting with the physical body.  I'm certainly not
about to say this is wrong; I just wish the free will proponents would admit
where they are coming from.

On a different note, consider what is meant by making a 'choice'..
I submit that when the options presented to a 'chooser' differ greatly in
value, there is really no choice to be made -- it makes itself.  However,
when the options are very close in value, the choice becomes difficult
(exactly when it makes the least difference).  In fact, the most difficult
choices occur when the options at hand are virtually equal in value. At that
point one might as well roll the dice anyway.

Finally, to be honest, I think the question of free will/determinism is
illusory.  It pre-supposes a rigid separation between the 'actor' and the
'outside world'.  Within a causal framework, the only 'entity' that can
possibly act on its own is the universe itself (of which we are not such
separate parts).  So, to really experience free will, try to identify with
a greater and greater piece of the whole pie, instead of some arbitrary
individual.

--Jerry Jackson

------------------------------

Date: 12 May 88 18:14:12 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

I was glad to see John Nagle bring up Asimov's 3 moral laws of robots.
Perhaps the time has come to refine these just a bit, with the intent
of shaping them into a more implementable rule-base.

I propose the following variation on Asimov:

      I.   A robot may not harm a human or other sentient being,
           or by inaction permit one to come to harm.

     II.   A robot may respond to requests from human beings,
           or other sentient beings, unless this conflicts with
           the First Law.

    III.   A robot may act to protect its own existence, unless this
           conflicts with the First Law.

     IV.   A robot may act to expand its powers of observation and
           cognition, and may enlarge its knowledge base without limit.

Can anyone propose a further refinement to the above?
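
For concreteness, here is one way laws I-IV might be set down as an ordered
permission check, in Python.  This is only a sketch: the Action fields stand
in for judgments a robot would somehow have to make, and the "by inaction"
clause of Law I (an obligation rather than a permission) is not captured at
all.

from dataclasses import dataclass

@dataclass
class Action:
    harms_sentient: bool          # would this action harm a sentient being?
    requested_by_sentient: bool   # is it a response to a request?
    protects_self: bool           # does it protect the robot's own existence?
    expands_knowledge: bool       # does it enlarge observation/cognition/knowledge?

def permitted(a: Action) -> bool:
    """Check a candidate action against laws I-IV, in priority order."""
    if a.harms_sentient:                 # Law I overrides everything else
        return False
    if a.requested_by_sentient:          # Law II: requests are honored, given Law I
        return True
    if a.protects_self:                  # Law III: self-protection, given Law I
        return True
    if a.expands_knowledge:              # Law IV: learning is unrestricted
        return True
    return False                         # nothing licenses the action

For example, permitted(Action(False, True, False, False)) comes out True,
while any action with harms_sentient set is refused no matter what the other
flags say.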

--Barry Kort

------------------------------

Date: 12 May 88 19:08:58 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com  (Hunter Barr)
Subject: Re: More Free Will

In article <3200017@uiucdcsm> channic@uiucdcsm.cs.uiuc.edu writes:
>"How do I know?" is as old a question as western philosophy if not all
>philosophy.  The reason the question is that old is because subjective
>personal experience is undeniable.  I know I have free will because that
>is my experience.

You have seen optical illusions where a straight line is made to
appear curved.  No-one doubts that your subjective experience tells
you that the line is curved.  In fact, because all humans are built
the same in certain respects (at least those who are able to see and
thus can experience optical illusions), all humans will have that same
subjective experience when they look at that straight line.  The same
is true of many other optical illusions, auditory illusions, and,
naturally, the illusion that we call free will.  No-one says we do not
experience it, only that we *mis-understand* what we experience.
After I have taken a straight-edge to determine that the line is
straight, I may still "feel" that I am looking at a curved line, but I
now believe the drawing's caption, which calls it a straight line.

No-one has yet defined free-will so that a straight-edge can be found
for it.  But many human activities have been shown to be less
voluntary than they appear subjectively.  For example:

As a child, I thought that I chose what I wanted to eat, and I felt I
had completely free will in this matter.  I noticed that I always
chose the M&Ms over the spinach (no matter what Popeye said), but I
still felt that my choice was completely free.  Now I know a little
about the human digestive tract, body-chemistry, and psychology; I can
see that these played a big part in my "choices".  Such a big part
that I now think they completely determined my choices.  So it seems
useless to talk about free will in this case.  Any child will make
similar choices, most of them exactly the same, because that is how
children are built.  Only external factors will change the child's
choices, such as reward/punishment for choosing one food over another.
(I consider learning about the real world to be a form of external
reward/punishment.  For example, I learned that a steady diet of M&Ms
could lead to a scary visit to the Dentist.)

Choices we make as adults seem much the same, probably depending on
factors we do not yet comprehend, just as we did not comprehend the
factors which led us to choose M&Ms as children.  As I said, no-one
has yet provided a reliable straight-edge for us to use on the free
will experience (a definition of free will would help).  But each
experience with a choice which we *do* understand (like M&Ms over
spinach) provides one little piece of that straight-edge.  All the
little pieces in my experience look like they will fit a straight-edge
called "determinism."  When we learn more about the factors which
influence our choices, we will develop more or less confidence in this
hypothetical straight-edge.  But until we understand most of these
factors, we cannot claim to have this ill-defined "free will",
whatever our subjective experience.

This puts me in agreement with someone you quoted who said:


>> No one denies that we humans experience free will.  But that
>> experience says nothing about its nature; at least, nothing ruling
>> out determinism and chance.

I think this gives me an answer to your question:

>From where does this tendency to IGNORE subjective experience when discussing
>SUBJECTIVE PHENOMENA originate?  My own (and your own) experience of free will
>tells me (and you) a great deal about its nature.  In fact, this experience is
>the most reliable source of information regarding free will.  And the
>experience is neither deterministic or random.  No matter what decision I make,
>whether choosing a political candidate, a career, or a flavor of ice cream,
>I experience neither that my choice is determined, nor random, NOR some
>nebulous combination of the two.

I answer that this discussion itself is proof that we do not ignore
subjective experience.  It does indeed tell us a great deal, and
provides essential insight, but it is by no stretch of the imagination
our most reliable source of information.  Try to adapt your reasoning
to the straight line illusion.  Following your reasoning, I should not
bother to rummage around in my desk for a straight-edge; nor should I
doubt in any way my subjective experience that it is curved.  Why, you
don't want me to think about things at all!  You just want me to sit
there and "experience" that the line is curved, as if that experience
matched reality.  What if my ability to better understand reality
makes me able to help people?  What if my ability to distinguish germ
theory from superstition enables me to wipe out smallpox?  What if my
ability to distinguish straight lines from curved ones enables me to
build safer cars?  Then maybe my ability to distinguish determined
behaviour from this nebulous "free will" will enable me to wipe out a
mental disease.  Whatever turns out to be the root of our "free will"
experience, we will not learn to understand it merely by sitting here
and experiencing it subjectively.


>Free will explained as an additive blend of determinism and chance
>directly attacks the concept of individual responsibility.  Can any
>machine, based on this theory of free will, possibly benefit society
>enough to counteract the detrimental effect of a philosophy which
>implies that we aren't accountable for our choices?

We say, "A tornado was responsible for destroying my house," and,
"John's dog is responisble for this scar on my leg."  If we could stop
the tornado from touching down elsewhere, we would.  Under most
circumstances we actually do prevent John's dog from biting anyone
else, either by incarcerating it or killing it.  The same will always
be true of people.  Where we can stop them from repeating their
offenses, we will.  What about one-time offenders?  Say someone kills
her husband and we don't think she'll do it again.  Punish her on the
basis of how likely it is that she will commit the same crime again?
No.  This would produce the damage you fear.  For the prevention of
crime it is important that punishment be applied predictably and
evenly, with as few exceptions as possible.  This means we should keep
the current idea of responsibility around in Law to do the job it has
always done-- support prevention by making sure a crime is punished
predictably.  This may mean a stricter interpretation of "Ignorance of
the Law is no excuse."  Yours is an excellent argument against
treating insane offenders differently from sane ones.  It is too easy
to say that anyone who commits a crime has demonstrated mental illness
by committing the crime.  But if in fact determinism turns out to be
true, then we will want to convert

------------------------------

End of AIList Digest
********************

∂24-May-88  2359	@MC.LCS.MIT.EDU:AIList-REQUEST@AI.AI.MIT.EDU 	AIList Digest   V7 #6   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 May 88  23:59:04 PDT
Received: from FORD.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 24 MAY 88  14:57:19 EDT
Date: Tuesday, 24 May 1988, 14:52-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Sender: nick@MIT-AI
Reply-to: AIList@AI.AI.MIT.EDU
Subject: AIList Digest   V7 #6
To: ailist-outgoing@mc


AIList Digest           Wednesday, 25 May 1988      Volume 7 : Issue 6

Today's Topics:

  Philosophy - Even More Free Will
    (Last of three digests on this topic)

----------------------------------------------------------------------

Date: 13 May 88 21:57:50 GMT
From: dalcs!iisat!paulg@uunet.uu.net  (Paul Gauthier)
Subject: Re: More Free Will


 I'm sorry, but there is no free will. Every one of us is bound by the
laws of physics. No one can lift a 2000 tonne block of concrete with his
bare hands. No one can do the impossible, and in this sense none of us have
free will.

 I am partially wrong there, as long as you don't WANT to do the impossible
you can have a sort of free will. But as soon as you feel that you want to
do something that cannot be done then your free will is gone.

 Let me define my idea of free will: Free will is being able to take any
course of action which you want to take. So if you never want to take a
course of action which is forbidden to you, your free will is retained.

 Free will is completely subjective. There is no 'absolute free will.' At
least that is how I look at free will. Since it is subjective to the person
whose free will is in question it follows that as long as this person THINKS
he is experiencing free will then he is. If he doesn't know that his decisions
are being made for him, and he THINKS they are his own free choices then he
is NOT being forced into a course of action he doesn't desire so he has free
will.

 Anyway, I suppose there'll be a pile of rebuttals against this (gosh, I
hope so -- I love debates!).


--
 ==============================================================================
  ===         Paul Gauthier at International Information Services          ===
   ===              {uunet, utai, watmath}!dalcs!iisat!paulg              ===
    ========================================================================

------------------------------

Date: 13 May 88 21:58:15 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Raising Consciousness

I was heartened by Drew McDermot's well-written summary of the Free
Will discussion.

I have not yet been dissuaded from the notion that Free Will is
an emergent property of a decision system with three agents.

The first agent generates a candidate list of possible courses
of action open for consideration.  The second agent evaluates
the likely outcome of pursuing each possible course of action,
and estimates it's utility according to it's value system.  The
third agent provides a coin-toss to resolve ties.

Feedback from the real world enables the system to improve its
powers of prediction and to edit its value system.

If the above model is at all on target, the decision system would
seem to have free will.  And it would not be unreasonable to hold
it accountable for its actions.
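
To make the three-agent picture concrete, here is a minimal sketch in
Python.  The particular representation of options and utilities is purely
illustrative; nothing here is a claim about how minds actually implement
the three agents.

import random

def generate_candidates(situation):
    # Agent 1: propose the courses of action open for consideration.
    return situation["options"]

def evaluate(option, value_system):
    # Agent 2: estimate the utility of an option's likely outcome
    # according to the current value system (here, a simple lookup).
    return value_system.get(option, 0.0)

def decide(situation, value_system):
    # Agent 3: take a best-valued option, tossing a coin to resolve ties.
    candidates = generate_candidates(situation)
    scores = {c: evaluate(c, value_system) for c in candidates}
    best = max(scores.values())
    tied = [c for c in candidates if scores[c] == best]
    return random.choice(tied)            # the "coin toss"

def learn(value_system, option, observed_utility, rate=0.5):
    # Feedback from the real world improves prediction and edits the values.
    old = value_system.get(option, 0.0)
    value_system[option] = old + rate * (observed_utility - old)

# e.g.:
#   values = {"yell": -1.0, "explain": 0.7, "walk away": 0.7}
#   choice = decide({"options": list(values)}, values)   # coin toss between the tie
#   learn(values, choice, observed_utility=0.2)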

On another note, I think it was Professor Minsky who wondered
how we stop deciding an issue.  My own feeling is that we terminate
the decision-making process when a more urgent or interesting
issue pops up.  The main thing is that our decision making machinery
chews on whatever dilemma captures its attention.

--Barry Kort

------------------------------

Date: 14 May 88 17:16:38 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Re: More Free Will

In article <2@iisat.UUCP> paulg@iisat.UUCP (Paul Gauthier) writes:
> I am partially wrong there, as long as you don't WANT to do the impossible
>you can have a sort of free will. But as soon as you feel that you want to
>do something that cannot be done then your free will is gone.
>
> [ other good comments deleted ]

I'm totally perplexed why the concept of *RELATIVE FREEDOM* is so
difficult for people to accept.

Can someone *please* rebut the following:

1) Absolute freedom is theoretically impossible.  Absolute freedom is
perhaps best characterized as a uniform distribution on the real line.
This distribution is not well formed.  The concept of *absolute
randomness* is not well defined.  For example, it is *determined* that
the six sided die cannot roll a seven.

2) Absolute determinism, while theoretically possible, is both
physically impossible and theoretically unobservable.  Computers are among
the most determined systems we have, but a variety of low-level
errors, up to and including quantum effects, pollute their pure
determinism.  Further, any sufficiently large determined system will
yield to chaotic processes, so that its determinism is itself
undeterminable.

3) Therefore all real systems are *RELATIVELY FREE*, and *RELATIVELY
DETERMINED*, some more, some less, depending on their nature, and on how
they are observed and modeled.  Certainly all organisms, including
people, fall into this range.

4) Since when we qualify an adjective, the adjective still holds
(something a little hot is still hot), therefore, it *IS TRUE* that
*PEOPLE ARE FREE* and it *IS TRUE* that *PEOPLE ARE DETERMINED*.  No
problem.

5) As biological systems evolve, their freedom increases, so that, e.g.
people are more free than cats or snails.  When people inflate this
relatively greater freedom into absolute freedom, they are
committing arrogance and folly.  When people project ideological
(naive?) ideas about causality, and conclude that we are completely
determined, they are also committing folly.

Any takers?
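
A quick check of the non-existence claim in point (1), for anyone who wants
it spelled out: a uniform density over the whole real line cannot be
normalized.  If $f(x) = c$ for every real $x$, then

    $\int_{-\infty}^{\infty} c \, dx = \infty$  (if $c > 0$)  or  $0$  (if $c = 0$),

so the total probability is never 1 and no such distribution exists.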

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 14 May 88 22:46:42 GMT
From: sunybcs!stewart@boulder.colorado.edu  (Norman R. Stewart)
Subject: Re: Free Will


paulg@iisat.UUCP (Paul Gauthier) writes:
> I'm sorry, but there is no free will. Every one of us is bound by the
>laws of physics. No one can lift a 2000 tonne block of concrete with his
>bare hands. No one can do the impossible, and in this sense none of us have
>free will.

     I don't believe we're concerned with what we are capable of doing,
but rather our capacity to desire to do it.  Free will is a mental, not
a physical phenomenon.  What we're concerned with is whether the brain (nervous
system, organism, aggregation of organisms and objects) is just so many
atoms (sub-atomic particles?, sub-sub-atomic particles) bouncing around
according to the laws of physics (note: in a closed system), and behavior
simply the unalterable manifestation of the movement of these particles.





Norman R. Stewart Jr.             *
C.S. Grad - SUNYAB                *  If you want peace, demand justice.
internet: stewart@cs.buffalo.edu  *                  (of unknown origin)
bitnet:   stewart@sunybcs.bitnet  *

------------------------------

Date: 15 May 88 19:03:53 GMT
From: COYOTE.STANFORD.EDU!eyal@ucbvax.berkeley.edu  (Eyal Mozes)
Subject: Re: Free Will & Self-Awareness

In article <434@aiva.ed.ac.uk> jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
>> Eyal Mozes writes:
>>all the evidence I'm familiar with points to the fact that it's
>>always possible for a human being to control his thoughts by a
>>conscious effort.
>
>It is not always possible.  Think, if no simpler example will do, of
>obsessives.  They have thoughts that persist in turning up despite
>efforts to eliminate them.

First of all, even an obsessive can, at any given moment, turn his
thoughts away from the obsession by a conscious effort. The problem with
obsession is that this conscious effort has to be much greater than
normal, and that, whenever the obsessive is not consciously
trying to avoid those thoughts, they do persist in turning up.

Second, an obsession is caused by anxiety and self-doubt, which are the
result of thinking the obsessive has done, or failed to do, in the
past. And, by deliberately training himself, over a period of time, in
more rational thinking, sometimes with appropriate professional help,
the obsessive can eliminate the excessive anxiety and self-doubt and
thus cure the obsession. So, indirectly, even the obsession itself is
under the person's volitional control.

>Or, consider when you start thinking about something.  An idea just
>occurs and you are thinking it: you might decide to think about
>something, but you could not have decided to decide, decided to
>decide to decide, etc. so at some point there was no conscious
>decision.

Of course, the point at which you became conscious (e.g. woke up from
sleep) was not a conscious decision. But as long as you are conscious,
it is your choice whether to let your thoughts wander by chance
association or to deliberately, purposefully control what you're
thinking about. And whenever you stop your thoughts from wandering and
start thinking on a subject of your choice, that action is by conscious
decision.

This is why I consider Ayn Rand's theory of free will to be such an
important achievement - because it is the only free-will theory
directly confirmed by what anyone can observe in his own thoughts.

        Eyal Mozes

        BITNET: eyal%coyote@stanford
        ARPA:   eyal@coyote.stanford.edu

------------------------------

Date: 16 May 88 08:11:37 GMT
From: TAURUS.BITNET!shani@ucbvax.berkeley.edu
Subject: Re: More Free Will

In article <2@iisat.UUCP>, paulg@iisat.BITNET writes:
>  Let me define my idea of free will: Free will is being able to take any
> course of action which you want to take. So if you never want to take a
> course of action which is forbidden to you, your free will is retained.
>

Hmm... quite correct. But did it ever occur to you that we WANT to be limited
by the laws of physics, because this is the only way to form a realm?

Why do you think people are so willing to pay lots of $ to TSR, just to play
with other limitations?...

O.S.

------------------------------

Date: 16 May 88 15:00:25 GMT
From: sunybcs!sher@boulder.colorado.edu  (David Sher)
Subject: Re: Raising Consciousness

There is perhaps a minor bug in Drew McDermott's (who teaches a great
grad-level AI class) analysis of free will.  If I understand it
correctly, it runs like this:
To plan, one has a world model including future events.
Since you are an element of the world, you must be in the model.
Since the model is a model of future events, your future actions
are in the model.
This renders planning unnecessary.
Thus your own actions must be excised from the model for planning to
avoid this "singularity."

Taken naively, this analysis would prohibit multilevel analyses such
as are common in game theory.  A chess player could not say things like
if he moves a6 then I will move Nc4 or Bd5 which will lead ....
Thus it is clear that to make complex plans we actually need to model
ourselves (actually it is not clear but I think it can be made clear
with sufficient thought).

However, we can still make the argument that Drew was making; it's just
more subtle than the naive analysis indicates.  The way the argument
runs is this:
Our world model is by its very nature a simplification of the real
world (the real world doesn't fit in our heads).  Thus our world model
makes imperfect predictions about the future and about consequence.
Our self model inside our world model shares in this imperfection.
Thus our self model makes inaccurate predictions about our reactions
to events.  We perceive ourselves as having free will when our self
model makes a wrong prediction.
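
A toy version of this, in Python, purely as a sketch: the "self model" below
is just a cruder evaluation function than the one the agent actually uses,
so its prediction of the agent's own move is sometimes wrong, and that
mismatch is what gets labelled as the feeling of freedom.  All names and
numbers are illustrative.

def real_value(option):
    # the full evaluation actually used when choosing (slower, more detailed)
    material, positional = option
    return material + 0.5 * positional

def self_model_value(option):
    # the agent's simplified model of its own evaluation:
    # it ignores positional considerations entirely
    material, _ = option
    return material

def choose(options, value):
    return max(options, key=value)

def act_and_introspect(options):
    predicted = choose(options, self_model_value)   # what I expect myself to do
    actual = choose(options, real_value)            # what I in fact do
    feels_free = (predicted != actual)              # surprise at my own move
    return actual, feels_free

# With options given as (material gain, positional gain):
#   act_and_introspect([(1, 0), (0, 3)])  returns ((0, 3), True)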

A good example of this is the way I react during a chess game.  I
generally develop a plan of 2-5 moves in advance.  However sometimes
when I make a move and my opponent responds as expected I notice a
pattern that previously eluded me.  This pattern allows me to make a
move that was not in my plans at all but would lead to greater gains
than I had planned.  For example noticing a knight fork.  When this
happens I have an intense feeling of free will.

As another example, I had planned on writing a short 5-line note
describing this position.  In fact this article is running several
pages.  ...

-David Sher
ARPA: sher@cs.buffalo.edu       BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

------------------------------

Date: 16 May 88 15:55:12 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Free Will & Self-Awareness

In article <8805092354.AA05852@ucbvax.Berkeley.EDU> Eyal Mozes writes:
1 all the evidence I'm familiar with points to the fact that it's
1 always possible for a human being to control his thoughts by a
1 conscious effort.

In article <434@aiva.ed.ac.uk> jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
2 It is not always possible.  Think, if no simpler example will do, of
2 obsessives.  They have thoughts that persist in turning up despite
2 efforts to eliminate them.

In article <8805151907.AA01702@ucbvax.Berkeley.EDU> Eyal Mozes writes:
>First of all, even an obsessive can, at any given moment, turn his
>thoughts away from the obsession by a conscious effort. The problem of
>obsession is in that this conscious effort has to be much greater than
>normal, and also in that, whenever the obsessive is not consciously
>trying to avoid those thoughts, they do persist in turning up.

That an obsessive has some control over his thoughts does not mean
he can always control his thoughts.  If all you mean is that one can
always at least temporarily change what one is thinking about and
can eventually eliminate obsessive thoughts or the tune that's running
through one's head, no one would be likely to disagree with you,
except where you seem to feel that obsessions are just the result
of insufficiently rational thinking in the past.

>So, indirectly, even the obsession itself is  under the person's
>volitional control.

I would be interested in knowing what you think *isn't* under a
person's volitional control.  One would normally think that having
a sore throat is not under conscious control even though one can
choose to do something about it or even to try to prevent it.

2 Or, consider when you start thinking about something.  An idea just
2 occurs and you are thinking it: you might decide to think about
2 something, but you could not have decided to decide, decided to
2 decide to decide, etc. so at some point there was no conscious
2 decision.

>Of course, the point at which you became conscious (e.g. woke up from
>sleep) was not a conscious decision. But as long as you are conscious,
>it is your choice whether to let your thoughts wander by chance
>association or to deliberately, purposefully control what you're
>thinking about. And whenever you stop your thoughts from wandering and
>start thinking on a subject of your choice, that action is by conscious
>decision.

But where does the "subject of your own choice" come from?  I wasn't
thinking of letting one's thoughts wander, although what I said might
be interpreted that way.  When you decide what to think about, did
you decide to decide to think about *that thing*, and if so how did
you decide to decide to decide, and so on?

Or suppose we start with a decision, however it occurred.  I decide to
read your message.  As I do so, it occurs to me, at various points,
that I disagree and want to say something in reply.  Note that these
"occurrences" are fairly automatic.  Conscious thought is involved,
but the exact form of my reply is a combination of conscious revision
and sentences, phrases, etc. that are generated by some other part of
my mind.  I think "he thinks I'm just talking about letting the mind
wander and thinking about whatever comes up."  That thought "just
occurs".  I don't decide to think exactly that thing.  But my
consciousness has that thought and can work with it.  It helps provide
a focus.  I next try to find a reply and begin by reading the passage
again.  I notice the phrase "subject of your own choice" and think
then write "But where does the...".

Of course, I might do other things.  I might think more explicitly
about what I'm doing.  I might even decide to think explicitly rather
than just do so.  But I cannot consciously decide every detail of
every thought.  There are always some things that are provided by
other parts of my mind.

Indeed, I am fortunate that my thoughts continue along the lines I
have chosen rather than branch off on seemingly random tangents.  But
the thoughts of some people, schizophrenics say, do branch off.  It is
clear in many cases that insufficient rationality did not cause their
problem: it is one of the consequences, not one of the causes.

As an example of "other parts of the mind", consider memory.  Suppose
I decide to remember the details of a particular event.  I might not
be able to, but if I can I do not decide what these memories will be:
they are given to me by some non-conscious part of my mind.

>This is why I consider Ayn Rand's theory of free will to be such an
>important achievement - because it is the only free-will theory
>directly confirmed by what anyone can observe in his own thoughts.

As far as you have explained so far, Rand's theory is little more
than simply saying that free will = the ability to focus consciousness,
which we can all observe.  Since we can all observe this without the
aid of Rand's theory, all Rand seems to be saying is "that's all there
is to it".

-- Jeff

------------------------------

Date: 16 May 88 12:07:18
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU

Date: 16 May 1988, 12:02:14 HOE

From: ALFONSEC at EMDCCI11 (EARN, Manuel Alfonseca)
To:   AILIST@AI.AI.MIT.EDU at EDU
Ref:  Free will et al.

Previous appends have stated that all values are learned.
I believe that some are innate. For instance, the
crudest form of the justice value
  "Why should I receive less than (s)he?"
seems to exist in babies as soon as they can perceive
and before anybody has tried to teach them.

Any comments? How does this affect free will in AI?

Regards,

Manuel Alfonseca, ALFONSEC at EMDCCI11
Usual disclaimer: My opinions are my own.

------------------------------

Date: 16 May 88 16:15:32 GMT
From: wlieberm@teknowledge-vaxc.arpa  (William Lieberman)
Subject: Free Will-Randomness and Question-Structure

Re: Free Will and Determinism.


This most interesting kind of discussion reminds me of the old question,

   " What happens when the irresistable cannonball hits the irremovable post? "

The answer lies in the question, not in other parts of the outside world.

If you remember your Immanuel Kant and his distinction between analytic and
synthetic statements, the cannonball question would be an analytic statement,
of the form, " The red barn is red." - A totally useless statement, because
nothing new about the outside world is implied in the statement. Similarly,
I would say the cannonball question, since it is internally contradictory,
wastes the questioner's time if he tries to hook it to the outside world.

A concept like 'random' similarly may be thought of in terms simply of
worldly unpredictability TO THE QUESTIONER.  If he comes from a society where
they get differing results every time they add two oranges to two oranges,
TO THEM addition of real numbers is random. (Also, wouldn't any irrational
number, such as pi, be an example of a decimal expansion that is
non-recurring but certainly not random?)

The concept of inherent randomness implies there is no conceivable system
that will ever or can ever be found that could describe what will happen in
a given system with a predefined envelope of precision. Is it possible to
prove such a conjecture? It's almost like Fermat's Last Theorem.

To me, the concept of randomness has to do with the subject's ability to
describe forthcoming events, not with the forthcoming events themselves.
That is, randomness only exists as long as there are beings around who
perceive their imprecise or limited predictions as incomplete. The events
don't care, and occur regardless. It's important not to forget that the
subjects themselves (us, e.g.) are part of the world, too.

My main point here is that questions which seem impossible to
resolve often need to have their structure looked at, rather
than the rest of the outside world searched for empirical data to support
or refute them.

Bill Lieberman

------------------------------

Date: 19 May 88 09:05:38 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Acting irrationally (was Re: Free Will & Self Awareness)

In article <180@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>Here's a simple rule: explicitly articulate everything, at least once.
Sorry, I didn't quite get that :-)
>
>A truly reasoning being doesn't hesitate to ask, either, if something
>hasn't been explicitly articulated, and it is necessary for continuing
>discussion.
Read Erving Goffman's "The Presentation of Self in Everyday Life" and
you'll find that we do not correspond to your "truly reasoning being".
We let all sorts of ambiguities and incompleteness drop; indeed it's
rude not to, and not doing so displays a lack of empathy, insight, intuition
and considerateness.  Sometimes you should ask, but certainly not
always, unless you're a natural language front-end, in which case I insist :-)

This idealisation is riddled with assumptions about meaning which I
leave your AI programs to divine :-)  Needless to say, this approach
to meaning results in infinite regresses and closures imposed by
contingency rather than a mathematical closure
                        information(n+1) = information(n)

where n is the number of clarifying exchanges between the tedious pedant
(TP) and the unwilling lexicographer (UL), i.e. there exists an n such that
UL abuses TP, TP senses annoyance in UL, TP gives up, UL gives up, TP
agrees to leave it until tomorrow, or  ...
!
TP and UL have a wee spot of social negotiation and agree on the meaning
(i.e. UL hits TP really hard)
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

                The proper object of the study of Mankind is Man, not machines

------------------------------

Date: 19 May 88 09:14:59 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: what is this thing called `free will'

In article <38@aipna.ed.ac.uk> sean@uk.ac.ed.aipna.UUCP (Sean Matthews) writes:
>2. perfect introspection is a logical impossibility[2]
That doesn't make it impossible, just beyond comprehension through logic.
Now, if you dive into Philosophy of Logic, you'll find that many other
far more mundane phenomena aren't capturable within FOPC, hence all
this work on non-standard logics.  Slow progress here though.

Does anyone seriously hold with certainty that logical impossibility
is equivalent to commonsense notions of falsehood and impossibility?
Don't waste time with similarities, such as Kantian analytic statements
like "All bachelors are unmarried", as these rest completely on language
and can thus often be translated into FOPC to show that bachelor(X) AND
married(X) is logically impossible, untrue, really impossible, ...

Any physicists around here use logic?
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

                The proper object of the study of Mankind is Man, not machines

------------------------------

Date: 20 May 88 09:01:43 GMT
From: News <mcvax!reading.ac.uk!News.System@uunet.UU.NET>
Subject: Submission for comp-ai-digest

Path: reading!onion!henry!jadwa
From: jadwa@henry.cs.reading.ac.uk (James Anderson)
Newsgroups: comp.ai.digest,comp.ai
Subject: Free-Will and Purpose
Message-ID: <814@onion.cs.reading.ac.uk>
Date: 20 May 88 09:01:42 GMT
Sender: news@onion.cs.reading.ac.uk
Reply-To: jadwa@henry.cs.reading.ac.uk (James Anderson)
Distribution: comp
Organization: Comp. Sci. Dept., Reading Univ., UK.
Lines: 29

Free-will does not exclude purposive behaviour.

Suppose that the world is entirely deterministic, but that
intelligent, purposive creatures evolve. Determinism is no
hindrance to evolution: variety can be introduced systematically
and systematic, but perhaps very complex, selection will do the
rest. The question of free-will does not arise here.

In human societies, intelligence and purposive behaviour are good
survival traits and have allowed us to secure our position in the
natural world. (-: If you don't feel secure, try harder! :-)

                              * * *

I realise that this is a very condensed argument, but I think you
will understand my point. For those of you who like reading long
arguments you could try:

    "Purposive Explanation in Psychology", Margaret A. Boden, The
     Harvester Press, 1978, Hassocks, Sussex, England.

    ISBN 0-85527-711-4

It presents a quite different rationale for accepting the idea of
purposive behaviour in a deterministic world.

James

(JANET) James.Anderson@reading.ac.uk

------------------------------

Date: 20 May 88 16:48:00 GMT
From: killer!tness7!ninja!sys1!hal6000!trsvax!bryan@ames.arpa
Subject: Re: Acting irrationally (was Re: Fr


/* ---------- "Re: Acting irrationally (was Re: Fr" ---------- */

In article <5499@venera.isi.edu>, smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>> I think you are overlooking how great an extent we rely on implict
>> assumptions in any intercourse.  If we had to articulate everything
>> explicitly, we would probably never get around to discussing what we
>> really wanted to discuss.

/* Written  9:44 am  May 17, 1988 by proxftl.UUCP!tomh (Tom Holroyd)

> Here's a simple rule: explicitly articulate everything, at least once.
                                              ↑↑↑↑↑↑↑↑↑↑!!

That's the rub, "everything" includes every association you've ever had with
hearing or using every word, including all the events you've forgotten about
but which influence the "meaning" any particular word has for you, especially
the early ones while you were acquiring a vocabulary.

You seem to have some rather naive ideals about the meanings of words.

> A truly reasoning being doesn't hesitate to ask, either, if something
> hasn't been explicitly articulated, and it is necessary for continuing
> discussion.

A truly reasoning being often thinks that things WERE explicitly articulated
to a sufficient degree, given that both parties are using the same language.

------------------

Convictions are more dangerous enemies of truth than lies.
                                                - Nietzsche

...ihnp4!convex!ctvax!trsvax!bryan      (Bryan Helm)

------------------------------

Date: 21 May 88 01:03:13 GMT
From: mcvax!ukc!its63b!aipna!sean@uunet.uu.net  (Sean Matthews)
Subject: Re: what is this thing called `free will'

in article <1193@crete.cs.glasgow.ac.uk>
gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes
>In article <38@aipna.ed.ac.uk> {I} write
>>2. perfect introspection is a logical impossibility[2]
>That doesn't make it impossible, just beyond comprehension through logic.
>Now, if you dive into Philosophy of Logic, you'll find that many other
>far more mundane phenomena aren't capturable within FOPC, hence all
>this work on non-standard logics.  Slow progress here though.

Mr Cockton is confusing one fairly restricted logic with the whole
plethora I was referring to.  There are logics specifically designed for
dealing with problems of self reference (cf Craig Smory\'nski in
Handbook of philosophical logic Vol2 `modal logic and self-reference')
and they place very clear restrictions on what is possible in terms of
self-referential systems and what is not; there has not been `Slow
progress here'.

>Does anyone seriously hold with certainty that logical impossibility
>is equivalent to commonsense notions of falsehood and impossibility?

I freely admit that I don't understand what he means here, unless he
is making some sort of appeal to metaphysical concepts of truth apart
from demonstrability and divorced from the concept of even analytic
falsehood in any way.  There are Western philosophers (even good ones)
who invoke metaphysics to prove such things as `God exists' (I feel
that God exists, therefore God exists---Rousseau), or even `God does
not exist' (I feel that God does not exist, therefore God does not
exist---Nietzsche).

Certainly facts may be `true' irrespective of whether we can `prove'
them (the classical example is `this statement is not provable')
though this again depends on what your idea of `truth' is. And there
are different types of `truth' as he points out; any synthetic `truth'
is always tentative: a black sheep can be discovered at any time,
disposing of the previous ``truth'' (two sets of quotation marks) that
all sheep were a sort of muddy light grey, whereas analytic `truth' is
`true' for all time (cf Euclid's `Elements').  But introspective `truth's
are analytic, being purely mental; we have a finite base of knowledge
(what we know about ourselves), and a set of rules that we apply to
get new knowledge about the system; if the rules or the knowledge
change then the deductions change, but the change is like changing
Euclid's fifth postulate; the conclusions differ but the conclusions
from the original system, though they may contradict the new
conclusions, are still true, since they are prefixed with different
axioms, and any system that posits perfect introspection is going to
contain contradictions (cf Donald Perlis: `Meta in logic' in
`Meta-level reasoning and reflection', North Holland for a quick
survey).

What happens in formal logic is that we take a subset of possible
concepts (modus ponens, substitution, a few tautologies, some modal
operators perhaps) and see what happens; if we can generate a
contradiction in this (tiny) subset of accepted `truth's, then we can
generate a contradiction in the set of all accepted `truth's using
rational arguments; this should lead us to reevaluate what we hold as
axioms.  These arguments could be carried out in natural language; the
symbols, which perhaps seem to divorce the whole enterprise from
reality, are not necessary, they only make things easier; after all
Aristotle studied logic fairly successfully without them.

Se\'an Matthews
Dept. of Artificial Intelligence JANET:sean%sin@uk.ac.ed.aiva
University of Edinburgh          ARPA: sean%uk.ac.ed.aiva@nss.cs.ucl.ac.uk
80 South Bridge                  UUCP: ...!mcvax!ukc!aiva!sean
Edinburgh, EH1 1HN, Scotland

PS I apologise beforehand for any little liberties I may have taken with
the finer points of particular philosophies mentioned above.

------------------------------

Date: 21 May 88 06:33:19 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: McDermott's analysis of "free will"

I have been waiting for someone else to say this better than I can,
and unfortunately I've waited so long that McDermott's article has
expired here, so I can't quote him.

Summary: I think McDermott's analysis is seriously flawed.
Caveat:  I have probably misunderstood him.

I understand his argument to be (roughly)
    an intelligent planner (which attempts to predict the actions of
    other agents by simulating them using a "mental model") cannot
    treat itself that way, otherwise it would run into a loop, so it
    must flag itself as an exception to its normal models of
    causality, and thus perceives itself as having "free will".
[I'm *sure* I'm confusing this with something Minsky said years ago.
 Please forgive me.]

1.  From the fact that a method would get an intelligent planner into
    serious trouble, we cannot conclude that people don't work that way.
    To start with, people have been known to commit suicide, which is
    disastrous for their future planning abilities.  More seriously,
    people live in a physical world, and hunger, a swift kick in the
    pants, strange noises in the undergrowth &c, act not unlike the
    Interrupt key.  People could well act in ways that would have them
    falling into infinite loops as long as the environment provided
    enough higher-priority events to catch their attention.

2.  It is possible for a finite computer program (with a sufficiently
    large, but at all times finite, store) to act *as if* it were a
    one-way infinite tower of interpreters.  Brian Cantwell Smith
    showed this with his design for 3-Lisp.  Jim des Rivieres, for one,
    has implemented 3-Lisp.  So the mere possibility of an agent
    having to appear to simulate itself simulating itself ... doesn't
    show that unbounded resources would be required:  we need to know
    more about the nature of the model and the simulation process to
    show that.

3.  In any case, we only get the infinite regress if the planner
    simulates itself *exactly*.  There is a Computer Science topic
    called "abstract interpretation", where you model the behaviour
    of a computer program by running it in an approximate model.
    Any abstract interpreter worth its salt can interpret itself
    interpreting itself.  The answers won't be precise, but they are
    often useful.  (A toy illustration follows after point 4.)

4.  At least one human being does not possess sufficient knowledge of
    the workings of his mind to be able to simulate himself anything BUT
    vaguely.  I refer, of course, to myself.  [Well, I _think_ I'm
    human.]  If I try to predict my own actions in great detail, I run
    into the problem that I don't know enough about myself to do it,
    and this doesn't feel any different from not knowing enough about
    President Reagan to predict his actions, or not knowing enough
    about the workings of a car.  I do not experience myself as a
    causal singularity, and the actions I want to claim as free are the
    actions which are in accord with my character, so in some sense are
    at least statistically predictable.  Some other explanation must be
    found for _my_ belief that I have "free will".
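
Here is the toy illustration promised in point 3: a sign-domain abstract
interpreter for a tiny expression language, written in Python.  It only
shows the flavour of abstract interpretation; it is nothing like 3-Lisp,
and it is not self-applicable.

def sign_of(n):
    # concrete number -> abstract value
    return "pos" if n > 0 else "neg" if n < 0 else "zero"

def abs_add(a, b):
    if "zero" in (a, b):
        return b if a == "zero" else a
    return a if a == b else "unknown"     # pos + neg could be anything

def abs_mul(a, b):
    if "zero" in (a, b):
        return "zero"
    if "unknown" in (a, b):
        return "unknown"
    return "pos" if a == b else "neg"

def abstract_eval(expr, env):
    """expr is nested tuples: ("num", 3), ("var", "x"), ("+", e1, e2), ("*", e1, e2).
    env maps variable names to abstract signs."""
    tag = expr[0]
    if tag == "num":
        return sign_of(expr[1])
    if tag == "var":
        return env[expr[1]]
    left = abstract_eval(expr[1], env)
    right = abstract_eval(expr[2], env)
    return abs_add(left, right) if tag == "+" else abs_mul(left, right)

# The sign of 2*x is known as soon as the sign of x is:
#   abstract_eval(("*", ("num", 2), ("var", "x")), {"x": "neg"})   ->  "neg"
# but precision is lost where the abstract domain is too coarse:
#   abstract_eval(("+", ("num", 3), ("num", -5)), {})              ->  "unknown"

The abstract answers are sound but imprecise, which is the point: an
approximate model of a process (oneself included) can be run without the
regress that exact self-simulation would invite.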

Some other issues:

    It should be noted that dualism has very little to do with the
    question of free will.  If body and mind are distinct substances,
    that doesn't solve the problem, it only moves the
    determinism/randomness/ whatever else from the physical domain to
    the mental domain.  Minds could be nonphysical and still be
    determined.

    What strikes me most about this discussion is not the variety of
    explanations, but the disagreement about what is to be explained.
    Some people seem to think that their freest acts are the ones
    which even they cannot explain, others do not have this feeling.
    Are we really arguing about the same (real or illusory) phenomenon?

------------------------------

Date: 22 May 88 02:22:57 GMT
From: uflorida!novavax!proxftl!bill@gatech.edu  (T. William Wells)
Subject: Re: Free Will & Self-Awareness

In article <445@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <8805092354.AA05852@ucbvax.Berkeley.EDU> Eyal Mozes writes:
>...
> As far as you have explained so far, Rand's theory is little more
> than simply saying that free will = the ability to focus consciousness,
> which we can all observe.  Since we can all observe this without the
> aid of Rand's theory, all Rand seems to be saying is "that's all there
> is to it".
>
> -- Jeff

Actually, from what I have read so far, it seems that the two of
you are arguing different things; moreover, eyal@COYOTE.STANFORD.EDU
(Eyal Mozes) has committed, at the very least, a sin of
omission: he has not explained Rand's theory of free will
adequately.

Following is the Objectivist position as I understand it.  Please
be aware that I have not included everything needed to justify
this position, nor have I been as technically correct as I might
have been; my purpose here is to trash a debate which seems to be
based on misunderstandings.

To those of you who want a justification, I will (given enough
interest) eventually be doing so on talk.philosophy.misc, where I
hope to be continuing my discussion of Objectivism.  Please
direct any followups to that group.

Entities are the only causes: they cause their actions.  Their
actions may be done to other entities, and this may require the
acted on entity to cause itself to act in some way.  In that
case, one can use `cause' in a derivative sense, saying: the
acting entities (the agents) caused the acted upon entities (the
patient) to act in a certain way.  One can also use `cause' to
refer to a chain of such.  This derivative sense is the normal
use for the word `cause', and there is always an implied action.

If, in order that an entity can act in some way, other entities
must act on it, then those agents are a necessary cause for the
patient's action.  If, given a certain set of actions performed
by some entities, a patient will act in a certain way, then those
agents are a sufficient cause for the patient's actions.

The Objectivist version of free will asserts that there are (for
a normally functioning human being) no sufficient causes for what
he thinks.  There are, however, necessary causes for it.

This means that while talking about thinking, no statement of the
form "X(s) caused me to think..." is an valid statement about
what is going on.

In terms of the actual process, what happens is this: various
entities provide the material which you base your thinking on
(and are thus necessary causes for what you think), but an
action, not necessitated by other entities, is necessary to
direct your thinking.  This action, which you cause, is
volition.

> But where does the "subject of your own choice" come from?  I wasn't
> thinking of letting one's thoughts wander, although what I said might
> be interpreted that way.  When you decide what to think about, did
> you decide to decide to think about *that thing*, and if so how did
> you decide to decide to decide, and so on?

Shades of Zeno!  One does not "decide to decide" except when one
does so in an explicit sense.  ("I was waffling all day; later
that evening I put my mental foot down and decided to decide once
and for all.") Rather, you perform an act on your thoughts to
direct them in some way; the name for that act is "decision".

Anyway, in summary, Rand's theory is not just that "free will =
the ability to focus consciousness" (actually, to direct certain
parts of one's consciousness), but that this act is not
necessitated by other entities.

------------------------------

Date: 22 May 88  0644 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: the free will discussion

Here are the meta remarks promised in my previous message
giving my substantive views.  I hope the moderator will put
them in an issue subsequent to the one including the substantive
views.

There are three ways of improving the world.
(1) to kill somebody
(2) to forbid something
(3) to invent something new.

During World War II, (1) was appropriate, and it has occasionally
been appropriate since, but, in the main it's not appropriate now,
and few people's ideas for improvement take this form.  However,
there may be more people in category (2) than in category (3).
Gilbert Cockton seems to be firmly in category (2), and I can't
help regarding him as a minor menace with his proposals that
institutions suppress AI research.  At least the menace is minor
as long as Mrs. Thatcher is around; I wouldn't be surprised if
Cockton could persuade Tony Benn.

I would like to deal substantively with his menacing proposals, but
I find them vague and would prefer to respond  to precise criteria
of what should be suppressed, how they are regarded as applying
to AI, and what forms of suppression he considers legitimate.

I find much of the discussion ignorant of considerations and references
that I regard as important, but different people have different ideas of
what information should be taken into account.  I have read enough of
the sociological discussion of AI to have formed the opinion that it
is irrelevant to progress and wrong.  For example, views that seem
similar to Cockton's inhabit a very bad and ignorant book called "The
Question of Artificial Intelligence" edited by Stephen Bloomfield, which I
will review for "Annals of the History of Computing".  The ignorance is
exemplified by the fact that the more than 150 references include exactly one
technical paper dated 1950, and the author gets that one wrong.

The discussion of free will has become enormous, and I imagine
that most people, like me, have only skimmed most of the material.
I am not sure that the discussion should progress further, but if
it does, I have a suggestion.  Some neutral referee, e.g. the moderator,
should nominate principal discussants.  Each principal discussant should
nominate issues and references.  The referee should prune the list
of issues and references to a size that the discussants are willing
to deal with.  They can accuse each other of ignorance if they
don't take into account the references, however perfunctorily.
Each discussant writes a general statement and a point-by-point
discussion of the issues at a length limited by the referee in
advance.  Maybe the total length should be 20,000 words,
although 60,000 would make a book.  After that's done we have another
free-for-all.  I suggest four as the number of principal discussants
and volunteer to be one, but I believe that up to eight could
be accommodated without making the whole thing too unwieldy.
The principal discussants might like help from their allies.

The proposed topic is "AI and free will".

------------------------------

Date: 23 May 88 00:12:04 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU 
      (Stephen Smoliar)
Subject: Re: Acting irrationally (was Re: Free Will & Self Awareness)

In article <180@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>In article <5499@venera.isi.edu>, smoliar@vaxa.isi.edu (Stephen Smoliar)
>writes:
>
>> I think you are overlooking how great an extent we rely on implicit
>> assumptions in any intercourse.  If we had to articulate everything
>> explicitly, we would probably never get around to discussing what we
>> really wanted to discuss.
>
>>The problem comes in deciding WHAT needs to be explicitly articulated
>>and what can be left in the "implicit background." That is a problem
>>which we, as humans, seem to deal with rather poorly, which is why
>>there is so much yelling and hitting in the world.
>
>Here's a simple rule: explicitly articulate everything, at least once.
>
>The problem, as I see it, is that there are a lot of people who, for
>one reason or another, keep some information secret (perhaps the
>information isn't known).
>
No, the problem is that there is always TOO MUCH information to be explicitly
articulated over any real-time channel of human communication.  If you don't
believe me, try explicitly articulating the entire content of your last
message.

------------------------------

Date: 23 May 88 14:47:46 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

Ewan Grantham has insightfully noted that our draft "laws of robotics"
raises the question, "How does one recognize a fellow sentient being?"

At a minimum, a sentient being is one who is able to sense its environment,
construct internal maps or models of that environment, use those maps
to navigate, and embark on a journey of exploration.  By that definition,
a dog is sentient.  So the robot has no business killing a barking dog.
Anyway, the barking dog is no threat to the robot.  A washing machine
isn't scared of a barking dog.  So why should a robot fear one?
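
To make those four criteria concrete, here is a toy sketch (purely
illustrative; the GridAgent class, its world representation, and its
method names are all invented here, not a proposal for a real robot):

    import random

    class GridAgent:
        """Toy agent with the four capacities: sense, model, navigate, explore."""

        def __init__(self, world, start=(0, 0)):
            self.world = world        # dict: (x, y) -> 'open' or 'wall'
            self.pos = start
            self.map = {}             # internal model, built only by sensing
            self.visited = {start}

        def neighbours(self):
            x, y = self.pos
            return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

        def sense(self):
            # (1) sense the environment and (2) record it in an internal map
            for cell in self.neighbours():
                self.map[cell] = self.world.get(cell, 'wall')

        def step(self):
            # (3) use the map to navigate, (4) preferring unexplored cells
            self.sense()
            open_cells = [c for c in self.neighbours() if self.map.get(c) == 'open']
            if not open_cells:
                return
            unexplored = [c for c in open_cells if c not in self.visited]
            self.pos = random.choice(unexplored or open_cells)
            self.visited.add(self.pos)

    # a 3x3 open patch; everything outside the dict counts as wall
    world = {(x, y): 'open' for x in range(3) for y in range(3)}
    agent = GridAgent(world)
    for _ in range(20):
        agent.step()
    print(sorted(agent.visited))

That a few dozen lines can meet the letter of the definition is, of
course, exactly why recognizing a fellow sentient being is the hard part.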

--Barry Kort

------------------------------

End of AIList Digest
********************

∂25-May-88  1423	@MC.LCS.MIT.EDU:nick%AI.AI.MIT.EDU@XX.LCS.MIT.EDU 	AIList Digest   V7 #7   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 25 May 88  14:23:40 PDT
Received: from XX.LCS.MIT.EDU (CHAOS 2420) by MC.LCS.MIT.EDU 25 May 88 16:34:24 EDT
Received: from FORD.MIT.EDU by XX.LCS.MIT.EDU via Chaosnet; 24 May 88 15:25-EDT
Date: Tuesday, 24 May 1988, 15:16-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Sender: nick@AI.AI.MIT.EDU
Reply-to: AIList@AI.AI.MIT.EDU
Subject: AIList Digest   V7 #7
To: ailist-outgoing@MC.LCS.MIT.EDU, AIList@AI.AI.MIT.EDU



AIList Digest           Wednesday, 25 May 1988      Volume 7 : Issue 7

Today's Topics:

  Queries  -  Fifth Generation Project Status
              Exciting work in AI
              expert system building tools
              References Needed: Case based reasoning
              Qualitative Reasoning
              Information about Translator Systems

  Responses - ai languages on unix
              Analogical reasoning
              Reasoning by Analogy
              Pointers to Social Theory
              proof checker
  

----------------------------------------------------------------------

Date: 13 May 88 09:55:00 EDT
From: Mark (M.P.) Turchan <MPTURCHA%BNR.BITNET@MITVMA.MIT.EDU>
Subject: Query on Fifth Generation Project Status

I'm probably not the first one to point this out to John Nagle,
but the Fifth Generation Project is a TEN YEAR program and not
a FIVE YEAR program. It is just completing its second phase
of research. If John or others are interested in its current
status, then I suggest they visit Tokyo from November 28 -
December 2, 1988, when the 1988 International Conference on
Fifth Generation Computer Systems is to be held, sponsored by ICOT.
I'm sure that you can find out the latest news on the project
when you attend this conference.

Yes, it has been five years since the Feigenbaum and McCorduck
book on the Fifth Generation Project was first published. While
it is quite clear that the book was more of an attempt to lobby
for increased levels of AI research funding in the U.S. in response
to the "Fifth Generation Challenge", I do not believe that the
book offered much insight into the project itself. It seems to
have succeeded in increasing the funding, however, at least in
the U.S. In Canada, we are having a difficult time convincing
the Canadian Government that more money could be spent on AI
R&D.

For a much more recent treatment of the Fifth Generation Project,
I recommend that everyone read the following book:

    The Fifth Generation Fallacy: Why Japan is Betting Its
    Future on Artificial Intelligence
    by J. Marshall Unger        Oxford Univ. Press 1987
                                ISBN 0-19-504939-X

This book has little to do with AI, and does not say much about
the Fifth Generation Project, but it offers a very convincing
argument as to the underlying motivation (problems with
machine processing of the Japanese language) for a project in
Japan such as the Fifth Generation program. The author spent
1985 at the University of Tokyo, and collected much of the
material for this book while in Japan.

I have no affiliation with the author of this book, other than
the fact that I was conducting research at the U of Tokyo around
the same time as he was. Unfortunately we were in different faculties,
so I never met him. I quite agree with his perspective, however.

Mark Paul Turchan               BITNET: MPTURCHA@BNR
AI Exploratory
Bell-Northern Research Ltd.
P.O. Box 3511, Station C
Ottawa, Ontario, CANADA K1Y 4H7
(613) 765-2700

------------------------------

Date: Sun, 15 May 88 17:10:42
From: Spencer Star <STAR%LAVALVM1.BITNET@MITVMA.MIT.EDU>
Subject: Analogical reasoning

     This is an answer to a request for sources on analogical reasoning.

D. Gentner & C. Toupin, "Systematicity and surface similarity in the
development of analogy", COGNITIVE SCIENCE, 10, 277-300 (1986).

J. Carbonell, "Derivational Analogy: A theory of ....", in R. Michalski,
J. Carbonell, T. Mitchell, _Machine Learning_, (vol 1), 1983.

S. Kedar-Cabelli, "Analogy--From a unified perspective", Laboratory for
Computer Science Research, Hill Center, Rutgers University, Technical
Report ML-TR-3 (Dec 1985).

Rogers P. Hall, "Understanding analogical reasoning: computational
approaches", Department of Informatin and Computer Science,
University of California, Irvine, 92717, Nov 1986

Many more papers appear in proceedings from AAAI and IJCAI.  It is an
active area in machine learning also.  I understand Hall's paper was going
to be published in AI Journal.  That paper and Kedar-Cabelli's are fairly
long surveys.

                      --Spencer Star

------------------------------

Date: 16 May 88 06:30:00 GMT
From: goldfain@osiris.cso.uiuc.edu
Subject: Re: ai languages on unix wanted


Stony Brook Prolog is public domain and runs on Unix systems.

     SBProlog is currently maintained by Prof. S.K. Debray at Univ. of
Arizona.

------------------------------

Date: 16 May 88 10:31:50 GMT
From: eagle!icdoc!qmc-cs!flash@bloom-beacon.MIT.EDU  (Flash Sheridan)
Subject: Re: ai languages on unix wanted

Look at PopLog.  It's got an okay Common Lisp and a Prolog, plus Pop-11.
It's cheap or free to academics.
Try aarons@cvaxa.susx.ac.uk

From: flash@ee.qmc.ac.uk (Flash Sheridan)
Reply-To: sheridan@nss.cs.ucl.ac.uk
or_perhaps_Reply_to: flash@cs.qmc.ac.uk

------------------------------

Date: 16 May 88 13:22:26 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu  (Marty Hall)
Subject: Re: Reasoning by Analogy

>In article <1533@csvax.liv.ac.uk> stian@csvax.liv.ac.uk writes:
>Does anyone know of any work done on reasoning by analogy. Any references
>received gratefully.

There are several applicable articles in Proceedings of DARPA "Case-Based
Reasoning Workshop".  Proceedings were published last week, and assumedly
are available from the publisher - Morgan Kaufmann, 2929 Campus Dr, San
Mateo, CA,  94403.
                                - Marty Hall
--
ARPA (preferred) - hall@alpha.ece.jhu.edu [hopkins-eecs-alpha.arpa]
UUCP   - ..seismo!umcp-cs!jhunix!apl_aimh | Bitnet  - apl_aimh@jhunix.bitnet
Artificial Intelligence Laboratory, MS 100/601,  AAI Corp, PO Box 126,
Hunt Valley, MD  21030   (301) 683-6455

------------------------------

Date: Mon, 16 May 88 11:38:07 EDT
From: <kroddis@ATHENA.MIT.EDU>
Subject: ai languages on unix wanted


Gabe Nault asked about PROLOGs that run on UNIX. Quintus PROLOG
(Marketing Department, Quintus Computer Systems, 1310 Villa St,
Mountain View, CA 94041, 415/965-7700) runs under UNIX on Sun, Apollo,
Vax and probably more. Given the comment about the $2000 fee for STAR,
I suspect Quintus will be beyond Gabe's budget. How about C Prolog?
This is by Fernando Pereira who I think is at the AI Center, SRI
International, 333 Ravenswood Ave, Menlo Park, CA 94025,
PEREIRA@SRI-AI. I have used C Prolog on Athena at MIT (UNIX on DEC and
IBM equipment) and not had any difficulties.

|----------------------------------------------------------------------------|
|                                     Kim Roddis                             |
|                                     Rm 5-332A                              |
|                                     Civil Engineering Dept.                |
|                                     MIT                                    |
|                                     Cambridge, MA 02139                    |
|                            kroddis@athena.mit.edu (INTERNET)               |
|----------------------------------------------------------------------------|

------------------------------

Date: Mon, 16 May 88 14:10:30 BST
From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Pointers to Social Theory

        I recall someone asking for references on criticisms of
        Systems Theory, presumably in social theory.

        The best place to start is with any of Anthony Giddens' work.
        He's been prolific on social theory since the 1970s, and I can't
        recommend any particular one of his books.  Giddens addresses Systems
        Theory as one of many candidate underpinnings for social theory.

        Criticisms of the Systems Perspective (and Talcott Parsons' related
        Functionalism, hopefully not the only social theory Americans have
        encountered) will be found in most theoretical discussions of Social
        Theory, but if you can't lay your hands on anything by Giddens, do try
        to find something from the European tradition, because this is where
        most modern social theory has originated.

        Given the detachment of the Anglo-Saxon logical and empiricist
        traditions (which dominate much US/UK philosophy) from continental
        philosophy and social theory, many readers may find contemporary social
        theory dense and impenetrable.  Be warned, it is not written according
        to the tenets of technical writing, and has to be read actively.
        Giddens' later work will probably be the easiest for beginners.
        (I saw someone cite Lucy Suchman's book in the comp.ai debate - if you
         can handle the language there, you should be OK with other
         social theory).

        If anyone is put off reading social theory because of the
        language, think how Godel/Montague comes across to a sociologist!

------------------------------

Date: 17 May 88 13:19:24 GMT
From: babbage!reiter@husc6.harvard.edu  (Ehud Reiter)
Subject: Exciting work in AI

About a month ago, I posted a note asking if any "exciting" work existed
in AI which:
        1) Was highly thought of by at least 50% of AI researchers.
        2) Was a positive contribution, not an analysis showing problems
in previous work.
        3) Was in AI as narrowly defined (i.e. not in robotics or vision)

Well, I'm still looking.  I have received some suggestions, but almost
all of them have seemed problematical.  The most promising were Spencer
Star's suggestions for exciting work in machine learning (published in
a previous AIList, including Valiant's theoretical analyses, Quinlan's
decision trees, and explanation-based learning).  However, after
looking at some books and course syllabi in machine learning, I was
forced to conclude that the topics mentioned by Spencer did not satisfy
condition (1), as the topics he mentioned had very little overlap with
the topics in the books and syllabi (which, incidentally, had very
little overlap with each other).

So, I'm still looking for work which meets the above criteria, and hoping
to thereby convince my friend that there is some cohesion to AI.  If anyone
has suggestions, please send them to me!

                                        Ehud Reiter
                                        reiter@harvard  (ARPA,BITNET,UUCP)
                                        reiter@harvard.harvard.EDU  (new ARPA)

------------------------------

Date: Tue, 17 May 88 11:17:17
From: Leslie Burkholder <lb0q+@andrew.cmu.edu>
Subject: proof checker

Two queries concerning proof checkers have appeared. How about either

(1) The Boyer-Moore theorem prover.
Contact
Computational Logic Inc
1717 West Sixth St Suite 290
Austin Texas 78703

(2) Mizar.
Contact
Andrzej Trybulec / Howard Blair
EECS
University of Connecticut
Storrs CT 06268

Leslie Burkholder
(If anyone suggests anything else please let me know.)

------------------------------

Date: 20 May 88 09:20:36 GMT
From: mcvax!ukc!its63b!aiva!kk@uunet.uu.net  (Kathleen King)
Subject: expert system building tools


I'm trying to find out what folk think of various expert system building
tools they have experience with. If there's enough interest I'll post
the results to the net. Ta.

Do you now or have you ever used any of the following tools?

Acquaint (A.K.A 'Daisy')
APES
Arity Expert System Development Package
ART
Auto-Intelligence
Crystal
DUCK
ENVISAGE and SAGE
ES Environment
ESP advisor
ESP Frame Engine
EST(Expert Systems Toolkit)
Experkit
ExperOps
Expert Controller
Expert Ease/Super Expert
Expert Edge
Exsys
Ex-Tran 7
1st Class
1st Class Fusion
Flops
GEST (Generic Expert System Tool)
GOLDWORKS
GURU
G2
HUMBLE
Insight 2+
Intelligence/Compiler
KDS 3
KEATS (Knowledge Engineer's Assistant)
KEE
KES (Knowledge Engineering System)
Keystone
Knowledge Craft
Knowledge Workbench
Knowol
Leonardo
Lisp In-Ate/Micro In-Ate
LOOPS
M1
MacKIT
MicroExpert
Muse
Nexpert/Nexpert Object
Nexus
OPS5
OPS83
Personal Consultant Easy
Personal Consultant Plus
PICON
RuleMaster 2
Savoir
Super Expert
S1
TIMM
TOPSI
TWAICE
VP Expert
Wisdom XS
Xi Plus
XPER
XSYS


If so I'd greatly appreciate hearing your answers to the following questions.

1) How long did it take to learn?
2) Did you teach yourself or get 'learning support'?
3) Do you still use it?
4) Would you choose it again or something else?
5) What sort of application have you used it for?
6) Have you used it for more than one application?

I realise that many PC users who might have these tools do not have access to
the net. Second hand information from them is just as valuable.
Thanks verrrry much.

------------------------------

Date: 23 May 88 16:27:33 GMT
From: csli!leey@labrea.stanford.edu  (Chin Lee)
Subject: References Needed: Case based reasoning


Could anyone point me to some good references to Case Based Reasoning?

Thanks in advance.

------------------------------

Date: Mon, 23 May 88 13:35:17 edt
From: Mr. David Smith <dsmith@gelac.arpa>
Subject: ai languages on unix


Gabe Nault asked about Unix-hosted AI languages.  A version of TOPSI, an
OPS-5 rule-based system, runs on Unix.  They can be reached at (404) 565-0771.

------------------------------

Date: Tue, 24 May 88 08:53:11 GMT
From: Roberto Bruzzese <BRUZZESE%IBACSATA.BITNET@MITVMA.MIT.EDU>
Subject: Qualitative Reasoning


I'm trying to build a prototype using commonsense reasoning, here at
CSATA-Technopolis Laboratories (ITALY).

I wonder if somebody could mail me some info or articles about

  - availability of computer systems using commonsense
    reasoning

  - technical difficulties in building such systems

I'm also looking for articles and state-of-the-art info on Qualitative Modelling.

Thanks in advance,
                         Roberto Bruzzese

------------------------------

Date: Tue, 24 May 88 18:19:42
From: UZR515%DBNRHRZ1.BITNET@CUNYVM.CUNY.EDU
Subject: Information about Translator Systems


Subject: Wanted: Information on current or old work on the design and
implementation of Computer Translation Systems


I am beginning the design and implementation of an automatic translation
system, which should translate English computer textbooks into Persian!

I would greatly appreciate any descriptions of or references to research
in this area, as well as information on what translator systems are
available, especially work on the analysis of English sentences and
the implementation of dictionaries.



Good luck!

Hooshang

------------------------------

End of AIList Digest
********************

∂26-May-88  2225	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #8   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 May 88  22:25:10 PDT
Date: Fri 27 May 1988 00:42-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #8
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 27 May 1988        Volume 7 : Issue 8

Today's Topics:

  Philosophy and Social science
  Re: Arguments against AI are arguments against human formalisms
  Re: Re: Re: Exciting work in AI -- (stats vs AI learning)
  non-AI theories about symbols
  the mind of society
  Alternative to Probability (was Re: this is philosophy ??!!?)
  Assumptions in dialog (was Re: Acting irrationally)

----------------------------------------------------------------------

Date: 4 May 88 14:28:25 GMT
From: attcan!houdi!marty1@uunet.uu.net  (M.BRILLIANT)
Subject: Re: this is philosophy ??!!?

In article <1588@pt.cs.cmu.edu>, Anurag Acharya writes:
>  Gilbert Cockton writes:
> ...
> > Your system should prevaricate, stall, duck the
> >issue, deny there's a problem, pray, write to an agony aunt, ask its
> >mum, wait a while, get its friends to ring it up and ask it out ...
>
> Whatever does all that stuff have to do with intelligence per se ?
> ....

Pardon me for abstracting out of context.  Also for daring to comment
when I am not an AI researcher, only an engineer waiting for a useful
result.

But I see that as an illuminating bit of dialogue.  Cockton wants to
emulate the real human decision maker, and I cannot say with certainty
that he's wrong.  Acharya wants to avoid the pitfalls of human
fallibility, and I cannot say with certainty that he's wrong either.

I wish we could see these arguments as a conflict between researchers
who want to model the human mind, and researchers who want to make more
useful computer programs.  Then we could acknowledge that both schools
belong in AI, and stop arguing over which should drive out the other.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
            explicitly claims them; then I lose all rights to them.

------------------------------

Date: 9 May 88 14:12:39 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU 
      (Stephen Smoliar)
Subject: Re: this is philosophy ??!!?

In article <May.6.18.48.07.1988.29690@cars.rutgers.edu> byerly@cars.rutgers.edu
(Boyce Byerly ) writes:
>
>Perhaps the logical deduction of western philosophy needs to take a
>back seat for a bit and let less sensitive, more probabilistic
>rationalities drive for a while.
>
I have a favorite paper which I always like to recommend when folks like Boyce
propose putting probabilistic reasoning "in the driver's seat:"

        Alvan R. Feinstein
        Clinical biostatistics XXXIX.  The haze of Bayes, the aerial palaces
                of decision analysis, and the computerized Ouija board.
        CLINICAL PHARMACOLOGY AND THERAPEUTICS
        Vol. 21, No. 4
        pp. 482-496

This is an excellent (as well as entertaining) exposition of many of the
pitfalls of such reasoning written by a Professor of Medicine and Epidemiology
at the Yale University School of Medicine.  I do not wish this endorsement to
be interpreted as a wholesale condemnation of the use of probabilities . . .
just a warning that they can lead to just as much trouble as an attempt to
reduce the entire world to first-order predicate calculus.  We DEFINITELY
need abstractions better than such logical constructs to deal with issues
such as uncertainty and belief, but it is most unclear that probability
theory is going to provide those abstractions.  More likely, we should
be investigating the shortcomings of natural deduction as a set of rules
which represent the control of reasoning and consider, instead, possibilities
of alternative rules, as well as the possibility that there is no one rule
set which is used universally but that different sets of rules are engaged
under different circumstances.

------------------------------

Date: 9 May 88 14:56:25 GMT
From: attcan!lsuc!spectrix!yunexus!geac!geacrd!cbs@uunet.uu.net 
      (Chris Syed)
Subject: Re: Social science gibber [Was Re:  Various Future of AI

This is a comment upon parts of two recent submissions, one by
Simon Brooke and another from Jeff Dalton.

Brooke writes:

> AI has two major concerns: the nature of knowledge, and the nature of
> mind. These have been the central subject matter of philosophy since
> Aristotle, at any rate. The methods used by AI workers to address these
> problems include logic - again drawn from Philosophy. So to summarise:
> AI addresses philosophical problems using (among other things)
> philosophers tools. Or to put it differently, Philosophy plus hardware -
> plus a little computer science - equals what we now know as AI. The fact
> that some workers in the field don't know this is a shameful indictment on
> the standards of teaching in AI departments.

  If anyone doubts these claims, s/he might try reading something on Horne
  clause logic. {I know, Horne probably doesn't have an 'e' on it}. And,
  as Brooke says, a dose of Thomas Kuhn seems called for. It is no accident
  that languages such as Prolog seem to appeal to philosophers.
  In fact, poking one's head into a Phil common room these days is much like
  trotting down to the Comp Sci dept. All them philosophers is talking
  like programmers these days. And no wonder - at last they can simulate
  minds. Meanwhile, try Minsky's _The Society of Mind_ for a peek at the
  crossover from the other direction. By the by, it's relatively hard to
  find a Phil student, even at the graduate level, who can claim much
  knowledge of Aristotle these days (quod absit)! Nevertheless, doesn't
  some AI research have more mundane concerns than the study of mind?
  Like how do we zap all those incoming warheads whilst avoiding wasting
  time on the drones?

Jeff Dalton writes:

> Speaking of outworn dogmas, AI seems to be plagued by behaviorists,
> or at least people who seem to think that having the right behavior
> is all that is of interest: hence the popularity of the Turing Test.

  I'm not sure that the Turing Test is quite in fashion these days, though
  there is a notion of a 'Total Turing Test' (Daniel C. Dennett, I think?).
  Behaviourism, I must admit, gives me an itch (positively reinforcing, I'm
  sure). But I wonder just what 'the right behaviour' _is_, anyway? It
  seems to me that children (from a Lockean 'tabula rasa' point of view),
  learn & react differently from adults (with all that emotional baggage
  they carry around). One aspect of _adult_ behaviour I'm not sure
  AI should try to mimic is our nasty propensity to fear admitting one's
  wrong. AI research offers Philosophy a way to strip out all the
  social and cultural surrounds and explore reasoning in a vacuum...
  to experiment upon artificial children. But adult  humans cannot observe,
  judge, nor act without all that claptrap. As an Irishman from MIT once
  observed, "a unique excellence is always a tragic flaw". Maybe it
  depends on what you're after?

      {uunet!mnetor,yunexus,utgpu}                     o        ~
      !geac!geacrd!cbs (Chris Syed)    ~          \-----\---/
      GEM: CHRIS:66                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   "There can be no virtue in obeying the law of gravity." - J.E.McTaggart.

------------------------------

Date: 9 May 88 21:31:36 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Arguments against AI are arguments against human
         formalisms

In article <1103@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
> BTW, Robots aren't AI. Robots are robots.

I'm reminded of the Lighthill report that caused a significant loss of
AI funding in the UK about 10 years ago.  The technique is to divide
up AI, attack some parts as useless or worse, and then say the others
are "not AI".  The claim then is that "AI", properly used, turns out
to encompass only things best not done at all.  All of the so-called
AI that's worth supporting (and funding) belongs to other disciplines
and so should be done there.

Another example of this approach can be found earlier in the message:

> Note how scholars like John Anderson restrict themselves to proper
> psychological data. I regard Anderson as a psychologist, not as an AI
> worker.

A problem with this attack is that it is not at all clear that
AI should be defined so narrowly as to exclude, for example, *all*
robotics.  That robots are robots does not preclude some of them
being programmed using AI techniques.  Nor would an artificial
intelligence embodied in a robot automatically fail to be AI.

The attack seeks to set the terms of debate so that the defenders
cannot win.  Any respectable result cited will turn out to be "not
AI".  Any argument that AI is possible will be sat on by something
like the following (from <1069@crete.cs.glasgow.ac.uk>):

   Before the 5th Generation scare, AI in the UK had been sat on for
   dodging too many methodological issues.  Whilst, like the AI pioneers,
   they "could see no reasons WHY NOT [add list of major controversial
   positions", Lighthill could see no reasons WHY in their work.

In short, the burden of proof would be such that it could not be met.
The researcher who wanted to pursue AI would have to show the research
would succeed before undertaking it.

Fortunately, there is no good reason to accept the narrow definition
of AI, and anyone seeking to reject the normal use of the term should
accept the burden of proof.  AI is not confined to attempts at human-
level intelligence, passing the Turing test, or other similar things
now far beyond its reach.

Moreover, the actual argument against human-level AI, once we strip
away all the misdirection, makes claims that are at least questionable.

> The argument against AI depends on being able to use written language
> (physical symbol hypothesis) to represent the whole human and physical
> universe.  AI and any degree of literate-ignorance are incompatible.
> Humans, by contrast, may be ignorant in a literate sense, but
> knowlegeable in their activities.  AI fails as this unformalised
> knowledge is violated in formalisation, just as the Mona Lisa is
> indescribable.

The claim that AI requires zero "literate-ignorance", for example, is
far from proven, as is the implication that humans can call on
abilities of a kind completely inaccessible to machines.  For some
reasons to suppose that humans and machines are not on opposite sides
of some uncrossable line, see (again) Dennett's Elbow Room.

------------------------------

Date: 9 May 88 21:46:31 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Sorry, no philosophy allowed here.

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) says:
> If you can't write it down, you cannot possibly program it.

Not so.  I can write programs that I could not write down on paper
because I can use other programs to do some of the work.  So I might
write programs that are too long, or too complex, to write on paper.

------------------------------

Date: Mon, 9 May 88 20:11:02 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Philosophy: Informatique & Marxism

--The following bounced when I tried two different ways to send it directly.

Gilbert Cockton: Even one reference to a critique of Systems Theory would be
helpful if it includes a bibliography.  If you can't find one without too much
trouble, please send at least a few sentences explaining the flaw(s) you see.
I would dearly love to be able to advance beyond it, but don't yet see an
alternative.

--The difficulty of formalising knowledge.

Cockton makes a good point here.  The situation is even worse than he
indicates.  Many, perhaps most or all, decisions seem to be made
subconsciously, apparently by an analog vector-sum operation rather than a
logical, step-by-step process.  We then rationalize our decision, usually so
quickly & easily that we are never aware of the real reasons.  This makes
knowledge and procedure capture so difficult that I suspect most AI
researchers will try (ultimately unsuccessfully) to ignore it.

--Marxism.

Economics is interesting in that it produced a cybernetic explanation of
production, value, & exchange (at least as early as the 17th century) long
before there was cybernetics.  Marx (and Engels) spent a lot of time studying
& explaining this process, and much of their work is still useful.  Other
parts have not been supported by advances in history & knowledge.

The workers (at least in America) are predominantly opposed to revolution and
support capitalism, despite much discontent about the way bosses treat them.
Part of this may be because our society holds out the hope that many of us can
become bosses ourselves.  Another reason is that many workers, either directly
or through retirement plans, have become owners.  Technology has also
differentiated the kinds of workers from mostly physical laborer to skilled
workers of many different types who sympathize with their own subclass rather
than workers in general.

Further, once workers feel they have reached an acceptable subsistence,
oftentimes they develop other motivations for work having nothing to do with
material concerns.  People from a middle-class or higher background often
stereotype the "lowest proletariat" as beer-drinking slobs whose only interest
is food, football, and sex.  Coming from a working class background (farmers
and factory laborers), I know that "doing a good job" is a powerful motivator
for many workers.  The "higher proletariat" (who are further from the
desperate concern for survival) show this characteristic even more strongly.
Most engineers I know work for reasons having nothing to do with money; the
same is true of many academics and artists.  (This is NOT to say money is
unimportant to them.)

Just as the practice of economics has deviated further & further from the
classical Marxist viewpoint, so has theory.  Materialism, for instance, has
changed drastically in a world where energy is at least as important as
matter, which has itself become increasingly strange.  Too, the science of
"substance" has been joined by a young, confused, but increasingly vigorous,
fertile and rigorous science of "form," variously called (or parts of it)
computer science, cybernetics, communications theory, information science,
informatique, etc.  This has interesting implications for theories of monetary
value and the definition of capital, implications that Marx did not see (&
probably could not, trapped in his time as he was).

Informatique has practical implications of which most of us on this list are
well aware.  One of the most interesting economically is the future of
guardian-angel programs that help us work: potentially putting us out of a
job, elevating our job beyond old limits, or (as any powerful tool can)
harming us.  And in one of the greatest ironies of all, AI researchers working
in natural language and robotics have come to realize the enormous
sophistication of "common labor" and the difficulties and expense of
duplicating it mechanically.
                                    Larry @ jpl-vlsi

------------------------------

Date: Tue, 10 May 88 09:44 EDT
From: Stephen Robbins <Stever@WAIKATO.S4CC.Symbolics.COM>
Subject: AIList V6 #97 - Philosophy

    Date: 6 May 88 22:48:09 GMT
    From: paul.rutgers.edu!cars.rutgers.edu!byerly@rutgers.edu  (Boyce Byerly )
    Subject: Re: this is philosophy ??!!?

    2) In representing human knowledge and discourse, it fails because it
    does not recognize or deal with contradiction.  In a rigorously
    logical system, if

      P ==> Q
      ~Q
       P
    Then we have the contradiction ~Q and Q.

    If you don't believe human beings can have the above derivably
    contradictory structures in their logical environments, I suggest you
    spend a few hours listening to some of our great political leaders :-)

There's also an issue of classification, with people.  How do they even \know/
something's a Q or ~Q?

One of the most fascinating (to me) moments in the programming class I teach is
when I hand out a sheet of word problems for people to solve in LISP.  If I
call them "mini program specs," the class grabs them and gets right to work.

If I call them "word problems," I start to get grown men and women telling me
that "they can't \do/ word problems."  Despite their belief that they \can/
program!

It seems to be a classification issue.

    ------------------------------

    Date: 6 May 88 23:05:54 GMT
    From: oliveb!tymix!calvin!baba@sun.com  (Duane Hentrich)
    Subject: Re: Free Will & Self Awareness

    In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
    >Why then, when a human engages in undesirable behavior, do we resort
    >to such unenlightened corrective measures as yelling, hitting, or
    >deprivation of life-affirming resources?

Probably some vague ideas that negative reinforcement works well.  Or
role-modeling parents who did the same thing.

    For the same reason that the Enter/Carriage Return key on many keyboards
    is hit repeatedly and with great force, i.e. frustration with an
    inefficient/ineffective interface which doesn't produce the desired results.

Yeah.  I've noticed this:  if something doesn't work, people do it longer,
harder, and faster. "Force it!" seems to be the underlying attitude.  But in my
experience, slowing down and trying carefully works much better.  "Force it!"
hasn't even been a particularly good heuristic for me.

Actually, I wonder if it's people-in-general, or primarily a Western
phenomenon.

-- Stephen

------------------------------

Date: 10 May 88 17:31:38 GMT
From: dogie!mish@speedy.wisc.edu
Subject: Re: Sorry, no philosophy allowed here.

In article <414@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes...

>In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
>(Gilbert Cockton) says:
>> If you can't write it down, you cannot possibly program it.
>
>Not so.  I can write programs that I could not write down on paper
>because I can use other programs to do some of the work.  So I might
>write programs that are too long, or too complex, to write on paper.

YACC Lives! I've written many a program that included code 'written' by
some other program (namely YACC).
  The point is that the computer allows us to extend what we know.  I may
not have actually written the code, but I knew how to tell the computer to
write the code.  In doing so, I created a program that I never (well, almost
never) could have written myself, even though I knew how.
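
The same idea in miniature, without YACC (a throwaway Python sketch,
invented purely for illustration): one program writes the source text
of another, and then runs what it wrote.

    # Generate the source of three arithmetic functions from a table of
    # operators, then execute that generated source and call the results.
    ops = {'add': '+', 'sub': '-', 'mul': '*'}

    generated_source = "\n".join(
        "def {name}(a, b):\n    return a {op} b".format(name=name, op=op)
        for name, op in ops.items()
    )

    namespace = {}
    exec(generated_source, namespace)   # the machine-written code is now callable

    print(namespace['add'](2, 3))       # 5
    print(namespace['mul'](4, 5))       # 20
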
_____________________     ____________________________     ___________________
                     \   /                            \   /
      Tom             \ /     Bit: mish@wiscmacc       \ /      Univ. Of Wis.
       Mish            X  Arpa: mish@vms.macc.wisc.edu  X        Madison
        Jr.           / \    Phone: (608) 262-8525     / \        MACC
_____________________/   \____________________________/   \___________________

------------------------------

Date: 10 May 1988 14:10 (Tuesday)
From: munnari!nswitgould.oz.au!wray@uunet.UU.NET (Wray Buntine)
Subject: Re: Re: Re: Exciting work in AI -- (stats vs AI learning)

It is worth clarifying Stuart Crawford's
(Advanced Decision Systems, stuart@ads.com, Re: Exciting work in AI)
recent comments on my own discussion of Quinlan's work, because it brings
out a distinction between purely statistical approaches and
approaches from the AI area for learning prediction rules from noisy data.

This prediction problem is OF COURSE an applied statistics one.
(My original comment never presumed otherwise---the reason I
 posted the comment to AIList in the first place was to point this out)

But, it is NOT ALWAYS a purely applied statistics problem (hence my
comments about Quinlan's "improvements").

1.  In knowledge acquisition, we usually don't have a purely statistical
    problem; we often have
        a small amount of data, and
        a knowledgeable but only moderately articulate expert.
    To apply a purely statistical approach to the data alone is clearly
    ignoring a very good source of information: the "expert".
    To expect the expert to sprout forth relevant information is naive.
    We have to produce a curious mix of applied statistics and cognitive
    psych to get good results.  With comprehensible statistical results, a
    concern prevalent in learning work labelled as AI, we can draw
    the expert into giving feedback on those results (potential rules).
    This is a devious but demonstrably successful means of capturing some of
    his additional information.
    There are other advantages of comprehensibility in the "knowledge
    acquisition" context that again arise for non-statistical reasons.

2.  Suffice it to say, trees may be more comprehensible than rules sometimes
    (when they're small they certainly give a better picture of the overall
    result), but when they're large they aren't always.
    Transforming trees to rules is not simply a process of picking a branch
    and calling it a rule.  A set of disjunctive rules can be logically
    equivalent to a minimal size tree that is LARGER BY AN ORDER OF MAGNITUDE.
    In a recent application (reported in CAIA-88)
    the expert flatly refused to go over trees, but on being shown rules
    found errors in the data preparation, errors in the problem formulation, and
    provided substantial extra information (the rules jogged his memory), ....
    merely because he could easily comprehend what he was looking at.
    Need I say, subsequent results were far superior.

In summary, when learning prediction rules from noisy data, AI approaches
complement straight statistical ones in knowledge acquisition contexts, for
reasons outside the domain of statistics.  In our experience, and the
experience of many others, this can be necessary to produce results.
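
As a toy illustration of only the mechanical branch-extraction step (the
attributes and the tree below are invented; Quinlan-style systems get
their real gain from afterwards dropping redundant conditions and
regrouping the resulting rules, which this sketch does not do):

    # A small decision tree as nested tuples: (attribute, {value: subtree-or-class}).
    tree = ('outlook', {
        'sunny':    ('humidity', {'high': 'dont_play', 'normal': 'play'}),
        'overcast': 'play',
        'rain':     ('windy',    {'true': 'dont_play', 'false': 'play'}),
    })

    def branch_rules(node, conditions=()):
        """Return one (conditions, class) pair per leaf, i.e. one rule per branch."""
        if isinstance(node, str):                 # leaf: a class label
            return [(list(conditions), node)]
        attribute, children = node
        rules = []
        for value, subtree in children.items():
            rules.extend(branch_rules(subtree, conditions + ((attribute, value),)))
        return rules

    for conds, cls in branch_rules(tree):
        print('IF ' + ' AND '.join('%s = %s' % c for c in conds) + ' THEN ' + cls)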


Wray Buntine
wray@nswitgould.oz
School of Computing Science
University of Technology, Sydney
PO Box 123, Broadway
Australia, 2007

------------------------------

Date: 10 May 88 11:56 PDT
From: hayes.pa@Xerox.COM
Subject: Re: AIList V6 #98 - Philosophy

Nancyk has made a valuable contribution to the debate about logic and AI,
raising the discussion to a new level.  Of course, once one sees that to believe
that (for example) if P and Q are both true, then P&Q is true, is merely an
artifact of concealed organismal Anthropology, a relic of bourgeois ideology, the
whole matter becomes much clearer.  We who have thought that psychology might be
relevant to AI have missed the point: of course, political science -
specifically, Marxist political science - is the key to making progress.  Let us
all properly understand the difference between the Organismus-Umwelt
Zusammenhang  and the Mensch-Umwelt Zusammenhang,  and let our science take
sides with the working people,  and we will be in a wholly new area.  And I
expect Gilbert will be happier.  Thanks, Nancy.

Pat Hayes

------------------------------

Date: 11 May 88 04:39:31 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Arguments against AI are arguments against human
         formalisms

In article <1103@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
> BTW, Robots aren't AI. Robots are robots.

      Rod Brooks has written "Robotics is a superset of AI".  Robots have
all the problems of stationary artificial intelligences, plus many more.
Several of the big names in AI did work in robotics back in the early days of
AI.  McCarthy, Minsky, Winograd, and Shannon all did robotics work at one
time.  But they did it in a day when the difficulty of the problems to be
faced was not recognized.  There was great optimism in the early days,
but even seemingly simple problems such as grasping turned out to be
very hard.  Non-trivial problems such as general automatic assembly or
automatic driving under any but the most benign conditions turned out to
be totally out of reach with the techniques available.

      Progress has been made, but by inches.  Nevertheless, I suspect that
over the next few years, robotics will start to make a contribution to
the more classic AI problems, as the techniques being developed for geometric
reasoning and sensor fusion start to become the basis for new approaches
to artificial intelligence.

      I consider robotics a very promising field at this point in time.
But I must offer a caution.  Working in robotics is risky.  Failure is
so obvious.  This can be bad for your career.

                                        John Nagle

------------------------------

Date: Wed 11 May 88 14:18:27-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: non-AI theories about symbols


For those interested in non-AI theories about symbols, the following is
a very quick summary of Freud and Marx.

In Freud, the prohibition on incest forces children to express their
sexuality through symbolic means.  Sexual desire is repressed in the
unconscious, leaving the symbols to be the center of people's attention.
People begin to be concerned with things for which there seems no
justification.

Marx observed that the act of exchanging objects in an economy forces us
to abstract from our individual labors to a symbol of labor in general
(money).  The abstraction becomes embodied in capital, which people
constantly try to accumulate, forgetting about the value of the products
themselves.

Both Marx and Freud use the term `fetish' to refer to the process in which
symbols (of sex and labor) begin to form systems that operate
autonomously.  In Freud's fetishism, someone may be obsessed with feet
instead of actual love; in Marx, people are interested in money instead
of actual work.  In both cases, we lose control of something of our own
creation (the symbol) and it dominates us.

Conrad Bock

------------------------------

Date: 12 May 88 19:18:17 GMT
From: centro.soar.cs.cmu.edu!acha@pt.cs.cmu.edu  (Anurag Acharya)
Subject: Re: this is philosophy ??!!?

In article <86@edai.ed.ac.uk> rjc@edai.ed.ac.uk (Richard Caley) writes:
>> Imagine Mr. Cockton, you are standing on the 36th floor of a building
>> and you and your mates decide that you are Superman and can jump out
>> without getting hurt.
>Then there is something going wrong in the negotiations within the
>group!!

Oh, yes! There definitely is! But it still is a "negotiation" and it
is "social"!  Since 'reality' and 'truth' are being defined as
"negotiated outcomes of social processes", there are no constraints on
what these outcomes may be.  I can see no reason why a group couldn't
reach such a conclusion (esp. since physical world constraints are not
necessarily a part of these "negotiations").

>Saying that Y is the result of process X does not imply that any result
>from X is a valid Y. In particular 'reality is the outcome
>of social negotiation' does not imply that "real world" (whatever that is)
>constraints do not have an effect.

Do we have "valid" and "invalid" realities around ?

>If we decided that I was Superman then presumably there is good evidence
>for that assumption, since it is pretty hard to swallow. _In_such_a_case_
>I might jump. Being a careful soul I would probably try some smaller drops
>first!

Why would it be pretty hard to swallow?  And why do you need "good"
evidence?  For that matter, what IS good evidence - that ten guys
(possibly deranged or malicious) say so?  Have you thought about why you
would consider getting some real hard data by trying out smaller drops?  It is
because the Physical World just won't go away and the only real evidence
that even you would accept is the actual outcome of physical events.
The Physical World is the final arbiter of "reality" and "truth" no matter
what process you use to decide on your course of action.


>To say you would not jump would be to say that you would not accept that
>you were Superman no matter _how_ good the evidence.

If you accept the consensus of a group of people as "evidence", does the
degree of goodness depend on the number of people, or what?

> Unless you say that the
>concept of you being Superman is impossible ( say logically inconsistent with
>your basic assumptions about the world ), which is ruled out by the
>presuppositions of the example ( since if this was so you would never come
>to the consensus that you were him ), then you _must_ accept that sufficient
>evidence would cause you to believe and hence be prepared to jump.

Ah, well.. if you reject logical consistency as a valid basis for
argument then you could come to any conclusion/consensus in the world
you please - you could conclude that you (simultaneously) were and were
not Superman! Then, do you jump out or not ? ( or maybe teeter at the
edge :-)) On the other hand, if you accept logical consistency as a
valid basis for argument - you have no need for a crowd to back you up.

Come on, does anyone really believe that if he and his pals reach a consensus on
some aspect of the world, the world would change to suit them?  That is the
conclusion I keep getting out of all this nebulous and hazy stuff about
'reality' being a function of 'social processes'.
--
Anurag Acharya                Arpanet: acharya@centro.soar.cs.cmu.edu

"There's no sense in being precise when you don't even know what you're
 talking about"   -- John von Neumann

------------------------------

Date: Fri, 13 May 88 06:00:14 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: the mind of society

A confession first about _The Society of Mind_:  I have not yet read all
of this remarkable linearized hypertext document.  I do not think this
matters for what I am saying here, but I certainly could be mistaken and
would welcome being corrected.

SOM gives a useful and even illuminating account of what might be going
on in mental process.  I think this account misses the mark in the
following way:

While SOM demonstrates in an intellectually convincing way that the
experience of being a unitary self or an ego is illusory, it still
assumes that individual persons are separate.  It even appears to
identify minds with brains in a one-one correspondence.  But the logic
of interacting intelligent agents applies in obvious ways to interacting
persons.  What then, for example, of the mind of society?

(I bring to this training and experience in family therapy.  Much of the
subconscious mental processing and communication of family members serves
to constitute and maintain the family as a homeostatic system--analogous
to the traffic of SOM agents maintaining the ego, see below.
Individuals--especially those out of communication with family members--
recreate relationships from their families in their relationships
outside the family; and vice versa, as any parent knows.  I concur with
Gilbert Cockton's remarks about social context.  If in alienation we
pretend social context doesn't matter, we just extend the scope of that
which we ignore, i.e. that which we relegate to subconsciousness.)

Consider the cybernetic/constructivist view that mind is not
transcendent but rather is immanent (an emergent property) in the
cybernetic loop structure of what is going on.  I believe SOM is
consistent with this view as far as it goes, but that the SOM account
could and should go farther--cf e.g. writings of Gregory Bateson, Paul
Watzlawick, Maturana, Varela, and others; and in the AI world, some
reaching in this direction in Winograd & Flores _Understanding
Computers_.

Minsky cites Freud ("or possibly Poincare'") as introducing serious
consideration of subconscious thought.  However, the Buddhists, for
example, are pretty astute students of the mind, and have been
investigating these matters quite systematically for a long time.

What often happens when walking, jogging, meditating, laughing, just
taking a deep sighing breath, etc is that there are temporarily no
decisions to be made, no distinctions to be discriminated, and the
reactive mind, the jostling crowd of interacting agents, quiets down a
bit.  It then becomes possible to observe the mental process more
"objectively".  The activity doesn't stop ("impermanence" or ceaseless
change is the byword here).  It may become apparent that this activity
is and always was out of control.  What brings an activity of an agent
(using SOM terms) above the threshold between subconscious and conscious
mental activity?  What lets that continuing activity slip out of
awareness?  Not only are the activities out of control--most of them
being most of the time below the surface of the ocean, so to speak, out
of awareness--but even the constant process of ongoing mental and
emotional states, images, and processes coming to the surface and
disappearing again below the surface turns out also to be out of
control.

This temporary abeyance in the need to make decisions has an obvious
relation to Minsky's speculation about "how we stop deciding" (in
AIList V6 #98):

MM> I claim that we feel free when we decide to not try further to
MM> understand how we make the decisions: the sense of freedom comes from a
MM> particular act - in which one part of the mind STOPs deciding, and
MM> accepts what another part has done.  I think the "mystery" of free will
MM> is clarified only when we realize that it is not a form of decision
MM> making at all - but another kind of action or attitude entirely, namely,
MM> of how we stop deciding.

The report here is that if you stop deciding voluntarily--hold the
discrimination process in abeyance for the duration of sitting in
meditation, for example, not an easy task--there is more to be
discovered than the subjective feeling of freedom.  Indeed, it can seem
the antithesis of freedom and free will, at times!

So the agents of SOM are continually churning away, and if they're
predetermined it's not in any way that amounts to prediction and control
as far as personal awareness is concerned.  And material from this
ongoing chatter continually rises into awareness and passes away out of
awareness, utterly out of personal control.  (If you don't believe me,
look for yourself.  It is a humbling experience for one wedded to
intellectual rigor and all the rest, I can tell you.)

Evidently, this fact of impermanence is due to there being no ego there
to do any controlling.  Thus, one comes experientially to the same
conclusion reached by the intellectual argumentation in SOM:  that there
is no self or ego to control the mind.  Unless perhaps it be an emergent
property of the loop structures among the various agents of SOM.
Cybernetics has to do with control, after all.  (Are emergent properties
illusory?  Ilya Prigogine probably says no.  But the Buddhists say the
whole ball of wax is illusory, all mental process from top to bottom.)

Here, the relation between "free will" and creativity becomes more
accessible.  Try substituting "creativity" for "free will" in all the
discussion thus far on this topic and see what it sounds like.  It may
not be so easy to sustain the claim that "there is no creativity because
everything is either determined or random." And although there is a
profound relation between creativity and "reaching into the random" (cf
Bateson's discussions of evolution and learning wrt double-bind theory),
that relation may say more about randomness than it does about
creativity.

If the elementary unit of information and of mind is a difference that
makes a difference (Bateson), then we characterize as random that in
which we can find no differences that make a difference.  Randomness is
dependent on perspective.  Changes in perspective and access to new
perspectives can instantly convert the random to the non-random or
structured.  As we have seen in recent years, "chaos" is not random,
since we can discern in its nonlinearity differences that make a
difference.  (Indeed, a cardinal feature of nonlinearity as I understand
it is that small differences of input can make very large differences of
output, the so-called "butterfly effect".)  From the point of view of
personal creativity, "reaching into the random" often means reaching
into an irrelevant realm for analogy or metaphor, which has its own
structure unrelated in any obvious or known way to the problem domain.
(Cf. De Bono.)  "Man's reach shall e'er exceed his grasp,/Else what's a
meta for?"  (Bateson, paraphrasing Browning.)

It is interesting that the Buddhist experience of the Void--no ego, no
self, no distinctions to be found (only those made by mind)--is logically
equivalent to the view associated with Vedanta, Qabala, neoplatonism,
and other traditions, that there is but one Self, and that the personal
ego is an illusory reflection of That i.e. of God.  ("No distinctions
because there is no self vs other" is logically equivalent to "no
distinctions because there is but one Self and no other", the
singularity of zero.)  SOM votes with the Buddhists, if it matters, once
you drop the presumed one-one correspondence of minds with brains.

On the neoplatonist view, there is one Will, and it is absolutely free.
It has many centers of expression.  You are a center of expression for
that Will, as am I.  Fully expressing your particular share of (or
perspective on) that Will is the most rewarding and fulfilling thing you
can do for yourself; it is in fact your heart's desire, that which you
want to be more than anything else.  It is also the most rewarding and
beneficial thing you can possibly do for others; this follows directly
from the premise that there is but one Will.  (This is thus a
perspective that is high in synergy, using Ruth Benedict's 1948 sense of
that much buzzed term.)  You as a person are of course free not to
discover and not to do that which is your heart's desire, so the
artifactual, illusory ego has free will too.  Its "desire" is to
continue to exist, that is, to convince you and everyone else that it is
real and not an illusion.

Whether you buy this or not, you can still appreciate and use the
important distinction between cleverness (self acting to achieve desired
arrangement of objectified other) and wisdom (acting out of the
recognition of self and other as one whole).  I would add my voice to
others asking that we develop not just artificial cleverness, but
artificial wisdom.  Winograd & Flores again point in this direction.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 13 May 88 14:19:00 GMT
From: vu0112@bingvaxu.cc.binghamton.edu  (Cliff Joslyn)
Subject: Alternative to Probability (was Re: this is philosophy ??!!?)

In article <5459@venera.isi.edu> Stephen Smoliar writes:

>We DEFINITELY need abstractions better than such logical constructs to
>deal with issues such as uncertainty and belief, but it is most unclear
>that probability theory is going to provide those abstractions.  More
>likely, we should be investigating the shortcomings of natural deduction
>as a set of rules which represent the control of reasoning and consider,
>instead, possibilities of alternative rules, as well as the possibility
>that there is no one rule set which is used universally but that
>different sets of rules are engaged under different circumstances.


Absolutely right.

Furthermore, such a theory exists: Fuzzy Systems Theory.  Over the past
fifteen years, through the work of Zadeh, Prade, Dubois, Shafer, Gaines,
Baldwin, Klir, and many others, we now understand that probability
measures in particular are a very special case of Fuzzy Measures in
general.  Belief, Plausibility, Possibility, Necessity, and Basic
Probability Measures all provide alternative, and very powerful,
formalisms for representing uncertainty and indeterminism.

The traditional concept of 'information' itself is also recognized as a
special case.  Other formalisms include measures of Fuzziness,
Uncertainty, Dissonance, Confusion, and Nonspecificity.

These methods are having a very wide impact in AI, especially with
regard to the representation of uncertainty in artificial reasoning.
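
To give a small concrete taste of these formalisms, here is a sketch of my
own (not code from any of the references below) computing Shafer's Belief
and Plausibility over a three-element frame from a basic probability
assignment.  When all the mass falls on singletons, Bel and Pl coincide and
reduce to an ordinary probability measure -- the sense in which probability
is a special case of the more general measures.

    /* Minimal sketch of Dempster-Shafer Belief and Plausibility over a
     * three-element frame {a,b,c}.  Subsets are bitmasks: bit 0 = a,
     * bit 1 = b, bit 2 = c.  A basic probability assignment m() puts
     * mass on focal subsets; Bel(A) sums m over subsets of A, Pl(A)
     * over subsets that intersect A.  If all mass sits on singletons,
     * Bel = Pl = an ordinary probability measure.
     */
    #include <stdio.h>

    #define FRAME 7                   /* bitmask for {a,b,c} */

    static double m[8];               /* basic probability assignment */

    static const char *name[8] = {
        "{}", "{a}", "{b}", "{a,b}", "{c}", "{a,c}", "{b,c}", "{a,b,c}"
    };

    static double bel(int A)
    {
        double s = 0.0;
        int B;
        for (B = 1; B <= FRAME; B++)
            if ((B & ~A) == 0)        /* B is a subset of A */
                s += m[B];
        return s;
    }

    static double pl(int A)
    {
        double s = 0.0;
        int B;
        for (B = 1; B <= FRAME; B++)
            if ((B & A) != 0)         /* B intersects A */
                s += m[B];
        return s;
    }

    int main(void)
    {
        int A;
        m[1] = 0.5;                   /* m({a})     = 0.5            */
        m[6] = 0.3;                   /* m({b,c})   = 0.3            */
        m[7] = 0.2;                   /* m({a,b,c}) = 0.2, ignorance */

        for (A = 1; A <= FRAME; A++)
            printf("%-7s  Bel = %.2f   Pl = %.2f\n",
                   name[A], bel(A), pl(A));
        return 0;
    }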

Primary references:
        Klir, George, _Fuzzy Sets, Uncertainty, and Information_,
        Prentice Hall, 1988.

        Dubois, D., and Prade, H., _Fuzzy Sets and Systems: Theory and
        Applications_, Academic Press, 1980.

        Shafer, G., _A Mathematical Theory of Evidence_, Princeton
        University Press, 1976.

        Zadeh, L.A., "The Role of Fuzzy Logic in the Management of
        Uncertainty in Expert Systems," in Gupta, M.M., et al.,
        _Approximate Reasoning in Expert Systems_, 1985, U. Cal.
        Berkeley.


Journals:
        _Fuzzy Sets and Systems_

        _International J. of Approximate Reasoning_

        _International J. of Man-Machine Studies_

        _Information Science_

        _International J. of General Systems_
--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 16 May 88 10:29 PDT
From: hayes.pa@Xerox.COM
Subject: Re: AIList Digest   V7 #1

I was fascinated by the correspondence between Gabe Nault and Mott Given in
vol7#1, concerning "an artificial intelligence language ... something more than
lisp or xlisp."    Can anyone suggest a list of features that a programming
language must have in order to qualify as an "artificial intelligence
language"?

Pat Hayes

------------------------------

Date: 16 May 88 18:26:11 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: AIList V6 #86 - Philosophy

David Sher has injected some new grist into the discussion of
"responsibility" for machines and intelligent systems.

I tend to delegate responsibility to machines known as "feedback
control systems".  I entrust them to maintain the temperature of
my house, oven, and hot water.  I entrust them to maintain my
highway speed (cruise control).  When these systems malfunction,
things can go awry in a big way.  I think we would have no trouble
saying that such feedback control systems "fail", and their failure
is the cause of undesirable consequences.
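
(For concreteness, the kind of machine I have in mind is nothing grander
than the toy bang-bang thermostat sketched below -- an illustrative loop of
my own, not the code in any actual product.)

    /* Toy bang-bang thermostat: a "responsible" feedback controller
     * in a dozen lines.  Illustrative only.
     */
    #include <stdio.h>

    int main(void)
    {
        double temp = 15.0;            /* current room temperature, C  */
        double setpoint = 20.0;        /* what we entrust it to hold   */
        int heater_on = 0;
        int t;

        for (t = 0; t < 30; t++) {
            if (temp < setpoint - 0.5)
                heater_on = 1;         /* too cold: switch heater on   */
            else if (temp > setpoint + 0.5)
                heater_on = 0;         /* too warm: switch heater off  */

            temp += heater_on ? 0.8 : -0.3;   /* crude room dynamics   */
            printf("t=%2d  temp=%5.1f  heater=%s\n",
                   t, temp, heater_on ? "on" : "off");
        }
        return 0;
    }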

The only interesting issue is our reaction.  I say fix them (or
improve their reliability) and get on with it.  Blame and punishment
are pointless.  If a system is unable to respond, doesn't it make
more sense to restore its ability than to merely label it "irresponsible"?

--Barry Kort

------------------------------

Date: Mon, 16 May 88 19:00 MST
From: DanPrice@HIS-PHOENIX-MULTICS.ARPA
Subject: Sociology vs Science Debate

In regard to the Sociology vs Science debate, it seems to me that in
business and politics, the bigger and more important a decision, the less
rational the process used to arrive at it.  Or to put it another way,
emotion overrides logic every time!  Question:  What is the relationship
between emotion and intelligence, and how do we program emotion into our
logical AI machines??  Do we have an AI system that can tell whether a
respondent is behaving in an emotional or a logical way??

------------------------------

Date: 26 May 88 16:27:03 GMT
From: krulwich-bruce@yale-zoo.arpa  (Bruce Krulwich)
Subject: Assumptions in dialog (was Re: Acting irrationally)

In article <180@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>True communication can only occur when both parties understand what all
>the symbols used to communicate mean.  This doesn't mean you have to
>explicitly define what you mean by "tree" every time you use the word
>tree, but it's a good idea to define it once, especially if it's something
>more complex than "tree" (with due respect to all sentient hardwood).

This can't be true.  True communication occurs whenever the two parties'
understandings of the words used overlap in the areas in which they are in
fact being used.  Every word that a speaker uses will have a large set of
information associated with it in the speaker's mind, but only a small
subset of that information will actually be needed to understand what the
speaker is saying.  The trick is for the listener to (1) have the necessary
information as a subset of the information that he has about the word
(which is what you are considering above), and (2) correctly choose that
subset from the information he has (which is a form of the indexing problem).

>>The problem comes in deciding WHAT needs to be explicitly articulated
>>and what can be left in the "implicit background." That is a problem
>>which we, as humans, seem to deal with rather poorly, which is why
>>there is so much yelling and hitting in the world.

Au contraire, humans do this quite well.  True, there are problems,
but most everyday communication is quite successful.  Computers at this
point can't succeed at this (the indexing problem) anywhere near as well
as people do (yet).

>Here's a simple rule: explicitly articulate everything, at least once.
>The problem, as I see it, is that there are a lot of people who, for
>one reason or another, keep some information secret (perhaps the
>information isn't known).

You are vastly underestimating the amount of knowledge you (and everybody)
have for every word, entity, and concept you know about.  More likely is
the idea that people have a pretty good idea of what other people know (see
Wilks' recent work, for example).  Again, this breaks down seemingly often,
but 99% of the time people seem to be correct.  Just because it doesn't
take any effort to understand simple sentences like "John hit Mary" doesn't
mean that there isn't a lot of information selection and assumption-making
going on.


Bruce Krulwich

------------------------------

End of AIList Digest
********************

∂01-Jun-88  2145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #10  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 1 Jun 88  21:45:02 PDT
Date: Thu  2 Jun 1988 00:04-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #10
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 10

Today's Topics:
  expert systems in the railroad industry
  Re: ai languages on unix wanted
  Connectionist Medical Expert Systems
  Re: References Needed: Case based reasoning
  References re AI in weather forecasting?
  Re: References re AI in weather forecasting?
  Traveling Salesman Problem (a request)
  BRAINS
  Re: ai languages on unix wanted (Poplog availability)
  Re: Genetic algorithms
  Connectionist reference wanted
  ai expert system shells for IBM AT or compatibles
----------------------------------------------------------------------

Date: 24 May 88 07:07:08 GMT
From: portal!cup.portal.com!Barry_A_Stevens@uunet.uu.net
Subject: expert systems in the railroad industry

A man with Canadian Pacific Railway in Montreal has built an expert system
which works in conjunction with a gas chromatograph and/or spectrograph to
perform an analysis of the oil in large engines. By looking for the metals
carried in the oil, the system detects expensive engine failures in
advance. They are claiming some impressive savings.
   The details are not at hand now. If interested, please call or write,
I'll try to help.
Barry Stevens
Applied AI Systems
PO Box 2747
Del Mar CA 92014
619-755-7231

------------------------------

Date: 24 May 88 15:54:25 GMT
From: nyser!cmx!jfbrule@itsgw.rpi.edu  (Jim Brule)
Subject: Re: ai languages on unix wanted

>} I am starting a Master's thesis and am interested in finding
>} an artificial intelligence language that either runs under Unix or
>} can be ported to a Unix system.
>

Try POPLOG: three "AI languages" (LISP, PROLOG, POP-11) in a single
environment. Nice online help, nice interfaces between the languages
(more often than not, at least), and Common Lisp. Contact Robin
Popplestone at UMass Amherst.



--
  //\//\\  //\//\\  //\//\\|"Time flies like an arrow;
\// //  \\// //  \\// //  \|  Fruit flies like a banana."  G. Marx
// //\\  // //\\  // //\\  |--------------------------------------
/\//  \\//\//  \\//\//  \\/|Jim Brule' | jfbrule@rodan.acs.syr.edu

------------------------------

Date: 25 May 88 17:59:16 GMT
From: bhb@cadre.dsl.pittsburgh.edu  (Barry Blumenfeld)
Subject: Connectionist Medical Expert Systems

Can anyone give me pointers to people developing medical expert systems
using a connectionist architecture?


Barry Blumenfeld
bhb@cadre.dsl.pittsburgh.edu
bhbst@cisunx.UUCP

------------------------------

Date: 26 May 88 12:09:01 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu  (Marty Hall)
Subject: Re: References Needed: Case based reasoning

In article <4007@csli.STANFORD.EDU> leey@csli.STANFORD.EDU (Yichin Lee) writes:
>
>Could anyone point me to some good references to Case Based Reasoning?

Try the Proceedings of the 1988 DARPA Case-Based Reasoning Workshop that just
took place in (beautiful! :-) Clearwater Beach, Florida.  The Proceedings were
edited by Janet Kolodner and included some earlier AAAI papers as well as
new ones.  Morgan Kaufmann published the proceedings, so presumably you can get
it from them: Morgan Kaufmann Publishers, Inc., 2929 Campus Dr, San Mateo, CA
94403.  ISBN # for the proceedings is 0-934613-93-1.
        Regards-
                        - Marty Hall
--
ARPA (preferred) - hall@alpha.ece.jhu.edu [hopkins-eecs-alpha.arpa]
UUCP   - ..seismo!umcp-cs!jhunix!apl_aimh | Bitnet  - apl_aimh@jhunix.bitnet
Artificial Intelligence Laboratory, MS 100/601,  AAI Corp, PO Box 126,
Hunt Valley, MD  21030   (301) 683-6455

------------------------------

Date: 26 May 88 15:53:01 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu  (Marty Hall)
Subject: References re AI in weather forecasting?

Any pointers on where to look re AI in weather forecasting?  I have a couple
from AI in Engineering Proceedings, but can't find any others.
     Thanks!
                            - Marty Hall
--
ARPA - hall@bravo.cs.jhu.edu [hopkins-eecs-bravo.arpa]
UUCP   - ..seismo!umcp-cs!jhunix!apl_aimh | BITNET  - apl_aimh@jhunix.bitnet
Artificial Intelligence Laboratory, MS 100/601,  AAI Corp, PO Box 126,
Hunt Valley, MD  21030   (301) 683-6455

------------------------------

Date: 27 May 88 14:29:34 GMT
From: bbn.com!aboulang@bbn.com  (Albert Boulanger)
Subject: Re: References re AI in weather forecasting?

The Environmental sciences group at NOAA research labs in Boulder has
been holding a yearly meeting called AIRES (AI Research in
Environmental Science). I don't know if they are having a meeting this
year. A point of contact would be William Moninger at the NOAA Labs. I
attended the first one. The following is some info on projects that I
know of:

Synoptic Scale forecasting over Canada.
"Knowledge Representation in an Expert Storm Forecasting System"
Renee Ellio & Johannes de Haan
IJCAI 85
&
"Representing Quantitative and Qualitative Knowledge in a Knowledge-Based
Storm-Forecasting System"
Renee Ellio & Johannes de Haan
Int. J. Man-Machine Studies (1986) 25, 523-547

The survey (very brief!) paper you saw:
"Expert Systems in Meteorology"
Benoit Faller
AI in Engineering proceedings
Mentions the Canada work, fog forecasting, & Avalanche prediction

Severe thunderstorm forecasting.
"Validation of a Weather Forecasting Expert System"
Steven Zubrick, Radian Corp.
Machine Intelligence Workshop II Loch Lomond, Scotland, March 1985.
+
"RuleMaster: An Expert System to Aid in Severe Thunderstorm Forecasting"
Steven Zubrick & Charles Reise
14th Conference on Severe Local Storms, Indianapolis, IN, Oct. 29- Nov
1, 1985.

Downburst detection from Doppler radar.
Steve Campbell MIT Lincoln Labs

Visibility prediction for airports.
Mark Stunder Geomet (Contract with AFGL Hanscom AFB)

Some PROFS related work (ARCHER), William Moninger NOAA ERL.

Short term mesoscale forecasting (nowcasting) over the Cape Canaveral area
for NASA.
James Davis & Robert McArthur ADL Inc.

Solar flare forecasting (THEO).
Patrick McIntosh
NOAA Space Environmental Laboratory, Boulder.

Recognizing patterns from weather maps.
Bill Havens, University of British Columbia.

Downslope snowfall forecasting.
George Swetnam Mitre Corp & Richard Bunting UCAR.

There are of course other environment-related projects/ideas:
flood prediction, forest-fire modeling/prediction (Don Latham at the
BLM, Missoula Mont. is a point of contact here.)
I am sure I have left projects out.

Now, the funding picture for weather expert systems in the USA: NOAA's
money is mostly for operational tasks.  They do conduct research, but
mostly in-house (PROFS, for instance).  NASA is a possible source of
funding. The ALBM project at DARPA has a weather forecasting component.
The NEXRAD (joint military and NOAA) Doppler radar project may have some
money. AWIPS 90 (NOAA's next weather forecasting system for the 1990s) has
a short description of expert system needs. The FAA and BLM are also
possibilities. Perhaps the best option would be to build nowcasting
systems in the private sector.

My apologies for any errors in my listing.


Albert Boulanger
BBN Labs Inc.
ABoulanger@bbn.com (arpa)
Phone: (617)873-3891

------------------------------

Date: Fri, 27 May 88 12:00:20 EDT
From: csrobe@icase.arpa (Charles S. Roberson)
Subject: Traveling Salesman Problem (a request)

Greetings,

    I am currently doing some work with the TSP and as a result I would like
help from the net in obtaining two items:

        (1) a standard algorithm that currently performs well on the TSP,
and
        (2) maps of cities that are used in classical/pathological cases.

Particularly, we would like the code used by S. Lin and B. W. Kernighan
in "An Effective Heuristic Algorithm for the Traveling-Salesman Problem"
published in _Operations_Research_ (1973), Vol 21, pp. 498-516.  For the
cities, we would like problems with 20 to 100 cities given in x-y coordinates,
if possible.
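
For calibration, the sort of baseline we would compare any contributed code
against is a plain nearest-neighbour construction like the sketch below (a
quick illustration of my own on random cities in the unit square --
emphatically not the Lin-Kernighan code we are asking for):

    /* Quick-and-dirty nearest-neighbour tour on random planar cities.
     * A baseline only -- not the Lin-Kernighan heuristic requested above.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N 50                      /* number of cities */

    static double x[N], y[N];

    static double dist(int i, int j)
    {
        double dx = x[i] - x[j], dy = y[i] - y[j];
        return sqrt(dx * dx + dy * dy);
    }

    int main(void)
    {
        int visited[N] = {0}, tour[N];
        double len = 0.0;
        int i, step;

        for (i = 0; i < N; i++) {     /* random cities in the unit square */
            x[i] = (double)rand() / RAND_MAX;
            y[i] = (double)rand() / RAND_MAX;
        }

        tour[0] = 0;
        visited[0] = 1;
        for (step = 1; step < N; step++) {
            int cur = tour[step - 1], best = -1;
            double bestd = 1e30;
            for (i = 0; i < N; i++)
                if (!visited[i] && dist(cur, i) < bestd) {
                    bestd = dist(cur, i);
                    best = i;
                }
            tour[step] = best;
            visited[best] = 1;
            len += bestd;
        }
        len += dist(tour[N - 1], tour[0]);   /* close the tour */
        printf("nearest-neighbour tour length: %f\n", len);
        return 0;
    }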

Of course, *any* tidbit of information that someone is willing to share
will be greatly appreciated.

Thanks,
-c
+-------------------------------------------------------------------------+
|Charles S. Roberson          ARPANET:  csrobe@icase.arpa                 |
|ICASE, MS 132C               BITNET:   $csrobe@wmmvs.bitnet              |
|NASA/Langley Rsch. Ctr.      UUCP:     ...!uunet!pyrdc!gmu90x!wmcs!csrobe|
|Hampton, VA  23665-5225      Phone:    (804) 865-4090                    |
+-------------------------------------------------------------------------+

------------------------------

Date: 28 May 88 20:35:30 GMT
From: well!jjacobs@lll-lcc.llnl.gov  (Jeffrey Jacobs)
Subject: BRAINS


The current issue of IEEE Expert has a sidebar on expert systems
developed in Japan; the majority of them were developed in something
called "Brains".  Is anybody familiar with this tool?  References
or an exposition would be appreciated.

-Jeff Jacobs, CONSART Systems Inc., Technical & Managerial Consultants
-P.O. Box 3016, Manhattan Beach, CA 90266, (213)376-3802
-BIX: jeffjacobs, CIS: 75076,2603, USENET: jjacobs@WELL

------------------------------

Date: 30 May 88 15:04:10 GMT
From: mcvax!ukc!warwick!cvaxa!aarons@uunet.uu.net  (Aaron Sloman)
Subject: Re: ai languages on unix wanted (Poplog availability)

>Subject: ai languages on unix wanted
>From: gabe@viusys.UUCP (Gabe Nault @ Unisys, D.A. MINIS PMO, McLean, VA)
>Newsgroups: comp.sources.wanted,comp.ai

>>From: flash@ee.qmc.ac.uk
>>    (Flash Sheridan @ EE Dept, Queen Mary College, U London E1-4NS)
>>Newsgroups: comp.sources.wanted,comp.ai

>>Look at PopLog.  It's got an okay Common Lisp and a Prolog, plus Pop-11.

Thanks for the plug. However, I must make a couple of minor corrections.
1. Standard ML is also available in Poplog as an optional extra.

>>It's cheap or free to academics.

2. It is (was) free only to SERC/Alvey-funded academics in the UK, though
academic discounts for others are generally large (e.g. 80 to 85%).

>>Try aarons@cvaxa.susx.ac.uk

3. Although Poplog is developed at Sussex University we don't handle
most of the distribution. We distribute only for UK academics, who should
contact:
    Alison Mudd                 - alim@cvaxa.sussex.ac.uk or
    School of Cognitive Sciences
    University of Sussex
    Brighton BN1 9QN            - phone 0273 - 606755

For Academic enquiries/sales in USA and Canada about Poplog or Alphapop
(reviewed in Byte May 1988 - but only runs on Mac at present), contact

    Prof Robin Popplestone
    Dept. of Computer and Information Science
    Lederle Graduate Research Center
    University of Massachusetts
    Amherst, MA  01003, USA

Email pop@cs.umass.edu
or
    Prof Robin Popplestone
    Computable Functions Inc.,
    35 South Orchard Drive,
    Amherst, MA 01002, USA      Phone(413) 253-7637

For non-academic enquiries/sales of Poplog in USA/Canada
    Systems Designers International Inc
    Industrial Division
    New Castle Corporate Commons,
    55 Read's Way,
    New Castle,
    Delaware 19720, USA
    Phone (302) 323 1900  (800)888-9988

elsewhere
    The AI Business Centre
    SD-Scicon
    Pembroke House,
    Pembroke Broadway
    Camberley, Surrey, GU15 3XD
                                    Phone +44 (276) 686200

>>From: flash@ee.qmc.ac.uk (Flash Sheridan)
>>Reply-To: sheridan@nss.cs.ucl.ac.uk
>>or_perhaps_Reply_to: flash@cs.qmc.ac.uk

I hope this information is helpful.

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
              aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
    JANET     aarons@cvaxa.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac
        or    aarons%uk.ac.sussex.cvaxa%ukacrl.bitnet@cunyvm.cuny.edu

As a last resort (it costs us more...)
    UUCP:     ...mcvax!ukc!cvaxa!aarons
            or aarons@cvaxa.uucp
Phone:  University +(44)-(0)273-678294  (Direct line. Diverts to secretary)

------------------------------

Date: 30 May 88 16:46:17 GMT
From: pollux.usc.edu!pi@oberon.usc.edu  (Bill Pi)
Subject: Re: Genetic algorithms

In article <317@mmlai.UUCP> barash@mmlai.UUCP (Rev. Steven C. Barash) writes:
>
>A while back someone posted an extended definition of "Genetic algorithms".
>If anyone still has that, or has their own definition, could you please
>e-mail it to me?  (There's probably lots of room for opinions here;
>I'm interested in all perspectives).
>
>I would also appreciate any pointers to literature in this area.
To date, two conferences have been held on Genetic Algorithms:

Proceedings of the First International Conference on Genetic Algorithms and
Their Applications, ed. J. J. Grefenstette, 1985.

Genetic Algorithms and Their Applications: Proceedings of the Second Inter-
national Conference on Genetic Algorithms, ed. J. J. Grefenstette, 1987.

They can be ordered from:

    Lawrence Erlbaum Associates, Inc.
    365 Broadway
    Hillsdale, NJ 07642
    (201) 666-4110

A recent collection of research notes on GAs is

Genetic Algorithms and Simulated Annealing, ed. L. Davis, 1987, Morgan Kaufmann
Publishers, Inc., Los Altos, CA.

Also, a mailing list exists for Genetic Algorithms researchers. For more info,
send mail to "GA-List-Request@NRL-AIC.ARPA".

Jen-I Pi :-)                         UUCP:    {sdcrdcf,cit-cav}!oberon!durga!pi
Department of Electrical Engineering CSnet:   pi@usc-cse.csnet
University of Southern California    Bitnet:  pi@uscvaxq
Los Angeles, Ca. 90089-0781          InterNet: pi%durga.usc.edu@oberon.USC.EDU

------------------------------

Date: Mon, 30 May 88 22:32:18 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Connectionist reference wanted


      What's a good recent book on current connectionist thinking?  I've
read PDP, and Hillis' thesis, of course, but want something which summarizes
recent mainstream thinking in the field.

      Are energetic approaches along the lines of Witkin, Kass, and
Khatib, or the related simulated annealing techniques, considered connectionist?

      I write this with trepidation, knowing the predilection of this group
to engage in interminable definitional arguments.  So please, don't reply
to other replies to this query.  I am asking so that I can get a feeling
of where my own work might be considered to fit.

                                        John Nagle

[A thought: if these energetic techniques catch on, we are all going to have
to study tensor calculus.]

------------------------------

Date: 1 Jun 88 23:10:47 GMT
From: frants@polya.stanford.edu (Leonid Frants)
Subject: ai expert system shells for IBM AT or compatibles


I'm looking for an expert system shell running on an IBM PC/AT and
compatibles. It must have an interface to C or some other programming
language, and it should handle numeric data easily. The price can be up
to $2000, but the cheaper the better. Any advice or help would be
appreciated.

Thanks,

replies to:

Leonid Frants
frants@polya.stanford.edu

------------------------------

End of AIList Digest
********************

∂02-Jun-88  2006	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #16  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 2 Jun 88  20:06:27 PDT
Date: Thu  2 Jun 1988 22:43-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #16
To: AIList@AI.AI.MIT.EDU


AIList Digest             Friday, 3 Jun 1988       Volume 7 : Issue 16

Today's Topics:

 Queries:
   inductive expert system tools
   Response to: current connectionist literature, etcetera
   Expert Systems Shells Info

 Philosophy:
   Artificial Intelligence Languages
   Self Simulation
   Free Will & Self Awareness

----------------------------------------------------------------------

Date: 31 May 88 12:54:06 GMT
From: mcvax!dnlunx!marlies@uunet.uu.net  (Steenbergen M.E.van)
Subject: Wanted, information on inductive expert system tools.


Hello,

I am new to USENET. I am engaged in artificial intelligence research. At the
moment I am investigating the possibilities of inductive expert systems. In
the literature I have encountered the names of a number of (supposedly)
inductive expert system building tools: Logian, RuleMaster, KDS, TIMM,
Expert-Ease, Expert-Edge, VP-Expert. I would like to have more information
about these tools (articles about them or the names of dealers in Holland). I
would be very grateful to everyone sending me any information about these or
other inductive tools. Remarks of people who have worked with inductive expert
systems are also very welcome. Thanks!

Marlies

          ..!mcvax!dnlunx!marlies

------------------------------

Date: Thu, 2 Jun 88 03:34:49 CDT
From: lugowski@resbld.csc.ti.com
Subject: current connectionist literature, etcetera

Responding to John Nagle's CURRENT connectionist literature inquiry:

In my opinion, there is no good comprehensive book on current
connectionist thinking.  For one thing, folks are too busy going to
conferences.  For another, everyone has their own little garden to
tend.  Recent book content of interest includes "Neural Darwinism" (to
judge from preprints) as well as the commentaries by Jim Anderson in
"Neuroscience", a recent compendium of not-so-recent papers.  You
don't want to miss a not-yet-out MIT/Bradford Books book, Pentti
Kanerva's 1984 thesis (CSLI 84-7), if you haven't read it yet.  Other
than that, I'd repeat the obligatory advice: monitor technical
reports, the journals "Nature" and "Neural Networks" and the two
connectionist mailing lists.  To apply for subscription to those
lists, send to:

   connectionists-request@q.cs.cmu.edu    (sparsely firing connectionists)
   neuron-digest-request@csc.ti.com       (everyone, sparse and otherwise)


As for categorizing work, anything small-grained, bottom up and
parallel probably can pass for connectionist.  It's not the formalism,
it's the claim, really: One must make massively parallel claims
pertaining to massive parallelism.  (Smirk, lest I get crucified.)

Simulated annealing is "very connectionism".  Some of the nicest
connectionist work of late (Durbin & Willshaw, Cambridge, also stuff
out of Los Alamos) at least references simulated annealing as a
benchmark.  The trick one would like to see done is casting simulated
annealing as a localized computation *without* the closed-form cost
function or globally computed energy -- everything strictly "grassroots".
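
(For readers who haven't met it, that "closed-form cost function or globally
computed energy" is the E() in a conventional annealing loop like the one
below -- a throwaway sketch of mine minimizing a trivial quadratic, not
anybody's connectionist model.)

    /* Conventional simulated annealing on a trivial 1-d energy
     * E(s) = (s - 3)^2, just to show where the *global* energy
     * function sits in the standard formulation.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static double energy(double s)            /* the closed-form, global E() */
    {
        return (s - 3.0) * (s - 3.0);
    }

    static double uniform(void)
    {
        return (double)rand() / RAND_MAX;
    }

    int main(void)
    {
        double s = 10.0;                      /* current state */
        double T = 5.0;                       /* temperature   */
        int i;

        for (i = 0; i < 2000; i++) {
            double cand = s + (uniform() - 0.5);      /* local move  */
            double dE = energy(cand) - energy(s);
            if (dE < 0.0 || uniform() < exp(-dE / T)) /* Metropolis  */
                s = cand;
            T *= 0.995;                               /* cool slowly */
        }
        printf("final state %f, energy %f (minimum is at 3)\n",
               s, energy(s));
        return 0;
    }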

As for tensor calculus, the very idea appears contra connectionism.
I hold with those who would like to see discrete, adaptive and local
formalisms take over the domains historically ceded to 19th century's
closed-form mathematical analysis and its applications.  Tensor
calculus?  Sure, but check them determinants at the bar, pardner...

[Above opinions are strictly mine.]

                -- Marek Lugowski

                   lugowski@resbld.csc.ti.com
                   lugowski@ngstl1.ti.com
                   marek@iuvax.cs.indiana.edu

------------------------------

Date: 2 Jun 88 13:14:16 GMT
From: uh2@psuvm.bitnet  (Lee Sailer)
Subject: Expert Systems Shells Info wanted

Here's your chance to help out some poor folk at a small college...8-)

I have two students who want to learn about expert systems.  One wants
to build a system that answers micro-economics questions---"The banana crop
fails, what happens to the price of apples?"---and the other will probably
do something in manufacturing.

I'm the "advisor".  I have lots of book knowledge about ES, but we don't have
any software here at this point.  I need pointers to useful systems and
advice.  We have msdos machines and Macintoshes, plus an odd Unix box or two,
and of course the ever popular IBM mainframe.

What we don't have is much money.

Advice gratefully accepted.

------------------------------

Date: 31 May 88 16:22:26 GMT
From: elk@cblpn.att.com (Edwin King)
Reply-to: elk@cblpn.att.com (55214-Edwin King)
Subject: Re: Artificial Intelligence Languages


>Can anyone suggest a list of features which a programming
>language must have which would qualify it as an "artificial intelligence
>language"  ?

I realize this may not exactly jibe with the current list of "AI languages"
but, in my opinion, there are only a few features a language
must have to be of any real use for AI.  They are:

          1) True linked list capability.  Sure, there are ways
             to fake this, but the headaches are enormous.  But,
             do be aware that as long as you can use pointers
             and such to create this effect, I will include it
             on the list.  Structure references are helpful
             as well (such as struct in C, or record in PASCAL).
             I don't think the actual manipulations of these lists
             have to be built-in since building a library for that
             purpose is easy enough that most folks have probably
             already done it.

          2) For some (not all) AI fields, easy access to the
             hardware itself is nice (robotics and the like).

          3) Easy-to-use string functions, or a library providing them.

So, by these criteria, all the commonly held "AI languages" would fit
(PROLOG, LISP, POP, et cetera ad nauseam).  But I really think
a few others (C, Pascal, ADA, Bliss--though that may be stretching a bit--
and definitely ASSEMBLY) can also be used effectively given just a little
overhead to library building.  A small sketch of point (1) follows below.
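
For what it's worth, point (1) faked in C is no more than a struct and a
pointer -- a toy sketch of my own, not from any particular library:

    /* Point (1) faked in C: a cons-cell style linked list plus one of
     * the primitives everyone ends up writing for themselves anyway.
     * Toy sketch only.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct cell {
        int head;                     /* car */
        struct cell *tail;            /* cdr */
    };

    static struct cell *cons(int head, struct cell *tail)
    {
        struct cell *c = malloc(sizeof *c);
        if (c == NULL) {
            perror("malloc");
            exit(1);
        }
        c->head = head;
        c->tail = tail;
        return c;
    }

    int main(void)
    {
        /* build the list (1 2 3) and walk it */
        struct cell *list = cons(1, cons(2, cons(3, NULL)));
        struct cell *p;

        for (p = list; p != NULL; p = p->tail)
            printf("%d ", p->head);
        printf("\n");
        return 0;
    }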


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  Ed King           | Have we been here before or are we yet to come? ~
~  elk@cblpn.ATT.COM |              -- Sarah Jane Smith                ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

------------------------------

Date: 2 Jun 88 16:55:10 GMT
From: sdcrdcf!csun!sdsu!caasi@hplabs.hp.com  (Richard Caasi)
Subject: Re: Self Simulation

In article <33245@linus.UUCP> writes:
>I was captured by the notion of self-simulation, and started day-dreaming,
>imagining myself as an actor inside a simulation.  I found that, as the
>director of the day-dream, I had to delegate free will to my simulated
>self.  The movie free-runs, sans script.  It was just like being asleep.
>
>So, perhaps a robot who engages in self-simulation is merely dreaming
>about itself.  That's not so hard.  I do it all the time.
>
>--Barry Kort

Wasn't it Chuang-Tzu who wrote:  Once I dreamt I was a butterfly.
After I awoke, I didn't know if I was a man dreaming about being
a butterfly or a butterfly dreaming about being a man.

------------------------------

Date: 31 May 88 16:32:12 GMT
From: umix!umich!eecs.umich.edu!itivax!dhw@uunet.UU.NET (David H.
      West)
Subject: Re: AIList Digest   V7 #4 [bwk@mitre-bedford.arpa: Re: Free
         Will & Self Awareness]


In article <8805250055.AA01059@BLOOM-BEACON.MIT.EDU>, bwk@mitre-bedford.arpa
  (Barry W. Kort) writes:
> [...]  I can use my imagination to conceive a course
> of action which increases both of our utility functions.  Free will
> empowers me to choose a Win-Win alternative.  Without free will, I am
> predestined to engage in acts that hurt others.  Since I disvalue hurting
> others, I thank God that I am endowed with free will.
>
> Is there a flaw in the above line of reasoning?  If so, I would be
> grateful to someone for pointing it out to me.

   Whether there is a flaw depends on what one supposes the
conclusion(s) to be ;-)
   Robert Axelrod (in _The Evolution of Cooperation_) has shown by
simulation that an evolutionary system containing only rather simple
automata can learn to play Prisoners' Dilemma with a win-win
(actually TFT, which is win-win against another TFT) strategy.
In this particular instance it is the system that learns, rather
than individuals, which are too transient.
   Do you wish to ascribe free will to such a (deterministic but
stochastically driven) system?
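
(For the record, the automata need be no more complicated than the
bare-bones TIT FOR TAT sketched below -- an illustrative toy of mine, not
Axelrod's code -- playing iterated Prisoners' Dilemma against a copy of
itself.)

    /* Bare-bones iterated Prisoners' Dilemma: TIT FOR TAT vs TIT FOR TAT.
     * Payoffs use Axelrod's usual values: T=5, R=3, P=1, S=0.
     * Illustrative sketch only.
     */
    #include <stdio.h>

    #define COOPERATE 0
    #define DEFECT    1

    static int payoff(int me, int other)
    {
        if (me == COOPERATE && other == COOPERATE) return 3;  /* R */
        if (me == COOPERATE && other == DEFECT)    return 0;  /* S */
        if (me == DEFECT    && other == COOPERATE) return 5;  /* T */
        return 1;                                             /* P */
    }

    static int tit_for_tat(int opponents_last)
    {
        return opponents_last;        /* cooperate first, then copy */
    }

    int main(void)
    {
        int a_last = COOPERATE, b_last = COOPERATE;  /* "nice" opening */
        int score_a = 0, score_b = 0;
        int round;

        for (round = 0; round < 200; round++) {
            int a = tit_for_tat(b_last);
            int b = tit_for_tat(a_last);
            score_a += payoff(a, b);
            score_b += payoff(b, a);
            a_last = a;
            b_last = b;
        }
        printf("200 rounds of TFT vs TFT: %d vs %d (mutual cooperation)\n",
               score_a, score_b);
        return 0;
    }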

David West                 dhw%iti@umix.cc.umich.edu

------------------------------

End of AIList Digest
********************

∂03-Jun-88  0117	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #11  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Jun 88  01:17:08 PDT
Date: Thu  2 Jun 1988 00:18-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #11
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 11

Today's Topics:

  Seminars and Announcements

----------------------------------------------------------------------

Date: 10 May 88 05:09:30 GMT
From: csli!nash@labrea.stanford.edu  (Ron Nash)
Subject: CSLI Reports

The Spring 1988 catalog of reports published by The Center for the
Study of Language and Information at Stanford University is now
available online, in HyperCard format (for Macintosh computers).
Abstracts are included.

This is an update of (not a supplement to) the previous catalog.
So if you missed the last edition, this one contains the complete
list.

The file is available by anonymous ftp from csli.stanford.edu
The relevant file is: pub/csli-abstracts.hqx

Those without internet access can send a 3.5" disk and a self-addressed
envelope to:

        Publications
        CSLI
        Ventura Hall
        Stanford University
        Stanford, CA  94305-4115


(CSLI was founded in 1983 by researchers from Stanford University,
 SRI International, and Xerox PARC to further research and development
 of integrated theories of language, information, and computation.)

-------------------------------------------------------------------

Ron Nash
Center for the Study of Language and Information
Stanford University
nash@russell.stanford.edu

------------------------------

Date: 16 May 88 05:29:57 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Qualified referees for connectionist material


BBS (Behavioral & Brain Sciences), published by Cambridge University
Press, is an international, interdisciplinary journal devoted
exclusively to Open Peer Commentary on important and controversial
current target articles in the biobehavioral and cognitive sciences.

Because of the growing volume of connectionist and
connectionism-related submissions now being received at BBS, we are
looking for more referees who are qualified and willing to evaluate
submitted manuscripts. If you are professionally qualified in
connectionism, parallel distributed processing, associative networks,
neural modeling etc., and wish to serve as a referee for BBS, please
send your CV to the email or USmail address below. Individuals who are
already BBS Associates need only specify that this is a specialty area
that they wish to review in.

USMAIL: BBS, 20 Nassau Street, Rm. 240, Princeton NJ 08542
--
Stevan Harnad   ARPANET:        harnad@mind.princeton.edu        or
harnad%princeton.mind.edu@princeton.edu    UUCP:   princeton!mind!harnad
        CSNET:  harnad%mind.princeton.edu@relay.cs.net
        BITNET: harnad%mind.princeton.edu@pucc.bitnet

------------------------------

Date: Tue, 17 May 88 11:19:18
From: Dana S. Nau <nau@frabjous.cs.umd.edu>
Subject: Special SIGART issue on Knowledge Acquisition

Papers are being solicited for a special issue of the ACM SIGART
Newsletter on knowledge acquisition.  Send technical papers (5000
words), extended abstracts (1000 words), and any correspondence
by September 26, 1988 to Christopher Westphal, Knowledge Acquisition
Material, The BDM Corporation, 7915 Jones Branch Drive, McLean, VA
22102; (703) 848-7910.

------------------------------

Date: Tue, 24 May 88 17:16:02 PDT
From: Greg Jordan <gjordan@cirm.northrop.com>
Subject: Westex-88 Announcement


EXPERT SPEAKERS ON EXPERT SYSTEMS
ANNOUNCED FOR WESTEX-88

     Three special, all-day tutorials and a two-day program
featuring well-known invited speakers and contributed papers from
the artificial intelligence community are planned for WESTEX-88, the
third annual WESTEX Conference sponsored by the Western Committee of
the Computer Society of the IEEE and the IEEE Los Angeles Council.
It will be held June 28-30 at the Anaheim Marriott Hotel in Anaheim,
California.

     Professor Edward Feigenbaum of Stanford University has been
announced as featured speaker for WESTEX-88.

     In a special presentation on Wednesday, June 29, the
internationally prominent Feigenbaum will present new observations
and predictions regarding expert systems drawn from his decades of
leadership and experience in the field.  His topic will be "Expert
Systems: Payoffs and Promises."

     This year's conference and exposition will give special
emphasis to management issues associated with fielding successful
applications in expert systems.

All-day tutorials focusing on three different tracks will be offered
on Tuesday, June 28.

     Tutorial one, Basic Concepts, will be taught by Kenneth
Modesitt, Rockwell International, and will cover the concepts and
benefits of expert systems.  It is designed to provide analysts and
developers with an overview of the most important concepts and
techniques, and to suggest pragmatic ways of using the new
techniques.

     Tutorial two, Advanced Concepts, will be taught by Avron Barr,
Aldo Ventures.  This tutorial will cover the expert systems
development process with particular emphasis on managing the
process, including self-management of the knowledge engineer.

     Tutorial three, Special Topics, taught by Miriam Bischoff,
Teknowledge Inc., will cover actual experiences in screening
applications for expert systems as well as special topics for those
who have already made the commitment for the use and/or development
of expert systems.

June 29 Sessions Focus on Management of Expert Systems

     The two-day program beginning June 29 will focus on management
of expert systems and will include presentations by Steve Lukasik,
Northrop Corp., on "Expert Systems Genesis at DARPA, Its Progress
and Future", and George Friedman, Northrop Corp., on "Fundamental
Management Issues of Expert Systems."  Peter Friedland of NASA Ames
will speak on "An Overview of AI Activity at NASA Ames" during the
noon luncheon.

     Invited conference presentations on major expert systems
management issues will cover "A Systems Engineering View of Expert
Systems," Ed Taylor, TRW; "Testing and Evaluation of Expert
Systems," K. L. Bellman, Aerospace Corp., and "ADA and Expert
Systems Integrated into Large Scale Systems," Douglas Flaherty,
McDonnell Douglas.

     Two invited presentations on the subject "Deployment in
ADA: Problem or Solution?" will cover "Ada and Expert Systems,
Experience with Large Projects," from Mark Miller, Computer and
Thought, and "An Ada-Based Expert Systems Building Tool," from Brad
Allen, Inference Corp.

June 30 Sessions Track Management and Implementation Issues

     Invited presentations for the Thursday, June 30 program include
"Real-time Expert Systems," presented by Mike Buckley, Rockwell
International; "Multiprocessor Architectures for Expert Systems,"
presented by Harold Brown, Stanford University; "An Infrastructure
for Integration and Synchronization of Multiple Expert Systems,"
from James Greenwood, ADS; "ABE: An Environment for Large Scale
Intelligent System Integration," from Lee Erman, Teknowledge Inc.,
and "SMNET as a Development Environment," presented by Michael
Fielding,  Perceptronics.  Allen Sears of DARPA will speak on
"Intelligent Systems and Military Applications" during the noon
luncheon.

     Thursday afternoon's program will feature a number of
contributed papers focusing in two areas: expert systems techniques
and implementation issues.

     Advance registration fees for Institute of Electrical and
Electronics Engineers (IEEE) members are $140 for technical
sessions, luncheon and proceedings; $115 for the June 28 tutorials,
luncheon and texts.  Advance registration fees for non-IEEE members
are $185 for technical sessions, luncheon and proceedings, $160 for
June 28 tutorials, luncheon and texts.  Advance registration must be
postmarked no later than June 10, 1988.

     For more information contact Marti Wolf at 213/777-2965.

     WESTEX-88 is sponsored by Western Committee of the Computer
Society of the IEEE and IEEE Los Angeles Council and is managed by
Electronic Conventions Management, Los Angeles, California.

------------------------------

Date: 25 May 88 04:26:51 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Language Learnability:  BBS Call for Commentators


Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international journal of "open
peer commentary" in the biobehavioral and cognitive sciences, published
by Cambridge University Press. For information on how to serve as a
commentator or to nominate qualified professionals in these fields as
commentators, please send email to:         harnad@mind.princeton.edu
or write to:          BBS, 20 Nassau Street, #240, Princeton NJ 08542
                                                  [tel: 609-921-7771]
-----------------------------------------------------------------------
       The Child's Trigger Experience:   "Degree-0" Learnability

                         David Lightfoot
                      Linguistics Department
                      University of Maryland

A selective model of human language capacities holds that people come
to know more than they experience. The discrepancy between experience
and eventual capacity is bridged by genetically provided information.
Hence any hypothesis about the linguistic genotype (or "Universal
Grammar," UG) has consequences for what experience is needed and what
form people's mature capacities (or "grammars") will take. This BBS
target article discusses the "trigger experience," i.e., the experience
that actually affects a child's linguistic development. It is argued
that this must be a subset of a child's total linguistic experience
and hence that much of what a child hears has no consequence for the
form of the eventual grammar. UG filters experience and provides an
upper bound on what constitutes the triggering experience. This filtering
effect can often be seen in the way linguistic capacity can change between
generations. Children only need access to robust structures of minimal
("degree-0") complexity. Everything can be learned from simple, unembedded
"domains" (a grammatical concept involved in defining an expression's
logical form). Children do not need access to more complex structures.
--
Stevan Harnad   ARPANET:        harnad@mind.princeton.edu        or
harnad%princeton.mind.edu@princeton.edu    UUCP:   princeton!mind!harnad
        CSNET:  harnad%mind.princeton.edu@relay.cs.net
        BITNET: harnad%mind.princeton.edu@pucc.bitnet

------------------------------

Date: Wed, 25 May 88 10:46:34 EDT
From: Maureen Searle <msearle%watsol.waterloo.edu@RELAY.CS.NET>
Subject: Call for Papers


                              Reminder
                              --------

                        UNIVERSITY OF WATERLOO
             CENTRE FOR THE NEW OXFORD ENGLISH DICTIONARY
                        4TH ANNUAL CONFERENCE
                CALL FOR PAPERS - CALL FOR PANELISTS
                        INFORMATION  IN  TEXT

                         October 27-28, 1988
                          Waterloo, Canada

This year's conference will focus on ways that text stored as electronic
data allows information to be restructured and extracted in response to
individualized needs. For example, text databases can be used to:

     -  expand the information potential of existing text
     -  create and maintain new information resources
     -  generate new print information

Papers presenting original research on theoretical and applied aspects of
this theme are being sought.  Typical but not exclusive areas of interest
include computational lexicology, computational linguistics, syntactic
and semantic analysis, lexicography, grammar defined databases, lexical
databases and machine-readable dictionaries and reference works.

Submissions will be refereed by a program committee.  Authors should send
seven copies of a detailed abstract (5 to 10 double-spaced pages) by
June 10, 1988 to the Committee Chairman, Dr. Gaston Gonnet, at:

                      UW Centre for the New OED
                      University of Waterloo
                      Waterloo, Ontario
                      Canada, N2L 3G1

Late submissions risk rejection without consideration.  Authors will be
notified of acceptance or rejection by July 22, 1988.  A working draft
of the paper, not exceeding 15 pages, will be due by September 6, 1988
for inclusion in proceedings which will be made available at the
conference.

One conference session will be devoted to a panel discussion entitled
MEDIUM AND MESSAGE: THE FUTURE OF THE ELECTRONIC BOOK.  The Centre invites
individuals who are interested in participating as panel members to submit
a brief statement (approximately 150 words) expressing their major
position on this topic. Please submit statements not later than
June 10, 1988 to the Administrative Director, Donna Lee Berg, at the above
address.  Selection of panel members will be made by July 22, 1988.
The Centre is interested in specialists or generalists in both academic and
professional fields (including editors, publishers, software designers and
distributors) who have strongly held views on the information potential of
the electronic book.

                           PROGRAM COMMITTEE

Roy Byrd (IBM Corporation)           Michael Lesk (Bell Communications Research)
Reinhard Hartmann (Univ. of Exeter)  Beth Levin (Northwestern University)
Ian Lancashire (Univ. of Toronto)    Richard Venezky (Univ. of Delaware)
              Chairman:  Gaston Gonnet (Univ. of Waterloo)

------------------------------

Date: 25 May 88 19:05:03 GMT
From: feifer@locus.ucla.edu
Subject: AAAI-88: last call for student volunteers


ANNOUNCEMENT:  Last Call: Student Volunteers Needed for AAAI-88

DEADLINE:      July 1, 1988

AAAI-88 will be held August 20-26, 1988 in beautiful St. Paul,
Minnesota.  Student volunteers are needed to help with local
arrangements and staffing of the conference.  To be eligible for
a Volunteer position, an individual must be an undergraduate or
graduate student in any field at any college or university.

This is an excellent opportunity for students to participate
in the conference.   Volunteers receive FREE registration at
AAAI-88, conference proceedings, "STAFF" T-shirt, and are
invited to the volunteer party. More importantly, by
participating as a volunteer, you become more involved and
meet students and researchers with similar interests.

Volunteer responsibilities are varied, including conference
preparation, registration, staffing of sessions and tutorials and
organizational tasks.  Each volunteer will be assigned
twelve (12) hours.

If you are interested in participating in AAAI-88 as a
Student Volunteer, apply by sending the following information:

Name
Electronic Mail Address (for mailing from arpa site)
USMail Address
Telephone Number(s)
Dates Available
Student Affiliation
Advisor's Name

to:

valerie@SEAS.UCLA.EDU

or

Valerie Aylett
3531-K Boelter Hall
Computer Science Dept.
UCLA
Los Angeles, California  90024-1596



Thanks, and I hope you join us this year!


Richard Feifer
Student Volunteer Coordinator
AAAI-88 Staff
-----------------------------------------------------------------
Richard G. Feifer                 feifer@cs.ucla.edu
UCLA
145 Moore Hall  --  Los Angeles  --  Ca  90024

------------------------------

Date: Thu 26 May 88 14:21:38-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar:  Josh Tennenberg

                    BBN Science Development Program
                       AI Seminar Series Lecture

                    ABSTRACTION IN SYMBOLIC PLANNING

                             Josh Tennenberg
                        University of Rochester
                        (josh@cs.rochester.edu)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                        10:30 am, Tuesday May 31


The use of abstraction in planning is explored in order to simplify the
task of reasoning about the effects of an agent's actions within a complex
world.  Two representational issues emerge which form the basis of this
research.  First, the abstract views must sanction plan construction for
frequently occurring problems, yet never sanction the deduction of
contradictory assertions.  Second, a correspondence between the abstract
and concrete views must be maintained so that abstract solutions bear a
precise relationship to the concrete level solutions derived from them.
These issues are explored within two different settings.  In the first, an
abstraction hierarchy is induced by relaxing some of the constraints on the
application of actions.  In the second, a predicate mapping function is
defined which extends the notion of inheritance from object types to
arbitrary relations and actions.

------------------------------

Date: Thu, 26 May 88 16:04:31
From: GOLUMBIC%ISRAEARN.BITNET@CORNELLC.CCS.CORNELL.EDU

Date: 26 May 88, 16:03:16 IDT
From: Martin Charles Golumbic   972 4 296282         GOLUMBIC at ISRAEARN
To:   AILIST at AI.AI.MIT

                   "Pre-announcement announcement"
----------------------------------------------------------------------
           Annals of Mathematics and Artificial Intelligence

Arrangements have been finalized with the J. C. Baltzer Scientific
Publishing Company to form a new publication series entitled
the "Annals of Mathematics and Artificial Intelligence".
Martin Golumbic will serve as the editor-in-chief.
This new series will be parallel (maybe orthogonal) to the existing
"Annals of Operations Research", and Peter Hammer will be general
editor of the overall series which will include both.

The Annals of Math and AI will be devoted to reporting significant
contributions on the interaction of mathematical and computational
techniques reflecting the evolving disciplines of artificial
intelligence.  The Annals will publish monographs, edited volumes of
original manuscripts, survey articles and well-refereed conference
proceedings of the highest caliber within this increasingly important
field.  All papers will be subjected to peer refereeing at the
standard of the major scientific journals.  It is our intent to
represent a wide range of topics of concern to the scholars applying
quantitative, combinatorial, logical and algebraic methods to areas as
diverse as decision support, automatic deduction, reasoning,
knowledge-based systems, machine learning, computer vision, robotics
and motion planning as well as influencing the growth potential
of new areas of applied mathematics and computational theory
generated by this cross-fertilization.

This new series will be similar in format to the Annals of OR which
first appeared in 1984 and is now publishing at a rate of six volumes
per year.  The Annals will serve as a permanent record of research
developments, with each issue or volume focused on a topic and
featuring one or more guest editors.  In coordination with the
editorial board, the guest editors will be personally responsible for
the collection of papers to appear in that volume, for the refereeing
process and for the time schedule.  Smaller collections of papers will
be published in separate issues and combined into volumes of
approximately 400 pages.  Larger collections will be published as full
volumes.

Collections on the following topics are currently under preparation
to appear during 1989/90:  AI and Statistics (W. Gale and D. Hand),
Motion Planning (M. Sharir), Mathematical Stability in Computer Vision
(R. Hummel), Logic and Intelligent Database Systems (S. Tsur),
Formal aspects of semantic networks (J. Sowa), and several others
are under consideration.

Proposals are invited for additional collections on topics within
intelligent systems that show a strong foundational component.
For further information, conference organizers and potential guest
editors may contact

                  Prof. Martin Charles Golumbic
                         Editor-in-chief
        Annals of Mathematics and Artificial Intelligence
                  IBM Israel Scientific Center
                          Technion City
                          Haifa, ISRAEL

------------------------------

Date: Fri 27 May 88 10:37:36-EDT
From: adelson%cs.tufts.edu@relay.cs.net
Reply-to: adelson%cs.tufts.edu@relay.cs.net
Subject: FYI -- CAI Conference at Tufts


                   SOFTWARE, IMAGINATION, EDUCATION:
    Educationally Effective Curricular Software in Higher Education


SPEAKERS:
             Jon Barwise             John Kemeny
             John Seely Brown        Seymour Papert
             Marc H. Brown           Judah L. Schwartz
             Daniel C. Dennett       George Smith
             Mitchell Kapor          Edwin Taylor
             Alan Kay

Effective educational software is rare indeed--hard to create, and hard
to recognize.  This conference will address the difficult questions:
What software actually works with students, and why?  Leading thinkers
will explore the possibilities and limitations of computers in higher
education, and discuss demonstrations of the best existing software.

            CONFERENCE HELD MAY 31 THROUGH JUNE 3, 1988


For reservations and information
about fees and location phone or mail:
Judy Medler
617-628-5000 X 5209
CSNET:  BARNEY%CC.TUFTS.EDU
BITNET: JCMEDLER@TUFTS

Sponsored by the Curricular Software Studio,
with major funding by the Alfred P. Sloan Foundation.

------------------------------

Date: Mon 30 May 88 20:01:01-EDT
From: Ben Olasov <G.OLASOV@CS.COLUMBIA.EDU>
Subject: LISP BBoard

For the benefit of  any AI researchers or  LISP people who are  working
with, or considering working with the LISP interpreter in the  AutoCAD
CAD package on  AI/ CAD  interfaces of some  kind, there  is a  dialup
bulletin board in  New York City  intended to provide  LISP tools  for
just such work/ research.  The primary aim of the CAD section on  this
board  is  to  be  a  resource  for  AutoLISP  (and  LISP)  developers
generally, with  some emphasis  on applying  knowledge  representation
techniques in  AutoCAD's LISP  environment.

The dial-up number in New York City is (212) 980-0770.

There is no registration  fee or on-line time  charge required to  use
this board.

------------------------------

Date: 31 May 88 21:56:46 GMT
From: BOSCO.BERKELEY.EDU!grossman@ucbvax.berkeley.edu
Subject: AI and DECS 


A list of talks follows:


                           Workshop On

                AI and Discrete Event Control Systems
                        July 7 and 8, 1988
                     NASA-Ames Research Center
                     Moffett Field, California


Hamid Berenji, NASA-Ames Research Center
The Role of Approximate Reasoning in AI-based Control

Peter Caines, McGill University
Dynamical Logic Observers for Finite Automata, Part 1

James Demmel, Courant Institute
Hierarchical Control  Studies in Dextrous Manipulation
Using the Utah/MIT Hand

Russel Greiner, University of Toronto
Dynamical Logic Observers for Finite Automata, Part 2

Robert Hermann, NASA-Ames Research Center and Boston University
The Scott Theory of Fixed Points Symbolic Control

Michael Heymann, Israel Institute of Technology
Real-Time Discrete Event Processes

Peter Ramadge, Princeton University
Discrete Event Systems, Modeling and Complexity

Stan Rosenschein, CSLI, Stanford University
Real Time AI Systems

Gerry Sussman, Massachusetts Institute of Technology
Automatic Extraction of Features From Dynamical Systems


For more information, please contact:

Robert Grossman                                 (415) 642-8196
Department of Mathematics                       (415) 642-6526 (messages)
University of California, Berkeley              grossman@cartan.berkeley.edu
Berkeley, CA 94720                              grossman@ucbcarta.bitnet





------------------------------

End of AIList Digest
********************

∂03-Jun-88  0117	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #12  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Jun 88  01:17:39 PDT
Date: Thu  2 Jun 1988 00:48-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #12
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 12

Today's Topics:

  Twin Studies
  Philosophy
  Symbolics stock

----------------------------------------------------------------------

Date: 16 May 88 03:59:48 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: Re: AIList V6 #86 - Philosophy

In article <523@wsccs.UUCP>, dharvey@wsccs.UUCP (David Harvey) writes:
> lives.  Even a casual perusal of the studies of identical twins
> separated at birth will produce an uncanny amount of similarities, and
> this also includes IQ levels, even when the social environments are
> radically different.

ONLY a casual perusal of the studies of separated twins will have this
effect.  There is a selection effect:  only those twins are studied who
are sufficiently far from separation to be located!  A lot of these
so-called "separated" twins have lived in the same towns, gone to the
same schools, ...

------------------------------

Date: 20 May 88 09:00:08 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Twin Studies: Problems of Confounding Variables and Sample
         Populations

In article <2865@cvl.umd.edu> harwood@cvl.UUCP (David Harwood) writes:
>       Can you substantially prove that there is not sound research
>which shows comparatively significant psychological similarity of
>identical twins, even when growing up apart?

To start, yes there are similarities.  The results weren't being
questioned, it was the interpretation, and the original 'experimental'
design.  These are the two major weak links in psychological research,
and for that matter in mathematical modelling (e.g. sociobiological
applications of game theory).

On experimental design, there is a problem in assuming that any population
of separated identical twins share all the variation likely in the
full population.  The role of adoption agencies is particularly
important, as they all have ideals of parenthood which many social
groups will not be able to fulfil. Hence the separated identical twins
will be less environmentally separated than the Bronx and the Berkshires.
Another problem in experimental design is the very measure of
'radically different environments'.   The twin studies cannot rely on
assuming that ANY difference in environment could be relevant to
development; the relationship has to be established by separate research.

It is largely because of the uncritical approach to social environments
that any interpretations of 'results' will be invalid.  If you don't
control for confounding factors, your results aren't worth the paper
they're printed on.

I have no published work here either, but had to write on the topic as
part of my Education degree.  All the above is so obvious in
psychology that it wouldn't be worth publishing, except within a more
thorough review article.  I've bothered to post this to
        a) defend Richard's argument
        b) improve some people's awareness of experimental design
           and thus hopefully encourage more constructive criticism
           and less credulity about BIG twin studies.

Apologies to anyone who wants this sort of stuff out of comp.ai, but
if you are interested in computer simulation of human behaviour, I
don't see how you can justify the exclusion of anything to do with the
study of humanity from this group.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 20 May 88 21:35:25 GMT
From: wlieberm@teknowledge-vaxc.arpa  (William Lieberman)
Subject: Re: Twin studies (was AIList V6 #86 - Philosophy)

In article <8805201729.AA28919@decwrl.dec.com> cooper@pbsvax.dec.com writes:
>> <<Richard O'Keefe claiming that studies of separated identical twins being
>>   invalid because of a tendency for their environments to continue to
>>   be the same.>>
>
>I can't say that I am overly familiar with this area, but all the "separated
>twin" studies which I have seen discussed as evidential (rather than merely
>suggestive of further research) have seemingly controlled for this by
>comparing the variance of the characteristic under study in identical
>twins separated at birth (100% genetic similarity) against fraternal
>twins separated at birth (50% genetic similarity).  Is there a significant
>body of studies which I am unfamiliar with, or is there some reason to
>believe that the treatment of identical twins after separation is
>substantially different from the treatment of fraternal twins after
>separation?
>
>
>               Topher Cooper
>
>USENET: ...{allegra,decvax,ihnp4,ucbvax}!decwrl!pbsvax.dec.com!cooper
>INTERNET: cooper%pbsvax.DEC@decwrl.dec.com
>       or cooper@pbsvax.dec.com





Topher Cooper's remarks are well-thought out and relevant.

His thoughts should be extended a little, though, I feel.

At first, there would not seem to be any reason to believe the treatment of
identical twins (after separation) should be substantially different from the
treatment of fraternal twins (after similar separation). And there may, in
fact, not exist substantive difference in treatment.

But what is difficult conceptually (and therefore, in practice) to control for
are phenotypically-based differences (factors, such as looks, which are
observable) between, on the one hand, the set of identical twins (basically
no obvious differences), and on the other hand, the set of fraternal
twins (plenty of obvious, overt differences, such as in their looks - say
handsome vs. ugly).

If the fraternal twins differ only in LOOKING different (to the adopting
parents, etc.), that fact ALONE MAY cause differential behavior TOWARD those
children, setting off a cause-and-effect chain that winds up being measured
as "differences in intelligence (or behavior)" that are "due to" genetically
based differences!

Thus, while the difference observed within the set of fraternal twins
is demonstrably due to the fact of fraternal vs. identical origins, the thesis
that the difference is DUE to neurologically based differences in the
nervous system is NOT thereby demonstrated!  All that will have been
demonstrated, and I think most can agree has been demonstrated, is that the
differences observed are due to SOMETHING related to genetics - but one must
be very cautious about concluding that psychological factors, such as
intelligence, are necessarily the sole initial CAUSE of later observed
behavioral differences until a specific anatomical, biochemical, etc.
analysis has been done on the complete developmental structure of the brain
to show the matter one way or the other (which we are years away from being
able to do).

In other words, until such time as it is possible to specifically
measure every aspect of TOTAL behavior, one may not conclude that a
genetic difference is solely (or at all) linked to a conjectured fundamentally
neurologically-based difference.
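
(For concreteness, the identical-versus-fraternal comparison Topher Cooper
describes above is often summarized with Falconer's classic estimate, which
doubles the difference between the identical-twin and fraternal-twin
correlations for a trait.  A minimal sketch follows; the correlation values
in the comment are made-up placeholders, not data from any study, and the
confounds discussed in this thread apply to them just as much.)

    ;;; Falconer's estimate of broad heritability: H^2 ~ 2 * (rMZ - rDZ).
    (defun falconer-heritability (r-mz r-dz)
      "Estimate heritability from MZ (identical) and DZ (fraternal) twin correlations."
      (* 2 (- r-mz r-dz)))

    ;; (falconer-heritability 0.75 0.50)  =>  0.5   ; hypothetical numbers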

Bill Lieberman

------------------------------

Date: 21 May 88 20:03:15 GMT
From: mind!clarity!ghh@princeton.edu  (Gilbert Harman)
Subject: Re: Twin studies (was AIList V6 #86 - Philosophy)

Could someone please post references to the twin studies
being referred to?  I am only familiar with older ones that
have turned out to be based on fraudulent data.

        Gil Harman
        Princeton University Cognitive Science Laboratory
        Princeton, NJ 08542

ghh@princeton.edu
HARMAN@PUCC.BITNET

------------------------------

Date: 27 May 88 19:46:50 GMT
From: dan@ads.com (Dan Shapiro)
Reply-to: dan@ads.com (Dan Shapiro)
Subject: Re: [DanPrice@HIS-PHOENIX-MULTICS.ARPA: Sociology vs Science
         Debate]


I'd like to take the suggestion that "bigger decisions are made less
rationally" one step further...  I propose that
irrationality/bias/emotion (pick your term) are *necessary*
corollaries of intelligence - that they arise because of the need to
make decisions on partial information, to fill in "the gap", so to
speak, between what an agent knows and the responses it might apply.
The claim is that it is not in general possible to prove one's way
from situation and goals to action, and that some force encoding bias
is required.

I.e., if you place a donkey between two equally attractive bales of
hay, it doesn't starve.  It chooses the left one because it *likes*
the left bale of hay.

In a deeper form, this is an argument against deductive planning as an
action selection technique.
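
As a minimal sketch of the point -- the chooser, symbols, and bias function
below are illustrative, not anyone's actual planner -- consider a selector
that ranks actions by utility and falls back on a bias term only to break
ties:

    (defun choose-action (actions utility bias)
      "Pick the action with maximal UTILITY, using BIAS only to break ties."
      (reduce (lambda (a b)
                (let ((ua (funcall utility a))
                      (ub (funcall utility b)))
                  (cond ((> ua ub) a)
                        ((< ua ub) b)
                        ((>= (funcall bias a) (funcall bias b)) a)
                        (t b))))
              actions))

    ;; Two equally attractive bales; the donkey simply *likes* the left one.
    ;; (choose-action '(left-bale right-bale)
    ;;                (lambda (x) (declare (ignore x)) 1.0)       ; equal utility
    ;;                (lambda (x) (if (eq x 'left-bale) 1 0)))    ; bias
    ;; => LEFT-BALE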

------------------------------

Date: 27 May 88 22:31:07 GMT
From: ejs@orawest.sri.com (e john sebes)
Reply-to: ejs@orawest.uucp (e john sebes)
Subject: Re: [mcvax!ukc!its63b!aiva!jeff@uunet.uu.net: Re: Sorry, no
         philosophy allowed here.]


In article <19880527050233.8.NICK@MACH.AI.MIT.EDU>
>
>In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
>(Gilbert Cockton) says:
>> If you can't write it down, you cannot possibly program it.
>
>Not so.  I can write programs that I could not write down on paper
>because I can use other programs to so some of the work.  So I might
>write programs that are too long, or too complex, to write on paper.

Yes so. You can write such programs (such as a YACC application) because
someone else has written the other program (such as YACC). And that
someone else couldn't have written that other program unless he could
have written it down, or used some other other program.....

If you want to be more precise, try this version of Gilbert Cockton's remark:
    If nobody can write it down, then nobody can possibly program it.

I should have hoped that the essential point was obvious.
If anyone really believes that programming languages are some kind of
privileged formalism in which otherwise impossible things become
possible, I'd like to hear their views.

--John Sebes

------------------------------

Date: Fri, 27 May 88 23:04:18 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: the human mind as a logical system

It would seem that the human mind is very fault-tolerant with respect
to logical oddities.

Example: a human being can be a queer reasoner in the sense of
Smullyan.

I recall that a queer reasoner believes a proposition p (Bp) and
simultaneously believes he/she doesn't believe p (B - (Bp)), the minus
sign denoting logical negation.

Let John be a true believer of some obscure faith.  Say the Tur
religion by Edgar R. Burroughs in his Tarzan books.

Let p be the proposition "Tur exists".

Let John lament his lack of faith to a Tur priest.

Then John believes in Tur (Bp) but believes he doesn't believe in Tur
(B - (Bp)).
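
To spell this out as a toy program (the representation is illustrative:
beliefs are just a list of formulas rather than a real modal-logic theory,
and the function names are mine):

    (defun believes-p (beliefs prop)
      "True if PROP appears in the agent's belief set."
      (member prop beliefs :test #'equal))

    (defun queer-reasoner-p (beliefs prop)
      "Believes PROP, yet also believes (not (believes PROP))."
      (and (believes-p beliefs prop)
           (believes-p beliefs (list 'not (list 'believes prop)))))

    ;; John: believes Tur exists, and believes he does not believe it.
    ;; (queer-reasoner-p '(tur-exists (not (believes tur-exists))) 'tur-exists)
    ;; => true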

Andy Ylikoski

------------------------------

Date: 28 May 88 11:20:19 GMT
From: cae780!leadsv!esl!ssh@hplabs.hp.com  (Sam)
Subject: Re: Symbolics stock

->marsh@mbunix (Ralph Marshall) sez ->
->Summarized advice:
->1) Face squarely in directions of Symbolics shares.
->2) Turn 180 degrees.
->3) Run like hell; don't look back or you turn into a pillar of cons cells.
->
->Symbolics sells GREAT software; they just can't push boxes worth a damn.
->Their equipment is way too expensive for deliverable systems in almost any
->realistic situation, their maintenance costs even for research use are
->exorbitant, and they don't seem to get the message from what customer
->base they have left.
...(More stuff deleted)...

For this reason I've recommended that any project I've been involved
with NOT be Symbolics-based for the last four years.  Obviously, I'm not
alone.

I also regret that many mistakes killed the D-machines from Xerox,
which were great to work in/on, but were doomed by brain-damaged sales
/ marketing strategists at Xerox.

                                                -- Sam

------------------------------

Date: 29 May 88 01:57:42 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Symbolics stock


      I don't know of anybody buying Symbolics boxes in quantity around
here.  Everybody seems to have a few, but no new ones are being acquired.
Besides, they're no longer a status symbol; nowadays you have to have a
Connection Machine to impress anybody.  But the general machine for getting
work done seems to be a Sun III.

                                        John Nagle

------------------------------

Date: 1 Jun 88 22:21:46 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com  (Hunter Barr)
Subject: Re: Symbolics stock

In article <692@esl.UUCP> ssh@esl.UUCP (Sam) writes:
>->marsh@mbunix (Ralph Marshall) sez ->

<Both posters bash Symbolics for being expensive and unresponsive.>


I'm no investment expert, but it looks to me like you have Symbolics
confused with LMI.  LMI hung on at the edge of bankruptcy for a very
long time, whereas Symbolics seems to have plenty of cash to see them
through this development cycle and into the next one.  All the
indications are that the coming batch of hardware and software is very
solid.

Symbolics is taking exactly the right steps to get out of the "box"
business, by putting their effort into the Ivory chip and their
software development.  As someone who uses Symbolics Lisp Machines
regularly (as well as VAXen, SUN workstations, and other machines), I
can tell you that their latest release of software (Genera 7.2) shows
that they are responsive to the demands of the market:

    It contains many popular improvements and enhancements.

    It was delivered on time.

    It marks the return of the "source included" policy, with a very
    reasonable price.  It actually contains more of the source than
    7.1 even without the fee!

I don't have enough money to outfit my VAX or SUN like a Lispm; the
memory, software, and OS source-code licenses are far too expensive.
Moreover, it is obviously going to be a couple of years until the
development tools on these machines catch up to where Lispms are now.
(I am betting on Saber C, but maybe SUN's SPE will surprise us.)

If I did have that much money, I would buy more Symbolics stock.  I
think the only way they are going out of business is if they are
bought by Sony, or DEC, or a very big defense contractor.

In almost every large project on which I've worked, there has been
some component which was best implemented on Symbolics machines,
usually for its development environment, but sometimes for the unique
hardware itself.  I will continue to recommend them where they are the
best solution, which I expect to be often.

                            ______
                            HUNTER

------------------------------

End of AIList Digest
********************

∂03-Jun-88  0118	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #13  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Jun 88  01:18:04 PDT
Date: Thu  2 Jun 1988 01:28-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #13
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 13

Today's Topics:

  More Free Will

----------------------------------------------------------------------

Date: 16 May 88 23:06:34 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov  (David
      Harvey)
Subject: Re: More Free Will

In article <3200017@uiucdcsm>, channic@uiucdcsm.cs.uiuc.edu writes:
> ...
> Since it can't be denied, let's go one step further.  Free will has created
> civilization as we know it.  People, using their individual free wills,
> chose to make the world the way it is.  Minsky chose to write his book,
> I chose to disagree, someone chose to design computers, plumbing, buildings,
> automobiles, symphonies and everything that makes life enjoyable.
> Our reality is our choice, the product of our free will operating on our
> value system.
> ...
If free will has created civilization as we know it, then it must be
accepted with mixed emotions.  This means that Hitler, Stalin, some of
the Catholic Popes during the middle ages and others have created a
great deal of havoc that was not good.  One of the prime reasons for AI
is to perhaps develop systems that prevent things like this from
happening.  If we with our free will (you said it, not me) can't seem to
create a decent world to live in, perhaps a machine without free will
operating within prescribed boundaries may do a better job.  We sure
haven't done too well.
>
> ...................... I believe all people, especially leading
> scientific minds, have the wisdom to use their undeniable free will
                                                ↑↑↑↑↑↑↑↑↑↑↑↑
> to making choices in values which will promote world harmony. ...
> ...
> Free will explained as an additive blend of determinism and chance
> directly attacks the concept of individual responsibility.  Can any
> machine, based on this theory of free will, possibly benefit society
> enough to counteract the detrimental effect of a philosophy which
> implies that we aren't accountable for our choices?
> ...
>
>
> Tom Channic
> University of Illinois
> channic@uiucdcs.cs.uiuc.edu
> {decvax|ihnp4}!pur-ee!uiucdcs!channic


You choose to believe that free will is undeniable.  The very fact that
many people do deny it is sufficient to prove that it is deniable.  It
is like the existence of God;  impossible to prove, and either accepted
or rejected by each individual.
While it is rather disturbing (to me at least) that we may not be
responsible for our choices, it is even more disturbing that by our
choices we are destroying the world.  For heaven's sake, Reagan and
friends for years banned a Canadian film on Acid Rain because it was
political propaganda.  Never mind the fact that we are denuding forests
at an alarming rate.  To repeat, if we with our free will (you said it,
not me) aren't doing such a great job it is time to consider other
courses of action.  By considering them, we are NOT adopting them as
some religious dogma, but intelligently using them to see what will
happen.

David A Harvey
Utah Institute of Technology (Weber State College)
dharvey@wsccs

------------------------------

Date: 26 May 88 08:12:29 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Free Will & Self-Awareness

In article <5569@venera.isi.edu> Stephen Smoliar writes:
>>I cite 4 years reading of comp.ai.digest seminar abstracts as evidence.
>>
>Now that Gilbert Cockton has revealed the source of his knowledge of artificial
>intelligence
OK, OK, then I call every AI text I've ever read as well.  Let's see
Nielson, Charniak and the other one, Rich, Schank and Abelson,
Semantic Information Processing (old, but ...), etc.  (I use AI
programming concepts quite often, I just don't fall into the delusion
that they have any bearing on mind).

The test is easy, look at the references.  Do the same for AAAI and
IJCAI papers.  The subject area seems pretty introspective to me.
If you looked at an Education conference proceedings, attended by people who
deal with human intelligence day in day out (rather than hack LISP), you
would find a wide range of references, not just specialist Education references.
You will find a broad understanding of humanity, whereas in AI one can
often find none, just logical and mathematical references. I still
fail to see how this sort of intellectual background can ever be
regarded as adequate for the study of human reasoning.  On what
grounds does AI ignore so many intellectual traditions?

As for scientific method, the conclusions you drew from a single
statement confirm my beliefs about the role of imagination in AI.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 26 May 88 09:46:01 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: AIList Digest V7 #6 Reply to McCarthy from a minor menace

>There are three ways of improving the world.
>(1) to kill somebody
>(2) to forbid something
>(3) to invent something new.
This is a strange combination indeed.  We could at least add
(4) to understand ourselves.
Not something I would ever put in the same list as murder, but I'm
sure there's a cast iron irrefutable cold logical reason behind it.

But perhaps this is what AI is doing, trying to improve our understanding
of ourselves.  But it may not do this because of
(2) it forbids something
that is, any approach, any insight, which does not have a computable expression.
This, for me, is anathema to academic liberal traditions and thus
>as long as Mrs. Thatcher is around; I wouldn't be surprised if
>Cockton could persuade Tony Benn.
is completely inaccurate.  Knowledge of contemporary politics needs to
be added to the AI ignorance list.  Mrs. Thatcher has just cut a lot of IT
research in the UK, and AI is one area which is going to suffer. Tony Benn on
the other hand, was a member of the government which backed the Transputer
initiative.  The Edinburgh Labour council has used the AI department's sign in
some promotional literature for industries considering locating in Edinburgh.
They see the department's expertise as a strength, which it is.

Conservatives such as Thatcher look for immediate value for money in research.
Socialists look for jobs.  Academic liberals look for quality.  I may only have
myself to blame if this has not been realised, but I have never advocated an
end to all research which goes under the heading of AI.  I use some of it in my
own research, and would miss it.  I have only sought to attack the arrogance of
the computational paradigm, the "pure" AI tradition where tinkerers play at the
study of humanity.  Logicians, unlike statisticians, seem to lack the humility
required to serve other disciplines, rather than try to replace them.  There is
a very valuable role for discrete mathematical modelling in human activities,
but like statistics, this modelling is a tool for domain specialists and not
an end in itself.  Logic and pure maths, like statistics, is a good servant
but an appalling master.

>respond to precise criteria of what should be suppressed
Mindless application of the computational paradigm to
      a) problems which have not yielded to stronger methods
      b) problems which no other paradigm has yet provided any understanding of.
For b), recall my comment on statistics.  If no domain specialism has
any empirical corpus of knowledge, AI has nothing to test itself
against. It is unfalsifiable, and thus likely to invent nothing.
On a), no one in AI should be ignorant of the difficulties in relating
formal logic to ordinary language, never mind non-verbal behaviour and
kinaesthetic reasoning. AI has to make a case for itself based on a
proper knowledge of existing alternative approaches and their problems.
It usually assumes it will succeed spectacularly where other very bright
and dedicated people have failed (see the intro. to Winograd and Flores).

> how they are regarded as applying to AI
"Pure" AI is the application of the computational paradigm to the study of
human behaviour.  It is not the same as computational modelling in
psychology, as here empirical research cannot be ignored.  AI, by isolating
itself from forms of criticism and insight, cannot share in the
development of an understanding of humanity, because its raison d'etre,
the adherence to a single paradigm, without question, without
self-criticism, without a humble relationship to non-computational
paradigms, prevents it ever disappearing in the face of its impotence.

>and what forms of suppression he considers legitimate.
It may be partly my fault if anyone has thought otherwise, but you should
realise that I respect your freedom of association, speech and publication.
If anyone has associated my arguments with ideologies which would sanction
repression of these freedoms, they are (perhaps understandably) mistaken.
There are three legitimate forms of "suppression"
      a) freely willed diversion of funding to more appropriate disciplines
      b) run down of AI departments with distribution of groups across
         established human disciplines, with service research in maths.  This
         is how a true discipline works. It leads to proper humility,
         scholarship and eclecticism.
      c) proper attention to methodological issues (cf the Sussex
         tradition), which will put an end to the sillier ideas.
         AI needs to be more self-critical, like a real discipline.
Social activities such as (a) and (b) will only occur if the arguments
with which I agree (they are hardly original) get the better of "pure"
AI's defence that it has something to offer (in which case answer that guy's
request for three big breaks in AI research, you're not doing very well on
this one).  It is not so much suppression, as withdrawal of encouragement.

>similar to Cockton's inhabit a very bad and ignorant book called "The Question
> of Artificial Intelligence" edited by Stephen Bloomfield, which I
>will review for "Annals of the History of Computing".
Could we have publishing information on both the book and the review please?
And why is it that AI attracts so many bad and ignorant books against it?  If
you dive into AI topics, don't expect an easy time.  Pure AI is attempting a
science of humanity and it deserves everything it gets.  Sociobiology and
behaviourism attracted far more attention.  Perhaps it's AI's turn.  Every
generation has its narrow-minded theories which need broadening out.

AI is forming an image of humanity.  It is a political act.  Expect opposition.
Skinner got it, so will AI.

>The referee should prune the list of issues and references to a size that
> the discussants are willing to deal with.
And, of course, encoded in KRL!  Let's not have anything which takes effort to
read, otherwise we might as well just go and study instead of program.

>The proposed topic is "AI and free will".
Then AI and knowledge-representation, then AI and Hermeneutics (anyone
read Winograd and Flores properly yet?), then AI and epistemology, then ..
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 28 May 88 08:13:06 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: AI and Sociology

I believe it was Gilbert Cockton who raised the question "why does AI
ignore Sociology?"  Two kinds of answers have been given so far:
(1) AI is appallingly arrogant trash.
(2) Sociology is appallingly arrogant trash.

I want to suggest that there is a straightforward reason for the mutual
indifference (some Sociologists have taken AI research as _subject_
material, but AI ideas are not commonly adopted by Sociologists) which
is creditable to both disciplines.

Lakatos's view of a science is that it is not so much a set of theories as
a research programme: a way of deciding what questions to pursue, what
counts as an explanation, and a way of dealing with puzzles.  For example,
he points out that Newton's theory of gravity is in principle unfalsifiable,
and that the content of the theory may be seen in the kinds of explanations
people try to come up with to show that apparent exceptions to the theory
are not real exceptions.

The key step here is deciding what to study.

Both in its application to Robotics and its application to Cognitive
Science, AI is about the mental processes of individuals.

As a methodological basis, Sociology looks for explanations in terms of
social conditions and "forces", and rejects "mentalistic" explanations.

Let me provide a concrete example.  One topic of interest in AI is how
a program could make "scientific discoveries".  AM and Eurisko are
famous.  A friend with whom I have lost contact was working on a program
to try to predict the kinetics of gas phase reactions.  Pat Langley's
"BACON" programs are well known.

Scientific discovery is also of interest to Sociology.  One book on this
topic (the only one on my shelves at the moment) is
        The social basis of scientific discoveries
        Augustine Brannigan
        Cambridge University Press, 1981
        0 521 28163 6
I take this as an *example*.  I do not claim that this is all there is
to Sociology, or that all Sociologists would agree with it, or that all
Sociological study is like this.  All I can really claim is that I am
interested in scientific discovery from an AI point of view, and when I
went looking for Sociological background this is the kind of thing I found.

Brannigan spends chapter 2 attacking some specific "mentalistic" accounts
of scientific discovery, and in chapter 3 rubbishes the mentalistic
approach completely.  If I understand him, his major complaint is that
accounts such as Koestler's "bisociation" fail to be accounts of
*scientific* *discovery*. Indeed, a section of chapter 3 is headed
    "Mentalistic models confuse learning with discovery."

It turns out that he is not concerned with the question "how do scientific
discoveries happen", but with the question "what gets CALLED a scientific
discovery, and why?"  Which is a very interesting question, but ignores
everything about scientific discovery which is of interest to AI people.

The very reason that AI people are interested in scientific discovery
(apart from immediately practical motives) is that it is a form of learning
in semi-formalised domains.  If one of Pat Langley's programs discovers
something that happens not to be true (such as coming up with Phlogiston
instead of Oxygen) he is quite happy as long as human scientists might have
made the same mistake.  As I read Brannigan's critical comments on the
"mentalistic" theories he was rubbishing, I started to get excited, seeing
how some of the suggestions might be programmable.

Page 35 of Brannigan:
    "... in the social or behavioural sciences we tend to obfuscate the
    social significance of familiar phenomena by explaining them in terms
    of 'underlying' causes.  Though this is not always the case, it is
    true with discovery and learning."
This is to reject in principle attempts to explain discovery and learning
in terms of underlying causes.
    "... the equivalence of learning and discovery is a _confusion_.
    From a social perspective, 'to _learn_' means something quite
    different from 'to _discover_'."
Emphasis his.  He would classify a rediscovery as a mere learning,
which at the outset rejects as uninteresting precisely the aspects that
AI is interested in.

Something which is rather shocking from an AI perspective is found on
page 64:
    "... the hallmark of this understanding is the ascription of learning
    to some innate abilities of the individual.  Common sensically,
    learning is measured by the degree of success that one experiences
    in performing certain novel tasks and recalling certain past events.
    Mackay's ethnographic work suggests, on the contrary, that learning
    consists in the institutional ascription of success whereby certain
    performances are ordered and identified as learning achievements to the
    exclusion of other meaningful performances."
Page 66:
    "Although as folk members of society we automatically interpret
    individual discovery or learning as the outcome of a motivated
    course of inference, sociologically we must consider the cognitive
    and empirical grounds in terms of which such an achievement is
    figured.  From this position, cleverness in school is understood,
    not as a function of innate mental powers, but as a function of
    the context in which the achievements associated with cleverness
    are made accountable and remarkable."

To put it bluntly, if we take statements made by some AI people or some
Sociologists at face value, they cast serious doubts on the sanity of
the speakers.  But taken as announcements of a research programme to
be followed within the discipline, they make sense.

AI says "minds can be modelled by machines", which is, on the face of it,
crazy.  But if we read this as "we propose to study the aspects of mind
which can be modelled by machines, and as a working assumption will suppose
that all of them can", it makes sense, and is not anti-human.
Note that any AI practitioner's claim that the mechanisability of mind is
a discovery of AI is false, that is an *assumption* of AI.  You can't
prove something by assuming it!

Sociology says "humans are almost indefinitely plastic and are controlled
by social context rather than psychological or genetic factors", which is,
on the face of it, crazy.  But if we read this as "we propose to study the
influence of the social context on human behaviour, and as a working
assumption will suppose that all human behaviour can be explained this way",
it makes sense, and is not as anti-human as it at first appears.
Note that any Sociologist's claim that determination by social forces is
a discovery of Sociology is false, that is an *assumption* of Sociology.

Both research programmes make sense and both are interesting.
However, they make incompatible decisions about what counts as interesting
and what counts as an explanation.  So for AI to ignore the results of
Sociology is no more surprising and no more culpable than for carpenters
to ignore Musicology (both have some sort of relevance to violins, but
they are interested in different aspects).

What triggered this message at this particular date rather than next week
was an article by Gilbert Cockton in comp.ai.digest, in which he said

    "But perhaps this is what AI is doing, trying to improve our
    understanding of ourselves.  But it may not do this because of
    (2) it forbids something
    that is, any approach, any insight, which does not have a computable
    expression.  This, for me, is anathema to academic liberal traditions ..."

But of course AI does no such thing.  It merely announces that
computational approaches to the understanding are part of _its_ territory,
and that non-computational approaches are not.  AI doesn't say that a
Sociologist can't explain learning (away) as a function of the social
context, only that when he does so he isn't doing AI.

A while back I sent a message in which I cited "Plans and Situated Actions"
as an example of some overlap between AI and Sociology.  Another example
can be found in chapter 7 of
        Induction -- Processes of Inference, Learning, and Discovery
        Holland, Holyoak, Nisbett, and Thagard
        MIT Press, 1986
        0-262-08160-1

Perhaps we could have some other specific examples to show why AI should
or should not pay attention to Sociology?

------------------------------

Date: 28 May 88 20:58:32 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!rargyle@tis.llnl.gov  (Bob
      Argyle)
Subject: Re: Free Will & Self Awareness

In article <5323@xanth.cs.odu.edu>, Warren E. Taylor writes:
[stuff deleted]
> Adults understand what a child needs. A child, on his own, would quickly kill
> himself.
...
> Flame away.
>    Warren.

so stop interfering with that child's free will!  [W.C.Fields] :-)

We are genetically programmed to protect that child (it may be a
relative...); we are not so programmed, however, for protecting any
computers running an AI program.  AI seems the perfect place to test the
free-will doctrine without the observer interfering with the 'experiment.'
At least one contributor to the discussion has called for an end to AI
because of the effects on impressionable undergraduates being told that
there isn't any free will.

Send Columbus out and if he falls off the edge, so much the better.
IF we get some data on what 'free will' actually is out of AI, then let
us discuss what it means.  It seems we either have free will or we
don't; finding out seems indicated after (is it?) 3000 years of talk.vague.
So is the sun orbiting around the earth?  This impressionable
undergraduate wants to see some hard data.

Bob @ WSCCS

------------------------------

Date: 28 May 88 21:05:27 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: DVM's request for definitions

In article <894@maize.engin.umich.edu> Brian Holtz writes:
>If you accept definition (1) (as I do), then the only alternative to
>determinism is dualism, which I don't see too many people defending.

Dualism wouldn't necessarily give free will: it would just transfer
the question to the spiritual.  Perhaps that is just as deterministic
as the material.

------------------------------

Date: 29 May 88 10:26:38 GMT
From: g.gp.cs.cmu.edu!kck@pt.cs.cmu.edu  (Karl Kluge)
Subject: Gilbert Cockton and AI

In response to various posts...

> From: gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
>  AI depends on being able to use written language (physical symbol
>  hypothesis) to represent the whole human and physical universe.  AI
>  and any degree of literate-ignorance are incompatible.  Humans, by
>  contrast, may be ignorant in a literate sense, but knowlegeable in
>  their activities.  AI fails as this unformalised knowledge is
>  violated in formalisation, just as the Mona Lisa is indescribable.
>  Philosophically, this is a brand of scepticism.  I'm not arguing that
>  nothing is knowable, just that public, formal knowledge accounts for
>  a small part of our effective everyday knowledge (see Heider).

This shows the extent of your misunderstanding of the premises
underlying AI. In particular, you appear to have a gross
misunderstanding of the "physical symbol hypothesis" (sic).

First, few AI researchers (if any) would deny that there are certain
motor functions or low level perceptual processes which are not
symbolic in nature.

Second, the implicit (and unproven) assumption in the above quote is
that knowledge which is not public is also not formal, and that the
inability to access the contents of an arbitrary symbol structure in the
mind implies the absence of such symbol structures. Nowhere does the
Physical Symbol System Hypothesis imply that all symbol structures are
accessible by the conscious mind, or that all symbols in the symbols
structures will match concepts that map onto words in language.

>  Because so little of our effective knowledge is formalised, we learn
>  in social contexts, not from books.  I presume AI is full of relative
>  loners who have learnt more of what they publicly interact with from
>  books rather than from people.  Well I didn't, and I prefer
>  interaction to reading.

You presume an awful lot. Comments like that show the intellectual level
of your critique of AI.

> The question is, do most people WANT a computational model of human
> behaviour?  In these days of near 100% public funding of research,
> this is no longer a question that can be ducked in the name of
> academic freedom.

Mr. Cockton, Nature does not give a damn whether or not people WANT a
computational model of human behavior any more than it gave a damn
whether or not people wanted a heliocentric Solar System.

> I am always suspicious of any academic activity which has to request
> that it becomes a philosophical no-go area.  I know of no other area of
> activity which is so dependent on such a wide range of unwarranted
> assumptions.

AI is founded on only one basic assumption, that there are no models of
computation more powerful (in a well defined sense) than the Turing
machine, and in particular that the brain is no more powerful a
computational mechanism than the Turing machine. If you have some
scientific evidence that this is a false assumption, please put it on
the table.

> My point was that MENTAL determinism and MORAL responsibility are
> incompatible.  I cite the whole ethos of Western (and Muslim? and??)
> justice as evidence.

Western justice *presupposes* moral responsibility, but that in no way
serves *as evidence for* moral responsibility.  Even if there is no moral
responsibility, there will still always be responsibility as a causal
agent. If a faulty electric blanket starts a fire, the blanket is not
morally responsible.  That wouldn't stop anyone from unplugging it to
prevent future fires.

> If AI research has to assume something which undermines fundamental
> values, it better have a good answer beyond academic freedom, which
> would also justify unrestricted embryo research, forced separation of
> twins into controlled upbringings, unrestricted use of pain in learning
> research, ...

What sort of howling non sequitur is this supposed to be?  Many people
feel that Darwinian evolution "has to assume something which undermines
fundamental values", but that isn't an excuse to hide one's head in the
sand and ignore evolution, or to cut funding for research into
evolutionary biology.

> > I regard artificial intelligence as an excellent scientific approach
> > to the pursuit of this ideal . . . one which enables me to test
> > flights of my imagination with concrete experimentation.

> I don't think a Physicist or an experimental psychologist would agree
> with you. AI is DUBIOUS, because so many DOUBT that anyone in AI has an
> elaborated view of truth and falsehood in AI research. So tell me, as
> a scientist, how we should judge AI research?  In established
> sciences, the grounds are clear.  Certainly, nothing in AI to date
> counts as a controlled experiment, using a representative population,
> with all irrelevant variables under control.  Given the way AI programs
> are written, there is no way of even knowing what the independent
> variable is, and how it is being driven.  I don't think you know what
> experimental method is, or what a clearly formulated hypothesis is
> either.  You lose your science badge.

Well, what kind of AI research are you looking to judge? If you're
looking at something like SOAR or ACT*, which claim to be computational
models of human intelligence, then comparisons of the performance of the
architecture with data on human performance in given task domains can be
(and are) made.

If you are looking at research which attempts to perform tasks we
usually think of as requiring "intelligence", such as image
understanding, without claiming to be a model of human performance of
the task, then one can ask to what extent does the work capture the
underlying structure of the task?  how does the approach scale? how
robust is it? and any of a number of other questions.

> Don't you appreciate that free will (some degree of choice) is
> essential to humanist ideals.  Read about the Renaissance which spawned
> the Science on whose shirt tails AI rides.  Perhaps then you will
> understand your intellectual heritage.

Mr. Cockton, it is more than a little arrogant to assume that anyone who
disagrees with you is some sort of unread, unwashed social misfit, as
you do in the above quote and the earlier quote about the level of
social interaction of AI researchers. If you want your concerns about AI
taken seriously, then come down off your high horse.

Karl Kluge (kck@g.cs.cmu.edu)

People have opinions, not organizations. Ergo, the opinions expressed above
must be mine, and not those of CMU.

------------------------------

Date: 30 May 88 08:34:58 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Bad AI: A Clarification

In article <1242@crete.cs.glasgow.ac.uk> Gilbert Cockton blurts:
>Mindless application of the computational paradigm to
>     a) problems which have not yielded to stronger methods
>     b) problems which no other paradigm has yet provided any understanding of.
This is poorly expressed and misleading.  Between "problems" and "which" insert
"concerning human existence".  As this stands, it looks like I want to withdraw
encouragement from ALL computer research.  Apologies to anyone who's taken this
seriously enough to follow-up, or was just annoyed (but you shouldn't be anyway)

Bad AI is research into human behaviour and reasoning, usually conducted by
mathematicians or computer scientists who are as well-qualified for the study
of humanity as is an archaeologist with a luminous watch for the study of
radiation (of course I understand radiation, I've got a luminous watch,
haven't I? ;-))

AI research seems to fall into two groups:
        a) machine intelligence;
        b) simulation of human behaviour.
No problem with a), apart from the use of the now vacuous term "intelligence",
which psychometricians have failed miserably to pin down.  No problem with b)
if the researcher has a command of the study of humanity, hence the
respectability of computational modelling in psychology.  Also, mathematicians
and computer scientists have no handicaps, and many advantages when the human
behaviour in b) is equation solving, symbolic mathematics, theorem proving and
configuring VAXES.  They are domain experts here.  Problems only arise when they
confuse their excellent and most ingenious programs with human reasoning.

   1) because maths and logic have little to do with normal everyday reasoning
      (i.e. most reasoning is not consciously mathematical, symbolic,
      denotational, driven by inference rules).  Maths procedures are not
      equivalent to any human reasoning.  There is an overlap, but it's small.

   2) because they have no training in the difficulties involved in studying
      human behaviour, unlike professional psychologists, sociologists,
      political scientists and economists.  At best, they are informed amateurs,
      and it is sad that their research is funded when research in established
      disciplines is not.  Explaining this political phenomenon requires a simple
      appeal to the hype of "pure" AI and the gullibility of its sponsors, as
      well as to the honesty of established disciplines who know that coming to
      understand ourselves is difficult, fraught with methodological problems.
      Hence the appeal of the boy scout enthusiasm of the LISP hacker.

So, the reason for not encouraging AI is twofold.  Firstly, any research which
does not address human reasoning directly is either pure computer science, or
a domain application of computing. There is no need for a separate body of
research called AI (or cybernetics for that matter).  There are just
computational techniques.  Full stop.  It would be nice if they followed
good software engineering practices and structured development methods as
well.  Secondly, where research does address human reasoning directly, it
should be under the watchful eye of competent disciplines.  Neither mathematics
or computer science are competent disciplines.  Supporting "pure" AI research
by logic or LISP hackers makes as much sense as putting a group of historians,
anthropologists and linguists in charge of a fusion experiment.  The word is
"skill".  Research requires skill.  Research into humanity requires special
skills.  Computer scientists and mathematicians are not taught these skills.

When hardware was expensive, it made sense to concentrate research using
computational approaches to our behaviour. The result was AI journals,
AI conferences, and a cosy AI community insulated from the intellectual
demands of the real human disciplines.  I hope, with MacLisp and all
the other cheap AI environments, that control of the computational
paradigm is lost by the technical experts and passes to those who
understand what it is to study ourselves.  AI will disappear, but the
work won't.  Indeed it will get better, and having to submit to an AI
conference rather than a psychology or related conference (for research
into ourselves), or a computing or application area conference (for
machine 'intelligence') will be a true reflection of the quality of the work.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 30 May 88 08:42:35 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Free Will & Self Awareness

In article <1209@cadre.dsl.PITTSBURGH.EDU> Gordon E. Banks writes:
>>Are there any serious examples of re-programming systems, i.e. a system
>>that redesigns itself in response to punishment.
>
>Certainly!  The back-propagation connectionist systems are "punished"
>for giving an incorrect response to their input by having the weight
>strengths leading to the wrong answer decreased.
I was expecting this one.  What a marvellous way with words has mens
technica.  To restore at least one of the many subtle connotations of
the word "punishment", I would ask

Are there any serious examples of resistant re-programming systems, i.e. a
system that redesigns itself only in response to JUST punishment, but
resists torture and other attempts to coerce it into falsehood?

I suspect the next AI pastime will be lying to the Connectionist
machine (Hey, I've got this one to blow raspberries when it sees the
word "Reagan", and that one now adds 2 and 2 to get 5!).

Who gets the key to these machines?  I'm already having visions of
Luddite workers corrupting their connectionist robots :-)  Will the
first machine simulation be of some fundamentalist religious fanatic?
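
(For readers who have not met the mechanism Gordon Banks alludes to above,
here is a minimal one-unit sketch of that style of weight adjustment -- a
plain delta-rule step rather than full multi-layer back-propagation; the
learning rate and the numbers in the comment are illustrative only.)

    (defun update-weights (weights inputs target learning-rate)
      "One delta-rule step for a single linear unit: weights that drove the
    output away from TARGET are pushed back -- the 'punishment' quoted above."
      (let* ((output (reduce #'+ (mapcar #'* weights inputs)))
             (err    (- target output)))
        (mapcar (lambda (w x) (+ w (* learning-rate err x)))
                weights inputs)))

    ;; (update-weights '(0.5 0.5) '(1.0 1.0) 0.0 0.1)
    ;; => (0.4 0.4)   ; both contributing weights are decreased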
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 30 May 88 09:29:38 GMT
From: TAURUS.BITNET!shani@ucbvax.berkeley.edu
Subject: Re: More Free Will

In article <532@wsccs.UUCP>, dharvey@wsccs.BITNET writes:

> If free will has created civilization as we know it, then it must be
> accepted with mixed emotions.  This means that Hitler, Stalin, some of
> the Catholic Popes during the middle ages and others have created a
> great deal of havoc that was not good.  One of the prime reasons for AI
> is to perhaps develop systems that prevent things like this from
> happening.  If we with our free will (you said it, not me) can't seem to
> create a decent world to live in, perhaps a machine without free will
> operating within prescribed boundaries may do a better job.  We sure
> haven't done too well.

Oh no! We're back where we started!

Gee! I think that the problem with whether this world is decent or not is in
your misconception, not with the system.  The whole point (which I said more
than once, I think) is that THERE ISN'T SUCH A THING AS OBJECTIVE GOOD OR
OBJECTIVE EVIL OR OBJECTIVE ANY VALUE!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
because anything that is objective is completely indifferent to our values
and other meanings that we human beings give to things!  Therefore, HOW ON
EARTH can AI define things that do not exist?!?!?!

Hmm... on second thought, you may use AI to detect self-contradicting ideas,
but really, it isn't worth the money, as you can do that with a bit of common
sense... your whole idea seems to mean that people will have to accept things
only because the machine has said so... how do you know that the bad guys will
not use false machines to mislead the innocent and poor again???

How about investing in making PEOPLE think, instead???

O.S.

------------------------------

Date: 30 May 88 09:33:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: AI and Sociology

Firstly, thanks very much to Richard O'Keefe for taking time to put
together his posting.  It is a very valuable contribution.  One
objection though:
In article <1033@cresswell.quintus.UUCP> Richard A. O'Keefe writes:
>But of course AI does no such thing.  It merely announces that
>computational approaches to the understanding are part of _its_ territory,
>and that non-computational approaches are not.
This may be OK for Lakatos, but not for me.  Potty as some of the
ideas I've presented may seem, they are all well rehearsed elsewhere.

It is quite improper to cut out a territory which deliberately ignores
others.  In this sense, psychology and sociology are guilty like AI,
but not nearly so much, as they have territories rather than a
territory.  Still, the separation of sociology from psychology is
regrettable, but areas like social psychology and cognitive sociology
do bridge the two, as do applied areas such as education and management.
Where are the bridges to "pure" AI?  Answer that if you can.

The place such arguments appear most is in curriculum theory (and
also some political theory, especially Illich/"Tools for Conviviality"
and democrats concerned about technical imperialism).  The argument
for an integrated approach to the humanities stems from the knowledge
that academic disciplines will always adopt a narrow perspective, and
that only a range of disciplines can properly address an issue.  AI can be
multidisciplinary, but it is, for me, unique in its insistence on a single
paradigm which MUST distort the researcher's view of humanity, as well as the
research consumer's view on a bad day.  Indefensible.

Some sociologists have been no better, and research here has also lost support
as a result.  I do not subscribe to the view that everything has nothing but a
social explanation.  Certainly the reason the soles stay on my shoes has
nothing much to do with my social context.  Many societies can control the
quality of their shoe production, but vary on nearly everything else.
Undoubtedly my mood states and my reasoning performance have a physiological
basis as amenable to causal doctrines as my motor car.  But I am part of a
social context, and you cannot fully explain my behaviour without appeal to it.

Again, I challenge AI's rejection of social criticisms of its paradigm.  We
become what we are through socialisation, not programming (although some
teaching IS close to programming, especially in mathematics).  Thus a machine
can never become what we are, because it cannot experience socialisation in the
same way as a human being.  Thus a machine can never reason like us, as it can
never absorb its model of reality in a proper social context.  Again, there are
well documented examples of the effect of social neglect on children.  Machines
will not suffer in the same way, as they benefit only from programming, and
not from any form of human company.  Anyone who thinks that programming is social
interaction is really missing out on something (probably social interaction :-))

RECOMMENDED READING

Jerome Bruner on MACOS (Man: A Course of Study), for the reasoning
behind interdisciplinary education.

Skinner's "Beyond Freedom and Dignity" and the collected essays in
response to it, for an understanding of where behaviourism takes you
("pure" AI is neo-behaviourist, it's about little s-r modelling).

P. Berger and T. Luckman's "Social Construction of Reality" and
I. Goffman's "Presentation of Self in everyday life" for the social
aspects of reality.

Feigenbaum and McCorduck's "Fifth Generation" for why AI gets such a bad name
(Apparently the US invented computers single handed, presumably while John
 Wayne was taking the Normandy beaches in the film :-))
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 30 May 88 09:51:15 GMT
From: shani%TAURUS.BITNET@jade.berkeley.edu.user@host.BITNET
Reply-to: <shani%TAURUS.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Re: AIList Digest   V7 #4 [bwk@mitre-bedford.arpa: Re: Free
         Will & Self


[Barry writes]
> Assume that I possess a value system which permits me to rank my
> personal preferences regarding the likely outcome of the courses
> of action open to me.  Suppose, also, that I have a (possibly crude)
> estimate of your value system.  If I were myopic (or maybe just stupid)
> I would choose my course of action to maximize my payoff without regard
> to you.  But my knowledge of your value system creates an interesting
> opportunity for me.  I can use my imagination to conceive a course
> of action which increases both of our utility functions.  Free will
> empowers me to choose a Win-Win alternative.  Without free will, I am
> predestined to engage in acts that hurt others.  Since I disvalue hurting
> others, I thank God that I am endowed with free will.
>
Like always... the perfect Lawful Good claim (Did Mr. Gygax think of you
when he wrote the section about LG? :-) )

Oh Barry boy!  Can't you stop mixing the objective facts with your OWN PERSONAL
values?  Why do you insist on imposing your own point of view on everything?

Now... I suggest that you sit down and make a list of all the things you
disvalue and that people still sometimes do.  Out of the despair, maybe you will
finally realise that your point of view is but one of more than 4 billion points
of view.  There is nothing wrong with that!  Your point is perfectly fine and does
not contradict common sense or anything, but please!  It is your point of view,
according to your values!  Others who do not disvalue hurting as you do are
still as free-willing as you are.  Besides, suppose you had a value system
which you were absolutely sure was good for everyone; don't you think
that by giving it to people you would hurt them?  All in all, the name of the
game is 'find your own meaning'....

O.S.

------------------------------

Date: 30 May 88 10:41:52 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: What Next!

>specifically, Marxist political science - is the key to making progress.
>And I expect Gilbert will be happier.
Pat, this is so ignorant.  I doubt that you have much command of political
thought at all.  Marxism regards itself as a science.  Following Engels, it
became deterministic.  Cybernetics is strong in Russia, largely because it
fits in so well with Anti-Dühring-style philosophy of science.  I do not.

The only intellectual connections between the ideas I have repeated on
social aspects of reality and Marx are:
        i) Early dialectical materialism in the "German Ideology",
           certainly not beloved by traditional Marxist-Leninists.
        ii) Marx, with Durkheim and Weber, was one of the founding
            fathers of sociology. Thus any sociology has some connection
            with him, just as logic can't escape Principia Mathematica.

For a proud defender of logic, this was worse than no response at all.
Logic is anything but a bourgeois illusion.  It is an academic artefact
which I find hard to link to ownership of the mode of production.
How many factory owners are logicians?  And why not?
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: Mon, 30 May 88 12:13:20 BST
From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Immortality, Home-made BDI states and Systems Theory (3
         snippets)

In article <8805250631.AA09406@BLOOM-BEACON.MIT.EDU>
>Step (4) gives us virtual immortality, since whenever our current
>intelligence-carrying hardware (human body? computer? etc.) is about to
>give up (because of a disease, old age ...) we can transfer the
>intelligence to another piece of hardware. there are some more delicate
>problems here, but you get the idea.
Historically, I think there's going to be something in this.  There is
no doubt that we can embody in a computer program something that we
would not sensibly embody in a book.  In this sense, computers are
going to alter what we can pass on from one generation to another.

But there are also similarities with books.  Books get out of date; so
will computer programs.  As I've said before, we've got one hell of a
maintenance problem with large knowledge-based programs.  Is it really
going to be more economical than people?  See the latest figures on
automation in the car industry, where training costs are going through
the roof as robots move from single functions to programmable tasks.
GM's most heavily automated plant (Hamtramck, Michigan) is less productive
than a much less automated one at Fremont Ca. (Economist, 21/5/88, pp103-104).


In art. <8805250631.AA09382@BLOOM-BEACON.MIT.EDU>
>
>Let my current beliefs, desires, and intentions be called my BDI state.
These change.  Have you no responsibility for the way they change?  Do
you just wake up one morning a different person, or are you consciously
involved in major changes of perspective?  Or do you never change?

In article <19880527050240.9.NICK@MACH.AI.MIT.EDU>
>Gilbert Cockton: Even one reference to a critique of Systems Theory would be
>helpful if it includes a bibliography.
I recommended the work of Anthony Giddens (Kings College, Cambridge).
There are sections on systems theory in his "Studies in Social and Political
Theory" (either Hutchinson or Macmillan or Polity Press, can't remember which).

A book which didn't impress me a long time ago was Apple's "Ideology
and Education" or something like that.  He's an American Marxist, but
I remember references to critiques of systems theory in between his
polemic.  I'll try to find a full reference.

Systems theory is valuable compared to classical science.  To me,
systems theory and simulation as a scientific method go hand in hand.
It falls down in its overuse of biological concepts (biology and
mathematics being the two scientific influences on many post-war
approaches to humanity; sociobiological game theory, ugh!)

Another useful book is David Harvey's "Science, Ideology and Human
Geography" (Longman?), which followed his systems theory/postivist
"Explanation in Geography".  You'll see both sides of systems theory
in his work.

Finally,  I am surprised at the response to my original comments on
free will and AI.  The point is still being missed that our
current society needs free will, whether or not it can be established
philosophically that free will exists.  But I have changed my
mind about "AI's" concern about the issue, both in the orderliness
of John McCarthy's representation of a 1969 paper (missed it due
to starting secondary school :-)), and in Drew McDermott's awareness
of the importance of the issue and its relation to modern science
and dualism, plus all the other traffic on the issue.  I only wish
AI theorists could get to grips with the socialisation question as well,
and understand more sympathetically why dualism persists (by law in the case
of the UK school curriculum).

Hope you're enjoying all this as much as I am :-)
Gilbert.

------------------------------

Date: 30 May 88 12:01:44 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: The Social Construction of Reality

In article <218@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>If you have any questions about the game the consensus reality
>freaks are playing, you can send me e-mail.
Caught out at last! Mail this CIA address for details of the Marxist attempt
to undermine the All-American AI net.  Digest no relativist epistemologies
without clearance from this mailbox.  All is revealed by Nancy Reagan's
astrologer, a secondhand version of BACON (St. Francis to the scientists)

Come on T. Willy, you just don't like social constructs, that's all. :-)
Of course there's a resistance in the physical world to any old social
construct.  But blow me, aren't people different?  Take Kant, a German
while alive but a horse when dead.  Guilty under the Laws of Physics.  Why,
even digestibility seems to be a social construct, just look at all that
anti-Americanism over the good old US hamburger.  And as for language, well,
that's just got no respect at all for the laws of physics.  No wonder those
damned commies get to twist the true All-American meaning out of words.

Remember. Don't go out at night without positivism in your pocket.
          Report all subcultural tendencies to the appropriate authority.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

End of AIList Digest
********************

∂03-Jun-88  0119	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #14  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Jun 88  01:18:49 PDT
Date: Thu  2 Jun 1988 01:40-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #14
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 14

Today's Topics:

  Still More Free Will

----------------------------------------------------------------------

Date: 30 May 88 14:41:18 GMT
From: mcvax!ruuinf!piet@uunet.uu.net  (Piet van Oostrum)
Subject: Re: More Free Will

In article <532@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:

   If free will has created civilization as we know it, then it must be
   accepted with mixed emotions.  This means that Hitler, Stalin, some of
   the Catholic Popes during the middle ages and others have created a
   great deal of havoc that was not good.  One of the prime reasons for AI
   is to perhaps develop systems that prevent things like this from
   happening.  If we with our free will (you said it, not me) can't seem to
   create a decent world to live in, perhaps a machine without free will
   operating within prescribed boundaries may do a better job.  We sure
   haven't done too well.

I agree we haven't done too well, but if these same persons (i.e. WE) are
going to design a machine, what makes you think this machine will do a
better job???

If the machine doesn't have free will, the designers must decide what
kind of decisions it will make, and these decisions will be based upon their
insights, ideas, morals, etc.

Or would you believe AI researchers (or scientists in general) are
inherently better than rulers, popes, Nazis, communists, or Catholics, to
name a few?

Hitler and Stalin had scientists work for them, and there are now AI
researchers working on war-robots and similar nasty things. That doesn't give
ME much hope from that area.

--
Piet van Oostrum, Dept of Computer Science, University of Utrecht
Padualaan 14, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands
Telephone: +31-30-531806              UUCP: ...!mcvax!ruuinf!piet

------------------------------

Date: 30 May 88 16:40:28 GMT
From: dvm@yale-zoo.arpa  (Drew Mcdermott)
Subject: Free will

More on the self-modeling theory of free will:

Since no one seems to have understood my position on this topic,
I will run the risk that no one cares about my position, and try
to clarify.

Sometimes parties to this discussion talk as if "free will" were
a new kind of force in nature.  (As when Biep Durieux proposed that
free will might explain probability rather than vice versa.)  I am
sure I misrepresent the position; the word "force" is surely wrong
here (as is the word "new").  The misrepresentation is unavoidable;
this kind of dualism is simply not a live option for me.  Nor can
I see why it needs to be a perennially live option on an AI discussion
bulletin board.

So, as I suggested earlier, let's focus on the question of free will
within the framework of Artificial Intelligence.  And here it
seems to me the question is, How would we tell an agent with free
will from an agent without it?  Two major strands of the discussion
seem completely irrelevant from this standpoint:

  (1) Determinism vs. randomness.  The world is almost
certainly not deterministic, according to quantum mechanics.  Quantum
mechanics may be false, but Newtonian mechanics is certainly false,
so the evidence that the world is deterministic is negligible.
(Unless the Everett-Wheeler interpretation of quantum mechanics is true,
in which case the world is a really bizarre place.)  So, if determinism
is all that's bothering you, you can relax.  Actually, I think what's
really bothering people is the possibility of knowledge (traditionally,
divine knowledge) of the outcomes of their future decisions, which has
nothing to do with determinism.

  (2) My introspections about my ability to control my thoughts or
whatnot.  There is no point in basing the discussion on such evidence,
until we have a theory of what conscious thoughts are.  Such a theory
must itself start from the outside, looking at a computational agent
in the world and explaining what it means for it to have conscious
thoughts.  That's a fascinating topic, but I think we can solve the
free will problem with less trouble.

  So, what makes a system free?  To the primitive mind, free decisions
are ubiquitous.  A tornado decides to blow my house down; it is worth
trying to influence its decision with various rewards or threats.
But nowadays we know that the concept of decision is just out of place
in reasoning about tornados.  The proper concepts are causal; if we
can identify enough relevant antecedent factors, we can predict (and
perhaps someday control) the tornado's actions.  Quantum mechanics
and chaos set limits to how finely we can predict, but that is
irrelevant.

  Now we turn to people.  Here it seems as if there is no need to do
away with the idea of decision, since people are surely the paradigmatic
deciders.  But perhaps that attitude is "unscientific."  Perhaps the
behaviorists are right, and the way we think about thunderstorms is
the right way to think about people.  If that's the actual truth, then
we should be tough-minded and acknowledge it.

  It is *not* the truth.  Freedom gets its toehold from the fact that
it is impossible for an agent to think of itself in terms of causality.
Contrast my original bomb scenario with this one:

   R sees C wander into the blast area, and go up to the bomb.  R knows
   that C knows all about bombs, and R knows that C has plenty of time to
   save itself, so R decides to do nothing.  (Assume that preventing the
   destruction of other robots gets big points in R's utility function.)

In this case, R is reasoning about an agent other than itself.  Its problem
is to deduce what C will actually do, and what C will actually suffer.  The
conclusion is that C will prosper, so R need do nothing.  It would
be completely inappropriate for R to reason this way about itself.  Suppose
R comes to realize that it is standing next to a bomb, and it reasons as
follows:

   R knows all about bombs, and has plenty of time to save itself, so I need
   do nothing.

Its reasoning is fallacious, because it is of the wrong kind.  R is not
being called on to deduce what R will do, but to be a part of the causal
fabric that determines what R will do, in other words: to make a
decision.  It is certainly possible for a robot to engage in a reasoning
pattern of this faulty kind, but only by pretending to make a decision,
inferring that the decision will be made like that, and then not
carrying it out (and thus making the conclusion of the inference false).
Of course, such a process is not that unusual; it is called "weakness of
the will" by philosophers.  But it is not the sort of thing one would be
tempted to call an actual decision.  An actual decision is a process of
comparative evaluation of alternatives, in a context where the outcome
of the comparison will actually govern behavior.  (A robot cannot decide
to stop falling off a cliff, and an alcoholic or compulsive may not
actually make decisions about whether to cease his self-destructive
behavior.)

   This scenario is one way for a robot to get causality wrong when
reasoning about itself, but there is a more fundamental way, and that is
to just not notice that R is a decision maker at all.  With this
misperception, R could tally its sources of knowledge about all
influences on R's behavior, but it would miss the most important one,
namely, the ongoing alternative-evaluation process. Of course, there are
circumstances in which this process is in fact not important.  If R is
bound and gagged and floating down a river, then it might as well
meditate on hydrodynamics, and not work on a decision.  But most of the
time the decision-making process of the robot is actually one of the
causal antecedents of its future.  And hence, to repeat the central
idea, *there is no point in trying to think causally about oneself while
making a decision that is actually part of the causal chain.  Any system
that realizes this has free will.*

  This theory accounts for why an agent must think of itself as outside
the causal order of things when making a decision.  However, it need not
think of other agents this way.  An agent can perfectly well think of
other agents' behavior as caused or uncaused to the same degree the
behavior of a thunderstorm is caused or uncaused.  There is a
difference: One of the best ways to cause a decision-making agent to do
something is to give him a good reason to do it, whereas this strategy
won't work with thunderstorms.  Hence, an agent will do well to sort
other systems into two categories, those that make free decisions and
those that don't, and deal with them differently.

  By the way, once a decision is made there is no problem with its maker
thinking of it purely causally, in exactly the same way it thinks about
other decision makers.  An agent can in principle see *all* of the
causal factors going into its own past decisions, although in practice
the events of the past will be too random or obscure for an exhaustive
analysis.  It is surely not dehumanizing to be able to bemoan that if
only such-and-such had been brought to my attention, I would have
decided otherwise than I did, but, since it wasn't, I was led
inexorably to a wrong decision.

 Now let me deal with various objections:

   (1) Some people said I had neglected the ability of computers to do
reflexive meta-reasoning.  As usual, the mention of meta-reasoning makes
my head swim, but I shall try to respond.  Meta-reasoning can mean
almost anything, but it usually means escaping from some confining
deductive system in order to reason about what that system ought to
conclude.  If this is valuable, there is no reason not to use it.  But
my picture is of a robot faced with the possibility of reasoning about
itself as a physical system, which is in general a bad thing to do.
The purpose of causal-exemption flagging is to shut pointless reasoning
down, meta or otherwise.

So, when O'Keefe says:

    So the mere possibility of an agent having to appear to simulate itself
    simulating itself ... doesn't show that unbounded resources would be
    required:  we need to know more about the nature of the model and the
    simulation process to show that.

I am at a loss.  Any system can simulate itself with no trouble.  It
could go over past or future decisions with a fine-tooth comb, if it
wanted to.  What's pointless is trying to simulate the present period
of time.  Is an argument needed here?  Draw a mental picture: The robot
starts to simulate, and finds itself simulating ...  the start of a
simulation.  What on earth could it mean for a system to figure out
what it's doing by simulating itself?
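
As a purely illustrative aside (nothing here is part of the argument above,
and all the names are invented), a toy Python sketch of that regress might
look like this:

    import sys

    class Agent:
        """A toy agent whose only current activity is deciding what to do next."""

        def decide(self):
            # To "figure out what it's doing by simulating itself", the agent
            # would have to simulate its present deliberation, which begins
            # with this very call to simulate_now().
            return self.simulate_now()

        def simulate_now(self):
            # Simulating the present moment means simulating the agent starting
            # a simulation of the present moment: an unbounded regress.
            return self.simulate_now()

    if __name__ == "__main__":
        sys.setrecursionlimit(1000)
        try:
            Agent().decide()
        except RecursionError:
            print("self-simulation of the present moment never bottoms out")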

  (2) Free will seems on this theory to have little to do with
consciousness or values.  Indeed it does not.  I think a system could
be free and not be conscious at all; and it could certainly be free and
not be moral.

  What is the minimal level of free will?  Consider a system for
scheduling the movement of goods into and out of a warehouse.  It has to
synchronize its shipments with those of other agents, and let us
suppose that it is given those other shipments in the form of various
schedules that it must just work around.  From its point of view, the
shipments of other agents are caused, and its own shipments are to be
selected.  Such a system has what we might call *rudimentary* free
will.  To get full-blown free will, we have to suppose that the system
is able to notice the discrepancy between boxes that are scheduled to
be moved by someone else, and boxes whose movements depend on its
decisions.  I can imagine all sorts of levels of sophistication in
understanding (or misunderstanding) the discrepancy, but just noticing
it is sufficient for a system to have full-blown free will.  At that
point, it will have to realize that it and its tools (the things it
moves in the warehouse) are exempt from causal modeling.
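
A minimal sketch of the rudimentary case, in Python with invented names and a
toy representation of shipments (nothing below comes from any real scheduler):
the other agents' shipments enter as fixed constraints to work around, while
the system's own shipments are the variables it selects.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Shipment:
        box: str
        hour: Optional[int]   # fixed hour if scheduled by someone else, else None
        ours: bool            # True if this system decides when the box moves

    def schedule(shipments: List[Shipment], dock_hours: range) -> List[Shipment]:
        """Treat others' shipments as caused (fixed) and our own as selected."""
        busy = {s.hour for s in shipments if not s.ours}   # caused: work around
        free_hours = [h for h in dock_hours if h not in busy]
        for s in shipments:
            if s.ours:                                     # selected: up to us
                s.hour = free_hours.pop(0)
        return shipments

    if __name__ == "__main__":
        plan = schedule(
            [Shipment("A", 9, False), Shipment("B", None, True), Shipment("C", None, True)],
            dock_hours=range(9, 17),
        )
        print(plan)

Noticing the difference between the two kinds of box, rather than merely
acting on it as this sketch does, is what full-blown free will adds.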

   (3) Andrew Gelsey has pointed out that a system might decide what to
do by means other than simulating various alternative courses of action.
For instance, a robot might decide how hard to hit a billiard ball by
solving an equation for the force required.  In this case, the
asymmetry appears in what is counted as an independent variable (i.e.,
the force administered).  And if the robot notices and appreciates the
asymmetry, it is free.
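
As an invented numerical gloss (the figures are made up, not Gelsey's), the
point is that the force is treated as the one unknown to be solved for, while
everything else is taken as a given feature of the world:

    # Solve the impulse relation F * dt = m * v_target for F, the quantity
    # the robot regards as "up to it"; mass, contact time, and target speed
    # are treated as fixed facts rather than as things to be predicted.
    m = 0.17          # ball mass in kg
    dt = 0.002        # contact time in seconds
    v_target = 1.5    # desired speed in m/s
    F = m * v_target / dt
    print(F)          # 127.5 N, the "decision" variable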

   (4) David Sher has objected

      If I understand [McDermott's theory] correctly it runs like this:
      To plan one has a world model including future events.
      Since you are an element of the world then you must be in the model.
      Since the model is a model of future events then your future actions
      are in the model.
      This renders planning unnecessary.
      Thus your own actions must be excised from the model for planning to
      avoid this "singularity."

      Taken naively, this analysis would prohibit multilevel analyses such
      as is common in game theory.  A chess player could not say things like
      if he moves a6 then I will move Nc4 or Bd5 which will lead ....

The response to this misreading should be obvious.  There are two ways
to think about my future actions.  One way is to treat them as
conditional actions, begun now, and not really future actions at all.
(Cf. the notion of strategy in game theory.)

The more interesting insight is that an agent can reason about its
future actions as if they were those of another agent.  There is no
problem with doing this; the future is much like the past in this
respect, except we have less information about it.  A robot could
reason at its leisure about what decision it would probably make if
confronted with some future situation, and it could use an arbitrarily
detailed simulation of itself to do this reasoning, provided it has
time to run it before the decision is to be made.  But all of this
self-prediction is independent of actually making the decision.  When
the time comes to actually make it, the robot will find itself free
again.  It will not be bound by the results of its simulation.  This
may seem like a non sequitur; how could a robot not faithfully execute
its program the same way each time it is run?  There is no need to
invoke randomness; the difference between the two runs is that the
second one is in a context where the results of the simulation are
available.  Of course, there are lots of situations where the decision
would be made the same way both times, but all we require is that the
second be correctly classified as a real -- free -- decision.

I find Sher's "fix" to my theory more dismaying:

   However we can still make the argument that Drew was making; it's just
   more subtle than the naive analysis indicates.  The way the argument
   runs is this:
   Our world model is by its very nature a simplification of the real
   world (the real world doesn't fit in our heads).  Thus our world model
   makes imperfect predictions about the future and about consequence.
   Our self model inside our world model shares in this imperfection.
   Thus our self model makes inaccurate predictions about our reactions
   to events.  We perceive ourselves as having free will when our self
   model makes a wrong prediction.

This is not at all what I meant, and seems pretty shaky on its own
merits.  This theory makes an arbitrary distinction between an agent's
mistaken predictions about itself and its mistaken predictions about
other systems.  I think it's actually a theory of why we tend to
attribute free will to so many systems, including thunderstorms.  We
know our freedom makes us hard to predict, and so we attribute freedom
to any system we make a wrong prediction about.  This kind of paranoia
is probably healthy until proven false.  But the theory doesn't explain
what we think free will is in the first place, or what its explanatory
force is in explaining wrong predictions.

Free will is not due to ignorance.  Imagine that the decision maker is a
robot with a very routine environment, so that it often has complete
knowledge both of its own listing and of the external sensory data it
will be receiving prior to a decision.  So it can simulate itself to any
level of detail, and it might actually do that, thinking about decisions
in advance as a way of saving time later when the actual decision had
to be made.  None of this would allow it to avoid making free
decisions.
                                     -- Drew McDermott

------------------------------

Date: Mon, 30 May 88 15:05:47 EDT
From: Bharat.Dave@CAD.CS.CMU.EDU
Subject: reconciling free will and determinism


Perhaps the following quote from "Chaos" by J. Gleick may help reconcile
the apparently dichotomous concepts of determinism and free will.  This is
more of a *metaphor* than a rigorous argument.  Discussing the work of a
group of people at UCSC on chaotic systems, the author quotes
Doyne Farmer [p. 251].


       Farmer  said,  "On  a philosophical level, it struck me as an
    operational way to define free will, in a way that  allowed  you
    to   reconcile   free  will  with  determinism.  The  system  is
    deterministic, but you can't say what it's going to do next...

       Here was one coin  with  two  sides.  Here  was  order,  with
    randomness   emerging,  and  then  one  step  further  away  was
    randomness with its own underlying order."


Two other themes that pop up quite often in this book are the apparent
differences between behavior at micro and macro scales, and the sensitive
dependence of the behavior of dynamical systems on initial conditions.
Choice of vantage point significantly affects what you see.  At the
molecular level, we are all a stinking mess with highly boring and
uniform behavior; on a different level, I can't predict with certainty
how most people will respond to this message.

Seems like you can't be free unless you acknowledge determinism :-)

------------------------------

Date: 31 May 88 03:24:17 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Free will


       Since this discussion has lost all relevance to anything anybody
is likely to actually implement in the AI field in the next twenty years
or so, could this be moved to talk.philosophy?

                                        John Nagle

------------------------------

Date: 31 May 88 14:10:08 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Self Simulation

Drew McDermott's lengthy posting included a curious nugget.  Drew
paints a scenario in which a robot engages in a simulation which
includes the robot itself as a causal agent in the simulation.  Drew
asks, "What on earth could it mean for a system to figure out what
it's doing by simulating itself?"

I was captured by the notion of self-simulation, and started day-dreaming,
imagining myself as an actor inside a simulation.  I found that, as the
director of the day-dream, I had to delegate free will to my simulated
self.  The movie free-runs, sans script.  It was just like being asleep.

So, perhaps a robot who engages in self-simulation is merely dreaming
about itself.  That's not so hard.  I do it all the time.

--Barry Kort

------------------------------

Date: 31 May 88 14:35:33 GMT
From: uhccux!lee@humu.nosc.mil  (Greg Lee)
Subject: Re: Free will

Edward Lasker, in his autobiographical book about his experiences as
a chess master, describes a theory and philosophical tract
by his famous namesake, Emanuel Lasker, who was world chess
champion for many years.  It concerned a hypothetical being,
the Macha"ide, which is so advanced and profound in its thought
that its choices have become completely constrained.  It can
discern and reject all courses of action that are not optimal,
and therefore it must.  It is so evolved that it has lost
free will.
                Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: 31 May 88 14:39:41 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu  (Carl F. Huber)
Subject: Re: Free Will & Self Awareness

In article <5323@xanth.cs.odu.edu> Warren E. Taylor writes:
>In article <1176@cadre.dsl.PITTSBURGH.EDU>, Gordon E. Banks writes:
>
>"Spanking" IS, I repeat, IS a form of redesigning the behavior of a child.
>Many children listen to you only when they are feeling pain or are anticipating
>the feeling of pain if they do not listen.

>         Also, pain is often the only teacher a child will listen to. He

On what basis do you make this extraordinary claim?  Experience?  Or do you
have some reputable publications to mention?  I would like to see the studies.
I also assume that "some" refers to the 'average' child - not pathological
exceptions.

>                                     I have an extremely hard-headed nephew
>who "deserves" a spanking quite often because he is doing something that is
>dangerous or cruel or simply socially unacceptable. He is also usually
>maddeningly defiant.
 ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
Most two- to six-year-olds are.  How old is this child? 17?  What other methods
have been tried?  Spanking generally results from frustrated parents
who believe they have "tried everything", while they actually haven't
begun to scratch the surface of what works.

>learns to associate a certain action with undesirable consequences. I am not
>the least bit religious, but the old Biblical saying of "spare the rod..." is

right, so let's not start something we'll regret, a la .psychology.
>
>Flame away.
>   Warren.

voila.  cheers.

-carl

------------------------------

Date: 31 May 88 15:27:11 GMT
From: mailrus!caen.engin.umich.edu!brian@ohio-state.arpa  (Brian
      Holtz)
Subject: Re: DVM's request for definitions


In article <1020@cresswell.quintus.UUCP>, ok@quintus.UUCP (Richard A.
O'Keefe) writes:
> In article <894@maize.engin.umich.edu>, brian@caen.engin.umich.edu (Brian
> Holtz) writes:
> > 3. volition: the ability to identify significant sets of options and to
> > predict one's future choices among them, in the absence of any evidence
> > that any other agent is able to predict those choices.
> >
> > There are a lot of implications to replacing free will with my notion of
> > volition, but I will just mention three.
> >
> > - If my operationalization is a truly transparent one, then it is easy
> > to see that volition (and now-dethroned free will) is incompatible with
> > an omniscient god.  Also, anyone who could not predict his behavior as
> > well as someone else could predict it would no longer be considered to
> > have volition.

[The following excerpts are not in any particular order]

> For me, "to have free will" means something like "to act in accord with
> my own nature".  If I'm a garrulous twit, people will be able to predict
> pretty confidently that I'll act like a garrulous twit (even though I
> may not realise this), but since I will then be behaving as I wish I
> will correctly claim free will.

Recall that my definition of free will ("the ability to make at least some
choices that are neither uncaused nor completely determined by physical
forces") left little room for it to exist.  Your definition (though I doubt
you will appreciate being held to it this strictly) leaves too much room:
doesn't a falling rock, or the average computer program, "act in accord with
[its] own nature"?

> One thing I thought AI people were taught was "beware of the homunculus".
> As soon as we start identifying parts of our mental activity as external
> to "ourselves" we're getting into homunculus territory.

I agree that homunculi are to be avoided; that is why I relegated "the
ability to make at least some choices that are neither uncaused nor
completely determined by *external* physical forces" to being a definition
not of free will, but of "self-determination".  The free will that you are
angling for sounds a lot like what I call self-determination, and I would
welcome any efforts to sharpen the definition so as to avoid the
externality/internality trap.  So until someone comes up with a definition
of free will that is better than yours and mine, I think the best course is
to define free will out of existence and take my "volition" as the
operationalized designated hitter for free will in our ethics.

> What has free will to do with prediction?  Presumably a dog is not
> self conscious or engaged in predicting its activities, but does that
> mean that a dog cannot have free will?

Free will has nothing to do with prediction; volition does.  The question of
whether a dog has free will is a simple one with either your definition *or*
mine.  By my definition, nothing has free will; by yours, it seems to me
that everything does. (Again, feel free to refine your definition if I've
misconstrued it.)  A dog would seem to have self-determination as I've
defined it, but you and I agree that my definition's reliance on
ex/in-ternality makes it a suspect categorization.  A dog would clearly not
have volition, since it can't make predictions about itself.  And since
volition is what I propose as the predicate we should use in ethics, we are
happily exempt from extending ethical personhood to dogs.

> "no longer considered to have volition ..."  I've just been reading a
> book called "Predictable Pairing" (sorry, I've forgotten the author's)
> name, and if he's right it seems to me that a great many people do
> not have volition in this sense.  If we met Hoyle's "Black Cloud", and
> it with its enormous computational capacity were to predict our actions
> better than we did, would that mean that we didn't have volition any
> longer, or that we had never had it?

A very good question.  It would mean that we no longer had volition, but
that we had had it before.  My notion of volition is contingent, because it
depends on "the absence of any evidence that any other agent is able to
predict" our choices.  What is attractive to me about volition is that it
would be very useful in answering ethical questions about the "free will"
(in the generic ethical sense) of arbitrary candidates for personhood: if
your AI system could demonstrate volition as defined, then your system would
have met one of the necessary conditions for personhood.  What is unnerving
to me about my notion of volition is how contingent it is: if Hoyle's "Black
Cloud" or some prescient god could foresee my behavior better than I could,
I would reluctantly conclude that I do not even have an operational
semblance of free will.  My conclusion would be familiar to anyone who
asserts (as I do) that the religious doctrine of predestination is
inconsistent with believing in free will.  I won't lose any sleep over this,
though; Hoyle's "Black Cloud" would most likely need to use analytical
techniques so invasive as to leave little of me left to rue my loss of
volition.

------------------------------

Date: 31 May 88 15:51:48 GMT
From: mind!thought!ghh@princeton.edu  (Gilbert Harman)
Subject: Re: Free will

In article <17470@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>       Since this discussion has lost all relevance to anything anybody
>is likely to actually implement in the AI field in the next twenty years
>or so, could this be moved to talk.philosophy?
>
>                                       John Nagle


Drew McDermott's suggestion seems highly relevant to
implementations while offering a nice approach to at least
one problem of free will.  (It seems clear that people have
been worried about a number of different things under the
name of "free will".)  How about keeping a discussion of
McDermott's approach here and moving the rest of the
discussion to talk.philosophy?

                       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
                       221 Nassau Street, Princeton, NJ 08542

                       ghh@princeton.edu
                       HARMAN@PUCC.BITNET

------------------------------

Date: Tue, 31 May 88 17:08 EDT
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: RE: Free Will and Self-Awareness


  >From: bwk@mitre-bedford.arpa  (Barry W. Kort)
   [...]
  >It is not clear to me whether aggression is instinctive (wired-in)
  >behavior or learned behavior.

You might want to take a look at the book >On Aggression< by Konrad
Lorenz.  I don't have the complete reference with me, but can supply
upon request.  I read it back in 1979, but if I recall correctly, one of the
primary theses set forth in the book is that aggression is indeed
instinctive in all animal life forms including humans and serves
both to defend and perpetuate the species.  Voluminous citations of
purposeful and useful aggressive behavior in many species are provided.
I think he also philosophizes on how we as thinking and peace-loving
people (with free will (!)) can make use of our conscious recognition of
our innate aggression to keep it at appropriate levels of manifestation.
I became very excited about his ideas at the time.

-Kurt Godden
 GM Research
 godden@gmr.com

------------------------------

Date: Tue, 31 May 88 18:48:56 -0700
From: peck@Sun.COM
Subject: free will

First of all, I want to say that I'm probably in this camp:
> 4. (Minsky et al.) There is no such thing as free will.  We can dispense
> with the concept, but for various emotional reasons we would rather not.

I haven't followed all this discussion, but it seems the free-will protagonists'
basic claim is that "choices" are made randomly or by a homunculus,
and the physicists claim that the homunculus has no free will either.

 Level 1 (psychological): an intelligent agent first assumes that it has
choices, and reasons about the results of each and determines predictively
the "best" choice. ["best" determined by some optimization criterion function]
This certainly looks like free-will, especially to those experiencing it
or introspecting on it.

 Level 0 (physics): the process or computation which produces the reasoning
and prediction at level 1 is deterministic. (plus or minus quantum effects)
From level 0 looking up to level 1, it's hard to see where free will comes in.
Do the free-will'ists contend that the mind retroactively controls the
laws of physics?  More perspicuous would be that the mind distorts its own
perception of itself to believe that it is immune to the laws of physics.
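
(A minimal sketch of the Level 1 picture, with an invented criterion function
and predictor standing in for whatever the agent actually uses:

    def choose(options, predict, criterion):
        # Level 1 in code: assume there are choices, predict the outcome of
        # each, and take the one the criterion function scores highest.
        return max(options, key=lambda act: criterion(predict(act)))

    # Toy stand-ins: predicted utilities for three possible actions.
    outcomes = {"wait": 0.2, "act_now": 0.7, "ask_for_help": 0.5}
    best = choose(outcomes.keys(), predict=outcomes.get, criterion=lambda u: u)
    print(best)   # "act_now"

Level 0 is simply whatever deterministic computation carries this out.)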


My real question is: Does it matter whether man or machine has free will?
 In what way does the existence of free will make a more intelligent actor,
a better information processor, or a better controller of processes?

If an agent makes good decisions or choices, or produces control outputs,
stores or interprets information, or otherwise produces behaviors,
 What does it matter (external to that agent) whether it has free will?
 What does it matter *internal* to that agent?
 Does it matter if the agent *believes* it has free will?

[For those systems sophisticated enough to introspect, *believing* to have
 free will is useful, at least initially (was this Minsky's argument?).]

Are there any objective criteria for determining if an agent has free-will?

[If, as I suspect, this whole argument is based on the assumption that:
  Free-will iff (not AI), then it seems more feasible to work on the
  AI vs (not AI) clause, and *then* debate the free-will clause.]


My brief response to the moral implication of free will:
 For those agents that can accept the non-existence of free will and still
function reliably and intelligently (i.e. the "enlightened few"), the concept is
not necessary.  For others (the "majority"), the concept of free will,
like that of God, and sin, and laws, and purpose, and the rest of what
Vedanta/Buddhists would call the "Illusion", is either necessary or useful.
  In this case, it is important *not* to persuade someone that they do
not have free will.  If they cannot figure out the reasons and ramifications
of that little gem of knowledge, it is probably better that they not be
burdened.  Yes, a little knowledge *is* a dangerous thing.

- ---
  My other bias: the purpose of intelligence is to produce more flexible, more
capable controllers of processes (the first obvious one is the biochemical
plant that is your body).  These controllers of course are mostly
characterized by their control of information: how well they understand or
model the environment or the controlled process determines how well they can
predict or control that environment or process.  So, intelligence is
inexorably tied up with information processing (interpreting, storing,
encoding/decoding), and decision making.

 From this point of view, the more interesting questions are: What is the
criterion function (utility function, feedback function, or whatever you call
it); how is it formed; how is it modified; how is it evaluated?

 The question of evaluation is an important difference between artificial
intelligence and organic intelligence.  In people the evaluation is done by
the "hardware", it is modified as the body is modified, bio-chemically.
It is the same biochemistry that the evaluation function is trying to control!

Thought for the day:
 If it turns out that the criterion function is designed to perpetuate
*itself* (the function, not merely its agent), by arranging to have itself
be modified as a result of the actions based on its predictive evaluations
[i.e., it is self-serving and self-perpetuating just as genes/memes/viruses are],
would that help explain why choices seem indeterminate and "free"?

------------------------------

Date: 1 Jun 88 09:38:00 EDT
From: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Subject: Free will - the metaphysics continues...


Drew McDermott writes:

> I believe most of the confusion about this concept comes from there
> not being any agreed-upon "common sense" of the term "free will."  To
> the extent that there is a common consensus, it is probably in favor
> of dualism, the belief that the absolute sway of physical law stops at
> the cranium.  Unfortunately, ever since the seventeenth century, the
> suspicion has been growing among the well informed that this kind of
> dualism is impossible.  ...
>
> If we want to debate about AI versus dualism, ...we can.  [Let's]
> propose technical definitions of free will, or propose dispensing with
> the concept altogether. ...
>
> I count four proposals on the table so far:
>
> 1. (Proposed by various people) Free will has something to do with randomness.
>
> 2. (McCarthy and Hayes) When one says "Agent X can do action A," or
> "X could have done A,"
>
> 3. (McDermott) To say a system has free will is to say that it is
> "reflexively extracausal,"
>
> 4. (Minsky et al.) There is no such thing as free will.

I wish to respond somewhat indirectly by trying to describe the
"classical" position - ie to present a paradigm case of free will.
Thus, we will have an account of *sufficient* conditions for free
will.  Readers may then consider whether these conditions are
necessary - whether we can back off to a less demanding case, and
still have free will.  I suspect the correct answer is "no", but
I'll not argue that point too thoroughly.

*** *** *** *** *** ***

Brief version:  Free will is the ability of a conscious entity to make
free decisions.  The decision is free, in that, although the entity
causes the decision, nothing causes the entity to make the decision.

*** *** *** *** *** ***

(Rough analogy: an alpha particle is emitted (caused) by the decay of
a nucleus, but nothing caused the nucleus to decay and emit the
particle - the emitting [deciding] is uncaused).

There's an unfortunate ambiguity in the word "decision" - it can mean
the outcome (the decision was to keep on going), or the process (his
decision was swift and sure).  Keeping these straight, it is the
decision-process, the making of the decision-outcome, that is
uncaused.  The decision-outcome is, of course, caused (only) by the
decision-process.

Discussion: I'm going to opt for breadth at the expense of depth -
please read in non-nitpicking spirit.

1.  Randomness - free will is related to randomness only in that both
are examples of acausality.  True, my uncaused decision is "random"
wrt the physical world, and/or the history of my consciousness - ie
not absolutely predictable therefrom.  That doesn't mean it's random,
in the stronger sense of being a meaningless event that popped out of
nowhere - see next item.

2.  Conscious entity - free will is a feature which only a conscious
entity (like you and me, and maybe your dog) can have - can anyone
credit an unconscious entity with free will, even if it "makes a
random decision" in some derivative sense (eg a chess-playing program
which uses QM processes to vary its moves) ?  "Consciousness is a
problematic concept" you say? - well, yes, but not as much so as is
free will - and I think it's only problematic to those who insist that
it be reduced to more simple concepts.  There ain't any.
    Free decisions "pop out" of conscious entities, not nuclei.  If
you don't know what a conscious entity is, you're not reading this.
No getting around it, free will brings us smack up to the problem of
the self - if there are no conscious selves, there is no free will.
While it may be difficult to describe what it takes to be a conscious
self, at least we don't doubt the existence of selves, as we may the
existence of free will.  So the strategy here is to take selves as a
given (for now at least) and then to say under what conditions these
selves have free will.

3.  Dualism - I believe we can avoid this debate; I maintain that free
will requires consciousness.  Whether consciousness is physical, we
can leave aside.  I can't resist noting that Saul Kripke is probably
as "well-informed" as anyone, and last I heard, he was a dualist.  It's
quite fashionable nowadays to take easy verbal swipes at dualism as an
emblem of one's sophistication.  I suspect that some swipers might be
surprised at how difficult it is to concoct a good argument against
dualism.

4.  Physics - Does the requirement for acausality require the
violation of known physical laws?  Very unclear.  First, note that all
kinds of macro-events are ultimately uncaused in that they stem from
ontologically random quantum events (eg radiation causing birth
defects, cancer...).  Whether brain events magnify QM uncertainty in
this way no one really knows, but it's not to be ruled out.  Further,
very little is understood of the causal relations between brain and
consciousness (hence the dualism debate).  At any rate, the position
is that, for the conscious decision-process to be free, it must be
uncaused.  If this turns out to violate physical laws as presently
understood, and if the present understanding turns out to be correct,
then this just shows that there is no free will.

5.  No denial of statistical probabilities or influence - None of
the above denies that allegedly free deciders are, in fact, quite
predictable (probabilistically).  It is highly unlikely that I will
decide to put up my house for sale tomorrow, but I could.  My
conscious reasons for not doing so do not absolutely determine that I
won't.  I could choose to in spite of these reasons.

6.  Free will as "could have decided otherwise" - This formulation is
OK as long as the strength of the "could" includes at least physical
possibility, not just logical possibility.  If one could show that
my physical brain-state today physically determines that I will
(consciously) decide to sell my house tomorrow, it's not a free
decision.

7.  A feature, not an event - I guess most will agree that free will
is a capability (like strong arms) which is manifested in particular
events - in the case of free will, the events are free decisions.  An
entity might make some caused decisions and some free - free will only
says he/she can make free ones, not that all his/her decisions are
free.

8.  Rationality / Intelligence - It may well be true that rationality
makes free will worth having but there's no reason not to consider the
will free even in the absence of a whole lot of intelligence.
Rationality makes strong arms more valuable as well, but one can still
have strong arms without it.  As long as one can acausally decide, one
has free will.

9.  Finding out the truth - Need I mention that the above is intended
to define what free will is, not necessarily to tell one how to go
about determining whether it exists or not.  To construct a test
reflecting the above considerations is no small task.  Moreover, one
must decide (!) where the burden of proof lies: a) I feel free,
therefore it's up to you to prove my feelings are illusory and that
all my decision-processes are caused, or b) the "normal" scientific
assumption is that all macro-events have proximate macro-causes, and
therefore it's up to me to show that my conscious processes are a
"special case" of some kind.

John Cugini  <Cugini@icst-ecf.arpa>

------------------------------

Date: 1 Jun 88 14:39:00 GMT
From: apollo!nelson_p%apollo.uucp@eddie.mit.edu
Subject: Free will and self-awareness


Gilbert Cockton posts:
>The test is easy, look at the references.  Do the same for AAAI and
>IJCAI papers.  The subject area seems pretty introspective to me.
>If you looked at an Education conference proceedings, attended by people who
>deal with human intelligence day in day out (rather than hack LISP), you
>would find a wide range of references, not just specialist Education
>references.
>You will find a broad understanding of humanity, whereas in AI one can
>often find none, just logical and mathematical references. I still
>fail to see how this sort of intellectual background can ever be
>regarded as adequate for the study of human reasoning.  On what
>grounds does AI ignore so many intellectual traditions?

  Because AI would like to make some progress (for a change!).  I
  originally majored in psychology.  With the exception of some areas
  in physiological psychology, the field is not a science.  Its
  models and definitions are simply not rigorous enough to be useful.
  This is understandable since the phenomena it attempts to address
  are far too complex for the currently available intellectual and
  technical tools.  The result is that psychologists and sociologists
  waste much time and money over essentially unresolvable philosophical
  debates, sort of like this newsgroup!   When you talk about an
  'understanding of humanity' you clearly have a different use of
  the term 'understanding' in mind than I do.

  Let's move this topic to talk.philosophy!!

                                             --Peter Nelson

------------------------------

Date: Wed, 01 Jun 88 12:48:12
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: Free will et al.

>Thanasis Kehagias <ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU>
>Subject: AI is about Death
> . . .
>SUGGESTED ANSWER: if AI is possible, then it is possible to create
>intelligence. all it takes is the know-how and the hardware. also, the
>following inference is not farfetched: intelligence -> life. so if AI
>is possible, it is possible to give life to a piece of hardware. no ghost
>in the machine. no soul. call this the FRANKENSTEIN HYPOTHESIS, or, for
>short, the FH (it's just a name, folks!).

The fact that we have "created intelligence" (i.e. new human beings)
since thousands of years ago, has not stopped the current controversy
or the discussion about the existence of the soul.
If, sometime in the future, we make artificial intelligence beings,
the discussions will go on the same as today. What is to prevent
a machine from having a soul?

The question cannot be decided in a discussion, because it comes
from totally different axioms, or starting points.
The (non-)existence of the soul (or of free will) is not a conclusion,
but an axiom. It is much more difficult to convince people to
change their axioms than to accept a fact.

Regards,

Manuel Alfonseca, ALFONSEC at EMDCCI11

------------------------------

End of AIList Digest
********************

∂03-Jun-88  0119	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #15  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Jun 88  01:19:35 PDT
Date: Thu  2 Jun 1988 02:02-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #15
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 15

Today's Topics:

  Re: Asimov's Laws of Robotics (Revised)
  randomness
  Acting Irrationally
  Re: Aah, but not in the fire brigade, jazz ensembles, rowing eights,...
  Human-human communication
  Re: Fuzzy systems theory was (Re: Alternative to Probability)
  Unadulterated Behavior

----------------------------------------------------------------------

Date: 27 May 88 15:57:30 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Asimov's Laws of Robotics (Revised)

I enjoyed reading Mike Sellers' reaction to my posting on Asimov's
Laws of Robotics.

Mike stumbles over the "must/may" dilemma:
>>      II.   A robot may respond to requests from human beings,
>                     ↑↑↑
>>            or other sentient beings, unless this conflicts with
>>            the First Law.
>
>Shouldn't "may" be "must" here, to be imperitive?  Otherwise it would seem
>to be up to the robot's discretion whether to respond to the human's requests.

I changed "must" to "may" because humans sometimes issue frivolous or
unwise orders.  If I tell Artoo Detoo to "jump in the lake", I hope
he has enough sense to ignore my order.

With the freedom granted by "may", I no longer need as many caveats
of the form "unless this conflicts with a higher-precedence law."

Note that along with freedom goes responsibility.  The robot now has
a duty to be aware of possible acts which could cause unanticipated
harm to other beings.  The easiest way for the robot to ensure that
a freely chosen act is safe is to inquire for objections.
This also indemnifies the robot from finger-pointing later on.

I respectfully decline Mike's suggestion to remove all references to
"sentient beings".  There are some humans who function as deterministic
finite-state automata, and there are some inorganic systems that behave
as evolving intelligences.  Since I sometimes have trouble distinguishing
human behavior from humane behavior, I wouldn't expect a robot to be
any more insightful than a typical person.

I appreciated Mike's closing paragraph in which he highlighted the
difficulty of balancing robot values, and compared the robot's dilemma
with the dilemma faced by our own civilization's leadership.

--Barry Kort

------------------------------

Date: Sun, 29 May 88 11:00:20 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: randomness

In AIList Digest   V7 #4, Barry Kort writes:

>If I wanted to give my von Neumann machine a *true* random number
>generator, I would connect it to an A/D converter driven by thermal
>noise (i.e. a toasty resister).

I recall that a Zener diode is a good source of noise (but cannot remember
the spectrum it gives).

It could be a good idea to utilize a Zener / A-D converter random number
generator in Monte Carlo simulations.

Andy Ylikoski

PS.  A pearl: Orthodox Christianity: Baruch Ha Ba, B'Shem Adonnnnnai

------------------------------

Date: Mon 30 May 88 23:22:32-PDT
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Acting Irrationally


>> Thus he learns that the other person feels strongly ...

> Wouldn't it have been easier if the yeller had simply disclosed his/her
> value system in the first place?  Or do I have an unrealistic expectation
> that the yeller is in fact able to articulate his/her value system to an
> inquiring mind?  --Barry Kort

Yelling is not necessarily an irrational act.  It is also a
communicative act, indicating an expectation based on custom
rather than rationality.  Custom tells us how to behave toward
others who follow the same customs, but give us no guidance in
behavior toward those who break custom but remain within the law
and the bounds of rationality.  Such people (weirdos, geniuses,
punkers, foreigners, teenagers, etc.) make us nervous and complicate
our lives, so we respond with anger.  We also use anger, real or
simulated, to let our children know which rules are based on
custom and are thus not explainable.

It would be nice if we could just explain our value systems, but
we don't seem to be wired that way.  (Anyway, we don't understand
our own culture well enough.)  At least we're civilized enough
not to stone or enslave those who are different from us --
at least, not often as part of government or religious policy.

Machines will have to be taught to recognize our communicative
anger.  I hope they won't have to emulate it as well.

                                        -- Ken Laws

------------------------------

Date: 31 May 88 15:33:09 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu  (Tom Holroyd)
Subject: Re: Aah, but not in the fire brigade, jazz ensembles, rowing
         eights,...

In article <1171@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> In article <5499@venera.isi.edu> Stephen Smoliar writes:
> >The problem comes in deciding
> >WHAT needs to be explicitly articulated and what can be left in the "implicit
> >background."
> ...
> For people who haven't spent all their life in academia or
> intellectual work, there will be countless examples of carrying out
> work in near 100% implicit background (watch fire and ambulance
> personelle who've worked together as a team for ages, watch a basketball
> team, a steeplejack and his mate, a good jazz ensemble, ...)

No.  Fire and ambulance personnel have regulations, basketball has rules
and teams discuss strategy and tactics during practice, and even jazz
musicians use sheet music sometimes.  I don't mean to say that implicit
communication doesn't exist, just that it's not as useful.  I don't know
how to build steeples, but I'll bet it can be written down.

Articulate as much as you can.  It's true we learn by doing, but we need to
be told what to do in case it's not obvious (eating is obvious).

Tom Holroyd
UUCP: {uunet,codas}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 31 May 88 15:05:00 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu  (Tom Holroyd)
Subject: Human-human communication

In article <32403@linus.UUCP>, Barry W. Kort writes:
> It is estimated that the human mind accumulates and retains over
> a lifetime enough information to fill 50,000 volumes.  That's quite
> a library.  The human input/output channel operates at about 300 bits
> per second (30 characters per second).  Exchanging personal knowledge
> bases is a time-consuming operation.  We are destined to remain unaware
> of vast portions of our civilization's collective information base.

This illustrates the problem quite nicely.  Obviously, if we are to
achieve understanding of our fellow man, we need to use our human I/O
channels as efficiently as possible.

> Much of what we know is not easily reduced to language.  That which
> cannot be described in words may have to be demonstrated in action.
> Some people speak of secret knowledge or private language.

Name one thing that isn't expressible with language! :-)

Even actions can be described.  We can't describe the unknown, of course.

A dog might "know" something and not be able to describe it, but this is
a shortcoming of the dog.  Humans have languages, natural and artificial,
that let us manipulate and transmit knowledge.

Does somebody out there want to discuss the difference between the dog's
way of knowing (no language) and the human's way of knowing (using language)?

Tom Holroyd
UUCP: {uunet,codas}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 31 May 88 21:04:03 GMT
From: uvaarpa!virginia!uvacs!cfh6r@mcnc.org  (Carl F. Huber)
Subject: Re: Asimov's Laws of Robotics (Revised)

In article <33085@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>Mike stumbles over the "must/may" dilemma:
>>>      II.   A robot may respond to requests from human beings,
>>                     ↑↑↑
>>Shouldn't "may" be "must" here, to be imperative?  Otherwise it would seem
>>to be up to the robot's discretion whether to respond to the human's requests.
>
>I changed "must" to "may" because humans sometimes issue frivolous or
>unwise orders.  If I tell Artoo Detoo to "jump in the lake", I hope
>he has enough sense to ignore my order.
>--Barry Kort

There may be some valid examples to demonstrate your point, but this
doesn't cut it.  If you tell Artoo Detoo to "jump in the lake", you hope
he has enough sense to understand the meaning of the order, and that
includes its frivolity factor.  You want him (it?) to obey the order
according to its intended meaning.  There is also a lot of elbow room in
the word "respond" - this certainly doesn't mean "obey to the letter".
-carl

------------------------------

Date: 31 May 88 19:31:27 GMT
From: ukma!uflorida!usfvax2!pollock@ohio-state.arpa  (Wayne Pollock)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <487@sequent.cs.qmc.ac.uk> root@cs.qmc.ac.uk (The Superuser) writes:
>...
>>>Because fuzzy logic is based on a fallacy
>>Is this kind of polemic really necessary?
>
>Yes.  The thing the fuzzies try to ignore is that they haven't established
>that their field has any value whatsoever except a few cases of dumb luck.

On the other hand, set theory, which underlies much of current theory, is
also based on fallacies (given the basic premises of set theory one can
easily derive their negation).  As long as fuzzy logic provides a framework
for discussing various concepts and mathematical ideas that would be hard
to describe in traditional terms, the theory serves a purpose.  It will
undoubtedly continue to evolve as more people become familiar with it--it
may even lead some researcher someday to an interesting or useful insight.
What more do you want from a mathematical theory?

Wayne Pollock (The MAD Scientist)       pollock@usfvax2.usf.edu
Usenet:         ...!{ihnp4, cbatt}!codas!usfvax2!pollock
GEnie:          W.POLLOCK

------------------------------

Date: 31 May 88 23:22:27 GMT
From: ncar!noao!amethyst!kww@ames.arpa  (K Watkins)
Subject: Language-related capabilities (was Re: Human-human
         communication)

In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>Name one thing that isn't expressible with language! :-)

>A dog might "know" something and not be able to describe it, but this is
>a shortcoming of the dog.  Humans have languages, natural and artificial,
>that let us manipulate and transmit knowledge.
>
>Does somebody out there want to discuss the difference between the dog's
>way of knowing (no language) and the human's way of knowing (using language)?

A dog's way of knowing leaves no room that I can see for distinguishing
between the model of reality that the dog contemplates and the reality
itself.  A human's way of knowing--once the human is a competent user of
language--definitely allows this distinction, thus enabling lies, fiction,
deliberate invention, and a host of other useful and hampering results of
recognized possible disjunction between the model and the reality.

One aspect of this, probably one of the most important, is that it makes it
easy to recognize that in any given situation there is much unknown but
possibly relevant data...and to cope with that recognition without freaking
out.

It is also possible to use language to _refer_ to things which language cannot
adequately describe, since language users are aware of reality beyond the
linguistic model.  Some would say (pursue this in talk.philosophy, if at all)
language cannot adequately describe _anything_; but in more ordinary terms, it
is fairly common to hold the opinion that certain emotional states cannot be
adequately described in language...whence the common nonlinguistic
"expression" of those states, as through a right hook or a tender kiss.

Question:  Is the difficulty of accurate linguistic expression of emotion at
all related to the idea that emotional beings and computers/computer programs
are mutually exclusive categories?

If so, why does the possibility of sensory input to computers make so much
more sense to the AI community than the possibility of emotional output?  Or
does that community see little value in such output?  In any case, I don't see
much evidence that anyone is trying to make it more possible.  Why not?

------------------------------

Date: 31 May 88 23:51:27 GMT
From: ncar!noao!amethyst!kww@ames.arpa  (K Watkins)
Subject: Re: Aah, but not in the fire brigade, jazz ensembles, rowing
         eights,...

In article <239@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>Articulate as much as you can.  It's true we learn by doing, but we need to
>be told what to do in case it's not obvious (eating is obvious).
>
Life is too short; in the case of a sufficiently aware articulator, both
articulator and audience would die of old age before the articulator explained
_everything_ s/he could about how to write the letter A.

I am not being facetious here;  I agree with the desirability of making
valuable information explicit.  But I believe that the question of which
information is valuable is a complex one.  It may seem simple at first;
but in many cases it is hard for the articulator to tell which behaviors are
relevant even to his/her own performance, let alone the as-yet hypothetical
performance of the audience.  And the assumption that one thing is obvious but
another is not is the source of much (most?) disgruntled contempt between
teachers and pupils.  For instance, it is not even obvious to me what you mean
by saying "eating is obvious."  Is _how_ to eat obvious? to whom? is what or
when or why to eat obvious?  Are the currently much-famed eating disorders
(anorexia, bulimia, etc.) instances of persons sufficiently defective (?) as
to be oblivious to the obvious?

Note:  This subject fascinates me in part because I am often accused of
articulating far more than "necessary"...so (obviously?) my sense of what is
obvious could use some work.  Part of this issue lies in the fact that, when
I articulate more than "necessary," I tend to lose my audience, and that
audience loses whatever "necessary" information I was going to impart further
down the line.

After all, this message is more than a screen long; how many people who read
the first screen are still reading? :-) What have those who quit before this
point lost that they would have valued?  And what, in my discussion, has been
"unnecessary articulation of the obvious" whose omission would have improved
the sum effect of my communication?

------------------------------

Date: 1 Jun 88 06:17:02 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Language-related capabilities (was Re: Human-human
         communication)

In article <700@amethyst.ma.arizona.edu>, K Watkins writes:
> If so, why does the possibility of sensory input to computers make so much
> more sense to the AI community than the possibility of emotional output?  Or
> does that community see little value in such output?  In any case, I don't see
> much evidence that anyone is trying to make it more possible.  Why not?

Aaron Sloman had a paper "You don't need a soft skin to have a warm heart".
I don't know whether that has been followed up.

------------------------------

Date: Wed, 1 Jun 88 12:44:34 MDT
From: silbar%mpx1@LANL.GOV (Dick Silbar)
Subject: Unadulterated Behavior

In AIList V7, #9, Warren Taylor writes a beautiful sentence that I would
like to quote again, albeit out of context:

"You only need to observe a baby for a short while to see a very nearly
unadulterated human behavior."

Quite possibly, even fully unadulterated.

          Dick Silbar

------------------------------

Date: 1 Jun 88 14:00:46 GMT
From: pitstop!sundc!netxcom!sdutcher@sun.com  (Sylvia Dutcher)
Subject: Re: Human-human communication

In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>
>Name one thing that isn't expressible with language! :-)

Look out your window and describe the view to someone who's been blind
since birth.

Describe a complex mathematical formula, without writing it down.

Describe the unusual mannerisms of a friend, without demonstrating them.

When you get in a heated discussion, do you gesture with your hands and
body?

We can express just about anything with language, but is the listener
receiving exactly what we are sending?  Even the same word, with the
same definition, can mean different things to different people, or in
different contexts.


>Tom Holroyd
>UUCP: {uunet,codas}!novavax!proxftl!tomh
>
>The white knight is talking backwards.


--
Sylvia Dutcher                          *  "We cannot accurately describe
NetExpress Communications, Inc.         *  the world, we can only describe
1953 Gallows Rd.                        *  a view of it."
Vienna, Va. 22180                       *         David Hockney

------------------------------

Date: 1 Jun 88 20:44:45 GMT
From: frabjous!nau@mimsy.umd.edu  (Dana Nau)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <1073@usfvax2.EDU> Wayne Pollock writes:
>On the other hand, set theory, which underlies much of current theory, is
>also based on fallacies (given the basic premises of set theory one can
>easily derive their negation).

Not so.  Where in the world did you get this idea?  Admittedly, _naive_ set
theory leads to Russell's paradox--but this was the reason for the
development of axiomatic set theories such as Zermelo-Fraenkel set theory
(ZF).  The consistency of ZF is unproved--but this is a natural consequence
of Goedel's incompleteness theorem, and is much different from your
contention that set theory is inconsistent.  I suggest you read, for
example, Shoenfield's _Mathematical_Logic_ (Addison-Wesley, 1967), or
Rogers's _Theory_of_Recursive_Functions_and_Effective_Computability_
(McGraw-Hill, 1967).
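
For readers who have not met it, the paradox fits on one line; with naive,
unrestricted comprehension one may form

    % Russell's paradox in naive set theory
    \[
       R = \{\, x \mid x \notin x \,\}
       \qquad\Longrightarrow\qquad
       R \in R \iff R \notin R .
    \]

ZF's restricted comprehension (the separation axiom) simply refuses to let
R be formed as a set, which is how the paradox is avoided.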

Dana S. Nau                             ARPA & CSNet:  nau@mimsy.umd.edu
Computer Sci. Dept., U. of Maryland     UUCP:  ...!{allegra,uunet}!mimsy!nau
College Park, MD 20742                  Telephone:  (301) 454-7932

------------------------------

End of AIList Digest
********************

∂03-Jun-88  2225	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #17  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Jun 88  22:25:28 PDT
Date: Sat  4 Jun 1988 00:58-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #17
To: AIList@AI.AI.MIT.EDU


AIList Digest            Saturday, 4 Jun 1988      Volume 7 : Issue 17

Today's Topics:

 Philosophy -
  Even Yet More Free Will ...
  Who else isn't a science?
  grounding of thought in emotion
  Souls (in the machine and at large)
  Objective standards for agreement on theories
  punishment metaphor

----------------------------------------------------------------------

Date: 1 Jun 88 15:36:13 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Constructive Question

What's the difference between Cognitive Science and AI?  Will the
recent interdisciplinary shift, typified by the recent PDP work, be
the end of AI as we knew it?

What IS in a name?
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 1 Jun 88 14:30:38 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Me and Karl Kluge (no flames, no insults, no abuse)

In article <1792@pt.cs.cmu.edu> kck@g.gp.cs.cmu.edu (Karl Kluge) writes:
>>  Because so little of our effective knowledge is formalised, we learn
>>  in social contexts, not from books.  I presume AI is full of relative
>>  loners who have learnt more of what they publicly interact with from
>>  books rather than from people.
>
>You presume an awful lot. Comments like that show the intellectual level
>of your critique of AI.
I also presume that comparative attitudes to book and social knowledge
are a measurable, and probably fairly stable, feature of someone's
make-up.  It would be intriguing to test the hypothesis that AI
researchers place more faith in the ability of text (including
programs) to capture social reality than do other academic groups.  Now,
does this still have no intellectual respectability?
>
>Well, what kind of AI research are you looking to judge? If you're looking
>at something like SOAR or ACT*, which claim to be computational models of
>human intelligence, then comparisons of the performance of the architecture
>with data on human performance in given task domains can be (and are) made.
You obviously have missed my comments about work by John Anderson and
other psychological research.  If AI were all conducted this way,
there would be less to object to.

>If you are looking at research which attempts to perform tasks we usually
>think of as requiring "intelligence", such as image understanding, without
>claiming to be a model of human performance of the task, then one can ask
>to what extent does the work capture the underlying structure of the task?
>how does the approach scale? how robust is it? and any of a number of other
>questions.
OK then.  Point to an AI textbook that covers Task Analysis?  Point to
work other than SOAR and ACT* where the Task Domain has been formally
studied before the computer implementation?  My objection to much work
in AI is that there has been NO proper study of the tasks which the
program attempts to simulate.  Vision research generally has very good
psychophysical underpinnings, and I accept that my criticisms do not
apply to this area either.  To supply one example, note how the
research on how experts explain came AFTER the dismal failure of rule
traces in expert systems to be accepted as explanation.  See Alison
Kidd's work on the unwarranted assumptions behind much (early?) expert
systems work.  One reason I did not pursue a PhD in AI was that one
potential supervisor told me that I didn't have to do any empirical
work before designing a system; indeed, I was strongly encouraged NOT
to do any empirical studies first.  I couldn't believe my ears.  How
the hell can you model what you've never studied?  Fiction.

>Mr. Cockton, it is more than a little arrogant to assume that anyone who
>disagrees with you is some sort of unread, unwashed social misfit
When did I mention hygiene?  On "unread", this is a trivial charge to
prove, just read through the references in AAAI and IJCAI.  AI
researchers are not reading what educational researchers are reading,
something which I can't understand, as they are both studying the same
thing.  Finally, anyone who is programming a lot of the time cannot be
studying people as much as someone who never programs.

I never said anything about being a misfit.  Modern societies are too
diverse for the word to be used without qualification.  Being part of a
subculture, like science or academia, is only a problem when it prevents
comfortable interaction with people from different subcultures.  Part
of the subculture of AI is that the intellectual tools of maths and
physics transfer to the study of humans.  Part of the subculture of
human disciplines is that they do not.  I would be a misfit in AI, AI
types could be misfits in a human discipline.  I've certainly seen a
lot of misanthropy and "we're building a better humanity" in recent
postings.  Along with last year's debate over "flawed" minds, it's
clear that many posters to this group believe they can do a better job
than whatever made us. But what is it exactly that an AI is going to be
better than?  No image of man, no superior AI.  I wonder if that's why some
AI people have to run humanity down.  It improves the chance of ELIZA
being better than us.

The point I have been making repeatedly is that you cannot study human
intelligence without studying humans.  Apart from John Anderson and his
paradigm partners, and from Vision, there is a lot of AI research which has
never been near a human being.  Once again, what the hell can a computer
program tell us about ourselves?  Secondly, what can it tell us that we
couldn't find out by studying people instead?
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 1 Jun 88 19:34:02 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Human-human communication

Tom Holroyd brings us around to the inevitable ineffable dilemma.
How can we talk about that which cannot be encoded in language?

I suppose we will have to invent language to do the job.

I presently possess some procedural knowledge which I am unable
to transmit symbolically.  I can transmit the <name> of the knowledge,
but I cannot transmit the knowledge itself.

I know how to walk, how to balance on a bicycle, and how to reduce
my pulse.  But I can't readily transmit that knowledge in English.
In fact, I don't even know how I know these things.

I suspect I could teach them to others, but not by verbal lecture.
The dog has similar kinds of knowledge.  Maybe our friends in robotics
can assure us that there is a code for doing backward somersaults,
but such language is not commonly exchanged over E-mail channels.

--Barry Kort

------------------------------

Date: 3 Jun 88 06:58:13 GMT
From: pasteur!agate!garnet!weemba@ames.arpa  (Obnoxious Math Grad
      Student)
Subject: Who else isn't a science?

In article <3c671fbe.44e6@apollo.uucp>, nelson_p@apollo writes:

>>fail to see how this sort of intellectual background can ever be
>>regarded as adequate for the study of human reasoning.  On what
>>grounds does AI ignore so many intellectual traditions?

>  Because AI would like to make some progress (for a change!).  I
>  originally majored in psychology.  With the exception of some areas
>  in physiological psychology, the field is not a science.  Its
>  models and definitions are simply not rigorous enough to be useful.

Your description of psychology reminds many people of AI, except
for the fact that AI's models end up being useful for many things
having nothing to do with the motivating application.

Gerald Edelman, for example, has compared AI with Aristotelian
dentistry: lots of theorizing, but no attempt to actually compare
models with the real world.  AI grabs onto the neural net paradigm,
say, and then never bothers to check if what is done with neural
nets has anything to do with actual brains.

ucbvax!garnet!weemba    Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: 3 Jun 88 08:17:00 EDT
From: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Subject: Free will - a few quick questions for McDermott:


Drew McDermott writes:

> Freedom gets its toehold from the fact that it is impossible for an
> agent to think of itself in terms of causality.

I think the strength of "impossible" here is that the agent would get
itself into an infinite regress if it tried, or some such.

In any event, isn't the question: Is the agent's "decision" whether
or not to make the futile attempt to model itself in terms of causality
caused or not?  I assume McDermott believes it is caused, no?
(unless it is made to depend on some QM process...).

If this decision not to get pointlessly self-referential is caused,
then why call any of this "free will"?  I take McDermott's earlier point
that science always involves a technical re-definition of terms which
originated in a less austere context, eg "force", "energy", "light",
etc.

But it does seem to me that the thrust of the term "free will" has always
been along the lines of an *uncoerced* decision.  We've learned more
throughout the years about the subtle forms such coercion might take,
and some are willing to allow some kinds of coercion (perhaps "internal")
and not others.  (I think coercion from any source means unfreedom).

But McDermott's partially self-referential robot is clearly determined
right from the start (or could be).  What possible reason is there
for attributing to it, even metaphorically, ***free*** will?
Why should (pragmatically or logically ?) necessary partial ignorance
of one's own place in the causal nexus be called **freedom**?
Why not call it "Impossibility of Total Self-Prediction" ?

John Cugini  <Cugini@ecf.icst.nbs.gov>

------------------------------

Date: Fri, 3 Jun 88 11:37:41 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: grounding of thought in emotion

DS> AIList 7.12
DS> From: dan@ads.com (Dan Shapiro)
DS> Subject: Re: [DanPrice@HIS-PHOENIX-MULTICS.ARPA: Sociology vs Science
DS>          Debate]

DS> I'd like to take the suggestion that "bigger decisions are made less
DS> rationally" one step further...  I propose that
DS> irrationality/bias/emotion (pick your term) are *necessary*
DS> corollaries of intelligence . . .

There have been indications in recent years that feelings are the
organizers of the mind and personality, that thoughts and memories are
coded in and arise from subtle feeling-tones.  Feelings are the vehicle,
thoughts are passengers.  Physiologically, this has to do with links
between the limbic system and the cortical system.  References:  Gray &
LaViolette in _Man-Environment Systems_ 9.1:3-14, 15-47; _Brain/Mind
Bulletin_ 7.6, 7.7 (1982); S. Sommers (of UMass Boston) in _Journal of
Personality and Social Psychology_ 41.3:553-561; perhaps Manfred Clynes'
stuff on "sentics".

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 3 Jun 88 06:53:00 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: Re: Constructive Question

In article <1313@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> What's the difference between Cognitive Science and AI?  Will the
> recent interdisciplinary shift, typified by the recent PDP work, be
> the end of AI as we knew it?
>
> What IS in a name?

To answer the second question first, what's in a name is "history".

I do not expect much in the way of agreement with this, but for me
- Cognitive Science is the discipline which attempts to understand
  and (ideally) model actual human individual & small-group behaviour.
  People who work in this area maintain strong links with education,
  psychology, philosophy, and even AI.  Someone who works in this
  area is likely to conduct psychological experiments with ordinary
  human subjects.  It is a science.

- AI is the discipline which attempts to make "intelligent" artefacts,
  such as robots, theorem provers, IKBSs, & so on.  The primary goal
  is to find *any* way of doing things, whether that's the way humans
  do it or not is not particularly interesting.  Machine learning is a
  part of AI:  a particular technique may be interesting even if humans
  *couldn't* use it.  (And logic continues to be interesting even though
  humans normally don't follow it.)  Someone trying to produce an IKBS may
  obtain and study protocols from human experts; in part it is a matter
  of how well the domain is already formalised.
  AI is a creative art, like Mathematics.

- The "neural nets" idea can be paraphrased as "it doesn't matter if you
  don't know how your program works, so long as it's parallel."

If I may offer a constructive question of my own:  how does socialisation
differ from other sorts of learning?  What is so terrible about learning
cultural knowledge from a book?  (Books are, after all, the only contact
we have with the dead.)

------------------------------

Date: Fri, 03 Jun 88 13:13:15 EDT
From: Thanasis Kehagias <ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU>
Subject: Souls (in the machine and at large)


Manuel Alfonseca, in reply to my Death posting, points out that
creation of AI will not settle the Free Will/Existence of Soul argument,
because it is a debate about axioms, and finally says:

>but an axiom. It is much more difficult to convince people to
>change their axioms than to accept a fact.

true... in fact, people often retain their axioms even in the face of
seemingly contradicting facts.... but what i wanted to suggest is that
many people object to the idea of AI because they feel threatened by the
possibility that life is something completely under human control.
having to accommodate the obvious fact that humans can destroy human
life, they postulate (:axiom) a soul, an afterlife for the soul, and that
this belongs to a spiritual realm and cannot be destroyed by humans.
this is a negative approach, if you ask me.. i am much more attracted to
the idea that there is a soul in Natural Intelligence, there will be a
soul in the AI(if and when it is created) and it (the soul) will be
created by the humans.

Manuel is absolutely right in pointing that even if AI is created
the controversy will go on. the same phenomenon has occurred in many
other situations where science infringed on religious/metaphysical
dogma. to name a few instances: the Geocentric-Heliocentric theories,
Darwin's theory of Evolution (the debate actually goes back to Lamarck,
Cuvier et al.) and the Inorganic/Organic Chemistry debate. notice that
their chronological order more or less agrees with the shift from the
material to the mental (dare we say spiritual?). anyway, IF (and it is a
big IF) AI is ever created, certainly nothing will be resolved about the
Human Condition. but, i think, it is useful to put this AI debate in
historical perspective, and recognize it as just another phase in the
process of the growth of science.


              OF COURSE this is just an interpretation
              OF COURSE this is not Science
              OF COURSE Science is just an interpretation




                         Thanasis Kehagias

------------------------------

Date: 2 Jun 88 16:10:13 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Language-related capabilities (was Re: Human-human
         communication)

In his rejoinder to Tom Holroyd's posting, K. Watkins writes:

>Question:  Is the difficulty of accurate linguistic expression of emotion at
>all related to the idea that emotional beings and computers/computer programs
>are mutually exclusive categories?
>
>If so, why does the possibility of sensory input to computers make so much
>more sense to the AI community than the possibility of emotional output?  Or
>does that community see little value in such output?  In any case, I don't see
>much evidence that anyone is trying to make it more possible.  Why not?

These are interesting questions, and I hope we can mine some gold along
this vein.

I don't think that it is an accident that emotional states are difficult
to capture in conventional language.  My emotions run high when I find
myself in a situation where words fail me.  If I can name my emotional
state, I can avoid the necessity of acting it out nonverbally.  Trouble
is, I don't know the names of all possible emotional states, least of
all the ones I have not visited before.

Nevertheless, I think it is useful for computer programs to express
emotions.  A diagnostic message is a form of emotional expression.
The computer is saying, "Something's wrong.  I'm stuck and I don't
know what to do."  And sure enough, the computer doesn't do what
you had in mind.  (By the way, my favorite diagnostic message is
the one that says, "Your program bombed and I'm not telling you
why.  It's your problem, not mine.")

So, as I see it, there is a possibility of emotional output.  It is
the behavior exhibited under abnormal circumstances.  It is what the
computer does when it doesn't know what to do or how to do what you asked.

--Barry Kort

------------------------------

Date: Fri 3 Jun 88 13:06:50-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Objective standards for agreement on theories


Those concerned  about whether theories are given by reality or are social
constructions might look at Clark Glymour's book ``Theory and Evidence''.
It is the basis for his later book, mentioned already on the AIList,
that gives software that determines the ``best theory'' for a set of data.
Glymour states that his intention has been to balance some of the completely
relativist positions that arose with Quine (who said: ``to be is to be
denoted'').  His theoretical work has many case studies from Copernicus
to Freud that apply his algorithm to show that it picks the ``winning''
theory.

Conrad Bock

------------------------------

Date: Fri, 3 Jun 88 16:15:25 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: denial presupposes free will

DH> AIList Digest 7.13
DH> From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov  (David
DH>       Harvey)
DH> Subject: Re: More Free Will

DH> While it is rather disturbing (to me at least) that we may not be
DH> responsible for our choices, it is even more disturbing that by our
DH> choices we are destroying the world.  For heaven's sake, Reagan and
DH> friends for years banned a Canadian film on Acid Rain because it was
DH> political propaganda.  Never mind the fact that we are denuding forests
DH> at an alarming rate.

You ought to read Gregory Bateson on the inherently adverse effects of
human purposive behavior.  He develops the theme in several of the
papers and lectures reprinted in _Steps to an Ecology of Mind_,
especially in the last section on social and ecological issues.

DH> . . . if we with our free will (you said it,
DH> not me) aren't doing such a great job it is time to consider other
DH> courses of action.  By considering them, we are NOT adopting them as
DH> some religious dogma, but intelligently using them to see what will
DH> happen.

Awfully hard to deny the existence of free will without using language
that presupposes its existence.  Consider your use of "consider,"
"adopt," "intelligently using," and "see what will happen."  This sounds
like purposive behavior, aka free will.  If you can find a way to make
these claims without presupposing what you're denying, you'll be on
better footing.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Fri, 3 Jun 88 16:17:14 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: punishment metaphor

In article <1209@cadre.dsl.PITTSBURGH.EDU> Gordon E. Banks writes:
>>Are there any serious examples of re-programming systems, i.e. a system
>>that redesigns itself in response to punishment.
>
>Certainly!  The back-propagation connectionist systems are "punished"
>for giving an incorrect response to their input by having the weight
>strengths leading to the wrong answer decreased.

This is an error of logical typing.  It may be that punishment results
in something analogous to reduction of weightings in a living organism.
Supposing that hypothesis to have been proven to everyone's
satisfaction, direct manipulation of such analogs to connectionist
weightings (could they be found and manipulated) would not in itself
be punishment.

An analog to punishment would be if the machine reduces its own
weightings (or reduces some and increases others) in response to being
kicked, or to being demeaned, or to being caught making an error
("Bad hypercube!  BAD!  No user interactions for you for the rest of the
morning!").

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Fri, 3 Jun 88 16:18:57 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: McDermott model of free will

DM> Date: 30 May 88 16:40:28 GMT
DM> From: dvm@yale-zoo.arpa  (Drew Mcdermott)
DM> Subject: Free will

DM> More on the self-modeling theory of free will:
DM> . . .
DM> What's pointless is trying to simulate the present period
DM> of time.  Is an argument needed here?  Draw a mental picture: The robot
DM> starts to simulate, and finds itself simulating ...  the start of a
DM> simulation.  What on earth could it mean for a system to figure out
DM> what it's doing by simulating itself?

Introspect about the process of riding a bicycle and you shortly fall
over.  Model for yourself the process of speaking and you are shortly
tongue-tied.  It is possible to simulate what one just was doing, but
only by leaving off the doing for the simulation, resuming the doing,
resuming the simulation, and so on.

What might be proposed is a parallel ("shadow mode") simulation, but
it's always going to be a jog out of step, not much help in real time.

What might be proposed is an ongoing modelling of what is >supposed< to
be going on at present.  Behavior then is governed by the model unless
and until interaction in the environment that contradicts the model
exceeds some threshold (another layer of modelling), whereupon an
alternative model is substituted, or the best-fit model is modified
(more work), or the agent deals with the environment directly (a lot of
work indeed).

A great deal of human culture (social construction of reality) may have
the function of introducing and supporting sufficient redundancy to
enable this.  Social conventions have their uses.  We support one
another in a set of simplifications that we can agree upon and that the
world lets us get away with.  (Often there are damaging ecological
consequences.)  We make our environment more routine, more like that of
the robot in Drew McDermott's last paragraph ("Free will is not due to
ignorance.")

It's as if free will must be budgeted:  if everything is a matter for
decision nothing can happen.  The bumbler is I suppose the pathological
extension in that direction, the introspective bicyclist in all things.
For the opposite pathology (the appeal of totalitarianism), see Erich
Fromm, _Escape from Freedom_.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 3 Jun 88 16:18 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Free Will

Drew McDermott has written a lucidly convincing account of an AI approach to
what could be meant by `free will'.  Now, can we please move the rest of this
stuff - in particular, anything which brings in such topics as: a decent world
to live in, Hitler and Stalin, spanking, an omniscient god [sic], ethics,
Hoyle's "Black Cloud", sin, and laws, and purpose, and the rest of what
Vedanta/Buddhists would call the "Illusion", Dualism or the soul - to somewhere
else; maybe talk.philosophy, but anywhere except here.

Pat Hayes

------------------------------

End of AIList Digest
********************

∂04-Jun-88  0152	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #18  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 4 Jun 88  01:52:35 PDT
Date: Sat  4 Jun 1988 01:06-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #18
To: AIList@AI.AI.MIT.EDU


AIList Digest            Saturday, 4 Jun 1988      Volume 7 : Issue 18

Today's Topics:

  Symbolics stock
  randomness
  Fredkin chess tournament results & comment

 Queries -
  AI in weather forecasting
  frame based languages
  parallel inference 
  Response to: connectionist medical expert systems

----------------------------------------------------------------------

Date: 2 Jun 88 21:19:16 GMT
From: heldeib@gmu90x.gmu.edu  (heldeib)
Subject: Re: Symbolics stock


I read in a recent article that many of the AI companies -- Symbolics,
of course, LMI, Xerox, and various software-oriented companies -- were
losing money.  I can't find the article at the moment, but I recall that
Symbolics lost a great deal of money and that TI was the only Lisp machine
producer doing semi-decently financially.  I'll try to find the article
and post the statistics, but I guess this explains the decline in Symbolics
stock.  I wonder how the others are doing!

If anyone saw that article, please post it!  I can't remember the
source, but it was either an IEEE magazine, AI-Expert, or perhaps
Digital Review!


Hany K. Eldeib
Department of Electrical
and Computer Engineering
George Mason University
Fairfax, VA 22030

UUCP:    uunet!pyrdc!gmu90x!heldeib
Bitnet:  heldeib@gmuvax
Internet:  heldeib@gmuvax.gmu.edu

------------------------------

Date: Fri, 3 Jun 88 10:15:30 EDT
From: csrobe@icase.arpa (Charles S. Roberson)
Subject: Re: randomness

In AIList Digest, Thursday, 2 Jun 1988, V7 #15, Antti Ylikoski writes:

>From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
>
>In AIList Digest   V7 #4, Barry Kort writes:
>
>>If I wanted to give my von Neumann machine a *true* random number
>>generator, I would connect it to an A/D converter driven by thermal
>>noise (i.e. a toasty resistor).
>
>I recall that a Zener diode is a good source of noise (but cannot remember
>the spectrum it gives).
>
>It could be a good idea to utilize a Zener / A-D converter random number
>generator in Monte Carlo simulations.

First off, I know nothing about thermal noise or Zener diodes; however, I
think I know a little about random number generation ("A little knowledge
*is* a dangerous thing").

Second, when one wants to generate random numbers, one generally wishes to
draw from a known distribution (Uniform, Normal, Poisson, Exponential,
etc.).  This means one either needs a different generator for each
distribution or a single generator that can be used to build the other
distributions.  (E.g., a Linear Congruential (Lehmer) generator is
a good choice since it generates Uniform(0.0,1.0) quite easily.)

Third, how random would a Zener / A-D (ZAD) random number generator be?
There are certain characteristics that are considered to be necessary for
*true* randomness:

        Is the generator full-period?  (i.e. does it generate all possible
        values before repeating a value?)

        How random does it appear to the eye? (a rather subjective test
        but it can be useful.)

        How reliable is the generator?

        What is the longest run that one can expect in a sequence of
        random numbers, and how many runs are there?  (E.g., 78 1 2 3 4 156
        contains a run of length 4.)

        How would it fare under Knuth's Spectral Test?  How well are
        the numbers distributed in n-space?

My guess is that using a Zener / A-D converter as a random number source
will prove to be a biased system in which certain numbers are much more
likely than others.  It may very well be the case that the ZAD generator
generates a very nice gauss(a,b) or some other distribution, but how
confident are you of it working correctly?  On close inspection one is
likely to find that a ZAD generator would be as bad as most other hardware
dependent (bit and clock watching, etc.) methods.
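
One standard remedy, if the raw bits are at least independent, is von
Neumann's corrector: read bits in pairs, emit 0 for a 01 pair, 1 for a 10
pair, and discard 00 and 11 pairs.  A sketch in C, where raw_bit() is a
stand-in for whatever reads one bit from the noise hardware:

    /* Von Neumann corrector: turns independent-but-biased bits into
       unbiased ones, discarding a variable number of raw bits. */

    extern int raw_bit(void);       /* returns 0 or 1, possibly biased */

    int unbiased_bit(void)
    {
        int a, b;

        for (;;) {
            a = raw_bit();
            b = raw_bit();
            if (a != b)
                return a;           /* P(01) = P(10) for independent bits */
        }
    }

This removes bias but not correlation, and it says nothing about the
distributional questions listed above.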

There is a paper coming out in one of the next few issues of Communications
of the ACM by S. K. Park and K. Miller that has a very thorough discussion
of random number generation (including history, red herrings (IBM's RANDU),
and a proposed minimal standard).  For anyone who wants to give their von
Neumann machine a (portable) "*true* random number generator", I think they
should read this article.  I also think anyone who reads it will be amazed
at the number of textbook authors who are still advocating substandard or
just plain *bad* generators!  (Some of those books are AI books!)

I did not see Barry Kort's original posting, so I don't know the context
of his message.  However, I don't think you can do significantly better
than a Lehmer generator for a von Neumann machine -- it's elegant in its
mathematical simplicity, it has been tested over and over (it has years
of mathematics behind it), and it performs quite well on Knuth's
Spectral Test.
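
For reference, the generator Park and Miller advocate as a "minimal
standard" is a Lehmer generator with multiplier 16807 and modulus 2^31 - 1.
A minimal sketch of it in C, using Schrage's decomposition so that no
intermediate value overflows a 32-bit long; function and variable names
here are illustrative, not theirs:

    /* Lehmer / linear congruential generator:
       seed := (16807 * seed) mod (2^31 - 1), scaled into (0,1) on return. */

    #define MODULUS    2147483647L              /* 2^31 - 1, a prime      */
    #define MULTIPLIER 16807L                   /* 7^5, a primitive root  */

    static long seed = 1;                       /* any value 1..MODULUS-1 */

    double lehmer_random(void)
    {
        const long Q = MODULUS / MULTIPLIER;    /* 127773 */
        const long R = MODULUS % MULTIPLIER;    /*   2836 */
        long t = MULTIPLIER * (seed % Q) - R * (seed / Q);

        seed = (t > 0) ? t : t + MODULUS;
        return (double) seed / MODULUS;
    }

Seed it once -- from the clock, or even from the Zener hardware -- and draw
everything else from the recurrence; the stream is then reproducible, which
is exactly what a simulation that may need debugging wants.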

I apologize for waxing on about random number generators, but valid
research can only be done with validated tools, and a bad random number
generator can destroy any simulation.  Know your tools.

-chip
+-------------------------------------------------------------------------+
|Charles S. Roberson          ARPANET:  csrobe@icase.arpa                 |
|ICASE, MS 132C               BITNET:   $csrobe@wmmvs.bitnet              |
|NASA Langley Rsch. Ctr.      UUCP:     ...!uunet!pyrdc!gmu90x!wmcs!csrobe|
|Hampton, VA  23665-5225      Phone:    (804) 865-4090                    |
+-------------------------------------------------------------------------+

------------------------------

Date: 3 Jun 88 18:03:02 GMT
From: sun1.uucp!cracraft@jpl-elroy.arpa  (Stuart Cracraft)
Subject: Fredkin chess tournament results & comment

Article 1178 of rec.games.chess:
From: fhh@unh.cs.cmu.edu (Feng-Hsiung Hsu)
Newsgroups: rec.games.chess
Subject: Computers in Fredkin Masters Open
Keywords: Chess, Computers
Message-ID: <1828@pt.cs.cmu.edu>
Date: 2 Jun 88 15:50:07 GMT
Sender: netnews@pt.cs.cmu.edu
Organization: Carnegie-Mellon University, CS/RI
Lines: 61

In each of the last few years, the Fredkin Foundation has sponsored a chess
event designed to promote computer chess research.  Traditionally, the
reigning ACM North American Computer Chess Champion is invited along with
possibly some of the stronger programs at the time.  This year, the Fredkin
Masters Open was held from May 28 to May 30 on the CMU campus.  About 30
masters participated.  The computer opposition included ChipTest, the
reigning ACM Champion; Deep Thought 0.01 (0.01 stands for single processor,
successor to ChipTest); Hitech (1985 ACM Champion); and BP (a Compaq 386).
Phoenix (number 3 finisher in the ACM event) and Lachex (number 4 finisher)
did not participate because of problems in obtaining computing time.

Alexander Ivanov (FIDE 2415), a recent Soviet emigre, won the event by scoring
5 out of 6 and received the $1,200 first prize.  Deep Thought tied for 2nd
with 2 masters, scoring 4.5 out of 6.  ChipTest tied for 5th scoring 4 out
of 6.  Hitech scored 3.5 out of 6.  BP scored 3 out of 6.

Based on the 3-month old ratings of the opponents, both Deep Thought and
ChipTest should obtain provisional ratings above 2500.  Deep Thought beat
a 2339, drew a 2292, won against a 2299 and a 2389, lost to Ivanov, and
won against Vivek Rao (2491, among top 60 in the US, top in Pennsylvania),
receiving a provisional rating around 2570.  ChipTest beat a 2354, drew a
2299, beat a 2421, drew a 2345 (?), beat Rao (2491) and lost to a 2321,
receiving a provisional rating around 2501.  Hitech had a rough outing,
lost to a 2201 in the first round, beat and drew a few masters in the
2200 to 2360 range, and received a performance rating around 2312.  BP
did quite well for a micro.  It finished with a respectable performance
rating around 2189.  If our calculation is correct, ChipTest should receive
$100 for its performance.   Along with the $2000 it won in the ACM, ChipTest
has more than paid for its est. $500 cost (actually we never paid for the
parts--they were leftovers from other projects).

Both Deep Thought and ChipTest are definitely overrated at this moment.
Vivek Rao, who lost to both programs, was probably overconfident.  Before
the game against ChipTest, he was openly expressing his contempt of chess
playing computers (he had numerous successful encounters with Hitech
earlier).  ChipTest forced Rao to resign in under 30 moves with an
unexpected sack.  Vowing to take revenge for the loss on Deep Thought, but
still expressing his contempt, he then proceeded to lose the last round game
after Deep Thought played an unexpected pawn push that sent him into
25 minutes of deep thinking.

If computer vs. computer rating does translate into computer vs. human
rating, ChipTest should at best be 50 points above Hitech, or roughly 70
points below its provisional rating.  We will probably never find out what
ChipTest's real rating should be--this is ChipTest's last tournament.

Both ChipTest and Deep Thought are authored by Thomas Anantharaman, Dr. Murray
Campbell and yours truly of the Computer Science Department in Carnegie
Mellon University.  Some of ChipTest source code (under 0.5%, mainly in
evaluation code) originated from Hitech, whose software development was
headed by Dr. Hans Berliner of CMU with hardware designed by Dr. Carl Ebeling
while he was at CMU.  Deep Thought has had its code completely rewritten, and
does not contain any code from Hitech.  Dr. Murray Campbell also worked on
the Hitech project in association with Dr. Hans Berliner.  Both Thomas and
I are still graduate students (associated with the Speech group and the
VLSI group, respectively).

Deep Thought was still being wire-wrapped 2 days prior to the event.  One
point for the flakey hardware.

I will post some of the games if there is interest.

Article 1180 of rec.games.chess:
Path: elroy!sun1!cracraft
From: cracraft@sun1.uucp (Stuart Cracraft)
Newsgroups: rec.games.chess
Subject: Re: Computers in Fredkin Masters Open
Keywords: Chess, Computers
Message-ID: <6925@elroy.Jpl.Nasa.Gov>
Date: 2 Jun 88 23:47:09 GMT
References: <1828@pt.cs.cmu.edu>
Sender: news@elroy.Jpl.Nasa.Gov
Reply-To: cracraft@sun1.UUCP (Stuart Cracraft)
Organization: Jet Propulsion Laboratory
Lines: 21

Larry Kaufman has calculated the following ratings based on the Fredkin
results:

    Chiptest     2496  (by method of CRA rating formula)
                 2504  (by method of linear formula)
    DeepThought  2588  (by method of CRA rating formula)
                 2586  (by method of linear formula)

These values are based on a USCF-estimated-rating for Ivanov, the
strong emigre whose FIDE rating was mentioned as 2415.  USCF equivalent
for this would probably be 2415+95 = 2510.

Some future speculation:
   If DT's correct rating is USCF 2500, as is more likely -- with
   a good opening book and better endgame knowledge, a dual-processor
   version would probably be about USCF 2550.  A full-fledged 100-processor
   version would gain about 60-fold in speed, resulting in a USCF
   rating of about 2800, or FIDE 2700. So the program would come in
   just behind Kasparov and Karpov.

Stuart

------------------------------

Date: Fri, 3 Jun 88 10:51 EDT
From: Stephen G. Rowley <SGR@STONY-BROOK.SCRC.Symbolics.COM>
Subject: References re AI in weather forecasting?

    Date: 26 May 88 15:53:01 GMT
    From: aplcen!jhunix!apl_aimh@mimsy.umd.edu  (Marty Hall)

    Any pointers on where to look re AI in weather forecasting?  I have a couple
    from AI in Engineering Proceedings, but can't find any others.
         Thanks!
                                - Marty Hall
    --
    ARPA - hall@bravo.cs.jhu.edu [hopkins-eecs-bravo.arpa]
    UUCP   - ..seismo!umcp-cs!jhunix!apl_aimh | BITNET  - apl_aimh@jhunix.bitnet
    Artificial Intelligence Laboratory, MS 100/601,  AAI Corp, PO Box 126,
    Hunt Valley, MD  21030   (301) 683-6455

Well, I have 2 joke weather predictors for you:

[1] The 1-rule version: Tomorrow's weather will be like today's weather.
This is a pretty good predictor.  Of course, you quickly find out people
are interested in weather *changes*. :-)

[2] The 4-rule version (New England only -- weather here is dominated by
the ocean and cold Canadian air masses; we recently used this in a demo
videotape about Joshua):

  * if the barometer is below 1000mbar and falling faster than a certain rate,
    o and if the temperature is below 40
      . and if the wind is from the northwest, expect dry snow
      . else if the wind is from the northeast, expect wet snow
      . else if the wind is from the south, expect rain
    o else if the temperature is above 40, expect rain.

This one is (slightly) less of a joke.  I hooked it up to a Statice
database of weather information gathered from a weather station at MIT.
It usually manages to figure out it's raining about the time the
raindrops hit my window.  Now I get notifications on my lisp machine
when it thinks it's raining...
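
Rendered as code, the four rules above are just a nested conditional.  A
sketch in C with illustrative names; the "certain rate" is left unspecified
above, so the -0.5 mb/hour threshold below is only a placeholder:

    /* Toy New England forecaster: pressure in millibars, trend in mb/hour
       (negative means falling), temperature in degrees F, wind direction
       as a compass string such as "NW", "NE", or "S". */

    #include <string.h>

    const char *forecast(double pressure_mb, double trend_mb_per_hr,
                         double temp_f, const char *wind)
    {
        if (pressure_mb < 1000.0 && trend_mb_per_hr < -0.5) {
            if (temp_f < 40.0) {
                if (strcmp(wind, "NW") == 0) return "dry snow";
                if (strcmp(wind, "NE") == 0) return "wet snow";
                if (strcmp(wind, "S")  == 0) return "rain";
                return "no prediction";
            }
            return "rain";          /* below 1000 mb, falling, above 40 F */
        }
        return "no prediction";
    }

The real thing, of course, talks to Joshua and Statice rather than sitting
in a C function.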

------------------------------

Date: 29 May 88 13:22:48 GMT
From: bpa!temvax!pacsbb!rkaplan@burdvax.prc.unisys.com  (Randy Kaplan)
Subject: Query on FRAME BASED LANGUAGES


Frame-based languages - Does anyone know of any more recent implementations
of frame-based languages, like FRL, that are currently available?  I am
doing research in knowledge acquisition and would like to use such a
language as a representational platform.  One written in Common LISP
would be preferred.  I am not interested in commercial packages unless
they are both low cost and implement the notion of frames.  If you know of
languages like this, please let me know as soon as possible, as we are in
the midst of the research and would like to begin using the language as
soon as possible.

I can be contacted at, kaplan@vuvaxcom.bitnet.

Thanks.
Randy M. Kaplan
Villanova University

------------------------------

Date: Fri, 3 Jun 88 18:22:57 EDT
From: jesson@nrl-5570-gw (J.R. Jesson)
Subject: References on parallel Inference Needed


Hi!

I'm in the beginning stages of developing a parallel inference engine. I've
managed to gather some scattered references, but I haven't located many
recent papers on this subject.  I have Stanford, Texas and some CMU Tech
reports up to early 1987, and access to many conference proceedings.  If
you have seen or authored papers concerning parallel inference, search,
search heuristics, or the like, I would greatly appreciate hearing from you.
Please drop me mail at one of the addresses below. Thanks...

J.R. Jesson
Merit Technology, Inc.
5068 W. Plano Parkway
Plano, Texas 75075-5009
Voice: (214) 733-7092
UUCP : ...!ihnp4!killer!jesson
ARPA : jesson@nrl-excalibur.arpa

------------------------------

Date: 3 Jun 88 11:51:36 GMT
From: portal!cup.portal.com!Barry_A_Stevens@uunet.uu.net
Subject: connectionist medical expert systems

Try contacting Dr. James A. Anderson, Brown University, and ask about their
database on antibiotics and diseases.
Barry Stevens
President
Applied AI Systems
Del Mar CA
619-755-7231

------------------------------

End of AIList Digest
********************

∂05-Jun-88  2333	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #19  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 5 Jun 88  23:32:00 PDT
Date: Sun  5 Jun 1988 23:13-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #19
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 6 Jun 1988       Volume 7 : Issue 19

Today's Topics:

 The Future of the Free Will Discussion

 Philosophy:
  a good jazz ensemble
  Who else isn't a science?
  Bad AI: A Clarification

  randomness

----------------------------------------------------------------------

Date: Sat 4 Jun 88 21:44:01-PDT
From: Raymond E. Levitt <LEVITT@Score.Stanford.EDU>
Subject: Free Will

Raymond E. Levitt
Associate Professor
Center for Integrated Facility Engineering
Departments of Civil Engineering and Computer Science
Stanford University
==============================================================

Several colleagues and I would like to request that the free will debate -
which seems endless - be set up on a different list with one of the more
active contributors as a coordinator.

The value of the AILIST as a source of current AI research issues, conferences,
software queries and evaluations, etc., is diminished for us by having to
plough through the philosophical dialectic in issue after issue of the AILIST.

Perhaps you could run this message and take a poll of LIST readers to help
decide this in a democratic way.

Thanks for taking on the task of coordinating the AILIST.  It is a great
service to the community.

Ray Levitt
-------


   [Editor's Note:

        Thank you, Mr. Levitt, and many thanks to all those who have
   written expressing interest or comments regarding AIList.  I regret that
   I have not had time to respond to many of you individually, as I have
   lately been more concerned with the simple mechanics of generating
   digests and dealing with the average of sixty bounce messages per day
   than with the more substantive issues of moderation.

        However, a new COMSAT mail-delivery program is now orbiting, and
   we may perhaps be able to move away from the days of lost messages,
   week-long delays, and 50K digests ...  My heartfelt apologies to all.

        Being rather new at this job, I have hesitated to express my
   opinion with respect to the free-will debate, preferring to retain the
   status quo and hoping that the problem would fix itself.   But since Mr.
   Levitt is only the latest of several people who have complained about
   this particular issue, I feel I must take some action.

        Clearly this discussion is interesting and valuable to many of
   the participants, but equally clearly it is less so for many others.  I
   have tried as far as possible to group the free-will discussions in
   digests apart from other matters, so people uninterested in the topic
   could simply 'delete' the offending digests unread.  (There are many
   readers who only have access to the undigested stream and cannot do
   this.)

        Several people have suggested moving the discussion to a USENET
   list called 'talk.philosophy'.  The difficulty here is that AIList
   crosses USENET, INTERNET and BITNET, and not all readers would be able
   to contribute.  In V7#6, John McCarthy <JMC@SAIL.Stanford.EDU> said:

   > I am not sure that the discussion should progress further, but if
   > it does, I have a suggestion.  Some neutral referee, e.g. the moderator,
   > should nominate principal discussants.  Each principal discussant should
   > nominate issues and references.  The referee should prune the list
   > of issues and references to a size that the discussants are willing
   > to deal with.  They can accuse each other of ignorance if they
   > don't take into account the references, however perfunctorily.
   > Each discussant writes a general statement and a point-by-point
   > discussion of the issues at a length limited by the referee in
   > advance.  Maybe the total length should be 20,000 words,
   > although 60,000 would make a book.  After that's done we have another
   > free-for-all.  I suggest four as the number of principal discussants
   > and volunteer to be one, but I believe that up to eight could
   > be accommodated without making the whole thing too unwieldy.
   > The principal discussants might like help from their allies.
   >
   > The proposed topic is "AI and free will".

        I would be more than willing to coordinate this effort, but I
   have, as yet, received no responses expressing an opinion one way or the
   other.  I invite the readers of AIList who have found the free-will
   discussion interesting (as opposed to those who have not) to send me net
   mail at AILIST-REQUEST@AI.AI.MIT.EDU concerning the future of this
   discussion.  Please send me a separate message, and do not intersperse
   your comments with other contributions, whether to the free-will debate
   or other matters.

        In the meantime, I will continue to send out digests covering
   the free-will topic, although separate from other material.

                - nick  ]

------------------------------

Date: Sat, 4 Jun 88 14:21:06 EDT
From: George McKee <mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: Artificial Free Will -- what's it good for?

Obviously many people think that the question of whether or not
humans have free will is important to a lot of people, and
thinking about how it could be implemented in a computer program
is an effective way to clarify exactly what we're talking about.
I think McDermott's contributions show this -- they're
getting pretty close to pseudocode that you could think about
translating into executable programs.  (But just to put in my
historical two cents, I first saw this kind of analysis in a
proceedings of the Pontifical Academy of Sciences article by
D.M.MacKay in about 1968.)  If free will is programmable,
it's appropriate to then ask "why bother?", and "how will we
recognize success?", i.e. to make explicit the scientific
motivation for such a project, and the methodology used to
evaluate it.
        I can see two potential reasons to work on building
free will into a computer system: (1) formalizing free will into
a program will finally show us the structure of an aspect of the
human mind that's been confusing to philosophers and psychologists
for thousands of years.  (2) free-will-competent computer systems
will have some valuable abilities missing from systems without
free will.
        Reason 1 is unquestionably important to the cognitive
sciences, and insofar as AI programs are an essential tool to
cognitive scientists, *writing* a program that includes free will
as part of its structure might be a worthwhile project.  But
*executing* a program embodying free will won't necessarily show
us anything that we didn't know already.  Free will in its sense
as a consequence of the incompleteness of an individual's self-model
has an essentially personal character, that doesn't get out into
behavior except as verbal behavior in arguments about whether it
exists at all.  For instance, I haven't noticed in this discussion
any mention of how you recognize free will in anyone other than
yourself.  If you can't tell whether I have free will or not, how
will you recognize if my program has it without looking at the code?
And if you always need to look at the code, what's the point in
actually running the program, except for other, irrelevant reasons?
(This same argument applies to consciousness, and explains why I,
and maybe others out there as well, after sketching out some
pseudocode that would have some conscious notion of its own
structure, decided to leave implementation to the people who
work on the formal semantics of reflective languages like
"3lisp" or "brown". (See the proceedings of the Lisp and FP
conferences, but be careful to avoid thinking about multiprocessing
while reading them.))
        Which brings us to Reason 2, and free will from the
perspective of pure, pragmatic AI.  As far as I can tell, the only
way free will can affect behavior is by making it unpredictable.
But since there are many other, easier ways to get unpredictability
without having to invoke the demoniacal (or is it oracular?)
Free Will, I'm back to "why bother?" again.  Unpredictability in
behavior is certainly valuable to an autonomous organism in a
dangerous environment, both as an individual (e.g. a rabbit trying
to outrun a hungry fox) and as a group (e.g. a plant species trying
to find a less-crowded ecological niche), but in spite of my use of
the word "trying" this doesn't need to involve any will, free or
otherwise.  In highly sophisticated systems like human societies,
where statements of ability (like diplomas :-) are often effectively
equivalent to demonstrations of ability, claiming "I have Free Will,
you'll fail if you try to predict/control my behavior!" might well be
quite effective in fending off a coercive challenge.  But computer
systems aren't in this kind of social situation (at least the ones I
work with aren't).  In fact they are designed to be as predictable
as possible, and when they aren't, it indicates a failure either of
understanding or in design.  So again, I don't see the need for
Artificial Free Will, fake or real.
        My background is largely psychology, so I think that it's
valuable to understand how it is that people feel that their behavior
is fundamentally unconstrained by external forces, especially social
ones.  But I also don't think that this illusion has any primary
adaptive value, and I don't think there's anything to be gained by
giving it to a computer.  If this is true, then the proper place
for this discussion is some cognitive-science list, which I'd be
happy to read if I knew where to send my subscription request.
        - George McKee
          NU Computer Science

------------------------------

Date: Fri 03 Jun 1988 15:04 CDT
From: <UUCJEFF%ECNCDC.BITNET@MITVMA.MIT.EDU>
Subject: >> RE>...a steeplejack and his mate, a good jazz ensemble,
         ...)


>No.  Fire and ambulance personnel have regulations, basketball has rules
>and teams discuss strategy and tactics during practice, and even jazz
>musicians use sheet music sometimes.  I don't mean to say that implicit
>communication doesn't exist, just that it's not as useful.  I don't know
>how to build steeples, but I'll bet it can be written down.

This person obviously doesn't know much about music performance.
Of course jazzers use sheet music, but have you ever seen a page out of
the Realbook?  If you have, you know you never follow it literally.  Even in
the more classically oriented stuff, the bandwidth of the information on the
sheet nowhere near approaches the gestural dimensions required to musically
or otherwise correctly interpret the piece.  For one, there is tradition,
which is passed on through schools, performance ensembles, and now recording
media.  If you are a trumpet player in an orchestra and you see a dot over a
note, that dot means different things depending on the composer, period,
genre, tempo, etc.

With jazz there are even more intangibles, like whether you play on top of
the beat or lay back in the pocket.  There is no written method which can
guarantee that you are going to get the right feel; man, you just gotta
feel it, baby *snap*.

You may still want to call jazz a language, and of course it is, it has
meaning, but it is not something that can be put down in a machine-readable
format.

Jeff Beer, UUCJEFF@ECNCDC.BITNET
==================================
Language is a virus --- Laurie Anderson

------------------------------

Date: 3 Jun 88 20:22:32 GMT
From: maui!bjpt@locus.ucla.edu  (Benjamin Thompson)
Subject: Re: Who else isn't a science?

In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>Gerald Edelman, for example, has compared AI with Aristotelian
>dentistry: lots of theorizing, but no attempt to actually compare
>models with the real world.  AI grabs onto the neural net paradigm,
>say, and then never bothers to check if what is done with neural
>nets has anything to do with actual brains.

This is symptomatic of a common fallacy.  Why should the way our brains
work be the only way "brains" can work?  Why shouldn't *A*I workers look
at weird and wonderful models?  We (basically) don't know anything about
how the brain really works anyway, so who can really tell if what they're
doing corresponds to (some part of) the brain?

Ben

------------------------------

Date: Sat, 4 Jun 88 00:14 EST
From: EBARNES%HAMPVMS.BITNET@MITVMA.MIT.EDU
Subject: Re: Sorry, no philosophy allowed here


Editors:

>If you can't write it down, you can't program it.

This comes down to two questions: can we build machines with original
thought capabilities, and what is meant by `program'?  I think that it
is possible to build machines which will think originally.  The question
then becomes: "Is what we do to set these 'free thinking' machines up
considered programming?"  It would not be a strict set of instructions,
but we would surely instill the rules of deductive reasoning in the
machine.  Whether or not this is "programming" is an uninteresting question.
Call it what you will; one way makes the original statement true and
the other way makes it false.
                                                        Eric Barnes

------------------------------

Date: 4 Jun 88 15:41:26 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: Bad AI: A Clarification

In article <1299@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>  Research requires skill.  Research into humanity requires special
>skills.  Computer scientists and mathematicians are not taught these skills.
>
There is no questioning the premise of the first sentence.  I am even willing
to grant, further, that artificial intelligence (or at least aspects which
are of particular interest to me) may be regarded as "research into
humanity."  However, after that, Cockton's argument begins to fall apart.
Just what are those "special skills" which such research "requires?"  Does
anyone have them?  Does Cockton regard familiarity with the humanistic
literature as such a skill?  I suspect there could be some debate as to
whether or not extensive literary background is a skill, particularly when
the main virtue of such knowledge is that it provides one with a history
of how one's predecessors have failed on similar tasks.  There is no doubt
that it is valuable to know that certain paths lead to dead ends;  but when
there are so many forks in the road, it is not always easy to determine WHICH
fork was the one which ultimately embodied the incorrect decision.

Perhaps I am misrepresenting Cockton by throwing too much weight on "being
well read."  In that case, he can set the record straight by doing a better
job of characterizing those skills which he feels computer scientists and
mathematicians lack.  Then he can tell us how many humanists have those
skills and have exercised them in the investigation of intelligence with
a discipline which he seems to think the AI community lacks.  Let he who
is without guilt cast the first stone, Mr. Cockton!  (While we're at it,
is your house made of glass, by any chance?)

One final note on bad AI.  I don't think there is anyone reading this
newsgroup who would doubt that there is bad AI.  However, in another
article, Cockton seems quite willing to admit (as most of us knew already)
that there is bad sociology, too.  One of the more perceptive writers on
social behavior, Theodore Sturgeon (who had the good sense to articulate
his views in the palatable form of science fiction), once observed that
90% of X is crud, for any value of X . . . that can be AI, sociology, or
classical music.  Bad AI is easy enough to find and even easier to pick
on.  Rather than biting the finger of the bad stuff, why not take the
time to look where the finger of the good stuff is really pointing?

------------------------------

Date: 4 Jun 88 16:09:56 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: AI and Sociology

In article <1301@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>  AI can be
>multidisciplinary, but it is, for me, unique in its insistence on a single
>paradigm which MUST distort the researcher's view of humanity, as well as the
>research consumer's view on a bad day.  Indefensible.
>
. . . and patently untrue!  Perhaps Mr. Cockton has suffered from an attempt to
study AI in such a dogmatic environment.  His little anecdote about the
advisor who put him off AI is quite telling.  I probably would have been
put off by such an attitude, too.  Fortunately, I could afford the luxury
of changing advisors without changing my personal interest in questions I
wanted to pursue.

First of all, it is most unclear that there is any single paradigm for the
pursuit of artificial intelligence.  Secondly, it is at least somewhat unclear
that any paradigm which certainly will INFLUENCE one's view of humanity also
necessarily DISTORTS it.  To assume that the two thousand years of philosophy
which have preceded us have provided an undistorted view of humanity is
arrogance in its most ignorant form.  Finally, having settled that there
is more than one paradigm, we can hardly accuse the AI community of INSISTING
on any paradigm.
>
>Again, I challenge AI's rejection of social criticisms of its paradigm.  We
>become what we are through socialisation, not programming (although some
>teaching IS close to programming, especially in mathematics).  Thus a machine
>can never become what we are, because it cannot experience socialisation in
>the
>same way as a human being.  Thus a machine can never reason like us, as it can
>never absorb its model of reality in a proper social context.  Again, there
>are
>well documented examples of the effect of social neglect on children.
>Machines
>will not suffer in the same way, as they only benefit from programming, and
>not all forms of human company.

Actually, if there is any agreement at all in the AI community it is in the
conviction to be sceptical of all authoritative usage of the word "never."
I, personally, do not feel that any social criticisms are being rejected
wholesale.  However, AI is a very difficult area to pursue (at least if
you are really interested in a research pursuit, as opposed to marketing
a new shell for building expert systems).  One of the most important keys
to getting any sort of viable result at all is understanding how to break
off a piece of the whole, big, intimidating problem whose investigation is
likely to provide some insight.  This generally leads to the construction
of a model, usually in the form of a software artifact.  The next key is
to investigate that model to see what it has REALLY told us.  A good example
of such an investigation is the one by Lenat and Brown on why AM and EURISKO
APPEAR (their words) to work.

There are valid questions about socialization which can probably be formulated
in terms of communities of automata.  However, we need to form a better vision
of what we can expect by way of the behavior of individual automata before we
can express those questions in any useful way.  There is no doubt that this
will take some time.  However, there is at least a glimmer of hope that when
we get around to expressing them, we will have a better idea of what we are
talking about than those who have chosen to reject the abstraction of
automata out of hand.

------------------------------

Date: 4 Jun 88 16:21:47 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: Aah, but not in the fire brigade, jazz ensembles, rowing
         eights,...

In article <239@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>In article <1171@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert
>Cockton) writes:
>> In article <5499@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
>>writes:
>> >The problem comes in deciding
>> >WHAT needs to be explicitly articulated and what can be left in the
>> >"implicit
>> >background."
>> ...
>> For people who haven't spent all their life in academia or
>> intellectual work, there will be countless examples of carrying out
>> work in near 100% implicit background (watch fire and ambulance
>> personelle who've worked together as a team for ages, watch a basketball
>> team, a steeplejack and his mate, a good jazz ensemble, ...)
>
>No.  Fire and ambulance personnel have regulations, basketball has rules
>and teams discuss strategy and tactics during practice, and even jazz
>musicians use sheet music sometimes.  I don't mean to say that implicit
>communication doesn't exist, just that it's not as useful.  I don't know
>how to build steeples, but I'll bet it can be written down.
>
Take a look at Herb Simon's article in ARTIFICIAL INTELLIGENCE about
"ill-structured problems" and then decide whether or not you want to
make that bet.

------------------------------

Date: 5 Jun 88 17:29:29 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Bad AI: A Clarification


      On this subject, one should read Drew McDermott's "Artificial Intelligence
meets Natural Stupidity" (ACM SIGART newsletter, #57, April 1976.)  His
comments are all too apt today.

                                        John Nagle

------------------------------

Date: 5 Jun 88 18:07:42 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Ill-structured problems

In article <5644@venera.isi.edu> Stephen Smoliar writes:
>Take a look at Herb Simon's article in ARTIFICIAL INTELLIGENCE about
>"ill-structured problems" and then decide whether or not you want to
>make that bet.

      A reference to the above would be helpful.

      Little progress has been made on ill-structured problems in AI.
This reflects a decision in the AI community made in the early 1970s to
defer work on those hard problems and go for what appeared to be an
easier path, the path of logic/language/formal representation sometimes
referred to as "mainstream AI".  In the early 1970s, both Minsky and
McCarthy were working on robots; McCarthy proposed to build a robot
capable of assembling a Heathkit color TV kit.  This was a discrete
component TV, requiring extensive soldering and hand-wiring to build,
not just some board insertion.  The TV kit was actually
purchased, but the robot assembly project went nowhere.  Eventually,
somebody at the SAIL lab assembled the TV kit, which lived in the SAIL
lounge for many years, providing diversion for a whole generation of
hackers.

      Embarrassments like this tended to discourage AI workers from
attempting projects where failure was so painfully obvious.  With
more abstract problems, one can usually claim (one might uncharitably
say "fake") success by presenting one's completed system only with
carefully chosen problems that it can deal with.  But in dealing
with the physical world, one regularly runs into ill-structured
problems that can't be bypassed.  This can be hazardous to your career.
If you fail, your thesis committee will know.  Your sponsor will know.
Your peers will know.  Worst of all, you will know.

      So most AI researchers abandoned the problems of vision, navigation,
decision-making in ill-structured physical environments, and a number
of other problems which must be solved before there is any hope of dealing
effectively with the physical world.  Efforts were focused on logic,
language, abstraction, and "understanding".  Much progress was made; we
now have a whole industry devoted to the production of systems with
a superficial but useful knowledge of a wide assortment of problems.

      Still, in the last few years, the state of the art in that area
seems to have reached a plateau.  That set of ideas may have been
mined out.  Certainly the public claims made a few years ago have not been
fulfilled.  (I will refrain from naming names; that's not my point today.)
The phrase "AI winter" is heard in some quarters.

------------------------------

Date: Sat, 4 Jun 88 17:48:57 EDT
From: aboulang@WILMA.BBN.COM
Reply-to: aboulanger@bbn.com
Subject: randomness


  In AIList Digest   V7 #4, Barry Kort writes:

  >If I wanted to give my von Neumann machine a *true* random number
  >generator, I would connect it to an A/D converter driven by thermal
  >noise (i.e. a toasty resistor).

  I recall that a Zener diode is a good source of noise (but cannot remember
  the spectrum it gives).

  It could be a good idea to utilize a Zener / A-D converter random number
  generator in Monte Carlo simulations.

  Andy Ylikoski


Ahem, all this stuff about analog sources being better random sources
is a bit of a "scientific" urban myth. It is instructive to go back to
the papers of the early 60's and see what it took to utilize analog
random sources. The basic problem in analog sources is correlation. To
wit:

"A Hybrid Analog-Digital Pseudo-Random Noise Generator", R.L.T.
Hampton, AFIPS Conference Proceedings, Vol 25, 1964  Spring Joint
Computer Conference. 287-301.

To quote a little:

"By precision clamping, the RMS level of binary noise can be closely
controlled, but the non-stationarity of the circuits used to obtain
electrical noise, even from stationary mechanisms such as a
radio-active source, still create problems and expense. For example,
the 80 Kc random-telegraph wave generator .... required a fairly
sophisticated and not completely satisfactory count-rate control loop.

In the design of University of Arizona's new ASTRAC II iterative
differential analyzer ... it was decided to abandon analog noise
generation completely. Instead, the machine will employ a digital
shift-register sequence generator ..."
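
For readers who have not met one, the "digital shift-register sequence
generator" mentioned in that quote is easy to sketch. The Python fragment
below is only an illustration (it is not from the Hampton paper): a
textbook 16-bit Fibonacci linear-feedback shift register with feedback
polynomial x^16 + x^14 + x^13 + x^11 + 1 and an arbitrary seed.

def lfsr_bits(seed=0xACE1):
    """Yield pseudo-random bits from a 16-bit Fibonacci LFSR."""
    state = seed & 0xFFFF
    assert state != 0, "an all-zero register would never change state"
    while True:
        # XOR the tapped bits (register positions 16, 14, 13 and 11).
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        # Shift right and feed the new bit back in at the top.
        state = (state >> 1) | (bit << 15)
        yield bit

gen = lfsr_bits()
print([next(gen) for _ in range(16)])    # first 16 bits of the sequence

The stream repeats after 2^16 - 1 steps, which is precisely why such
generators are pseudo-random rather than random; their appeal is that
their statistical properties are known in advance, unlike those of the
analog sources discussed above.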

If you would like to investigate recent high-quality theoretical work
on this matter, see the paper:

"Generating Quasi-random Sequences from Semi-random Sources", Miklos
Santha & Umesh V. Vazirani, Journal of Computer and System Sciences,
Vol 33, No 1, August 1986, 75-87.

They propose a clever method to eliminate the correlations in analog
sources.

Help stamp out scientific urban myths!


Albert Boulanger
aboulanger@bbn.com
BBN Labs

------------------------------

End of AIList Digest
********************

∂06-Jun-88  2009	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #20  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 6 Jun 88  20:08:53 PDT
Date: Mon  6 Jun 1988 22:43-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #20
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 7 Jun 1988       Volume 7 : Issue 20

Today's Topics:

 Queries:
  MACIE
  Response to: Inductive rule-learning tools - BEAGLE
  Object Oriented Programming in Canada

  Resignation from "AI in Engineering"

 Philosophy:
  definition of AI?
  Informatics vs Computer Science  +  AI in Space
  Re: CogSci and AI

----------------------------------------------------------------------

Date: 2 Jun 88 13:22:48 GMT
From: uh2@psuvm.bitnet  (Lee Sailer)
Subject: MACIE

Is it possible to obtain MACIE, the neural-net Expert System described
in the Feb. issue of CACM?

Can someone offer me a pointer to the author, Stephen Gallant, at
Northeastern U?

               thanks.

------------------------------

Date: 3-JUN-1988 21:26:35 GMT
From: POPX%VAX.OXFORD.AC.UK@MITVMA.MIT.EDU
Subject: Response to: Inductive rule-learning tools - BEAGLE

                  Inductive rule-learning tools: BEAGLE


From: Jocelyn Paine,
      Experimental Psychology,
      South Parks Road,
      Oxford.

Janet Address: POPX @ UK.AC.OX.VAX


In  reply  to  M.E.  van  Steenbergen's request  for  details  of  other
inductive tools in AILIST V7/16:


There is a  program called BEAGLE (named after the  ship on which Darwin
sailed to the Galapagos) which "breeds" classificatory rules by "natural
selection". BEAGLE's input is a training  set of records, all having the
same  structure  as  one  another.  Each record  contains  a  number  of
variables, some of which may depend on others. BEAGLE's output is one or
more rules  describing the conditions for  this dependence, in  terms of
ANDs,  ORs, NOTs,  arithmetic comparisons,  and (I  think) plus,  minus,
times, and divide.


An example given by Richard Forsyth, BEAGLE's author: assume  a training
set  describing  observations of  iris  plants,  where there  are  three
varieties of iris. Each record in the set contains:

    The name of that variety - this  is  essentially an enumerated type,
                               which can take one of 3 values;
    The length of its stamens;
    The date at which it starts flowering;
    The length of its petals.

BEAGLE's task is to induce rules  which predict, from the stamen length,
flowering date, and petal length of a new iris, which variety it is.

BEAGLE begins by generating some  syntactically correct rules at random,
paying no  attention to their  meaning. It  then tests them  against the
training set, selecting for the fittest (those which classify best), and
discarding the worst rules. It then subjects their "genetic material" to
crossover  re-combination, as  well as  random mutation,  so making  new
rules. It  continues for several cycles,  until some criterion  (I'm not
sure what) is satisfied by the remaining rules.
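
The breed/test/select cycle just described is easy to caricature in a few
lines of Python. The sketch below is only an illustration of the idea: the
threshold "rule language", the fitness function (classification accuracy),
and the genetic operators are all invented, and none of it is BEAGLE's
actual code.

import random

# Illustration only: evolve a threshold rule that recognises one iris
# variety from (petal_length, stamen_length, variety) training records.

def random_rule():
    return {"petal_gt": random.uniform(0.0, 10.0),
            "stamen_gt": random.uniform(0.0, 10.0)}

def rule_says_yes(rule, record):
    petal, stamen, _ = record
    return petal > rule["petal_gt"] and stamen > rule["stamen_gt"]

def fitness(rule, training, target):
    # Fraction of training records the rule classifies correctly.
    hits = sum(rule_says_yes(rule, r) == (r[2] == target) for r in training)
    return hits / len(training)

def crossover(mum, dad):
    # Swap "genetic material": one threshold from each parent.
    return {"petal_gt": mum["petal_gt"], "stamen_gt": dad["stamen_gt"]}

def mutate(rule, rate=0.2):
    return {k: (v + random.gauss(0.0, 1.0) if random.random() < rate else v)
            for k, v in rule.items()}

def evolve(training, target, pop_size=20, generations=30):
    population = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda r: fitness(r, training, target),
                        reverse=True)
        parents = population[:pop_size // 2]      # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=lambda r: fitness(r, training, target))

# e.g.  best = evolve(training_records, target="setosa")

A real run would of course use the full attribute set (stamen length,
flowering date, petal length) and a more expressive rule language with
ANDs, ORs, NOTs and arithmetic, as described above.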


Richard says that  BEAGLE has been used by the  health insurance company
BUPA to induce useful relations  concerning heart disease amongst BUPA's
subscribers. He sells it for (at  least) PC-clones (for about £200?) and
VAXes  (for about  £1500?), from  his company  Warm Boot,  Nottingham. I
don't know the rest of his address, or the exact prices of his software:
I'm recalling  a talk he  gave us  recently. However, perhaps  what I've
said will help.

------------------------------

Date: Sun, 5 Jun 88 20:08 EDT
From: "Nahum (N.) Goldmann" <ACOUST%BNR.BITNET@MITVMA.MIT.EDU>
Subject: Object Oriented Programming in Canada

Everybody in Canada who is interested in Object-Oriented Programming
(OOP) and/or in behavioral design research related to the development of
human-machine interfaces (however remotely connected to these subjects),
please reply to my e-mail address.  The long-term objective is the
organization of a corresponding Canadian bulletin board.

Greetings and thanks.

Nahum Goldmann
(613)763-2329

e-mail (BITNET): <ACOUST@BNR.CA>

------------------------------

Date: Mon, 06 Jun 88 10:29:37 EDT
From: <sriram@ATHENA.MIT.EDU>
Subject: Resignation from "AI in Engineering"


Due to some personal problems with the publishing director of the
International Journal for AI in Engineering, I have decided to
resign as the co-editor of the Journal. Please note that as
of April 1988, I am no longer associated with Computational Mech.
Publications in any capacity.


If you plan to write a book in the near future, the following books
are worth reading before you sign a contract.

 The Business of Being a Writer
  S. Golding and K. Sky
  Carroll and Graf Publishers, Inc.

 How to Understand and Negotiate a Book Contract
  R. Balkin
  Writer's Digest Books



Sriram
1-253, Intelligent Engineering Systems Lab.
M.I.T., Cambridge, MA 02139
ARPAnet: sriram@athena.mit.edu
tel.no:(617)253-6981

------------------------------

Date: 5 Jun 88 16:13:25 GMT
From: ruffwork@cs.orst.edu  (Ritchey Ruff)
Subject: definition of AI? (was: who else isn't a science?)

I've always liked my thesis advisor's (Tom Dietterich) definition
of AI.  It works so much better than any other I've seen.

        AI is the study of ill-defined computational methods.

This explains why once we understand something it is no
longer AI; it also explains why AI does not spend a lot of
time trying to verify that "this is the way OUR brain does it".
I think AI should be as defined above, and leave the strong/weak
"AI" to cognitive science...

--ritchey ruff  ruffwork@cs.orst.edu -or- ...!tektronix!orstcs!ruffwork

------------------------------

Date: Thu, 2 Jun 88 23:22:12 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Informatics vs Computer Science  +  AI in Space

--
The term "computer science" has done much to limit the field in ways I don't
like.  It focuses attention on computers, which seems to me to make as
little sense as calling musicology "music-instrument studies."

To me the proper study is information: what it is, where it comes from, what
its deep structure is, how it's processed, etc.  Whether it's processed by a
carbon-based or silicon-based life-form, whether the processor naturally
evolved or was built, is of less interest to me than the processing: the
algorithms or heuristics that transform and use information.

Further, the use of "science" in conjunction with "computer" confuses
people, who complain that a science of artifacts makes sense only as anthro-
pology, that the proper companion words are "computer engineering."  (We
went through this argument on AIList a few weeks ago.)

I'd prefer "information science."  Unfortunately, that phrase has already
been picked to mean the automated part of library science (another misnomer).
The French "informatique" seems to capture the connotation I'd prefer.
I doubt if the English-speaking world would take well to it, however,
so my second suggestion is "informatics," the English variant of Swedish
and Russian words.

Informatique/informatics suggests automated information processing, but
leaves open the possibility of a data-processing system composed of people,
or an integral part of it.  That last is usually the case, but because of
the emphasis on computers the people parts are often slighted.  Documen-
tation and user interfaces, for instance, are often poor.

Even worse, attempts are made to do away with the human components
altogether even where people can do a job more flexibly and efficiently (and
possibly more humanely).  Perhaps one of the beneficial effects of AI will
be more respect for the "simple" things people do.  Walking, picking up and
inspecting an object, even standing still without falling, is very complex,
as roboticists have found out.

JPL (which is part of CalTech and does most of its work for NASA) is
responsible for all space exploration beyond the Moon.  We are very
interested in robotics, due to the danger and cost of sending people into
space--and money is very tight for space exploration.  But experience is
showing that robotics has much higher costs than was anticipated, and that
human skills and judgement are needed in even mundane areas.

I've come to believe that the most cost-effective solution is AI-assisted
teleoperators.  It should be possible, for instance, to construct space
stations with almost all the construction personnel on the ground, using
color stereoscopic cameras and remote manipulators in orbit.  In low-Earth
orbit, and even at geosynchronous orbit, round-trip transmission delays are
short enough to make direct manipulation possible.

Some complete automation is practical, of course, especially since it can be
fairly dumb; remote operators can watch a robot work and intervene when the
unexpected occurs or the schedule calls for tasks that require human skill
or judgement.  At Moon-orbit and on Moon-surface some AI assistance and more
automation would be needed; round-trip transmission delays are about three
seconds.  The remote operator would primarily act as a supervisor, though
the low gravity of the Moon would make it possible to catch a dropped
object.  However, the remote operator might have to be on Quaaludes to keep
from going nuts!
                                   Larry @ jpl-vlsi

------------------------------

Date: 6 Jun 88 14:04 PDT
From: hayes.pa@Xerox.COM
Subject: Re: CogSci and AI

Gilbert Cockton says "The proper object of the study of humanity is humans, not
machines" .   What he fails to understand is that people ARE machines; and
moreover, to say that they are is not to dehumanise them or reduce them or
disparage them in any way.  Thinking of ourselves as machines - which is really
no more than saying that we belong in the natural world - if anything gives one
MORE respect for people.  Anyone who has thought seriously about navigating
robots will look with awe at a congenital idiot lumbering clumsily around the
streets.

By the way, the difference between Cognitive Science and AI is that the former
is a ( rather loosely defined ) interdisciplinary area where AI, cognitive and
perceptual psychology, psycholinguistics, neuroscience and bits of philosophy
all cohabit with more or less mutual discomfort.  ( Like all such
interdisciplinary areas, it is a bit like a singles bar. )  The key sociological
point to bear in mind is that these different people have interests which
overlap, but are trained to use methods and to respect criteria of success and
honesty which often contradict one another, or at best have no mutual relevance.
This makes mutual communication difficult, and only fairly recently have
computer science and psychology come to see what it is that the other considers
as vital to respectability. ( Respectively: models specified in sufficient
detail to be implemented; theories specified in a way which admits of clean
empirical test.  )  Unfortunately these requirements are often at odds, a source
of continual difficulty in communication.  As for connectionism ( of which the
much cited PDP work is one variety),  nobody doubts its importance, but it
sometimes comes along with a philosophical ( or perhaps methodological ) claim
which is much more controversial, which is that it spells the end of the idea of
mental representations.  This has caused it to become the political
rallying-point for a curious mixture of people ( including Dreyfus, for
example ) who have long been opponents of AI, and so it has come to have the
odor of being somehow on the opposite side of a fence from AI, and we get such
oddities as Gilbert's question, "Will the recent interdisciplinary shift,
typified by the recent PDP work, be the end of AI as we knew it?".  No, it won't.

What's in a name? Nothing, provided we all agree on what it means; but since
communication is notoriously difficult, it's wise to try to be sensitive to what
names are intended to mean.

Pat Hayes

( After typing this, I read  Richard O'Keefe's suggested definitions of cogsci
and AI, and largely agree with them. So there's one, Richard. )

------------------------------

End of AIList Digest
********************

∂07-Jun-88  2310	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #22  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 7 Jun 88  23:09:57 PDT
Date: Tue  7 Jun 1988 22:14-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #22
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 8 Jun 1988      Volume 7 : Issue 22

Today's Topics:

 Queries:
  Response to: inductive expert system tools
  Stock Price Forecasting
  Response to: AI in weather forecasting

  Talk Announcement - "Mundane Reasoning"

----------------------------------------------------------------------

Date: 6 Jun 88 18:42:42 GMT
From: esosun!cogen!alen@seismo.css.gov  (Alen Shapiro)
Subject: Response to: inductive expert system tools

In article <402@dnlunx.UUCP> marlies@dnlunx.UUCP (Steenbergen M.E.van) writes:
>
>                  . I am engaged in artificial intelligence research. At the
>moment I am investigating the possibilities of inductive expert systems. In
>the literature I have encountered the names of a number of (supposedly)
>inductive expert system building tools: Logian, RuleMaster, KDS, TIMM,
>Expert-Ease, Expert-Edge, VP-Expert. I would like to have more information
>about these tools (articles about them or the names of dealers in Holland). I
>would be very grateful to everyone sending me any information about these or
>other inductive tools. Remarks of people who have worked with inductive expert
>systems are also very welcome. Thanks!
>
There are basically 2 types of inductive systems

a) those that build an internal model by example (and classify future
   examples against that model) and
b) those that generate some kind of rule which, when run, will classify
   future examples

a) includes perceptron-like systems and more recently neural-net technology
   (as well as some of the work my company does that is NOT neural-net based)
b) may be split into 2 camps: 1) systems that produce a single decision tree
   for all decision classes (e.g. Quinlan's ID3, upon which RuleMaster,
   Expert-Ease, Ex-Tran, Superexpert, First Class and more are based; a
   minimal sketch of this approach appears below);
   2) systems that produce a separate rule for each class value (e.g.
   Michalski's AQ11).

I do not include those systems that are not able to generalise in either
a or b since strictly they are not inductive!!
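
For camp b.1, the heart of an ID3-style inducer is choosing, at each node
of the tree, the attribute whose test most reduces the entropy of the
class labels. The Python fragment below sketches only that
information-gain step, on invented attributes and data; a real
implementation applies it recursively to grow the whole tree.

import math
from collections import Counter

def entropy(examples):
    counts = Counter(e["class"] for e in examples)
    total = len(examples)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def information_gain(examples, attribute):
    # Drop in class entropy obtained by splitting on one attribute.
    total = len(examples)
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        remainder += len(subset) / total * entropy(subset)
    return entropy(examples) - remainder

def best_split(examples, attributes):
    return max(attributes, key=lambda a: information_gain(examples, a))

data = [
    {"outlook": "sunny", "windy": "no",  "class": "play"},
    {"outlook": "sunny", "windy": "yes", "class": "stay"},
    {"outlook": "rain",  "windy": "no",  "class": "play"},
    {"outlook": "rain",  "windy": "yes", "class": "stay"},
]
print(best_split(data, ["outlook", "windy"]))    # -> "windy"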

I don't know about dealers in Holland but ITL at George House, 36 N. Hanover
St., Glasgow Scotland G1 2AD (U.K.) are experts in producing REAL expert
systems that are inductively derived. The Turing Institute (same address)
are also well known in this regard.

--alen the Lisa slayer (it's a long story)

DISCLAIMER: I work for a company delivering inductively derived expert systems
into the real world doing real work and saving real money. I can be counted
on to be very biased!!

        ....!{seismo,esosun,suntan}!cogen!alen

------------------------------

Date: Tue, 7 Jun 88 15:52:30+0900
From: Minsu Shin <msshin%isdn.etri.re.kr@RELAY.CS.NET>
Subject: Stock Price Forecasting

  I am looking for references (books, articles,...) or any information
concerning "Forecast of  Stock Price  using  Pattern-Recognition".
  I will compile and share the gathered information after receiving a
reasonable amount, if anyone is interested.
  Replies via email are fine.
  Many thanks in advance for this favor.
  My address is as follows:

 Network  Intelligence  Section
 ISDN Development Dept.
 ETRI
 P.O.Box  8, Tae-Deog  Science  Town
 Dae-Jeon,Chung-Nam, 302-350, KOREA
 Fax : 82-042-861-1033, Telex : TDTDROK K45532

------------------------------

Date: Tue, 7 Jun 88 07:55:06 EDT
From: m06242%mwvm@mitre.arpa
Subject: Response to: AI in weather forecasting

 To: AILIST@AI.AI.MIT.EDU
 From: George Swetnam
 Subject: AI in Weather Forecasting

 In 1985, The MITRE Corporation and the National Center for Atmospheric
 Research collaborated in an experimental expert system for predicting
 upslope snowstorms in the Denver, Colorado area.  An upslope storm is
 one which gets the necessary atmospheric lifting from translation of a
 moist airmass up a topographic slope.  Upslope storms are responsible
 for roughly 60% of the precipitation in the Denver region; in this case
 the topographic slope is the slow, long rise from the Mississippi River
 to the foot of the Rocky Mountains.
 The most recent published information on this work is the paper whose
 title and abstract appear below.

    FIELD TRIAL OF A FORECASTER'S ASSISTANT FOR THE PREDICTION OF
                     UPSLOPE SNOWSTORMS

          G. F. Swetnam and E. J. Dombroski, The MITRE Corporation

       R. F. Bunting, University Corporation for Atmospheric Research


   AIAA 25th Aerospace Sciences Meeting, January 12-15, 1987
                   Paper No.  AIAA 87-0029

                         ABSTRACT

 An experimental expert system has been developed to assist a
 meteorologist in forecasting upslope snowstorms in the Denver, Colorado
 area.  The system requests about 35 data entries in a typical session
 and evaluates the potential for adequate moisture, lifting, and cold
 temperatures.  From these it forecasts the expected snowfall amount.
 The user can trace the reasoning behind the forecast and alter selected
 input data to determine how alternative conditions affect the
 expectation of snow.

 Written in Prolog, the system runs on an IBM PC or PC compatible
 microcomputer.  A field trial was held in the winter of 1985-86 to test
 system operation and improve the rule base.  The system performed well,
 but needs further refinement and automatic data collection before it can
 be considered ready for evaluation in an operational context.
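
 As a rough idea of what a traceable rule base of this kind looks like,
 here is a toy forward-chaining fragment in Python.  The attribute names,
 thresholds, and snowfall formula are invented for illustration; they are
 not taken from the MITRE/NCAR system, which is written in Prolog and
 consults about 35 inputs.

def forecast_upslope_snow(obs):
    # Each rule that fires appends its reason to the trace, so the
    # forecast can be explained afterwards.  All thresholds are made up.
    trace = []
    moist = obs["dewpoint_f"] >= 20 and obs["wind_dir"] in ("NE", "E", "SE")
    if moist:
        trace.append("adequate moisture: moist air in an easterly flow")
    lifting = obs["wind_dir"] in ("NE", "E", "SE") and obs["wind_kt"] >= 10
    if lifting:
        trace.append("adequate lifting: upslope wind component")
    cold = obs["surface_temp_f"] <= 32
    if cold:
        trace.append("cold enough for snow")
    if moist and lifting and cold:
        inches = 2 + obs["wind_kt"] // 10      # crude, made-up amount rule
        trace.append("forecast: about %d inches of snow" % inches)
        return inches, trace
    return 0, trace

amount, why = forecast_upslope_snow(
    {"dewpoint_f": 25, "wind_dir": "NE", "wind_kt": 20, "surface_temp_f": 28})
print(amount)             # 4
print("\n".join(why))     # the traced line of reasoning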

                         George Swetnam  (gswetnam@mitre)
                         The MITRE Corporation
                         7525 Colshire Drive
                         McLean, VA 22102

                         Tel: (703) 883-5845

------------------------------

Date: Tue, 7 Jun 88 00:13:03 EDT
From: research!dlm@research.att.com
Subject: Talk Announcement

______________________________________________________________________

                TALK ANNOUNCEMENT


Speaker:        Mark Derthick - Dept. of CS, Carnegie Mellon University

Title:          Mundane Reasoning

Date:           Tuesday, June 7
Time:           10:00
Place:          AT&T Bell Laboratories  MH 3D436

Abstract:

Frames are a natural and powerful conception for organizing knowledge.
Yet in most well-defined frame-based knowledge representation systems,
such as KL-ONE, the knowledge base must be logically consistent, no
guesses are made to remedy incomplete knowledge bases, and they
sometimes fail to return answers in a reasonable time, even for
seemingly easy queries.  On the other hand are connectionist knowledge
representation systems, which are more robust in that they can be made
to always return an answer quickly, and knowledge is combined
evidentially.  Unfortunately these systems, if they have a well
defined formal semantics at all, have had much less expressive power
than symbolic systems.  The differing characteristics result from two
independent decisions.  First, the statistical technique of Maximum a
Posteriori estimation is used as a semantic foundation rather than
logical deduction.  Second, heuristic simplifications of the models
considered give rise to fast, but errorful behavior.  Having made this
distinction, it is possible to use the same powerful syntax of
symbolic systems, but interpret it statistically and implement it with
a connectionist network.  Although correct networks are exponentially
large, they serve as a basis from which architectural simplifications
can be made which preserve an intuitive connection to the formal theory.
The knowledge base must be tuned to alleviate errors caused by the
heuristic simplifications, so the system is intended for familiar
everyday situations in which past performance has been used for
training and in which the ramifications of wrong answers are not
serious enough to justify the exponential search time required for
provably correct behavior.

Sponsor: Ron Brachman & Deborah McGuinness (allegra!dlm)

------------------------------

End of AIList Digest
********************

∂08-Jun-88  0833	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #21  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Jun 88  08:33:00 PDT
Date: Tue  7 Jun 1988 22:00-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #21
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 8 Jun 1988      Volume 7 : Issue 21

Today's Topics:

 Philosophy:
  Human-human communication
  Constructive Question
  Why study connectionist networks?
  Fuzzy systems theory
  Ill-structured problems
  The Social Construction of Reality
  Definition of 'intelligent'
  Artificial Intelligence Languages
  Who else isn't a science?

----------------------------------------------------------------------

Date: 2 Jun 88 07:18:09 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Human-human communication

In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>
>Name one thing that isn't expressible with language! :-)
Many things learnt by imitation, and taught by demonstration ;-)

I used to be involved in competitive gymnastics.  Last year, I got
involved in coaching.  The differences between the techniques I
learnt and the ones now taught are substantial.  There is a lot less
talk, and much more video.  Many moves are taught by "shaping"
gymnasts into "memory positions"  (aiming for some of these positions
will actually put you somewhere else, but that's the intention).  With
young children especially, trying to describe moves is pointless.
Even with adults, dance notations are a real problem.

We could get pedantic and say that ultimately this is describable.
For something to be USEFULLY describable by language

        a) someone other than the author must understand it
            (thus we comment programs in natural language)
        b) it must be more accurate and more efficient than
           other forms of communication.

Anyone who's interested in robot movement might find some inspiration
in gymnastic training programs for under-5s.  The amount of knowledge and
skill required to chain a few movements together is intriguing. As
with all human learning, the only insights are from failures to learn
(you can't observe someone learnING).  Perhaps the early mistakes of
young gymnasts may give a better insight into running robots :-)
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 6 Jun 88 10:31:47 GMT
From: mcvax!ukc!its63b!epistemi!jim@uunet.uu.net  (Jim Scobbie)
Subject: Re: Constructive Question


The difference between Cognitive Science and AI? The fact that the initials
"C.S." are already booked? :-)

(Actually, A.I.  for artificial insemination is widespread in the real
world in my experience.  When I visited the rural area my mother was
brought up in a few years ago, there were several who had been told I was
working in A.I. and who wondered what had become of my bowler hat and
wellingtons.)

All this from a linguist, so in this case the normal disclaimers certainly
apply!

--
Jim Scobbie:    Centre for Cognitive Science and Department of Linguistics,
                Edinburgh University,
                2 Buccleuch Place, Edinburgh, EH8 9LW, SCOTLAND
UUCP:    ...!ukc!cstvax!epistemi!jim     JANET:  jim@uk.ac.ed.epistemi

------------------------------

Date: 6 Jun 88 12:54:40 GMT
From: craig@think.com  (Craig Stanfill)
Subject: Why study connectionist networks?

It seems to me that it is proper for AI to study the degree
to which any computational method behaves in an
``intelligent'' manner.  Connectionist methods are certainly
worth studying, regardless of the degree (small at present)
to which they mimic the behavior of actual neurons.  I do,
however, balk at calling these networks ``neural networks,''
because that implies that an important criterion in judging
the research is how well these networks mimic the behavior
of neurons; if such is the case, then the majority of
existing connectionist research is deeply flawed.  But let's
not get tangled up in words, and let's not let the level of
hype generated within the field obscure the fact that
connectionist networks have some very interesting
properties.

                                        -Craig Stanfill

------------------------------

Date: 6 Jun 88 13:05:59 GMT
From: eagle!icdoc!qmc-cs!flash@bloom-beacon.mit.edu  (Flash Sheridan)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <1073@usfvax2.EDU> Wayne Pollock writes:
>In article <487@sequent.cs.qmc.ac.uk> root@cs.qmc.ac.uk [ME]  writes:
>>...
>>>>Because fuzzy logic is based on a fallacy
>>>Is this kind of polemic really necessary?
>>
>>Yes.  The thing the fuzzies try to ignore is that they haven't established
>>that their field has any value whatsoever except a few cases of dumb luck.
>
>On the other hand, set theory, which underlies much of current theory, is
>also based on fallacies; (given the basic premises of set theory one can
>easily derive their negation).

Sorry, it's a lot more complicated than that.  For more details, see my
D.Phil thesis when it exists.  (For a start, if you think you know what
you're talking about, what _are_ these "basic premises"?  From your
comment, I think you're about 70 years out of date.)

I'm the first to criticize Orthodox Set Theory.  But its flaws are of an
entirely different kind from those of fuzzy logic.  ST has what it takes,
almost, to be a mathematically respectable theory.  FL isn't even close.

From: flash@ee.qmc.ac.uk (Flash Sheridan)
Reply-To: sheridan@nss.cs.ucl.ac.uk
or_perhaps_Reply_to: flash@cs.qmc.ac.uk

------------------------------

Date: 6 Jun 88 15:30:40 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: Ill-structured problems

In article <17481@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle)
writes:
>In article <5644@venera.isi.edu> Stephen Smoliar writes:
>>Take a look at Herb Simon's article in ARTIFICIAL INTELLIGENCE about
>>"ill-structured problems" and then decide whether or not you want to
>>make that bet.
>
>      A reference to the above would be helpful.
>
Herbert A. Simon
The Structure of Ill Structured Problems
ARTIFICIAL INTELLIGENCE  4(1973), 181-201

Simon discusses houses, rather than steeples;  but I think the difficulties
he encounters are still relevant.

------------------------------

Date: 6 Jun 88 16:31:46 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net  (Simon Brooke)
Subject: Re: The Social Construction of Reality

Some time ago, on comp.lang.prolog, there was a discussion about the
supposed offensiveness of Richard O'Keefe's upper class English accent.
There are times, however, when such an accent appears to be called for,
and one of these occurred when T. William Wells posted (in
<218@proxftl.UUCP>):

#Oh, yeah.  First, Kant is a dead horse.

Really, the arrogance of the ignorant knows no bounds.

What Wells was objecting to was the claim in an earlier posting
<1157@crete.cs.glasgow.ac.uk> by Gilbert Cockton, that reality was socially
constructed, through a process of evolving consensus. Wells goes on to
ask:

#OK, answer me this: how in the world do they reach a consensus
#without some underlying reality which they communicate through.

This sentence demonstrates a profound misunderstanding. There may indeed
be a 'reality' out there, if by such you mean a system of material
(whatever that word means) objects, but if there is, we can never access
it except though our perceptions, and we have no way of verifying these.
Moreover, we can never verify that our own perceptions of phenomena agree
with those of other people. In order to make communication possible we
assume that our perceptions do accord; but the possibility of communication
between humans remains acutely mysterious in itself - see for example
Barwise and Perry's attempt (as yet unsuccessful) to formalise it.

Thus we are able to communicate not because of the existence of a
'reality' but despite the possible absence of it.

#And this: logic depends on the idea of noncontradiction.  If you
#abandon that, you abandon logic.

Well, tough. Einstein claimed that 'God does not play with dice'; that was
simply a statement of belief. The assumption that reality - if it exists -
is either consistent or coherent is no more than an assumption. It would
be nice if it were true, and life is a lot more comfortable so long as we
believe that it is. But it is simply ideology to claim that it certainly
is.

Later in his piece, Wells (replying to Cockton) writes:

#You assert that consensus determines reality (whatever that means)......
#your proposition has no evidence to validate itself with.

Well, I (personally) would not assert quite that, so let me state the case
more carefully. In the absence of any verifiable access to a real world,
what we conventionally (that is, in normal conversation) refer to as
'reality' *can only be* a social construct - it can't be an individual
construct, as if I talked to you about a world constructed entirely in my
own imagination no communication could take place (note that I am assuming
for the sake of the argument now that communication between humans *is*
possible, despite the fact that we don't understand how). Likewise, it
cannot be given, because we don't have access to any medium through which
it could be given. That's not much evidence, I agree - but as Sherlock
Holmes (or was it Sir Arthur Conan Doyle?) repeatedly said, when you have
eliminated the impossible, whatever remains, no matter how incredible,
must be true.

The most depressing thing of all in Wells's posting is his ending. He
writes:

#DO NOT BOTHER TO REPLY TO ME IF YOU WANT TO DEFEND CONSENSUS
#REALITY.  The idea is so sick that I am not even willing to reply
#to those who believe in it.
#
#As you have noticed, this is not intended as a counter argument
#to consensus reality.

Unable to find rational argument to defend the articles of his faith,
Wells, like fanatical adherents of other ideologies before him, first
hurls abuse at his opponents, and finally, defeated, closes his ears. I
note that he is in industry and not an academic; nevertheless he is
posting into the ai news group, and must therefore be considered part of
the American AI community. I haven't visited the States; I wonder if
someone could tell me whether this extraordinary combination of ignorance
and arrogance is frequently encountered in American intellectual life?


** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Thought for today: isn't it time you learned the Language            *
********************* International Superieur de Programmation? *********

------------------------------

Date: Tue, 7 Jun 88 10:08 EDT
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: Definition of 'intelligent'


The current amusement here at work is found in >Webster's New Collegiate
Dictionary< published by Merriam:

 intelligent: [...] 3: able to perform some of the functions of a computer

-Kurt Godden

------------------------------

Date: Tue, 07 Jun 88 11:01:04 HOE
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: Re: Artificial Intelligence Languages

In
AIList Digest             Friday, 3 Jun 1988       Volume 7 : Issue 16
Ed King mentions a list of features an AI language must have:

<         1) True linked list capability.
<
<         2) Easy access to hardware
<
<         3) Easy to use string functions, or a library to do such.
<
<So, by these criteria, all the commonly held "AI languages" would fit
<(like PROLOG, LISP, POP, et cetera ad nauseam).

Please add APL2. It has all of the above, plus support for frames
as a basic data structure. After all, a frame is nothing but a matrix
of complex objects. Object-oriented programming is also trivial in APL2.
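For concreteness, here is a minimal sketch of the same idea in Common Lisp
rather than APL2 - a frame as a named collection of slots holding arbitrary
objects.  The frame and slot names are invented for illustration; this is
not how any particular tool implements frames.

  ;;; A frame as a named table of slots, each slot holding an
  ;;; arbitrary object (a value, a list, or another frame).

  (defun make-frame (name &rest slot-value-pairs)
    "Build a frame as a property-list style structure."
    (list :frame name :slots (copy-list slot-value-pairs)))

  (defun frame-get (frame slot)
    "Return the value stored under SLOT, or NIL if absent."
    (getf (getf frame :slots) slot))

  (defun frame-put (frame slot value)
    "Destructively set SLOT of FRAME to VALUE and return FRAME."
    (setf (getf (getf frame :slots) slot) value)
    frame)

  ;; Example: a toy 'bird' frame with a few illustrative slots.
  (defparameter *bird*
    (make-frame 'bird :isa 'animal :can-fly t :parts '(wings beak)))

  ;; (frame-get *bird* :can-fly)     => T
  ;; (frame-put *bird* :can-fly nil) ;; penguins exist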



Regards,

Manuel Alfonseca, ALFONSEC at EMDCCI11

------------------------------

Date: 7 Jun 88 15:14:00 GMT
From: apollo!nelson_p@eddie.mit.edu  (Peter Nelson)
Subject: who else isn't a science


>>regarded as adequate for the study of human reasoning.  On what
>>grounds does AI ignore so many intellectual traditions?

>  Because AI would like to make some progress (for a change!).  I
>  originally majored in psychology.  With the exception of some areas
>  in physiological psychology, the field is not a science.  Its
>  models and definitions are simply not rigorous enough to be useful.

>Your description of psychology reminds many people of AI, except
>for the fact that AI's models end up being useful for many things
>having nothing to do with the motivating application.
>
>Gerald Edelman, for example, has compared AI with Aristotelian
>dentistry: lots of theorizing, but no attempt to actually compare
>models with the real world.  AI grabs onto the neural net paradigm,
>say, and then never bothers to check if what is done with neural
>nets has anything to do with actual brains.

  But we don't know enough about how real brains work yet, and it
  may be quite a while until we do.  Besides, neural net models
  use a lot fewer nodes than a real brain does to solve a similar
  problem so we're probably not doing *exactly* the same thing as
  a brain anyway.

  I don't see why everyone gets hung up on mimicking natural
  intelligence.  The point is to solve real-world problems. Make
  machines understand continuous speech, translate technical articles,
  put together mechanical devices from parts that can be visually
  recognized, pick out high priority targets in self-guiding missiles,
  etc.  To the extent that we understand natural systems and can use
  that knowledge, great!   Otherwise, improvise!

                                    --Peter Nelson

------------------------------

Date: 7 Jun 88 16:21:52 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com  (Hunter Barr)
Subject: Re: Human-human communication

In article <1315@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>>
>>Name one thing that isn't expressible with language! :-)
>Many things learnt by imitation, and taught by demonstration ;-)
>
>I used to be involved in competitive gymnastics.  Last year, I got
>involved in coaching.  The differences between the techniques I
>learnt and the ones now taught are substantial.  There is a lot less
>talk, and much more video.  Many moves are taught by "shaping"
>gymnasts into "memory positions"  (aiming for some of these positions
>will actually put you somewhere else, but that's the intention).  With
>young children especially, trying to describe moves is pointless.
>Even with adults, dance notations are a real problem.
>
>We could get pedantic and say that ultimately this is describable.
>For something to be USEFULLY describable by language
>
>       a) someone other than the author must understand it
>           (thus we comment programs in natural language)
>       b) it must be more accurate and more efficient than
>          other forms of communication.
>
>Anyone who's interested in robot movement might find some inspiration
>in gymnastic training programs for under-5s.  The amount of knowledge and
>skill required to chain a few movements together is intriguing. As
>with all human learning, the only insights are from failures to learn
>(you can't observe someone learnING).  Perhaps the early mistakes of
>young gymnasts may give a better insight into running robots :-)
>--
>Gilbert Cockton, Department of Computing Science,  The University, Glasgow
>       gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
>
>            The proper object of the study of humanity is humans, not machines

----------------------------------------------------------------
First: thank you, Gilbert, for your contributions to this discussion.
You have been responsible for much of the life in it.  I disagree with
most of your arguments and almost all of your conclusions, but I am
much indebted to you for stimulating my thoughts, and forcing some
rigor in justifying my own opinions.  Without you (and other people
willing to dissent) we might all sit around nodding agreement at each-other!

Now I must "get pedantic," by saying that body movement *is*
describable.  As for part a), you are correct that someone other than
the author must understand it, otherwise we do not have communication.
But you ignore the existence of useful dance notations.  I don't know
much about dance notation, and I am sure there is much lacking in it--
probably standardization for one thing.  But the lack of a universally
intelligible *spoken* language does not make human speech fail the
"usefulness" test.  Mandarin Chinese is an even bigger problem with
adults than dance notation!  If one learned a common dance notation
from childhood, it would be every bit as useful as the Chinese
language.  And I often interpret computer programs with *no* "natural
language" comments whatsoever.  (Pity me if you wish, but don't say
that these computer programs fail to describe anything, simply because
they have no natural language in them.)

To further show that description of movement is possible, imagine that
I tell a gymnastics student this:

  Run over to the rings, get into the Iron-Cross position, then lower
  your body, letting the arms get straight-up before the legs start to
  come down.  At this point your toes should be pointing straight down.
  Then lift your fingers from the rings until you are hanging by your
  pinky-fingers.  Then drop to the floor and come back over here.

Yes it contains ambiguity, but it is pretty clear compared to much of
what passes for communication, even to people who know very little
about gymnastics (like me).

Now for part b).  We need something more accurate and more efficient
than other forms of communication.  Well, one could conceivably plot
out a system whereby different combinations and sequences of smells
stand for body movements.  Cinnamon-sugar-lime, followed by peppermint
could mean "a one-handed cartwheel."  Compared to this smell-system for
dance notation, spoken language is very accurate and efficient.

So your definition allows dance to be considered "USEFULLY
describable" by dance notation, because a) the notation *can* be
understood by those other than the author (namely, those educated to
understand the notation), and b) the notation *is* more accurate and
efficient than other forms of communication (e.g., the smell-system).

It looks like your "definition" is useless, even for your argument.
As it happens, there are many cases where a picture *is* worth a
thousand words, and using your hands to grab the student's body,
putting it into the correct position, *is* the best way to teach
gymnastics.  And there are many cases where the real thing *is* more
useful than the symbols we make up for it.  (This is in contrast to the
case where assembly-language mnemonics are easier to follow than the real,
bare bits in core memory.)  But you have in no way shown that
gymnastics movement is not describable.  So I join the challenge:

>>Name one thing that isn't expressible with language! :-)

Well (removing my tongue from my cheek), you don't have to *name* it,
but give us some sort of evidence that it exists, and that it cannot be
expressed with symbols.

To the audience at large: I hope you will all pardon me for quoting
the above posting *in toto*, but since I attack Gilbert Cockton I felt
it only fair to avoid taking his words out of context.  Thank you for
reading.  Please reply either to me directly, or to COMP.AI.

                            ______
                            HUNTER

------------------------------

End of AIList Digest
********************

∂08-Jun-88  2149	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #23  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Jun 88  21:48:57 PDT
Date: Thu  9 Jun 1988 00:25-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #23
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 9 Jun 1988      Volume 7 : Issue 23

Today's Topics:

 Queries:
  Response to: inductive expert system tools
  Stock Price Forecasting
  ITS Conference
  Response to: AI in weather forecasting
  Rule Based ES References
  Response to: Stock Price Forecasting
  Applications of AI for fault tolerance

 Seminars:
  Partial Computation of Database Queries (UNISYS seminar)
  Feasible Learnability and Locality of Grammars (UNISYS seminar)

----------------------------------------------------------------------

Date: 6 Jun 88 18:42:42 GMT
From: esosun!cogen!alen@seismo.css.gov  (Alen Shapiro)
Subject: Response to: inductive expert system tools

In article <402@dnlunx.UUCP> marlies@dnlunx.UUCP (Steenbergen M.E.van) writes:
>
>                  . I am engaged in artificial intelligence research. At the
>moment I am investigating the possibilities of inductive expert systems. In
>the literature I have encountered the names of a number of (supposedly)
>inductive expert system building tools: Logian, RuleMaster, KDS, TIMM,
>Expert-Ease, Expert-Edge, VP-Expert. I would like to have more information
>about these tools (articles about them or the names of dealers in Holland). I
>would be very grateful to everyone sending me any information about these or
>other inductive tools. Remarks of people who have worked with inductive expert
>systems are also very welcome. Thanks!
>
There are basically 2 types of inductive systems:

a) those that build an internal model by example (and classify future
   examples against that model) and
b) those that generate some kind of rule which, when run, will classify
   future examples

a) includes perceptron-like systems and, more recently, neural-net technology
   (as well as some of the work my company does that is NOT neural-net based);
b) may be split into 2 camps: 1) systems that produce a single decision tree
   for all decision classes (e.g. Quinlan's ID3, upon which RuleMaster,
   Expert-Ease, Ex-Tran, Superexpert, First Class and more are based);
   2) systems that produce a decision rule for each class value (e.g.
   Michalski's AQ11).

I do not include those systems that are not able to generalise in either
a or b since strictly they are not inductive!!
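For concreteness, a toy sketch of camp b) 1) above - picking the attribute
whose values best separate the classes, in the ID3 spirit - might look like
the following Common Lisp fragment.  It is purely illustrative: the example
representation and function names are invented, and none of the commercial
tools named above work exactly this way.

  ;;; An example is a plist of attribute values plus a :class entry,
  ;;; e.g. (:outlook sunny :windy nil :class stay-in).

  (defun entropy (examples)
    "Shannon entropy (bits) of the :class distribution of EXAMPLES."
    (let ((n (length examples))
          (counts (make-hash-table)))
      (dolist (e examples)
        (incf (gethash (getf e :class) counts 0)))
      (let ((h 0.0))
        (maphash (lambda (class count)
                   (declare (ignore class))
                   (let ((p (/ count n)))
                     (decf h (* p (log p 2)))))
                 counts)
        h)))

  (defun split-on (examples attribute)
    "Partition EXAMPLES into an alist of (value . subset) by ATTRIBUTE."
    (let (buckets)
      (dolist (e examples buckets)
        (let* ((v (getf e attribute))
               (bucket (assoc v buckets)))
          (if bucket
              (push e (cdr bucket))
              (push (list v e) buckets))))))

  (defun information-gain (examples attribute)
    "Reduction in class entropy obtained by splitting on ATTRIBUTE."
    (let ((n (length examples)))
      (- (entropy examples)
         (loop for (nil . subset) in (split-on examples attribute)
               sum (* (/ (length subset) n) (entropy subset))))))

  (defun best-attribute (examples attributes)
    "The attribute with the highest information gain, ID3-style."
    (loop with best = (first attributes)
          for a in (rest attributes)
          when (> (information-gain examples a)
                  (information-gain examples best))
            do (setf best a)
          finally (return best)))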

I don't know about dealers in Holland but ITL at George House, 36 N. Hanover
St., Glasgow Scotland G1 2AD (U.K.) are experts in producing REAL expert
systems that are inductively derived. The Turing Institute (same address)
are also well known in this regard.

--alen the Lisa slayer (it's a long story)

DISCLAIMER: I work for a company delivering inductively derived expert systems
into the real world doing real work and saving real money. I can be counted
on to be very biased!!

        ....!{seismo,esosun,suntan}!cogen!alen

------------------------------

Date: Tue, 7 Jun 88 15:52:30+0900
From: Minsu Shin <msshin%isdn.etri.re.kr@RELAY.CS.NET>
Subject: Stock Price Forecasting

  I am looking for references (books, articles, ...) or any information
concerning "Forecast of Stock Price using Pattern Recognition".
  I will compile and redistribute the gathered information once I have
received a reasonable amount, if anyone is interested.
  Replies via email are fine.
  Many thanks in advance for this favor.
  My address is as follows:

 Network Intelligence Section
 ISDN Development Dept.
 ETRI
 P.O.Box  8, Tae-Deog  Science  Town
 Dae-Jeon,Chung-Nam, 302-350, KOREA
 Fax : 82-042-861-1033, Telex : TDTDROK K45532

------------------------------

Date: 7 Jun 88 08:13:51 GMT
From: mcvax!unido!cosmo!hase%cosmo.UUCP@uunet.uu.net  (Juergen Seeger)
Subject: ITS Conference

I'm searching for papers, readers, protocols, and so on
from the ITS Conference in Montreal.
Please send to:

Juergen Seeger
c/o Heinz Heise Verlag
Helstorfer Strasse 7
D-3000 Hannover 61

------------------------------

Date: Tue, 7 Jun 88 07:55:06 EDT
From: m06242%mwvm@mitre.arpa
Subject: Response to: AI in weather forecasting

 To: AILIST@AI.AI.MIT.EDU
 From: George Swetnam
 Subject: AI in Weather Forecasting

 In 1985, The MITRE Corporation and the National Center for Atmospheric
 Research collaborated in an experimental expert system for predicting
 upslope snowstorms in the Denver, Colorado area.  An upslope storm is
 one which gets the necessary atmospheric lifting from translation of a
 moist airmass up a topographic slope.  Upslope storms are responsible
 for roughly 60% of the precipitation in the Denver region; in this case
 the topographic slope is the slow, long rise from the Mississippi River
 to the foot of the Rocky Mountains.
 The most recent published information on this work is the paper whose
 title and abstract appear below.

    FIELD TRIAL OF A FORECASTER'S ASSISTANT FOR THE PREDICTION OF
                     UPSLOPE SNOWSTORMS

          G. F. Swetnam and E. J. Dombroski, The MITRE Corporation

       R. F. Bunting, University Corporation for Atmospheric Research


   AIAA 25th Aerospace Sciences Meeting, January 12-15, 1987
                   Paper No.  AIAA 87-0029

                         ABSTRACT

 An experimental expert system has been developed to assist a
 meteorologist in forecasting upslope snowstorms in the Denver, Colorado
 area.  The system requests about 35 data entries in a typical session
 and evaluates the potential for adequate moisture, lifting, and cold
 temperatures.  From these it forecasts the expected snowfall amount.
 The user can trace the reasoning behind the forecast and alter selected
 input data to determine how alternative conditions affect the
 expectation of snow.

 Written in Prolog, the system runs on an IBM PC or PC compatible
 microcomputer.  A field trial was held in the winter of 1985-86 to test
 system operation and improve the rule base.  The system performed well,
 but needs further refinement and automatic data collection before it can
 be considered ready for evaluation in an operational context.
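 As a rough illustration of the kind of factor-combining rule such a
 forecaster's assistant contains, here is a toy Common Lisp sketch.  It is
 NOT the MITRE/NCAR rule base (which is written in Prolog and far richer);
 every threshold, score and name below is invented.

   (defun factor-ok-p (score threshold)
     "A factor contributes when its 0-10 score meets a threshold."
     (and score (>= score threshold)))

   (defun forecast-snowfall (moisture lifting coldness)
     "Combine three 0-10 factor scores into a crude snowfall category."
     (cond ((not (and (factor-ok-p moisture 5)
                      (factor-ok-p lifting 5)
                      (factor-ok-p coldness 5)))
            :little-or-none)
           ((every (lambda (s) (>= s 8))
                   (list moisture lifting coldness))
            :heavy)
           (t :moderate)))

   ;; (forecast-snowfall 9 8 8) => :HEAVY
   ;; (forecast-snowfall 6 7 4) => :LITTLE-OR-NONE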

                         George Swetnam  (gswetnam@mitre)
                         The MITRE Corporation
                         7525 Colshire Drive
                         McLean, VA 22102

                         Tel: (703) 883-5845

------------------------------

Date: 7 Jun 88 19:05:59 GMT
From: unh!ss1@uunet.uu.net  (Suresh Subramanian)
Subject: Rule Based ES References


  I am building a rule-based expert system as part of the learning
  system for my thesis. The RB expert system consists of a rule base, which
  holds the rules; a working area where the problems to be solved are
  represented; and an interpreter which runs the RB expert system using
  Forgy's Rete match algorithm. I need references and suggestions regarding
  the evaluation of rule-based expert systems.
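
  For readers who have not met the recognize-act cycle, a deliberately naive
  forward-chaining interpreter - an explicit rule base, a working memory, and
  a match-fire loop, but without the Rete network's incremental matching -
  can be sketched in Common Lisp as follows.  All rules, facts and names are
  invented for illustration.

    ;;; Facts are lists; a rule is (name :if (fact ...) :then (fact ...)).

    (defparameter *rules*
      '((wet-ground    :if ((raining))    :then ((ground wet)))
        (take-umbrella :if ((raining))    :then ((carry umbrella)))
        (slippery      :if ((ground wet)) :then ((ground slippery)))))

    (defun rule-fires-p (rule memory)
      "True when every :if fact of RULE is already in MEMORY."
      (every (lambda (fact) (member fact memory :test #'equal))
             (getf (cdr rule) :if)))

    (defun fire (rule memory)
      "Return MEMORY extended with RULE's :then facts (no duplicates)."
      (union (getf (cdr rule) :then) memory :test #'equal))

    (defun run (memory &optional (rules *rules*))
      "Recognize-act cycle: fire rules until no rule adds a new fact."
      (loop
        (let ((next memory))
          (dolist (rule rules)
            (when (rule-fires-p rule next)
              (setf next (fire rule next))))
          (when (= (length next) (length memory))
            (return memory))
          (setf memory next))))

    ;; (run '((raining)))
    ;; => ((GROUND SLIPPERY) (CARRY UMBRELLA) (GROUND WET) (RAINING))
    ;;    (order may vary)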

   Please email to one of the following addresses.

 1)   ss@unhcs.unh.edu
 2)   ss1@unh.unh.edu or ss1@descartes.unh.edu
 3)   ss1@unh.UUCP
 4) Internet : unh!ss1@uunet.uu.net


  Thanks in advance for the information.

                                                     Suresh Subramanian

------------------------------

Date: 8 Jun 88 13:08:04 GMT
From: cbosgd!osu-cis!dsacg1!ntm1169@clyde.att.com
Subject: Response to: Stock Price Forecasting


> Date: Tue, 7 Jun 88 02:52 EDT
> From: Minsu Shin <msshin%isdn.etri.re.kr@RELAY.CS.NET>
> To: AILIST-REQUEST@mc.lcs.mit.edu
> Subject: Stock Price Forecasting
>
>   I am looking for references (books, articles,...) or any information
> concerning "Forecast of Stock Price using Pattern Recognition".


I am not sure if this is the reference that you are looking for, but
I saw an article, "NeuralWare Expert System Classifies Stock Patterns", on
pages 21 and 24 of FEDERAL COMPUTER WEEK, May 9, 1988.  It discusses a system
built with a software product called the Analog Adaptive Pattern
Classification System, a product for the IBM PC and compatibles costing
$4995 from NeuralWare Inc. (Sewickley, PA).

--
Mott Given @ Defense Logistics Agency ,DSAC-TMP, P.O. Box 1605,
            Systems Automation Center, Columbus, OH 43216-5002
UUCP:        {cbosgd,gould,cbatt!osu-cis}!dsacg1!mgiven
Phone:       614-238-9431

------------------------------

Date: Wed 8 Jun 88 15:04:57-PDT
From: Singaravel Murugesan <MURUGESAN@PLUTO.ARC.NASA.GOV>
Subject: Applications of AI for fault tolerance

------------------
I am interested in the area of application of AI/Expert System
Techniques for fault diagnosis and fault tolerance in  computers
and real-time control and monitoring systems.

I would appreciate receiving references/bibliography and copies
of reports/publications in these areas. Kindly reply to:

        S. Murugesan
        NASA Ames Research Center
        Mail Stop: 244-4
        Moffett Field
        CA 94035
          Phone: (415)-694-6525
                 FTS: 464-6525
         murugesan@pluto.arc.nasa

------------------------------

Date: Wed, 8 Jun 88 14:59:26 EDT
From: finin@PRC.Unisys.COM
Subject: Partial Computation of Database Queries (UNISYS seminar)


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER


               Partial Computation of Database Queries

                            Susan Davidson
                   Computer and Information Science
                      University of Pennsylvania

A critical component of real-time systems is the database, which is
used to store external input such as environmental readings from
sensors, as well as system information.  Typically, these databases
are large, due to vast quantities of historical data, and are
distributed, due to the distributed topology of the devices
controlling the application and the critical need for fault tolerance.
Hence, sophisticated database management systems are needed.  However,
most of the database management systems being used for these
applications are hand-coded.  Off-the-shelf database management
systems are not used due in part to a lack of predictability of
response.

In this talk, an iterative method of processing real-time database
queries will be presented.  The method improves the fault-tolerance
and predictability of response in real-time database systems by
guaranteeing an approximate answer to a query at any point in
computation; if for some reason the deadline of a query cannot be met
(for example, due to communication failures or unanticipated locking
which make certain database structures unavailable), a partial answer
can be given.  Partial answers monotonically improve with time in the
sense that any fact which is said to be true remains true as
computation proceeds, and any fact which can be inferred to be false
remains false as computation proceeds.
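
A toy sketch of the monotonicity idea (deliberately simplified, and not the
method presented in the talk): evaluate the query against whichever database
fragments have responded so far, and only ever add facts to the answer.  The
representation and names below are invented.

  (defun available-fragments (fragments reachable-p)
    "Keep only the fragments that could be reached before the deadline."
    (remove-if-not reachable-p fragments))

  (defun partial-answer (query fragments reachable-p)
    "Union of QUERY's matches over the reachable fragments."
    ;; QUERY is a predicate over individual facts.
    (let (answer)
      (dolist (fragment (available-fragments fragments reachable-p) answer)
        (dolist (fact fragment)
          (when (and (funcall query fact)
                     (not (member fact answer :test #'equal)))
            (push fact answer))))))

  ;; Suppose only site A answers before the deadline:
  ;; (partial-answer (lambda (fact) (eq (first fact) 'sensor-reading))
  ;;                 (list *site-a-facts* *site-b-facts*)
  ;;                 (lambda (frag) (eq frag *site-a-facts*)))
  ;; Re-running later, when site B is also reachable, can only add
  ;; facts to the answer - it never retracts one.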


                      2:00 pm Wednesday, June 15
                         BIC Conference room
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: Wed, 8 Jun 88 15:19:10 EDT
From: finin@PRC.Unisys.COM
Subject: Feasible Learnability and Locality of Grammars (UNISYS
         seminar)


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER


            Feasible Learnability and Locality of Grammars

                              Naoki Abe
                   Computer and Information Science
                      University of Pennsylvania


Polynomial learnability is a generalization of a complexity theoretic
notion of feasible learnability originally developed by Valiant in the
context of learning boolean concepts from examples.  In this talk I
will present an intuitive exposition of this learning paradigm, and
then apply this notion to the evaluation of grammatical formalisms for
linguistic description from the point of view of feasible
learnability.  In particular, a novel, nontrivial constraint on the
degree of ``locality'' of grammars will be defined which allows
grammatical formalisms of much linguistic interest to be polynomially
learnable.  If time allows, possible implications of this result for the
theory of natural language acquisition will also be discussed.
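
For readers who have not met Valiant's framework, the standard criterion it
builds on can be stated roughly as follows (this is the textbook PAC
condition, not necessarily the exact formulation used in the talk):

  A concept class $C$ is feasibly (polynomially) learnable if there exist an
  algorithm $A$ and a polynomial $p$ such that for every target $c \in C$,
  every distribution $D$ over examples, and all $\epsilon, \delta \in (0,1)$:
  given $m \ge p(1/\epsilon, 1/\delta, |c|)$ examples drawn independently
  from $D$ and labelled by $c$, $A$ runs in time polynomial in $m$ and, with
  probability at least $1 - \delta$, outputs a hypothesis $h$ satisfying
  \[
      \Pr_{x \sim D}\bigl[\, h(x) \neq c(x) \,\bigr] \le \epsilon .
  \]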


                      2:00 pm Wednesday, June 1
                         Paoli Auditorium
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

End of AIList Digest
********************

∂09-Jun-88  0045	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #24  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 9 Jun 88  00:45:24 PDT
Date: Thu  9 Jun 1988 00:38-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #24
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 9 Jun 1988      Volume 7 : Issue 24

Today's Topics:

 Philosophy:
  Understanding, utility, rigour
  Human-human communication
  Definition of 'intelligent'
  "TV GENIE" transmitter used in robotics - WARNING
  consensual reality

----------------------------------------------------------------------

Date: 6 Jun 88 10:35:14 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Understanding, utility, rigour

In article <3c671fbe.44e6@apollo.uucp> nelson_p@apollo.uucp writes:
>
>  Because AI would like to make some progress (for a change!).
Does it really think that ignoring the failures of others GUARANTEES
it success, rather than even more dismal failure?  Is there an argument
behind this?

>  With the exception of some areas in physiological psychology,
> the field is not a science.
What do we mean when we say this?  What do you mean by 'scientific'?
I ask because there are many definitions, many approaches, mostly polemic.

>  Its models and definitions are simply not rigorous enough to be useful.
Lack of rigour does not imply lack of utility.  Having applied many of
the models and definitions I encountered in psychology as a teacher, I
can say that I certainly found my psychology useful, even the behaviourism
(it forced me to distinguish between things which could and could not
be learned by rote; the former are good CAI/CAL fodder).

Understanding and rigour are not the same thing. Nor is 'rigour' one
thing.  The difference between humans and computers is what can
inspire them.  Computers are inspired by mechanistic programs, humans
by ideas, models, new understandings, alternative views.  Not all are,
of course, and too much science in human areas is directed towards the
creation of cast-iron understanding for the uninspired dullard.

>  When you talk about an 'understanding of humanity' you clearly
>  have a different use of the term 'understanding' in mind than I do.
Good, perhaps you might appreciate that it is not all without value.
In fact, it is the understanding you must use daily in all the
circumstances where science has not come to guide your actions.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 7 Jun 88 22:32:56 GMT
From: esosun!jackson@seismo.css.gov  (Jerry Jackson)
Subject: Re: Human-human communication


Some obvious examples of things inexpressible in language are:



How to recognize the color red on sight (or any other color)..

How to tell which of two sounds has a higher pitch by listening..

And so on...

--Jerry Jackson

------------------------------

Date: Wed, 8 Jun 88 07:54:18 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: re: Definition of 'intelligent'


In AIList Digest 7.21, Kurt Godden <GODDEN%gmr.com@RELAY.CS.NET> writes:

KG> The current amusement here at work is found in >Webster's New Collegiate
KG> Dictionary< published by Merriam:

KG>  intelligent: [...] 3: able to perform some of the functions of a computer

When you quote the entire definition of "intelligent" (sense 3) from
Webster's 9th New Collegiate Dictionary, you find quite a sensible
definition of a certain CS usage of the term:

        3 : able to perform computer functions <an ~ terminal>; also :
        able to convert digital information to hard copy <an ~ copier>

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Wed, 8 Jun 88 09:10:23 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: "TV GENIE" transmitter used in robotics - WARNING

      As I've seen the TV transmitter called the "TV Genie" in several
robotics labs around the country, it seems appropriate to post this
here.  I was considering using one myself, but after consultation with
some hams and the local FCC office, discovered that this is a bad idea.
See below.

                                        John Nagle

   --
The ARRL Letter, Volume 6, No. 22, November 4, 1987


  Published by:
       The American Radio Relay League, Inc.
       225 Main St.
       Newington, CT 06111

  Editor:
       Phil Sager, WB4FDT

       Material from The ARRL Letter may be reproduced in whole
or in part, in any form, including photoreproduction and
electronic databanks, provided that credit is given to The ARRL
Letter and to the American Radio Relay League, Inc.


ORION INDUSTRIES FINED $940,000
     Orion Industries, Inc, of Las Vegas, Nevada, and its owners
have been fined over $940,000 for importing and marketing illegal
radiofrequency devices. In addition, one owner was sentenced to
two years imprisonment.
     The illegal device the firm imported and marketed is a low
powered video transmitter called "TV Genie" designed to retrans-
mit video signals from cameras and VCRs over the air to nearby
television receivers. The FCC said that recent complaints of
interference to air flight communications in Tennessee and a
rural Illinois ambulance service were traced to the device.
     According to the FCC public notice, warnings were issued to
Orion Industries after the FCC received reports that these
devices were being sold. Follow-up investigations revealed that
Orion sold over 27,000 of the devices after receiving the
warnings.
     The penalties assessed were based upon federal law, which
allows maximum fines of twice the gross gain from sales of the
device.

------------------------------

Date: 8 Jun 88 12:57 PDT
From: hayes.pa@Xerox.COM
Subject: consensual reality

I can't help responding to Simon Brooke's acidic comments on William
Wells' rather brusquely expressed response to Cockton's social-science
screaming.  Simon still has a three-hundred-year-old DOUBT about the
world, and how we know it's there.  Most English-speaking analytical
philosophy got over that around a century ago, but it seems to have
lived on in German philosophy and become built into the foundations of a
certain branch of social science theory.

Look: of course we can only perceive the world through our perceptions;
this is almost a tautology.  So what?  This is only a problem if one is
anxious to obtain a different kind of knowledge, something which is
ABSOLUTELY certain.  The need for this came largely from religious
thinking, and is now a matter of history.   I'm not certain of anything
more than that there is a CRT screen in front of me right now.  Unlike
Descartes, I'm not interested in nailing down truth more firmly than by
empirical test.  Once one takes this sort of an attitude, science
becomes possible, and all these terrible feelings of alienation, doubt
and the mysteriousness of communication simply evaporate.   Of course,
perception and communication are amazing phenomena, and we don't
understand them; but they aren't isolated in some sort of philosophical
cloud, they are just damned complicated and subtle.

A question: if one doubts the existence of the physical world in which
we live, what gives one such confidence in the existence of the other
people who comprise the society which is supposed to determine our
construction of the reality?  You deny us "any verifiable access to a
real world" , yet later in the same sentence refer to "normal
conversation", which seems to me like a remarkable shift of level of
ontological cynicism.

Pat

------------------------------

Date: 8 Jun 88 20:56:21 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com  (Hunter Barr)
Subject: Re: Human-human communication

In article <198@esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
>
>Some obvious examples of things inexpressible in language are:
>
>How to recognize the color red on sight (or any other color)..
>
>How to tell which of two sounds has a higher pitch by listening..
>
>And so on...
>
>--Jerry Jackson


All communication is based on common ground between the communicator
and the audience.  Symbols are established for colors and sounds as
for anything else-- by common experience, i.e., common to both
communicator and audience.  Often the *easiest* way to establish this
common ground, is to attach a symbol to something physical.  For
instance, to put a young gymnast's body in the "memory position" and
say, "There.  That is called 'arching your back.'"
Or to point to a red object and say, "That object is red."
Or to play two notes on a piano and say, "The second one is higher."

While it is true that most of what happens in our minds (all our acts
of physical perception, emotion, and some of our goal resolution) is
non-linguistic, there is nothing to stop us from attaching linguistic
symbols to any part of it and expressing it in language.  Thus AIers
find the acts of the mind equivalent to (and indistinguishable from)
the manipulation of symbols.  You are mistaken in thinking that
language is unable to deal with non-linguistic phenomena.

I will now express in language:
"How to recognize the color red on sight (or any other color)..":

Find a person who knows the meaning of the word "red."  Ask her to
point out objects which are red, and objects which are not,
distinguishing between them as she goes along.  If you are physically
able to distinguish colors, you will soon get the hang of it.

This is no different from having an English teacher write sentences on
the black-board, distinguishing between those words which are verbs
and those which are not.  That is probably how you learned the meaning
of the English word "verb."

What is the difference between learning the word "red" and learning
the word "verb"?  Surely the latter scenario shows that the concept
"verb" is expressible in language.  It seems to me that we commonly
make use of the word "red" when nothing red is in sight, leading me to
think that language expresses both concepts quite reliably, without
regard to their tangible or otherwise physical existence.

I am experiencing something like this scenario myself these days.  I
just started to study Japanese, and I have yet to pin down *aoi*; as
time goes on I will ask an expert Japanese-speaker to point out
things that fall under that category, and I will eventually get a very
good idea of what is meant by *aoi*. (My current understanding is that
it covers virtually everything which English calls "blue", and
possibly many shades which English calls "green".)

Someone once theorized that over the centuries our understanding of
the Latin color-words may have shifted slightly.  The problem is that
we have no-one whose native language is Latin, who can point to ruddy
objects and say, "Well, this one is not quite *ruber*, but that other
one surely is."  When we translate a piece of Latin text as "He wore a
red cloak," who is to say that an English-speaking eye-witness would
not have called it "orange" or "brown".  I cannot even agree with my
girlfriend which things are purple and which are blue.  This could
never happen with terms like *maior* and *minor*, because there are so
many common objects to keep the distinction clear.

If you feel that I am cheating, try to express something in language
which does *not* fall back on some experience like these.  And please
don't forget to express it where I can read it-- either into my
mailbox, or into this newsgroup.  Thanks for reading and responding--
I love the attention.
                            ______
                            HUNTER

------------------------------

Date: 08 Jun 88  1619 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: consensual reality

The trouble with a consensual or any other subjective concept of reality
is that it is scientifically implausible.  That the world evolved, that
life evolved, that humanity evolved, that civilization evolved and that
science evolved is rather well accepted, and the advocates of subjective
concepts of reality don't usually challenge it.

        However, if evolution of all these things is a fact, then it would
be an additional fact about evolution if humans and human society evolved
any privileged access to facts about the world.  There isn't even any
guarantee from evolution that all the facts of the world or even of
mathematics are in any way conclusively decidable by such creatures as may
evolve intelligence.  What we can observe directly is an accident of the
sense organs we happen to have evolved.  Had we evolved as good
echolocation as bats, we might be able to observe each other's innards
directly.  Likewise there is no mathematical theorem that the truth about
any mathematical question fits within axiomatic systems with nice
properties.

        Indeed science is a social activity and all information comes in
through the senses.  A cautious view of what we can learn would like to
keep science close to observation and would pay attention to the consensus
aspects of what we believe.  However, our world is not constructed in
a way that co-operates with such desires.  Its basic aspects are far
from observation, the truth about it is often hard to formulate in
our languages, and some aspects of the truth may even be impossible to
formulate.  The consensus is often muddled or wrong.

To deal with this matter I advocate a new branch of philosophy I call
metaepistemology.  It studies abstractly the relation between the
structure of a world and what an intelligent system within the world
can learn about it.  This will depend on how the system is connected
to the rest of the world and what the system regards as meaningful
propositions about the world and what it accepts as evidence for these
propositions.

Curiously, there is a relevant paper - "Gedanken Experiments with Sequential
Machines" by E. Moore.  The paper is in "Automata Studies" edited
by C. E. Shannon and J. McCarthy, Princeton University Press 1956.
Moore only deals with finite automata observed from the outside
and doesn't deal with criteria for meaningfulness, but it's a start.

The issue is relevant for AI.  Machines programmed to find out about
the environment we put them in won't work very well if we provide
them with only the ability to formulate hypotheses about the relations
among their inputs and outputs.  They need also to be able to hypothesize
theoretical entities and conjecture about their existence and properties.
It will be even worse if we try to program them to regard reality as
consensual, since such a view is worse than false; it's incoherent.

------------------------------

End of AIList Digest
********************

∂09-Jun-88  1606	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #25  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 9 Jun 88  16:03:45 PDT
Date: Thu  9 Jun 1988 18:40-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #25
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 10 Jun 1988       Volume 7 : Issue 25

Today's Topics:

 Philosophy:
  The Social Construction of Reality
  Bad AI: A Clarification
  Emotion (was Re: Language-related capabilities)
  Me and Karl Kluge (no flames, no insults, no abuse)
  Constructive Question  (Texts and social context)
  who else isn't a science
  Consensus and Reality
  human-human communication
  The Social Construction of Reality
  construal of induction
  Hypostatization

----------------------------------------------------------------------

Date: 7 Jun 88 10:52:20 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: The Social Construction of Reality

In article <450001@hplsdar.HP.COM> jdg@hplsdar.HP.COM (Jason Goldman) writes:
>Ad Hominem
Et alia sunt?

When people adopt a controversial position for which there is no convincing
proof, the only scientific explanation is the individual's ideology.  The
dislike of ad hominem arguments among scientists is a sign of their self-imposed
dualism: personality and the environment stop outside the cranium of
scientists, but penetrate the crania of everyone else.

Odi profanum vulgum, et arceo ...
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 7 Jun 88 22:58:28 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Bad AI: A Clarification

In article <1299@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:

>AI research seems to fall into two groups:
>       a) machine intelligence;
>       b) simulation of human behaviour.
>No problem with a), apart from the use of the now vacuous term "intelligence",

But later you say:

>So, the reason for not encouraging AI is twofold.  Firstly, any research which
>does not address human reasoning directly is either pure computer science, or
>a domain application of computing.

Vision?  Robotics?  Everything that uses computers can be called pure or
applied CS.  So what?

>There is no need for a separate body of
>research called AI (or cybernetics for that matter).  There are just
>computational techniques.  Full stop.

What happened to "machine intelligence"?  It *is* a separate (but
not totally separate) body of research.  What is the point of arguing
about which research areas deserve names of their own?

BTW, there's no *need* for many things we nonetheless think good.

>It would be nice if they followed good software engineering practices and
>structured development methods as well.

Are you trying to see how many insults can fit into one paragraph?

Are you really trying to oppose "bad AI" or are you opportunistically
using it to attack AI as a whole?  Why not criticise specific work you
think is flawed instead of making largely unsupported allegations in
an attempt to discredit the entire field?

------------------------------

Date: 8 Jun 88 03:11:26 GMT
From: mcvax!ukc!its63b!aipna!rjc@uunet.uu.net  (Richard Caley)
Subject: Emotion (was Re: Language-related capabilities)


In article <700@amethyst.ma.arizona.edu>, K Watkins writes:
> If so, why does the possibility of sensory input to computers make so much
> more sense to the AI community than the possibility of emotional output?  Or
> does that community see little value in such output?  In any case, I don't see
> much evidence that anyone is trying to make it more possible.  Why not?

Eduard Hovy's thesis "Generating Natural Language Under Pragmatic
Constraints" ( Yale 1987 ) describes his attempt to produce a system
which uses certain "emotional" factors to select the contants and
form of a text.

The factors considered are the speaker's attitude towards the information
to be conveyed, the speaker's relationship with the audience, and the
audience's views.

------------------------------

Date: 8 Jun 88 03:45:21 GMT
From: mcvax!ukc!its63b!aipna!rjc@uunet.uu.net  (Richard Caley)
Subject: Re: Me and Karl Kluge (no flames, no insults, no abuse)


In <1312@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes
> OK then. Point to an AI text book that covers Task Analysis?  Point to
> work other than SOAR and ACT* where the Task Domain has been formally
> studied before the computer implementation?

Natural language processing. Much (by no means all) of it builds on the
work of some school of linguistics.

> How the hell can you model what you've never studied?  Fiction.

One stands on the shoulders of giants. Nobody has time to research
their subject from the ground up.

> Wonder that's why some
> AI people have to run humanity down.  It improves the chance of ELIZA
> being better than us.

Straw man. I've never heard anyone try to claim ELIZA was better than
_anything_. Nor do I see 'AI people' running humanity down. It may be
that you consider your image of human beings to be 'better' than the one
you see AI as putting forward; I'm sure that many 'AI people' would not
agree.

> Once again, what the hell can a computer
> program tell us about ourselves?

According to your earlier postings, if ( strong ) AI was successful it
would tell us that we have no free will, or at least that we can not assume
we have it. I don't agree with this but it is _your_ argument and something
which a computer program could tell us.

> Secondly, what can it tell us that we
> couldn't find out by studying people instead?

What do the theories of physics tell us that we couldn't find out by
studying objects?

>     The proper object of the study of humanity is humans, not machines

Well, there go all the physical sciences, botany, music, mathematics . . .

------------------------------

Date: 8 Jun 88 09:38:16 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Constructive Question  (Texts and social context)

In article <1053@cresswell.quintus.UUCP> Richard A. O'Keefe writes:
>If I may offer a constructive question of my own:  how does socialisation
>differ from other sorts of learning?  What is so terrible about learning
>cultural knowledge from a book?  (Books are, after all, the only contact
>we have with the dead.)

Books are not the only contact with the dead.  Texts are only one
type of archaeological artefact.  There's nothing `terrible' about book
learning, although note that the interpretation of religious texts is
often controlled within a social context.  Texts are artefacts which
were created within a social context; this is why they date.  When you
lose the social context, even if the underlying "knowledge" has not
changed (whatever that could mean), you lose much of the meaning and
become out of step with current meanings.  See Winograd and Flores on
the hermeneutic tradition, or any sociological approach to literature
(e.g. Walter Benjamin and Terry Eagleton, both flavours of Marxist if
anyone wants to be warned).

Every generation has to rewrite, not only its history, but also its science.
Text production in science is not all driven by accumulation of new knowledge.

There are many differences between socialisation and book knowledge,
although the relationship is ALMOST a set/subset one.  Books are part
of our social context, but private readings can create new contexts
and new readings (hence the resistance to vernacular Bibles by the
Medieval Catholic church).  Universities and colleges provide another
social context for the reading of texts. The legal profession provides
another (lucrative) context for "opinions" on the meanings of texts
relative to some situation.  This, of course, is one of the many
social injustices from which AI will free us.  Lawyers will not
interpret the law for profit, machines sold by AI firms will.

IKBS programs are essentially private readings which freeze, despite
the animation of knowledge via their inference mechanisms (just a
fancy index really :-)).  They are only sensitive to manual reprogramming,
a controlled intervention.  They are unable to reshape their knowledge
to fit the current interaction AS HUMANS DO.  They are insensitive,
intolerant, arrogant, overbearing, single-minded and ruthless.  Oh,
and they usually don't work either :-) :-)
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 8 Jun 88 09:58:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Bad AI: A Clarification

In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>Are you really trying to oppose "bad AI" or are you opportunistically
>using it to attack AI as a whole?  Why not criticise specific work you
>think is flawed instead of making largely unsupported allegations in
>an attempt to discredit the entire field?

No, I've made it clear that I only object to the comfortable,
protected privilege which AI gives to computational models of
Humanity.  Any research into computer applications is OK by me, as long
as it's getting somewhere.  If it's not, it's because of a lack of basic
research.  I contend that there is no such thing as basic research in AI.
Anything which could be called basic research is the province of other
disciplines, who make more progress with less funding per investigation (no
expensive workstations etc.).

I do not think there is a field of AI.  There is a strange combination
of topic areas covered at IJCAI etc.  It's a historical accident, not
an epistemic imperative.

My concern is with the study of Humanity and the images of Humanity
created by AI in order to exist.  Picking on specific areas of work is
irrelevant.  Until PDP, there was a logical determinism, far more
mechanical than anything in the physical world, behind every AI model
of human behaviour.  Believe what you want to get robots and vision to
work.  It doesn't matter, because what counts is the fact that your
robotic and vision systems do the well-defined and documented tasks
which they are constructively and provably designed to do.  There is
no need to waste energy here keeping in step with human behaviour.
But when I read misanthropic views of Humanity in AI, I will reply.
What's the problem?
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 8 Jun 88 11:52:11 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Bad AI: A Clarification

In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>What is the point of arguing about which research areas deserve names
>of their own?
A lot.  Categories imply boundaries.  Don't try to tell me that the
division of research into disciplines has no relevance.  A lot comes
with those names.
>
>>It would be nice if they followed good software engineering practices and
>>structured development methods as well.
>
>Are you trying to see how many insults can fit into one paragraph?
No.  This applies to my research area of HCI too.  No-one in UK HCI
research, as far as I know, objects to the criticism that research
methodologies are useless until they are integrated with existing
system development approaches.  HCI researchers accept this as a
problem.  On software engineering too, HCI will have to deliver its
goods according to established practices.  To achieve this, some HCI
research must be done in Computer Science departments in collaboration
with industry.  There is no other way of finishing off the research
properly.

You've either missed or forgotten a series of postings over the last
two years about this problem in AI.  Project managers want to manage
IKBS projects like existing projects.  Organisations have a large
investment in their development infrastructures.  The problem with
AI techniques is that they don't fit in with existing practices, nor
are there any mature IKBS structured development techniques (and I
only know of one ESPRIT project where they are being developed).
You must also not be talking to the same UK software houses as I am, as
(parts of) UK industry feel that big IKBS projects are a recipe for
burnt fingers, unless they can be managed like any other software project.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 8 Jun 88 19:53:41 GMT
From: gandalf!seth@locus.ucla.edu  (Seth R. Goldman)
Subject: Re: who else isn't a science

In article <3c84f2a9.224b@apollo.uucp> Peter Nelson writes:
>
>  I don't see why everyone gets hung up on mimicking natural
>  intelligence.  The point is to solve real-world problems. Make
>  machines understand continuous speech, translate technical articles,
>  put together mechanical devices from parts that can be visually
>  recognized, pick out high priority targets in self-guiding missiles,
>  etc.  To the extent that we understand natural systems and can use
>  that knowledge, great!   Otherwise, improvise!

It depends what your goals are.  Since AI is a part of computer science
there is naturally a large group of people concerned with finding
solutions to real problems using a more engineering type of approach.
This is the practical end of the field.  The rest of us are interested
in learning something about human behavior/intelligence and use the
computer as a tool to build models and explore various theories.  Some
are interested in modelling the hardware of the brain (good luck) and
some are interested in modelling the behavior (more luck).  It is these
research efforts which eventually produce technology that can be applied
to practical problems.  You need both sides to have a productive field.

------------------------------

Date: 9 Jun 88 04:13:21 GMT
From: bungia!datapg!sewilco@umn-cs.arpa  (Scot E. Wilcoxon)
Subject: Re: who else isn't a science

In article <3c84f2a9.224b@apollo.uucp> Peter Nelson writes:
...
>  I don't see why everyone gets hung up on mimicking natural
>  intelligence.  The point is to solve real-world problems. Make
...
>  etc.  To the extent that we understand natural systems and can use
>  that knowledge, great!   Otherwise, improvise!

The discussion has been zigzagging between this viewpoint and another.
This is the "thought engineering" side, while others have been trying to
define the "thought science" side.  The "thought science" definition is
concerned with how carbon-based creatures actually think.  The "thought
engineering" definition is concerned with techniques which produce
desired results.

There are many cases where engineering has produced solutions quite
different from the natural examples that inspired them.
Duplicating the motions of a flying bird or dish-washing human does not
directly lead to our present standards of fixed-wing airplanes and
mechanical dishwashers.

-- Scot E. Wilcoxon sewilco@DataPg.MN.ORG
{amdahl|hpda}!bungia!datapg!sewilco Data Progress UNIX masts &
rigging +1 612-825-2607 uunet!datapg!sewilco

------------------------------

Date: Thu, 9 Jun 88 09:31:41 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Consensus and Reality

In AIList Digest 7.24, Pat Hayes <hayes.pa@Xerox.COM> writes:
PH> A question: if one doubts the existence of the physical world in which
PH> we live, what gives one such confidence in the existence of the other

I can't speak for Simon Brooke, but personally I don't think anyone
seriously doubts the existence of the physical world in which we live.
Something is going on here.  The question is, what.

One reason for our present difficulty in this forum reaching consensus
about what "Reality" is, is that we are using the term in two senses:
The anti-consensus view is that there is an absolute Reality and that is
what we relate to and interact with.  The consensus view is that what we
"know" about whatever it is that is going on here is limited and constrained
in many ways, yet we relate to our categorizations of the world expressing
that "knowledge" as though they were in fact the Reality itself.  When a
consensual realist expresses doubt about the existence of something
generally taken to be real, I believe it is doubt about the status of a
mental/social construct, rather than doubt about the very existence of
anything to which the construct might more or less correspond.  From one
very valid perspective there is no CRT screen in front of you, only an
ensemble of molecules.  Not a very useful perspective for present
purposes.  The point is that neither perspective denies the reality of
that to which the other refers as real, and neither is itself that
reality.

What is being overlooked by those who react with such allergic violence
to the notion of consensual reality is that there is a good relationship
between the two senses or understandings of the word "real":  namely,
precisely that which makes science an evolving thing.  John McCarthy
<JMC@SAIL.Stanford.EDU> has expressed it very well:

JM>         Indeed science is a social activity and all information comes in
JM> through the senses.  A cautious view of what we can learn would like to
JM> keep science close to observation and would pay attention to the consensus
JM> aspects of what we believe.  However, our world is not constructed in
JM> a way that co-operates with such desires.  Its basic aspects are far
JM> from observation, the truth about it is often hard to formulate in
JM> our languages, and some aspects of the truth may even be impossible to
JM> formulate.  The consensus is often muddled or wrong.

The control on consensus is that our agreements about what is going on
must be such that the world lets us get away with them.  But given our
propensity for ignoring (that is, agreeing to ignore) what doesn't fit,
that gives us lots of wiggle room.  Cross-cultural and psychological
data abound.  For a current example in science, consider all the
phenomena that are now respectable science and that previously were
ignored because they could not be described with linear functions.

But nature too is evolving, quite plausibly in ways not limited to the
biological and human spheres.  The universe appears to be less like a
deterministic machine than a creative, unpredictable enterprise.  I am
thinking now of Ilya Prigogine's _Order Out of Chaos_.  "We must give up
the myth of complete knowledge that has haunted Western science for
three centuries.  Both in the hard sciences and the so-called soft
sciences, we have only a window knowledge of the world we want to
describe."  The very laws of nature continue to reconfigure at higher
levels of complexity.  "Nature has no bottom line." (Prigogine, as
quoted in Brain/Mind Bulletin 11.15, 9/8/86.  I don't have the book at
hand.)

Now perhaps I am misconstruing McCarthy's words, since he starts out
saying:

JM> The trouble with a consensual or any other subjective concept of reality
JM> is that it is scientifically implausible.

Since everything else in that message is consistent with the view
presented here, I believe he is overlooking the relationship between the
two aspects of what is real:  the absolute Ding an Sich, and those
agreements that we hold about reality so long as we can get away with
it.  In this relationship, consensual reality is not scientifically
implausible; it is, at its most refined, science itself.

JM> It will be even worse if we try to program them to regard reality as
JM> consensual, since such a view is worse than false; it's incoherent.

I suggest looking at the following for a system that by all accounts
works pretty well:

  Pask, Gordon.  1986.  Conversational Systems.  A chapter in _Human
  Productivity Enhancement_, vol. 1, ed. J. Zeidner.  Praeger, NY.

For the coherent philosophy, a start and references may be found in
another chapter in the same book:

  Gregory, Dik.  1986.  Philosophy and Practice in Knowledge
  Representation.  (In book cited above).

Winograd & Flores _Understanding Computers and Cognition_ arrive at a
very similar understanding by a different route.  (Pask by way of
McCulloch, von Foerster, and his own development of Conversation Theory;
Winograd & Flores by way of Maturana & Varela (students of McCulloch)
and hermeneutics.)

JM> To deal with this matter I advocate a new branch of philosophy I call
JM> metaepistemology.  It studies abstractly the relation between the
JM> structure of a world and what an intelligent system within the world
JM> can learn about it.  This will depend on how the system is connected
JM> to the rest of the world and what the system regards as meaningful
JM> propositions about the world and what it accepts as evidence for these
JM> propositions.

Sounds close to Pask's conversation theory.  There is also a new field
being advocated by Paul MacLean (brain researcher), called epistemics.
It is said to concern how we can know our "knowing organs," the brain
and mind.  "While epistemology examines knowing from the outside in,
epistemics looks at it from the inside out." (William Gray, quoted in
Brain/Mind Bulletin 7.6, 3/8/82.)

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Thu, 9 Jun 88 09:35:25 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re:  human-human communication

In AIList Digest 7.24, Hunter Barr <bbn.com!pineapple.bbn.com!barr@bbn.com
or maybe hbarr@pineapple.bbn.com> says regarding Human-human communication:

HB> I will now express in language:
HB> "How to recognize the color red on sight (or any other color)..":

HB> Find a person who knows the meaning of the word "red."  Ask her to
HB> point out objects which are red, and objects which are not,
HB> distinguishing between them as she goes along.  If you are physically
HB> able to distinguish colors, you will soon get the hang of it.

This evades the question by stating in language how to find out for
yourself what I can't tell you in language.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 9 Jun 88 15:28:45 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com  (Hunter Barr)
Subject: Re: The Social Construction of Reality

In article <1332@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>Odi profanum vulgum, et arceo ...


Favete lingua!


I couldn't resist :-).
                            ______
                            HUNTER

------------------------------

Date: Thu, 9 Jun 88 14:15:33 EDT
From: Raul.Valdes-Perez@B.GP.CS.CMU.EDU
Subject: construal of induction

Alen Shapiro states:

>There are basically 2 types of inductive systems
>
>a) those that build an internal model by example (and classify future
>   examples against that model) and
>b) those that generate some kind of rule which, when run, will classify
>   future examples
...
>I do not include those systems that are not able to generalise in either
>a or b since strictly they are not inductive!!

The concept of induction has various construals, it seems.  The one I am
comfortable with is that induction refers to any form of ampliative
reasoning, i.e. reasoning that draws conclusions which could be false
despite the premises being true.  This construal is advanced by Wesley
Salmon in the little book Foundations of Scientific Inference.  Accordingly,
any inference is, by definition, inductive xor deductive.

I realize that this distinction is not universal.  For example, some would
distinguish categories of induction.  I would appreciate reading comments
on this topic in AILIST.

Raul Valdes-Perez
CMU CS Dept.

------------------------------

Date: Thu 9 Jun 88 13:51:46-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Hypostatization


I agree with Pat Hayes that the problem of the existence of the world is
not as important as it used to be, but I think the more general question
about the relation of mind and world is still worthwhile.  As Hayes
pointed out, such questions are entirely worthless if we stay close to
observation and never forget, as McCarthy suggests, that we are
postulating theoretical entities and their properties from our input
output data.  Such an observational attitude is always aware that there
are no ``labels'' on our inputs and outputs that tell us where they come
from or where they go.

Sadly, such keen powers of observation are constantly endangered by the
mind's own activity.  After inventing a theoretical entity, the mind
begins to treat it as raw observation, that is, the entities become part
of the world as far as the mind is concerned.  The mind, in a sense,
becomes divided from its own creations.  If the mind is lucky, new
observations will push it out of its complacency, but it is precisely
the mind's attachment to its creations that dulls the ability to
observe.

Hayes is correct that some forms of Western religion are particularly
prone to this process (called ``hypostatization''), but some eastern
religions are very careful about it.  Kant devastated traditional
metaphysics by drawing attention to it.  Freud and Marx were directly
concerned with hypostatization, though only Marx had the philosophical
training to know that that was what he was doing.

I'd be interested to know from someone familiar with the learning
literature whether hypostatization is a problem there.  It would take
the form of assuming the structure of the world that is being learned
about before it is learned.


Conrad Bock

------------------------------

End of AIList Digest
********************

∂09-Jun-88  2056	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #26  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 9 Jun 88  20:55:58 PDT
Date: Thu  9 Jun 1988 18:47-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #26
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 10 Jun 1988       Volume 7 : Issue 26

Today's Topics:

 Queries:
  definition of information
  Route planners
  Induction in Current ES tools
  Re: Response to: AI in weather forecasting

 Free Will:
  How to dispose of the free will issue (long)
  brain research on free will
  Re: How to dispose of the free will issue

----------------------------------------------------------------------

Date: Thu, 9 Jun 88 08:07:49 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: definition of information

It is often acknowledged that information theory has nothing to say
about information in the usual sense, as having to do with meaning.
It is only concerned with a statistical measure of the likelihood of
a particular signal sequence with respect to an ensemble of signal
sequences, a metric misleadingly dubbed by Hartley, Shannon, and
others "amount of information".

Can anyone point me to a coherent definition of information respecting
information content, as opposed to merely "quantity of information"?
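
For concreteness, the measure in question is just surprisal, -log2 p(x),
averaged over the ensemble.  A minimal sketch (in Python; the example
distribution is made up) of the quantity, purely to pin down what is
being contrasted with "content":

  # Standard "amount of information": the surprisal of a symbol and the
  # entropy of an ensemble, in bits.  Nothing here touches meaning.
  from math import log2

  def surprisal(p):
      """Information, in bits, carried by an event of probability p."""
      return -log2(p)

  def entropy(dist):
      """Average information of an ensemble {symbol: probability}."""
      return sum(p * surprisal(p) for p in dist.values() if p > 0)

  print(entropy({'0': 0.9, '1': 0.1}))   # about 0.47 bits per symbol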

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Thu 9 Jun 88 08:46:27-EDT
From: MCHALE@RADC-TOPS20.ARPA
Subject: Route planners


I am interested in the area of application of AI/Expert System
Techniques for flight/route planning in constrained domains
(threats, high traffic, ...).

I would appreciate receiving pointers to existing systems,
references/bibliography and copies of reports/publications
in these areas. Kindly reply to:

        James Lawton
        RADC/COES
        Griffiss AFB
        NY  13441-5700
          Phone: (315)-330-2973

        lawtonj@radc-lonex.arpa

------------------------------

Date: Thu, 09 Jun 88 12:17:27 EDT
From: <sriram@ATHENA.MIT.EDU>
Subject: Induction in Current ES tools


Although ID3 is supposed to do generalization and other good stuff like
that, my experience with some of the current inductive tools is that
they seem to: 1) consider only positive examples; 2) do no generalization;
and 3) be like (very efficient) decision table evaluators.
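
By contrast, the generalization step that ID3 is supposed to perform is
easy to state.  A minimal sketch (in Python; the attributes, values, and
examples are invented) of choosing the attribute with the highest
information gain over positive AND negative examples:

  # ID3-style attribute selection: pick the attribute whose split of the
  # positive and negative examples yields the highest information gain.
  from math import log2
  from collections import Counter

  def entropy(labels):
      n = len(labels)
      return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

  def gain(examples, attribute):
      labels = [label for _, label in examples]
      g = entropy(labels)
      for v in {ex[attribute] for ex, _ in examples}:
          subset = [label for ex, label in examples if ex[attribute] == v]
          g -= (len(subset) / len(examples)) * entropy(subset)
      return g

  examples = [({'sky': 'sunny', 'wind': 'weak'},   'yes'),
              ({'sky': 'sunny', 'wind': 'strong'}, 'yes'),
              ({'sky': 'rainy', 'wind': 'weak'},   'no'),
              ({'sky': 'rainy', 'wind': 'strong'}, 'no')]

  print(max(['sky', 'wind'], key=lambda a: gain(examples, a)))
  # 'sky' -- it separates the classes perfectly; 'wind' tells us nothing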

Any comments?

Sriram

------------------------------

Date: 9 Jun 88 18:13:44 GMT
From: dan%meridian@ads.com (Dan Shapiro)
Reply-to: dan@ads.com (Dan Shapiro)
Subject: Re: Response to: AI in weather forecasting


Has anyone tried using heuristic methods to generate an approximate
weather forecast and then employing the results to initialize a numerical
algorithm?

I have seen this technique applied to problems in computational
chemistry (conformational analysis of molecules) with great effect - 3
to 4 orders of magnitude improvement over the efficiency of the
numerical algorithm alone.  That kind of improvement makes it possible
to attack problems of wholly different complexity.
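
The pattern itself is easy to sketch.  Here is a schematic (in Python;
the function being solved, the heuristic guess, and the "cold" starting
point are all stand-ins, not anything from the chemistry work): the same
Newton iteration run from a cheap heuristic guess and from a naive start.

  # Warm-starting an iterative solver from a heuristic guess.
  def newton(f, df, x0, tol=1e-10, max_iter=1000):
      x, steps = x0, 0
      while abs(f(x)) > tol and steps < max_iter:
          x -= f(x) / df(x)
          steps += 1
      return x, steps

  f  = lambda x: x**3 - 2*x - 5          # stand-in for the exact model
  df = lambda x: 3*x**2 - 2

  _, cold_steps = newton(f, df, x0=100.0)   # naive starting point
  _, warm_steps = newton(f, df, x0=2.0)     # cheap approximate answer

  print(cold_steps, warm_steps)   # the warm start converges in far fewer steps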

        Dan Shapiro

------------------------------

Date: 1 Jun 88 20:04:36 GMT
From: mcvax!ukc!warwick!cvaxa!aarons@uunet.uu.net  (Aaron Sloman)
Subject: How to dispose of the free will issue (long)

(I wasn't going to contribute to this discussion, but a colleague
encouraged me. I haven't read all the discussion, so I apologise if
there's some repetition of points already made.)

Philosophy done well can contribute to technical problems (as shown by
the influence of philosophy on logic, mathematics, and computing, e.g.
via Aristotle, Leibniz, Frege, Russell).

Technical developments can also help to solve or dissolve old
philosophical problems. I think we are now in a position to dissolve the
problems of free will as normally conceived, and in doing so we can make
a contribution to AI as well as philosophy.

The basic assumption behind much of the discussion of free will is

    (A) there is a well-defined distinction between systems whose
    choices are free and those which are not.

However, if you start examining possible designs for intelligent systems
IN GREAT DETAIL you find that there is no one such distinction. Instead
there are many "lesser" distinctions corresponding to design decisions
that a robot engineer might or might not take -- and in many cases it is
likely that biological evolution tried both (or several) alternatives.

There are interesting, indeed fascinating, technical problems about the
implications of these design distinctions. Exploring them shows that
there is no longer any interest in the question whether we have free
will because among the REAL distinctions between possible designs there
is no one distinction that fits the presuppositions of the philosophical
uses of the term "free will". It does not map directly onto any one of
the many different interesting design distinctions. (A) is false.

"Free will" has plenty of ordinary uses to which most of the
philosophical discussion is irrelevant. E.g.

    "Did you go of your own free will or did she make you go?"

That question presupposes a well-understood distinction between two possible
explanations for someone's action. But the answer "I went of my own free
will" does not express a belief in any metaphysical truth about human
freedom. It is merely a denial that certain sorts of influences
operated. There is no implication that NO causes, or no mechanisms were
involved.

This is a frequently made common sense distinction between the existence
or non-existence of particular sorts of influences on a particular
individual's action. However, there are other, deeper distinctions that
relate to different sorts of designs for behaving systems.

The deep technical question that I think lurks behind much of the
discussion is

    "what kinds of designs are possible for agents and what are the
    implications of different designs as regards the determinants of
    their actions?"

I'll use "agent" as short for "behaving system with something like
motives". What that means is a topic for another day. Instead of one big
division between things (agents) with and things (agents) without free
will we'll then come up with a host of more or less significant
divisions, expressing some aspect of the pre-theoretical free/unfree
distinction. E.g. here are some examples of design distinctions (some
of which would subdivide into smaller sub-distinctions on closer
analysis):

- Compare (a) agents that are able simultaneously to store and compare
different motives with (b) agents that have no mechanisms enabling this:
i.e. they can have only one motive at a time.

- Compare (a) agents all of whose motives are generated by a single top
level goal (e.g. "win this game") with (b) agents with several
independent sources of motivation (motive generators - hardware or
software), e.g. thirst, sex, curiosity, political ambition, aesthetic
preferences, etc.

- Contrast (a) an agent whose development includes modification of its
motive generators and motive comparators in the light of experience with
(b) an agent whose generators and comparators are fixed for life
(presumably the case for many animals).

- Contrast (a) an agent whose motive generators and comparators change
partly under the influence of genetically determined factors (e.g.
puberty) with (b) an agent for whom they can change only in the light of
interactions with the environment and inferences drawn therefrom.

- Contrast (a) an agent whose motive generators and comparators (and
higher order motivators) are themselves accessible to explicit internal
scrutiny, analysis and change, with (b) an agent for which all the
changes in motive generators and comparators are merely uncontrolled
side effects of other processes (as in addictions, habituation, etc.)
[A similar distinction can be made as regards motives themselves.]

- Contrast (a) an agent pre-programmed to have motive generators and
comparators change under the influence of likes and dislikes, or
approval and disapproval, of other agents, and (b) an agent that is only
influenced by how things affect it.

- Compare (a) agents that are able to extend the formalisms they use for
thinking about the environment and their methods of dealing with it
(like human beings) with (b) agents that are not (most other animals?).

- Compare (a) agents that are able to assess the merits of different
inconsistent motives (desires, wishes, ideals, etc.) and then decide
which (if any) to act on with (b) agents that are always controlled by
the most recently generated motive (like very young children? some
animals?).

- Compare (a) agents with a monolithic hierarchical computational
architecture where sub-processes cannot acquire any motives (goals)
except via their "superiors", with only one top level executive process
generating all the goals driving lower level systems with (b) agents
where individual sub-systems can generate independent goals. In case
(b) we can distinguish many sub-cases e.g.
(b1) the system is hierarchical and sub-systems can pursue their
    independent goals if they don't conflict with the goals of their
    superiors
(b2) there are procedures whereby sub-systems can (sometimes?) override
    their superiors.

- Compare (a) a system in which all the decisions among competing goals
and sub-goals are taken on some kind of "democratic" voting basis or a
numerical summation or comparison of some kind (a kind of vector
addition perhaps) with (b) a system in which conflicts are resolved on
the basis of qualitative rules, which are themselves partly there from
birth and partly the product of a complex high level learning system.

- Compare (a) a system designed entirely to take decisions that are
optimal for its own well-being and long term survival with (b) a system
that has built-in mechanisms to ensure that the well-being of others is
also taken into account. (Human beings and many other animals seem to
have some biologically determined mechanisms of the second sort - e.g.
maternal/paternal reactions to offspring, sympathy, etc.).

- There are many distinctions that can be made between systems according
to how much knowledge they have about their own states, and how much
they can or cannot change because they do or do not have appropriate
mechanisms. (As usually there are many different sub-cases. Having
something in a write-protected area is different from not having any
mechanism for changing stored information at all.)

There are some overlaps between these distinctions, and many of them are
relatively imprecise, but all are capable of refinement and can be
mapped onto real design decisions for a robot-designer (or evolution).
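
To make the first couple of distinctions concrete, here is a toy sketch
(in Python; the motives, the weights, and the class names are all
invented, and nothing is claimed about real designs): an agent with
several independent motive generators and an explicit comparator, next
to one that can only act on its single most recent motive.

  # Toy contrast between a comparing agent and a single-motive agent.
  def thirst(state):    return ('drink', state['thirst'])
  def curiosity(state): return ('explore', state['novelty'])

  class ComparingAgent:
      def __init__(self, generators):
          self.generators = generators             # independent motive sources

      def choose(self, state):
          motives = [g(state) for g in self.generators]
          return max(motives, key=lambda m: m[1])  # explicit comparison

  class SingleMotiveAgent:
      def choose(self, state, latest_motive):
          return latest_motive                     # no comparison is possible

  state = {'thirst': 0.8, 'novelty': 0.3}
  print(ComparingAgent([thirst, curiosity]).choose(state))   # ('drink', 0.8)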

They are just some of the many interesting design distinctions whose
implications can be explored both theoretically and experimentally,
though building models illustrating most of the alternatives will
require significant advances in AI e.g. in perception, memory, learning,
reasoning, motor control, etc.

When we explore the fascinating space of possible designs for agents,
the question of which of the various systems has free will loses interest:
the pre-theoretic free/unfree contrast totally fails to produce any one
interesting demarcation among the many possible designs -- it can be
loosely mapped on to several of them.

So the design distinctions define different notions of free:- free(1),
free(2), free(3), .... However, if an object is free(i) but not free(j)
(for i /= j) then the question "But is it really FREE?" has no answer.

It's like asking: What's the difference between things that have life and
things that don't?

The question is (perhaps) OK if you are contrasting trees, mice and
people with stones, rivers and clouds. But when you start looking at a
larger class of cases, including viruses, complex molecules of various
kinds, and other theoretically possible cases, the question loses its
point because it uses a pre-theoretic concept ("life") that doesn't have
a sufficiently rich and precise meaning to distinguish all the cases
that can occur. (Which need not stop biologists introducing a new
precise and technical concept and using the word "life" for it. But that
doesn't answer the unanswerable pre-theoretical question about precisely
where the boundary lies.)

Similarly "what's the difference between things with and things without
free will?" This question makes the false assumpton (A).

So, to ask whether we are free is to ask which side of a boundary we are
on when there is no particular boundary in question. (Which is one
reason why so many people are tempted to say "What I mean by free is..."
and they then produce different incompatible definitions.)

I.e. it's a non-issue. So let's examine the more interesting detailed
technical questions in depth.

(For more on motive generators, motive comparators, etc. see my (joint)
article in IJCAI-81 on robots and emotions, or the sequel "Motives,
Mechanisms and Emotions" in the journal of Cognition and Emotion Vol I
no 3, 1987).

Apologies for length.

Now, shall I or shan't I post this.........????

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
              aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
    JANET     aarons@cvaxa.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac
        or    aarons%uk.ac.sussex.cvaxa%ukacrl.bitnet@cunyvm.cuny.edu
As a last resort (it costs us more...)
    UUCP:     ...mcvax!ukc!cvaxa!aarons
            or aarons@cvaxa.uucp

------------------------------

Date: Mon, 6 Jun 88 10:38:39 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: brain research on free will

Here are excerpts from two articles concerning brain research relating to
the issue of free will:

  . . .
  Benjamin Libet of the University of California, San Francisco, . . .
  has been studying EEG correlates of conscious experience since the
  early 1960s.  He bases his model on his experimental finding that a
  distinct brainwave pattern, the readiness potential (RP), occurs 350
  milliseconds . . . before the subjective experience of wanting to
  move. . . .There is another interval of 150 milliseconds before actual
  movement.  During that period, the movement--quick flexion of wrist or
  finger--could be vetoed or blocked by the individual.

  At the moment they were aware of a conscious decision to act, Libet's
  subjects noted the position of a moving target.  (The accuracy of the
  notation of time was checked, or corrected, by objective measurements
  in another setting.)

  In one experiment, they were asked to note when they actually moved.
  They reported having moved slightly _before_ any actual physiological
  evidence of movement.  It was as if the "mind's muscle"--their image
  of movement-- preceded actual muscle activation.  The brain's motor
  commands may be experienced as the movement itself.

  The veto or blockade, Libet commented, is in accord with religious and
  humanistic views of ethical behavior and individual responsibility.
  The choice not to act is "self control."

  On the other hand, he said, if the final intention to act arises
  unconsciously, the mere appearance of an intention could not
  consciously be prevented, even though action could be blocked.  Thus
  religious or philosophical systems can create insurmountable
  difficulties if they blame individuals for simply having a mental
  impulse, even if it is not acted out.

  Libet:  Physiology Dept., UCSF School of Medicine, San Francisco 94143.

This is of course controversial:

  In a recent issue of _The Behavioral and Brain Sciences_ (8:529-566),
  26 well-known researchers from seven countries commented on the
  implications of Libet's work.
  . . .
  Most . . . praised his care and ingenuity and his courage in trying to
  understand the complex interaction between conscious and unconscious
  processes.

  Several doubted that subjective reports of time could ever be precise
  enough to trust.  Others suggested that the experiment is a
  combination of materialist and mentalist approaches--hard EEG data for
  the readiness potential and subjective reports for conscious decision.

  John Eccles of the Max Planck Institute (West Germany) accepted the
  accuracy of the findings but reinterpreted them in a way that fits his
  view of mind and brain as separate.

  Conscious intention, Eccles said, may result from our subconscious
  sensing of a particular brainwave configuration, the readiness
  potential.  Intention occurs after we sense this subconscious
  readiness.

  Subjects may be reporting the peak of an urge, according to James
  Ringo of the University of Rochester (NY) Medical Center.  The very
  beginning of the "urge waveform" might be the readiness potential
  evident in the EEG.

  Conscious will might be triggered by an "anticipatory image," as
  described in 1890 by William James.  Eckart Scheerer of the University
  of Oldenburg (West Germany) said that Libet's subjects did not report
  such images preceding the conscious urge because they were not
  instructed to look for them.

  The other commentators noted that the will to veto the chosen movement
  is itself a conscious intention.  What precedes it?

  Charles Wood, a Yale psychologist, noted that an executive function
  activates a computer's programs.  Perhaps the brain's readiness
  potential is evidence of an executive function that triggers its
  conscious deciding.

I quote both these articles from Brain/Mind Bulletin 11.9:1-2 (May 5, 1986).

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 8 Jun 88 13:43:13 GMT
From: l.cc.purdue.edu!cik@k.cc.purdue.edu  (Herman Rubin)
Subject: Re: How to dispose of the free will issue


The following was posted a long time ago to a different newsgroup.
I did not keep the author's name.  This says it all.


I do not understand all the fuss.  The answer is very simple:

Whether or not we have free will, we should behave as if we do,
because if we don't, it doesn't matter.


--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

------------------------------

End of AIList Digest
********************

∂10-Jun-88  1348	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #27  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 10 Jun 88  13:48:11 PDT
Date: Fri 10 Jun 1988 16:30-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #27
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 11 Jun 1988      Volume 7 : Issue 27

Today's Topics:

 Philosophy:
  education .vs. programming
  who else isn't a science
  Human-human communication
  Consensus and Reality
  Emotion
  the brain
  deduction .vs. inference

 Queries:
  Re: Fuzzy systems theory was (Re: Alternative to Probability)
  AI & Software Engineering

 Seminar:
  Model Based Diagnostic Reasoning -- Phylis Koton (BBN)

----------------------------------------------------------------------

Date: 7 Jun 88 15:06:23 GMT
From: USENET NEWS <linus!news@harvard.harvard.edu>
Subject: education .vs. programming

Path: linus!mbunix!bwk
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Newsgroups: comp.ai.digest
Subject: Re: Sorry, no philosophy allowed here]
Summary: AI systems will require education, not programming.
Keywords: Programming, Learning, Education, Self Study
Message-ID: <33761@linus.UUCP>
Date: 7 Jun 88 15:06:22 GMT
References: <19880606032026.6.NICK@INTERLAKEN.LCS.MIT.EDU>
Sender: news@linus.UUCP
Reply-To: bwk@mbunix (Barry Kort)
Organization: IdeaSync, Inc., Chronos, VT
Lines: 13

Eric Barnes writes:
>Can we build machines with original thought capabilities,
>and what is meant by `program'?  I think that it is possible
>to build machines who will think originally.  The question
>now becomes: "Is what we do to set these "free thinking" machines up
>considered programing".

I suggest that the process of empowering intelligent systems to
think would be called "education" rather than "programming".
And one of our goals would be the creation of autodidactic
systems (that is, systems who are able to learn on their own).

--Barry Kort

------------------------------

Date: 9 Jun 88 18:34:03 GMT
From: well!sierch@lll-lcc.llnl.gov  (Michael Sierchio)
Subject: Re: who else isn't a science


One of the criticisms of AI is that it is too engineering oriented -- it
is  a field that had its origins in deep questions about intelligence and
automata. Like many fields, the seminal questions remain unanswered, while
Ph.D.s are based on producing Yet Another X-based Theorem Prover/Xprt System/
whatever.

The problem has enormous practical consequences, since practice follows
theory. For instance, despite all the talk about it, why is it that
cognition is mimicked as the basis for intelligence? What about cellular
intelligence, memory, etc of the immune system? Einstein's "positional
and muscular" thinking?

I think that there is, ideally, an interplay between praxis and theory.
But Computer SCIENCE is just that -- or should be.  It has, lamentably,
become an engineering discipline. Just so you know, I pay the rent through
the practical application of engineering knowledge. But I love to ponder
those unanswered questions. And never mind my typing -- just wait till I
get my backspace key fixed!
--
        Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
        2733 Fulton St / Berkeley / CA / 94705     (415) 845-1755

        sierch@well.UUCP        {..ucbvax, etc...}!lll-crg!well!sierch

------------------------------

Date: 9 Jun 88 23:06:37 GMT
From: esosun!kobryn@seismo.css.gov  (Cris Kobryn)
Subject: Re: Human-human communication


   In article <198@esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
   >
   >Some obvious examples of things inexpressible in language are:
   >
   >How to recognize the color red on sight (or any other color)..
   >
   >How to tell which of two sounds has a higher pitch by listening..
   >
   >And so on...
   >
   >--Jerry Jackson


   All communication is based on common ground between the communicator
   and the audience.  . . .

   I will now express in language:
   "How to recognize the color red on sight (or any other color)..":

   Find a person who knows the meaning of the word "red."  Ask her to
   point out objects which are red, and objects which are not,
   distinguishing between them as she goes along.  If you are physically
   able to distinguish colors, you will soon get the hang of it.

Right.  However, I suspect Mr. Jackson was pointing to a more difficult
problem than the simple one for which you have offered a solution.
I believe he was addressing the well-known problem of expressing
_unexpressible_ (i.e., ineffable) entities such as sensations and
emotions.  This is the sort of problem with which writers contend.

While a few writers have made a reasonable dent in the problem,
it remains far from resolution.  (If it has been resolved, the word
_ineffable_ should be made into an _archaic usage_ dictionary entry.)
A concrete expression of the problem follows:

        How does one verbally explain what the color blue is to someone
        who was born blind?

The problem here is to explain a sensory experience (e.g. seeing
"blue") to someone lacking the corresponding sensory facility
(e.g., vision).  This problem is significantly more difficult than the
one you addressed.  (Although a reasonable explanation has been offered.)

-- Cris Kobryn

------------------------------

Date: 9 Jun 88 16:14 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Consensus and Reality


I suspect we agree, but are using words differently.  Let me try to state a few
things I think and see if you agree with them.   First, what we believe ( know )
about the world - or, indeed, about anything else - can only be believed by
virtue of it being expressed in some sort of descriptive framework, what is
often called a `language of thought':  hence, we must apprehend the world in
some categorical framework: we think of our desks as being DESKS.   Second, the
terms which comprise our conceptual framework are often derived from
interactions with other people: many - arguably, all -  indeed were learned from
other people, or at any rate during experiences in which other people played a
central part. ( I am being deliberately vague here because almost any position
one can take on nature/nurture is violently controversial: all I want to say is
that either view is acceptable. )

None of this is held as terribly controversial by anyone in AI or cognitive
science, so it may be that by your lights we are all consensual realists.  I
suspect that the difference is that you think that when we talk of reality we
mean something more: some `absolute Reality', whatever the hell that is.  All I
mean is the physical world in which we live, the one whose existence no-one, it
seems, doubts.

One of the useful talents which people have is the ability to bring several
different categorical frameworks to bear on a single topic, to think of things
in several different ways.  My CRT screen can be thought of ( correctly ) as an
ensemble of molecules.  But here is where you make a mistake: because the
ensemble view is available, it does not follow that the CRT view is wrong, or
vice versa.  You say:

BN>    From one very valid perspective there is no
BN>    CRT screen in front of you, only an ensemble of
BN>    molecules.

No:  indeed, there is a collection of molecules in front of me, but it would be
simply wrong to say that that was ALL there was in front of me, or to deny that
this collection also comprises a CRT.   That perspective isn't valid.

Perhaps we still agree.  Let me in turn agree with something else on which you
seem to think we realists differ: neither of these frameworks IS the
reality.  Of course not: no description of something IS that thing.  We don't mix
up beliefs about a world with the world itself: what makes you think we do?  But
to say that a belief about ( say ) my CRT is true is not to say that the belief
IS the CRT.

I suspect, as I said in my earlier note, that you have a stronger notion of
Truth and Reality than I think is useful, and you attribute deep significance to
the fact that this notion - "absolute Reality" - is somehow forever ineffable.
We can never know Reality ( in your sense ): true, but this could not possibly
be otherwise, since to know IS to have a true belief, and a belief is a
description, and a description is couched in a conceptual framework.   And as
Ayer says, it is perverse to attribute tragic significance to what could not
possibly be otherwise.

When your discussion moves on to the evolution of nature, citing Pask, Winograd
and Flores and other weirdos, I'm afraid we just have to agree to silently
disagree.

Pat

------------------------------

Date: 10 Jun 88 03:48:07 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Emotion (was Re: Language-related capabilities)


     Emotions are more basic than language; horses, for example, have
what certainly appear to be emotions, but are not particularly verbal
(although see Talking With Horses, by Henry Blake).  It may be fruitful
to research the idea of emotions such as fear being useful in the
construction of robots.  I am working in this direction, but it will
be some time before I have results.  I would like to hear from others
contemplating serious work in this area.

                                        John Nagle

------------------------------

Date: Fri, 10 Jun 88 08:31 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: the brain

In AIList Digest  V7  #5, Ken Laws writes:

>Are my actions fully determined by external context, my BDI state,
>and perhaps some random variables?  Yes, of course -- what else is there?

Don't forget the machinery which carries out your actions - the brain,
the neurological engine.

Its way of operating seems to differ from person to person - think of
Einstein vs. a ballet prima donna.

Remember the structure of a typical expert system - in addition to the
rule base and the data there is the inference engine.

Antti Ylikoski

------------------------------

Date: Fri, 10 Jun 88 08:16:19 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: Consensus and Reality, Consensus and Reality

From: hayes.pa@Xerox.COM
Subject: Re: Consensus and Reality

PH> First, what we believe ( know )  about the world - or, indeed, about
PH> anything else - can only be believed by virtue of it being expressed in
PH> some sort of descriptive framework, what is often called a `language of
PH> thought':  hence, we must apprehend the world in some categorical
PH> framework: we think of our desks as being DESKS.

I would add that we must distinguish this 'language of thought' from our
various languages of communication.  They are surely related:  our
cognitive categories surely influence natural language use, and the
influence may even go the other way, though the Whorf-Sapir hypothesis
is certainly controversial.  But there is no reason to suppose that they
are identical, and many reasons to suppose that they differ.  (Quests
for a Universal Grammar Grail notwithstanding, languages and cultures do
differ in sometimes quite profound ways.  Different paradigms do exist
in science, different predilections in philosophy, though the same
natural language be spoken.)

Note also that what we know about a 'language of thought' is inferred
from natural language (problematic), from nonlinguistic human behavior,
and sometimes introspection (arguably a special case of the first two).
If we have some direct evidence on it I would like to know.

I agree with your second statement that learning occurs in a social
matrix.  It is not clear that all the "terms which comprise our
conceptual framework" are learned, however.  Some may be innate, either
as evolutionary adaptations or as artefacts of the electrochemical means
our bodies seem to use (such as characteristics of neuropeptides and
their receptors in the lower brain, at the entry points of sensory
nerves to the spinal cord, in the kidney, and elsewhere throughout the
body, for instance, apparently mediating emotion--cf recent work of
Candace Pert & others at NIH).  I also agree that the nature/nurture
controversy (which probably has the free will controversy at its root)
is unproductive here.

PH> I suspect that the difference is that you think that when we talk of
PH> reality we mean something more: some `absolute Reality', whatever the
PH> hell that is.  All I mean is the physical world in which we live, the
PH> one whose existence no-one, it seems, doubts.

No, I only want to establish agreement that we are NOT talking about
some 'absolute Reality' (Ding an Sich), whatever the hell that is.  That
we are constrained to talking about something much less absolute.  That
is the point.

The business about what you are looking at now being an ensemble of
molecules >>instead of<< a CRT screen is an unfortunate red herring.  I
did not express myself clearly enough.  Of course it is both or either,
depending on your perspective and present purposes.  If you are a
computer scientist reading mail, one is appropriate and useful and
therefore "correct".  If you are a chemist or physicist contemplating it
as a possible participant in an experiment, the other "take" is
appropriate and useful and therefore "correct".  And the Ultimate
Reality of it (whatever the hell that is) is neither, but it lets us get
away with pretending it "really is" one or the other (or that it "really
is" some other "take" from some other perspective with some other
purposes).  We are remarkably adept at ignoring what doesn't fit so long
as it doesn't interfere, and that is an admirably adaptive, pro-survival
way to behave.  Not a thing wrong with it.  But I hope to reach
agreement that that is what we are doing.  Maybe we already have:

PH> . . . neither of these frameworks
PH> IS the reality.  Of course not: no description of something IS that
PH> thing.  We dont mix up beliefs about a world with the world itself: what
PH> makes you think we do?  But to say that a belief about ( say ) my CRT is
PH> true is not to say that the belief IS the CRT.

But we do mix up our language of communication with our 'language of
thought' (first two paragraphs above), perhaps unavoidably, since we have
only the former as a means for reaching agreement about the latter, and
only the latter (adapted to conduct in an environment) for cognizing
itself.  And although you and I agree that we do not and cannot know
what is "really Real" (certainly if we could we could not communicate
about it or prove it to anyone), my experience is that many folks do
indeed mix up beliefs about a world with the world itself.  They want a
WYSIWYG reality, and react with violent allergy to suggestions that what
they see is only a particular "take" on what is going on.  They never
get past that to hear the further message that this is OK; that it has
survival value; that it is even fun.

Ad hominem comments ("weirdos") are demeaning to you.  I will be glad to
reach an agreement to disagree about what Prigogine, Pask, Winograd &
Flores, Maturana & Varela, McCulloch, Bateson, or anyone else has said,
but I have to know >what< it is that you are disagreeing with--not just
who.

        Bruce

------------------------------

Date: 10 Jun 88 12:50:31 GMT
From: news@mind.Princeton.EDU
Subject: deduction .vs. inference

Path: mind!confidence!ghh
From: ghh@confidence.Princeton.EDU (Gilbert Harman)
Newsgroups: comp.ai.digest
Subject: Re: construal of induction
Summary: Deductive implication, not inference
Keywords: deduction, induction, reasoning
Message-ID: <2531@mind.UUCP>
Date: 10 Jun 88 12:50:31 GMT
References: <19880609224213.9.NICK@INTERLAKEN.LCS.MIT.EDU>
Sender: news@mind.UUCP
Reply-To: ghh@confidence.UUCP (Gilbert Harman)
Organization: Cognitive Science, Princeton University
Lines: 26

In article <19880609224213.9.NICK@INTERLAKEN.LCS.MIT.EDU>
Raul.Valdes-Perez@B.GP.CS.CMU.EDU writes:
>The concept of induction has various construals, it seems.  The one I am
>comfortable with is that induction refers to any form of ampliative
>reasoning, i.e. reasoning that draws conclusions which could be false
>despite the premises being true.  This construal is advanced by Wesley
>Salmon in the little book Foundations of Scientific Inference.  Accordingly,
>any inference is, by definition, inductive xor deductive.

This same category mistake comes up time and time again.

Deduction is the theory of implication, not the theory of
inference.  The theory of inference is a theory about how to
change one's view.  The theory of deduction is not a theory
about that.

For further elaboration, see Gilbert Harman, CHANGE IN VIEW,
MIT Press: 1986, chapters 1 and 2.  Also Alvin I. Goldman,
EPISTEMOLOGY AND COGNITION, chapter 13.

                       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
                       221 Nassau Street, Princeton, NJ 08542

                       ghh@princeton.edu
                       HARMAN@PUCC.BITNET

------------------------------

Date: 8 Jun 88 18:08:36 GMT
From: hpda!hp-sde!hpfcdc!hpfclp!jorge@bloom-beacon.mit.edu  (Jorge
      Gautier)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)


> Sorry, it's a lot more complicated than that.  For more details, see my
> D.Phil thesis when it exists.

When and where will it be available?

Jorge

------------------------------

Date: Fri, 10 Jun 88 09:30:40 PDT
From: rbogen@Sun.COM (Richard Bogen)
Subject: AI & Software Engineering

I am interested in leads on any papers or research concerned with applications
of AI to the production of complex software products such as Operating Systems.
I feel that there is a tremendous amount of useful information in the heads of
the software developers concerning dependencies between the various data
structures and procedures which compose the system.  Much of this could also be
derived automatically from the compiler and linker when the OS is built. It
would require a rather large database to store all of this but imagine how
useful it would be to the support people and to the people developing products
dependent upon the OS (such as datacomm subsystems).  With a front-end query
language they could check the database for the expected impact of any changes
they were about to make to the system, possibly avoiding the time-consuming
debugging process of reading a dump later on.  Furthermore, it would be possible
to automatically generate a dump formatting program and an online monitor
program from the data structure declarations in the source code.  By using
include files these programs could always be kept up-to-date whenever the
OS was changed.
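
As a rough sketch of the kind of impact query I have in mind (in Python;
the module names and edges are invented, and a real database would be
populated from the compiler and linker when the OS is built):

  # Impact query over a toy dependency database: which modules depend,
  # directly or indirectly, on the thing about to be changed?
  DEPENDS_ON = {
      'datacomm':     ['socket_layer'],
      'socket_layer': ['buffer_pool', 'scheduler'],
      'buffer_pool':  ['memory_mgr'],
      'scheduler':    ['memory_mgr'],
      'memory_mgr':   [],
  }

  def impacted_by(changed, graph=DEPENDS_ON):
      hit, frontier = set(), {changed}
      while frontier:
          frontier = {m for m, deps in graph.items()
                      if frontier & set(deps) and m not in hit}
          hit |= frontier
      return hit

  print(impacted_by('memory_mgr'))
  # buffer_pool, scheduler, socket_layer, datacomm (set order may vary)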

------------------------------

Date: Fri 10 Jun 88 13:52:39-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar -- Phylis Koton

                    BBN Science Development Program
                       AI Seminar Series Lecture

        MODEL-BASED DIAGNOSTIC REASONING USING PAST EXPERIENCES

                              Phylis Koton
                      MIT Lab for Computer Science
                         (ELAN@XX.LCS.MIT.EDU)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                       10:30 am, Tuesday June 14


The problem-solving performance of most people improves with experience.
The performance of most expert systems does not.  People solve
unfamiliar problems slowly, but recognize and quickly solve problems
that are similar to those they have solved before.  People also remember
problems that they have solved, thereby improving their performance on
similar problems in the future.  This talk will describe a system,
CASEY, that uses case-based reasoning to recall and remember problems it
has seen before, and uses a causal model of its domain to justify
re-using previous solutions and to solve unfamiliar problems.

CASEY overcomes some of the major weaknesses of case-based reasoning
through its use of a causal model of the domain.  First, the model
identifies the important features for matching, and this is done
individually for each case.  Second, CASEY can prove that a retrieved
solution is applicable to the new case by analyzing its differences from
the new case in the context of the model.  CASEY overcomes the speed
limitation of model-based reasoning by remembering a previous similar
case and making small changes to its solution.  It overcomes the
inability of associational reasoning to deal with unanticipated problems
by recognizing when it has not seen a similar problem before, and using
model-based reasoning in those circumstances.

The techniques developed for CASEY were implemented in the domain of
medical diagnosis, and resulted in solutions identical to those derived
by a model-based expert system for the same domain, but with an increase
of several orders of magnitude in efficiency.  Furthermore, the methods
used by the system are domain-independent and should be applicable in
other domains with models of a similar form.
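
In outline, the control loop described above might look roughly like the
following toy sketch (in Python; the features, the case memory, and the
similarity test are invented placeholders, not CASEY's actual code or
interfaces):

  # Re-use a remembered solution when the causally important features
  # match; otherwise fall back to (slow) model-based reasoning.
  IMPORTANT = {'symptom', 'history'}     # features the causal model says matter

  case_memory = [
      {'case': {'symptom': 'chest pain', 'history': 'smoker', 'age': 61},
       'solution': 'diagnosis-A'},
  ]

  def similarity(a, b):
      return sum(1 for f in IMPORTANT if a.get(f) == b.get(f))

  def solve(new_case):
      best = max(case_memory, key=lambda c: similarity(new_case, c['case']),
                 default=None)
      if best and similarity(new_case, best['case']) == len(IMPORTANT):
          # No differences on the features the model cares about, so
          # transferring the old solution is justified.
          solution = best['solution']
      else:
          # No applicable precedent: do model-based reasoning from scratch.
          solution = 'model-based-diagnosis'
      case_memory.append({'case': new_case, 'solution': solution})   # remember it
      return solution

  print(solve({'symptom': 'chest pain', 'history': 'smoker', 'age': 48}))
  # 'diagnosis-A' -- retrieved and re-used despite the unimportant difference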

------------------------------

End of AIList Digest
********************

∂13-Jun-88  1246	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #28  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Jun 88  12:46:11 PDT
Date: Mon 13 Jun 1988 15:20-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #28
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 14 Jun 1988      Volume 7 : Issue 28

Today's Topics:

  Free Will
  Positive and Negative Reinforcement

----------------------------------------------------------------------

Date: 12 Jun 88 16:36:40 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu  (T. William Wells)
Subject: Re: Free will does not require nondeterminism.

In article <461@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <185@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> ]The absolute minimum required for free will is that there exists
> ]at least one action a thing can perform for which there is no
> ]external phenomena which are a sufficient cause.
>
> I have a suspicion that you may be getting too much from this
> "external".

You (and several others) seemed to have missed the point.  I did
not post that message in order to defend a particular view of why
free will does not require determinism.  Rather, I posted it so
that those of various philosophical persuasions could adapt it to
their own system.

For example, I am an Objectivist.  This means that I have a
particular notion of what the difference between external and
internal is.  I also can assign some coherent meaning to the rest
of the posting, and voila!, I have an assertion that makes sense
to an Objectivist.

You can do the same, but that is up to you.

> It must also be considered that everything internal to me might
> ultimately be caused by things external.

It is precisely the possibility that this does not have to be
true, even given that things can do only one thing, that makes
free will something to consider, even in a determinist
philosophy.

------------------------------

Date: 12 Jun 88 16:44:06 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu  (T. William Wells)
Subject: Re: Free Will-Randomness and Question-Structure

In article <1214@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> In article <194@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >(N.B.  The mathematician's "true" is not the same thing as the
> > epistemologist's "true".
> Which epistemologist?  The reality and truth of mathematical objects
> has been a major concern in many branches of philosophy.  Many would
> see mathematics, when it succeeds in formalising proof, as one form of
> truth.  Perhaps consistency is a better word, and we should reserve
> truth for the real thing :-)

Actually, the point was just that: when I say that something is
true in a mathematical sense, I mean just one thing: the thing
follows from the chosen axioms; when I say that something is
epistemologically true (sorry about the neologism), I mean one
thing, someone else means something else, and a third declares
the idea meaningless.

Thus the two kinds of truth need to be considered separately.

------------------------------

Date: 12 Jun 88 16:54:44 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu  (T. William Wells)
Subject: Re: Free Will & Self-Awareness

In article <1226@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >The Objectivist version of free will asserts that there are (for
> >a normally functioning human being) no sufficient causes for what
> >he thinks.  There are, however, necessary causes for it.
> Has this any bearing on the ability of a machine to simulate human
> decision making?  It appears so, but I'd be interested in how you think it
> can be extended to yes/no/don't know about the "pure" AI endeavour.

If you mean by "pure AI endeavour" the creation of artificial
consciousness, then definitely the question of free will &
determinism is relevant.

The canonical argument against artificial consciousness goes
something like: humans have free will, and free will is essential
to human consciousness.  Machines, being deterministic, do not
have free will; therefore, they can't have a human-like
consciousness.

Now, should free will be possible in a deterministic entity this
argument goes poof.

------------------------------

Date: 12 Jun 88 18:43:17 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu  (T. William Wells)
Subject: Re: Free Will & Self-Awareness

I really do not want to further define Objectivist positions on
comp.ai.  I have also seen several suggestions that we move the
free will discussion elsewhere.  Anyone object to moving it to
sci.philosophy.tech?

In article <463@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> ]In terms of the actual process, what happens is this: various
> ]entities provide the material which you base your thinking on
> ](and are thus necessary causes for what you think), but an
> ]action, not necessitated by other entities, is necessary to
> ]direct your thinking.  This action, which you cause, is
> ]volition.
>
> Well, how do I cause it?  Am I caused to cause it, or does it
> just happen out of nothing?  Note that it does not amount to
> having free will just because some of the causes are inside
> my body.  (Again, I am not sure what you mean by "other entities".)

OK, let's try to eliminate some confusion.  When talking about an
action that an entity takes, there are two levels of action to
consider, the level associated with the action of the entity and
the level associated with the processes that are necessary causes
for the entity level action.

[Note: the following discussion applies only to the case where
the action under discussion can be said to be caused by the
entity.]

Let's consider a relatively uncontroversial example.  Say I have
a hot stove and a pan over it.  At the entity level, the stove
heats the pan.  At the process level, the molecules in the stove
transfer energy to the molecules in the pan.

The next question to be asked in this situation is: is heat the
same thing as the energy transferred?

If the answer is yes then the entity level and the process level
are essentially the same thing, the entity level is "reducible"
to the process level.  If the answer is no, then we have what is
called an "emergent" phenomenon.

Another characterization of "emergence" is that, while the
process level is a necessary cause for the entity level actions,
those actions are "emergent" if the process level action is not a
sufficient cause.

Now, I can actually try to answer your question.  At the entity
level, the question "how do I cause it" does not really have an
answer; like the hot stove, it just does it.  However, at the
process level, one can look at the mechanisms of consciousness;
these constitute the answer to "how".

But note that answering this "how" does not answer the question
of "emergence".  If consciousness is emergent, then the only
answer is that "volition" is simply the name for a certain class
of actions that a consciousness performs.  And being emergent,
one could not reduce it to its necessary cause.

I should also mention that there is another use of "emergent"
floating around: it simply means that properties at the entity
level are not present at the process level.  The emergent
properties of neural networks are of this type.

------------------------------

Date: 13 Jun 88 01:38:14 GMT
From: wind!ackley@bellcore.bellcore.com  (David Ackley)
Subject: Free will does exist, but it is finite


The more you use it, the less you have.
        -David Ackley
         ackley@bellcore.com

------------------------------

Date: Sat, 11 Jun 88 21:34:48 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: knowledge, power and AI

In AIList V7 #6, Professor John McCarthy <JMC@SAIL.Stanford.EDU>
writes:

>There are three ways of improving the world.
>(1) to kill somebody
>(2) to forbid something
>(3) to invent something new.

To them, I would add

(4) to teach someone.

It is not enough to *know*.

You also must be able to *do*.

I recommend that Professor McCarthy read Carlos Castaneda's book
Tales of Power.  But be warned - it requires more than intelligence and
knowledge.


Andy Ylikoski

------------------------------

Date: 11 Jun 88 04:00:17 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!terry@tis.llnl.gov  (Every
      system needs one)
Subject: Re: Free Will & Self Awareness

In article <566@wsccs.UUCP>, dharvey@wsccs.UUCP (David Harvey) writes:
> In article <5323@xanth.cs.odu.edu>, Warren E. Taylor writes:
>> Adults understand what a child needs. A child, on his own, would quickly kill
>> himself. Also, pain is often the only teacher a child will listen to. He
>> learns to associate a certain action with undesirable consequences.
>
> Spanking certainly is a form of behavior alteration, although it might
> not be the best one in all circumstances.  It has been demonstrated in
> experiment after experiment that positive reinforcement of desired
> behaviors works much better than negative reinforcement of undesirable
> behavior patterns.

David:

        But what about the "pseudo-observer effect" (my pseudo-terminology)?
If you beat your child, and then the child proceeds to behave in the manner
you desired him (or her, if she's your daughter and not your son :-) to behave,
then the beating worked (produced the desired effect).  In this fashion, the
parent (or total stranger who beats children) has positive reinforcement of
the beating effecting the behavior.

        Given that the child has responded to being beaten once, it is logical
to assume that he would do so again... this, coupled with the prior positive
reinforcement to the parent (or stranger), makes it more likely that they
will beat the child in the future, given a similar situation.

        Consistent reinforcement is more effective than inconsistent
reinforcement, be it positive or negative.

        Besides, you always have your hands; how often do you happen to have
ice-cream immediately available?

| Terry Lambert           UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software        OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |
| 'Signatures; it's not how long you make them, it's how you make them long!' |

------------------------------

Date: Sat, 11 Jun 88 15:07 EST
From: <INS_ATGE%JHUVMS.BITNET@MITVMA.MIT.EDU>
Subject: Free Will vs. Society

I believe that as machines and human created creatures begin to pop up
more and more in advanced jobs on our planet, we are going to have to
re-evaluate our systems of blame and punishment.
  Some have said that the important parts of free will is making a choice
between good and evil.  Unfortunately, these concepts are rather
ill-defined.  I subscribe to the notion that there are not universal
'good' and 'evil' properties...I know that others definitely disagree on
this point.  My defense rests in the possibility of other extremely
different life systems, where perhaps things like murder and incest, and
some of the other common things we humans refer to as 'evil' are necessary
for that life form's survival.
  Even today, there are many who do not think murder is evil under
certain circumstances (capital punishment, war, perhaps abortion).
  I feel that we need to develop heuristics to deal with changing
needs for our species, and needs with regard to interaction with
non-human and/or non-carbon based life forms.
   Does determinism eradicate blame?  Not necessarily.  Let's say system
X caused unwanted harm to system Y.  Even if system X had no other choice
than to cause harm to Y due to its current input and current state,
system X must still be "blamed" for the incident and hopefully
system X can be "fixed" within acceptable guidelines.
   Do our current criminal punishments actually "fix" the erring
systems?  What are socially acceptable "fixes"?  (To many, capital
punishment is not acceptable.)
   I am sure some may not like the idea of being "scientific" about
punishment of erring systems.  But I think the key word should be
"fixes."  Punishment by jailing may work on humans as a "fix", but
not on an IBM PC.  The IBM PC will undoubtedly be fixed better by
replacing bad chips on its board.  And this can be determined by
comparison and research.

------------------------------

Date: Fri, 10 Jun 88 15:38:43 +0100
From: mcvax!swivax!vierhout@uunet.UU.NET (Paul Vierhout)
Subject: Re: [DanPrice@HIS-PHOENIX-MULTICS.ARPA: Sociology vs Science
         Debate]

An emotion-ratio model of human decision-making should incorporate
uncertainty handling and a model of 'rational expectations' - how to
select information, choosing what to find out and what to ignore.
Ignorance is perhaps unavoidable, and certainly
an important source of apparently irrational behavior (although the
decision to ignore may itself be rational). This provides a link
between ratio and apparently irrational behavior.

Of course one source of information is _experience_ (one can decide to
ignore experience, which accounts, among other things, for a rational-emotional
system capable of experiencing, and at the same time ignoring, punishment)
and another may be _imagination_ (one can decide not to ignore
some possibilities, and deduce, by imagining, some possible consequences which
otherwise would not have been considered). Both link to 'emotion'.

Of course, you could ignore this notice.

Visit Amsterdam this summer; the weather is fine and the canals are more
beautiful than ever.

------------------------------

End of AIList Digest
********************

∂13-Jun-88  1636	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #29  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Jun 88  16:36:20 PDT
Date: Mon 13 Jun 1988 15:29-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #29
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 14 Jun 1988      Volume 7 : Issue 29

Today's Topics:

  Philosophy and AI

----------------------------------------------------------------------

Date: 9 Jun 88 04:05:36 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: Me and Karl Kluge (no flames, no insults, no abuse)

I see that Gilbert Cockton is still judging the quality of AI by his
statistical survey of bibliographies in AAAI and IJCAI proceedings.
In the hope that the rest of us agree to the speciousness of such arguments,
I shall try to take a more productive approach.
In article <1312@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>
>The point I have been making repeatedly is that you cannot study human
>intelligence without studying humans.  John Anderson and his paradigm
>partners and Vision apart, there is a lot of AI research which has
>never been near a human being. Once again, what the hell can a computer
>program tell us about ourselves?  Secondly, what can it tell us that we
>couldn't find out by studying people instead?

Let us consider a specific situation.  When we study a subject like physics,
there is general agreement that a good textbook must include not only an
exposition of fundamental principles but also a few examples of solved
problems.  Why are these examples of benefit to the student?  It would
appear that he uses them as some sort of a model (perhaps the basis for
analogical reasoning) when he starts doing assigned problems;  but how
does he know when an example is the right one to draw upon?  The underlying
question is this:  HOW DOES KNOWLEDGE OF SUCCESSFULLY SOLVED PROBLEMS
ENHANCE OUR ABILITY TO SOLVE NEW PROBLEMS?

Now, the question to Mr. Cockton is:  What do all those researchers who
don't spend so much time with computer programs have to tell us?  From what
I have been able to discern, the answer is:  NOT VERY MUCH.  Meanwhile, there
are a variety of AI projects which have begun to address the questions
concerned with what constitutes experiential memory and how it might be
modeled.  I am not claiming they have come up with any answers yet, but
I see no more reason to rail against their attempts than to attack attempts
by those who would not sully their investigative efforts with such ugly
artifacts as computer programs.

------------------------------

Date: 9 Jun 88 09:06:42 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net  (Simon Brooke)
Subject: AI seen as an experiment to determine the existence of
         reality

Following the recent debate in this newsgroup about the value of AI, a
thought struck me. It's a bit tenuous....

As I understand it, Turing's work shows that the behaviour of any
computing device can be reproduced by any other. Modern cosmology holds
that:
        1] there is a material world.
        2] if there is a spiritual world, it's irrelevant, as the
                spiritual cannot affect the material.
        3] the brain is a material object, and is the organ which largely
                determines the behaviour of human beings.

If all this is so, then it is possible to exactly reproduce the workings
of a human brain in a machine (I think Turing actually claimed this, but I
can't remember where).

So AI could be seen as an experiment to determine whether a material world
actually exists. While the generation of a completely successful
computational model of a human brain would not prove the existence of the
material, the continued failure to do so over a long period would surely
prove its non-existence... wouldn't it?


** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Neural Nets: "It doesn't matter if you don't know how your program   *
***************  works, so long as it's parallel" - R. O'Keefe **********

------------------------------

Date: 10 Jun 88 19:13:26 GMT
From: ncar!noao!amethyst!kww@gatech.edu  (K Watkins)
Subject: Re: Bad AI: A Clarification

In article <1336@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>
>I do not think there is a field of AI.  There is a strange combination
>of topic areas covered at IJCAI etc.  It's a historical accident, not
>an epistemic imperative.
>
Of what field(s) is such a statement false?  An inventive imagination can
regroup the topics of study and knowledge in a great many ways.  Indeed, it
might be very useful to do so more often.  (Then again, the cross-tabulating
chore of making sure we lost a minimum of understanding in the transition
would be enormous.)

------------------------------

Date: 11 Jun 88 01:50:33 GMT
From: pasteur!agate!garnet!weemba@ames.arpa  (Obnoxious Math Grad
      Student)
Subject: Re: Who else isn't a science?

In article <13100@shemp.CS.UCLA.EDU>, bjpt@maui (Benjamin Thompson) writes:
>In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>>Gerald Edelman, for example, has compared AI with Aristotelian
>>dentistry: lots of theorizing, but no attempt to actually compare
>>models with the real world.  AI grabs onto the neural net paradigm,
>>say, and then never bothers to check if what is done with neural
>>nets has anything to do with actual brains.
>
>This is symptomatic of a common fallacy.

No, it is not.  You did not catch the point of my posting, embedded in
the subject line.

>                                          Why should the way our brains
>work be the only way "brains" can work?  Why shouldn't *A*I workers look
>at weird and wonderful models?

AI researchers can do whatever they want.  But they should stop trying
to gain scientific legitimacy from wild unproven conjectures.

>                                We (basically) don't know anything about
>how the brain really works anyway, so who can really tell if what they're
>doing corresponds to (some part of) the brain?

Right.  Or if they're all just hacking for the hell of it.

But if they are in fact interested in the brain, then they could
periodically check back at what is known about real brains.  Since
they don't, I think Edelman's "Aristotelian dentistry" criticism is
perfectly valid.

In article <3c84f2a9.224b@apollo.uucp>, nelson_p@apollo (Peter Nelson) writes,
replying to the same article:

>  I don't see why everyone gets hung up on mimicking natural
>  intelligence.  The point is to solve real-world problems.

This makes for an engineering discipline, not a science.  I'm all for
AI research in methods of solving difficult ill-defined problems.  But
calling the resulting behavior "intelligent" is completely unjustified.

Indeed, many modern dictionaries now give an extra meaning to the word
"intelligent", thanks, partly due to AI's decades of abuse of the term:
it means "able to peform some of the functions of a computer".

Ain't it wonderful?  AI succeeded by changing the meaning of the word.

ucbvax!garnet!weemba    Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: 11 Jun 88 07:00:13 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov  (David
      Harvey)
Subject: Re: AI and Sociology

In article <1301@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> It is quite improper to cut out a territory which deliberately ignores
> others.  In this sense, psychology and sociology are guilty like AI,
> but not nearly so much, as they have territories rather than a
> territory.  Still, the separation of sociology from psychology is
> regrettable, but areas like social psychology and cognitive sociology
> do bridge the two, as do applied areas such as education and management.
> Where are the bridges to "pure" AI?  Answer that if you can.
>
You are correct in asserting that these are the bridges between
Psychology and Sociology, but my limited observation of people in both
groups is that people in Social Psychology rarely poke their heads into
the Sociology department, and people in Cognitive Sociology rarely
interact with the people in Cognitive Psychology.  The reason I know is
that I have observed them first-hand while getting degrees in Math and
Psychology.  In other words, the bridges are quite superficial, since
the interaction between the two groups is minimal.  With regard to this
situation I am referring to the status quo as it existed at the
University of Utah, where I got my degrees, and at Brigham Young
University, which I visited fairly often.  And in answer to your demands
of AI, perhaps you had better take a very good look at how good social
scientists are at answering questions about thinking.  They are making
progress, but it is not in the form of a universal theory, a la Freud.
In other words, they are snipping away at this little idea and that
little paradigm, just like AI researchers are doing.

> Again, I challenge AI's rejection of social criticisms of its
> paradigm.  We become what we are through socialisation, not programming
> (although some teaching IS close to programming, especially in
> mathematics).  Thus a machine can never become what we are, because it
> cannot experience socialisation in the same way as a human being.  Thus
> a machine can never reason like us, as it can never absorb its model of
> reality in a proper social context.  Again, there are well documented
> examples of the effect of social neglect on children.  Machines will not
> suffer in the same way, as they only benefit from programming, and not
> all forms of human company.  Anyone who thinks that programming is
> social interaction is really missing out on something (probably social
> interaction :-))

You obviously have not installed a new operating system on a VAX only to
discover that it has serious bugs.  Down comes the machine to the >>>
prompt and the process of starting the machine up with the old OS that
worked begins.  Since the machine does not have feelings (AHA!) it
doesn't care, but it certainly was not beneficial to its performance.
A student's program with severe bugs that causes core dumps doesn't
help either.  Then there is the case of our electronic news feed being
down for several weeks.  When it finally resumed operation it completely
filled the process table, making it impossible to even sign on as
super-user and do an 'ls'!  The kind of programming that allowed it to
spawn that many child processes is not my idea of something beneficial!
In other words, bad programming is to a certain extent an analog to
social neglect. Running a machine in bad physical conditions and
physically abusing a person are also similar.  Yes, you can create
enough havoc with Death Valley heat to totally kill a computer!
>
> RECOMMENDED READING
>
> Jerome Bruner on MACOS (Man: A Course of Study), for the reasoning
> behind interdisciplinary education.
>
   ↑↑↑ No qualms with the ideas presented in this book
>
> Skinner's "Beyond Freedom and Dignity" and the collected essays in
> response to it, for an understanding of where behaviourism takes you
> ("pure" AI is neo-behaviourist, it's about little s-r modelling).
>
   ↑↑↑ And I still think his model has lots of holes in it!

dharvey @ WSCCS (David A Harvey)

------------------------------

Date: 11 Jun 88 13:49:09 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: Bad AI: A Clarification

In article <1336@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>But when I read misanthropic views of Humanity in AI, I will reply.

Do you mean that all your wholesale railing against AI over the last
several weeks (and it HAS been pretty wholesale) is just a response
to "misanthropic views of Humanity?"  Perhaps we may have finally
penetrated to the root of the problem.  I wish to go on record as
observing that I have yet to read a paper on AI which has passed
through peer review which embodies any sense of misanthropy whatsoever,
and that includes all those conference proceedings which Mr. Cockton
wishes to take as his primary source of knowledge about the field.
There is certainly a lot of OBJECTIVITY, but I have never felt that
such objectivity could be confused with misanthropy.  As I said before,
stop biting the fingers long enough to look where they are pointing!

------------------------------

Date: 11 Jun 88 19:15:20 GMT
From: well!sierch@lll-lcc.llnl.gov  (Michael Sierchio)
Subject: Re: Who else isn't a science?


I agree: I think anyone should study whatever s/he likes -- after all, what
matters but whatever you decide matters?  I also agree that, simply because
you are interested in something, you shouldn't expect me to regard your
study as important or valid.

AI suffers from the same syndrome as many academic fields -- dissertations
are the little monographs that are part of the ticket to respectability in
academe.  The big, seminal questions (seedy business, I know) remain
unanswered, while the rush to produce results and get grants and make $$
(or pounds, the symbol for which...) is overwhelming. Perhaps we would not
be complaining if the study of intelligence and automata, and all the
theoretical foundations for AI work received their due. It HAS become an
engineering discipline, if not for the nefarious reasons I mentioned, then
simply because the gratification that comes from RESULTS is easier to get
than answers to the nagging questions about what we are, and what intelligence
is, etc.

Engineering has its pleasures, and I wouldn't deny them to anyone. But to
those who hold fast to the "?" and abjure the "!", I salute you.
--
        Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
        2733 Fulton St / Berkeley / CA / 94705     (415) 845-1755

        sierch@well.UUCP        {..ucbvax, etc...}!lll-crg!well!sierch

------------------------------

Date: 11 Jun 88 20:20:59 GMT
From: agate!garnet!weemba@presto.ig.com  (Obnoxious Math Grad Student)
Subject: Re: AI seen as an experiment to determine the existence of
         reality

In article <517@dcl-csvax.comp.lancs.ac.uk>, simon@comp (Simon Brooke) writes:
>[...]
>If all this is so, then it is possible to exactly reproduce the workings
>of a human brain in a [Turing machine].

Your argument was pretty slipshod.  I for one do not believe the above
is even possible in principle.

ucbvax!garnet!weemba    Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: 13 Jun 88 03:46:25 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Constructive Question  (Texts and social context)

In article <1335@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>IKBS programs are essentially private readings which freeze, despite
>the animation of knowledge via their inference mechanisms (just a
>fancy index really :-)).  They are only sensitive to manual reprogramming,
>a controlled intervention.  They are unable to reshape their knowledge
>to fit the current interaction AS HUMANS DO.  They are insensitive,
>intolerant, arrogant, overbearing, single-minded and ruthless.  Oh,
>and they usually don't work either :-) :-)

This is rather desperately anthropomorphic.  I am surprised to see Gilbert
Cockton, of all people, ascribing such human qualities to programs.

There is no reason why a program cannot learn from its input; as a
trivial example, Rob Milne's parser for PRESS could acquire new words
from the person typing to it.  What does it mean "to reshape one's
knowledge to fit"?  Writing programs which adapt to the particular
client has been an active research area in AI for several years now.  As
for insensitivity &c, if we could be given some examples of what kinds
of IKBS behaviour Gilbert Cockton interprets as having these qualities,
and/or otherwise similar behaviours not so interpreted, perhaps we could
get some constructive criticism out of this.

The fact that "knowledge", once put into an IKBS, is fossilized, bothers
me.  I am so far in sympathy with Cockton as to think that any particular
set of facts & rules is most valuable when it is part of a tradition/
practice/social-context for interpreting, acquiring, and revising such
facts & rules, and I am worried that chunks of "knowledge", once handed
over to computers, may be effectively lost to human society.  But this
is no different from the human practice of abdicating responsibility to
human experts, who are also insensitive, &c.  Expert systems which are
designed to explain (in ICAI style) the knowledge in them as well as to
deploy it may in fact be a positive social factor.

Instead of waffling on in high-level generalisations, how would it be if
one particular case were to be examined?  I propose that the social effect
of Nolo Press's "WillWriter" should be examined (or a similar product).
It is worth noting that the ideology of Nolo Press is quite explicitly to
empower the masses and reduce the power of lawyers.  What _might_ such a
program do to society?  What _is_ it doing?  Do people who use it experience
it as more or less intolerant than a lawyer?  And so on.  This seems like a
worthy topic for a Masters in Sociology, whatever attitude you take to AI.
(Not that WillWriter is a notable AI program, but it serves the function of
an IKBS.)  Has a study like this already been done?

------------------------------

End of AIList Digest
********************

∂13-Jun-88  2038	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #30  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Jun 88  20:38:38 PDT
Date: Mon 13 Jun 1988 15:37-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #30
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 14 Jun 1988      Volume 7 : Issue 30

Today's Topics:

  AI Languages
  Further Observations on the Fredkin Masters Open
  Construal of Induction
  The Definition of Information

----------------------------------------------------------------------

Date: Fri, 10 Jun 88 15:03:21 +0100
From: mcvax!swivax!vierhout@uunet.UU.NET (Paul Vierhout)
Subject: AI Languages

AI language features:
old: procedure-data equivalence
less old: nondeterminism, 'streams', unification, OPS5 pattern matching
shell-like: ability to specify frames and/or rules, and possibly control
promises: abstract models of cognitive tasks like the Interpretation Models
        of Breuker and Wielinga (SWI-UvA, Amsterdam) for knowledge acquisition,
        or the six generic tasks of Chandrasekaran (Ohio State Univ.).
Not at all an exhaustive list; shouldn't an AI language ideally exhaustively
offer all features currently available?
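
For concreteness, here is a minimal, purely illustrative sketch of one of
the listed features -- unification of simple terms -- written in Python.
The representation (variables as capitalized strings, compound terms as
tuples) is my own arbitrary choice for the example, not something drawn
from any particular AI language.

def is_var(t):
    # Variables are strings beginning with an uppercase letter.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    # Return a substitution making a and b equal, or None if impossible.
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Example: unify f(X, a) with f(b, Y)  ==>  {'X': 'b', 'Y': 'a'}
print(unify(('f', 'X', 'a'), ('f', 'b', 'Y')))

A real unifier would of course need an occurs check and a richer term
representation; the point is only to make one list entry concrete.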

------------------------------

Date: 11 Jun 88 07:33:43 GMT
From: mtune!ihnp4!utah-cs!gr.utah.edu!uplherc!sp7040!obie!wsccs!dharve
      y@att.att.com
Subject: AI Languages


In article <19880527050431.8.NICK@MACH.AI.MIT.EDU>, Pat Hayes writes:
>
> I was fascinated by the correspondence between Gabe Nault and Mott Given in
> vol7#1, concerning "an artifical intelligence language ... something more than
> lisp or xlisp."    Can anyone suggest a list of features which a programming
> language must have which would qualify it as an "artificial intelligence
> language"  ?
>
> Pat Hayes

You wanted an opinion, so here it is!  There probably isn't such a thing
as a list of 'must' features for an AI language!  There are things I would
'like' it to have, though.  The first is that it must allow the action to
proceed anywhere (i.e., any Lisp function (procedure), any Prolog
predicate, or any FORTH word).  The second is that it must be at an
abstract level that allows me to forget about what the computer is doing
and concentrate on what I am doing.  Obviously, there is no such
language in existence that does this yet.  So we latch onto anything
that even closely resembles this ideal.  Take your pick, Smalltalk,
Lisp, Prolog.  One person's dream is bound to be someone else's
nightmare!

dharvey @ wsccs (David A Harvey)
I am responsible for Nobody, and Nobody is responsible for me.
Behind the Mormon Curtain.

------------------------------

Date: 11 Jun 88 16:06:46 GMT
From: hyper-sun1.jpl.nasa.gov!cracraft@jpl-elroy.arpa  (Stuart
      Cracraft)
Subject: Further Observations on Fredkin Masters Open


Now that there has been a "settling in" period, when people have been able
to digest the recent news, it seems advisable to put it in perspective.

A brief retrospective: Two new chess machines played in an Eastern U.S.
chess tournament against 18 masters and a few experts and class players.

While the performance of the machines, especially one of them which came
in 2nd in the tournament with a performance rating of over USCF 2500, is
laudable, an inspection of the games reveals that many of the machine's
human opponents sacrificed pawns and exchanges needlessly.

This style of play by the human players against an unknown opponent (the
machine) would seem to indicate a level of contempt that is generally
self-defeating.  Discomforting is the fact that several of these players
had faced very powerful computer programs earlier in their careers, almost
always scoring a plus.  Players who show such contempt end up taking unusual
risks and generally underestimating most of their opponent's moves; this
lowers their quality of play and greatly degrades their performance.

So, while I think the performance of the second-place machine and its
predecessor is quite good, I also feel that players will be even more
on guard in the future and that this incident does not mean a lessening
of human chess; rather, it is a call to arms so that we may all regard
our opponent with more respect.

        Stuart

------------------------------

Date: 12 Jun 88 20:24:11 GMT
From: tness7!tness1!flatline!erict@bellcore.bellcore.com  (j eric
      townsend)
Subject: Re: Further Observations on Fredkin Masters Open

In article <6998@elroy.Jpl.Nasa.Gov>, Stuart Cracraft writes:

> This style of play against an unknown opponent (the machine) by players
> would seem to indicate a level of contempt that is generally self-defeating.

Now maybe this is already done, and my ignorance will get me a swift
boot to the ego, but...

Why not play these games double-blind?  Not being an avid chess player,
I may be overlooking lots of reasons why this wouldn't work that just
don't occur to me at this moment...

Problems:

1.  Players able to tell by "style" that they're playing a computer(?)
    I know this is true for most any computerized wargame/combat simulation.

2.  Undue discomfort for the human players by not being able to see *any*
opponent ever.  Always playing a "black-box" human flunkie/motorized chess
piece mover. (See benefit #1)

Benefits:

1.  The above discomfort would be spread equally across all opponents; this
    was the point in the first place:  create an equal level of discomfort
    for those playing humans.  (Discomfort could be nonexistent for each,
    therefore equal.)

2.  Um.. Um..

Oh well.  It was just a thought...

--
                                Know Future
Skate UNIX or go home, boogie boy...
J. Eric Townsend ->uunet!nuchat!flatline!erict smail:511Parker#2,Hstn,Tx,77007
             ..!bellcore!tness1!/

------------------------------

Date: 10 Jun 88 19:53:38 GMT
From: Venugopala R. Dasigi <venu@mimsy.umd.edu>
Subject: Construal of Induction

In an earlier article Raul Valdes-Perez writes:
>The concept of induction has various construals, it seems.  The one I am
>comfortable with is that induction refers to any form of ampliative
>reasoning, i.e. reasoning that draws conclusions which could be false
>despite the premises being true.  This construal is advanced by Wesley
>Salmon in the little book Foundations of Scientific Inference.  Accordingly,
>any inference is, by definition, inductive xor deductive.
                                  ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
>
>I realize that this distinction is not universal.  For example, some would
>distinguish categories of induction.  I would appreciate reading comments
>on this topic in AILIST.

I think it was Charles Sanders Peirce who made the distinction between three
types of reasoning: induction, deduction and abduction.  (Also, Harry Pople's
famous paper on "The Mechanization of Abductive Logic," Proc. IJCAI, 1973,
pp. 147-152, mentions this.)

Consider the following three possible components of reasoning:

1. A --> B      2.  A      3. B
(e.g., 1. All beans in this bag are white.
       2. This bean is from this bag.
       3. This bean is white.)

Deduction involves inferring 3 from 1 and 2.
Induction involves inferring 1 from 2 and 3.
Abduction involves inferring 2 from 1 and 3.

(This was the way Peirce characterized the three types of logic.)
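
For readers who like things executable, here is a small illustrative
sketch in Python -- my own toy encoding of the bean example, not taken
from Peirce or from Pople's paper -- of the three patterns as three ways
of recovering the missing member of the rule/case/result triple:

# rule:   A -> B   ("all beans in this bag are white")
# case:   A        ("this bean is from this bag")
# result: B        ("this bean is white")

RULE   = ("from_this_bag", "white")   # A -> B
CASE   = "from_this_bag"              # A
RESULT = "white"                      # B

def deduction(rule, case):
    # Given A -> B and A, conclude B (truth-preserving).
    a, b = rule
    return b if case == a else None

def induction(case, result):
    # Given A and B, conjecture A -> B (ampliative: may be false).
    return (case, result)

def abduction(rule, result):
    # Given A -> B and B, hypothesize A (also ampliative: may be false).
    a, b = rule
    return a if result == b else None

assert deduction(RULE, CASE) == RESULT
assert induction(CASE, RESULT) == RULE
assert abduction(RULE, RESULT) == CASE

Only deduction is guaranteed to preserve truth; the other two merely
return plausible candidates.
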
Now, my point is that abduction also involves drawing conclusions which could be
false despite the premises being true, but that is not commonly construed as
a type of induction. Accordingly, I am not comfortable with the statement
that any inference is inductive XOR deductive (exclusive, all right, but not
necessarily exhaustive). I admit I have to read Salmon's book, though.


--- Venu Dasigi
--
Venugopala Rao Dasigi
ARPA: venu@mimsy.umd.edu CSNet: venu@umcp-cs/venu@mimsy.umd.edu
UUCP: {allegra,brl-bmd}!mimsy!venu@uunet.uu.net
US Mail: Dept. of CS, Univ. of Maryland, College Park, MD 20742-3255

------------------------------

Date: 10 Jun 88 18:05:04 GMT
From: USENET NEWS <linus!news@harvard.harvard.edu>
Subject: definition of information

From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Summary: And now for something completely different.
Keywords: Yes, but what difference does it make?

Bruce Nevin asks:
>Can anyone point me to a coherent definition of information respecting
>information content, as opposed to merely "quantity of information"?

I quote the following from Stewart Brand's book, _The Media Lab_:

        In 1979 anthropologist-philosopher Gregory Bateson offered
        another definition of "information":  "Any difference which
        makes a difference."  He said, "The map is not the territory,
        we're told.  Very well.  What is it that gets from the territory
        onto the map?"  The cartographer draws in roads, rivers,
        elevations--things the map user is expected to care about.
        Data, signal ("news of a difference") isn't information until
        it means something or does something ("makes a difference").
        The definition of information I kept hearing at the Media Lab
        was Bateson's highly subjective one.  That's philosophically
        heartwarming, but it also turns out there's a powerful tool
        kit lurking in the redefinition.

--Barry Kort

------------------------------

Date: Sat, 11 Jun 88 13:57:20 PDT
From: Bob Riemenschneider <rar@ads.com>
Subject: Re: definition of information

=>   It is often acknowledged that information theory has nothing to say
=>   about information in the usual sense, as having to do with meaning.
=>   ...
=>
=>   Can anyone point me to a coherent definition of information respecting
=>   information content, as opposed to merely "quantity of information"?
=>
=>   Bruce Nevin
=>   bn@cch.bbn.com

Actually, much the same formalization applies to "real" information.  See

        R. Carnap and Y. Bar-Hillel, "An Outline of a Theory of
        Semantic Information", Technical Report 247, Research
        Laboratory of Electronics, MIT, October 1952.  (Reprinted
        in Y. Bar-Hillel, _Language and Information_, Addison-Wesley,
        1964.)

        J. Hintikka, "On Semantic Information", in: J. Hintikka and
        P. Suppes (eds.), _Information and Inference_, Reidel, 1970.

for starters.  I'm not sure what you mean by `respecting information
content', but this approach *is* based on analysis of the logical
consequences of messages.
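
As a rough, from-memory sketch (mine, not a quotation from the cited
reports): Carnap and Bar-Hillel work with a logical probability measure m
over state descriptions and define a content measure cont(s) = m(not-s)
= 1 - m(s) and an information measure inf(s) = -log2 m(s), so that
logically stronger sentences carry more information and tautologies carry
none.  A tiny worked instance in Python, with two atomic sentences and
four equiprobable state descriptions:

from math import log2

m_s  = 0.25         # m("p & q") when the four state descriptions are equiprobable
cont = 1 - m_s      # content measure:      cont(s) = m(not-s) = 0.75
inf  = -log2(m_s)   # information measure:  inf(s)  = -log2 m(s) = 2.0 bits
print(cont, inf)

Treat the details as my reconstruction; the reports themselves are the
authority here.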

                                                        -- rar

------------------------------

Date: 12 Jun 88 13:49:00 EDT
From: Nahum (N.) Goldmann <ACOUST%BNR.BITNET@MITVMA.MIT.EDU>
Subject: def of info

In AIList Digest V7 #26 Bruce Nevin asks:

>>Can anyone point me to a coherent definition of information respecting
>>information content, as opposed to merely "quantity of information"?

Having analyzed several dozen definitions, I of course
heavily favour my own (see N. Goldmann, Online Research and Retrieval...
ISBN 0-8306-1947-X, Chapter 2).  I believe it is about "meaning" as
opposed to "statistics".

Greetings

Nahum Goldmann
acoust@bnr.ca
(613)763-2329

------------------------------

Date: Mon, 13 Jun 88 09:43:03 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: definition of information

My understanding is that Carnap and Bar-Hillel set out to establish a
"calculus of information" but did not succeed in doing so.

Communication theory refers to a physical system's capacity to transmit
arbitrarily selected signals, which need not be "symbolic" (need not mean
or stand for anything).  To use the term "information" in this connection
seems Pickwickian at least.  "Real information"?  Do you mean the
Carnap/Bar-Hillel program as taken up by Hintikka?  Are you saying that
the latter has a useful representation of the meaning of texts?

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

End of AIList Digest
********************

∂13-Jun-88  2355	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #31  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Jun 88  23:54:43 PDT
Date: Mon 13 Jun 1988 15:43-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #31
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 14 Jun 1988      Volume 7 : Issue 31

Today's Topics:

 Philosophy:
  The Social Construction of Reality
  Human-human communication

----------------------------------------------------------------------

Date: 10 Jun 88 18:22 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Consensus and Reality, Consensus and Reality

OK, my last communication on this topic, I swear.   I absolutely agree that the
internal representation ( LofT ) is different from the languages of
communication ( I suspect profoundly different, in fact ).  I had a remark to
that effect in the first draft of my last note, but removed it as it seemed to
be aside from the point.  Oddly enough, I am more impressed by the way in which
speakers of different languages can communicate so easily, ie by the apparent
unity of LofT in the face of an external babel; whereas you seem to be more
impressed with the opposite:

BN> Different paradigms do exist in science, different
BN> predilections in philosophy, though the same
BN> natural language be spoken

so perhaps we are still disagreeing: but let that pass.
Of course its not obvious that all the terms we use are learned: I tend to think
that many cant be ( eg enough about spatial relationships to recognise a visual
cliff, and see T. Bowers work ). I was trying to lean over as far as I could in
the `social' direction, and pass you an olive branch.

But let me pass again to the central point of difficulty:

BN> No, I only want to establish agreement that we are
BN> NOT talking about some 'absolute Reality' (Ding an
BN> Sich), whatever the hell that is.  That we are constrained
BN> to talking about something much less absolute.  That
BN> is the point.

My point was that there is no NEED to establish agreement: that in saying that
the world is real, and that ( for example ) the CRT in front of me ( the same one,
by the way ) really is a CRT, I am not claiming that the DinganSich is
CRT-shaped: I dont find the concept of an ultimate Reality ( your term ) useful
or perhaps even coherent: Im just talking about the ordinary world we all
inhabit.  This `absolute', `ultimate' talk is yours, not mine.  I feel a little
as though you had come up with an accusing air and told me forcefully that we
CANT refer to Froodle; and when I assured you that I had no intention of talking
about Froodle, you replied rather sternly that that was all right then, just so
long as we agreed that Froodle was unmentionable.  I am in a double bind: if I
disagree you will keep on arguing with me; but if I agree, then it seems that I
agree with your strange 19th-century views about the Ultimate:

BN>  ...you and I agree that we do not and cannot know
BN>  what is "really Real"

No: I dont think this talk is useful.  In agreeing that all our beliefs are
expressed in a framework and that it doesnt make sense to imagine that we could
somehow avoid this, I am not agreeing that we can never get to what is really
real: Im saying that this idea of a reality which is somehow more absolute than
ordinary reality is just smoke.  I DO think that we can know what is really
real, that some of our beliefs can be true: REALLY true, that is, true so that
no reality could make them truer, as absolutely and ultimately true as it is
possible to be.  They are true when the world is in fact the way they claim it
to be, thats all.

As for ad hominem, well, Im afraid Im getting tired.  As far as I can discover,
there isnt anything in Winograd and Flores ( I refer to the book ), McCulloch (
on this sort of topic, not his technical work ) or Bateson which is sharp enough
to be worth arguing about.  I confess to not having read recent Pask, or any
Prigogine or Maturana & Varela: but there are only so many hours in a day, and
so many days in a life, and the odds that I will find anything interesting there
seem to me to be low.

OK, no more from Pat on this topic.

------------------------------

Date: 11 Jun 88 05:45:42 GMT
From: uflorida!novavax!maddoxt@gatech.edu  (Thomas Maddox)
Subject: Re: The Social Construction of Reality

In article <514@dcl-csvax.comp.lancs.ac.uk> Simon Brooke writes [. . .]:
>Wells, like fanatical adherents of other ideologies before him, first
>hurls abuse at his opponents, and finally, defeated, closes his ears. I
>note that he is in industry and not an academic; nevertheless he is
>posting into the ai news group, and must therefore be considered part of
>the American AI community. I haven't visited the States; I wonder if
>someone could tell me whether this extraordinary combination of ignorance
>and arrogance is frequently encountered in American intellectual life?

        I would say that any combination of ignorance and arrogance is
no more frequently encountered in American life than in British.
Consider, for instance, your own posting--ending as it does in a
gratuitous insult to American intellectual life in toto--as well as the
umpteen postings of Cockton's--virtually all characterized by arrogant
dismissal of AI--that provoked Mr. Wells.
        Rude conjecture:  "Gilbert Cockton"'s postings are in fact
output from a rather silly AI program (probably out of MIT) called DREYFUS;
it is a logical successor to ELIZA and also its own best critique.  It
remains to be seen whether "Simon Brooke" is one of its sub-programs.

------------------------------

Date: 11 Jun 88 08:57:19 GMT
From: pasteur!agate!garnet!weemba@ames.arpa  (thatsDOCTORtoyoubuddy)
Subject: I'm in direct contact with many advanced fun CONCEPTS.

In article <539@novavax.UUCP>, maddoxt@novavax (Thomas Maddox) writes:
>In article <514@dcl-csvax.comp.lancs.ac.uk> Simon Brooke writes [. . .]:
>>Wells, like fanatical adherents of other ideologies before him, first
>>hurls abuse at his opponents, and finally, defeated, closes his ears. I
>>note that he is in industry and not an academic; nevertheless he is
>>posting into the ai news group, and must therefore be considered part of
>>the American AI community. I haven't visited the States; I wonder if
>>someone could tell me whether this extraordinary combination of ignorance
>>and arrogance is frequently encountered in American intellectual life?

Why do you say that?

>
>       I would say that any combination of ignorance and arrogance is
>no more frequently encountered in American life than in British.

Is it because any combination of ignorance and arrogance is no more
frequently encountered in american life than in british that you came to
me?

>Consider, for instance, your own posting--ending as it does in a
>gratuitous insult to American intellectual life in toto--as well as the
>umpteen postings of Cockton's--virtually all charcterized by arrogant
>dismissal of AI--that provoked Mr. Wells.

Does it bother you that provoked mr wells?

>       Rude conjecture:  "Gilbert Cockton"'s postings are in fact
>output from a rather silly AI program (probably out of MIT) called DREYFUS;
>it is a logical successor to ELIZA and also its own best critique.

Eliza?  Hah!  I would appreciate it if you would continue.

>It
>remains to be seen whether "Simon Brooke" is one of its sub-programs.

Earlier you said any combination of ignorance and arrogance is no more
frequently encountered in american life than in british?

ucbvax!garnet!weemba    Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: Sun, 12 Jun 88 11:01 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: science, reality, man

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In AIList Digest V7 #25, Bruce E. Nevin <bnevin@cch.bbn.com>
writes:

>the two aspects of what is real:  the absolute Ding an Sich, and those
>agreements that we hold about reality so long as we can get away with
>it.  In this relationship, consensual reality is not scientifically
>implausible; it is, at its most refined, science itself.

It would be interesting to hear opinions about the idea that we have
representations of the Ding an Sich very much like a robot which has
a representation of its environment.

Then that which we can perceive, think and feel only reflects that which
there is ... say, inside one's cranium, perhaps to put it in a better way
- what you can know reflects your mind.

Then what a scientist should do is things such as learning karate;
enjoying good music <Kraftwerk and Yello recommended>, good art, and good
literature; and studying other sciences (I personally love Mathematics).

How well our intracranial representations reflect reality is a difficult
problem.

                        Andy Ylikoski

------------------------------

Date: Sun 12 Jun 88 22:27:22-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Resource limitation applied to hypostatization and consensus


I would not entirely recommend Winograd and Flores' book, but a way
occurred to me to make it (and consensus reality) more intelligible from
a computer scientific viewpoint.

If we agree that our minds are constructing theoretical entities from
patterns in our input-output data, then we might also agree that there
are so many theoretical entities that only a few of them can be open to
revision at any one time, given resource limitations.  This is
hypostatization (ie, taking concepts to be reality) as a computer
scientist might express it.

Since many of the concepts we use are learned from other people, we
might assume that many of our hypostatized concepts (which are part of
our reality) are due to social interaction (as Hayes suggested).  Hence,
reality is partly social.  A computer scientist might say the concepts
are in, or have been put into, the hardware or at least a lower level
language.  Winograd and Flores might call these concepts (I'm
interpreting now) ``practice'' or ``background''.

That's the proposal.  There's already a hole in it as far as Winograd
and Flores go: since we as computer scientists build our machines, we
don't have as much interest in situations where the machine was already
built before we got here; that's natural science.  I think Winograd and
Flores are concerned with the situation where we are the machines that
are already built (practice is ``already doing''), so the causality is
from background to concepts, not the other way around.

Conrad Bock

------------------------------

Date: Mon, 13 Jun 88 09:36:47 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: Consensus and Reality, Consensus and Reality

We have some confusion of persons here.

It was in "Simon Brooke's acidic comments on William Wells' rather
brusquely expressed response to Cockton's social-science screaming" that
you perceived "a three-hundred-year old DOUBT about the world, and how
we know it's there." (V7 #24)  On the contrary, my opening remark was:

BN> I can't speak for Simon Brooke, but personally I don't think anyone
BN> seriously doubts the existence of the physical world in which we live.
BN> Something is going on here.  The question is, what.

I then said that it is the anti-consensus view that lays claim to an
absolute reality (WYSIWYG realism--the "naive realism" I thought was
unhorsed by Russell in 1940, in _An Inquiry Into Meaning and Truth_),
and that a consensual realist, like myself, acknowledges that we should
not attribute such absoluteness to what we perceive and know.

PH> . . . I am more impressed by the way in which
PH> speakers of different languages can communicate so easily, ie by the apparent
PH> unity of LofT in the face of an external babel; whereas you seem to be more
PH> impressed with the opposite

Speakers of different languages can communicate when there is mutual
good will and intent to communicate, and when they come to (or come with
a prior) agreement on a domain that constrains the semantics and
pragmatics sufficiently to make the ambiguities manageable.  Same
applies to speakers of the same language.  Get into rougher waters where
the discourse is no longer constrained by subject-matter (sublanguage
syntax) and social convention, however, and lifelong speakers of the
same neighborhood dialect can and often do find one another
incomprehensible.

PH> Of course its not obvious that all the terms we use are learned: I tend
PH> to think that many cant be ( eg enough about spatial relationships to
PH> recognise a visual cliff, and see T. Bowers work ). I was trying to lean
PH> over as far as I could in the `social' direction, and pass you an olive
PH> branch.

Thanks, an easy olive branch to accept and to reciprocate as follows:
it seems obvious to me that some of this is learned, some biologically
innate.  With the caveat that I believe it is sounder science not to
_assume_ a lot is innate (reference here to the sillier biologicist
claims of Generativists).

BN> I only want to establish agreement that we are
BN> NOT talking about some 'absolute Reality'
BN> . . . that we do not and cannot know
BN> what is "really Real"

PH> My point was that there is no NEED to establish agreement

I did not intend that you and I should be the only parties to such
agreement.  Some earlier messages seemed to claim that the world of
naive realism was in some sense absolute, e.g. Mr. T. William Wells.

TW> OK, answer me this: how in the world do they reach a consensus
TW> without some underlying reality which they communicate through.

PH> . . . this idea of a reality which is somehow more absolute than ordinary
PH> reality is just smoke.  I DO think that we can know what is really real,
PH> that some of our beliefs can be true: REALLY true, that is, true so that
PH> no reality could make them truer, as absolutely and ultimately true as
PH> it is possible to be.

"Some of our beliefs."  Certainly.  The hard question is, which ones, and
how can we tell the difference.

PH> They [some of our beliefs] are true when the world is in fact the way
PH> they claim it to be, thats all.

If one takes the appropriate perspective, has the appropriate purposes
and intentions, is prepared to ignore irrelevancies, and is able to get
away with ignoring what doesn't fit, then, yes, the world is "in fact"
and "really" the way our beliefs claim it to be.  From another
perspective, with other purposes and intentions, ignoring other
irrelevancies that the world (in that context) lets us get away with
ignoring, the world is in fact the way our rather different beliefs
claim it to be.  For all practical purposes, the earth is flat with lots
of hills, valleys, cliffs, bodies of water, plains, etc.  From an
astronomical or astronautical perspective, different beliefs apply.
Neither view can falsify the other (pace my 9th grade science teacher,
many years ago), because they are incommensurate, they do not
communicate with each other.  We can "act as if" the world were flat
most of the time, and get away with it.  And most of the time the
astronomical "truth" about the shape of the earth is irrelevant and
pointless to talk about.  Lucky for us!  We might otherwise have to
include a quantum physical statement about the shape of the earth in
everyday discourse--and act on it!

So sure, some of our beliefs are REALLY true--as far as they go.  However,
where one set of beliefs contradicts another set of beliefs couched in
another perspective and serving another purpose, they can't both be
REALLY true, can they?  Well, yes, they can.  You just have to assume
one perspective at a time, and not try to reconcile them.  To try to
reconcile them all is tantamount to trying to establish knowledge of
Absolute Reality, and we know that is a fruitless quest.

I am willing to let this dialog between you and me rest here.  I hope
that it is plain to those who objected to "consensual reality" that the
consensual aspects of knowledge and belief are neither silly nor
trivial.  I have tried to clarify that "consensual reality" refers to
shared beliefs, institutionalized as social convention, that the world
lets us get away with.  Our late 20th century American (techie
subculture) consensus reality has no greater and no less claim to being
absolutely real than any other.  It works really well in some respects.
It courts disaster in others.  Time will tell how much the world will
let us get away with.  It is of course an evolving consensus, and the
process of adaptation can allow for better accommodation with other
competing/cooperating perspectives that do exist in the biosphere.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 11 Jun 88 00:03:53 GMT
From: mind!clarity!ghh@princeton.edu  (Gilbert Harman)
Subject: Re: Human-human communication

In article <200@kvasir.esosun.UUCP> Cris Kobryn writes:

>
>       How does one verbally explain what the color blue is to someone
>       who was born blind?
>
>The problem here is to explain a sensory experience (e.g. seeing
>"blue") to someone lacking the corresponding sensory facility
>(e.g., vision).

An even harder problem:

        How does one verbally explain what the color blue is to
        a stone?



                       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
                       221 Nassau Street, Princeton, NJ 08542

                       ghh@princeton.edu
                       HARMAN@PUCC.BITNET

------------------------------

Date: 11 Jun 88 05:01:21 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!terry@tis.llnl.gov  (Every
      system needs one)
Subject: Re: Human-human communication

In article <238@proxftl.UUCP> Tom Holroyd writes:
>Name one thing that isn't expressible with language! :-)

In article <839@netxcom.UUCP>, Sylvia Dutcher writes:
> Describe a complex mathematical formula, without writing it down.

        "The Schrodinger Wave Equation" (if this is inadequate, I
        can tell you how to write it down).

> Describe the unusual mannerisims of a friend, without demonstrating them.

        "He sniffs his pencil and groans a lot while scratching the stump of
         his left arm.  You can't miss him." (this is simply dependent on
         the amount of detail one puts into a verbal description).

> When you get in a heated discussion, do you gesture with your hands and
> body?

        This is "body language" (the primary definition of language is an
        abstract method of describing information.  Body language, although
        not as concise [in most cases], qualifies).

> We can express just about anything with language, but is the listener
> receiving exactly what we are sending?

Of course not, but don't take it to the extreme of phenomenology, or we
will simply refuse to believe you exist and ignore any further statements :-).
True phenomenologists are useless, precisely for this reason.  You can't talk
to them or exchange information in a meaningful fashion.

> Even the same word, with the
> same definition, can mean different things to different people, or in
> different contexts.

        I waited until after this statement to follow the last one up:

> Look out your window and describe the view to someone who's been blind
> since birth.

        Since we do not share contexts, this is not possible.  They would
understand my referents less than I would understand Japanese; after all,
having watched "Shogun", I do have SOME referents ;-).  The entire concept
pre-supposes some referents.  I assume that it would be possible to use a
direct-brain-visual-center stimulation of some kind to demonstrate the
concepts of "color" and "light", but more likely you would simply demonstrate
the concepts of "electro-shock therapy" and "cauterization" given current
technological capabilities... but then you would have a referent and could,
therefore, provide a description.  Adequacy of the description is a matter
of opinion, after that.  Admittedly, a description is probably less adequate
to the describee, but give us 50 years; besides, you (hopefully) do not go into
some kind of a self-induced trance when something is described to you, and
actually believe you "see" what is described.  A description is not the same
thing as the item being described; it is a paraphrase.  Quality is obviously
dependent on who/what is doing the paraphrasing.  You have to admit that the
television (a machine) can better describe a scene than I can (if you don't,
I'll simply do a worse and worse job until you do ;-).

        This entire thread is devolving into "why AI is impossible so we can
justify cutting all funding rather than reforming the welfare system or building
fewer useless piles of paper instead".

        Everyone seems to be missing the point that the reason AI hasn't
got any shining results for you to touch is that, as soon as something is
useful/marketable/sellable (usually 3 mutually exclusive traits), it gets
renamed so that it isn't AI any more.  This happened with databases, it
happened with character recognition (it's now called "optical scanning"),
and seems to be trying to happen with natural language processing and
knowledge-based expert systems.  Most modern computer instruction technology
is the result of original work in the 50's and 60's by cognitive psychologists.
This doesn't mean you want one (a psychologist) running, maintaining, or
administering your computer facilities; it simply means that AI has been
proven to be a useful item to throw money at.  Hell, most compiler technology
today is a result of techniques learned exploring possibilities in AI.

        Whether or not current languages can do what needs to be done is
an open question, and is therefore disputed.  I see nothing in any previous
arguments by anyone that suggest that the concept of language as a method of
description is flawed.  It is idiotic to make assumptions based on the
likelihood of possible future events until some form of social engineering can
make 100% accurate predictions and produce duplicable results with accuracy.
Stating that machines can not produce behavior which is comparable with human
behavior is as idiotic as most religious dogma.

                                terry@wsccs

------------------------------

Date: 11 Jun 88 12:41:39 GMT
From: mind!eliot@princeton.edu  (Eliot Handelman)
Subject: Re: Human-human communication

In article <2534@mind.UUCP> ghh@clarity.UUCP (Gilbert Harman) writes:
>An even harder problem:

>       How does one verbally explain what the color blue is to
>       a stone?
>                      Gilbert Harman
>                       Princeton University Cognitive Science Laboratory
>                      221 Nassau Street, Princeton, NJ 08542


We've already done that. We've run into trouble testing the stone's knowledge,
though.


Eliot Handelman
Music & Cognition Group
Department of Music
Princeton University

------------------------------

Date: 11 Jun 88 14:10:09 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: Human-human communication

In article <920@papaya.bbn.com> barr@pineapple.bbn.com (Hunter Barr) writes:
>
>I will now express in language:
>"How to recognize the color red on sight (or any other color)..":
>
>Find a person who knows the meaning of the word "red."  Ask her to
>point out objects which are red, and objects which are not,
>distinguishing between them as she goes along.  If you are physically
>able to distinguish colors, you will soon get the hang of it.
>
As H. L. Mencken once said:  "For every complex problem, there is a simple
answer . . . and it's wrong."  There is a lot of subtlety lurking beneath
the simplicity of the above scenario, rather like dust swept under a carpet.
Let us begin with the assumption that all that is required to distinguish
colors is some PHYSICAL ability.  Does that really mean anything;  and, if
so, what does it mean?  I think there is sufficient evidence that we are
not talking strictly about receptors which can distinguish different
frequencies of visible radiation.  If that were all there were to it,
we would have a lot more success with automata distinguishing colors
under the same circumstances as humans (such as major variations in
ambient lighting).  Then there is that casual phrase about getting "the
hang of it."  Given how little we really know about phenomena such as
memory, it is very hard to put much substance into this statement.  (If
we could, we probably wouldn't be studying AI any longer!)

I think Wittgenstein's PHILOSOPHICAL INVESTIGATIONS would be appropriate
reading for this discussion.  Wittgenstein does a much more thorough job
than I could ever do in exploring all the difficulties which plague the
scenario which Hunter Barr has proposed.  I found it a great adventure
(albeit frustrating) to delve into such mysteries of understanding.
Since reading it, I have recommended it to anyone concerned with issues
of communication with humans.

------------------------------

Date: 11 Jun 88 15:01:13 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: Human-human communication

In article <905@papaya.bbn.com> Hunter Barr writes:
>
>Now I must "get pedantic," by saying that body movement *is*
>describable.  As for part a), you are correct that someone other than
>the author must understand it, otherwise we do not have communication.
>But you ignore the existence of useful dance notations.  I don't know
>much about dance notation, and I am sure there is much lacking in it--
>probably standardization for one thing.  But the lack of a universally
>intelligible *spoken* language does not make human speech fail the
>"usefulness" test.  Mandarin Chinese is an even bigger problem with
>adults than dance notation!  If one learned a common dance notation
>from childhood, it would be every bit as useful as the Chinese
>language.

Having just said my piece about colors, Hunter, I do not want you to
get the idea that I'm picking on you;  but I have to come to Gilbert
Cockton's defense here.  (Surprised, Gilbert?)  You see, I spent several
years working with a variety of different dance notations.  A summary of
much of my work was published in COMPUTING SURVEYS in an article I wrote
with Norman Badler.  Let me try to straighten out a few points here.

First of all, NO dance notation provides sufficient information for the
exact reproduction of a movement.  Like all notations, dance notation
involves introducing simplifying abstractions.  Some notations are
basically iconographic . . . simplified images of positions at key
points in time, drawn with the assumption that the brain can fill in
the "between" stuff.  Others attempt to describe trajectories of flexion
at the major joints.  However, no notation has been able to communicate
some of the most fundamental information about body comportment which is
vital in reproducing any movement pattern, be it for dance, athletics, or
anything else.

The notation I know best is Labanotation, having worked directly with the
Dance Notation Bureau for a couple of years.  Here are a few interesting
things that I learned there:

        1.  Most dancers do not read Labanotation.  If a dance company
        wants to reconstruct a work from a notated score, they bring
        in a notator to interpret the score for them.

        2.  When a notator is interpreting a score, it is usually very
        valuable to know WHO recorded the score.  If you know who wrote
        the notation, you can usually make some assumptions about how
        most of those abstractions can be fleshed out into "real" movement.
        If you don't know who the notator was, you damned well better know
        the style of the choreographer whose work is being reconstructed!
        In other words, without some general mental image of "what things
        are supposed to look like," the notation will not do you very much
        good.

In other words, for all its merits, dance notation is basically a sophisticated
form of a memory aid with some attempt at standardization.  If you wanted to
compare it to music notation, today's notation of music would be a poor
analogy.  For some dance notations, the analogy would best fit the diacritical
marks which indicate the proper cantillation of Hebrew religious texts.
Labanotation, on the other hand, would probably find its analogy somewhere
in the 14th century attempts at notating polyphony.

Regarding the learning of dance notation from childhood, there used to be
(and perhaps still are . . . Gilbert?) programs in the United Kingdom which
teach dance from a very early age.  Some of these programs have incorporated
the use of dance notation from the beginning.  Since these programs have been
around since the fifties, I would have thought that by now we would be seeing
notation-literate dancers, at least in London.  I have encountered no evidence
that this is the case, nor does it appear that notation is a major element in
the operation of many large-scale dance companies.

Ultimately, I tend to agree with Gilbert that the problem is not in the
notation but in what is trying to be communicated.  Video is as valuable
in reconstructing dances as it is in gymnastics, but there is still no
substitute for "shaping" bodies.  What Gilbert calls "memory positions"
I have always called "muscular memory;"  and I'm afraid there is no substitute
for physical experience when it comes to acquiring it.

------------------------------

End of AIList Digest
********************

∂14-Jun-88  1122	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #31  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Jun 88  11:21:51 PDT
Date: Mon 13 Jun 1988 15:43-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #31
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 14 Jun 1988      Volume 7 : Issue 31

Today's Topics:

 Philosophy:
  The Social Construction of Reality
  Human-human communication

----------------------------------------------------------------------

Date: 10 Jun 88 18:22 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Consensus and Reality, Consensus and Reality

OK, my last communication on this topic, I swear.   I absolutely agree that the
internal representation ( LofT ) is different from the languages of
communication ( I suspect profoundly different, in fact ).  I had a remark to
that effect in the first draft of my last note, but removed it as it seemed
beside the point.  Oddly enough, I am more impressed by the way in which
speakers of different languages can communicate so easily, ie by the apparent
unity of LofT in the face of an external babel; whereas you seem to be more
impressed with the opposite:

BN> Different paradigms do exist in science, different
BN> predilections in philosophy, though the same
BN> natural language be spoken

so perhaps we are still disagreeing: but let that pass.
Of course it's not obvious that all the terms we use are learned: I tend to
think that many can't be ( eg enough about spatial relationships to recognise
a visual cliff; see T. Bower's work ).  I was trying to lean over as far as I
could in the `social' direction, and pass you an olive branch.

But let me pass again to the central point of difficulty:

BN> No, I only want to establish agreement that we are
BN> NOT talking about some 'absolute Reality' (Ding an
BN> Sich), whatever the hell that is.  That we are constrained
BN> to talking about something much less absolute.  That
BN> is the point.

My point was that there is no NEED to establish agreement: that in saying that
the world is real, and that ( for example ) the CRT in front of me ( the same
one, by the way ) really is a CRT, I am not claiming that the Ding an Sich is
CRT-shaped: I don't find the concept of an ultimate Reality ( your term ) useful
or perhaps even coherent: I'm just talking about the ordinary world we all
inhabit.  This `absolute', `ultimate' talk is yours, not mine.  I feel a little
as though you had come up with an accusing air and told me forcefully that we
CAN'T refer to Froodle; and when I assured you that I had no intention of
talking about Froodle, you replied rather sternly that that was all right then,
just so long as we agreed that Froodle was unmentionable.  I am in a double
bind: if I disagree you will keep on arguing with me; but if I agree, then it
seems that I agree with your strange 19th-century views about the Ultimate:

BN>  ...you and I agree that we do not and cannot know
BN>  what is "really Real"

No: I don't think this talk is useful.  In agreeing that all our beliefs are
expressed in a framework and that it doesn't make sense to imagine that we could
somehow avoid this, I am not agreeing that we can never get to what is really
real: I'm saying that this idea of a reality which is somehow more absolute than
ordinary reality is just smoke.  I DO think that we can know what is really
real, that some of our beliefs can be true: REALLY true, that is, true so that
no reality could make them truer, as absolutely and ultimately true as it is
possible to be.  They are true when the world is in fact the way they claim it
to be, that's all.

As for ad hominem, well, I'm afraid I'm getting tired.  As far as I can
discover, there isn't anything in Winograd and Flores ( I refer to the book ),
McCulloch ( on this sort of topic, not his technical work ) or Bateson which is
sharp enough to be worth arguing about.  I confess to not having read recent
Pask, or any Prigogine or Maturana & Varela: but there are only so many hours
in a day, and so many days in a life, and the odds that I will find anything
interesting there seem to me to be low.

OK, no more from Pat on this topic.

------------------------------

Date: 11 Jun 88 05:45:42 GMT
From: uflorida!novavax!maddoxt@gatech.edu  (Thomas Maddox)
Subject: Re: The Social Construction of Reality

In article <514@dcl-csvax.comp.lancs.ac.uk> Simon Brooke writes [. . .]:
>Wells, like fanatical adherents of other ideologies before him, first
>hurls abuse at his opponents, and finally, defeated, closes his ears. I
>note that he is in industry and not an academic; nevertheless he is
>posting into the ai news group, and must therefore be considered part of
>the American AI community. I haven't visited the States; I wonder if
>someone could tell me whether this extraordinary combination of ignorance
>and arrogance is frequently encountered in American intellectual life?

        I would say that any combination of ignorance and arrogance is
no more frequently encountered in American life than in British.
Consider, for instance, your own posting--ending as it does in a
gratuitous insult to American intellectual life in toto--as well as the
umpteen postings of Cockton's--virtually all characterized by arrogant
dismissal of AI--that provoked Mr. Wells.
        Rude conjecture:  "Gilbert Cockton"'s postings are in fact
output from a rather silly AI program (probably out of MIT) called DREYFUS;
it is a logical successor to ELIZA and also its own best critique.  It
remains to be seen whether "Simon Brooke" is one of its sub-programs.

------------------------------

Date: 11 Jun 88 08:57:19 GMT
From: pasteur!agate!garnet!weemba@ames.arpa  (thatsDOCTORtoyoubuddy)
Subject: I'm in direct contact with many advanced fun CONCEPTS.

In article <539@novavax.UUCP>, maddoxt@novavax (Thomas Maddox) writes:
>In article <514@dcl-csvax.comp.lancs.ac.uk> Simon Brooke writes [. . .]:
>>Wells, like fanatical adherents of other ideologies before him, first
>>hurls abuse at his opponents, and finally, defeated, closes his ears. I
>>note that he is in industry and not an academic; nevertheless he is
>>posting into the ai news group, and must therefore be considered part of
>>the American AI community. I haven't visited the States; I wonder if
>>someone could tell me whether this extraordinary combination of ignorance
>>and arrogance is frequently encountered in American intellectual life?

Why do you say that?

>
>       I would say that any combination of ignorance and arrogance is
>no more frequently encountered in American life than in British.

Is it because any combination of ignorance and arrogance is no more
frequently encountered in american life than in british that you came to
me?

>Consider, for instance, your own posting--ending as it does in a
>gratuitous insult to American intellectual life in toto--as well as the
>umpteen postings of Cockton's--virtually all characterized by arrogant
>dismissal of AI--that provoked Mr. Wells.

Does it bother you that provoked mr wells?

>       Rude conjecture:  "Gilbert Cockton"'s postings are in fact
>output from a rather silly AI program (probably out of MIT) called DREYFUS;
>it is a logical successor to ELIZA and also its own best critique.

Eliza?  Hah!  I would appreciate it if you would continue.

>It
>remains to be seen whether "Simon Brooke" is one of its sub-programs.

Earlier you said any combination of ignorance and arrogance is no more
frequently encountered in american life than in british?

ucbvax!garnet!weemba    Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: Sun, 12 Jun 88 11:01 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: science, reality, man

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In AIList Digest V7 #25, Bruce E. Nevin <bnevin@cch.bbn.com>
writes:

>the two aspects of what is real:  the absolute Ding an Sich, and those
>agreements that we hold about reality so long as we can get away with
>it.  In this relationship, consensual reality is not scientifically
>implausible; it is, at its most refined, science itself.

It would be interesting to hear opinions about the idea that we have
representations of the Ding an Sich very much like a robot which has
a representation of its environment.

Then that which we can perceive, think and feel only reflects that which
there is ... say, inside one's cranium, perhaps to put it in a better way
- what you can know reflects your mind.

Then what a scientist should do is things such as learning karate, enjoying
good music <Kraftwerk and Yello recommended>, good art and good
literature, and studying other sciences (I personally love Mathematics).

How well our intracranial representations reflect reality is a difficult
problem.

                        Andy Ylikoski

------------------------------

Date: Sun 12 Jun 88 22:27:22-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Resource limitation applied to hypostatization and consensus


I would not entirely recommend Winograd and Flores' book, but a way
occurred to me to make it (and consensus reality) more intelligible from
a computer scientific viewpoint.

If we agree that our minds are constructing theoretical entities from
patterns in our input-output data, then we might also agree that there
are so many theoretical entities that only a few of them can be open to
revision at any one time, given resource limitations.  This is
hypostatization (ie, taking concepts to be reality) as a computer
scientist might express it.

Since many of the concepts we use are learned from other people, we
might assume that many of our hypostatized concepts (which are part of
our reality) are due to social interaction (as Hayes suggested).  Hence,
reality is partly social.  A computer scientist might say the concepts
are in, or have been put into, the hardware or at least a lower level
language.  Winograd and Flores might call these concepts (I'm
interpreting now) ``practice'' or ``background''.

That's the proposal.  There's already a hole in it as far as Winograd
and Flores go: since we as computer scientists build our machines, we
don't have as much interest in situations where the machine was already
built before we got here; that's natural science.  I think Winograd and
Flores are concerned with the situation where we are the machines that
are already built (practice is ``already doing''), so the causality is
from background to concepts, not the other way around.

Conrad Bock

------------------------------

Date: Mon, 13 Jun 88 09:36:47 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: Consensus and Reality, Consensus and Reality

We have some confusion of persons here.

It was in "Simon Brooke's acidic comments on William Wells' rather
brusquely expressed response to Cockton's social-science screaming" that
you perceived "a three-hundred-year old DOUBT about the world, and how
we know it's there." (V7 #24)  On the contrary, my opening remark was:

BN> I can't speak for Simon Brooke, but personally I don't think anyone
BN> seriously doubts the existence of the physical world in which we live.
BN> Something is going on here.  The question is, what.

I then said that it is the anti-consensus view that lays claim to an
absolute reality (WYSIWYG realism--the "naive realism" I thought was
unhorsed by Russell in 1940, in _An Inquiry Into Meaning and Truth_),
and that a consensual realist, like myself, acknowledges that we should
not attribute such absoluteness to what we perceive and know.

PH> . . . I am more impressed by the way in which
PH> speakers of different languages can communicate so easily, ie by the apparent
PH> unity of LofT in the face of an external babel; whereas you seem to be more
PH> impressed with the opposite

Speakers of different languages can communicate when there is mutual
good will and intent to communicate, and when they come to (or come with
a prior) agreement on a domain that constrains the semantics and
pragmatics sufficiently to make the ambiguities manageable.  Same
applies to speakers of the same language.  Get into rougher waters where
the discourse is no longer constrained by subject-matter (sublanguage
syntax) and social convention, however, and lifelong speakers of the
same neighborhood dialect can and often do find one another
incomprehensible.

PH> Of course it's not obvious that all the terms we use are learned: I tend
PH> to think that many can't be ( eg enough about spatial relationships to
PH> recognise a visual cliff; see T. Bower's work ).  I was trying to lean
PH> over as far as I could in the `social' direction, and pass you an olive
PH> branch.

Thanks, an easy olive branch to accept and to reciprocate as follows:
it seems obvious to me that some of this is learned, some biologically
innate.  With the caveat that I believe it is sounder science not to
_assume_ a lot is innate (reference here to the sillier biologicist
claims of Generativists).

BN> I only want to establish agreement that we are
BN> NOT talking about some 'absolute Reality'
BN> . . . that we do not and cannot know
BN> what is "really Real"

PH> My point was that there is no NEED to establish agreement

I did not intend that you and I should be the only parties to such
agreement.  Some earlier messages seemed to claim that the world of
naive realism was in some sense absolute, e.g. Mr. T. William Wells.

TW> OK, answer me this: how in the world do they reach a consensus
TW> without some underlying reality which they communicate through.

PH> . . . this idea of a reality which is somehow more absolute than ordinary
PH> reality is just smoke.  I DO think that we can know what is really real,
PH> that some of our beliefs can be true: REALLY true, that is, true so that
PH> no reality could make them truer, as absolutely and ultimately true as
PH> it is possible to be.

"Some of our beliefs."  Certainly.  The hard question is, which ones, and
how can we tell the difference.

PH> They [some of our beliefs] are true when the world is in fact the way
PH> they claim it to be, that's all.

If one takes the appropriate perspective, has the appropriate purposes
and intentions, is prepared to ignore irrelevancies, and is able to get
away with ignoring what doesn't fit, then, yes, the world is "in fact"
and "really" the way our beliefs claim it to be.  From another
perspective, with other purposes and intentions, ignoring other
irrelevancies that the world (in that context) lets us get away with
ignoring, the world is in fact the way our rather different beliefs
claim it to be.  For all practical purposes, the earth is flat with lots
of hills, valleys, cliffs, bodies of water, plains, etc.  From an
astronomical or astronautical perspective, different beliefs apply.
Neither view can falsify the other (pace my 9th grade science teacher,
many years ago), because they are incommensurate, they do not
communicate with each other.  We can "act as if" the world were flat
most of the time, and get away with it.  And most of the time the
astronomical "truth" about the shape of the earth is irrelevant and
pointless to talk about.  Lucky for us!  We might otherwise have to
include a quantum physical statement about the shape of the earth in
everyday discourse--and act on it!

So sure, some of our beliefs are REALLY true--as far as they go.  However,
where one set of beliefs contradicts another set of beliefs couched in
another perspective and serving another purpose, they can't both be
REALLY true, can they?  Well, yes, they can.  You just have to assume
one perspective at a time, and not try to reconcile them.  To try to
reconcile them all is tantamount to trying to establish knowledge of
Absolute Reality, and we know that is a fruitless quest.

I am willing to let this dialog between you and me rest here.  I hope
that it is plain to those who objected to "consensual reality" that the
consensual aspects of knowledge and belief are neither silly nor
trivial.  I have tried to clarify that "consensual reality" refers to
shared beliefs, institutionalized as social convention, that the world
lets us get away with.  Our late 20th century American (techie
subculture) consensus reality has no greater and no less claim to being
absolutely real than any other.  It works really well in some respects.
It courts disaster in others.  Time will tell how much the world will
let us get away with.  It is of course an evolving consensus, and the
process of adaptation can allow for better accommodation with other
competing/cooperating perspectives that do exist in the biosphere.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 11 Jun 88 00:03:53 GMT
From: mind!clarity!ghh@princeton.edu  (Gilbert Harman)
Subject: Re: Human-human communication

In article <200@kvasir.esosun.UUCP> Cris Kobryn writes:

>
>       How does one verbally explain what the color blue is to someone
>       who was born blind?
>
>The problem here is to explain a sensory experience (e.g. seeing
>"blue") to someone lacking the corresponding sensory facility
>(e.g., vision).

An even harder problem:

        How does one verbally explain what the color blue is to
        a stone?



                       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
                       221 Nassau Street, Princeton, NJ 08542

                       ghh@princeton.edu
                       HARMAN@PUCC.BITNET

------------------------------

End of AIList Digest
********************

∂14-Jun-88  1651	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #32  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Jun 88  16:50:28 PDT
Date: Mon 13 Jun 1988 15:46-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #32
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 14 Jun 1988      Volume 7 : Issue 32

Today's Topics:

 Queries:
  Traveling Salesman Problem
  Reveal Expert System Shell
  Help with TI Personal Consultant Plus

 Seminars:
  Proposed seminar - "The Computer Experience and the Human Spirit"
  Symposium on Computer Graphics Education

----------------------------------------------------------------------

Date: 10 Jun 88 12:42:35 GMT
From: Pat Prosser <mcvax!cs.strath.ac.uk!pat@uunet.UU.NET>
Reply-to: mcvax!cs.strath.ac.uk!pat@uunet.UU.NET
Subject: Re: [csrobe@icase.arpa: Traveling Salesman Problem (a
         request)]


Just in case: two search strategies tried on the TSP recently
have been Simulated Annealing and Genetic Algorithms.  Papers
that cover the TSP and these techniques are:

Optimization by Simulated Annealing, Kirkpatrick, Gelatt and Vecchi,
Science, May 1983, Volume 220, pages 671-680.  This paper compares SA
to Lin and Kernighan.

Alleles, Loci, and the Traveling Salesman Problem, Goldberg and Lingle.
Probably in one of the Proceedings on GA.
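
As a rough illustration of the simulated annealing approach to the TSP (a
sketch only, not taken from the papers above; the 2-opt move, the geometric
cooling schedule and the parameter values are arbitrary choices made for the
example):

    import math, random

    def tour_length(tour, xy):
        # length of the closed tour visiting the points xy[i] = (x, y) in order
        return sum(math.dist(xy[tour[i]], xy[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def anneal_tsp(xy, t0=1.0, cooling=0.999, steps=50000):
        tour = list(range(len(xy)))
        random.shuffle(tour)
        best, best_len = tour[:], tour_length(tour, xy)
        t = t0
        for _ in range(steps):
            i, j = sorted(random.sample(range(len(xy)), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt: reverse a segment
            delta = tour_length(cand, xy) - tour_length(tour, xy)
            # accept all improvements; accept uphill moves with probability exp(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / t):
                tour = cand
                if tour_length(tour, xy) < best_len:
                    best, best_len = tour[:], tour_length(tour, xy)
            t *= cooling                                           # geometric cooling
        return best, best_len

    cities = [(random.random(), random.random()) for _ in range(30)]
    print(anneal_tsp(cities)[1])

The Kirkpatrick, Gelatt and Vecchi paper discusses annealing schedules and
move sets in far more depth.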

------------------------------

Date: Fri, 10 Jun 88 15:25:32 PDT
From: wahl%cookie.DEC@decwrl.dec.com (Dave Wahl Database AD CX01
      522-3115)
Subject: Reveal Expert System Shell

Tymshare was marketing a tool called Reveal (which was developed at ICU,
I think) which was targeted at MIS applications.  It uses (used?) a
pattern matching technique based on fuzzy set membership.
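
For readers unfamiliar with the idea, here is a toy sketch of fuzzy-set
membership (an invented example; it says nothing about how Reveal itself is
implemented):

    def trapezoid(x, a, b, c, d):
        # membership grade of x in a trapezoidal fuzzy set, with a <= b <= c <= d
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    # degree to which a value matches the fuzzy concept "moderate"
    moderate = lambda x: trapezoid(x, 20.0, 35.0, 60.0, 80.0)
    print(moderate(50.0))   # 1.0 (fully "moderate")
    print(moderate(70.0))   # 0.5 (partially "moderate")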

Does anybody know what happened to Reveal?  Is it still on the market?
Does Tymshare still sell it?  A contact name and phone number or
email address would be appreciated.

Dave Wahl

------------------------------

Date: 12 Jun 88 18:45:44 GMT
From: killer!usl!skb@ames.arpa  (Sanjiv K. Bhatia)
Subject: Help with TI Personal Consultant Plus

I have been developing an application for information retrieval using TI PC+.
I need to have control over the assignment of certainty factors dynamically
within the program.  For example, I need to specify the rules in the form:

        IF:  condition x
        THEN: consequent y CF z

where z is to be picked up from a variable assignment and is not explicitly
specified in the rule.
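
By way of illustration only (this is not PC+ syntax; the rule-engine functions
below are invented for the example), the effect being asked for, a certainty
factor looked up from a variable binding at the time the rule fires, can be
sketched as:

    def make_rule(condition, consequent, cf_var):
        # returns a rule whose CF is read from the bindings when it fires,
        # rather than being a literal in the rule text
        def fire(facts, bindings):
            if condition(facts):
                facts[consequent] = bindings[cf_var]
        return fire

    facts = {"condition_x": True}
    bindings = {"z": 0.7}                 # CF computed elsewhere in the program
    rule = make_rule(lambda f: f.get("condition_x"), "consequent_y", "z")
    rule(facts, bindings)
    print(facts)                          # {'condition_x': True, 'consequent_y': 0.7}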

Can anyone tell me if PC+ is capable of taking such rules, or how it can be
done in PC+ ?

In any case, can this kind of rule be specified using some other ES shell
which also has an interface with a DBMS, preferably dBASE II or III?

Thanks in advance.

Sanjiv

------------------------------

Date: Fri, 10 Jun 88 14:59:26 PDT
From: hodges@violet.Berkeley.EDU
Subject: Proposed seminar - "The Computer Experience and the Human
         Spirit"

Proposal for a Seminar:

        The Computer Experience and the Human Spirit

I am planning a seminar-workshop with a working title of "The
Computer Experience and the Human Spirit". Jacob Needleman, Professor
of Philosophy at San Francisco State University, who is well known for
his books and seminars on the inner quest, has expressed interest in
this subject and suggested that he and I might offer such a program if
there is sufficient response.

It would be held in San Francisco and would consist of one or two
whole days' work together on a weekend, with presentations, exercises,
and exchanges among the participants. There would be a fee.

We would like to invite all who share an interest and concern about
the growing influence of computers on our inner as well as outer life
to participate. This would include those who work with computers
professionally, philosophers and spiritual explorers who wish to
understand how to approach the computer, and those for whom the
computer has become an inescapable fact of their daily lives.

Questions which we would like to explore include:

          Do computers liberate or enslave us?

          The computer as a creative medium.

          What does the experience of working with computers help us
          to understand about ourselves and our place in the world
          order?

          What new insights, metaphors, and values can be developed
          from the computer experience? What are their potential
          benefits and pitfalls?

          How can we improve the quality of our relationships with
          computers?


I am sending this out to invite commentary, suggestions, and
expressions of interest in participation. Please respond by e-mail,
telephone, or letter.

Also, if you are in touch with any other individuals, groups, or
mailing lists of people who might be interested, please forward this
message (and let me know).



Richard Hodges
hodges@violet.berkeley.edu
(415)268-3656
650 Calmar
Oakland CA, 94610

------------------------------

Date: Sun, 12 Jun 88 22:24:38 EDT
From: "William J. Joel" <JZEM%MARIST.BITNET@MITVMA.MIT.EDU>
Subject: Symposium on Computer Graphics Education

               SYMPOSIUM ON COMPUTER GRAPHICS EDUCATION
                          NOVEMBER 4-5, 1988
                   MARIST COLLEGE, POUGHKEEPSIE, NY

      Sponsored by the  Division of Computer Science  & Mathematics,
   Marist College, in cooperation with ACM/SIGGRAPH.

      The  symposium will  combine papers,   panels and  small-group
   workshops to explore  all aspects of teaching  computer graphics
   including

   -   computer graphics  in   liberal  arts   institutions,   fine/
       commercial art programs and engineering programs
   -   interdisciplinary techniques
   -   elementary and secondary computer graphics courses
   -   undergraduate versus graduate programs
   -   hardware and software choices
   -   curriculum aids


      A maximum  of 250 attendees has  been set for  this symposium,
   due to space limitations.   This number includes those presenting
   papers and participating  in panels.   Registration will  be on a
   first-come, first-served basis.   The deadline for advance regis-
   tration is July 31,  1988.   Please send a completed registration
   form,  with  a check made out  to Symposium on  Computer Graphics
   Education, to

       Deborah Coleman/Registration Chairperson
       West Coast University
       440 Shatto Place
       Los Angeles, CA 90020

      All other questions concerning the symposium should be sent to

       William J. Joel/General Chairperson
       Marist College
       82 North Road
       Poughkeepsie, NY 12601
       (914) 471-3240, x601
       Email: jzem@marist.bitnet

------------------------------ CUT HERE ------------------------------

               Symposium on Computer Graphics Education
                          November 4-5, 1988
                         Advance Registration
    _______________________________________________________________
   |                                                               |
   |      ______ ____________________ ____________________ ______  |
   |      Prefix First Name           Last Name            Suffix  |
   |                                                               |
   |      _______________________________________________________  |
   |      Title                                                    |
   |                                                               |
   |      _______________________________________________________  |
   |      Institution                                              |
   |                                                               |
   |      _______________________________________________________  |
   |      Division/Department                                      |
   |                                                               |
   |      _______________________________________________________  |
   |      Street                                                   |
   |                                                               |
   |      ___________________________________________ _____ _____  |
   |      City/Town                                   State Zip    |
   |                                                               |
   |      ___-___-____ ____________________ _____________________  |
   |      Telephone    Email                Network                |
   |                                                               |
   |                                                               |
   |                         Prior to July 31       On-Site        |
   |                                           (space available)   |
   |      Registration fee        $45                 $50          |
   |      (includes lunches both days)                             |
   |                                                               |
   |_______________________________________________________________|

------------------------------

End of AIList Digest
********************

∂14-Jun-88  2326	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #33  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Jun 88  23:26:22 PDT
Date: Wed 15 Jun 1988 02:04-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #33
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 15 Jun 1988     Volume 7 : Issue 33

Today's Topics:

 Philosophy:
  Who else isn't a science?
  scope of ailist
  Me, Karl, Stephen, Gilbert
  Definition of Information
  representation languages

----------------------------------------------------------------------

Date: 13 Jun 88 13:07:50 GMT
From: marsh@mitre-bedford.arpa  (Ralph J. Marshall)
Subject: Re: Who else isn't a science?

In article <10785@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>
>Indeed, many modern dictionaries now give an extra meaning to the word
>"intelligent", thanks, partly due to AI's decades of abuse of the term:
>it means "able to peform some of the functions of a computer".
>
>Ain't it wonderful?  AI succeeded by changing the meaning of the word.
>
>ucbvax!garnet!weemba   Matthew P Wiener/Brahms Gang/Berkeley CA 94720

I don't know what dictionary you are smoking, but _MY_ dictionary has the
following perfectly reasonable definition of intelligence:

        "The ability to learn or understand or to deal with new or
         trying situations." (Webster's New 9th Collegiate Dictionary)

I'm not at all sure that this is really the focus of current AI work,
but I am reasonably convinced that it is a long-term goal that is worth
pursuing.

------------------------------

Date: 14 Jun 88 07:25:00 EDT
From: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Subject: scope of ailist


As a somewhat belated response to the complaints about endless
philosophizing, I offer the following quote from H.G. Wells, "The
Future in America", written in 1906, after Wells had toured the
states.  He was writing specifically about Washington, providing some
additional poignancy for those of us who work in the DC area, but
perhaps it has wider pertinence:


    It is perhaps near the truth to say that this dearth of any
    general and comprehensive intellectual activity is due to
    intellectual specialization.  The four thousand scientific
    men in Washington are all too energetically busy with
    ethnographic details, electrical computations or herbaria,
    to talk about common and universal things.  They ought not to
    be so busy, and a science so specialized sinks halfway down
    the scale of sciences.  Science is one of those things that
    cannot hustle; if it does it loses its connexions.  In
    Washington some men, I gathered, hustle, others play bridge,
    and general questions are left a little contemptuously, as
    being of the nature of "gas," to the newspapers and
    magazines.  Philosophy, which correlates the sciences and
    keeps them subservient to the universals of life, has no
    seat there.  My anticipated synthesis of ten thousand minds
    refused, under examination, to synthesize at all; it
    remained disintegrated, a mob, individually active and
    collectively futile, of specialists and politicians.


John Cugini  <Cugini@ecf.icst.nbs.gov>

------------------------------

Date: Tue, 14 Jun 88 08:18:37 -0400 (EDT)
From: David Greene <dg1v+@andrew.cmu.edu>
Subject: Re: Me, Karl, Stephen, Gilbert

In AIList Digest   V7 #29, Stephen Smoliar writes:

> What do all those researchers who don't spend so much
> time with computer programs have to tell us?


I'm not advocating Mr. Cockton's views, but the limited literature breadth in
many AI papers *is* self-defeating.  For example, until very recently, few
expert system papers acknowledged the results of 20+ years of psychology
research on Judgement and Decision Making.  It seems odd that AI people
studying experts' decision making would not reference behavioral/performance
research on human/expert decision making.

The works of Kahneman, Tversky, Hogarth and Dawes (to name some luminaries)
all identify inherent flaws in human (including experts') judgement.  These
dysfunctional biases result in consistently suboptimal decision rules across
many realistic conditions (setting aside debates on "optimality").  Yet AI
researchers and knowledge engineers attempt to produce fidelity to the expert
and compare the resultant system to the expert's performance.  Is it any wonder
that many ES's don't work in the field...

Perhaps a broader literature/research exposure could be advantageous to AI (or
any field)...


-David
dg1v@andrew.cmu.edu
Carnegie Mellon

"You're welcome to use my oppinions, just don't get them all wrinkled..."

------------------------------

Date: Tue, 14 Jun 88 07:52:42 PDT
From: golden@frodo.STANFORD.EDU (Richard Golden)
Subject: Re: Definition of Information

In AILIST Digest V7 #26 Bruce Nevin asks:
Can anyone point me to a coherent definition of information respecting
information content, as opposed to merely "quantity of information"?

This question is really related to an earlier discussion concerned with
viewing probability theory as a measure of belief.  We can think of a
knowledge structure as being represented by a probability distribution
which assigns some "degree of belief" (i.e., a probability) to some
set of events (i.e., a sample space).  Let X be an event which occurs
with probability p(X).  Then clearly an equivalent "knowledge structure"
which assigns some "degree of surprise" (i.e., -LOG[p(X)]) to some
set of events (i.e., a sample space) may be constructed.
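
A small numerical illustration of the equivalence (illustrative only): an
event believed with probability 1/4 carries a "surprise" of two bits.

    import math

    def surprise(p):
        # degree of surprise (self-information) of an event with probability p, in bits
        return -math.log2(p)

    print(surprise(0.25))   # 2.0
    print(surprise(0.5))    # 1.0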

The simple point which I am making is that the SAMPLE SPACE and the
STRUCTURE OF ITS ELEMENTS is a necessary component of the definition of
information in a technical sense and information CONTENT (for the most
part) resides in this SAMPLE SPACE.

                                        Richard Golden (golden@psych)

------------------------------

Date: Tue, 14 Jun 88 10:42:12 bst
From: Ian Dickinson <ijd%otter.lb.hp.co.uk@RELAY.CS.NET>
Subject: Re: representation languages


/ otter:comp.ai.digest / vierhout@swivax.UUCP (Paul Vierhout) / writes:
> AIlanguage features:
> old: procedure-data equivalence
> less old: nondeterminism, 'streams'
>               ,unification,OPS5 pattern matching,
> shell-like: ability to specify frames and/or rules, and possibly control
> promises: abstract models of cognitive tasks like the Interpretation Models
>       of Breuker and Wielinga (SWI-UvA, Amsterdam) for knowledge acquisition,
>       or the six generic tasks of Chandrasekaran (Ohio State Univ.).
> Not at all an exhaustive list; shouldn't an AIlanguage ideally exhaustively
> offer all features currently available ?

If you read the various papers from Chandrasekaran's group, you will see that
one of their central hypotheses is that you cannot define a single language
that is usable to write all applications.  Each generic task (and I think
there are rather more than six) will have its own language, which specialises
control and data structures to that task.  In fact, they seem to anticipate
a spectrum of languages, each individually suited to *part* of the application
being developed.

What we absolutely *must* avoid in defining representation languages of the
future is the "good feature explosion" - i.e. adding new features like frames,
rules, backwards, forwards and sideways inference, 12 inheritance schemes,
etc - simply because they could be useful in some circumstance.

This is the route to the KEE's and ART's ++ of future representation schemes.
Whilst I have no doubt that these systems are useful today, _I_ as an
application developer want to see a representation system that is maximally
small whilst giving me the power that I need.  The philosophy I would like to
see adopted is:
        o  define conceptual representations that allow applications to be
           written at the maximum level of abstraction (eg generic tasks)
        o  define the intermediate representations (frames, rules, sets ..)
           that are needed to implement the conceptual structures
        o  choose a subset of these representations that can be maximally
           tightly integrated with the base language of your choice (which
           would not be Lisp in my choice)

By doing this, we can not only help the application developer by giving her
access to all of the abstraction power in the system, but also have a chance
of getting the semantics of these systems properly understood and defined.
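
As a sketch of the kind of layering being advocated (an invented example, not
KEE, ART, or any particular system), one of the intermediate representations,
frames with simple inheritance, can be built directly in the base language,
with a task-level vocabulary then defined on top of it:

    class Frame:
        # minimal frame with single inheritance of slot values
        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, dict(slots)

        def get(self, slot):
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:
                return self.parent.get(slot)    # inherit from the parent frame
            raise KeyError(slot)

    vehicle = Frame("vehicle", wheels=4)
    car = Frame("car", parent=vehicle, doors=4)
    print(car.get("wheels"))    # 4, inherited from "vehicle"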

++ KEE and ART are registered trademarks.

Ian.


+---------------------+--------------------------+------------------------+
|Ian Dickinson         net:                        All opinions expressed |
|Hewlett Packard Labs   ijd@otter.hplabs.hp.com    are my own, and not    |
|Bristol, England       ijd@hplb.uucp              necessarily those of   |
|0272-799910            ..!mcvax!ukc!hplb!ijd      my employer.           |
+---------------------+--------------------------+------------------------+

------------------------------

End of AIList Digest
********************

∂15-Jun-88  0217	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #34  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 15 Jun 88  02:17:37 PDT
Date: Wed 15 Jun 1988 02:21-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #34
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 15 Jun 1988     Volume 7 : Issue 34

Today's Topics:

 Queries:
  Connectionist Expert Systems
  Ternary Logic Systems
  Pointers needed on induction over multiple explanations
  Graphics on PC using GCL LISP?

 Seminar:
  Children's reorganization of knowledge in the domain of astronomy

----------------------------------------------------------------------

Date: 13 Jun 88 22:37:45 GMT
From: olivier@boulder.colorado.edu  (Olivier Brousse)
Subject: Connectionist Expert Systems


Could anyone give me pointers about NESTOR, an expert system combining
neural nets and symbolic AI techniques?

Is there any other work being done on connectionist expert systems?

Thanks.


Olivier Brousse                       |
Department of Computer Science        |  olivier@boulder.colorado.EDU
U. of Colorado, Boulder               |

------------------------------

Date: 13 Jun 88 22:57:40 GMT
From: manta!key@nosc.mil  (Gerry Key)
Subject: Ternary Logic Systems

A colleague is interested in contacting anyone who is doing
research on 3-state (ternary logic) computer systems,
specifically for AI applications.  He's read much of the
literature on emulating ternary logic on binary systems, but
hasn't seen much work done directly on ternary systems.
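
As a point of reference (an illustrative example, unrelated to the hardware
work being sought), the usual software emulation of Kleene-style three-valued
logic on a binary machine looks roughly like this:

    # three truth values: True, False, and None (standing for "unknown")
    def t_not(a):
        return None if a is None else (not a)

    def t_and(a, b):
        if a is False or b is False:
            return False
        if a is None or b is None:
            return None
        return True

    def t_or(a, b):
        if a is True or b is True:
            return True
        if a is None or b is None:
            return None
        return False

    print(t_not(None))          # None (unknown)
    print(t_and(True, None))    # None (unknown)
    print(t_or(True, None))     # True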

Any pointers would be appreciated.

Please respond directly to me at the addresses listed below,
as I am not a subscriber to this newsgroup.

Gerry Key
Computer Sciences Corporation
4045 Hancock Street
San Diego, CA 92110 U.S.A.
(619) 225-8401
  key@nosc.mil                (Internet)
  {...!ihnp4!moss!nosc!key}   (UUCP)

------------------------------

Date: 14 Jun 88 12:54:19 GMT
From: paul.rutgers.edu!vanhalen.rutgers.edu!bruce@rutgers.edu  (Shane
      Bruce)
Subject: Pointers needed on induction over multiple explanations


In an interesting article in the Proceedings of the 1988 AAAI Spring
Symposium on EBL, Flann and Dietterich discuss the idea of performing
induction over the functional explanations of a concept (in their case,
minimax game trees), as opposed to performing the induction on the feature
language description of the concept.  In the article they list some
other projects in which induction over explanations is performed.

Can anyone provide me with pointers to work in which induction is done
over multiple concept explanations?  I would particularly be
interested in hearing about projects in which induction is performed
over causal process explanations generated by qualitative or
quantitative domain models.

Please email to me (bruce@paul.rutgers.edu) any references which you
might have concerning this topic.  I will, of course, post the results
of this query to the net if there is enough interest.  Thanks for the
help.
--
Shane Bruce
HOME: (201) 613-1285                WORK: (201) 932-4714
ARPA: bruce@paul.rutgers.edu
UUCP: {ames, cbosgd, harvard, moss}!rutgers!paul.rutgers.edu!bruce

------------------------------

Date: Fri, 10 Jun 88 16:34:59 CDT
From: halcdc!tciaccio
Reply-to: shamash!jwabik@umn-cs.arpa  (Jeff Wabik)
Subject: Graphics on PC using GCL LISP?


  We are trying to do windows (both text and graphics) and mouse events from
GCL LISP and MS-DOS.  Would also like to display digitized pictures in
these windows.  Is anyone doing such a thing out there?  Should we start with
GC-WINDOWS or the Gold Hill Extended Programming Interface (EPI) to something
like C?  Any info would be much appreciated.



Please direct responses to shamash!jwabik@umn-cs.arpa
 or to halcdc!tciaccio

------------------------------

Date: Tue 14 Jun 88 08:51:56-EDT
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Lang. & Cognition Seminar


                     BBN Science Development Program
                   Language & Cognition Seminar Series


                 CHILDREN'S REORGANIZATION OF KNOWLEDGE
                      IN THE DOMAIN OF ASTRONOMY

                          Stella Vosniadou
                       University of Illinois


                      BBN Laboratories Inc.
                       10 Moulton Street
                 Large Conference Room, 2nd Floor

               10:30 a.m., Wednesday, June 15, 1988


Abstract:  Some preliminary findings from an ongoing project on children's
acquisition of knowledge in the domain of astronomy will be presented.
The findings indicate that elementary school children's early beliefs
are consistent with their phenomenal explanation of a stationary flat
earth and an up-and-down movement of the sun and moon.  These beliefs
appear to be quite resistant to change and give rise to a number of
misconceptions which reveal children's difficulty in assimilating
current scientific views.

------------------------------

End of AIList Digest
********************

∂15-Jun-88  2046	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #35  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 15 Jun 88  20:46:04 PDT
Date: Wed 15 Jun 1988 23:24-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #35
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 16 Jun 1988      Volume 7 : Issue 35

Today's Topics:

 Queries:
  Connectionist Expert Systems

 Seminar:
  Supercomputing Summer Institute

 Philosophy:
  Biological relevance and AI
  Human-human communication

----------------------------------------------------------------------

Date: 14 Jun 88 18:22:56 GMT
From: ndsuvax!ncthangi@uunet.uu.net  (sam r. thangiah)
Subject: Connectionist Expert Systems


>From: olivier@boulder.Colorado.EDU (Olivier Brousse)
>Could anyone give me pointers about NESTOR, an expert system combining
>neural nets and symbolic AI techniques?

>Is there any other work done in connectionist expert systems?

  I am also interested in the above area.  Pointers to any references, papers
or work being conducted using such techniques will be much appreciated.

Thanks in advance,

Sam
--
Sam R. Thangiah,  North Dakota State University.
                         UUCP:       ...!uunet!ndsuvax!ncthangi
                         BITNET:     ncthangi@ndsuvax.bitnet
                         ARPA,CSNET: ncthangi%ndsuvax.bitnet@cunyvm.cuny.edu

------------------------------

Date: Wed, 15 Jun 88 11:09:55 EDT
From: hendler@dormouse.cs.umd.edu (Jim Hendler)
Subject: Re:   Connectionist expert systems


There are essentially two strains of work going on in this area.  One
strain involves the actual implementation of expert-system-like programs
using connectionist techniques.  I'll let the real connectionists
comment on those.  The second strain involves the creation of hybrid
systems which have both connectionist and symbolic mechanisms
cooperating in the solution of traditional cognitivish AI problems (e.g.
language, planning, ``high level'' vision, etc.).  As well as my own work
in this area, there is the work of Wendy Lehnert of UMass, Michael Dyer
of UCLA, Mark Jones of Bell Labs, and, depending on how you classify it,
the work of Dave Touretzky at CMU (Dave's work doesn't really have
symbols in the traditional sense of the word, but some of his models
use gating and other serial techniques to control a larger connectionist
system).  Also, the work of the local (now often called structured)
connectionists has in some ways been hybrid.  The one most related
might be that of Lokendra Shastri of U Penn.

  -Jim Hendler
   U. of Md. Institute for Advanced Computer Studies
   UMCP
   College Park, Md. 20742

------------------------------

Date: Wed, 15 Jun 88 10:18:09 EDT
From: Una Smith <Q2813@pucc.princeton.edu>
Subject: Supercomputing Summer Institute


          John von Neumann National Supercomputer Center
                       1988 Summer Institute
 "An Intensive Introduction to Vector and Parallel Supercomputing"
                         August 1-12, 1988


 Objectives
   The John von Neumann National Supercomputer Center (JvNC)  will
   hold its third annual Summer Institute during August 1-12, 1988
   at the JvNC in Princeton, New Jersey.  The principal  goals  of
   the  Institute  are  to  teach  the  participants  how  to  use
   supercomputers  effectively  and  to   provide   an   intensive
   mini-course in computational scientific research.

 Facilities
   The JvNC operates the first production Class VII supercomputer,
   the  ETA  10, installed at the JvNC in March, 1988.  The ETA 10
   is presently configured with 128 million words of shared memory
   and four  Central  Processing  Units (CPUs) each  equipped with
   four million words of local memory.  The JvNC also operates two
   CYBER  205  supercomputers, configured  with four million words
   of memory.   All supercomputers are accessed from the DEC 8600
   front-end computers  at  the JvNC operating under VMS or Ultrix
   (UNIX).  The PEP software, a  series of  user-friendly, command
   driven interactive procedures, allows users to execute programs
   on the ETA 10 and  CYBER  205  without  the  need  for learning
   the supercomputer's operating system.  The  JvNC  Visualization
   Facility includes  two Silicon  Graphics  Iris  4D workstations
   and  two Sun 3/160  workstations  with  presentation  and draft
   quality color hardcopy.

 Operation
   The Summer  Institute  will  be  equally  divided  between  the
   classroom  and  the  laboratory.   The  classroom  portion will
   include lectures by JvNC staff on  the  overall  JvNC  computer
   environment,  vector  computing,  parallel  computing, advanced
   performance programming,  communications  and  networking,  and
   graphics.   In  addition,  there  will  be  invited lectures on
   computational  chemistry,  algorithms  for  parallel  machines,
   computational    fluid    dynamics,    plasma    physics,   and
   visualization.  The laboratory portion  will  include  detailed
   consulting by User Services on program optimization for the ETA
   10 and CYBER 205, and utilization  of  the  JvNC  Visualization
   Facility.   The  participants shall be awarded an allocation of
   supercomputer time for their research projects.

 Participants
   Postdoctoral, graduate and advanced undergraduate  students  at
   U.S.   universities  and colleges are eligible to apply for the
   Summer Institute.  Knowledge  of  Fortran  is  a  prerequisite.
   Experience   with   vectorization  and  supercomputers  is  not
   required.  Each participant is expected  to  be  engaged  in  a
   computational research project and bring an operational Fortran
   code for further development and  production  runs  during  the
   Institute.

 Financial Support
   The JvNC will provide reimbursement for travel,  accommodations
   and  meals  up  to  $1500  per  participant  according  to  NSF
   policies.   Air  travel  shall  be  economy  class  and   local
   accommodations  will be organized through the JvNC.  Meals will
   be reimbursed up to a maximum per diem.  There is  no  separate
   stipend.

 Application
   Interested persons should submit the following information:

    - Curriculum   vitae,   indicating   university   or   college
      affiliation,  courses  of  study, undergraduate and graduate
      transcript (as appropriate), address and  telephone  (office
      and home)

    - Two letters  of  recommendation  from  faculty  advisors  or
      instructors

    - Description of computational research project

   The information should be  submitted  by  24  June  1988.   All
   applications should be sent to:

           John von Neumann National Supercomputer Center
                          Summer Institute
                            P.O. Box 3717
                        Princeton, NJ  08543


 For further information, contact David Salzman at 609/520-2000 or
 <salzman@jvnca.csc.org> or <SALZMAN@JVNCD.BITNET>

------------------------------

Date: 14 Jun 88 20:57:06 GMT
From: tektronix!sequent!mntgfx!msellers@bloom-beacon.mit.edu  (Mike
      Sellers)
Subject: Biological relevance and AI (was Re: Who else isn't a
         science?)

In article <13100@shemp.CS.UCLA.EDU>, Benjamin Thompson writes:
>In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>> Gerald Edelman, for example, has compared AI with Aristotelian
>> dentistry: lots of theorizing, but no attempt to actually compare
>> models with the real world.  AI grabs onto the neural net paradigm,
>> say, and then never bothers to check if what is done with neural
>> nets has anything to do with actual brains.

Where are you getting your information regarding AI & the neural net paradigm?
I agree that there is a lot of hype right now about connectionist/neural nets,
but this alone does not invalidate them (they may not be a panacea, but they
probably aren't worthless either).  There are an increasing number of people
interested in (and to some degree knowledgeable of) both the artificial and
biological sides of sensation, perception, cognition, and (some day)
intelligence.  See for example the PDP books or Carver Mead's upcoming book
on analog VLSI and neural systems (I just finished a class in this -- whew!).
There have been recent murmurings from some of the more classical AI types
(e.g. Seymour Papert in last winter's Daedalus) that the biological
paradigm/metaphor is not viable for AI research, but these seem to me to be
either overstating the case against connectionism or simply not aware
of what is being done.  Others contend that anything involving 'wetware' is
not *really* AI at all, and thus shouldn't invade discussions on that subject.
This is, I believe, a remarkably short-sighted view that amounts to denying
the possibility of a new tool to use.

> This is symptomatic of a common fallacy.  Why should the way our brains
> work be the only way "brains" can work?  Why shouldn't *A*I workers look
> at weird and wonderful models?  We (basically) don't know anything about
> how the brain really works anyway, so who can really tell if what they're
> doing corresponds to (some part of) the brain?
>
> Ben

I think Ben's second and following sentences here are symptomatic of a common
fallacy, or more precisely of common misinformation and ignorance.  No one has
said or implied that biological nervous systems have a monopoly on viable
methodologies for sensation, perception, and/or cognition.  There probably are
many different ways in which these types of problems can be tackled.  We do
have a considerable amount of knowledge about the human brain, and (for
the time being more to the point) about invertebrate nervous systems and the
actions of individual neurons.  And finally, correspondence to biological
systems, while important, is by no means a single and easily achieved goal
(see below).  On the other hand, we can say at least two things about the
current state of implemented cognition:
  1) The methods we now call 'classical' AI, starting from about the late
1950's or early 60's, have not made an appreciable dent in their original
plans nor even lived up to their original claims.  To refresh your memory,
a quote from 1958:

     "...there are now in the world machines that think, that learn and
     that create.  Moreover, their ability to do these things is going to
     increase rapidly until --in a visible future-- the range of problems
     they can handle will be coextensive with the range to which the human
     mind has been applied."

This quote is from H. Simon and A. Newell in "Heuristic Problem Solving:
The Next Advance in Operations Research" in _Operations Research_ vol 6,
published in *1958*.  (It was recently quoted by Dreyfus and Dreyfus in the
Winter 1988 edition of Daedalus, on page 19.)  We seem to be no closer to
the realization of this claim than we were thirty years ago.
  2)  We do have one instance that proves that sensation, perception, and
cognition are possible: natural nervous systems.  Thus, even though there
may be other ways of solving the problems associated with vision, for
example, it would seem that adopting some of the same strategies used by
other successful systems would increase the likelihood of our success.
While it is true that there is more unknown than known about nervous
systems, we do know enough about neurons, synapses, and small aggregates
of neurons to begin to simulate their structure and function.

  The issue of how much to simulate is a valid and interesting one.  Natural
nervous systems have had many millions of years to evolve their solutions
(much longer than we hope to have to take with our artificial systems), but
then they have been both undirected in their evolution and constrained by
the resources and techniques available to biological systems.  This would
seem to argue for only limited biological relevance to artificial solutions:
e.g., where neurons have axons, we can simply use wires.  On the other hand,
natural systems also have the tendency to take a liability and make it into
a virtue.  For example, while axons are not simple 'wires', and in fact are
much slower, larger, and more complex than wires, they can also act as active
parts of the whole system, enabling such things as temporal differentiation
to occur easily and without much in the way of cellular overhead.  Thus,
while we probably will not want to create fully detailed simulations of
neurons, synapses, and neural structures, we do need to understand what
advantages are embodied in the natural approach and extract them for use in
our artifices while carefully excluding those things that exist only by
being carried along with the main force of the evolutionary current.
  All of this is not to say that AI researchers shouldn't look at "weird and
wonderful models" of perception and cognition; this is after all precisely
what they have been doing for the past thirty years.  The only assertion here
is that this approach has not yielded much in the way of fertile results
(beyond the notable products such as rule-based systems, windowed displays,
and the mouse :-) ), and that with new technology, new knowledge of biological
systems, and a new generation of researchers, the one proven method for
achieving real-time sensation, perception, and cognition ought to be given
its chance to fail.

Responses welcomed.

--
Mike Sellers                           ...!tektronix!sequent!mntgfx!msellers
Mentor Graphics Corp., EPAD            msellers@mntgfx.MENTOR.COM
"AI is simply the process of taking that which is meaningful, and making it
meaningless."  -- Tom Dietterich  (admittedly, taken out of context)

------------------------------

Date: Wed, 15 Jun 88 09:20:07 pdt
From: Ray Allis <ray@BOEING.COM>
Subject: Re: Human-human communication


In AIList Digest V7 #31, Stephen Smoliar writes:

> First of all, NO dance notation provides sufficient information for the
> exact reproduction of a movement.

Likewise there's not sufficient information in an English description
of "red" to impart knowledge to a listener.

> Ultimately, I tend to agree with Gilbert that the problem is not in the
> notation but in what is trying to be communicated.  Video is as valuable
> in reconstructing dances as it is in gymnastics, but there is still no
> substitute for "shaping" bodies.  What Gilbert calls "memory positions"
> I have always called "muscular memory;"  and I'm afraid there is no substitute
> for physical experience when it comes to acquiring it.

Your experience with dance notation is illustrative of a characteristic
of languages in general, and a seriously flawed assumption in "AI".
"Natural" language *evokes* experience in a listener; language can't
*impart* experience. No amount of English description will produce the
experience of "red" in a congenitally blind person, or a computer, or
produce the same quality of associations with "flame" and "danger" and
"hot" and "blushing" that a sighted person can hardly avoid.

In order for a computer (read digital computer) to "understand" human
language, it must have *experience* which the language can evoke.  "Data
structures" won't do, because they are symbols themselves, not experience.
In iconic languages, (e.g. the dance notations you mention) there is a
small amount of information conveyed because the perception of the icon
itself is an experience.  Seeing a picture of a platypus is similar to
seeing a platypus.  Reading or listening to a description of a platypus
is not.  Hearing "cerulean" described in English conveys no information,
and any "understanding" on the part of the receiver must be *created*
from that receiver's experience.

It is this line of thought which led me several years ago to discard the
Physical Symbol System Hypothesis.  Physical symbol systems are *not*
sufficient to explain or reproduce human thought and behavior.  They are
formal systems (form-al: concerning form, eliminating content).  The
PSSH is, however, a useful guide for most of what is called "AI", which is
the mechanization of formal logic, an engineering task and
properly a part of computer science.  That task has nothing to do with the
creation of intelligence.  I can certainly understand the irritation
of the engineers at people who want to re-think such a job after it's
started.

------------------------------

End of AIList Digest
********************

∂16-Jun-88  2115	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #36  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 16 Jun 88  21:14:40 PDT
Date: Thu 16 Jun 1988 23:42-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #36
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 17 Jun 1988       Volume 7 : Issue 36

Today's Topics:

 Philosophy:
  Me and Karl Kluge
  The Social Construction of Reality
  Cognitive AI vs Expert Systems
  Dance notation
  definition of information

 Announcements:
  object-oriented database workshop: oopsla '88
  LP'88 Conference Announcement

----------------------------------------------------------------------

Date: 10 Jun 88 14:42:15 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Me and Karl Kluge (no flames, no insults, no abuse)

In article <43@aipna.ed.ac.uk> Richard Caley writes:
>In <1312@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes
>> work other than SOAR and ACT* where the Task Domain has been formally
>> studied before the computer implementation?
>Natural language processing. Much ( by no means all ) builds on the work
>of some school of linguistics.

and ignores most of the work beyond syntax :-) Stick to the
computable, not the imponderable.  Hmm pragmatics.  I know there is AI
work on pragmatics, but I don't know if a non-computational linguist
working on semantics and pragmatics would call it advanced research work.

>One stands on the shoulders of giants. Nobody has time to research
>their subject from the ground up.

But what when there is no ground? Then what?  Hack first or study?
Take intelligent user interfaces, hacking first well before any study
of what problems real users on real tasks in real applications face
(exception Interlisp-D interface, but this was an end-user project!).

>According to your earlier postings, if ( strong ) AI was successful it
>would tell us that we have no free will, or at least that we can not assume
>we have it. I don't agree with this but it is _your_ argument and something
>which a computer program could tell us.
Agreed.  Anything ELSE though that may be useful? (I accept that proof
of our logical (worse than physical) determinism would be a revelation)

>What do the theories of physics tell us that we couldn't find out by
>studying objects.

Nothing, but as these theories are based on the study of objects, we
know that if we were to repeat the study, we would confirm the
theories. Strong AI on the other hand conducts NO study of people, and
thus if we studied people in an area colonised at present by hackers
only, then we have no reason to believe that we would confirm the
model in the hacker's mind.  There is no similarity at all between the
theories of physics and computational models of human behaviour, it
just so happens that some (like ACT*) do have an empirical input.  The
problem with strong AI is that you don't have to have this input.  No one
would dare call something a theory in physics which was based solely on
one individual's introspection constrained only by their programming
ability. In AI, it seems acceptable (Schank's work, for example; can
anyone give me references to the substantive critiques from within AI?
I know of ones by linguists).

>>     The proper object of the study of humanity is humans, not machines
>Well, there go all the physical sciences, botany, music, mathematics . . .

And there goes your parser too.  "of humanity" attaches to "the
study".  Your list is not such a study, it is a study of the natural
world and some human artifacts (music, mathematics).  These are not
studies of people, OK, and they thus tell us nothing essential about
ourselves, except that we can make music and maths, and that we can
find structures and properties for these artifacts.  A study of
artifacts, cognitive, aesthetic or otherwise, is not necessarily a
study of humanity.  The latter will embrace all artifacts, not as
objects in themselves but for their possible meanings.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 13 Jun 88 15:14:53 GMT
From: mcvax!ukc!its63b!aipna!rjc@uunet.uu.net  (Richard Caley)
Subject: Re: Me and Karl Kluge


In <1342@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>In article <43@aipna.ed.ac.uk> Richard Caley writes:

>>Natural language processing...builds on the work of linguistics.

>and ignores most of the work beyond syntax :-)

Some does.

>Hmm pragmatics.  I know there is AI
>work on pragmtics, but I don't know if a non-computational linguist
>working on semantics and pragmatics would call it advanced research work.

The criterion for it being interesting would not necessarily be explaining
something new; explaining something in a more extensible/elegant/practical/
formal way (choose your own hobby horse) is just as good.

>But what when there is no ground? Then what?  Hack first or study?

Maybe my metaphor was not well chosen. Rather than building up it
might be better to see the computational work building down, trying to
ground its borrowed theories ( of language or whatever ) in something
other than their own symbols and/or set theory.

Your question then becomes: what do you do when there is nothing to hang
your work from?  In that case you should go out and do the groundwork or,
better, get someone trained in the empirical methods of that field to do it.

>(exception Interlisp-D interface, but this was an end-user project!).

ARGH, don't even mention it; it just lost my day's work for me.

>(I accept that proof of our logical (worse than physical) determinism
>would be a revelation)

Well physical determinism wouldn't be a revelation to many of us
who assume it already. I don't know your definition of logical determinism
so I can't say whether that is worse. If it is meant to apply to all
possible outcomes of strong AI, it can't imply lack of free will (read
as the property of making your own decisions, rather than exclusion from
causality), so what does it imply that is so shocking?

>>What do the theories of physics tell us that we couldn't find out by
>>studying objects.
>Nothing.

But they do. Studying objects just tells you what has happened. A (correct)
theory can be predictive, can be explanatory, can allow one to deduce
properties of the system under study which are not derivable from the
data.

>Strong AI on the other hand conducts NO study of people,

Strong AI does not require the study of people; it is not "computational
psychology". AI workers study people in order to avoid reinventing wheels.

>>>     The proper object of the study of humanity is humans, not machines

>And there goes your parser too.

 Oops. I'm afraid I read it as parallel to "The proper study of man is man".

It does seem to be something of a hollow statement; I can't think of
many people who study machines as a study of humanity ( except in the
degenerate case, if one believes humans are machines ). Some people
use machines as tools to study human beings, some study and build
machines.

------------------------------

Date: 15 Jun 88 15:39:28 GMT
From: amdahl!pyramid!prls!philabs!gcm!dc@ames.arpa  (Dave Caswell)
Subject: Re: The Social Construction of Reality

In article <514@dcl-csvax.comp.lancs.ac.uk> Simon Brooke writes [. . .]:
.Wells, like fanatical adherents of other ideologies before him, first
.hurls abuse at his opponents, and finally, defeated, closes his ears. I
.note that he is in industry and not an academic; nevertheless he is
.posting into the ai news group, and must therefore be considered part of
.the American AI community. I haven't visited the States; I wonder if
.someone could tell me whether this extraordinary combination of ignorance
.and arrogance is frequently encountered in American intellectual life?

Yes it is extremely common, and not just within the AI community.


--
Dave Caswell
Greenwich Capital Markets                             uunet!philabs!gcm!dc
If it could mean something, I wouldn't have posted about it! -- Brian Case

------------------------------

Date: 17 Jun 88 01:17:03 GMT
From: krulwich-bruce@yale-zoo.arpa  (Bruce Krulwich)
Subject: Cognitive AI vs Expert Systems (was Re: Me, Karl, Stephen,
         Gilbert)

In article <19880615061536.5.NICK@INTERLAKEN.LCS.MIT.EDU>
dg1v+@ANDREW.CMU.EDU (David Greene) writes:
>I'm not advocating Mr. Cockton's views, but the limited literature breadth in
>many AI papers *is* self-defeating.  For example, until very recently, few
>expert system papers acknowledged the results of 20+ years of psychology
>research on Judgement and Decision Making.

This says something about expert systems papers, not about papers discussing
serious attempts at modelling intelligence.  It is wrong to assume (as both
you and Mr. Cockton do) that the expert system work typical of the business
world (in other words, applications programs) is at all similar to work done
by researchers investigating serious intelligence.  (See work on case based
reasoning, explanation based learning, expectation based processing, plan
transformation, and constraint based reasoning, to name a few areas.)


Bruce Krulwich

Net-mail: krulwich@{yale.arpa, cs.yale.edu, yalecs.bitnet, yale.UUCP}

        Goal in life: to sit on a quiet beach solving math problems for a
                      quarter and soaking in the rays.

------------------------------

Date: 13 Jun 88 08:44:22 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Human-human communication

In article <905@papaya.bbn.com> Hunter Barr writes:
>But you ignore the existance of useful dance notations.  I don't know
>much about dance notation, and I am sure there is much lacking in it--

For an accessible introduction to the problem of dance notations, see
Singh, Beatty, Booth and Ryman in Siggraph'83.  You can chase up
references from here.  All I can add is that many choreographers (All
I have encountered) do NOT use notations, as none are up to the job.
There's research at New York into using figure animation, computer
graphics and body sensors (Columbia I think).

--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: Wed, 15 Jun 88 23:06:13 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Re: Dance notation


     Smoliar's comment that no dance notation provides sufficient information
for the exact reproduction of a movement is true as far as it goes, but
misleading.  Modern dance notation, by which I mean Labanotation or,
as it is sometimes called, kinetography Laban, is designed to convey,
in a concise form, the constraints on a movement necessary to produce the
desired effect.  Although the notation provides for detailed description
of arm and hand motions, for example, the choreographer will not ordinarily
specify these unless they are essential to the movement desired.  Movements
not specified are left to the discretion of the dancer.  Placing the
dancer under unnecessarily tight constraints will result in an unnaturally
stiff performance (it is an ideal in ballet to achieve fluidity
despite overconstraint by the choreographer, but the ideal is reached only
in the better professional companies and at high cost to both company and
dancers).  Nor is it usually necessary.  Thus the tendency to specify only
the necessary.

     The discretion of the dancer in executing a movement specified only
in outline, or what is referred to as "motif writing" in Labanotation, is
not unlimited.  There are defaults.  Where forward motion is specified
without additional annotation, a normal walk is assumed.  There are
sufficient conventions to produce a generally similar performance should
two dancers perform from the same notation.

     As a technical tour de force, it is quite possible, by the way, to
record in great detail the positions of the human body during a dance.
Both the inventor of VPL's "Z-glove" and the MIT Media Lab have developed
systems for so doing.  It is not at all clear, though, what one does with
the information so obtained.  One can play it back through an animation
system, of course.  But it is not likely to be useful to a dancer.

                                        John Nagle

------------------------------

Date: Thu, 16 Jun 88 10:49:16 PDT
From: Bob Riemenschneider <rar@ads.com>
Subject: Re↑2: definition of information

=>   From: bnevin@CCH.BBN.COM (Bruce E. Nevin)
=>
=>   My understanding is that Carnap and Bar-Hillel set out to establish a
=>   "calculus of information" but did not succeed in doing so.

I'm not sure what your criteria for success are, but it looks pretty good to
me.  They didn't completely solve the problem of laying a foundation for
Carnap's approach to inductive logic.  (But it certainly advanced the
state of the art--see, e.g.,  the second of Carnap's _Two Essays on Entropy_,
which was, obviously, heavily influenced by this work.)  Advances have been
made since the original paper as well: see the bibliographies for Hintikka's
paper and Carnap's later works on inductive logic (especially "A System of
Inductive Logic" in _Studies in Inductive Logic_, vols. 1 and 2).
[Disclaimer: There are very serious problems with Carnap's approach
to induction, which I have no wish to defend.]

=>   Communication theory refers to a physical system's capacity to transmit
=>   arbitrarily selected signals, which need not be "symbolic" (need not mean
=>   or stand for anything).  To use the term "information" in this connection
=>   seems Pickwickian at least.  "Real information"?  Do you mean the
=>   Carnap/Bar-Hillel program as taken up by Hintikka?  Are you saying that
=>   the latter has a useful representation of the meaning of texts?

The Carnap and Bar-Hillel approach is based on the idea that the information
conveyed by an assertion is that the actual world is a model of the
sentence (or: "... is a member of the class of possible worlds in which
sentence is true", or: "the present situation is a situation in which the
sentence is true", or: <fill in your own, based on your favorite
model-theory-like semantics>).  This is certainly the most popular formal
account of information.  They, and Hintikka, count state descriptions
to actually calculate the amount of information an assertion conveys, but
that's just because Carnap (and, I suppose, the others) are interested in
the logical notion of probability.  If you start with a probability measure
over structures (or possible worlds, or situations, or ... ) as given, you
can be much more elegant--see, e.g., Scott and Krauss's paper on probabilities
over L-omega1-omega-definable classes of structures.  (It's in one of those
late-60's North-Holland "Studies in Logic" volumes on inductive logic, maybe
_Aspects of Inductive Logic_.)  I don't recall what, if anything, you said
about the application you have in mind, but, as the dynamic logic crowd
discovered, L-omega1-omega is a natural language for talking about computation
in general.
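
(For concreteness, and as my gloss rather than anything stated in the
original exchange: over a language with finitely many state
descriptions and the logical measure m, the two Carnap/Bar-Hillel
measures work out as below.)

% Carnap/Bar-Hillel semantic information, assuming a finite set S of
% state descriptions and m(h) = |{ s in S : s satisfies h }| / |S|
\mathrm{cont}(h) = 1 - m(h)               % content measure
\mathrm{inf}(h)  = \log_2 \frac{1}{m(h)}  % information measure

So a sentence true in every state description carries no information,
and the logically less probable a sentence is, the more information it
conveys.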

=>   Bruce Nevin
=>   bn@cch.bbn.com
=>   <usual_disclaimer>

                                                        -- rar

------------------------------

Date: 13 Jun 88 16:11:58 GMT
From: ames!smu!ti-csl!mips.ti.com!fordyce@spam.istc.sri.com (David
      Fordyce)
Subject: OBJECT-ORIENTED DATABASE WORKSHOP: OOPSLA '88
Article-I.D.: ti-csl.51420


                  OBJECT-ORIENTED DATABASE WORKSHOP

                 To be held in conjunction with the

                             OOPSLA '88

              Conference on Object-Oriented Programming:
                 Systems, Languages, and Applications

                          26 September 1988

                    San Diego, California, U.S.A.


Object-oriented database systems combine the strengths of
object-oriented programming languages and data models, and database
systems.  This one-day workshop will expand on the theme and scope of a
similar OODB workshop held at OOPSLA '87.  The 1988 Workshop will
consist of the following four panels:

  Architectural issues: 8:30 AM - 10:00 AM

    Therice Anota (Graphael), Gordon Landis (Ontologic),
    Dan Fishman (HP), Patrick O'Brien (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Transaction management for cooperative work: 10:30 AM - 12:00 noon

    Bob Handsaker (Ontologic), Eliot Moss (Univ. of Massachusetts),
    Tore Risch (HP), Craig Schaffert (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Schema evolution and version management:  1:30 PM - 3:00 PM

    Gordon Landis (Ontologic), Mike Killian (DEC),
    Brom Mehbod (HP), Jacob Stein (Servio Logic),
    Craig Thompson (TI), Stan Zdonik (Brown University)

  Query processing: 3:30 PM - 5:00 PM

    David Beech (HP), Paul Gloess (Graphael),
    Bob Strong (Ontologic), Jacob Stein (Servio Logic),
    Craig Thompson (TI)


Each panel member will present his position on the panel topic in 10
minutes.  This will be followed by questions from the workshop
participants and discussions.  To encourage vigorous interactions and
exchange of ideas between the participants, the workshop will be limited
to 60 qualified participants.  If you are interested in attending the
workshop, please submit three copies of a single page abstract to the
workshop chairman describing your work related to object-oriented
database systems.  The workshop participants will be selected based on
the relevance and significance of their work described in the abstract.

Abstracts should be submitted to the workshop chairman by 15 August 1988.
Participants selected will be notified by 5 September 1988.

                        Workshop Chairman:

                       Dr. Satish M. Thatte
           Director, Information Technologies Laboratory
                Texas Instruments Incorporated
                   P.O. Box 655474, M/S 238
                        Dallas, TX 75265

                      Phone: (214)-995-0340
  Arpanet: Thatte@csc.ti.com   CSNet: Thatte%ti-csl@relay.cs.net

Regards, David

------------------------------

Date: 16 Jun 88 21:06:21 GMT
From: nyser!cmx!skolem!kabowen@itsgw.rpi.edu  (Ken Bowen)
Subject: LP'88 Conference Announcement

LP'88: 5th Conference on Logic Programming & 5th Symposium on Logic Programming
August 15-19, 1988
University of Washington, Seattle, Washington

Information and telephone (credit card) registration:
Conference Registration, University of Washington:  (206)-543-2310

##TUTORIALS (All week):

INTRODUCTION TO PROLOG   (Mon,  8/15  --  8:30-5:00)  Christopher
Mellish, University of Edinburgh
An introduction to Prolog for engineers, programmers, and  scien-
tists  with  no background in the language.  Tutorial Text:  Pro-
gramming in Prolog,  3rd ed. W. Clocksin & C. Mellish,  Springer-
Verlag.

ABSTRACT  INTERPRETATION   (Mon  8/15   --   1:30-5:00)   Maurice
Bruynooghe, Universiteit Leuven
Directed at the advanced Prolog  programmer,  the  tutorial  will
develop  a  general framework for extracting global properties of
logic programs (e.g., mode & type inferencing,  detecting  deter-
minism)  via the use of abstract interpretation.  The course will
sketch:  (1) A formal framework for  abstract  interpretation  of
logic  programs which relies on familiar notions about the execu-
tion of logic programs and uses only a small amount of mathemati-
cal  machinery  concerning  partial  orders;  (2)  The process of
developing an application within this framework;  (3)  High-level
comments  on  the structure of a correctness proof of an applica-
tion.

IMPLEMENTATION OF PROLOG (Tues, 8/16 -- 8:30-12:00)  D.H.D.  War-
ren, Manchester Univ.
This tutorial presents the detailed design of the Prolog  engine,
now  known  as  the WAM.  The tutorial provides a detailed under-
standing of the WAM and why WAM-based Prolog  systems  are  effi-
cient.   It also gives insight into how to write efficient Prolog
programs for WAM-based compilers.  Attendees  should  know  basic
Prolog  programming  and  it  would help to have some familiarity
with compiler technology.

PARALLEL EXECUTION SCHEMES  (Thurs, 8/18 -- 8:30-5:00) L.  Kale',
Univ. of Illinois
This tutorial will describe the individual schemes  for  parallel
execution  of  logic programs that have been proposed so far, and
develop an understanding of their place in the spectrum along the
dimensions of: degree of parallelism, overhead, targeted applica-
tions, and type of multi-processor best suited  for  the  scheme.
The  tutorial  will  be of interest to anyone planning to build a
parallel logic programming system, as well as beginning research-
ers  in the area.  A basic knowledge of logic programming will be
presumed.

CONSTRAINT LOGIC PROGRAMMING   (Tues,  8/16  --  1:30-5:00)  J-L.
Lassez et al., IBM
CLP offers a framework to reason with and  about  constraints  in
the  context of Logic Programming.  The fundamental principles of
this paradigm are presented in order to illustrate the expressive
power  of constraints and to show how they naturally merge with a
Logic Programming rule-based system.  Next the design and  imple-
mentation  of  a  CLP system is discussed, focusing on efficiency
issues of constraint solving, followed  by  the  descriptions  of
several applications.  A basic knowledge of Prolog is presumed.

CLP AND OPTIONS  TRADING   (Wed  8/17  --  8:30-12:00)  Catherine
Lassez, IBM and Fumio Mizoguchi, Science Univ. of Tokyo
This tutorial will explore the application  of  Constraint  Logic
Programming (CLP) to financial problems, in particular to options
trading.   The  chosen  examples  will  demonstrate  the  special
strengths  of  combined  symbolic and numeric constraint-oriented
reasoning in a logic programming setting.  Knowledge  of  CLP  or
attendance at the "Introduction to CLP" tutorial is essential for
this course.

LOGIC PROGRAMMING & LEGAL REASONING   (Wed  8/17  --  8:30-12:00)
Robert  Kowalski,  Imperial  College,  and Marek Sergot, Imperial
College
The unique characteristics of legal  reasoning  are  apparent  not
only in legal domains, but underlie administrative procedures and
many data processing applications.  The use of logic for  analyz-
ing  legal  reasoning has a long tradition.  Computer implementa-
tion of legal reasoning involves representing and reasoning with
legal  language,  the relationship between rules and regulations,
and the policies they implement.  The tutorial will  examine  the
use  of  logic programming for analyzing such questions, for both
real and hypothetical cases.

PRACTICAL PROLOG FOR REAL PROGRAMMERS  (Thurs, 8/18 -- 1:30-5:00)
Richard O'Keefe, Quintus Computer Systems
This tutorial assumes that you understand the elementary  aspects
of  Prolog programming, such as recursion, pattern matching, par-
tial data structures, and so on, and want to know how to use Pro-
log  to  build  practical  programs.  Topics covered will include
"choice points and how to use the cut", "setofPhow it  works  and
what  it  is  good  for",  "efficient  data  structures",  "mixed
language programming", and "programming methodology".  All topics
will be illustrated by working code.

LOGIC GRAMMARS FOR NL & COMPILING  (Fri, 8/19 -- 8:30-12:00)  Har-
vey Abramson, Univ. of British Columbia
This tutorial assumes a  basic  knowledge  of  Logic  Programming
techniques,  but  does  not assume a detailed knowledge either of
linguistics or of compilation techniques.  The tutorial will show
how  logic programming naturally applies to both natural and for-
mal grammars.  Tutorial topics include: 1) Use  of  Metamorphosis
Grammars and Definite Clause Grammars to produce derivation trees
and semantic transforms;  2) Use of related  grammar  formalisms;
3)   Compilation  from  natural language to logical form and from
programming languages to  machine  code  using   Definite  Clause
Translation Grammars, a logical version of Attribute Grammars; 4)
Top-down versus bottom-up parsing, chart-parsing, and the use  of
parallelism and concurrency.


##INVITED SPEAKERS:

Layman E. Allen (U. Mich)  Multiple  Logical  Interpretations  of
Legal Rules: Impediment or Boon for Expert Systems?

William F. Bayse (FBI) Law Enforcement Applications of Logic Pro-
gramming

Alan Bundy (U. Edinburgh) A Broader Interpretation  of  Logic  in
Logic Programming

Giorgio Levi (U. Pisa) Models, Unfolding Rules, and Fixpoint  Se-
mantics

Carlo Zaniolo (MCC) Design  &  Implementation  of  a  Logic-Based
Language for Data Intensive Applications


OVERALL SCHEDULE:

Sunday (8/14):
3:30-5:30:      Registration
5:30--  :       Informal reception
Monday (8/15):
9:00-9:30:      Opening Session
9:30-10:30      Layman Allen
10:30-11:00     Break
11:00-12:30:    Paper sessions:  LP & FP #1;  E & V #1;  Imp #1
12:30-2:00:     Lunch
2:00-3:30:      Paper sessions:  PrE #1; SemN#1; OR// #1
3:30-4:00:      Break
4:00-5:30:      Paper sessions: Ap #1; SemI #1; Imp #2
                Tentative: Panel on Prolog Standards
5:30--  :       Conference reception
Tuesday (8/16):
8:30-10:00:     Paper sessions: PrS; Cx + MT; //C #1
10:00-10:30:    Break
10:30-12:00:    Paper sessions: Obj + E & V #2; RP #1; //C #2
12:00-1:30:     Lunch
1:30-3:30:      Paper sessions: Meta; CN + GP #1; OR// #2
3:30-4:00:      Break
4:00-5:00:      Giorgio Levi
7:30--  :       Demonstrations
Wednesday (8/17):
8:30-10:00:     Paper sessions:  AbI # 1; &-OR// #1; GP #2
10:00-10:30:    Break
10:30-12:00:    Alan Bundy
12:00-1:30:     Lunch
1:30--  :       Free afternoon
Thursday (8/18):
8:30-10:00:     Paper sessions:  LP&FP#2 + Db#1; RP#2+Types#1; //C # 3
10:00-10:30:    Break
10:30-12:00:    Paper sessions:  Ap #2; Imp #3; SemN #2
12:00-1:00:     Lunch
1:00-2:30:      Paper sessions:  Db #2; SemI#2+Time; Types #2
2:30-3:30:      Paper sessions:  UC; PrE #2; &-//
3:30-4:00:      Break
4:00-5:00:      William Bayse
5:30--  :       Conference Dinner--Speaker:  J. Alan Robinson
Friday (8/19):
8:30-10:00:     Paper sessions:  Ap #3; AbI #2; &-OR// #2
10:00-10:30:    Break
10:30-11:30:    Carlo Zaniolo
11:30-12:00:    Panel/Closing session

##CONTRIBUTED PAPERS:

%%APPLICATIONS & PROGRAMMING METHODOLOGY

* (Ap) Applications
P.G. Bosco, C. Cecchi and C. Moiso, Exploiting the Full Power  of
Logic Plus Functional Programming (#1)
Tony Kusalik and C. McCrosky, Improving First-Class Array Expres-
sions Using Prolog (#1)
Toramatsu Shintani, A Fast Prolog-based Inference Engine  KORE/IE
(#1)
M. Dincbas, H. Simonis and P. van Hentenryck, Solving a  Cutting-
Stock Problem in Constraint Logic Programming (#2)
Catherine Lassez and Tien Huynh, A CLP(R)  Option  Analysis  Sys-
tem(#2)
Peter B. Reintjes, A VLSI Design Environment in PROLOG (#2)
T.W.G. Docker, SAME - A Structured Analysis Tool and  its  Imple-
mentation in Prolog (#3)
Kevin Steer, Testing Data Flow Diagrams with PARLOG (#3)
*(CN) Constructive negation
David Chan, Constructive Negation Based on the Completed Database
Adrian Walker, Norman Foo, Andrew Taylor and Anand  Rao,  Deduced
Relevant Types and Constructive Negation
*(Db) Databases
Raghu Ramakrishnan, Magic Templates: A Spellbinding  Approach  to
Logic Programming (#1)
P. Franchi-Zannettacci and I. Attali, Unification-free  Execution
of TYPOL Programs by Semantic Attributes Evaluation (#2)
D.B. Kemp and  R.W.  Topor,  Completeness  of  a  Top-Down  Query
Evaluation Procedure for Stratified Databases (#2)
Hirohisa Seki and Hidenori Itoh, An Evaluation Method of  Strati-
fied Programs under the Extended Closed World Assumption (#2)
*(GP) Grammar & Parsing
R. Trehan and P.F. Wilk, A Parallel Chart Parser for the  Commit-
ted Choice Non-Deterministic (CCND) Logic Languages (#1)
Harvey Abramson, Metarules and an Approach to Conjunction in  De-
finite Clause Translation Grammars: Some Aspects of... (#2)
Veronica Dahl, Representing Linguistic  Knowledge  through  Logic
Programming (#2)
Lynette Hirschman, William  C.  Hopkins  and  Robert  Smith,  OR-
Parallel  Speed-up  in  Natural Language Processing: A Case Study
(#2)
*(LP&FP) Logic & Functional programming
Jean H. Gallier and Tomas Isakowitz,  Rewriting  in  Order-sorted
Equational Logic (#1)
Claude Kirchner, Order-Sorted Equational Unification (#1)
Joseph L. Zachary,  A Pragmatic Approach to Equational Logic Pro-
gramming (#1)
Staffan Bonnier and Jan Maluszynski, Towards a Clean Amalgamation
of Logic Programs with External Procedures (#2)
Steffen Holldobler, From Paramodulation to Narrowing (#2)
*(Meta) Meta-programming
A. Bruffaerts and E. Henin, Proof Trees for Negation  as  Failure
or Yet Another Prolog Meta-Interpreter
Patrizia Coscia, Paola Franceschi, Giorgio Levi et al., Meta-
Level  Definition and Compilation of Inference Engines in the Ep-
silon Logic Programming Environment
C.S. Kwok and M.J. Sergot, Implicit Definition of Logic Programs
Arun Lakhotia and Leon Sterling, Composing Prolog Meta-
Interpreters
*(Obj) Objects
Weidong Chen and D.S. Warren, Objects as Intensions
John S. Conery, Logical Objects
* (PrE) Programming environments
Miguel Calejo and Luis Moniz Pereira, A Framework for Prolog  De-
bugging (#1)
Dave Plummer, Coda: An Extended Debugger for PROLOG (#1)
Ehud Shapiro and Yossi Lichtenstein, Abstract Algorithmic  Debug-
ging (#1)
Mike Brayshaw and Marc  Eisenstadt,  Adding  Data  and  Procedure
Abstraction to the Transparent Prolog Machine (TPM) (#2)
Michael Gorlick and Carl Kesselman, Gauge: A  Workbench  for  the
Performance Analysis of Logic Programs (#2)
 *(PrS) Problem-solving & novel techniques
Jonas Barklund, Nils Hagner and Malik Wafin, Condition Graphs
Philippe Codognet, Christian  Codognet  and  Gilberto  File,  Yet
Another Intelligent Backtracking Method
Sei-ichi Kondoh and Takashi Chikayama, Macro Processing in Prolog
 *(Time) Temporal reasoning
Kave Eshghi, Abductive Planning with Event Calculus
*(Ty) Types
Paul Voda, Types of Trilogy (#1)
M.H. van Emden, Conditional Answers for Polymorphic  Type  Infer-
ence (#2)
Uday S. Reddy, Theories of Polymorphism for Predicate Logic  Pro-
grams (#2)
Jiyang Xu and David S. Warren, A Type Inference System for Prolog
(#2)
*(UC) Unification & constraints
D. Scott Parker and R.R. Muntz, A Theory of Directed  Logic  Pro-
grams and Streams
Graeme S. Port, A Simple Approach to finding the Minimal  Subsets
of Equations Needed to Derive a Given Equation by Unification

%%THEORY & PROGRAM ANALYSIS

*(AbI) Abstract interp. &  data dependency
Maurice Bruynooghe and Gerda Jenssens, An  Instance  of  Abstract
Interpretation Integrating Type and Mode Inferencing, Part 1: the
abstract domain (#1)
Manuel Hermenegildo, Richard Warren & Saumya Debray, On the Prac-
ticality of Global Flow Analysis of Logic Programs (#1)
Annika Waern, An Implementation Technique for  the  Abstract  In-
terpretation of Prolog (#1)
Saumya Debray, Static Analysis of Parallel Logic Programs (#2)
Kim Marriott and Herald Sondergaard, Bottom-up Abstract Interpre-
tation of Logic Programs (#2)
Will Winsborough and Annika Waern, Transparent And-Parallelism in
the Presence of Shared Free Variables (#2)
*(Cx) Complexity
K.R. Apt and Howard A. Blair, Arithmetic Classification  of  Per-
fect Models of Stratified Programs
Stephane Kaplan, Algorithmic Complexity of Logic Programs
*(E&V) Extensions and variations of LP
Donald Loveland and Bruce T. Smith, A Simple Near-Horn Prolog In-
terpreter (#1)
Dale Miller and Gopalan Nadathur, An Overview of lambda-PROLOG (#1)
Jack Minker, Jorge Lobo  and  Arcot  Rajasekar,  Weak  Completion
Theory for Non-Horn Programs (#1)
Bharat Jayaraman and Anil Nair, Subset-logic Programming:  Appli-
cation and Implementation (#2)
*(RP) Reasoning about programs
Charles Elkan and David McAllester, Automated Inductive Reasoning
about Logic Programs (#1)
Laurent Fribourg, Equivalence-Preserving Transformations  of  In-
ductive Properties of Prolog Programs (#1)
K. Marriott, L. Naish and J.L. Lassez, Most Specific  Logic  Pro-
grams (#1)
H. Fujita, A. Okumura and K. Furukawa, Partial Evaluation of  GHC
Programs Based on UR-set with Constraint Solving (#2)
John Hannan and Dale Miller, Uses of Higher-Order Unification for
Implementing Program Transformers (#2)
*(SemI) Semantic issues
Aida Batarekh and V.S. Subrahmanian, Semantical  Equivalences  of
(non-Classical) Logic Programs (#1)
Kenneth Kunen, Some Remarks on the Completed Database (#1)
Maurizio Martelli, M. Falaschi, G. Levi and C. Palamidessi, A New
Declarative Semantics for Logic Languages (#1)
D. Pedreschi and P. Mancarella, An Algebra of Logic Programs (#2)
Stan Raatz and Jean H. Gallier, A Relational Semantics for  Logic
Programming (#2)
V. S. Subrahmanian, Intuitive  Semantics  for  Quantitative  Rule
Sets (#2)
*  (SemN) Semantics of negation
Melvin Fitting and Miriam Ben-Jacob, Stratified and  Three-valued
Logic Programming Semantics (#1)
Vladimir Lifschitz and Michael Gelfond, The Stable  Model  Seman-
tics for Logic Programming (#1)
Teodor  Przymusinski,  Semantics  of  Logic  Programs  and   Non-
monotonic Reasoning (#1)
Yves Moinard, Pointwise Circumscription is Equivalent  to  Predi-
cate Completion (sometimes) (#2)
Halina Przymusinska and Teodor Przymusinski, Weakly Perfect Model
Semantics for Logic Programs (#2)
* (MT) Miscellaneous Theory
M.A. Nait Abdallah, Heuristic Logic and the Process of Discovery

##IMPLEMENTATION & PARALLELISM

*  (&//) AND-parallelism
V. Kumar and Y-J Lin, AND-parallel Execution of Logic Programs on
a Shared Memory Multiprocessor: A Summary of Results
Kotagiri Ramamohanarao and Zoltan Somogyi, A Stream  AND-Parallel
Execution Algorithm with Backtracking
*(& - OR //) AND-OR parallelism
P. Biswas, Su and Yun, A Scalable Abstract Machine Model to  Sup-
port  Limited  OR (LOR)/Restricted-AND Parallelism (RAP) in Logic
Programs (#1)
K.W. Ng and H.F. Leung, The Competition Model for Parallel Execu-
tion of Logic Programs (#1)
Prabhakaran Raman and Eugene W. Stark, Fully Distributed,  AND-OR
Parallel Execution of Logic Programs (#1)
P. Biswas and Tseng, A Data-Driven Parallel Execution  Model  for
Logic Programs (#2)
Jacques Chassin de Kergommeaux and Philippe Robert,  An  Abstract
Machine to Implement Efficiently OR-AND Parallel Prolog (#2)
L.V. Kale, B. Ramkumar and W.W. Shu, A  Memory  Organization  In-
dependent  Binding  Environment for AND and OR Parallel Execution
of Logic Programs (#2)
* (Imp) Implementation
Hamid Bacha, MetaProlog Design and Implementation (#1)
Gerda Janssens, Bart Demoen & Andre Marien,  Register  Allocation
for WAM, Based upon an Adaptable Unification Order (#1)
Jonathan Mills and Kevin Buettner, Assertive Demons (#1)
D.A. Chu and F.G. McCabe, SWIFT - a New Symbolic Processor (#2)
Subash Shankar, A Hierarchical  Associative  Memory  Architecture
for Logic Programming Unification (#2)
Charles Stormon, Mark Brule and John Oldfield et al.,
An Architecture Based in Content-Addressable Memory for the Rapid
Execution of Prolog (#2)
David Hemmendinger, A Compiler and  Semantic  Analyzer  Based  on
Categorial Grammars (#3)
Feliks Kluzniak, Compile Time Garbage Collection for Proportional
Prolog (#3)
K. Kurosawa, S. Yamaguchi, S. Abe and  T.  Bandoh,    Instruction
Architecture  for  High  Performance  Integrated Prolog Processor
IPP (#3)
*(Or//) OR-parallelism and parallel Prolog
Khayri Ali, OR-Parallel Execution of Prolog on BC-Machine (#1)
Lee Naish, Parallelizing NU-Prolog (#1)
Ross Overbeek, Mats Carlsson and Ken Danhof, Practical Issues Re-
lating to the Internal Database Predicates in an OR-Parallel Pro-
log: .... (#1)
Hiyan Alshawi and D.B. Moran, The Delphi Model and  some  Prelim-
inary Experiments (#2)
Ewing Lusk, Ralph Butler, Terry Disz and Robert Olsen et al.,
Scheduling OR-Parallelism: an Argonne Perspective (#2)
 *(// C) Concurrent sys: GHC, Parlog, CP etc.
Atsuhiro Goto, Y. Kimura, T.  Nakagawa  and  T.  Chikayama,  Lazy
Reference Counting - An Incremental Garbage Collection Method for
Parallel Inference Machines (#1)
Hamish Taylor, Localising the GHC Suspension Test (#1)
Handong Wu, An Extended Dataflow Model of FGHC (#1)
Leon Alkalaj and Ehud Shapiro, An Architectural Model for a  Flat
Concurrent Prolog Processor (#2)
V.J. Saraswat, A Somewhat Logical Formulation of CLP Synchronisa-
tion Primitives (#2)
S. Klinger and E. Shapiro, A Decision Tree Compilation  Algorithm
for Flat Concurrent Prolog (#3)
Martin Nilsson and Hidehiko Tanaka, A Flat GHC Implementation for
Supercomputers (#3)
Sven-Olof Nystrom, Control Structures for  Guarded  Horn  Clauses
(#3)

**Conference Registration Information:


**CONFERENCE REGISTRATION FEES:
Advance (until 1 July):
ALP/IEEE member:        Regular: $240  Student: $75
Non-member:             Regular: $320  Student: $95
Late (after 1 July):
ALP/IEEE member:        Regular: $340  Student: $105
Non-member:             Regular: $455  Student: $135

**TUTORIAL REGISTRATION FEES:
Advance (until 1 July):
Full Day Tutorials:
ALP/IEEE member:        Regular: $300  Student: $180
Non-member:             $400
Half Day Tutorials:
ALP/IEEE member:        Regular: $150  Student: $90
Non-member:             $200
Late (after 1 July):
Full Day Tutorials:
ALP/IEEE member:        Regular: $360  Student: $215
Non-member:             $480
Half Day Tutorials:
ALP/IEEE member:        Regular: $180  Student: $140
Non-member:             $240

------------------------------

End of AIList Digest
********************

∂17-Jun-88  0356	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #37  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 17 Jun 88  03:56:34 PDT
Date: Thu 16 Jun 1988 23:51-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #37
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 17 Jun 1988       Volume 7 : Issue 37

Today's Topics:

  And Yet More Free Will

----------------------------------------------------------------------

Date: 13 Jun 88 17:23:21 GMT
From: well!sierch@lll-lcc.llnl.gov  (Michael Sierchio)
Subject: Re: Free Will & Self-Awareness


The debate about free will is funny to one who has been travelling
with mystics and sages -- who would respond by saying that freedom
and volition have nothing whatsoever to do with one another. That
volition is conditioned by internal and external necessity and
is in no way free.

The ability to make plans, set goals, to have the range of volition
to do what one wants and to accomplish one's own aims still begs the
question about the source of what one wants.
--
        Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
        2733 Fulton St / Berkeley / CA / 94705     (415) 845-1755

        sierch@well.UUCP        {..ucbvax, etc...}!lll-crg!well!sierch

------------------------------

Date: 13 Jun 88 22:10:37 GMT
From: colin@CS.UCLA.EDU (Colin F. Allen )
Reply-to: lanai!colin@seismo.CSS.GOV (Colin F. Allen )
Subject: Re: Free Will vs. Society

In article <19880613194742.7.NICK@INTERLAKEN.LCS.MIT.EDU>
  INS_ATGE@JHUVMS.BITNET writes:
>ill-defined.  I subscribe to the notion that there are not universal
>'good' and 'evil' properties...I know that others definately disagree on
>this point.  My defense rests in the possibility of other extremely
>different life systems, where perhaps things like murder and incest, and
>some of the other common things we humans refer to as 'evil' are necessary
>for that life form's survival.

But look, this is all rather naive.....you yourself are giving a criterion
(survival value) for the acceptability of certain practices.  So even if
murder etc. are not universal evils, you do nonetheless believe that harming
others without good cause is bad.  So, after all, you do accept some
general characterization of good and bad.

------------------------------

Date: Tue, 14 Jun 88 13:37 O
From: Antti Ylikoski tel +358 0 457 2704
      <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: RE: Free will

That would seem to make sense.

I'm a spirit/body dualist: humans have a spirit or a soul, and
we have not so far made a machine with one; but we can make
new souls (new humans).

Then the idea arises that one could model the human soul.

Antti Ylikoski

------------------------------

Date: Tue, 14 Jun 88 08:56 EST
From: <FLAHERTY%CTSTATEU.BITNET@MITVMA.MIT.EDU>
Subject: (1) Free will & (2) Reinforcement

Is it possible that the free will topic has run its course (again)?
After all, the subject has been pondered for millennia by some pretty
powerful minds with no conclusions, or even widely accepted working
definitions, in sight.  Perhaps the behavioral approach of ignoring
(or at least "pushing") problems that seem to be intractable is not
crazy in this instance.  Anyway, it's getting *boring* folks.

Now, re: reinforcement.  It comes in (at least) two varieties --
positive and negative -- both of which are used to *increase* the
probability of a response.  Positive reinforcement is just old
fashioned reward. Give your dog a treat for sitting up and it is more
likely to do it again.

Negative reinforcement consists in the removal of an aversive stimulus
(state of affairs) which leads to increased response probability.  If
you take aspirin when you have a headache and the pain goes away, you
are more likely to take aspirin next time you have a headache.  Thus,
negative reinforcement is the flip-side of positive reinforcement (and
often difficult to distinguish from it).

The effect of punishment is to *decrease* response probability.  The
term is usually used to describe a situation where an aversive
stimulus is presented following the occurrence of an "undesirable"
behavior.  So, Susie gets a spanking because she stuck her finger in
her little brother's eye (even though he probably did something to
deserve *his* punishment -- there is no real justice).  The hope is
that Susie will "learn her lesson" and not do it again.

Point is, punishment and negative reinforcement are *not* equivalent.
See any introductory Psychology text for more (and probably better?)
examples.
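
For concreteness, here is a minimal sketch in Python (the function name, the
0.1 learning rate, and the update rule are illustrative assumptions, not
anything from the psychology literature) of how the three consequences act
on the probability of a response:

    def update(p, consequence, rate=0.1):
        """Nudge the probability p of a response, given its consequence."""
        if consequence == "positive reinforcement":    # reward delivered
            return p + rate * (1.0 - p)                # probability goes up
        if consequence == "negative reinforcement":    # aversive stimulus removed
            return p + rate * (1.0 - p)                # probability also goes up
        if consequence == "punishment":                # aversive stimulus delivered
            return p - rate * p                        # probability goes down
        return p                                       # no consequence, no change

    p = 0.5
    for c in ["positive reinforcement", "negative reinforcement", "punishment"]:
        print(c, "->", round(update(p, c), 3))

The first two cases are identical in their effect on p; only the triggering
event differs, which is the flip-side relationship described above.
Punishment moves p in the opposite direction.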

--Tom  <FLAHERTY@CTSTATEU.BITNET>

------------------------------

Date: 14 Jun 88 14:31:43 GMT
From: bc@media-lab.media.mit.edu  (bill coderre)
Subject: Re: Free Will & Self-Awareness

In article <6268@well.UUCP> sierch@well.UUCP (Michael Sierchio) writes:
>The debate about free will is funny to one who has been travelling
>with mystics and sages -- who would respond by saying that freedom
>and volition have nothing whatsoever to do with one another....

(this is gonna sound like my just previous article in comp.ai, so you
can read that too if you like)

Although what free will is and how something gets it are interesting
philosophical debates, they are not AI.

Might I submit that comp.ai is for the discussion of AI: its
programming tricks and techniques, and maybe a smattering of social
repercussions and philosophical issues.

I have no desire to argue semantics and definitions, especially about
slippery topics such as the above.

And although the occasional note is interesting (and indeed my
colleague Mr Sierchio's is sweet), endless discussions of whether some
lump of organic matter (either silicon- or carbon-based) CAN POSSIBLY
have "free will" (which only begs the question of where to buy some and
what to carry it in) are best confined to a group where the readership
is interested in such things.

Naturally, I shall not belabour you with endless discussions of neural
nets merely because of their interesting modelling of Real(tm)
neurons. But if you are interested in AI techniques and their rather
interesting approaches to the fundamental problems of intelligence and
learning (many of which draw on philosophy and epistemology), please
feel free to inquire.

I thank you for your kind attention.....................mr bc

------------------------------

Date: 14 Jun 88 14:17:27 GMT
From: bc@media-lab.media.mit.edu  (bill coderre)
Subject: Re: Who else isn't a science?

In article <34227@linus.UUCP> marsh@mbunix (Ralph Marshall) writes:
>In article <10785@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu  writes:
>>Ain't it wonderful?  AI succeeded by changing the meaning of the word.
......(lots of important stuff deleted)
>I'm not at all sure that this is really the focus of current AI work,
>but I am reasonably convinced that it is a long-term goal that is worth
>pursuing.

Oh boy. Just wonderful. We have people who have never done AI arguing
about whether or not it is a science and whether or not it CAN succeed
ever and the definition of Free Will and whether a computer can have
some.

It just goes on and on!


Ladies and Gentlemen, might I remind you that this group is supposed
to be about AI, and although there should be some discussion of its
social impact, and maybe even an enlightened comment about its
philosophical value, the most important thing is to discuss AI itself:
programming tricks, neat ideas, and approaches to intelligence and
learning -- not have semantic arguments or ones about whose dictionary
is bigger.

I submit that the definition of Free Will (whateverTHATis) is NOT AI.

I submit that those who wish to argue in this group DO SOME AI or at
least read some of the gazillions of books about it BEFORE they go
spouting off about what some lump of organic matter (be it silicon or
carbon based) can or cannot do.

May I also inform the above participants that a MAJORITY of AI
research is centered around some of the following:

Description matching and goal reduction
Exploiting constraints
Path analysis and finding alternatives
Control metaphors
Problem Solving paradigms
Logic and Theorem Proving
Language Understanding
Image Understanding
Learning from descriptions and samples
Learning from experience
Knowledge Acquisition
Knowledge Representation

(Well, my list isn't very good, since I just copied it out of the table
of contents of one of the AI books.)

Might I also suggest that if you don't understand the fundamental and
crucial topics above, that you refrain from telling me what I am doing
with my research. As it happens, I am doing simulations of animal
behavior using Society of Mind theories. So I do lots of learning and
knowledge acquisition.

And if you decide to find out about these topics, which are extremely
interesting and fun, might I suggest a book called "The Dictionary of
Artificial Intelligence."

And of course, I have to plug Society of Mind both since it is the
source of many valuable new questions for AI to pursue, and since
Marvin Minsky is my advisor. It is also simple enough for high school
students to read.

If you have any serious AI questions, feel free to write to me (if
they are simple) or post them (if you need a lot of answers). I will
answer what I can.

I realize much of the banter is due to crossposting from
talk.philosophy, so folks over there, could you please avoid future
crossposts? Thanks...

Oh and Have a Nice Day................................mr bc

------------------------------

Date: Wed, 15 Jun 88 23:37 EDT
From: SUTHERS%cs.umass.edu@RELAY.CS.NET
Subject: Why I hope Aaron HAS disposed of the Free Will issue

    Just a few years ago, I would have been delighted to be able to
    participate in a network discussion on free will.  Now I skip over
    these discussions in the AIList.  Why does it seem fruitless?

    Many of the arguments seem endless, perhaps because they are arguments
    about conclusions rather than assumptions.  We disagree about
    conclusions and argue, while never stating our assumptions.  If
    we did the latter, we'd find we simply disagree, and there would
    be nothing to argue about.

    But Aaron Sloman has put his finger on the pragmatic side of why
    these discussions (though engaging for some) seem to be without
    progress.  Arguments about generic, undefined categories don't impact
    on *what we do* in AI: the supposed concept does not have an image
    in the design decisions we must make in building computational models
    of interesting behaviors.

    So in the future, if these discussions must continue, I hope that
    the participants will have the discipline to try to work out how
    the supposed issues at stake and their positions on them impact
    on what we actually do, and use examples of the same in their
    communications as evidence of the relevancy of the discussion.
    It is otherwise too easy to generate pages of heated discussion
    which really tell us nothing more than what our own prejudices are
    (and even that is only implicit).  -- Dan Suthers

------------------------------

Date: 15 Jun 88 17:20:50 GMT
From: uflorida!novavax!proxftl!bill@gatech.edu  (T. William Wells)
Subject: Re: Free Will & Self Awareness

In article <558@wsccs.UUCP>, rargyle@wsccs.UUCP (Bob Argyle) writes:
> We genetically are programmed to protect that child (it may be a
> relative...);

Please avoid expressing your opinions as fact.  There is
insufficient evidence that we are genetically programmed for ANY
adult behavior to allow that proposition to be used as if it were
an uncontestable fact.  (Keep in mind that this does NOT mean
that we are not structured to have certain capabilities, nor does
it deny phenomena like first-time learning.)

> IF we get some data on what 'free will' actually is out of AI, then let
> us discuss what it means.  It seems we either have free will or we
> don't; finding out seems indicated after is it 3000 years of talk.vague.

I hate to call you philosophically naive, but this remark seems
to indicate that this is so.  The real question debated by
philosophers is not "do we have free will", but "what does `free
will' mean".  The question is not one of data, but rather of
interpretation.  Generally speaking, once the latter question is
answered to a philosopher's satisfaction, the answer to the
former question is obvious.

Given this, one can see that it is not possible to test the
hypothesis with AI.

------------------------------

End of AIList Digest
********************

∂19-Jun-88  1838	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #38  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Jun 88  18:38:14 PDT
Date: Sun 19 Jun 1988 21:12-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #38
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 20 Jun 1988       Volume 7 : Issue 38

Today's Topics:

 Announcements:
  New mailing list
  Computational linguistics/formal semantics workshop
  PODS-89 Call for Papers

 Queries:
  representation languages
  BRAINS AI tool

----------------------------------------------------------------------

Date: Fri, 17 Jun 88 13:56:39 PDT
From: jlevy.pa@Xerox.COM
Subject: New mailing list

CLP.X@XEROX.COM

   Coordinator: Jacob Levy <jlevy.pa@xerox.com>

   Unmoderated, direct-redistribution mailing list devoted to discussion of
   the following topics (among others):

        * Concurrent logic programming languages
          - Problematic constructs
          - Comparisons between languages
        * Concurrent constraint programming languages
          - Constraint solvers, including those for discrete constraint
            satisfaction
          - Language issues
        * Semantics, proof techniques and program transformations
          - Partial evaluation
          - Meta interpretation
          - Embedded languages
        * Parallel Prolog systems
          - Restricted And-parallel
          - Or-parallel Prolog
        * Implementations
          - Announcement of software packages
          - Reports on performance
          - Issues in implementation
        * Programming techniques and idioms, applications
          - Open systems and distributed computation
          - Small demonstration programs
        * Seminars, conferences, trip reports etc. related to the above

   All requests to be added to or deleted from this list, problems, questions,
   etc., should be sent to clp-request.x@xerox.com or to jlevy.pa@xerox.com.

   All messages will be archived and can be obtained on request from the list
   coordinator.

------------------------------

Date: 17 Jun 88 16:12 +0100
From: Mike Rosner <rosner%cui.unige.ch@RELAY.CS.NET>
Subject: COMPUTATIONAL LINGUISTICS/FORMAL SEMANTICS WORKSHOP

****WORKSHOP ANNOUNCEMENT/APPLICATION FORM******

-------------------------
COMPUTATIONAL LINGUISTICS
AND
FORMAL SEMANTICS
-------------------------

Institut Dalle Molle ISSCO, Geneva
Istituto Dalle Molle IDSIA, Lugano

29th August - 2nd September 1988
Palazzo dei Congressi, LUGANO, Switzerland

With the support of

Fondazione Dalle Molle
Citta' di Lugano
European Economic Community
Fonds National Suisse

AIMS:  to present both tutorial and current research material in
these two fields.

PROGRAM

Tutorials:

                   Jens Erik Fenstad, (Oslo)
               Representation and Interpretation

                      Martin Kay, (Xerox)
         Unification and the Syntax/Semantics Interface

                    Barbara Partee, (UMass)
               Current Issues in Formal Semantics


Workshop Papers:


                     Ewan Klein (Edinburgh)
                  Context and Compositionality

                    Kris Halvorsen, (Xerox)
             Algorithms for Semantic Interpretation

                       Pat Hayes, (Xerox)
         Natural Language versus Mental Representations

                   Michael Moortgat, (Leiden)
         Categorial Parsing and Implicational Deduction

                      Ray Turner, (Essex)
                   Polymorphism in Semantics

                 Johan van Benthem, (Amsterdam)
           Logical Semantics and the Theory of Types

                   Yorick Wilks, (New Mexico)
                 Form and Content in Semantics

                     Margaret King (Geneva)
        Computational Linguistics and Formal Semantics?

     Rod Johnson, Mike Rosner, CJ Rupp* (Lugano/*Manchester)
        Situation Schemata and Linguistic Representation

REGISTRATION

o  To receive application form: rosner@cui.unige.ch or
                                ..cernvax!unige!cui!rosner
   Further information: Sandra Manzi/Mike Rosner +41 22 20 93 33 ext. 2115
===================================================================

------------------------------

Date: 17 Jun 88 21:47:49 GMT
From: sbcs!kifer@sbcs.sunysb.edu (Michael Kifer)
Subject: PODS-89 Call for Papers


                         Call for Papers

             Eighth ACM SIGACT-SIGMOD-SIGART Symposium on
                PRINCIPLES OF DATABASE SYSTEMS (PODS)

            Philadelphia, Pennsylvania,  March 29-31, 1989

               Extended Abstracts due October 10, 1988

The conference will cover new developments in both the theoretical and
practical aspects of database and knowledge-base systems.  Papers are
solicited which describe original and novel research about the theory,
design, specification, or implementation of database and knowledge-
base systems.

Some suggested, although not exclusive, topics of interest are:
complex objects, concurrency control, database machines, data models,
data structures, deductive databases, dependency theory, distributed
systems, incomplete information, knowledge representation and
reasoning, object-oriented databases, performance evaluation, physical
and logical design, query languages, query optimization, recursive
rules, spatial and temporal data, statistical databases, and
transaction management.

You are invited to submit eleven copies of a detailed abstract (not a
complete paper) to the program chairman:

          Ashok K. Chandra - PODS
          IBM T. J. Watson Research Center
          P.O. Box 218
          Yorktown Heights, NY 10598.
          ashok@ibm.com             (914) 945-1752.

Submissions will be evaluated on the basis of significance,
originality, and overall quality.  Each abstract should 1) contain
enough information to enable the program committee to identify the
main contributions of the work; 2) explain the importance of the work -
its novelty and its practical or theoretical relevance to database
and knowledge-base systems; and 3) include comparisons with and
references to relevant literature.  Abstracts should be no longer than
ten double-spaced pages.  Deviations from these guidelines may affect
the program committee's evaluation of the paper.

                  Program Committee

     Catriel Beeri                Daniel J. Rosenkrantz
     Ashok K. Chandra             Oded Shmueli
     Hector Garcia-Molina         Victor Vianu
     Michael Kifer                William E. Weihl
     Teodor C. Przymusinski       Carlo Zaniolo

The deadline for submission of abstracts is OCTOBER 10, 1988.  Authors
will be notified of acceptance or rejection by December 7, 1988.  The
accepted papers, typed on special forms, will be due at the above
address by January 11, 1989.  All authors of accepted papers will be
expected to sign copyright release forms.  Proceedings will be
distributed at the conference, and will be subsequently available for
purchase through the ACM.

    General Chair:                     Local Arrangements Chair:
     Avi Silberschatz                   Tomasz Imielinski
     Computer Science Department        Dept. of Computer Science
     Univ. of Texas at Austin           Rutgers University
     Austin, Texas 78712                New Brunswick, NJ 08903
     avi@sally.utexas.edu               imielinski@rutgers.edu

------------------------------

Date: Fri, 17 Jun 1988 08:39-EDT
From: weh@SEI.CMU.EDU
Subject: Re: representation languages

In a previous article Paul Vierhout (vierhout@swivax.UUCP) mentions Breuker
and Wielinga's work on Interpretation Models and Chandrasekaran's work in
generic tasks.  I must admit ignorance of both of these bodies of work.  Can
anyone provide references?  I'd be happy to summarize or mail summaries if
there is enough interest.

I'm looking for both references and a short explanation of why these efforts
are useful in understanding the real-world tasks to be modeled.

Thanks.

   ____    ______   _____      _____=====        Bill Hefley
  / __ \  | _____| |_   _|   _____=========      Software Engrg Institute
 | |__|_| | |__      | |   _____=============    Carnegie Mellon
 _\___ \  |  __|     | | _____=================  Pittsburgh, PA 15213
 | |__| | | |____   _| |_  _____=============    (412) 268-7793
  \____/  |______| |_____|   _____=========      ARPA:   weh@sei.cmu.edu
                               -----=====        BITNET: weh%sei.cmu.edu
                                                 CSNET:  weh%sei.cmu.edu@
                                                         relay.cs.net
 C a r n e g i e   M e l l o n  U n i v e r s i t y

+---------------------------- Disclaimer -------------------------------+
|  The views expressed herein are my own and do not necessarily reflect |
|                        those of my employer.                          |
+-----------------------------------------------------------------------+

------------------------------

Date: 17 Jun 88 16:50:00 GMT
From: osu-cis!dsacg1!ntm1169@ohio-state.arpa  (Mott Given)
Subject: BRAINS AI tool

In the Summer 1988 issue of IEEE Expert, there was a special article on
Expert Systems in Japan.  On page 72 the article mentioned an AI tool
called BRAINS that runs on 3081 hardware (presumably the IBM mainframe
called a 3081).  Can anyone give me an address and/or phone number where
I can find out more about BRAINS?  Also, I would like to find more
information on another software tool mentioned on page 73, Esparon, which
runs on an IBM 5550.


--
Mott Given @ Defense Logistics Agency ,DSAC-TMP, P.O. Box 1605,
            Systems Automation Center, Columbus, OH 43216-5002
UUCP:        {cbosgd,gould,cbatt!osu-cis}!dsacg1!mgiven
Phone:       614-238-9431

------------------------------

End of AIList Digest
********************

∂19-Jun-88  2135	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #39  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Jun 88  21:35:19 PDT
Date: Sun 19 Jun 1988 21:18-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #39
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 20 Jun 1988       Volume 7 : Issue 39

Today's Topics:

  Fuzzy systems theory
  Human-Human Communication

 Philosophy:
  Biological relevance and AI
  determinism a dead issue?
  Cognitive AI vs Expert Systems
  Biological relevance and AI
  Consensual realities are structurally unstable

----------------------------------------------------------------------

Date: 19 Jun 88 11:57:24 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu  (T. William Wells)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <1073@usfvax2.EDU>, pollock@usfvax2.EDU (Wayne Pollock) writes:
> On the other hand, set theory, which underlies much of current theory, is
> also based on fallacies; (given the basic premses of set theory one can
> easily derive their negation).

Just where DID you get that idea?  While it was true of the set
theory of around a century ago (unrestricted comprehension led to
Russell's paradox), it is NOT true of the axiomatic set theory of
today.

>                                 As long as fuzzy logic provides a framework
> for dicussing various concepts and mathematical ideas, which would be hard
> to describe in traditional terms, the theory serves a purpose.

You seemed to miss my point: fuzzy systems theory MIGHT be an
interesting form of mathematics (but ask a mathematician, don't
ask me); BUT in its current form it is not valid as a means of
representing the real world.

------------------------------

Date: Wed, 15 Jun 88 13:14:29 EDT
From: "William J. Joel" <JZEM%MARIST.BITNET@MITVMA.MIT.EDU>
Subject: Human-Human Communication

It seems to me that recent discussion on this topic has been running
around in circles.  First off, all communication is coded.  The types
that humans use are merely ways to encapsulate thought so that another
human might attempt to understand what the first human meant.
In order to truly 'understand' each other we would first have to
understand exactly how the brain works ... exactly.  Since that's far
off, anything we do is but an approximation.

------------------------------

Date: 17 Jun 88 20:45:01 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu  (Tom Holroyd)
Subject: Re: Human-human communication

In article <33343@linus.UUCP>, bwk@mitre-bedford.ARPA (Barry W. Kort) writes:
> How can we talk about that which cannot be encoded in language?
[stuff deleted]
> I know how to walk, how to balance on a bicycle, and how to reduce
> my pulse.  But I can't readily transmit that knowledge in English.
> In fact, I don't even know how I know these things.

You ride a bicycle by transforming input signals from your sensory system
into output signals for your muscles.  On the way, these signals are modified
by a large number of factors, including some conscious ones which we will
ignore.  The input/output signals can be represented as vectors, and the
transformation is a mapping from one vector space to another.  If you train
a neural net to learn the mapping from sense data to leg movement (and I'm
only talking about simple motion here), the connections of the network encode
the knowledge of how to ride a bicycle.  Enough to build a robot that can
ride a bike.  Maybe not cross an intersection safely.. :-)
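
As a toy illustration (the dimensions, the synthetic data and the learning
rate below are invented for the example, not taken from any real robot), a
small two-layer network can be trained in Python/numpy to learn such a
vector-to-vector mapping; after training, the "knowledge" of the mapping
lives entirely in the weight matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))               # "sensory" input vectors
    T = np.tanh(X @ rng.normal(size=(4, 2)))    # stand-in for desired leg signals

    W1 = rng.normal(scale=0.5, size=(4, 8))     # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(8, 2))     # hidden -> output weights
    lr = 0.05
    for step in range(2000):
        H = np.tanh(X @ W1)                     # hidden activations
        Y = H @ W2                              # network output
        err = Y - T
        grad_W2 = H.T @ err / len(X)            # gradients of mean squared error
        grad_W1 = X.T @ ((err @ W2.T) * (1 - H**2)) / len(X)
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2
    print("mean squared error after training:", float((err**2).mean()))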

Or, I could list a bunch of differential equations that describe the dynamics
of riding a bike.

Neither of these is complete, and the connectionist form would include a
lot of floating point data, so they don't really count as describing anything
in English.  However, by analyzing the forms of the equations, it is often
possible to develop an understanding of what is going on.

Does reducing the problem to a mathematical description count?  The next step
would be to develop a jargon to cover the dynamics of the situation.  Maybe
we just don't have terms for many of the actions required for bike riding.

Summary:  Everything can be described mathematically, and the mathematics
can be described in English.  Caveat:  we haven't figured out how to describe
everything using mathematics yet.  To me, this is the real problem.  Some
subjective phenomena may well prove to be irreducible in the sense that
in order to understand why a person thinks something is beautiful (say),
we'll need to have a large part of that person's brain state, and no amount
of mathematical gymnastics will make the data any less complex.  (For
example, a list of numbers describing a stone falling can be reduced to
a simple quadratic equation.  Brain states don't seem to be this simple.)
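
To make the stone example concrete (the numbers below are made up for the
illustration), a short numpy sketch reduces a handful of (time, height)
samples to the three coefficients of a quadratic, h(t) = h0 + v0*t - g*t**2/2:

    import numpy as np

    t = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # sample times, in seconds
    h = 10.0 + 2.0 * t - 0.5 * 9.81 * t**2         # synthetic height readings
    coeffs = np.polyfit(t, h, 2)                   # least-squares quadratic fit
    print(coeffs)                                  # approximately [-4.905, 2.0, 10.0]

Three numbers recover the whole list; nothing comparably compact is known for
brain states.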

Tom Holroyd
UUCP: {uunet,codas}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 15 Jun 88 16:34:15 GMT
From: wlieberm@teknowledge-vaxc.arpa  (William Lieberman)
Subject: Re: Biological relevance and AI (was Re: Who else isn't a
         science?)


Just to add slightly to Ben and Mike's discussion: Ben's naturally good
question about why anyone should assume that we humans on earth uniquely
possess capabilities such as intelligence (i.e. the biological system that
makes us up), and Mike's reply that such an assumption is not really made,
remind me of the question asked in a not-too-long-ago earlier age, when
scientists asked, 'How likely is it that the chemistry of the world, as we
know it, exists in the same state outside the earth?'

        A reasonable question. Then when helium was demonstrated to exist
on the sun (through spectrographic analysis around the 1860's??) and around
the same time when the table of the elements was being built up empirically
and intuitively, the evidence favored the idea that our local chemical and
physical laws were probably universal. As a youngster I used to wonder
why chemists, etc. kept saying there are only around 100 or so elements
in the universe. Why couldn't there be millions?  But the data do suggest
the chemists are correct - with relatively few elements, such is the matter
of the universe existing. What I'm saying here is that it may be prudent
to expect not too many diverse 'forms' of intelligence around. Rough
analogy, I agree; but sometimes the history of science can provide useful
guideposts. Right now we have some sensible ideas about what it takes to
do certain kinds of analyses; but no one really knows what it takes to
enable a state of consciousness to exist, for example. One answer surely
lies in research in biophysics (and probably CS-AI).

Bill Lieberman

------------------------------

Date: Fri, 17 Jun 88 08:42:08 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: determinism a dead issue?

Is the notion of determinism not deeply undercut by developments in
the study of nonlinearity and chaos?

There is sufficient nonlinearity in the workings of brains, bodies, and
interacting agents in the world to ensure that simple billiard ball
click click click in the pocket determinism is not even an
approximation.

There seems to me a parallel to Bateson's discussion of creatura vs
pleroma, terms borrowed from Jung.  Pleroma (the term derives from a
root having to do with "fullness", as in "plenary session") is the
deterministic cause-effect realm amenable to description in simple,
linear, Newtonian terms of forces and impacts; creatura involves
metabolism, where outputs are not directly predictable from inputs in
terms of forces and impacts, and where what Bateson elaborates as
"cybernetic explanation" applies.  He argued that imagery of forces and
impacts was inappropriate for most of what is important to us.  He was
not aware of, or at any rate did not write about, the relationship of
this to nonlinearity and chaos before his death.

What is the relationship between the two?  Is it the case that systems
involving nonlinearity always involve feedback or feedforward loops?  My
impression from reading is yes.  (Isn't it mutual effect of the values
of two or more variables on one another that makes an equation
nonlinear, and isn't that a way of expressing feedback or feedforward?
The effect of friction in a physical system varies according to
velocity, even as it affects velocity.)  Is it the (stronger) case that
systems with such cybernetic loop structure always involve nonlinearity?
No, computers are generally advertised as deterministic.  Is it that
nonlinear systems are not error correcting?  Or perhaps that they are
analog rather than digital systems?  Are massively parallel systems
nonlinear, or do they tend to be?  Does the distinction apply to now
familiar characterizations of brain hemisphere specialization?
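
A standard toy example of the nonlinearity-plus-feedback point (the logistic
map with its usual textbook parameter; nothing below is taken from the
systems mentioned above): the update rule is completely deterministic, yet
two trajectories started a millionth apart soon bear no useful resemblance
to one another.

    r = 4.0                        # fully chaotic regime of the logistic map
    x, y = 0.300000, 0.300001      # two initial conditions differing by 1e-6
    for n in range(40):
        x = r * x * (1.0 - x)      # each new value is fed back into the rule
        y = r * y * (1.0 - y)
    print(abs(x - y))              # the tiny difference is now macroscopic

So determinism in the rule does not buy billiard-ball predictability once
feedback and nonlinearity enter.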

This has relevance to how an AI based on deterministic, linear systems
can do what nonlinear organisms do.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Fri, 17 Jun 88 09:15:29 -0400 (EDT)
From: David Greene <dg1v+@andrew.cmu.edu>
Subject: Re: Cognitive AI vs Expert Systems (was Re: Me, Karl,
         Stephen, Gilbert)


In article <digest.cWiAujy00Ukc40RHNs@andrew.cmu.edu>
krulwich-bruce@yale-zoo.arpa  (Bruce Krulwich) writes:
>This says something about expert systems papers, not about papers
>discussing serious attempts at modelling intelligence.  It is wrong to
>assume (as both you and Mr. Cockton are) that the expert system
>work typical of the business world (in other words, applications
>programs) is at all similar to work done by researchers investigating
>serious intelligence.  (See work on case based reasoning,
>explanation based learning, expectation based processing, plan
>transformation, and constraint based reasoning, to name a few areas.)

Since my research concerns developing knowledge acquisition approaches (via
machine learning) to address real world environments, I'm well acquainted with
not only the above literature, but psych, cog psych, JDM (judgement and
decision making), and BDT (behavioral decision theory).

While I suspect AI researchers who work in Expert Systems might resent being
excluded from work in "serious intelligence", I think my point is that, for a
given phenomenon, multiple viewpoints from different disciplines (literature)
can provide important breadth and insights.

Not an earth-shattering assumption, I admit, but then again, if you examine
work in the fields you suggested, you'll frequently find a very narrow scope of
references.  Many of the papers I was describing come from various learning
approaches to knowledge acquisition (e.g. the Workshop on Knowledge Acquisition
for Knowledge Based Systems).  @admittedsarcasm(Perhaps this was an unfortunate
example since these individuals don't qualify as representative AI
researchers.)

Actually I think the proposition is that it would be encouraging to see more AI
lit reviews which offered some viewpoints from different fields... not only
might they suggest new issues to address, but they might also identify usable
solutions to be transferred.


- David Greene
dg1v@andrew.cmu.edu

------------------------------

Date: 17 Jun 88 17:28:36 GMT
From: uhccux!lee@humu.nosc.mil  (Greg Lee)
Subject: Re: Biological relevance and AI (was Re: Who else isn't a
         science?)

From article <23201@teknowledge-vaxc.ARPA>, by William Lieberman:
" ...
"       A reasonable question. Then when helium was demonstrated to exist
" on the sun (through spectrographic analysis around the 1860's??) and around
" the same time when the table of the elements was being built up empirically
"...
" the chemists are correct - with relatively few elements, such is the matter
" of the universe existing. What I'm saying here is that it may be prudent
" to expect not too many diverse 'forms' of intelligence around. Rough
" analogy, I agree; but sometimes the history of science can provide useful
" ...

It's not even analogous unless you have a table of intelligence.  Maybe
you do.  If so, how many entries does it have room for?

                Greg Lee, uhccux.uhcc.hawaii.edu

------------------------------

Date: Sat, 18 Jun 88 13:27:34 EDT
From: George McKee <mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: Consensual realities are structurally unstable


(another comment, better late than never, I hope.)

As Pat Hayes points out, the right way to interpret the phrase
"consensual reality" is as a belief system held by some group of
participants about the nature of the universe.  However, given a
universe that contains more than one group and group-reality, it's
reasonable to look at the origin, scope, and structure of the different
systems and evaluate them with respect to each other.  Now it's
conceivable that you may find two or more systems with equivalent
descriptive and predictive power, and with equally compact
representations in the minds of the participants, and in this situation
you might be justified in saying that there is more than one
fundamental reality.  But this doesn't seem to be the case, and there
is in fact one description of the collective experience of humanity,
namely science, that clearly outranks all the alternatives in just
about any respect you may wish to examine it, except perhaps promises
of present or future happiness.  This is not to say that the scientific
description of reality is complete or without weak spots, just that
it's so much better than the others that it surprises me that people
can argue against the primacy of scientific, physical reality and use a
computer at the same time.

But even leaving the content of a description of reality aside, I think
it's provable that a constructive, exterior description of the
universe, one that posits a single fundamental reality that generates
the thoughts and perceptions of each observer, is more *stable* than an
interior one that assumes the primacy of mental activity and doesn't
assume a physical origin of thought, and consequently permits the
observer to accept the validity of multiple descriptions.  That is, as
long as both the interior and exterior viewpoints are sensitive to new
data, many if not all of the potential realities consistent with the
interior view are susceptible to catastrophic reorganizations triggered
by a single new datum, while the single reality assumed by the exterior
view can only be incrementally modified by any single fact.

The proof of this is, as they say, "too long for this page", but one
part of it rests on the observation, implicit in Turing's proof of
universal computability, that a computer can't determine its microcode
by executing instructions.  That is, a mind can't determine its
fundamental principles of operation by thinking.  You have to look at
the implementation -- the hardware and microcode.  For computational
minds we'll be sure to know the details of the implementation, because
we did the design.  For the human mind, designed as it is by the random
processes of genetic variation and historical accident, it's very hard
to know what aspects of its structure and organization are essential or
important, and which ones aren't.  But it's clear that we now have
tools that are only a quantitative step away from telling us what we
need to know about how the brain implements the mind.  Those people who
say "we have no idea about how the brain works" are just announcing
their own ignorance.

The best that a mind can do by thought alone is to determine an
infinite equivalence class of possible implementations of itself.  This
is apparently one of the major conclusions of Hilary Putnam's
soon-to-be-released book "Representation and Reality."  It'll be
interesting to read it to find out if he's able to take the next step
and show the determination of a unique implementation of each human
mind in the brain of each individual member of H. sapiens.  I sure hope
so...

        - George McKee
          NU Computer Science

p.s. And you thought I was going to write about Catastrophe Theory.
Not today...

------------------------------

End of AIList Digest
********************

∂21-Jun-88  1512	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #40  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 21 Jun 88  15:12:19 PDT
Date: Tue 21 Jun 1988 17:47-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #40
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 22 Jun 1988     Volume 7 : Issue 40

Today's Topics:

 Queries:
  Math/Science Education
  Robotics mailing list - does there exist one?

 Announcements:
  IJCAI Computers & Thought and Research Excellence Awards
  Philosophy & Computers Conference
  master of engineering in ai program at k.u.leuven belgium

----------------------------------------------------------------------

Date: 19 Jun 88 16:05:30 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Math/Science Education


For a colleague doing research on math/physics education (without
access to the net) I would be grateful for references on the following:
(1) Van Hiele Research
(2) Transfer of training between mathematical and scientific
instruction and application of mathematical knowledge, especially
applying general principles to particular problems.
Of interest is all work on psychological, computational or pedagogical
aspects of this area of cognition.
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: Mon, 20 Jun 88 10:15:31 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Robotics mailing list - does there exist one?


      To the best of my knowledge, no one has yet established a robotics
mailing list.  Is there one already in existence?  If not, is there sufficient
interest to justify starting one?

                                        John Nagle

------------------------------

Date: Mon, 20 Jun 88 08:38:32 EDT
From: walker@flash.bellcore.com (Don Walker)
Subject: IJCAI Computers & Thought and Research Excellence Awards

                CALL FOR NOMINATIONS FOR IJCAI AWARDS


THE IJCAI AWARD FOR RESEARCH EXCELLENCE

The IJCAI Award for Research Excellence is given at an International
Joint Conference on Artificial Intelligence to a scientist who has
carried out a program of research of consistently high quality over a
period of years that has produced a number of substantial results.  If
the research program has been carried out collaboratively the award may
be made jointly to the research team.  The first recipient of this
award was John McCarthy in 1985.

The award carries with it a certificate and the sum of $1,000 plus
travel and living expenses for the IJCAI.  The researcher(s) will be
invited to deliver an address on the nature and significance of the
results achieved and write a paper for the conference proceedings.
Primarily, however, the award carries the honour of having one's work
selected by one's peers as an exemplar of sustained research in the
maturing science of Artificial Intelligence.

We hereby call for nominations for The IJCAI Award for Research
Excellence to be made at IJCAI-89 in Detroit.  The accompanying note on
Selection Procedures for IJCAI Awards provides the relevant details.


THE COMPUTERS AND THOUGHT AWARD

The Computers and Thought Lecture is given at each International Joint
Conference on Artificial Intelligence by an outstanding young scientist
in the field of artificial intelligence.  The Award carries with it a
certificate and the sum of $1,000 plus travel and subsistence expenses
for the IJCAI.  The Lecture is presented one evening during the
Conference, and the public is invited to attend.  The Lecturer is
invited to publish the Lecture in the conference proceedings.  The
Lectureship was established with royalties received from the book
Computers and Thought, edited by Feigenbaum and Feldman; it is
currently supported by income from IJCAI funds.

Past recipients of this honour have been Terry Winograd (1971),
Patrick Winston (1973), Chuck Rieger (1975), Douglas Lenat (1977),
David Marr (1979), Gerald Sussman (1981), Tom Mitchell (1983),
Hector Levesque (1985), and Johan de Kleer (1987).

Nominations are invited for The Computers and Thought Award to be made
at IJCAI-89 in Detroit.  The note on Selection Procedures for IJCAI
Awards describes the nomination procedures to be followed.


SELECTION PROCEDURES FOR IJCAI AWARDS

Nominations for The Computers and Thought Award and The IJCAI Award for
Research Excellence are invited from everyone in the Artificial
Intelligence international community.  The procedures are the same for
both awards.

There should be a nominator and a seconder, at least one of whom should
not be in the same institution as the nominee.  The nominee must agree
to be nominated.  There are no other restrictions on nominees,
nominators or seconders.  The nominators should prepare a short
submission of less than 2,000 words, outlining the nominee's
qualifications with respect to the criteria for the particular award.

The award selection committee is the union of the Program, Conference
and Advisory Committees of the upcoming IJCAI and the Board of Trustees
of IJCAII, with nominees excluded.  Nominations should be submitted
before December 1st, 1988 to the Conference Chair for IJCAI-89:

    Wolfgang Bibel
    IJCAI-89 Conference Chair
    Department of Computer Science
    University of British Columbia
    Vancouver, CANADA V6T 1W5

    Tel. +1-604-228-6281
    Net: bibel@ubc.csnet

------------------------------

Date: Mon, 20 Jun 88 13:17:39 EDT
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Philosophy & Computers Conference


                       Third Annual Conference on

                        PHILOSOPHY AND COMPUTERS

                            Dartmouth College
                           August 24-27, 1988

             Department of Philosophy at Dartmouth College
              American Association of Philosophy Teachers
American Philosophical Association Committee on Computer Use in Philosophy

                     Keynote Address in Philosophy

                              JERRY FODOR

                        "Against Connectionism"


                      Keynote Address in Computing

                              JOHN KEMENY

                "Computers Revolutionize the Classroom"

  Many other papers on aspects of philosophy and computing as well as
  demonstrations and discussions of the latest developments in software

-------------------------------------------------------------------------
      Registration Form for Conference on Philosophy and Computers

Name_____________________________________________ Phone______________________

Institution______________________________________ Department_________________

Address______________________________________________________________________

___$25 (___$20 for spouse)  Registration & Banquet [___$35 after 8/1/88]
___$24 (___$14 for spouse)  Housing 8/24/88
___$24 (___$14 for spouse)  Housing 8/25/88
___$24 (___$14 for spouse)  Housing 8/26/88

$_________ Total--Please make check payable to DARTMOUTH COLLEGE and send to:

  Jim Moor, P&C Conference, Philosophy, Dartmouth College, Hanover, NH 03755

     **********  Reservations are due by August 7, 1988  **********

------------------------------

Date: Tue, 21 Jun 88 11:09:41 GMT
From: <prlb2!kulcs!kulesat!van_cleyn@uunet.UU.NET>
Subject: master of engineering in ai at k.u.leuven belgium

        Announcement of the program


               MASTER OF ENGINEERING IN ARTIFICIAL INTELLIGENCE
               ________________________________________________
               ------------------------------------------------

        at the Katholieke Universiteit LEUVEN, BELGIUM

----------------------------------------------------------------------

        The PROGRAM


        1. Mandatory Components

        1.1 Introductory Courses -

        Each  course is taught during one semester, 1.5 hour a week.
        In addition, 6 2.5 hour sessions accompany each course.

        1. Fundamentals of AI (Y.D. Willems)
        2. Cognitive Science (G. van Outryve d'Ydewalle)
        3. Neural Computing (G. Orban)


        1.2 Specialized Courses -

        Each  course is taught during one semester, 1.5 hour a week.
        In addition, six 2.5 hour sessions accompany each course.

        1. Logic as a Foundation for AI (Y.D. Willems, B. Demoen)
        2. Programming Languages and Programming Methodologies
                                          (K. De Vlaminck, J. Lewi)
        3. Methodologies for Building Knowledge-based Systems (P. Suetens)


        1.3 Seminar on AI (weekly 2.5 hour sessions) -


        1.4 Thesis -


        2 Optional Components

        Each  course is taught during one semester, 1.5 hour a week.
        In addition, four 2.5 hour sessions accompany each course.

        1.  Robotics (J. De Schutter)
        2.  Computer Vision (P. Wambacq, P. Suetens)
        3.  Natural Language Processing (G. Adriaens)
        4.  Speech Processing (D. Van Compernolle)
        5.  Advanced Computer Architectures (L. Van Eycken, P. Wambacq)
        6.  Advanced Programming Languages for AI (J. Lewi, E. Steegmans)
        7.  Formal Reasoning and Proof Techniques for Software Systems
                                                                (J. Lewi)
        8.  Selected Topics in Logic Programming (M. Bruynooghe, Y.D. Willems)
        9.  Expert System Techniques for Control and Design in the Process
            Industry (M. Rijckaert, W. Bogaerts)
        10. Techniques for Solving Complex Conceptual Digital System Design
            Problems (H. De Man)
        11. Formal Reasoning and Proof Techniques for Digital System
            Correctness Verification (H. De Man, L. Claesen)
        12. Knowledge-based Techniques for Automated Analog System Design
                                                                  (W. Sansen)



        PRACTICAL INFORMATION


        REQUIREMENTS

        To  receive the degree of Master of Engineering in Artificial Intelli-
        gence  the  candidate  has to attend the mandatory part of the program
        and  to select at least 7 optional courses.  Except for the thesis and
        the  seminar  on  AI,  all  courses  are  taught  during one semester,
        1.5 hour  a  week.    In  addition, practical exercises illustrate the
        theory.    Six  2.5 hour sessions accompany each mandatory course, and
        four 2.5 hour sessions each optional course.  In both semesters weekly
        seminars  are  provided  to discuss new research activities, to invite
        outside  lecturers  and  to arrange visits to university labs and com-
        panies.    The  student  will be involved in an AI research project on
        which  he  has  to write a thesis.  The work load of the thesis corre-
        sponds to the work load of four optional courses.

        The  Coordinating  Staff  is  aware of the fact that students may have
        various  educational  backgrounds.   Students are therefore allowed to
        propose  to  the  Coordinating Staff other coherent programs that meet
        their  goals  and conform to the basic requirements.  The candidate is
        not  expected  to  take  courses  if  he  has  previously  studied the
        equivalent  subject  matter.  If such courses are listed as mandatory,
        they must be replaced with additional optional courses.
        If  the  student can not meet the prerequisites of the program, he may
        attend  courses  taught  elsewhere at the K.U.Leuven.  Then, in excep-
        tional cases, these courses may be allowed to replace either mandatory
        or optional courses of this program.

        Exams  will  be administered at the end of the year, during the months
        of June and July.

        Students other than those preparing for the MEAI can attend individ-
        ual  courses  within  the regulations of the university.  Students who
        are  preparing  a doctoral dissertation at the K.U.Leuven in the field
        of  Artificial  Intelligence  are  encouraged to enroll for courses of
        this program.


        TIME SCHEDULE

        The  first semester lasts from early October until the end of January.
        The Christmas holidays last two weeks.  The second semester lasts from
        early February until the end of May.  Easter holidays last three weeks
        of which normally one week before and two weeks after Easter.


        K.U.LEUVEN

        Town of LEUVEN

        Leuven  is  a  typical  university  town.    Many attractive cafés and
        restaurants add to the town's playful charm.

        Leuven profits from the unique geo-political position Belgium occupies
        on   the   Continent.    It  is  close  to  Brussels  which,  with the
        headquarters  of  several important European organizations, has become
        the  unofficial capital of Europe.  On the crossroads between Germanic
        and Romance languages, it has always operated as a mediator between
        different  cultures.    The university still offers the possibility to
        study  approximately  20  languages,  ranging  from  Dutch to Chinese.
        Leuven  has  always  been  considered  one of the leading intellectual
        centers in the Low Countries.
        Belgium  is a wonderful base for trips throughout Europe : such places
        as  Paris,  Grenoble,  London,  Geneva,  Zürich,  Milan, Rome, Venice,
        Munich,  Cologne  and  Amsterdam  are  easily  accessible.  This broad
        orientation  strengthens  the cosmopolitan character of K.U.Leuven and
        fosters  the  international spirit required for an International Study
        Program in Artificial Intelligence.

        Campus Facilities

        The  organizing  laboratories are situated on and in the neighbourhood
        of  the  Arenberg Campus in Heverlee, within walking distance from the
        Leuven  City  center.    Near  the  laboratories there is a university
        restaurant,  a sports center and several student residences.  They are
        situated in the attractive park of Arenberg Castle.

        The  organizing  laboratories  have  computer facilities and their own
        topical libraries, to which the participants of the study program have
        access.   In each of these libraries computer terminals give access to
        the  LIBIS  data  bank  which  contains  information  on all books and
        journals available inside the university.

        The  students'  living quarters are nearly as important as the lecture
        hall,  the library or the laboratory.  The inner city of Leuven offers
        a  variety  of accommodations : dormitories, private homes and communal
        houses.    Facilities have been built near the new campus in Heverlee.
        The  newly restored 13th century Groot Begijnhof, in the center of the
        city,  attracts visitors from all over the world.  It accommodates 700
        students  and  assistants  in a quiet 'old-world' environment yet with
        all modern comforts.

        Apart  from  scientific lectures given by specialists from Belgium and
        abroad,  the  Katholieke  Universiteit of Leuven also offers excellent
        concerts  and recitals given by world famous orchestras, ensembles and
        soloists.    Exhibitions  present  the  most  important  of the modern
        Flemish painters and sculptors.


        APPLICATION AND REGISTRATION

        Applicants  must  hold  a  bachelor's  degree or its equivalent.  This
        degree  should  be  obtained  in  the  field  of  sciences  or applied
        sciences, including experience in computing concepts and practice.

        An application form must be filled  out, including a statement of  the
        individual's objectives; the  applicant also  has to  explain how  the
        program and  previous  preparation  meet  these  objectives.   Foreign
        students should register  preferably  before July 1,  Belgian students
        before September 1.  Students who apply for a partial program must
        follow the same procedure.  Admission to the program is normally
        granted by the Coordinating Staff after consideration of the
        information provided on the application form, the applicant's
        educational and professional background, and his motivation.  Students
        who apply before July 1 will be notified of their admission before
        August 1.  Those who apply after July 1 will be informed before
        October 1.

        Application forms can be obtained from :

        Prof F. DELMARTINO
        International Study Programs
        Universiteitshal
        Naamsestraat 22
        B-3000 LEUVEN

        Phone : (32) (16) 28.40.27
        Telex : 257.15 kulbib b
        Telefax : (32) (16) 28.40.14

------------------------------------------------------------------------------
          Detailed information about the program can be obtained by sending
        the message GET MASTER OF_AI (first line, first column) to the userid
        LISTSERV@BLEKUL11.BITNET

------------------------------

End of AIList Digest
********************

∂21-Jun-88  1813	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #41  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 21 Jun 88  18:13:31 PDT
Date: Tue 21 Jun 1988 17:51-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #41
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 22 Jun 1988     Volume 7 : Issue 41

Today's Topics:

  Representation languages
  AI language
  determinism a dead issue?
  Ding an sich
  The primacy of scientific physical reality?
  Cognitive AI vs Expert Systems

----------------------------------------------------------------------

Date: Mon, 20 Jun 88 08:16:19 PDT
From: smoliar@vaxa.isi.edu (Stephen Smoliar)
Subject: Re: representation languages

In article <19880615061555.7.NICK@INTERLAKEN.LCS.MIT.EDU> Ian
Dickinson writes:
>Date: Tue, 14 Jun 88 05:42 EDT
>From: Ian Dickinson <ijd%otter.lb.hp.co.uk@RELAY.CS.NET>
>To: ailist@mc.lcs.mit.edu
>Subject: Re: representation languages
>
>Whilst I have no doubt that these systems [KEE and ART] are useful today,
>_I_ as an application developer want to see a representation system that
>is maximally small whilst giving me the power that I need.  The philosophy
>I would like to see adopted is:
>       o  define conceptual representations that allow applications to be
>          written at the maximum level of abstraction (eg generic tasks)
>       o  define the intermediate representations (frames, rules, sets ..)
>          that are needed to implement the conceptual structures
>       o  choose a subset of these representations that can be maximally
>          tightly integrated with the base language of your choice (which
>          would not be Lisp in my choice)
>
These are admirable desiderata, but they may not be sufficient to stave off
the dreaded "good feature explosion."  Rather, this malady is a consequence
of a desire we seem to have that our representation systems be able both to
RECORD and to REASON ABOUT "units" of knowledge (whatever those units may
be).  (PACE, Mark Stefik;  I know I have lifted the name of a knowledge
representation system in my choice of words.)  We take it for granted that
we want both facilities.  If all we were doing was recording, all we would
have would be a data base;  and if all we were doing was reasoning, all we
would have would be a theorem prover.  I would claim that our cultural
expectations of a knowledge representation system have grown out of a
desire to assimilate these two capabilities.

Unfortunately, both capabilities turn out to be extremely demanding of
computational resources.  As a result, it has been demonstrated that
even some of the simplest attempts to find a viable middle ground can
easily lead to computational intractability (particularly if a clean
semantic foundation is one of your desiderata).  Consequently, now may
be a good time to question whether or not the sort of "homogeneous
assimilation" of recording and reasoning which is to be found in many
knowledge representation systems is such a good thing.  Perhaps it would
be more desirable to have TWO facilities which handle record keeping and
reasoning as independent tasks and which communicate through a protocol
which does not impede their interaction.

Here at ISI we have been exploring means by which expert systems can give
adequate explanatory accounts of their own behavior.  We have discovered
that an important element in the service of such explanation is a
TERMINOLOGICAL FOUNDATION, which amounts to a means by which all
symbols which are used as part of the problem solving apparatus of
the expert system also have a semantic support which links them to
the text generation facilities required in explanation.  Thus, for
example "fever" is not treated simply as a symbolic varaible which
get set to T if a patient's temperature is more than 100 degrees
Farenheit but may then get set back to NIL if it is discovered that
the patient had been drinking hot coffee just before the nurse took
his temperature;  rather it is a "word" which serves as a key to certain
knowledge about patient conditions, as well as knowledge about how it may
be detected and knowledge about its consequences.  In keeping with the
aforementioned attempt to separate the concerns of recording and reasoning,
we have developed a facility (currently called HI-FI) for recording
such terminological knowledge in such a way as to SUPPORT (but not
necessarily PERFORM) subsequent reasoning.

In pursuing this approach, we have developed a set of "terminological
building blocks," which seem to be at least partially sympathetic to Ian
Dickinson's philosophy.  Here is a quick outline:

        ACTIONS are the "verbs" of the terminological foundation.
        They provide the basis for the expression of both the statements
        of problems and the statements of solution methods.  While they
        definitely have a "generic" quality, I am not yet sure that they
        bear much resemblance to Chandrasekaran's generic tasks.  Since
        this material is relatively new, that possibility remains to be
        investigated.

        TYPES are "nouns" which designate classes (i.e. categories of
        entities).  They intentionally bear resemblance to both frame
        descriptions and object classes which may be found in object-
        oriented languages.

        INSTANCES are the entities which are members of classes.  An
        instance may be a member of several classes.  However,
        determining whether or not a given instance is a member
        of a given class may often be a matter of reasoning rather
        than retrieval from a base of recorded facts.

        PROPERTIES are unary predicates applied to instances.

        RELATIONS are binary predicates applied to instances.

        ASSERTIONS are sentences in a typed predicate calculus whose
        types are the type classes.  These sentences are "about" instances;
        but they may incorporate expressions of descriptions of types,
        properties, and relations.

A major application of assertions is the representation of DEFINITIONAL
FORMS.  These are sentences which establish necessary and/or sufficient
conditions for membership in a type class or for the satisfaction of a
property or relation.  These assertions are the major link to the reasoning
facility.

The above is a sketch of work which has just gotten under way.  However,
the approach has already been pursued in some test cases concerned with
reasoning about digital circuits;  and it appears to be promising.  I
plan to have more disciplined documentation of our work ready in the near
future;  but while I am engaged in preparing such documents, I am
interested in any feedback regarding similar (or contrary) approaches.
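
As an illustrative sketch only (not the HI-FI system itself), the
terminological building blocks outlined above might be rendered as plain
records like the following Python fragment; all class names, field names,
and the toy "fever" definitional form are assumptions made for
illustration.

    # Illustrative only: terminological building blocks as recorded data,
    # intended to SUPPORT (but not themselves PERFORM) reasoning.
    from dataclasses import dataclass, field

    @dataclass
    class Type:                 # "nouns" designating classes of entities
        name: str
        parents: list = field(default_factory=list)

    @dataclass
    class Instance:             # entities that are members of one or more classes
        name: str
        types: list = field(default_factory=list)

    @dataclass
    class Assertion:            # a sentence in a typed predicate calculus, kept
        sentence: str           # here as an uninterpreted string (recorded only)
        concerns: list = field(default_factory=list)

    # A definitional form giving a sufficient condition for membership in
    # the type class "fever" (toy example):
    condition = Type("patient-condition")
    fever = Type("fever", parents=[condition])
    fever_def = Assertion(
        "(forall p) temperature(p) > 100F -> fever(p)",
        concerns=[fever])

A separate reasoning facility could then consume such records through
whatever protocol the two components agree on, in the spirit of keeping
record keeping and reasoning independent.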

------------------------------

Date: Tue, 21 Jun 88 08:23 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: AI language

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In a recent AIList issue, Pat Hayes wished to get a list of desirable
features for an AI language.

My opinion is that we need new theoretical formalisms for expressing
intelligence and the world of a human being for the basis of new AI
languages.

It seems to me that almost all successful programming languages have a
good background formalism.  APL has Iverson's array notation.  Modern
Lisp (CommonLOOPS and Zetalisp) has several: functional
programming, the idea of the list, object-oriented programming.  The
Algol/Pascal family of languages has the idea of expressing the
language unambiguously with the Backus-Naur notation.

                        Andy Ylikoski

------------------------------

Date: 21 Jun 88 17:34:40 GMT
From: umix!umich!eecs.umich.edu!itivax!dhw@uunet.UU.NET (David H.
      West)
Subject: Re: determinism a dead issue?


In a previous article, Bruce E. Nevin writes:
> Is the notion of determinism not deeply undercut by developments in
> study of nonlinearity and Chaos?

No. (Or, if you prefer, "it depends".)  I take it that "determinism"
is for present purposes equivalent to "predictability". Then:
1) Nonlinearity is strictly irrelevant - it just makes the math more
   difficult, but determinism requires only the existence (*) of a
   solution, not that it be easy to compute, or that it be computed
   at all;
2) Chaos means (in the continuum view of things) that some quantity
   has with respect to some initial parameter a derivative the
   magnitude of which becomes unbounded for large times, i.e.
   adjacent trajectories diverge.  All this means is that to predict
   further into the future, one needs increasingly precise knowledge
   of the initial conditions (see the numeric sketch below).  Infinite
   precision suffices ;-) for infinite-time prediction.  Remember,
   Laplace assumed he could have the positions and velocities of every
   particle in the universe.  Anyone who grants that would be
   niggardly to refuse infinite precision.
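
A small numeric sketch of this point (not from the original posting): two
trajectories of the logistic map x -> r*x*(1-x), a standard chaotic
example, started a millionth apart drift to order-one separation within a
few dozen iterations.  The map, the parameter r = 4.0, and the initial
values below are arbitrary choices.

    # Sensitive dependence on initial conditions in the logistic map.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.400000, 0.400001      # initial conditions differing by 1e-6
    for t in range(1, 41):
        x, y = logistic(x), logistic(y)
        if t % 10 == 0:
            print("t=%2d  |x-y| = %.6f" % (t, abs(x - y)))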

Determinism *is* undercut by quantum mechanics, but that's
encouraging only to those who identify randomness with freedom.

(*) There are epistemological problems here.  One can certainly
prove things about the existence of solutions to equations, but
notwithstanding that we know some equations that describe the world
to some degree of approximation, it is clearly impossible for finite
beings to prove or know that (P:) "equations (currently known or
otherwise) describe the world exactly".  Such beings (e.g. us) can
believe P or not, as it suits them (or as they are determined ;-).

>   Is it the case that systems
> involving nonlinearity always involve feedback or feedforward loops?

Any system worthy of the name has loops, and linearity is only a
special case, so this is a good bet, but there are counterexamples
(see below).

>   (Isn't it mutual effect of the values
> of two or more variables on one another that makes an equation
> nonlinear, and isn't that a way of expressing feedback or feedforward?

No.  Consider the nonlinear equation y=sqrt(x).

> Is it that
> nonlinear systems are not error correcting?  Or perhaps that they are
> analog rather than digital systems?  Are massively parallel systems
> nonlinear, or do they tend to be?  Does the distinction apply to now
> familiar characterizations of brain hemisphere specialization?

The answer is probably "not necessarily".

> This has relevance to how an AI based on deterministic, linear systems
> can do what nonlinear organisms do.

Whose AI is based on *linear* systems? Logic circuits are nonlinear,
semantic networks are nonlinear, connectionist networks are
nonlinear...

------------------------------

Date: 20 Jun 88  0322 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: Ding an sich

I want to defend the extreme point of view that it is both
meaningful and possible that the basic structure of the
world is unknowable.  It is also possible that it is
knowable.  It just depends on how much of the structure
of the world happens to interact with us.  This is like
Kant's "Ding an sich" (the thing in itself) except that
I gather that Kant considered "Ding an sich" as unknowable
in principle, whereas I only consider that it might be
unknowable.

The basis of this position is the notion of evolution
of intelligent beings in a world not created for their
scientific convenience.  There is no mathematical theorem
requiring that if a world evolves intelligent beings,
these beings must be in a position to discover all its
laws.

To illustrate this idea, consider the Life cellular
automaton proposed by John Horton Conway and studied
by him and various M.I.T. hackers and others.  It's
described in Winning Ways by Berlekamp, Conway and
Guy.

Associated with each point of the two dimensional
integer lattice is a state that takes values  0  and
1.  The state of a point at time  t+1  is determined
by its state at time  t  and the states at time  t  of
its eight neighbors.  Namely, if the number of
neighbors in state  1  is less than two or more than
three, its state at time  t+1  is  0.  If it has exactly two
neighbors in state  1,  its state remains as it was.
If it has exactly  3  neighbors in state  1,  its
new state is  1.
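
A short sketch (not part of the original message) of the update rule just
stated, in Python, with the set of live lattice points as the state; the
glider constant at the end is the standard five-cell pattern discussed
below.

    # The Life rule above: a cell is in state 1 at time t+1 iff it has
    # exactly three neighbors in state 1, or it is in state 1 at time t
    # and has exactly two.
    def step(live):
        """live: set of (x, y) lattice points currently in state 1."""
        counts = {}
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) != (0, 0):
                        p = (x + dx, y + dy)
                        counts[p] = counts.get(p, 0) + 1
        return {p for p, n in counts.items()
                if n == 3 or (n == 2 and p in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # five cells in state 1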

There is a configuration of five cells in state  1  (with neighbors
in state  0) called a glider, which reproduces itself displaced
in two units of time.  There is a configuration called a glider
gun that emits gliders.  There are configurations that thin out
streams of gliders from a glider gun.  There are configurations
that take two streams of gliders as inputs and perform logical
operations (regarding the presence of a glider
at a given time in the stream as  1  and its absence
as  0) on them producing a new stream.  Thinned streams can
cross each other and serve as wires conducting signals.
This permits the construction of general purpose computers
in the Life plane.

The Life automaton wasn't designed to admit computers.  The
discovery that it did was made by hacking.  Configurations
that can serve as general purpose computers can be made
in a variety of ways.  The way indicated above and more
fully described in Berlekamp et al. is only one.

Now suppose that one or more interacting Life computers are
programmed to be physicists, i.e. to attempt to discover
the fundamental physics of their world.  There is no reason
to expect a mathematical theorem about cellular automata
in general or the Life cellular automaton in particular
that says that a physicist program will be able to discover
that the fundamental physics of its world is the Life
cellular automaton.

It requires some extra attention in the design of the computer to make
sure that it has any capability to observe at all, and some that can
observe will be unable to observe enough detail.  Of course, we could
program a Life computer to simulate some other "second level" cellular
automaton that admits computers, and give the "second level computer" only
the ability to observe the "second level world".  In that case, it surely
couldn't find any evidence for its world being the Life cellular
automaton.  Indeed the Life automaton could simulate exceedingly slowly
any theory we like of our 3+1 dimensional world.

If a Life world physicist is provided with too narrow a philosophy
of science, and some of the consensual reality theories may indeed
be that narrow, it might not regard the hypothesis that its physics
is the Life world as meaningful.  There may be Life world physicists who
regard it as meaningful and Life world philosophers of science
interacting with them who try to forbid it.

This illustrates what I mean by metaepistemology.  Metaepistemology must
study the relation of what knowledge is possible for intelligent beings
in a world to the structure of that world and to the physical structures
and computational programs that support scientific activity.

The traditional methods of philosophy of science are too weak to discuss
these matters, because they don't take into account how the structure of
the world and the structure of its intelligences affect what science is
possible.  There is no more guarantee that the structure of our
world is observable than that Fermat's last theorem is decidable
in Peano arithmetic.  Physicists are always proposing theories
of fundamental physics whose testability depends on the correctness
of other theories and the development of new apparatus.  For example,
some of the current grand unified theories (GUTs) predict unification
of the force laws at energies of 10↑15 GeV, and there is no current
idea of how an accelerator producing such an energy might
be physically possible.

I have received messages asking me if the metaepistemology I propose
is like what has been proposed by Kant and other philosophers
or even by Winograd and Flores.  As far as I can tell it's not,
and all those mentioned are subject to the criticism of the
previous paragraph.

------------------------------

Date: Mon 20 Jun 88 10:20:14-PDT
From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
Subject: Re: AIList Digest   V7 #39

     I can't resist replying to George McKee's insistence that "one description
of the collective experience of humanity ... outranks all the alternatives ...
(that is, the ) primacy of scientific physical reality".  The statement is of
course true only if you exclude from the "collective experience of humanity"
all of history, aesthetics, human relationships, and self understanding.  But
I think you must also exclude George's own belief that we are only a
"quantitative step away" from "telling us what we need to know."  This non-
scientific belief was held by Marx, Freud, etc., etc., all of whom wished to
believe that the crystal purity and certainty of the scientific method had
solved mankind's ills.  It seems to me that the evidence necessary to support
this belief would be at least some demonstrated success showing that we are
some sort of "quantitative step away" from having any idea how to close
prisons and mental hospitals, and abolish greed, fear, and war.  So far I
see no scientific
evidence that we have more than a laundry list of things to try, and a much
longer list of things that have been tried and have failed.  To extrapolate
from the limited (though exciting and important) successes of the scientific
method in these fields to an assertion that we are only quantitatively distant
from describing "the collective experience of humanity" seems to me a great
deal less justified than the belief that was expressed by the Dean of American
Science in the 19th Century, that all of Physics had been learned and all that
was left was quantitative improvements.

------------------------------

Date: 21 Jun 88 03:02:26 GMT
From: krulwich-bruce@yale-zoo.arpa  (Bruce Krulwich)
Subject: Re: Cognitive AI vs Expert Systems

In a previous post, I claimed that there were differences between people
doing "hard AI" (trying to achieve serious understanding and intelligence)
and "soft AI" (trying to achieve intelligent behavior).

dg1v+@ANDREW.CMU.EDU (David Greene) responds:
>Since my research concerns developing knowledge acquisition approaches (via
>machine learning) to address real world environments, I'm well acquainted with
>not only the above literature, but psych, cog psych, JDM (judgement and
>decision making), and BDT (behavioral decision theory).
>
>While I suspect AI researchers who work in Expert Systems might resent being
>excluded from work in "serious intelligence", I think my point is that, for a
>given phenomenon, multiple viewpoints from different disciplines (literature)
>can provide important breadth and insights.

I agree fully, and I think you'll find this in the references sections of a
lot of "hard AI" research work.  (As a matter of fact, a fair number of
researchers in "hard AI" are professors in, or have degrees in, psychology,
linguistics, etc.)  I'm sorry if my post seemed insulting -- it wasn't
intended that way.  I truly believe, however, that there are differences in
the research goals, methods, and results of the two different areas.  That's
not a judgement, but it is a difference.

Bruce Krulwich

------------------------------

End of AIList Digest
********************

∂21-Jun-88  2143	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #42  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 21 Jun 88  21:43:00 PDT
Date: Tue 21 Jun 1988 17:57-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #42
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 22 Jun 1988     Volume 7 : Issue 42

Today's Topics:

 Free Will:

  Free Will & Self Awareness
  Disposing of the free will issue
  on the concept of will

----------------------------------------------------------------------

Date: 14 Jun 88 14:48:51 GMT
From: geb@cadre.dsl.pittsburgh.edu  (Gordon E. Banks)
Subject: Re: Free Will & Self Awareness

In article <2436@uvacs.CS.VIRGINIA.EDU> Carl F. Huber writes:
>In article <5323@xanth.cs.odu.edu> Warren E. Taylor writes:
>>In article <1176@cadre.dsl.PITTSBURGH.EDU>, Gordon E. Banks writes:
>>
>>"Spanking" IS, I repeat, IS a form of redesigning the behavior of a child.
>>Many children listen to you only when they are feeling pain or are
>>anticipating the feeling of pain if they do not listen.
>

Whoa!  This is not a quote from me!  Myself, I would prefer non-violent
forms of punishment, since I think kids learn the legitimacy of violence
from being spanked.  But, I should mention, I don't have kids, so I may
not be the one to ask about it.

------------------------------

Date: Sun, 19 Jun 88 10:16:38 BST
From: Aaron Sloman <aarons%cvaxa.sussex.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Disposing of the free will issue

(I wasn't going to contribute to this discussion, but a colleague
encouraged me. I haven't read all the discussion, so apologise if
there's some repetition of points already made.)

Philosophy done well can contribute to technical problems (as shown by
the influence of philosophy on logic, mathematics, and computing, e.g.
via Aristotle, Leibniz, Frege, Russell).

Technical developments can also help to solve or dissolve old
philosophical problems. I think we are now in a position to dissolve the
problems of free will as normally conceived, and in doing so we can make
a contribution to AI as well as philosophy.

The basic assumption behind much of the discussion of freewill is

    (A) there is a well-defined distinction between systems whose
    choices are free and those which are not.

However, if you start examining possible designs for intelligent systems
IN GREAT DETAIL you find that there is no one such distinction. Instead
there are many "lesser" distinctions corresponding to design decisions
that a robot engineer might or might not take -- and in many cases it is
likely that biological evolution tried both (or several) alternatives.

There are interesting, indeed fascinating, technical problems about the
implications of these design distinctions. Exploring them shows that
there is no longer any interest in the question whether we have free
will because among the REAL distinctions between possible designs there
is no one distinction that fits the presuppositions of the philosophical
uses of the term "free will". It does not map directly onto any one of
the many different interesting design distinctions. (A) is false.

"Free will" has plenty of ordinary uses to which most of the
philosophical discussion is irrelevant. E.g.

    "Did you go of your own free will or did she make you go?"

That question presupposes a well understood distinction between two
possible explanations for someone's action. But the answer "I went of my
own free will" does not express a belief in any metaphysical truth about
human freedom. It is merely a denial that certain sorts of influences
operated. There is no implication that NO causes, or no mechanisms were
involved.

This is a frequently made common sense distinction between the existence
or non-existence of particular sorts of influences on a particular
individual's action. However there are other deeper distinctions that
relate to different sorts of designs for behaving systems.

The deep technical question that I think lurks behind much of the
discussion is

    "what kinds of designs are possible for agents and what are the
    implications of different designs as regards the determinants of
    their actions?"

I'll use "agent" as short for "behaving system with something like
motives". What that means is a topic for another day. Instead of one big
division between things (agents) with and things (agents) without free
will we'll then come up with a host of more or less significant
divisions, expressing some aspect of the pre-theoretical free/unfree
distinction. E.g. here are some examples of design distinctions (some
of which would subdivide into smaller sub-distinctions on closer
analysis):

- Compare (a) agents that are able simultaneously to store and compare
different motives with (b) agents that have no mechanisms enabling this:
i.e. they can have only one motive at a time.

- Compare (a) agents all of whose motives are generated by a single top
level goal (e.g. "win this game") with (b) agents with several
independent sources of motivation (motive generators - hardware or
software), e.g. thirst, sex, curiosity, political ambition, aesthetic
preferences, etc.

- Contrast (a) an agent whose development includes modification of its
motive generators and motive comparators in the light of experience, with
(b) an agent whose generators and comparators are fixed for life
(presumably the case for many animals).

- Contrast (a) an agent whose motive generators and comparators change
partly under the influence of genetically determined factors (e.g.
puberty), with (b) an agent for whom they can change only in the light of
interactions with the environment and inferences drawn therefrom.

- Contrast (a) an agent whose motive generators and comparators (and
higher order motivators) are themselves accessible to explicit internal
scrutiny, analysis and change, with (b) an agent for which all the
changes in motive generators and comparators are merely uncontrolled
side effects of other processes (as in addictions, habituation, etc.)
[A similar distinction can be made as regards motives themselves.]

- Contrast (a) an agent pre-programmed to have motive generators and
comparators change under the influence of likes and dislikes, or
approval and disapproval, of other agents, and (b) an agent that is only
influenced by how things affect it.

- Compare (a) agents that are able to extend the formalisms they use for
thinking about the environment and their methods of dealing with it
(like human beings) and (b) agents that are not (most other animals?)

- Compare (a) agents that are able to assess the merits of different
inconsistent motives (desires, wishes, ideals, etc.) and then decide
which (if any) to act on with (b) agents that are always controlled by
the most recently generated motive (like very young children? some
animals?).  (A toy sketch after this list contrasts these two cases.)

- Compare (a) agents with a monolithic hierarchical computational
architecture where sub-processes cannot acquire any motives (goals)
except via their "superiors", with only one top level executive process
generating all the goals driving lower level systems with (b) agents
where individual sub-systems can generate independent goals. In case
(b) we can distinguish many sub-cases e.g.
(b1) the system is hierarchical and sub-systems can pursue their
    independent goals if they don't conflict with the goals of their
    superiors
(b2) there are procedures whereby sub-systems can (sometimes?) override
    their superiors. [e.g. reflexes?]

- Compare (a) a system in which all the decisions among competing goals
and sub-goals are taken on some kind of "democratic" voting basis or a
numerical summation or comparison of some kind (a kind of vector
addition perhaps) with (b) a system in which conflicts are resolved on
the basis of qualitative rules, which are themselves partly there from
birth and partly the product of a complex high level learning system.

- Compare (a) a system designed entirely to take decisions that are
optimal for its own well-being and long term survival with (b) a system
that has built-in mechanisms to ensure that the well-being of others is
also taken into account. (Human beings and many other animals seem to
have some biologically determined mechanisms of the second sort - e.g.
maternal/paternal reactions to offspring, sympathy, etc.).

- There are many distinctions that can be made between systems according
to how much knowledge they have about their own states, and how much
they can or cannot change because they do or do not have appropriate
mechanisms. (As usual there are many different sub-cases. Having
something in a write-protected area is different from not having any
mechanism for changing stored information at all.)
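
A toy sketch, not drawn from Sloman's own designs, contrasting two of the
options above: an agent that stores the motives produced by several
generators and compares their merits, versus one that is always controlled
by the most recently generated motive.  The two generators, the
"insistence" numbers, and all names are assumptions made only for
illustration.

    # Design (a): store and compare motives.  Design (b): act on the latest.
    def motive_generators(state):
        """Independent sources of motivation; each may emit (motive, insistence)."""
        motives = []
        if state["thirst"] > 0.5:
            motives.append(("drink", state["thirst"]))
        if state["curiosity"] > 0.5:
            motives.append(("explore", state["curiosity"]))
        return motives

    def compare_and_choose(state):          # design (a)
        motives = motive_generators(state)
        return max(motives, key=lambda m: m[1])[0] if motives else "idle"

    def act_on_latest(state):               # design (b)
        motives = motive_generators(state)
        return motives[-1][0] if motives else "idle"

    state = {"thirst": 0.9, "curiosity": 0.7}
    print(compare_and_choose(state), act_on_latest(state))   # prints: drink explore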

There are some overlaps between these distinctions, and many of them are
relatively imprecise, but all are capable of refinement and can be
mapped onto real design decisions for a robot-designer (or evolution).

They are just some of the many interesting design distinctions whose
implications can be explored both theoretically and experimentally,
though building models illustrating most of the alternatives will
require significant advances in AI e.g. in perception, memory, learning,
reasoning, motor control, etc.

When we explore the fascinating space of possible designs for agents,
the question which of the various systems has free will loses interest:
the pre-theoretic free/unfree contrast totally fails to produce any one
interesting demarcation among the many possible designs -- it can be
loosely mapped on to several of them.

So the design distinctions define different notions of free:- free(1),
free(2), free(3), .... However, if an object is free(i) but not free(j)
(for i /= j) then the question "But is it really FREE?" has no answer.

It's like asking: What's the difference between things that have life and
things that don't?

The question is (perhaps) OK if you are contrasting trees, mice and
people with stones, rivers and clouds. But when you start looking at a
larger class of cases, including viruses, complex molecules of various
kinds, and other theoretically possible cases, the question loses its
point because it uses a pre-theoretic concept ("life") that doesn't have
a sufficiently rich and precise meaning to distinguish all the cases
that can occur. (Which need not stop biologists introducing a new
precise and technical concept and using the word "life" for it. But that
doesn't answer the unanswerable pre-theoretical question about precisely
where the boundary lies.)

Similarly "what's the difference between things with and things without
free will?" This question makes the false assumpton (A).

So, to ask whether we are free is to ask which side of a boundary we are
on when there is no particular boundary in question. (Which is one
reason why so many people are tempted to say "What I mean by free is..."
and they then produce different incompatible definitions.)

I.e. it's a non-issue. So let's examine the more interesting detailed
technical questions in depth.

(For more on motive generators, motive comparators, etc. see my (joint)
article in IJCAI-81 on robots and emotions, or the sequel "Motives,
Mechanisms and Emotions" in the journal of Cognition and Emotion Vol I
no 3, 1987).

Apologies for length.

Now, shall I or shan't I post this.........????

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
              aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
    JANET     aarons@cvaxa.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac
        or    aarons%uk.ac.sussex.cvaxa%ukacrl.bitnet@cunyvm.cuny.edu
As a last resort (it costs us more...)
    UUCP:     ...mcvax!ukc!cvaxa!aarons
            or aarons@cvaxa.uucp

------------------------------

Date: Mon, 20 Jun 88 12:56 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: on the concept of will

Distribution-File:
        AILIST@AI.AI.MIT.EDU

This is an attempt by me to do some research into the concept of free
will.

First, I would recommend to everyone Carlos Castaneda's books.

They approach the concept of will from the point of view of Yaqui Indian knowledge.

The Yaqui have their own scientific tradition, anthropologically studied
by Castaneda.  Their science is very different from Western science but
non-trivial and honorable.

Secondly, we might have a look at life itself and study what
people actually will in real life.

Examples:

        * marry a lovely spouse and raise smart children
        * exceed one's sales quota at IBM
        * beat the competition in Silicon Valley
        * <in my case> travel to Israel and learn Hebrew
        * kill that enemy soldier with one's bayonet
        * find out what the life, the universe, and everything are
        * explain it to others
        * relax with a good book and California wine

                Andy Ylikoski

------------------------------

Date: 21 Jun 88 15:28:15 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu  (Carl F. Huber)
Subject: Re: Free Will & Self-Awareness

In article <306@proxftl.UUCP> T. William Wells writes:
>Let's consider a relatively uncontroversial example.  Say I have
>a hot stove and a pan over it.  At the entity level, the stove
>heats the pan.  At the process level, the molecules in the stove
>transfer energy to the molecules in the pan.
> ...
>Now, I can actually try to answer your question.  At the entity
>level, the question "how do I cause it" does not really have an
>answer; like the hot stove, it just does it.  However, at the
>process level, one can look at the mechanisms of consciousness;
>these constitute the answer to "how".

I do not yet see your distinction in this example.
What is the difference between saying the stove _heats_ and saying the
molecules _transfer_energy_?  The distinction must be made in the
way we describe what's happening.  In each case above, you seem to
be giving the stove and the molecules volition.  The stove does not
heat the pan.  The stove is hot.  The pan later becomes hot.  Molecules do
not transfer energy.  The molecules in the stove have energy s+e.  Then
the molecules in the pan have energy p+e and the molecules in the
stove have energy s.

So it seems that both cases here are entity level, since the answer
to "how do I cause it" is the same.  If I have totally missed the
point, could you please try again?

-carl

------------------------------

End of AIList Digest
********************

∂23-Jun-88  2145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #43  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 23 Jun 88  21:45:34 PDT
Date: Fri 24 Jun 1988 00:14-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #43
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 24 Jun 1988       Volume 7 : Issue 43

Today's Topics:

 Announcements:
  deadline change - Automating software design workshop
  PODS - 89 Call for papers
  COLING '88 program
  Unisys AI seminar: The Causal Simulation of Ordinary and Intermittent Mechanical Devices
  ACL European Chapter Call for Papers

----------------------------------------------------------------------

Date: Tue, 21 Jun 88 13:49:22 EDT
From: Robert McCartney <robert%carcvax.uconn@RELAY.CS.NET>
Subject: deadline change

       EXTENDED DEADLINE -- Automating software design workshop

Due to problems with the mail link at kestrel, we are extending the
deadline for requests to participate and/or make presentations at the
automating software design workshop at AAAI.  The deadline, which was
originally 15 June, has been changed to 4 July; we still intend to
send notification around 15 July.  To maximize the likelihood of your
request/abstract being received, we suggest the following csnet
addresses:  robert@uconn.edu for mccartney, lowry@coyote.stanford.edu
for lowry.  Hard copy submissions should be sent to Doug Smith at
Kestrel as before.  If you have any questions, call one of the
organizers at the numbers given below.

Current plans include a difference in emphasis between the morning and
afternoon sessions--the morning's emphasis will be on specification
and acquisition issues, while the afternoon's will be on formal
derivation.  This separation is by no means absolute, and given the
deadline change, the schedule is still quite approximate.

The revised call follows.  Apologies to all whose mail to kestrel
didn't go through.

                                        --robert.



                        CALL FOR PARTICIPATION

            Automating software design: current directions
                       (a workshop at AAAI-88)

                Radisson-St. Paul Hotel, St. Paul, Minnesota
                       Thursday, 25 August 1988


In this workshop, we intend to discuss current approaches to automated
software design and how these approaches deal with: 1) acquiring
specifications, 2) acquiring and representing domain and design
knowledge, and 3) controlling search during design.  Among the issues
that might be addressed are the interaction of domain and design
knowledge, comparing automatic and interactive systems, the use of
general vs.  specific control mechanisms, and software environments
appropriate for design systems.

This is intended to be a forum for the presentation and discussion of
current ideas and approaches.  The format will consist of individual
presentations followed by adequate time for interaction with peers.
To maximize such interaction, participation will be limited to a small
number of  active researchers.

Participation: Those interested in attending should submit a short
description of their research interests and current work to one of the
organizing committee (preferably electronically) by July 4.  At the
same time, those interested in making a presentation should submit a
short abstract (around 500 words) of their intended topic.
Notification of acceptance or rejection will be given after July 15.
All participants may submit an extended abstract or position paper by
August 1; these will be reproduced and distributed at the workshop.

Organizing Committee:

     Michael Lowry            Robert McCartney       Douglas R. Smith
    Stanford/Kestrel        Univ. of Connecticut     Kestrel Institute
    (415) 325-3105            (203) 486-5232           (415) 493-6871
(lowry@coyote.stanford.edu)  (robert@uconn.edu)

Hard-copy submissions may be sent to:

                          Douglas R. Smith
                          Kestrel Institute
                         1801 Page Mill Road
                        Palo Alto, CA 94304-1216

------------------------------

Date: 22 Jun 88 14:46:12 GMT
From: sbcs!kifer@sbcs.sunysb.edu (Michael Kifer)
Subject: PODS - 89 Call for papers


                         Call for Papers

             Eighth ACM SIGACT-SIGMOD-SIGART Symposium on
                PRINCIPLES OF DATABASE SYSTEMS (PODS)

            Philadelphia, Pennsylvania,  March 29-31, 1989

               Extended Abstracts due October 10, 1988

The conference will cover new developments in both the theoretical and
practical aspects of database and knowledge-base systems.  Papers are
solicited which describe original and novel research about the theory,
design, specification, or implementation of database and knowledge-
base systems.

Some suggested, although not exclusive, topics of interest are:
complex objects, concurrency control, database machines, data models,
data structures, deductive databases, dependency theory, distributed
systems, incomplete information, knowledge representation and
reasoning, object-oriented databases, performance evaluation, physical
and logical design, query languages, query optimization, recursive
rules, spatial and temporal data, statistical databases, and
transaction management.

You are invited to submit eleven copies of a detailed abstract (not a
complete paper) to the program chairman:

          Ashok K. Chandra - PODS
          IBM T. J. Watson Research Center
          P.O. Box 218
          Yorktown Heights, NY 10598.
          ashok@ibm.com             (914) 945-1752.

Submissions will be evaluated on the basis of significance,
originality, and overall quality.  Each abstract should 1) contain
enough information to enable the program committee to identify the
main contributions of the work; 2) explain the importance of the work -
its novelty and its practical or theoretical relevance to database
and knowledge-base systems; and 3) include comparisons with and
references to relevant literature.  Abstracts should be no longer than
ten double-spaced pages.  Deviations from these guidelines may affect
the program committee's evaluation of the paper.

                  Program Committee

     Catriel Beeri                Daniel J. Rosenkrantz
     Ashok K. Chandra             Oded Shmueli
     Hector Garcia-Molina         Victor Vianu
     Michael Kifer                William E. Weihl
     Teodor C. Przymusinski       Carlo Zaniolo

The deadline for submission of abstracts is OCTOBER 10, 1988.  Authors
will be notified of acceptance or rejection by December 7, 1988.  The
accepted papers, typed on special forms, will be due at the above
address by January 11, 1989.  All authors of accepted papers will be
expected to sign copyright release forms.  Proceedings will be
distributed at the conference, and will be subsequently available for
purchase through the ACM.

    General Chair:                     Local Arrangements Chair:
     Avi Silberschatz                   Tomasz Imielinski
     Computer Science Department        Dept. of Computer Science
     Univ. of Texas at Austin           Rutgers University
     Austin, Texas 78712                New Brunswick, NJ 08903
     avi@sally.utexas.edu               imielinski@rutgers.edu

------------------------------

Date: 23 Jun 88 13:11:18 GMT
From: FLASH.BELLCORE.COM!walker@ucbvax.berkeley.edu  (walker_donald e)
Subject: COLING '88 program

12th INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS: COLING '88
                     Budapest, 22-27 August 1988

                    SCIENTIFIC PROGRAMME SCHEDULE

                        MONDAY, AUGUST 22nd

 9:30  OPENING SESSION - Room E

ROOM A:  SEMANTICS
11:00 - J.Ph.Hoepelman, A.J.M.van Hoof (FRG): The success of failure -
        the concept of failure in dialogue logics with some
        applications for NL-semantics
11:30 - P.Saint-Dizier (France): Default logic, natural language and
        generalized quantifiers
12:00 - D.Jurafsky (USA): Issues in the relation of grammar and meaning
14:00 - D.Horton, G.Hirst (Canada): Presuppositions as beliefs
14:30 - R.E.Mercer (Canada): Solving some persistent presupposition
        problems
15:30 - T.Vlk (Czechoslovakia): Topic/Focus articulation and
        intensional logic
16:00 - M.Merkel (Sweden): A novel analysis of temporal frame-adverbials

ROOM B: FORMAL MODELS
11:00 - N.Abe (USA): Polynomially learnable subclasses of mildly context
        sensitive languages
11:30 - C.Beierle, U.Pletat (FRG): Feature graphs and abstract data
        types: a unifying approach
12:00 - M.Reape, H.Thompson (UK): Parallel intersection and serial
        composition of finite state transducers
14:00 - S.M.Shieber (USA): A uniform architecture for parsing and
        generation
14:30 - J.Wedekind (FRG): Generation as structure driven derivation
15:30 - M.Meteer, V.Shaked (USA): Strategies for effective paraphrasing
16:00 - J.Kilbury (FRG): Parsing with category cooccurrence restrictions

ROOM C: UNDERSTANDING AND KNOWLEDGE REPRESENTATION
11:00 - L.Ahrenberg (Sweden): Functional constraints in knowledge-based
        natural language understanding
11:30 - X.Liu, T.Nishida, S.Doshita (Japan): Maintaining consistency
        and plausibility in integrated natural language understanding
12:00 - K.Hasida (Japan): A cognitive account of unbounded dependency
14:30 - V.Pericliev, S.Brajnov, I.Nenova (Bulgaria): Hinting by
        paraphrasing in an instruction system
15:30 - P.S.Jacobs (USA): Concretion: assumption-based understanding
16:00 - U.Zernik, A.Brown (USA): Default reasoning in natural language
        processing: a preliminary report

ROOM D: MACHINE TRANSLATION
11:00 - J.Tsujii, M.Nagao (Japan): Dialogue translation vs. text
        translation - interpretation based approach
11:30 - R.Zajac (France): Traduction interactive: une nouvelle approche
12:00 - A.K.Melby (USA): Lexical transfer: between a source rock and
        a hard target
14:00 - J.L.Beaven, P.Whitelock (UK): Machine translation using
        isomorphic UCGs
14:30 - H.Nogami, Y.Yoshimura, S.Amano (Japan): Parsing with look-ahead
        in a real-time on-line translation system
15:30 - F.Nishida, S.Takamatsu (Japan): Feed-back of the corrections
        in post edition to the machine translation system
16:00 - K.Kakigahara, T.Aizawa (Japan): Completion of Japanese sentences
        by inferring function words from content words

        SPEECH ANALYSIS AND SYNTHESIS
17:00 - W.M.P.Daelemans (Belgium): A grapheme-to-phoneme conversion
        system for Dutch
17:30 - P.Trescases, M.Crocker (Canada): Linguistic contributions to
        text-to-speech computer programs for French
18:00 - R.Kuhn (Canada): Speech recognition and the frequency of
        recently used words: a modified Markov model for natural
        language

17:00 - 18:30 PANEL DISCUSSION in Room C:
        "Language Engineering: The real Bottleneck of Natural
        Language Processing: (moderator: M.Nagao)

                        TUESDAY, AUGUST 23rd

ROOM A: SEMANTICS
 9:00 - J.Pustejovsky, P.Anick (USA): On the semantic interpretation
        of nominals
 9:30 - L.Lesmo, P.Terenziani (Italy): Interpretation of noun phrases
        in intensional contexts
10:00 - E.V.Paduceva (USSR): Referential properties of generic terms
        denoting things and situations

        DISCOURSE
11:00 - M.V.LaPolla (USA): The role of old information in generating
        readable text
11:30 - M.H.Sarner, S.Carberry (USA): A new strategy for providing
        definitions in task-oriented dialogues
12:00 - A.Yamada, T.Nishida, S.Doshita (Japan): Figuring out most
        plausible interpretation from spatial descriptions
14:00 - E.Werner (FRG): A formal computational semantics and pragmatics
        of speech acts
14:30 - M.Gerlach, M.Sprenger (FRG): Semantic interpretation of pragmatic
        clues: connectives, modal verbs, and indirect speech acts
15:30 - K.Eberle (FRG): Partial orderings and Aktionsarten in discourse
        representation theory
16:00 - M.Hess (Switzerland): Crossing coreference in discourse
        representation theory

ROOM B: FORMAL MODELS
 9:00 - L.Vijay-Shanker, A.K.Joshi (USA): Feature structures based tree
        adjoining grammars
 9:30 - R.M.Kaplan, J.T.Maxwell III (USA): An algorithm for functional
        uncertainty
10:00 - Ch.Boitet, Y.Zaharin (France): Representation trees and
        string-tree correspondences
11:00 - L.Carlson (Finland): RUG: Regular unification grammar
11:30 - J.Calder, E.Klein (UK), H.Zeevat (FRG): Unification categorial
        grammar, a concise, extendable grammar for natural language
        processing
12:00 - A.M.R.Aristar, C.F.Justus (USA): Word-order constraints in a
        multilingual categorial grammar
14:00 - B.V.Sukhotin (USSR): Optimization algorithms of deciphering
        as the elements of a linguistic theory
14:30 - R.M.Kaplan, J.T.Maxwell III (USA): Coordination in lexical
        functional grammar
15:30 - S.Busemann, Ch.Hauenschild (Berlin): A constructive view of
        GPSG or how to make it work
16:00 - W.Weisweber (Berlin): Using constraints in a constructive
        version of GPSG

ROOM C: UNDERSTANDING AND KNOWLEDGE REPRESENTATION
 9:00 - H.Shimazu, Y.Takashima, M.Tomono (USA, Japan): Understanding
        of stories for animation
 9:30 - R.J.Kuhns (USA): A news analysis system
10:00 - D.Fass (USA): Metonymy and metaphor: what's the difference?

        SOFTWARE TOOLS
11:00 - B.Boguraev, J.Carroll, T.Briscoe, C.Grover (UK): Software
        support for practical grammar development
11:30 - H.Tomabechi, M.Tomita (USA): Application of the direct
        memory access paradigm to natural language interface to
        knowledge-based system
12:00 - M.Marino (Italy): A process-activation based parsing
        algorithm for the development of natural language grammars
14:00 - T.Tokunaga, M.Iwayama, H.Tanaka, T.Kamiwaki (Japan): LangLAB:
        a natural language analysis system
14:30 - H.Kaji (Japan): An efficient execution method for rule-based
        machine translation

        COMPUTER ASSISTED LEARNING
15:30 - M.Zock (France): Language learning as problem solving
16:00 - M.Rayner, A.Hugosson, G.Hagert (Sweden): Using a logic
        grammar to learn a lexicon

ROOM D: PARSING
 9:00 - B.Lang (France): Parsing incomplete sentences
 9:30 - H.Saito, M.Tomita (USA): Parsing noisy sentences
10:00 - E.Giachin, K.C.Rullent (Italy): Robust parsing of severely
        corrupted spoken utterance

        MACHINE TRANSLATION
11:00 - P.Isabelle, M.Dymetman, E.Mackiovitch (Canada): CRITTER:
        a translation system for agricultural market reports
11:30 - Chen Zhaoxiong, Gao Qingshi (China): English-Chinese machine
        translation system IMT/EC
12:00 - I.Golan, S.Lappin, M.Rimon (Israel): An active bilingual
        lexicon for machine translation

        PARSING
14:00 - Y.Schabes, A.K.Joshi (USA): An Earley-type parser for tree
        adjoining grammar
14:30 - A.Yonezawa, I.Ohsawa (Japan): A new approach to parallel
        parsing for context-free grammar
15:30 - M.B.Kac, T.Rindflesch (USA): Coordination in reconnaissance-
        attack parsing
16:00 - L.Emirkanian, L.H.Bouchard (Canada): Knowledge integration
        in a robust and efficient morpho-syntactic analyzer for French

        MACHINE TRANSLATION
17:00 - Ch.DiMarco, G.Hirst (Canada): Stylistic grammars in language
        translation
17:30 - P.C.Rolf (Netherlands): Machine translation: the language
        network (versus the intermediate language)
18:00 - P.Brown, J.Cocke, S.Della Pietra, V.Della Pietra, F.Jelinek,
        R.Mercer, P.Roossin (USA): A statistical approach to
        language translation

17:00 - 18:30 PANEL DISCUSSION in Room C:
        "Parallel Processing in Computational Linguistics"
        (moderator: H.Schnelle)

                        THURSDAY, AUGUST 25th

ROOM A: SYNTAX AND MORPHEMICS
 9:00 - T. van der Wouden, D.Heylen (Netherlands): Massive
        disambiguation of large text corpora with flexible
        categorial grammar
 9:30 - I.Kudo, T.Morimoto, M.Chung, M.Koshino (Japan): Schema method: a
        framework for correcting ill-formed input based on LFG
10:00 - J.Veronis (France): Morphosyntactic correction in natural
        language interfaces
11:00 - L.Kataja, K.Koskenniemi (Finland): Finite-state description of
        Semitic morphology: a case study of ancient Akkadian
11:30 - M.R.Sorensen (USA): Non-linear computational analysis of
        non-concatenative Arabic morphology
12:00 - G.Goerz, D.Paulus (FRG): A finite state approach to German
        verb morphology
14:00 - K.Koskenniemi, K.W.Church (USA): Complexity, two-level
        morphology and Finnish
14:30 - J.Bear (USA): Morphology with two-level rules and negative
        rule features
15:30 - J.Carson (FRG): Unification and transduction in computational
        phonology
16:00 - I.A.Bol'sakov (USSR): Socinitel'nyj ellipsis v russkich
        tekstach: problemy opisanija i vosstanovlenija

ROOM B: DISCOURSE
 9:00 - B.L.Webber (USA): Tense as discourse anaphora
 9:30 - J.G.Carbonell, R.D.Brown (USA): Anaphora resolution: a
        multi-strategy approach
10:00 - E.Schuster (USA): Anaphoric reference to events and action:
        a representation

        LANGUAGE GENERATION
11:00 - L.Iordanskaja, R.Kittredge, A.Polguere (Canada): Implementing
        the meaning-text model for language generation
11:30 - S.Nirenburg, I.Nirenburg (USA): A framework for lexical
        selection in natural language generation
12:00 - J.M.Lancel, M.Otani, N.Simonin (France): Sentence parsing and
        generation with a semantic dictionary and a lexicon-grammar
14:00 - D.Schmauks, N.Reithinger (FRG): Generating multimodal output -
        conditions, advantages and problems
14:30 - M.Gasser, M.G.Dyer (USA): Sequencing in a connectionist model
        of language processing
15:30 - N.Ward (USA): Issues in word choice
16:00 - P.Sibun, A.K.Huettner, D.D.McDonald (USA): Directing the
        generation of living space descriptions

ROOM C: COMPUTER ASSISTED LEARNING
 9:00 - C.Schwind (France): Sensitive parsing: error analysis and
        explanation in an intelligent language tutoring system
 9:30 - W.Menzel (GDR): Error diagnosing and selection in a training
        system for second language learning
10:00 - E.G.Borissova (USSR): Two-component teaching system, that
        understands and corrects mistakes
11:00 - U.Zernik (USA): Language Acquisition: Coping with lexical gaps
11:30 - W.Bloemberg (Netherlands): A system for creating and manipulating
        generalized wordclass transition matrices from large labelled
        text-corpora
12:00 - Y.Tateisi, Y.Ono (Japan): A computer readability formula of
        Japanese texts for machine scoring

        LEXICAL ISSUES

14:00 - R.Scha, D.Stallard (USA): Lexical ambiguity and distributivity
14:30 - J.L.Klavans (USA): COMPLEX: a computational lexicon for
        natural language systems
15:30 - J.Nakamura, M.Nagao (Japan): Extraction of semantic information
        from ordinary English dictionary and its evaluation
16:00 - N.Calzolari, E.Picchi (Italy): Acquisition of semantic
        information from an on-line dictionary

ROOM D: MACHINE TRANSLATION
 9:00 - E.van Munster (Netherlands): The treatment of scope and
        negation in Rosetta
 9:30 - P.Schmidt (FRG): A syntactic description of German in a
        formalism designed for a machine translation system
10:00 - C.Zelinsky-Wibbelt (FRG): Universal quantification in machine
        translation

        PARSING
11:00 - H.Nakagawa, T.Mori (Japan): A parser based on connectionist model
11:30 - R.T.Kasper (USA): An experimental parser for systemic grammars
12:00 - A.Abeille (USA): Parsing French with tree adjoining grammar:
        some linguistic accounts
14:00 - H.Haugeneder, M.Gehrke (FRG): Improving search strategies: an
        experiment in best-first parsing
14:30 - O.Stock, R.Falcone, P.Insinnamo (Italy): Island parsing and
        bidirectional charts
15:30 - H.Trost, W.Heinz, E.Buchberger (Austria): On the interaction of
        syntax and semantics in a syntactically guided caseframe parser
16:00 - G.Adriaens, M.Devos, Y.D.Willems (Belgium): The parallel expert
        parser (PEP): a thoroughly revised descendant of the word
        expert parser (WEP)

        MACHINE TRANSLATION
17:00 - M.Meya, J.Vidal (Spain): An integrated model for treatment of
        time in MT-systems
17:30 - F.van Eynde (Belgium): The analysis of tense and aspect in
        EUROTRA
18:00 - E.H.Steiner, J.Winter-Thielen (FRG): On the semantics of focus
        phenomena in Eurotra

17:00 - 18:30 PANEL DISCUSSION in Room C
        "Controlled Languages and Language Control"
        (moderator: H.Karlgren)

                        FRIDAY, AUGUST 26th

 9:00 - 10:30 PLENARY SESSION: TRENDS AND PERSPECTIVES
        Speakers: A.K.Joshi, H.Karlgren, M.Kay, M.Nagao, P.Sgall,
        W.Wahlster

ROOM A: DISCOURSE
11:00 - A.Nakhimovsky, W.Rapaport (USA): Discontinuities in narratives
11:30 - K.J.Saebo (FRG): A cooperative yes-no query system featuring
        discourse particles
12:00 - R.Reilly (Ireland), G.Ferrari, I.Prodanof (Italy): A framework
        for a model of dialogue
14:00 - J.Gundel, N.Hedberg, S.Rundquist, R.Zacharski (USA): On the
        generation and interpretation of demonstrative expressions
14:30 - K.Yoshimoto (Japan): Identifying zero pronouns in Japanese
        dialog

ROOM B: SPEECH ANALYSIS AND SYNTHESIS
11:00 - W.N.Campbell (UK): Speech-rate variation in a real-speech
        database
11:30 - K.J.Engelberg (FRG): Lexical functional grammar in speech
        recognition
12:00 - S.Matsunaga, M.Kohda (Japan): Linguistic processing using a
        dependency structure for speech recognition and understanding
14:00 - J.Harrington, G.Watson, M.Cooper (UK): Word-boundary
        identification from phoneme sequence constraints in automatic
        continuous speech recognition
14:30 - G.Houghton (UK): Anaphora and accent placement in a model of
        the production of spoken dialogue

ROOM C: LEXICAL ISSUES
11:00 - Y.Wilks, D.Fass, Ch.M.Guo, J.E.McDonald, T.Plate,
        B.M.Slator (USA): Machine tractable dictionaries as tools and
        resources for natural language processing
11:30 - M.Domenig (Switzerland): Word manager: a system for the
        definition, access and maintenance of lexical databases
12:00 - B.Katz, B.Levin (USA): Exploiting lexical regularities in
        designing natural language systems
14:00 - Zhong-Xiang Yang (China): Generation of Chinese vocabulary
        from text by associative network
14:30 - J.H.Martin (USA): Representing regularities in the metaphoric
        lexicon

ROOM D: MACHINE TRANSLATION
11:00 - J.A.Alonso (Spain): A model for transfer control in METAL
11:30 - M.McGee Wood (UK): Machine translation for monolinguals
12:00 - A.Bech, A.Nygaard (Denmark): The E-framework: a new comprehensive
        formalism for natural language processing within a stratificational
        transfer-based multi-lingual machine translation system

        PARSING
14:00 - N.Correa (USA): A binding rule for government-binding parsing
14:30 - Hsin-Hsi Chen, I-Peng Lin, Chien-Ping Wu (Taiwan): A new design
        of Prolog-based bottom-up parsing system with government-binding
        theory

15:00 - 17:00 PANEL DISCUSSION in Room C
        "The Relation of Lexicon and Grammar in Machine Translation"
        (moderator: A.Zampolli)

17:00 - CLOSING SESSION in Room C

For further information, contact:
        COLING'88 Secretariat c/o MTESZ Congress Bureau
        Kossuth ter 6-8, H-1055 Budapest, Hungary
        Telex: 22792 MTESZ H

------------------------------

Date: Thu, 23 Jun 88 13:27:26 EDT
From: finin@PRC.Unisys.COM
Subject: Unisys AI seminar: The Causal Simulation of Ordinary and
         Intermittent Mechanical Devices

                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER


                  The Causal Simulation of Ordinary
                 and Intermittent Mechanical Devices

                               Pearl Pu
                      University of Pennsylvania

The causal simulation of physical devices is an important area in the
field of commonsense reasoning about the everyday physical world.  When a
human expert describes the way a physical device works, for example a
pendulum clock, he or she uses commonsense knowledge of physics and
mathematics. To make computers do likewise, we must first construct
a knowledge representation scheme that captures commonsense knowledge,
and supports causal simulation.

Mechanical systems, especially those that exhibit intermittent
motions, provide a good basis for the investigation of behavioral
reasoning issues.  Our key observation is that the spatial
configuration of mechanical devices changes periodically. So far only
simple links or conduits have been used to model the connection
between a pair of objects in the field.  We offer a solution which
uses a separate representational entity, called the connection frame,
to model the spatial relationships between a pair of objects and how
those relationships achieve force or velocity propagation.  The
connection representation is assumed to be supplied as part of the design
knowledge of the mechanism, though it could be just as readily
computed by other spatial connection determination methods.

In this talk, I describe a framework constructed to simulate the
behaviors of regular and intermittent mechanical systems, with an
emphasis on force and velocity propagation reasoning. In general, it
appears that continuous motion can usually be modeled by velocity
propagation while intermittent motion is best approached by force
propagation.

In the second part of the talk, I discuss a simulation system which
attempts to reason about how physical devices work by simulating them
qualitatively, mimicking the way people perform such a
task. The simulation algorithm will be outlined.  Several examples
analyzed with the model include dozens of generic objects and
connections, a two-gear device, a spring-driven cam mechanism, and a
pendulum clock. Currently the simulation is being implemented on the
Symbolics Lisp machine in Flavors, which is an object-oriented
language.  Some of the implementation issues will be discussed as
well.
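
As a purely illustrative aside, the flavour of such a representation can be
sketched in a few lines (Python; the class name, fields and propagation rule
below are my own guesses at the idea, not the representation used in this work):

    from dataclasses import dataclass

    @dataclass
    class ConnectionFrame:
        """Separate entity describing the spatial relation between two parts
        and what that relation can currently transmit (hypothetical fields)."""
        part_a: str
        part_b: str
        contact: str                        # e.g. 'meshed', 'cam-follower'
        engaged: bool = True                # intermittent devices toggle this
        transmits: tuple = ('velocity',)    # 'velocity' and/or 'force'

    def propagate(frames, source, quantity):
        """Follow engaged connections that carry `quantity` ('velocity' for
        continuous motion, 'force' for intermittent motion) and return the
        set of parts the motion reaches."""
        reached, frontier = {source}, [source]
        while frontier:
            part = frontier.pop()
            for f in frames:
                if not (f.engaged and quantity in f.transmits):
                    continue
                neighbours = [f.part_b] if f.part_a == part else \
                             [f.part_a] if f.part_b == part else []
                for nxt in neighbours:
                    if nxt not in reached:
                        reached.add(nxt)
                        frontier.append(nxt)
        return reached

    gears = [ConnectionFrame('gear-1', 'gear-2', 'meshed', True,
                             ('velocity', 'force'))]
    print(propagate(gears, 'gear-1', 'velocity'))   # {'gear-1', 'gear-2'}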

                      2:00 pm Wednesday, June 29
                         BIC Conference room
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: 23 Jun 88 21:46:32 GMT
From: FLASH.BELLCORE.COM!walker@ucbvax.berkeley.edu  (walker_donald e)
Subject: ACL European Chapter Call for Papers


                 ACL European Chapter 1989

                      CALL FOR PAPERS

         Fourth Conference of the European Chapter
      of the Association for Computational Linguistics

                      10-12 April 1989
            Centre for Computational Linguistics
 University of Manchester Institute of Science & Technology
                    Manchester, England


This conference is  the  fourth  in  a  series  of  biennial
conferences  on  computational  linguistics sponsored by the
European  Chapter  of  the  Association  for   Computational
Linguistics.  Previous  conferences  were held in Pisa (Sep-
tember 1983), Geneva  (March  1985)  and  Copenhagen  (April
1987).  Although hosted by a regional chapter, these confer-
ences are global in scope and  participation.  The  European
Chapter  represents a major subset of the parent Association
for Computational Linguistics, and is in its  seventh  year.
The  conference  is  open  both to existing members and non-
members of the Association.

Papers are invited on all aspects of computational  linguis-
tics, including but not limited to:

                         morphology
                     lexical semantics
               computational models for the
            analysis and generation of language
               speech analysis and synthesis
         computational lexicography and lexicology
                    syntax and semantics
                     discourse analysis
                    machine translation
             computational aids to translation
                natural language interfaces
        knowledge representation and expert systems
            computer-assisted language learning


Authors should send six copies of a  5-  to  8-page  double-
spaced  summary  to the Programme Committee at the following
address:

                       Harold Somers
            Centre for Computational Linguistics
                           UMIST
                         PO Box 88
                     Manchester M60 1QD
                          England


It is important that the summary  should  identify  the  new
ideas  in  the paper and indicate to what extent the work is
complete and to what extent it  has  been  implemented.   It
should contain sufficient information to allow the programme
committee to determine the scope of the work and  its  rela-
tion  to  relevant literature. The author's name and address
(including net address if possible) should be clearly  indi-
cated, as well as one or two keywords indicating the general
subject matter of the paper.

Schedule: Summaries must be submitted by 1st  October  1988.
Authors  will  be  notified  of acceptance by 15th December.
Camera-ready copy of final  papers  prepared  in  a  double-
column  format  on model paper (which will be provided) must
be received by 28th February 1989, along with a signed copy-
right  release  statement.  Papers not received by this date
will not be included in the  Conference  Proceedings,  which
will  be  published  in  time  for  distribution to everyone
attending the conference.

The programme committee will be co-chaired by Harold  Somers
(UMIST)  and  Mary  McGee  Wood (Manchester University), and
will include the following

                 Christian Boitet (Grenoble)
                  Laurence Danlos (Paris)
                   Gerald Gazdar (Sussex)
                 Jurgen Kunze (Berlin, DDR)
                 Michael Moortgat (Leiden)
                  Oliviero Stock (Trento)
                 Henry Thompson (Edinburgh)
                   Dan Tufis (Bucharest)


Local arrangements will also be handled by Somers and  Wood.
Please  await  a  further  announcement  in October for more
details.

Exhibits and demonstrations: A  programme  of  exhibits  and
demonstrations  is  planned.  Anyone  wishing to participate
should contact John McNaught  at  the  above  address.  Book
exhibitors  should  contact  Paul  Bennett also at the above
address.

------------------------------

End of AIList Digest
********************

∂25-Jun-88  1237	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #44  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 25 Jun 88  12:37:32 PDT
Date: Sat 25 Jun 1988 15:16-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #44
To: AIList@AI.AI.MIT.EDU


AIList Digest            Sunday, 26 Jun 1988       Volume 7 : Issue 44

Today's Topics:

 Queries:
  heuristics
  online thesaurus and database
  oral surgery expert system proposal
  Response to: Connectionist Expert Systems (MACIE)
  competitive learning
  Query Dbase III Plus with Turbo Prolog
  CLIPS on Apollo?

----------------------------------------------------------------------

Date: Mon, 20 Jun 88 14:10:42 EDT
From: Nicky Ranganathan <nicky@vx2.GBA.NYU.EDU>
Subject: heuristics

 I am currently attempting a classification of heuristics based on the
"type" of the heuristic and the manner in which it is used. For
example, a heuristic such as "A smell of gas in the air could signal
the presence of a leak in a pipe" is an evidential association,
linking some observable feature of the environment to a probable fault
or hypothesis. Heuristics that impose orderings on the reliability of
components are essentially built on statistical experience, and can be
used to discriminate between competing hypotheses.
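
Purely to make the classification concrete, here is a minimal sketch (Python;
the kind names, rules and weights are illustrative inventions, not an
established taxonomy) of heuristics tagged by kind, with the two kinds used
differently when ranking hypotheses:

    from collections import namedtuple

    # 'kind' carries the classification: an evidential association links an
    # observable to a probable fault; a reliability ordering ranks components
    # from statistical experience and is used only to break ties.
    Heuristic = namedtuple('Heuristic', 'kind antecedent consequent weight')

    RULES = [
        Heuristic('evidential-association', 'smell-of-gas', 'pipe-leak', 0.7),
        Heuristic('evidential-association', 'hissing-sound', 'pipe-leak', 0.5),
        Heuristic('reliability-ordering', None,
                  ('valve-failure', 'pipe-leak', 'regulator-failure'), None),
    ]

    def rank_hypotheses(observations):
        """Score hypotheses from evidential associations, then order ties by
        the reliability ordering (least reliable component suspected first)."""
        scores = {}
        for h in RULES:
            if h.kind == 'evidential-association' and h.antecedent in observations:
                scores[h.consequent] = scores.get(h.consequent, 0.0) + h.weight
        ordering = next((h.consequent for h in RULES
                         if h.kind == 'reliability-ordering'), ())
        return sorted(scores, key=lambda hyp: (-scores[hyp],
                      ordering.index(hyp) if hyp in ordering else len(ordering)))

    print(rank_hypotheses({'smell-of-gas', 'hissing-sound'}))   # ['pipe-leak']
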
 I would appreciate ideas/examples/pointers to heuristics like the
above, from any domain, how they could be classified, and how they
would be used. Also, any speculations comparing the general nature of
heuristics for different tasks, such as diagnosis, planning and
prediction would be much appreciated.
 Please e-mail responses to me. If there is sufficient interest I will
post a summary or e-mail those who are interested. Thanks in advance.
--Nicky
------------------------------------------------------------------------------
Nicky Ranganathan         Internet: nicky@vx2.GBA.NYU.EDU
Information Systems Dept. UUCP: ...{allegra,rocky,harvard}!cmcl2!vx2!nicky
New York University       Bitnet: nicky@nybvx1

------------------------------

Date: Mon, 20 Jun 88 18:45:43 EDT
From: HSINCHUN CHEN <hchen@vx2.GBA.NYU.EDU>
Subject: online thesaurus and database

I am working on a dissertation topic which involves applying AI
techniques and methodologies to online information retrieval systems,
such as online catalogs and online bibliographic databases.
This research has been conducted for the last two years.
Some results were reported in AAAI87.
At the current stage I am developing a program which can serve as
an online information specialist between the searchers and the retrieval
system. I am also actively looking for some kind of
online database (book records; in the areas of
computer science, information systems, and finance)
and online thesaurus (preferrably based on the
Library of Congress subject headings classification scheme).
Any pointers or information regarding these electronic forms of thesaurus and
database would be highly appreciated.

For those of you who have similar research interests,
I would also be very happy to exchange ideas and thoughts.
Please contact me through email.

Hsinchun Chen
Information Systems area
New York University


ADDRESS: HCHEN@VX2.GBA.NYU.EDU
    TEL: 212-9984205

------------------------------

Date: Wed, 22 Jun 88 15:01 CST
From: <PMACLIN%UTMEM1.BITNET@MITVMA.MIT.EDU>
Subject: oral surgery expert system proposal


Two of us -- a faculty member in the College of Dentistry and a knowledge
engineer in the Computer Science Department at the University of Tennessee at
Memphis -- have developed the following proposal:

Develop an expert system to teach undergraduate oral surgery students the most
appropriate means of patient evaluation and thereby provide the best treatment
plans for patients at our clinics.

Students will enter facts relating to specific patients into a database on a
Macintosh II. Using expertise in the knowledge base, the computer will
determine appropriate preoperative patient evaluation and then print those
findings for students to use in patient treatment and study later. Patient
histories and data in the database will be used for further research and
evaluation.
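
The intended flow -- student-entered facts, knowledge-base rules, printed
preoperative findings -- can be caricatured as follows (a sketch only, in
Python; the rules and field names are hypothetical placeholders, not clinical
content from the proposal):

    # Hypothetical rules: each pairs a condition on the entered patient facts
    # with a preoperative-evaluation recommendation to print.
    RULES = [
        (lambda p: p.get('anticoagulant_use'),
         'Obtain coagulation studies before surgery.'),
        (lambda p: p.get('age', 0) > 60 and p.get('cardiac_history'),
         'Request medical consult and preoperative ECG.'),
        (lambda p: not p.get('recent_radiographs'),
         'Take a panoramic radiograph before treatment planning.'),
    ]

    def preoperative_findings(patient):
        """Return every recommendation whose condition matches the facts."""
        return [advice for condition, advice in RULES if condition(patient)]

    patient = {'age': 67, 'cardiac_history': True, 'recent_radiographs': True}
    for finding in preoperative_findings(patient):
        print(finding)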

Programming, hardware and development will cost an estimated $25,000.

THE BIG QUESTION: Does anyone out there know any possible places for funding
of this project? Any ideas for sources of funding would be appreciated. Thanks
much.

Contact BHIPP@UTMEM1 or PMACLIN@UTMEM1.
Dr. B.R. Hipp (901 528-6234) or Philip Maclin
The University of Tennessee, Memphis, TN.

------------------------------

Date: Thu, 23 Jun 88 12:12:20 EDT
From: carole hafner <hafner%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: Response to: Connectionist Expert Systems (MACIE)

Prof. Steve Gallant is the author of MACIE, a connectionist expert system
that was described in CACM Vol. 31, No. 2 (Feb. 1988).  Steve's e-mail
address is: sg@corwin.ccs.northeastern.edu

His U.S. Mail address is:
College of Computer Science
Northeastern University
Boston, MA 02115 USA

He has a number of interesting papers on connectionist learning methods and
their applications in expert systems.

--Carole Hafner

------------------------------

Date: 24 Jun 88 13:59:00 GMT
From: s.cs.uiuc.edu!bhamidip@a.cs.uiuc.edu
Subject: competitive learning


I have a question that I am hoping that someone out there will be able
to clear up.  I have been reading about competitive learning in
Rumelhart & McClelland "Parallel Distributed Processing", and am
confused by the way that they have set up the model.  Basically what
they have is a two level network with all the units in the second
layer taking their inputs from the units in the first layer.  The
units in the second layer are grouped into clusters and all units in a
cluster are connected in such a way that they inhibit each other, allowing
only one unit in a cluster to become active.  What I do not understand
is the nature of the inhibitory connections within a
cluster.  Are they just like the other connections, but with
negative weights?  Is some special activation function needed to take
this into account?  When a unit is "learning" can the weights on the
inhibitory connections be changed?
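
For what it is worth, the usual reading of that chapter is that the
within-cluster inhibition is not itself trained: it stands for a
winner-take-all choice, and only the winning unit's incoming weights are
adjusted.  A toy sketch of that reading (Python; my own rendering, not code
from the book):

    import random

    def competitive_step(x, clusters, lr=0.1):
        """One presentation of binary input vector x.

        clusters is a list of clusters; each cluster is a list of weight
        vectors, one per unit.  The inhibitory connections are modelled as
        winner-take-all: the unit with the largest weighted sum is the only
        active unit, and only its weights are updated (the standard rule
        moves a fixed amount of weight onto the active input lines)."""
        n_active = sum(x) or 1
        for cluster in clusters:
            sums = [sum(wi * xi for wi, xi in zip(w, x)) for w in cluster]
            winner = cluster[sums.index(max(sums))]
            for i, wi in enumerate(winner):
                winner[i] = wi + lr * (x[i] / n_active - wi)

    # toy usage: four input lines, one cluster of two units
    random.seed(0)
    clusters = [[[random.random() for _ in range(4)] for _ in range(2)]]
    for pattern in ([1, 1, 0, 0], [0, 0, 1, 1]) * 25:
        competitive_step(pattern, clusters)
    print(clusters)   # after training, each unit typically favours one pattern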

I think that this is a basic question and therefore would best be
answered by email.  If I get enough responses or there is some variation
in the answers I get I will post the results to the net.

Ram Bhamidipaty
EMail: {rutgers}!bellcore!alpha!arb  or arb@alpha.bellcore.com

P.S. I am posting this under a friend's ID.

------------------------------

Date: 24 Jun 88 18:29:32 GMT
From: uccba!ucqais!bbeck@ohio-state.arpa  (Bryan Beck)
Subject: Query Dbase III Plus with Turbo Prolog


I recently read an article in AI EXPERT, June 1988, written by Karl Horak about
using Turbo Prolog to query dBASE III Plus files. Two points were not explained
in the article: (1) how to get Prolog to read the .dbf files, and (2) how to
get it to read the .dbt memo files.
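
For readers hitting the same wall, the fixed part of the job is decoding the
.dbf header.  Here is a minimal sketch of the published dBASE III layout
(shown in Python rather than Turbo Prolog, purely to make the byte offsets
explicit; it is not Horak's method, and the offsets should be checked against
your own files):

    import struct

    def read_dbf(path):
        """Read a dBASE III .dbf file: 32-byte file header, then 32-byte
        field descriptors terminated by 0x0D, then fixed-width records."""
        with open(path, 'rb') as f:
            header = f.read(32)
            n_records, header_len, record_len = struct.unpack('<IHH', header[4:12])
            fields = []
            while True:
                desc = f.read(32)
                if desc[:1] == b'\r':                # 0x0D ends the descriptors
                    break
                name = desc[:11].split(b'\0')[0].decode('ascii')
                ftype = desc[11:12].decode('ascii')  # C, N, L, D, or M (memo)
                length = desc[16]
                # For type 'M' the record holds a block number pointing into
                # the companion .dbt memo file rather than the text itself.
                fields.append((name, ftype, length))
            f.seek(header_len)
            records = []
            for _ in range(n_records):
                raw = f.read(record_len)
                if raw[:1] == b'*':                  # '*' flags a deleted record
                    continue
                pos, row = 1, {}
                for name, ftype, length in fields:
                    row[name] = raw[pos:pos + length].decode('ascii',
                                                             'replace').strip()
                    pos += length
                records.append(row)
        return fields, records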

The article also references Fileman, Rick, "Memo to Character Conversion,"
Ashton-Tate Inc. TechNotes, Nov. 1987, pp. 15-24.

If anyone has read this article, has done anything like this, or knows where I
can find these TechNotes, I would greatly appreciate any additional information
about trying to do this.

Please send replies to my e-mail address.

                Thanks in advance,
                Bryan

--
Bryan M. Beck     Univ of Cincinnati  College of Business Administration
(bbeck@uccba.uc.edu)   UUCP:  {pyramid,decuac,mit-eddie}!uccba!bbeck

------------------------------

Date: 25 Jun 88 00:02:15 GMT
From: cae780!leadsv!kallaus@hplabs.hp.com  (Jerry Kallaus)
Subject: CLIPS on Apollo?


HELLLLLLLLLLLLLLPPPPPPPPPPPPPPPPPPPPP!!!!!!

I am trying to get the AI tool CLIPS running on an Apollo
workstation.  The Apollo has Aegis4-DOMAIN/IX rev 9.2.3.
CLIPS seems to work fine for small test cases, but apparently
prematurely thinks it is out of memory for a problem of any
significant size.  The code that I am trying to load and run
was originally in ART on a Symbolics, was modified to be
syntactically acceptable to CLIPS and has been successfully
run with CLIPS on a VAX.

I have yet to find anyone who knows of anyone using CLIPS
on an Apollo.  I've tried the CLIPS helpline at NASA and
have talked to Apollo software reps already.

I am new to CLIPS, the Apollo, and C, which CLIPS is coded in.
Oh yeah, this is my first USENET posting.   ...have mercy!

Any info or help would be greatly appreciated.



--
Jerry Kallaus         {pyramid.arpa,ucbvax!sun!suncal}leadsv!kallaus
work (408)742-4569    home (408)732-0217
     Funny, how just when you think that life can't possibly get
     any worse, it suddenly does. - Douglas Adams

------------------------------

End of AIList Digest
********************

∂25-Jun-88  1519	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #45  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 25 Jun 88  15:18:44 PDT
Date: Sat 25 Jun 1988 15:22-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #45
To: AIList@AI.AI.MIT.EDU


AIList Digest            Sunday, 26 Jun 1988       Volume 7 : Issue 45

Today's Topics:

 Philosophy:
  Re: Cognitive AI vs Expert Systems
  Re: Ding an sich
  the legal rights of robots
  possible value of AI
  metaepistemology
  H. G. Wells
  Smoliar's metaphor
  more on dance notation

----------------------------------------------------------------------

Date: 22 Jun 88 15:10:39 GMT
From: mikeb@ford-wdl1.arpa  (Michael H. Bender)
Subject: Re: Cognitive AI vs Expert Systems

I think terms like "hard AI" and "soft AI" are potentially offensive
and imply a set of values (i.e. some set of problems being of more
value than others). Instead, I highly recommend that you use the
distinctions proposed by Jon Doyle in the AI Magazine (Spring 88),
in which he distinguishes between the following (note - the short
definitions are MINE, not Doyle's)

 o      COMPUTATIONAL COMPLEXITY ANALYSIS - i.e. the search for
        explanations based on computational complexity

 o      ARTICULATING INTELLIGENCE - i.e. codifying command and expert
        knowledge.

 o      RATIONAL PSYCHOLOGY - i.e. the cognitive science that deals
        with trying to understand human thinking

 o      PSYCHOLOGICAL ENGINEERING - i.e. the development of new
        techniques for implementing human-like behaviors and
        capacities

Note - using this demarcation it is easier to pin-point the different
areas in which a person is working.

------------------------------

Date: 22 Jun 88 15:31:22 GMT
From: steinmetz!vdsvax!thearlin@uunet.UU.NET (Thearling Kurt H)
Reply-to: steinmetz!vdsvax!thearlin@uunet.UU.NET (Thearling Kurt H)
Subject: Re: Ding an sich


In an earlier article, John McCarthy writes:

>meaningful and possible that the basic structure of the
>world is unknowable.  It is also possible that it is
>knowable.  It just depends on how much of the structure

>To illustrate this idea, consider the Life cellular
>automaton proposed by John Horton Conway and studied
>by him and various M.I.T. hackers and others.  It's
>described in Winning Ways by Berlekamp, Conway and
>Guy.


There is a very interesting article related to this topic
in the April 1988 issue of Atlantic Monthly.  It is about
the semi-controversial physicist/computer scientist Edward
Fredkin and is titled "Did the Universe Just Happen?"

An interesting quote from the article is "Fredkin believes that
the universe is very literally a computer and that it is being
used by someone, or something, to solve a problem."


kurt


-----------------------------------------------------------------------
Kurt Thearling                        thearlin%vdsvax.tcpip@ge-crd.arpa
General Electric CRD                   thearlin@vdsvax.steinmetz.ge.com
Bldg. KW, Room C313                     uunet!steinmetz!vdsvax!thearlin
P.O. Box 8                               thearlin%vdsvax@steinmetx.uucp
Schenectady, NY  12301                       kurt%bach@uxc.cso.uiuc.edu
(518) 387-7219                                   kurt@bach.csg.uiuc.edu

------------------------------

Date: Wed, 22 Jun 88 12:59:34 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: the legal rights of robots

There is an article entitled `The Rights of Robots' by Phil McNally and
Sohail Inayatullah in the Summer 1988 issue of _Whole Earth Review_.
They work for the Hawaii Judiciary (inter alia).  A more complete
legalese version was submitted as a report to the Hawaii Supreme
Court.  A footnote to the article says you can obtain this report
and related correspondence from the authors at PO Box 2650, Honolulu,
HI 96804.

The same issue has an article by Candace Pert of NIH about neuropeptides
and the `physical basis of emotions'.  This article suggests to me that
the computer may be inappropriate as a metaphor for mental process,
perhaps as inappropriate as the steam engine metaphor that Freud's
thinking was grounded in.  (Freud's libido, channelling, repression,
release, all presumed passive neurons as pipes and valves, no
metabolism:  the neurophysiological wisdom of the day.)

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Thu, 23 Jun 88 16:52 EST
From: DJS%UTRC%utrcgw.utc.com@RELAY.CS.NET
Subject: possible value of AI

Gilbert Cockton writes:

"... Once again, what the hell can a computer program tell us about
ourselves? Secondly, what can it tell us that we couldn't find out by
studying people instead?"

What do people use mirrors for? What the hell can a MIRROR tell us about
ourselves? Secondly, what can it tell us that we couldn't find out by
studying people instead?

        Isn't it possible that a computer program could have properties
which might facilitate detailed self analysis? I believe some people
have already seen the dim reflection of true intelligence in the primitive
attempts of AI research. Hopefully all that is needed is extensive
polishing and the development of new tools.

                                David Sirag
                                UTRC

------------------------------

Date: Fri, 24 Jun 88 18:46 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: metaepistemology

Distribution-File:
        AILIST@AI.AI.MIT.EDU
        JMC@SAIL.Stanford.EDU

In AIList Digest   V7 #41, John McCarthy <JMC@SAIL.Stanford.EDU>
writes:

>I want to defend the extreme point of view that it is both
>meaningful and possible that the basic structure of the
>world is unknowable.  It is also possible that it is
>knowable.


Suppose an agent that wants to know what there is.

Let the agent have methods and data like a Zetalisp flavor.

Let it have sensors with which it can observe its environment and
methods to influence its environment like servo motors running robot
hands.


Now what can it know?


It is obvious the agent can only have a representation of the Ding an
sich.  In this sense reality is unknowable.  We only have
descriptions of the actual world.

There can be successively better approximations of truth.  It is
important to be able to improve the descriptions, compare them and to
be able to discard ones which do not appear to describe the reality.

It also helps if the agent itself knows it has descriptions and that
they are mere descriptions.


It also is important to be able to do inferences based on the
descriptions, for example to design an experiment to test a new theory
and compare the predicted outcome with the one which actually takes
place.
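
The loop being described here -- perceive, describe, predict, compare,
discard -- can be caricatured in a few lines.  A toy sketch (Python; the
agent class and the hidden 'world' function are illustrative stand-ins only):

    class ToyAgent:
        """Data plus methods, in the spirit of a flavor: the agent can act
        on and sense a world it can never inspect directly."""
        def __init__(self, hidden_world):
            self._world = hidden_world      # stand-in for the Ding an sich
            self._last = None
        def act(self, setup):
            self._last = self._world(setup)
        def sense(self):
            return self._last

    def refine_descriptions(agent, theories, experiments):
        """Keep only descriptions whose predictions survive observation."""
        surviving = []
        for theory in theories:
            consistent = True
            for setup in experiments:
                predicted = theory(setup)     # inference from the description
                agent.act(setup)              # servo motors, robot hands, ...
                if predicted != agent.sense():
                    consistent = False
                    break
            if consistent:
                surviving.append(theory)
        return surviving

    # two rival descriptions of a world the agent only senses, never knows
    agent = ToyAgent(hidden_world=lambda x: x * x)
    theories = [lambda x: x * x, lambda x: 2 * x]
    print(len(refine_descriptions(agent, theories, [0, 1, 3])))   # 1 survives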


It seems that for the most part evolution has been responsible for
developing life-forms which have good descriptions of the Ding an Sich
and which have a good capability to do inference with their models.
Humans are at the top of this evolutionary development: we are capable of
forming, processing and communicating complicated symbolic models of
the reality.


                        Andy Ylikoski

------------------------------

Date: Sat, 25 Jun 88 09:26:02 PDT
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: H. G. Wells

I was disappointed to see no reaction to John Cugini's quotation from
H. G. Wells.  Is no one willing to admit that things haven't changed
since 1906?

------------------------------

Date: Wed Jun 22 13:46:40 EDT 1988
From: sas@BBN.COM
Subject: Smoliar's metaphor


In Volume 7, Issue 41, Stephen Smoliar presents an interesting
metaphor, relating the components of a knowledge representation system
to the parts of speech.  In particular he described ACTIONS as verbs
and TYPES as nouns.  INSTANCES were merely described as "entities".  I
was wondering if it might work better to describe TYPES as
"adjectives" and INSTANCES as "nouns"?  If nothing else, it kind of
makes one think.
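
Put as a toy sketch (Python, purely to illustrate the metaphor and not any
actual representation system): a TYPE read as an "adjective" becomes a
predicate that qualifies things, an INSTANCE read as a "noun" is the thing
itself, and an ACTION stays a "verb":

    clyde = {'name': 'Clyde', 'species': 'elephant'}   # INSTANCE as "noun"

    def elephant(x):                                   # TYPE as "adjective"
        return x.get('species') == 'elephant'          # it qualifies a thing

    def lift(agent, obj):                              # ACTION as "verb"
        return '%s lifts %s' % (agent['name'], obj)

    print(elephant(clyde), lift(clyde, 'a log'))       # True  Clyde lifts a log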

                                Just wondering,
                                        Seth

------------------------------

Date: Sat, 25 Jun 88 09:21:39 PDT
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: more on dance notation

I accept most of John Nagle's response to my remarks on dance notation.
However, I think we both overlooked one area in which, to the best of my
knowledge, NO dance or music notation has served as an effective medium
of communication:  This is the matter of how the dancers (or moving agents)
are situated in space and how they interact.  Most notations, including
Labanotation, make use of relatively primitive floor plans with little
more than vague attempts to coordinate the notations of individual
movements with paths on those floor plans.  In addition, Labanotation
has a repertoire of signs concerned with person-to-person contact;  but
these signs lack the rigor which went into the development of the notation
of movement of the limbs and torso.

The original discussion was provoked by the question of what could not be
communicated by a formal system, such as mathematics.  From there we progressed
to physical movement as a candidate.  That was what led the discussion into
dance notation.  However, whatever has been achieved regarding the movement
of an individual has not addressed the problem of communicating the
interactions of several moving individuals.  I would stipulate that
this is still a thorny problem which, in practice, is still handled
basically by demonstration and imitation.

------------------------------

End of AIList Digest
********************

∂28-Jun-88  2041	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #46  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 28 Jun 88  20:41:03 PDT
Date: Tue 28 Jun 1988 23:05-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #46
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 29 Jun 1988     Volume 7 : Issue 46

Today's Topics:

 Philosophy:

  replicating the brain with a Turing machine
  Deep Thought.
  possible value of AI
  H. G. Wells
  Who else isn't a science?
  metaepistemology
  questions and answers about meta-epistemology

----------------------------------------------------------------------

Date: Sat, 25 Jun 88 21:16 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: replicating the brain with a Turing machine

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In AIList Digest V7 #29, agate!garnet!weemba@presto.ig.com (Obnoxious
Math Grad Student) writes:

>In article <517@dcl-csvax.comp.lancs.ac.uk>, simon@comp (Simon Brooke) writes:
>>[...]
>>If all this is so, then it is possible to exactly reproduce the workings
>>of a human brain in a [Turing machine].
>
>Your argument was pretty slipshod.  I for one do not believe the above
>is even possible in principle.

Why?  You must, or at least should, have a basis for that opinion.

One possibility I can think of is the dualist position: we have a
spirit but don't know how to make a machine with one.

Any other Dualists out there?

                        Andy Ylikoski

------------------------------

Date: Sun, 26 Jun 88 13:10:51 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Deep Thought.

Kurt Thearling quotes a quote in AIList Digest V7 #45.

> An interesting quote from the article is "Fredkin believes that
> the universe is very literally a computer and that it is being
> used by someone, or something, to solve a problem."

Excuse my ignorance, but I am not able to judge who is plagiarising
whom.  Douglas Adams invented a computer which does just this in "The
Hitch Hikers Guide to the Galaxy".  Most people regard this novel as a
work of fiction.

Gordon Joly.

------------------------------

Date: 26 Jun 88 15:41:38 GMT
From: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Reply-to: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Subject: Re: possible value of AI


In a previous article, DJS%UTRC@utrcgw.utc.COM writes:
>Gilbert Cockton writes:
>
>"... Once again, what the hell can a computer program tell us about
>ourselves? Secondly, what can it tell us that we couldn't find out by
>studying people instead?"
>
>What do people use mirrors for? What the hell can a MIRROR tell us about
>ourselves? Secondly, what can it tell us that we couldn't find out by
>studying people instead?

     Personally, I think everything can tell us something about
ourselves, be it mirror, computer, or rock.  Maybe it depends on what
one expects to find?

>       Isn't it possible that a computer program could have properties
>which might facilitate detailed self analysis? I believe some people
>have already seen the dim reflection of true intelligence in the primitive
>attempts of AI research. Hopefully all that is needed is extensive
>polishing and the development of new tools.

     Whether or not there has been a "dim reflection of true
intelligence in the primitive attempts of AI research" is an opinion,
varying greatly with who you talk to.  I would think that there are
"some people" who have already seen "true intelligence" (however
bright or dim) in anything and everything.  Again, their opinions.

>                               David Sirag
>                               UTRC

     The above are my opinions, although they might be those of my
cockatiels as well.

                                        -Chris
--
Christopher Lishka                | lishka@uwslh.uucp
Wisconsin State Lab of Hygiene    | lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617 | ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."
                          - Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

------------------------------

Date: Sun, 26 Jun 88 13:59:24 -0400 (EDT)
From: David Greene <dg1v+@andrew.cmu.edu>
Subject: Re: H. G. Wells

>I was disappointed to see no reaction to John Cugini's quotation from
>H. G. Wells.  Is no one willing to admit that things haven't changed
>since 1906?

Of course things have changed...

there are now far more than 4000 "scientists" in Washington and Herbert George
Wells is dead.

------------------------------

Date: 26 Jun 88 18:17:37 GMT
From: pasteur!agate!garnet!weemba@ames.arpa  (Obnoxious Math Grad
      Student)
Subject: Re: Who else isn't a science?

In article <????>, now expired here, ???? asked me for references.  I
find this request strange, since at least one of my references was in
the very article being replied to, although not spelled out as such.

Anyway, let me recommend the following works by neurophysiologists:

G M Edelman _Neural Darwinism: The Theory of Neuronal Group Selection_
(Basic Books, 1987)

C A Skarda and W J Freeman "How brains make chaos in order to make sense
of the world", _Behavorial and Brain Sciences_, (1987) 10:2 pp 161-195.

These researchers start by looking at *real* brains, *real* EEGs, they
work with what is known about *real* biological systems, and derive very
intriguing connectionist-like models.  To me, *this* is science.

GME rejects all the standard categories about the real world as the
starting point for anything.  He views brains as--yes, a Society of Mind--but
in this case a *biological* society whose basic unit is the neuronal group,
and that the brain develops by these neuronal groups evolving in classical
Darwinian competition with each other, as stimulated by their environment.

CAS & WJF have developed a rudimentary chaotic model based on the study
of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
actual parameters that describe actual rabbit brains, and get chaotic,
EEG-like results.
------------------------------------------------------------------------
In article <34227@linus.UUCP>, marsh@mitre-bedford (Ralph J. Marshall) writes:
>       "The ability to learn or understand or to deal with new or
>        trying situations."

>I'm not at all sure that this is really the focus of current AI work,
>but I am reasonably convinced that it is a long-term goal that is worth
>pursuing.

Well, sure.  So what?  Everyone's in favor of apple pie.
------------------------------------------------------------------------
In article <2618@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre) writes:

>Oh boy. Just wonderful. We have people who have never done AI arguing
>about whether or not it is a science [...]

We've also got what I think a lot of people who've never studied the
philosophy of science here too.  Join the crowd.

>May I also inform the above participants that a MAJORITY of AI
>research is centered around some of the following:

>[a list of topics]

Which sure sounded like programming/engineering to me.

>                  As it happens, I am doing simulations of animal
>behavior using Society of Mind theories. So I do lots of learning and
>knowledge acquisition.

Well good for you!  But are you doing SCIENCE?  As in:

If your simulations have only the slightest relevance to ethology, is your
advisor going to tell you to chuck everything and try again?  I doubt it.

ucbvax!garnet!weemba    Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: Mon, 27 Jun 88 09:18:11 EDT
From: csrobe@icase.arpa (Charles S. Roberson)
Subject: Re: metaepistemology

Assume the "basic structure of the world is unknowable"
[JMC@SAIL.Stanford.edu] and that we can only PERCEIVE our
world, NOT KNOW that what we perceive is ACTUALLY how the
world is.

Now imagine that I have created an agent that interacts
with *our* world and which builds models of the world
as it PERCEIVES it (via sensors, nerves, or whatever).

My question is this:  Where does this agent stand, in
relation to me, in its perception of reality?  Does it
share the same level of perception that I 'enjoy' or is
it 'doomed' to be one level removed from my world (i.e.
is its perception inextricably linked to my perception
of the world, since I built it)?

Assume now, that the agent is so doomed.  Therefore, it
may perceive things that are inconsistent with the world
(though we may never know it) but are consistent with
*my* perception of the world.

Does this imply that "true intelligence" is possible
if and only if an agent's perception is not nested
in the perception of its creator?  I don't think so.
If it is true that we cannot know the "basic structure of
the world" then our actions are based solely on our
perceptions and are independent of the reality of the
world.

I believe we all accept perception as a vital part of an
intelligent entity.  (Please correct me if I am wrong.)
However, a flawed perception does not make the entity any
less intelligent (does it?).  What does this say about
the role of perception to intelligence?  It has to be
there but it doesn't have to function free of original
bias?

Perhaps, we have just created an agent that perceives
freely but it can only perceive a sub-world that I
defined based on my perceptions.  Could it ever be
possible to create an agent that perceives freely and
that does not live in a sub-world?

-chip
+-------------------------------------------------------------------------+
|Charles S. Roberson          ARPANET:  csrobe@icase.arpa                 |
|ICASE, MS 132C               BITNET:   $csrobe@wmmvs.bitnet              |
|NASA Langley Rsch. Ctr.      UUCP:     ...!uunet!pyrdc!gmu90x!wmcs!csrobe|
|Hampton, VA  23665-5225      Phone:    (804) 865-4090
+-------------------------------------------------------------------------+

------------------------------

Date: Mon, 27 Jun 88 20:57 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: questions and answers about meta-epistemology

Distribution-File:
        AILIST@AI.AI.MIT.EDU

Here come questions by csrobe@icase.arpa (Charles S. Roberson) and my
answers to them:

>Assume the "basic structure of the world is unknowable"
>[JMC@SAIL.Stanford.edu] and that we can only PERCEIVE our
>world, NOT KNOW that what we perceive is ACTUALLY how the
>world is.
>
>Now imagine that I have created an agent that interacts
>with *our* world and which builds models of the world
>as it PERCEIVES it (via sensors, nerves, or whatever).
>
>My question is this:  Where does this agent stand, in
>relation to me, in its perception of reality?  Does it
>share the same level of perception that I 'enjoy' or is
>it 'doomed' to be one level removed from my world (i.e.
>is its perception inextricably linked to my perception
>of the world, since I built it)?

It has the perceptual and inferencing capabilities you designed and
implemented, unless you gave it some kind of self-rebuilding or
self-improving capability.  Thus its perception is linked to your
world.

>Does this imply that "true intelligence" is possible
>if and only if an agent's perception is not nested
>in the perception of its creator?  I don't think so.

I also don't think so.  The limitation that the robot's perception is
linked to that of its designer is, I think, inessential.

>I believe we all accept perception as a vital part of an
>intelligent entity.  (Please correct me if I am wrong.)

Perception is essential.  All observation of reality takes
place by means of perception.

>However, a flawed perception does not make the entity any
>less intelligent (does it?).  What does this say about
>the role of perception to intelligence?  It has to be
>there but it doesn't have to function free of original
>bias?

A flawed perception can be lethal, for example to an animal.
Perception is a necessary requirement.

It can be argued, though, that all human perception is biased (our
education influences how we interpret that which we perceive).

>Perhaps, we have just created an agent that perceives
>freely but it can only perceive a sub-world that I
>defined based on my perceptions.  Could it ever be
>possible to create an agent that perceives freely and
>that does not live in a sub-world?

Yes, at least if the agent has the capability to extend itself for
example by being able to redesign and rebuild itself.  How much
computational power in the Turing machine sense this capability
requires is an interesting theoretical question which may already have
been studied by the theoreticians out there.

                        Andy Ylikoski

------------------------------

End of AIList Digest
********************

∂29-Jun-88  2142	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #47  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 29 Jun 88  21:42:29 PDT
Date: Thu 30 Jun 1988 00:02-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #47
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 30 Jun 1988      Volume 7 : Issue 47

Today's Topics:

 Announcements:
  Visiting position in NL at Toronto
  TOC from Canadian Artificial Intelligence, No. 16, July 1988
  call for papers - 5th IEEE conference on AI applications

----------------------------------------------------------------------

Date: Sat, 25 Jun 88 14:04:18 GMT
From: Graeme Hirst <gh%aipna.edinburgh.ac.uk@NSS.CS.UCL.AC.UK>
Subject: Visiting position in NL at Toronto

VISITING POSITION IN NATURAL LANGUAGE UNDERSTANDING

UNIVERSITY OF TORONTO
ARTIFICIAL INTELLIGENCE GROUP
(DEPARTMENT OF COMPUTER SCIENCE)

A one-year visiting position, for a post-doc or more senior person,
is available in the University of Toronto A.I. group, in the area of
natural language understanding and computational linguistics.

The visitor would carry a 50% teaching load (one half-course per
semester), supervise MSc theses, and participate in the research
group's activities.  The position is to commence asap.

The Toronto A.I. group includes 6.5 faculty, 2 research scientists,
and approximately 40 graduate students.  The natural language
subgroup includes one faculty member (Graeme Hirst) and about ten
graduate students.

For more information, contact Graeme Hirst, preferably by e-mail:

   In North America:    gh@ai.toronto.edu
   In U.K./Europe:      gh@uk.ac.ed.aipna

   Phone (in U.K. until 18 July):   031 225 7774 x.225
         (in Canada from 19 July):  416-978-8747

   Write:     Graeme Hirst
              Dept Computer Science
              University of Toronto
              Toronto, CANADA   M5S 1A4

------------------------------

Date: Mon, 27 Jun 88 15:35:10 EDT
From: Christopher Prince <ames!arcsun!chris@calgary.MIT.EDU>
Subject: Canadian AI magazine TOC

Table of contents from
Canadian Artificial Intelligence, No. 16, July 1988
(edited at the Alberta Research Council)
a publication of the CSCSI (Canadian Society for Computational Studies
of Intelligence).


   Communications
 3 Executive Notes
 4 Humour-Dear Dr. Rob

   AI News
 7 Short Takes
 9 New Products
10 A Proposal for the Creation of SIGET
   Philippe Duchastel

   Feature Articles
13 AI and Resource Industries
   Connie Bryson
16 Research Directions for ICAI in Canada
   Philippe Duchastel

   Research Reports
20 AI Research and Development at Applied AI Systems, Inc.
   Takashi Gomi
23 Artificial Intelligence Work at MacDonald Dettwiler and Associates Ltd.
   Max Krause
27 Knowledge Acquisition Research and Development at Acquired Intelligence Inc.
   Brian A. Schaefer

   Conference Reports
29 Social Issues Conference
   Robin Cohen
32 Report on the 1988 Distributed Artificial Intelligence Workshop
   Ernest Chang
34 The Fourth IEEE Conference on Artificial Intelligence Applications
   Betty Ann Snyder

   Publications
36 Book Reviews
38 Books Received
39 Computational Intelligence Abstracts
41 Technical Reports

44 Conference Announcements


-------------------------------------------------------------------------------

Christopher Prince,

Alberta Research Council,
Calgary, Alberta. Canada.
(403) 297-2600

arcsun!chris@uunet
or
prince@noah.arc.cdn

------------------------------

Date: Tue, 28 Jun 88 10:02:15 EDT
From: Mark.Fox@ISL1.RI.CMU.EDU
Subject: conference call for papers


                      PRELIMINARY CALL FOR PARTICIPATION

                         THE FIFTH IEEE CONFERENCE ON

                     ARTIFICIAL INTELLIGENCE APPLICATIONS

                                  OMNI HOTEL
                                MIAMI, FLORIDA

                               MARCH 6-10, 1989

                  SPONSORED BY: THE COMPUTER SOCIETY OF IEEE

This  conference  is  devoted  to  the  application  of artificial intelligence
techniques to real-world problems.    Two  kinds  of  papers  are  appropriate:
papers   that   focus  on  principles  which  underlie  knowledge-based  system
applications, and case studies of  knowledge-based  application  programs  that
solve  significant  problems.  Only new, significant and previously unpublished
work will be accepted.  The following types of  papers  will  be  accepted  for
review by the Program Committee:

   - Papers  focusing  on  principles  of  knowledge-based  systems.  Such
     papers should describe significant completed research, detailing  the
     practical  aspects  of  designing  and  constructing  knowledge-based
     systems, how relevant  AI  techniques  were  applied  effectively  to
     important  problems,  software  life cycle engineering concerns, etc.
     AI  techniques  include,  but  are  not  limited  to  the  following:
     Knowledge   acquisition,   task-specific  knowledge  representations,
     task-specific  reasoning,  verification  and  validation,  diagnosis,
     project management, intelligent interfaces, and general tools.

   - Papers  describing  case  studies  of  AI-based application programs.
     Such  papers  should  describe  an  application  of   AI   technology
     demonstrating  the  solution  of  a significant problem, including an
     analysis  of  why  the  implementation   techniques   selected   were
     appropriate  for  the  problem domain.  Case study areas include, but
     are not limited to the following:  Science, medicine, law,  business,
     engineering,  manufacturing,  and robotics.  Case Study papers should
     contain the following sections:  (1) Problem definition; (2) Previous
     approaches; (3) Approach of the case study; (4) Performance analysis;
     (5) Status of implementation.

Papers should be limited to 5000 words.  Starting this year, the first page  of
the  paper  will be standardized in order to provide the reader with additional
application information.   The  first  page  of  the  paper  must  contain  the
following information:

   - TITLE
   - CONTACT  INFORMATION:  Name, affiliation, US Mail and electronic mail
     addresses, telephone number.
   - TOPIC: Principles or Case Study, Subtopics within  the  topic  (e.g.,
     manufacturing, diagnosis, explanation, knowledge acquisition).
   - ABSTRACT: A 200 word abstract.
   - STATUS:  Status of implementation: research, development, field test,
     or production use.
   - DOMAIN:  Domain of  application,  e.g.,  medical  diagnosis,  factory
     scheduling.
   - LANGUAGE:   Implementation language (if applicable), both programming
     language,  e.g.,  LISP,  C,  and  knowledge  engineering   tool   (if
     applicable).
   - EFFORT: Person-years of effort put into the project to date.

Each  paper  accepted  for  publication  will  be  allotted  six  pages  in the
conference proceedings.

In addition to papers, we will be accepting the following types of submissions:

   - Proposals for Panel discussions.   Topic  and  desired  participants.
     Indicate  the  membership of the panel and whether you are interested
     in organizing/moderating the discussion.   A  panel  proposal  should
     include a 1000-word summary of the proposed subject.

   - Proposals for Demonstrations.  Videotape and/or description of a live
     presentation (not to exceed 1000 words).  The demonstration should be
     of  a  particular  system  or  technique  that shows the reduction to
     practice of one of the conference topics.  The demonstration or video
     tape should be not longer than 15 minutes.

   - Proposals   for   Tutorial  Presentations.    Proposals  of  both  an
     introductory and advanced nature are requested.  Topics should relate
     to  the  management  and  technical  development of usable and useful
     artificial intelligence applications.  Particularly of  interest  are
     tutorials  analyzing  classes of applications in depth and techniques
     appropriate for a particular class of  applications.    However,  all
     topics  will  be  considered.    Tutorials  are  three  (3)  hours in
     duration; copies of slides are to be provided in advance to IEEE  for
     reproduction  (see  schedule  of  dates  below).    If  you  have any
     questions about tutorial proposals, contact the Tutorial Chair, Nancy
     Martin, for more information.

     Each tutorial proposal should include the following:

        * Detailed  topic  list  and descriptive abstract (approximately 5
          pages)
        * Tutorial level:  introductory, intermediate, or advanced
        * Prerequisite reading for intermediate and advanced tutorials
        * Short tutorial and instructional  vita  of  presenter  (previous
          lecture experience)
        * Short  professional vita demonstrating presenter's experience in
          area of tutorial

IMPORTANT DATES

September 20, 1988:  Four copies  of  Papers,  Demonstration  Proposals,  Panel
                Proposals,  and  Tutorial  Proposals  are due.  Submissions not
                received by that date will be returned unopened.

October 18, 1988: Author notifications mailed.

December 6, 1988: Accepted papers due to IEEE. Accepted tutorial notes  due  to
                Tutorial Chair, Nancy Martin.

March 6-7, 1989: Tutorials convene.

March 6-10, 1989: Conference convenes.

Submit Papers and Other Materials to:

Mark S. Fox/ Roy A. Maxion
Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
USA

Phone: 412-268-3832
FAX: 412-268-5016
TELEX: 854941
ARPANET: msf@isl1.ri.cmu.edu


Submit Tutorial Proposals to:

Nancy Martin
Softpert Systems
24 Berkeley Street
Nashua, NH 03060

Phone: 603-882-1790

                             CONFERENCE COMMITTEE

General Chair
Elaine Kant, Schlumberger-Doll Research

Program Committee Chairs
Mark S. Fox, Carnegie-Mellon University
Roy Maxion, Carnegie-Mellon University

Tutorial Chair
Nancy Martin, Softpert Systems

Program Committee
Jan Aikins, AION Corp.
Alice Agogino, UC Berkeley
Miro Benda, Boeing Computer Services
B. Chandrasekaran, Ohio State University
Rina Dechter, UC Los Angeles
Vasant Dhar, New York University
Lee Erman, Teknowledge
Brian Gaines, University of Calgary
Richard Herrod, Texas Instruments
Se June Hong, IBM
Gary Kahn, Carnegie Group
Sanjay Mittal, Xerox Palo Alto Research Center
Sergei Nirenburg, Carnegie-Mellon University
Van Parunak, ITI
Marilyn Stelzner, Intellicorp
Steve Shafer, Carnegie-Mellon University
Beau Sheil, Price Waterhouse Technology Center
Elliot Soloway, University of Michigan
Mitch Tseng, Digital Equipment Corporation

Additional Information

For registration, exhibits, and additional conference information, contact:
CAIA-89
The Computer Society of the IEEE
1730 Massachusetts Avenue, NW
Washington, DC 20036-1903

Phone: 202-371-0101

------------------------------

End of AIList Digest
********************

∂30-Jun-88  0104	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #48  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 30 Jun 88  01:04:13 PDT
Date: Thu 30 Jun 1988 00:18-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #48
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 30 Jun 1988      Volume 7 : Issue 48

Today's Topics:

 Queries:
  gardening systems
  the Michalski-Stepp Conceptual Clustering Algorithm
  Response to: gardening systems
  Math discussion list?

----------------------------------------------------------------------

Date: 27 Jun 88 14:51:40 GMT
From: parvis@pyr.gatech.edu  (FULLNAME)
Subject: gardening systems


I would like to know whether anyone knows about a computer program that
could help me with my gardening. I'm interested in all kinds of computer
programs, for either indoor or outdoor gardening, running on any personal computer.

Does anyone have experience in using a computer to plan her/his gardening and
diagnose diseases of the plants? I know there are many knowledge-based systems
for many different domains. Does anyone know a similar application for
gardening?

I appreciate any response and discussion on this question. Thanks, Parvis.

-- parvis@gitpyr.gatech.edu

------------------------------

Date: Mon, 27 Jun 88 11:20:43 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: /tmp/naulin

Mr. Nanlin has implemented the Michalski-Stepp Conceptual Clustering
Algorithm in Common Lisp on a TI-Explorer.  It has been tested out on the
following examples in the literature:
  The microcomputer example, Figure 8 [1]
  The figure example, Figure 6 [2]

He needs answers to the following two questions:

Does anyone have any more extensive test data for it?

Does anyone know how to generate artificial test data of arbitrary size
for this algorithm?
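
On the second question, one simple recipe (a sketch only, in Python, with
arbitrary parameters; it is not taken from the Michalski-Stepp papers) is to
plant a few hidden prototypes over nominal attributes and perturb them, so
that the "right" clustering is known in advance and a result can be scored
against it:

    import random

    def make_test_data(n_objects, n_attrs, n_clusters,
                       values=('a', 'b', 'c'), noise=0.15, seed=0):
        """Generate nominal-attribute objects around hidden prototypes.

        Each prototype is a random attribute vector; every generated object
        copies one prototype and, with probability `noise` per attribute,
        substitutes a random value.  The hidden labels are returned so a
        clustering result can be compared against them."""
        rng = random.Random(seed)
        prototypes = [[rng.choice(values) for _ in range(n_attrs)]
                      for _ in range(n_clusters)]
        objects, labels = [], []
        for _ in range(n_objects):
            k = rng.randrange(n_clusters)
            objects.append([v if rng.random() > noise else rng.choice(values)
                            for v in prototypes[k]])
            labels.append(k)
        return objects, labels

    objects, labels = make_test_data(n_objects=200, n_attrs=6, n_clusters=4)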

[1] Michalski, Ryszard S. and Stepp, Robert E., Automated Construction
of Classifications: Conceptual Clustering Versus Numerical Taxonomy
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume PAMI 5,
No. 4, July 1983

[2] Michalski, R. S. and Stepp, R., Revealing Conceptual Structure in Data by
Inductive Inference, Machine Intelligence, #10, Editors, J. E. Hayes, Donald

 Laurence Leff: Coordinator, Computer Science and Engineering Departmental
Computer Facilities Management Team, Complete Address: 75275-0122, 214-692-2859
 Moderator comp.doc.techreports/TRLIST, Symbolic Math List
 convex!smu!leff leff%smu.uucp@uunet  E1AR0002 at SMUVM1 (BITNET)

------------------------------

Date: 28 Jun 88 09:13:38 GMT
From: otter!ijd@hplabs.hp.com  (Ian Dickinson)
Subject: Re: gardening systems

> / otter:comp.ai / parvis@pyr.gatech.EDU (FULLNAME) /  3:51 pm  Jun 27, 1988 /
> I know there are many knowledge based systems
> for many different domains. Does anyone know a similar application for
> gardening?

I don't know about gardening, but there was a project over here about three
years ago in the domain of crop disease diagnosis, which might give you some
useful pointers.   It was a collaboration between ICI (Imperial Chemical
Industries, who make grungy chemicals for killing bugs)  and a small (then,
at least) software house called, I think, ISI.  ISI produced a fairly
standard es-shell for the project called _Savoir_.  [If you think this is a
long prelude, you're right - I can't remember the name of the project :-(  ]

There was a paper on it at one of the _Expert Systems '8n_ conferences held
by the BCS  (where member( n, [5, 6, 7] ))

Apologies for the fallible memory...

> I appreciate any response and discussion on this question. Thanks, Parvis.
You are welcome - sorry it's a rather broad-band response.

Ian.


+---------------------+--------------------------+------------------------+
|Ian Dickinson         net:                        All opinions expressed |
|Hewlett Packard Labs   ijd@otter.hplabs.hp.com    are my own, and not    |
|Bristol, England       ijd@hplb.uucp              necessarily those of   |
|0272-799910            ..!mcvax!ukc!hplb!ijd      my employer.           |
+---------------------+--------------------------+------------------------+

------------------------------

Date: Tue, 28 Jun 88 21:41:51 EDT
From: Thanasis Kehagias <ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU>
Subject: Math discussion list?

A while ago, people mentioned the existence of discussion lists on math and
science. Can you point me to the e-mail addresses of the above?



             Thanks, Thanasis

------------------------------

End of AIList Digest
********************

∂30-Jun-88  2219	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V7 #49  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 30 Jun 88  22:19:12 PDT
Date: Fri  1 Jul 1988 00:56-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V7 #49
To: AIList@AI.AI.MIT.EDU


AIList Digest             Friday, 1 Jul 1988       Volume 7 : Issue 49

Today's Topics:

  ZAD random number generator

 Philosophy:
  Who else isn't a science?
  Reproducing the brain in low-power analog CMOS
  replicating the brain with a Turing machine
  linguistic metaphor for knowledge representation
  more on dance notation

----------------------------------------------------------------------

Date: Sat, 25 Jun 88 00:58 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: ZAD random number generator

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In a recent AIList issue, someone asked what kind of random number
generator a Zener diode / A-D converter combination would make.

I recall that noise from a Zener diode is quantum mechanical and
follows a well-known and well-defined theoretical spectrum (might it
be the 1/f law?).  As is well known, distributions can be transformed
to obtain the desired one (exponential / Gaussian etc.).
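
For concreteness, here is a minimal sketch in C of two such
transformations (inverse transform for an exponential variate,
Box-Muller for a Gaussian one).  The uniform() routine below is only a
stand-in based on rand(); a real ZAD generator would read the A-D
converter instead, and all constants are illustrative.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define PI 3.14159265358979323846

/* Stand-in for the hardware source: a uniform deviate in (0,1). */
static double uniform(void)
{
    return (rand() + 1.0) / (RAND_MAX + 2.0);
}

/* Inverse-transform method: if U is uniform on (0,1), then
   -ln(U)/lambda is exponentially distributed with rate lambda. */
static double exponential(double lambda)
{
    return -log(uniform()) / lambda;
}

/* Box-Muller transform: two independent uniforms yield one
   standard normal (Gaussian) deviate. */
static double gaussian(void)
{
    double u1 = uniform(), u2 = uniform();
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    int i;
    for (i = 0; i < 5; i++)
        printf("exp: %f   gauss: %f\n", exponential(1.0), gaussian());
    return 0;
}

Whether the raw A-D samples are uniform enough to be used this way
directly is, of course, a separate question; in practice they would
first have to be debiased or whitened.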

Combining the results of Santha and Vazirani mentioned by Albert
Boulanger with a quantum electrodynamical source of noise might even
best a good pseudorandom number generator.

                Andy Ylikoski

------------------------------

Date: 27 Jun 88 00:18:24 GMT
From: bc@media-lab.media.mit.edu  (bill coderre and his pets)
Subject: Re: Who else isn't a science?

In article <11387@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>Anyway, let me recommend the following works by neurophysiologists:
(references)

>These researchers start by looking at *real* brains, *real* EEGs, they
>work with what is known about *real* biological systems, and derive very
>intriguing connectionist-like models.  To me, *this* is science.

And working in the other direction is not SCIENCE? Oh please...

>CAS & WJF have developed a rudimentary chaotic model based on the study
>of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
>actual parameters that describe actual rabbit brains, and get chaotic EEG
>like results.

There is still much that is not understood about how neurons work.
Practically nothing is known about how structures of neurons work.  In
50 years, maybe we will have a better idea.  In the meantime, modelling
incomplete and incorrect physical data is risky at best, and
synthesizing models is just as useful.

>------------------------------------------------------------------------
>In article <2618@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre) writes:
>>Oh boy. Just wonderful. We have people who have never done AI arguing
>>about whether or not it is a science [...]

>We've also got what I think a lot of people who've never studied the
>philosophy of science here too.  Join the crowd.

I took a course from Kuhn. Speak for yourself, chum.

>>May I also inform the above participants that a MAJORITY of AI
>>research is centered around some of the following:
>>[a list of topics]
>Which sure sounded like programming/engineering to me.

Oh excuse me. They're not SCIENCE. Oh my. Well, we can't go studying
THAT.

>>                 As it happens, I am doing simulations of animal
>>behavior using Society of Mind theories. So I do lots of learning and
>>knowledge acquisition.
>Well good for you!  But are you doing SCIENCE?  As in:
>If your simulations have only the slightest relevance to ethology, is your
>advisor going to tell you to chuck everything and try again?  I doubt it.

So sorry to disappoint you. My coworkers and I are modelling real,
observable behavior, drawn from fish and ants. We have colleagues at
the New England Aquarium and Harvard (Bert Holldobler).

Marvin Minsky, our advisor, warns that we should not get "stuck" in
closely reproducing behavior, much as it makes no sense for us to
model the chemistry of the organism to implement its behavior (and
considering that ants are almost entirely smell-driven, this is not a
trite statement!).

The bottom line is that it is unimportant for us to argue whether or
not this or that is Real Science (TM).

What is important is for us to create new knowledge either
analytically (which you endorse) OR SYNTHETICALLY (which is just as
much SCIENCE as the other).

Just go ask Kuhn..........................................mr bc
                                   heart full of salsa jalapena

------------------------------

Date: Wed, 29 Jun 88 08:23:28 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Reproducing the brain in low-power analog CMOS


      Forget Turing machines.  The smart money is on reproducing the brain
with low-power analog CMOS VLSI.  Carver Mead is down at Caltech, reverse
engineering the visual system of monkeys and building equivalent electronic
circuits.  Progress seems to be rapid.  Very possibly, traditional AI will
be bypassed by the VLSI people.

                                        John Nagle

------------------------------

Date: Wed, 29 Jun 88 9:26:50 PDT
From: jlevy.pa@Xerox.COM
Subject: Re: AIList Digest   V7 #46 replicating the brain with a
         Turing machine

Andy Ylikoski asks why you can't replicate the brain's exact functions
with a Turing machine. First off, the brain is not a single machine but
a whole bunch of them. Therefore "replacing it with a Turing machine"
wouldn't get you there.

Turing machines have an inherent limitation in that they are not
reactive i.e.  they are unable to react to the environment directly. On
the other hand, the brain is in direct communication with a number of
input devices (eyes, ears, nose, touch-sense, etc.), all of which are
sending data at the same time.

An interesting question is whether the brain's software suffers from the
Church-Rosser problem which is present in functional languages -
basically, you cannot, in a functional language, see that a certain
source of input is empty and later detect input on it. It seems that
this is not so, since we are able to close our eyes and later open them,
seeing again.

Just speculating...

--Jacob

References
        AIList-REQUEST@AI.AI.MIT.EDU's message of Tue, 28 Jun 88 23:05:00 EDT
-- AIList Digest   V7 #46

------------------------------

Date: Mon, 27 Jun 88 08:20:08 PDT
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: linguistic metaphor for knowledge representation

Seth at BBN proposed a view of TYPES as "adjectives" and INSTANCES as "nouns,"
as an alternative to my view of TYPES as "nouns" (and INSTANCES sort of swept
under the rug as "entities").  Seth has a good point which, as I understand it,
actually has its roots in KL-ONE.  (I suspect Ron Brachman will correct me if
my understanding is off.)  The "types" of KL-ONE are called "classes;"  and
while they might seem nominal, their usage is closer to adjectival.  Thus,
while there might be a class called DOCTOR, the class is meant to embody the
DESCRIPTION of a doctor.  I have heard various KL-ONE users employ phrases
like "doctor-like" or "doctor-ish" in discussing such a class.  The adjectival
point of view becomes more apparent when you consider that a concept like
RICH-DOCTOR may be defined as a specialization of DOCTOR.  This would perhaps
best be paraphrased as "having properties of being both doctor-like and rich."
Thus, there may be some merit in viewing classes as adjectival rather than
nominal.

------------------------------

Date: 27 Jun 88 18:57:13 GMT
From: dan@ads.com (Dan Shapiro)
Reply-to: dan@ads.com (Dan Shapiro)
Subject: Re: more on dance notation


As a method of preserving choreography, Labanotation has been close to
a complete failure; it is laborious, noncompact, nonvisual, and as
some people have commented, it doesn't capture relations of dancers in
space or to one another very well.  As a result, few people are
skilled in writing or reading Labanotation, and the effort of
recording dances has been invested for only a very small percentage of
the world's choreographies.  I have never heard of it being used to
generate choreography.  For communication with people, video formats
are far more expressive.

As a candidate machine format, Labanotation has still more problems
which haven't been mentioned.  It turns out that dance notation is not
only meant to capture the physical position of joints and the movement
of a dancer in space, but also the quality of the movement in an
emotional sense.  Is the effort quick, percussive, or fluid, etc.?
Sometimes, this sense of the movement is more important than the joint
positions themselves.  The formal vocabulary for recording these
qualities is quite limited - which means that dance notation is an
incomplete specification (dance performances are interesting
because there are as many ways of projecting a choreography as there
are dancers).  However, it is unclear what the critical emotional
subset of choreography is.  When would a choreographer be satisfied
that the recording is right?

My suspicion is that the most natural approach is to expand the
concept of dance notation to include both the static score, and an
interpreter which models the dancers that render the score visually.
These "dancers" would add their own nuances of interpretation.
There is a curious point here about the medium carrying the message.
Brings back memories of the 70s, doesn't it?

                Dan Shapiro

------------------------------

Date: 28 Jun 88 09:52 PDT
From: hayes.pa@Xerox.COM
Subject: Re: AIList Digest   V7 #45

On dance notation:
A quick suggestion for a similar but perhaps even thornier problem:  a notation
for the movements involved in deaf sign language.

------------------------------

End of AIList Digest
********************

∂02-Jul-88  1235	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #1   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 2 Jul 88  12:35:07 PDT
Date: Fri  1 Jul 1988 22:48-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #1
To: AIList@AI.AI.MIT.EDU


AIList Digest            Saturday, 2 Jul 1988       Volume 8 : Issue 1

Today's Topics:

 Queries:
  Contact at AI Depart. of Fujitsu Labs in Japan
  Wanted: Jan 88 issue of AI Expert
  Response to: gardening systems
  Proposed robotics list
  Response to: Math discussion list? (2 messages)
  Expert systems on PCs
  Tangram, magic squares
  Digital answering machines
  Translator systems

----------------------------------------------------------------------

Date: 15 Jun 88 17:54:45 GMT
From: mcvax!inria!vmucnam!daniel@uunet.uu.net  (Daniel Lippmann)
Subject: contact at AI Depart. of Fujitsu Labs in Japan

Does anybody know how to contact Mr. Kumon Kouichi, who works in
the AI Department of Fujitsu Labs in Kawasaki (Japan)?
All mail to kddlab!titcca!flab!kumon has failed.
Any electronic or even postal address would be welcome.
Thanks in advance,
daniel (...!mcvax!inria!vmucnam!daniel)

------------------------------

Date: 27 Jun 88 09:57:10 GMT
From: dowjone!gregb@uunet.uu.net  (Greg_Baber)
Subject: Wanted: Jan 88 issue of AI Expert


I am offering $10 for a January 1988 issue of AI Expert in good
condition. Please respond by E-mail only.
--
Reply to: Gregory S. Baber              Voice:  (609) 520-5077
          Dow Jones & Co., Inc.         E-mail: ..princeton!dowjone!gregb
          Box 300
          Princeton, New Jersey 08543-0300

------------------------------

Date: 29 Jun 88 16:32:42 GMT
From: ulysses!sfmag!sfsup!saal@bloom-beacon.mit.edu  (S.Saal)
Subject: Response to: gardening systems

In article <5968@pyr.gatech.EDU>, parvis@gitpyr.UUCP writes:

> I would like to know whether anyone knows about a computer program that
> could help me with my gardening. I'm interested in all kinds of computer
> programs either for indoor or outdoor running on any personal computer.

> -- parvis@gitpyr.gatech.edu


The mail-order bookstore Capability's Books offers about half a dozen
computer programs that design vegetable and flower gardens and can
be set up to remind you of weekly chores.  I don't know of any that
diagnose pest problems.  Capability's prices are reasonable, they
offer a discount on large orders, and a letter will get you excellent
personal service.  Their address is:
        Capability's Books, Box 114, Highway 46, Deer Park, WI  54007
                                        from Carrie, via Sam Saal
"A great garden is never completed, merely abandoned."

------------------------------

Date: Wed, 29 Jun 88 21:51:11 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Proposed robotics list


     I previously asked if a robotics mailing list existed, and if not, was
there interest in starting one.  I have now received seven replies,
none indicating the existence of such a list, and all interested in
subscribing to one.

     I would like to see such a list created.  Personally, I would
like to see a moderated list, along the lines of AILIST, in hopes of
keeping the percentage of useful content high enough to keep the
information useful to those doing robotics research.  Alternatively,
it could be a USENET newsgroup, perhaps gatewayed to the non-UNIX
machines by appropriate sites.  What is the sense of the AILIST community?

     Choose one:

        - A robotics list is not necessary.
        - A moderated list is preferable.
        - An unmoderated list is preferable.

     Reply to me, (jbn@glacier.stanford.edu) and I will summarize.

                                                John Nagle


[Editor's note -

        The current hookup between AIList and usenet has been a great
source of trouble for me, due in part to the fact that the usenet side
is unmoderated while the internet/bitnet side is.

        I currently perform the forwarding by hand, with little
opportunity to prune headers and such - to the great consternation of
many members of the usenet community.

        I am currently looking for (or considering writing) software to
handle this task, but until some exists I could not recommend another
cross-net list using the same mechanism as AIList.  Unless, of course,
you enjoy three-legged races ...

                - nick ]

------------------------------

Date: 30 Jun 88 13:56:48 GMT
From: sunybcs!nobody@rutgers.edu
Reply-to: sunybcs!rapaport@rutgers.edu (William J. Rapaport)
Subject: Response to: Math discussion list?


In a previous article Thanasis Kehagias writes:
>
>A while ago, people mentioned the existence of discussion lists on math and
>science.  Can you point me to the email addresses of the above?

Try sci.logic and sci.math; they're not moderated, so you can just post
news to them directly.

------------------------------

Date: 30 Jun 88 14:50:30 GMT
From: steve@hubcap.clemson.edu ("Steve" Stevenson)
Subject: Response to: Math discussion list?


From a previous article by Thanasis Kehagias:
> A while ago, people mentioned the existence of discussion lists on math and
> science.  Can you point me to the email addresses of the above?


There are several math lists on usenet.  Check with usenet folks.
They are not moderated so you will have to find a way to transmit into
uucp.  Try spaf@purdue.
--
Steve (really "D. E.") Stevenson           steve@hubcap.clemson.edu
Department of Computer Science,            (803)656-5880.mabell
Clemson University, Clemson, SC 29634-1906

------------------------------

Date: 30 Jun 88 16:04:55 GMT
From: parvis@pyr.gatech.edu  (FULLNAME)
Subject: expert systems on PCs

I'm doing research on the usability and feasibility of expert systems on
personal computers such as the Apple Macintosh and the IBM PC.

There are certainly limitations due to memory size and time efficiency.  What
are typical problems when developing and/or using a PC-based expert system?
What do users (not only developers) think about expert systems on PCs?  What
domain solutions have been successfully realized on a PC?  Are the users
satisfied with the features and efficiency, or are such systems 'just
expensive toys'?

The main goal of this survey is to get an overview of how useful and successful
expert system implementations on PCs are.

Any contribution and comments are appreciated. I plan to  post a summary of the
results on this newsgroup.

Thanks, Parvis.

-- parvis@gitpyr.gatech.edu

------------------------------

Date: Fri, 1 Jul 88 00:48 EDT
From: AIList Moderator Nick Papadakis <nick@ai.ai.mit.edu>
Subject: tangram, magic squares


        A Taiwanese reader, Dragon Y.Y. <3V6B0001%TWNMOE10.BITNET>
writes that he recently was told of a solution to the tangram puzzle in
an AI-related magazine done by someone at the University of Maryland
named Deausch (Deutsch, perhaps?).  Unfortunately, he has been unable to
find the actual reference.  I conducted a brief computer search that
failed to yield anything.  Can any AIList readers provide further
information?  Does anyone know of any other related work?

                - nick

------------------------------

Date: 1 Jul 88 20:52:28 GMT
From: rochester!ur-tut!sunybcs!krishnan@cu-arpa.cs.cornell.edu 
      (Ganapathy Krishnan)
Subject: Digital answering machines

Does anyone know of vendors who manufacture digital answering
machines that can store a telephone message on disk,
perform rudimentary speech recognition, and, say, mail the message
to the right person?  Sounds like a tall order!  If you know of
anything that comes close to this, can you send me electronic mail?


Thanks,

krishnan
UUCP   : {cmc12,hao,harpo}!seismo!rochester!rocksvax!sunybcs!krishnan
         ...{allegra,decvax,watmath}!sunybcs!krishnan
CSNET  :  krishnan@cs.buffalo.edu
BITNET :  krishnan@sunybcs

------------------------------

Date: Fri, 01 Jul 88 19:32:44
From: UZR515%DBNRHRZ1.BITNET@CUNYVM.CUNY.EDU
Subject: translator systems

                           H E L P !
                           =========
Dear subscriber,

   I am designing a language translator which should translate English
computer textbooks into Persian.  As everybody knows, there is a great
deal of work on automatic translation of English texts into many
different languages such as German, Chinese, Arabic, etc., which means
that the analysis part of such a translator system has already been
designed and implemented many times over (naturally in different ways).

   I would be very pleased to receive information from any person
interested in automatic translation of English texts
(into any other language).  In particular, I would like to know where
and how I can locate the following:

1. Available translation systems from LISTSERV@FINHUTC,
   or any other way to access such a system?

2. A multilingual translator system such as EUROTRA?

3. Fonts for Persian or at least Arabic characters?

   Your prompt attention would be most appreciated.

Yours sincerely,

                  Hooshang Mehrjerdian
                  <UZR515@DBNRHRZ1.BITNET>
                  Bergmeisterstueck 1
                  5300 Bonn 3
                  West Germany
                  Tel: (0228)-733358
                                               =======

------------------------------

End of AIList Digest
********************

∂02-Jul-88  2218	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #2   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 2 Jul 88  22:18:03 PDT
Date: Sun  3 Jul 1988 00:54-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #2
To: AIList@AI.AI.MIT.EDU


AIList Digest             Sunday, 3 Jul 1988        Volume 8 : Issue 2

Today's Topics:

  No digests for one week

 Philosophy:
  On applying AI
  replicating the brain with a Turing machine
  metaepistemology
  ASL vs dance
  Auto_Suggestion?

 Announcements:
  Directions and Implications of Advanced Computing - DIAC-88
  Intermediate Mechanisms For Activation Spreading

----------------------------------------------------------------------

Date: Sun, 3 Jul 88 00:49 EDT
Subject: No digests for one week


        I have been unexpectedly called away for a period of about one
week.  Unless I am lucky and manage to obtain net access during that
time, there will be no digests sent out.

        Apologies to all.

                - nick

------------------------------

Date: Fri, 1 Jul 88 14:18:14 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: ON applying AI

The following is excerpted without permission from The Boston Globe
Magazine for June 26, 1988, pp. 39-42 of the cover article by D. C.
Denison entitled "The ON team, software whiz Mitch Kapor's new venture":

  In the conference room, where the ON team has assembled for a group
  interview, a question is posed:  What developments in artificial
  intelligence have made their project possible?

  The first response comes immediately:  "The lack of progress."

  Then William Woods, ON's principal technologist, takes a turn.  "What
  came out of artificial intelligence that's useful to us is kind of
  like what came out of the space program that's useful for everybody on
  Earth --"

  "Velcro," someone interrupts.

  "Tang."  From the other side of the table.  "Are we the Tang of
  artificial intelligence?"

  Woods continues undaunted.  "Artificial intelligence has given us a
  tool kit of engineering techniques.  AI has been driven by people
  who've been tilting at windmills, but their techniques are pretty good
  for what we want to do."

  <Paragraph on Bill's background & experience omitted>

  Although the early promise of artificial-intelligence research has
  been tempered--we still don't have computers that can understand
  English or reason like a human expert--the possibilities are so
  seductive, so intriguing, and so potentially profitable that the field
  continues to attract some of the best minds in the computer field.
  Which is why it wasn't surprising that when Mitchell Kapor left Lotus
  two years ago, he became a visiting scholar at MIT's Center for
  Cognitive Science, a leading center of artificial-intelligence-related
  research.  And it's not at all surprising that when Kapor and [Peter]
  Miller put together the ON team, at least one AI veteran of Woods'
  stature was part of the group.

  Yet "artificial intelligence" has become such a buzzword, such an
  umbrella term, that when the topic is brought up, experts such as
  Woods and interested explorers such as Kapor take deep breaths and try
  to redefine the terms of discussion.

  "When you talk about AI," Kapor says, "you're talking about many, many
  things at once:  a body of research, certain kinds of goals and
  aspirations that are characteristic of the people who are in it,
  certain fields of inquiry; you've got soft stuff, you've got hard
  stuff, you have mythology--the term 'AI' casts a broad shadow.

  "I also think there's a reason why so much attention is paid to the AI
  question in the nontechnical press," he continues, "and that is that
  there are some very bombastic people in the AI community who have
  spoken incredibly irresponsibly, who've made careers out of that.  But
  if you make the assumption that because AI gets a lot of attention in
  the press there's a lot going on in the field, you might be making a
  big mistake."

  When Kapor and Woods first met soon after Kapor left Lotus, they
  discovered that they shared a similar view of the value of current AI
  research.  First of all, they both felt that the goal most often
  attributed to artificial-intelligence research--the creation of a
  computer that "thinks" just like a human--was so remote as to be
  essentially impossible.  Kapor's experience at MIT had convinced him
  that scientists still have no idea how people really think.
  Therefore, any attempt to design a computer that works the way people
  think is doomed.

  A more realistic approach, according to Kapor, would be to design
  computer programs that are compatible with the way people think, that
  help amplify a person's intelligence rather than try to duplicate it.
  Kapor felt that some artificial-intelligence techniques, when applied
  to that goal, could be very powerful.

  <Another bg paragraph omitted>

  Last year, Woods, who was working at Applied Expert Systems,
  discovered that Kapor had leased a floor in the same building, and he
  began stopping in for informal conversations.  Eventually, after
  discussing the ON Technology project with Kapor and Miller, and
  studying their business plan, he accepted the position of principal
  technologist and moved his things down two floors into a large corner
  office.

  <paragraph omitted about micros such as Mac II getting more memory>

  One of the possibilities [that opens up], which Woods will be actively
  working on during the next two years, is a more sympathetic fit
  between people and their computers.  "I want to take an abstract
  perspective of what people's mental machinery does very well and what
  a machine can do well," Woods says, "and design ways that you can
  couple the two together to complement each other.  For example,
  machines can do long sequences of complicated steps without leaving
  out one.  People will forget something with frequency.  On the other
  hand, people can walk down the street without falling in holes.  To
  get a mechanical artifact to do that is a challenge that hasn't even
  been approximately approached after several decades of research."

  Woods pauses to frame his thoughts.  "But if you could get the right
  interface technology and conceptual framework, on the machine side, to
  match up with what people really want to do on our side--that would be
  a very nice arrangement."

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Fri, 1 Jul 88 09:55:16 EDT
From: Duke Briscoe <briscoe-duke@YALE.ARPA>
Subject: Re: replicating the brain with a Turing machine

>Date: Wed, 29 Jun 88 9:26:50 PDT
>From: jlevy.pa@Xerox.COM
>Subject: Re: AIList Digest   V7 #46 replicating the brain with a
>         Turing machine
>
>Andy Ylikoski asks why you can't replicate the brain's exact functions
>with a Turing machine. First off, the brain is not a single machine but
>a whole bunch of them. Therefore "replacing it with a Turing machine"
>wouldn't get you there.
I think this is not a valid point because a single Turing machine (TM) can
simulate the actions of a group of parallel TMs.

>Turing machines have an inherent limitation in that they are not
>reactive i.e.  they are unable to react to the environment directly. On
>the other hand, the brain is in direct communication with a number of
>input devices (eyes, ears, nose, touch-sense, etc.), all of which are
>sending data at the same time.
TMs are usually only used as a theoretical tool.  If you were actually
going to implement one, you could have a multi-track input tape with
one tape having an alphabet representing sensory input sampled at an
appropriate rate.  Issues of real-time response are discussed below.

>An interesting question is whether the brain's software suffers from the
>Church-Rosser problem which is present in functional languages -
>basically, you cannot, in a functional language, see that a certain
>source of input is empty and later detect input on it. It seems that
>this is not so, since we are able to close our eyes and later open them,
>seeing again.
In a functional program to simulate a brain, you are assuming that
closing your eyes equates to closing an input stream, while in fact
real optic nerves continue sending information even when the eyes are
closed.

Even though I think I have just shown that the points above are
invalid, I'm still not sure that brain functions can be theoretically
modelled by a TM.  TMs operate in discrete steps, while material
objects act in continuous dimensions of time and space (as far as we
know, otherwise perhaps the universe is a giant, parallel
Turing-equivalent computer).  Assuming reality is continuous, a TM
model might closely approximate something material for some period of
time, but would eventually diverge.

Plus there is the whole problem that any physical TM implementation
would have problems such as unavoidable bit errors which would
invalidate its exact correspondence to the abstract TM.

However, physical implementations, even using non-organic materials,
of computers should still theoretically be capable of the same
computing powers as organic brains.  There just seem to be limitations
in using a restricted TM model to prove things about brain computable
functions.  Maybe an expanded TM model is needed which takes into
account physical properties of space-time.  Or perhaps space-time is
discrete at some level we have not yet detected, in which case the
current plain TM would be adequate.  After all, electric charges seem
to be discrete.

------------------------------

Date: 2 Jul 88 19:11:40 GMT
From: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Subject: Re: metaepistemology


In a previous article, YLIKOSKI@FINFUN.BITNET writes:
> In AIList Digest   V7 #41, John McCarthy <JMC@SAIL.Stanford.EDU>
> writes:
>
> >I want to defend the extreme point of view that it is both
> >meaningful and possible that the basic structure of the
> >world is unknowable.  It is also possible that it is
> >knowable.

I did not see the origins of this debate but it appears to be
nothing more than an attempt to defend the Kantian noumenal vs.
phenomenal distinction. Instead of wasting time debating this
issue, why don't those of you who are interested go and study
some philosophy? And, for those of you who are going to say "but
I have", carefully compare this view with Kant and you will see
that they are in essence identical.

------------------------------

Date: Fri, 1 Jul 88 08:16:09 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: ASL vs dance


I have not studied ASL, but it seems prima facie likely that the gesture
system of a sign language used by the deaf would have both a formal and
an expressive aspect, just as the gesture system of ordinary spoken
phonology does.

In the phonology of a given language, there is a limited inventory of
usually <50 contrasts, differences that make a difference.  The phonetic
`content' of these contrasts (the actual sounds used to embody the
contrasts in a given utterance by a given speaker at a given time)  is
subject to remarkably free `stretching', which languages exploit for
expressive purposes as well as in dialect variation.

Leigh Lisker long ago speculated that the function of semantically empty
greeting rituals ("Hello, how are you?" "Fine, and you?") is to provide
an opportunity for conversants to tune in on the fundamental frequency
of each other's voice and calibrate for the relative location of
vowel formants.  Calibrating for the phonetic envelope each uses to
embody the contrasts of their shared language is also a likely function.

I would expect that deaf folks have to attune themselves to the gestural
style and expressive range of conversants, but I can't think of anything
analogous to the fundamental frequency and vowel formants in phonology.
I would be astonished if there were no analogs of phonemic contrasts in ASL
utterances, no fundamental and stable `differences that make a
difference' to other ASL users, and I would be very interested to
learn what they are like.

In language, it is the formal aspect, the system of contrasts or
`differences that make a difference', and the information structures
that they support, that are the ostensive focus.  This is surely the
case with the sign languages of the Deaf also.  In dance, by contrast,
it is the expressive aspect that is typically the main point, and the
formal structure is subsidiary, merely a channel for expressive
communication, else the piece is seen as dry, technical, academic,
uninspired.  One may apply such adjectives to a conversation, but with
scarcely the same devastating critical effect!  Conversely, a critic who
discussed what a choreographer was saying without comment on how she or
he said it would generally be thought to be missing the point.

An interesting thing here is that the expressive aspects of language use
actually do influence people much more than the literal content (words
7%, tone 32%, kinesics 61%:  Albert Mehrabian, _Public Places & Private
Spaces_; Ray Birdwhistell, _Kinesics & Context_).  In this respect, we
very much need an understanding and representation of the expressive
`stretching' of a formal structure, since that is where most of human
communication takes place (as distinct from simple transmission of
literal information).  This is a big part of the difference between
linguistic competence (Chomsky) and communicative competence (Hymes).
An AI that has the first (a hard enough problem!) but not the second
will always be missing the point and misconstruing the literal meaning
of what is said.  I should think that the notations developed by Ray
Birdwhistell and his colleagues at the Annenberg School of Communication
would be more apt than Laban dance notation, because they concern the
unconscious, culturally inherited expressive art form of ordinary human
communication rather than a consciously cultivated art form.  And of
course Manfred Clynes makes claims about the underlying form of all
communicative expression.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Sat, 02 Jul 88 16:05:09 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Auto_Suggestion?


> From AIList Vol 3 # 161

gcj> Date: Mon,  4 Nov 85 09:58:29 GMT
gcj> From: gcj%qmc-ori.uucp@ucl-cs.arpa
gcj> Subject: Vision Systems and American Sign Language
gcj>
gcj> One of goals of AI research is to  produce speech recognition systems.
gcj> Has there been a proposal to produce a vision system that can ``read''
gcj> ASL?
gcj>
gcj> Gordon Joly

> From AIList Vol 4 # 49

ph> Date: 28 Jun 88 09:52 PDT
ph> From: hayes.pa@Xerox.COM
ph> Subject: Re: AIList Digest   V7 #45
ph>
ph> On dance notation:
ph> A quick suggestion for a similar but perhaps even thornier problem:
ph> a notation for the movements involved in deaf sign language.
ph>

I am not sure if Pat and I are really thinking of the same thing...
Gordon Joly.

Surface mail: Dr. G.C.Joly, Department of Statistical Sciences,
      University College London, Gower Street, LONDON WC1E 6BT, U.K.
E-mail:                                            | Tel: +44 1 387 7050
 JANET (U.K. national network) gcj@uk.ac.ucl.stats |      extension 3636
         (Arpa/Internet form: gcj@stats.ucl.ac.uk) |
Relays: ARPA,EAN: @nss.cs.ucl.ac.uk                |
        CSNET: %nss.cs.ucl.ac.uk@relay.cs.net      |
        BITNET: %ukacrl.bitnet@cunyvm.cuny.edu, @ac.uk
        EARN: @ukacrl.bitnet, @AC.UK
By uucp/Usenet: ....!uunet!mcvax!ukc!stats.ucl.ac.uk!gcj

------------------------------

Date: 30 Jun 88 18:46:41 GMT
From: bcsaic!douglas@june.cs.washington.edu (Douglas Schuler)
Subject: Directions and Implications of Advanced Computing - DIAC-88


              DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING

             DIAC-88   Twin Cities, Minnesota   August 21, 1988

      Earle Browne Continuing Education Center, University of Minnesota


Advanced computing  technologies  are  presented  as  instruments  and images
of both near and distant futures.   Some of these futures radically challenge
our conceptions of work, security, leisure, and common purpose.  Will  we  be
drawn  into  these futures as passive participants or will we actively select
and shape alternative futures in our own interests?

Few computing disciplines lie so directly at the intersection of these issues
as   does  Artificial  Intelligence.    This  summer  thousands  of  computer
professionals will descend on the Twin Cities for the  annual  conference  of
the  American  Association for Artificial Intelligence (AAAI). Sunday, August
21,  the  day  before the AAAI  conference, Computer Professionals for Social
Responsibility (CPSR) will sponsor a  one  day  symposium,  "Directions   and
Implications of Advanced  Computing."  DIAC-88 aims to examine the social and
political contexts of advanced computing, asking what futures are obtainable,
for  whom, and at what cost?

Douglas Engelbart, the DIAC-88 plenary speaker, will share his perspective on
using  the  computer  to  address  global  problems.   Since the late 1950's,
Engelbart has worked with systems that augment the human intellect  including
his  NLS/Augment  system,  a  hypertext system that pioneered "windows" and a
"mouse."  The driving force behind Engelbart's professional career  has  been
his  recognition  of  social  impacts  of  computing technology.  The plenary
session  will  be followed by presentations of research papers  and  a  panel
discussion.  The panel, John Ladd (Brown University), Deborah Johnson
(Rensselaer Polytechnic), Claire McInerney (College of St. Catherine) and Glenda
Eoyang (Excel  Instruction)  will address  the question, "How  Should Ethical
Values be Imparted  and  Sustained in the Computing Community?"

                         Presented Papers

  Computer Literacy: A Study of Primary and Secondary Schools, Ronni
    Rosenberg

  Dependence Upon  Expert  Systems:   The  Dangers  of  the  Computer  as
    an Intellectual Crutch, Jo Ann Oravec

  Computerized Voting, Eric Nilsson

  Computerization and Women's Knowledge, Lucy Suchman and Brigitte Jordan

  Some Prospects for Computer Aided Negotiation, Douglas Schuler

  Computer Accessibility for Disabled Workers: It's the Law (invited paper)
    Richard E. Ladner

Send symposium registration to: DIAC-88, CPSR/Los Angeles,  P.O.   Box  66038
Los  Angeles,  CA   90066-0038.   Enclose  check payable to CPSR/DIAC-88 with
registration.  For additional information, call David Pogoff, 612-933-6431.

  NAME ___________________________________________________
  ADDRESS _________________________________________________
  ________________________________________________________
  ________________________________________________________
  Phone (home) _____________________ (work) ______________________

  Please check one:
  Symposium Registration           Regular             O $50
  (Includes Proceedings and Lunch) CPSR Member         O $35
                                   Student/Low Income  O $25

  I cannot attend, but want the symposium proceedings  O $15

There will be a reception following the symposium.  Proceedings will be
distributed to registrants at the symposium.  Non-attendees will receive
proceedings by October 15, 1988.
--
   ** MY VIEWS MAY NOT BE IDENTICAL TO THOSE OF THE BOEING COMPANY **
        Doug Schuler     (206) 865-3226
        [allegra,ihnp4,decvax]uw-beaver!uw-june!bcsaic!douglas
        douglas@boeing.com

------------------------------

Date: Fri, 1 Jul 88 09:10:39 EDT
From: dlm@research.att.com
Subject: talk announcement


Title:  Intermediate Mechanisms For Activation Spreading
        or
        Why can't neural networks talk to expert systems?

Speaker:Jim Hendler
        University of Maryland Institute for Advanced Computer Studies
        University of Maryland, College Park

Date:   Tuesday, July 19
Time:   1:30
Place:  AT&T Bell Laboratories - Murray Hill  3D-473

Abstract:

               Spreading activation,  in  the  form  of  computer
          models and cognitive theories, has recently been under-
          going a resurgence of interest in the cognitive science
          and  AI  communities.  Two competing schools of thought
          have been forming.  One technique concentrates  on  the
          spreading  of  symbolic information through an associa-
          tive knowledge representation.  The other technique has
          focused on the passage of numeric information through a
          network.  In this talk we show  that  these  two  tech-
          niques can be merged.

               We show how an ``intermediate  level''  mechanism,
          that of symbolic marker-passing, can be used to provide
          a limited form of interaction between traditional asso-
          ciative networks and subsymbolic networks.  We describe
          the marker-passing technique,  show  how  a  notion  of
          microfeatures  can  be  used  to allow similarity based
          reasoning,  and  demonstrate  that  a  back-propagation
          learning  algorithm  can  build  the  necessary  set of
          microfeatures from a  well-defined  training  set.   We
          discuss  several problems in natural language and plan-
          ning research and show how the hybrid system  can  take
          advantage  of inferences that neither a purely symbolic
          nor a purely connectionist system can make at present.

Sponsor:  Diane Litman (allegra!diane)

------------------------------

End of AIList Digest
********************

∂11-Jul-88  2150	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #3   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 11 Jul 88  21:50:12 PDT
Date: Tue 12 Jul 1988 00:28-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #3
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 12 Jul 1988       Volume 8 : Issue 3

Today's Topics:

 Queries:

  Spiegelhalter's causal graphs
  PROLOG Compiler
  Frames System
  KEE
  Blackboard Systems
  Scheduling systems with preferential attributes
  LISP implementations
  A grammar for English

----------------------------------------------------------------------

Date: Mon, 4 Jul 88 08:42:48 PDT
From: mcvax!dutrun!duttnphg!hans@uunet.UU.NET (Hans Buurman)
Subject: Spiegelhalter's causal graphs

I'm interested in using a causal model as described in:

Spiegelhalter, D.J., "Probabilistic Reasoning in Predictive Ex-
     pert Systems," in: Uncertainty in Artificial Intelligence,
     ed. L.N. Kanal, J.F. Lemmer, pp. 47-67, North-Holland, Am-
     sterdam, 1986.

Spiegelhalter represents his model as a graph to which he applies
probability theory under a Markov assumption.  Could anybody give me
pointers to literature on applications or criticisms of this model?
I've seen some work by Pearl, but I wonder how much there is.  Any
help would be very much appreciated.
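
For readers unfamiliar with the approach, the flavour is easy to convey
with a toy example of my own (it is not taken from the paper): for a
three-variable chain A -> B -> C, the directed graph encodes the
factorization

    P(A,B,C) = P(A) P(B|A) P(C|B)

and the Markov assumption is just that each variable depends directly
only on its parents, so that C is conditionally independent of A given B.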

Thanks in advance,

    Hans Buurman
    Pattern Recognition Group
    Faculty of Applied Physics
    Delft University of Technology

UUCP:   ..!mcvax!dutrun!duttnphg!hans
BITNET: tnphhbu@hdetud1

------------------------------

Date: 5 Jul 88 19:05:51 GMT
From: hanrahan@bingvaxu.cc.binghamton.edu  (Bill Hanrahan)
Subject: Request PROLOG Compiler

Hello all,
I'm looking for the source for a Prolog compiler written by Lou Odette.
It was written up in 'AI EXPERT', August 1987, p.48. Apparently it was
posted to this board at one time, but I don't have access to purged
files (files more than two weeks old).

Anyone with any idea of how I might get a hold of this?
Please respond to my ID, Thanks!
--
========================================================================
:Bill Hanrahan                      Relax, don't worry...have a homebrew
:Programmer/Analyst SUNY Binghamton
:hanrahan@bingvaxu.cc.binghamton.edu

------------------------------

Date: 5 Jul 88 19:58:39 GMT
From: ndsuvax!ncthangi@uunet.uu.net  (sam r. thangiah)
Subject: Need a Frames System


I wish to use a frame system for my Ph.D. dissertation.  Could someone give
me pointers as to where I could obtain one, public domain or otherwise, that
is reasonably efficient and maintainable?
(We do not have a fortune to spend on obtaining one.)


Thanks in advance

Sam
--
Sam R. Thangiah,  North Dakota State University.
300 Minard Hall     UUCP:       ...!uunet!plains.nodak.edu!ncthangi
NDSU, Fargo         BITNET:     ncthangi@plains.nodak.edu.bitnet
ND 58105            ARPA,CSNET: ncthangi%plains.nodak.edu.bitnet@cunyvm.cuny.edu

------------------------------

Date: Tue, 05 Jul 88 20:53:34
From: Ramachandran Iyer <60874863%WSUVM1.BITNET@MITVMA.MIT.EDU>
Subject: References on KEE wanted

I am looking for references on the knowledge representation tool
Knowledge Engineering Environment (KEE).

Any suggestions are most welcome.

-Ramachandran

Email:  60874863%wsuvm1.BITNET@cunyvm.cuny.edu

------------------------------

Date: 6 Jul 88 21:08:12 GMT
From: mcvax!inria!crin!laasri@uunet.uu.net  (Hassan LAASRI)
Subject: To Blackboard Systems Designers

Originally, "blackboard-based system" denoted an AI system composed of a
set of independent computational agents, known as knowledge sources (KSs),
which interact and communicate via a global database called the blackboard,
under the management of a controller that may itself be an AI system.

Recently, the term "distributed blackboard system" (or distributed control
in blackboard systems) has been applied to knowledge-based systems such as
the UMass SCHEMA system or the CMU Navlab system.  In these systems, one
reads that, because the controller component can be a system bottleneck,
the designers prefer to distribute control among the KSs, so that each KS
can decide when and how it can contribute to the problem-solving process.
Unfortunately, there is not enough documentation about their controllers.

We would like some insight into the following questions:

1. In a distributed control model, once two or more KSs are triggered,
how do they negotiate among themselves to resolve conflicting blackboard
accesses, i.e. how do they find the best order in which to execute?  Do
they use the contract-net model or something like that?

2. Does each KS evaluate its own contribution to the progress of problem
solving and return a priority value, after which the system executes the KS
with the highest priority (see the sketch below)?  In that case, this type
of control would be only a subset of the original "centralised" control
model used in systems such as HEARSAY-II, BB-1, HASP/SIAP, ATOME, GBB, ...
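
To make question 2 concrete, here is a minimal sketch (in C) of the
centralised rate-and-select loop we have in mind.  The knowledge
sources, their ratings, and the integer "blackboard" are purely
illustrative; nothing here is drawn from SCHEMA, Navlab, or any of the
systems named above.

#include <stdio.h>

#define NKS 2

/* A knowledge source: a self-rating function plus an action.
   Both look only at the shared blackboard (here, just a counter). */
typedef struct {
    const char *name;
    int (*rate)(int bb);   /* returns a priority; 0 means "not applicable"    */
    int (*act)(int bb);    /* fires the KS and returns the updated blackboard */
} KS;

static int rate_low(int bb)  { return bb < 5 ? 10 : 0; }
static int act_low(int bb)   { printf("low-level KS fires\n");  return bb + 1; }

static int rate_high(int bb) { return bb >= 5 ? 20 : 0; }
static int act_high(int bb)  { printf("high-level KS fires\n"); return bb + 3; }

int main(void)
{
    KS  agenda[NKS] = { { "low",  rate_low,  act_low  },
                        { "high", rate_high, act_high } };
    int blackboard = 0;
    int i, best, best_rating;

    for (;;) {
        best = -1;
        best_rating = 0;
        for (i = 0; i < NKS; i++) {       /* every triggered KS rates itself */
            int r = agenda[i].rate(blackboard);
            if (r > best_rating) { best_rating = r; best = i; }
        }
        if (best < 0)                     /* nothing applicable: quiescence  */
            break;
        blackboard = agenda[best].act(blackboard);  /* run the top-rated KS  */
        if (blackboard >= 8)              /* crude termination condition     */
            break;
    }
    printf("final blackboard value: %d\n", blackboard);
    return 0;
}

Our question is whether the "distributed" systems do essentially this,
with the rating step merely moved into the KSs, or whether they
negotiate in some richer way (for example, via contract nets).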

P.S. We decided to post this request to this newsgroup after trying to
contact the researchers concerned.  Unfortunately, we did not receive any
response to our inquiries.


-----------------------------------------------------
Mr. Hassan LAASRI and Miss. Brigitte MAITRE
CRIN/INRIA-Lorraine
Pattern Recognition and Artificial Intelligence Lab.
Blackboard Group
Campus Scientifique - B.P. 239
54506 VANDOEUVRE-LES-NANCY CEDEX
FRANCE

E-mail : laasri@crin.crin.fr or maitre@crin.crin.fr
Phone : (+33) 83-91-20-00 Post 30-05

------------------------------

Date: 8 Jul 88 03:38:41 GMT
From: munnari!charlie.oz.au!root@uunet.UU.NET (Just call me SUPER)
Reply-to: lukose@aragorn.OZ (Dickson Lukose)
Subject: Pointers on Scheduling systems with preferential attributes

Dear Colleagues,
        Are you aware of any scheduling systems (real-world or
academic prototypes) that include personal preferential attributes
when performing the scheduling process?

EG: Scheduling for assignment of workers to assembly lines in plants,
which takes into account:-
        (1) individual preference of supervisor
        (2)    "           "      "  workmate[s]
        (3)    "           "      "  type of assembly line[s]
        (4)    "           "      "  tool[s]
        etc,....

Any pointers will be much appreciated

thanks in advance
------------------------------------------------------------------
Dickson Lukose          | UUCP: ...!seismo!munnari!aragorn.oz!lukose
Div. Comp. & Maths      |       ....!decvax!mulga!aragorn.oz!lukose
Deakin University       |
Victoria, 3217          | ARPA: munnari!aragorn.oz!lukose@SEISMO.ARPA
Australia               | ACSNET: lukose@aragorn

------------------------------

Date: Fri, 8 Jul 88 21:19:04 CDT
From: drl@backup (David R. Linn)
Subject: LISP implementations

Does anyone have any experience with a LISP implementation that does
not rely on an interpreter? I know of none such but before I state (in
my master's thesis) or even imply that LISP is *always* implemented with
an interpreter, I thought I'd solicit confirmation (or counterexample)
from the readers of AILIST(/comp.ai?). Please reply directly to me as this
is not likely to be of interest to most of the readership; "if there is
evidence of sufficient interest, I will summarize to the list/newsgroup."

NB: my mailer is broken; reply to the address below - not to the address
in the header. (The postmaster is busy working on his master's thesis.)

David Linn - System Manager/Postmaster
Vanderbilt University School of Engineering
drl@vuse.vanderbilt.edu -or- ...!uunet!vuse!drl

------------------------------

Date: 11 Jul 88 13:37:19 GMT
From: mailrus!uflorida!novavax!proxftl!bill@husc6.harvard.edu  (T.
      William Wells)
Subject: Wanted: a grammar for English

I am looking for grammars for English.  I'd prefer ones written
as context-free rules with restrictions.

I am currently looking through the references I picked up at the
recent ACL conference and am investigating a number of other
items I have picked up. However, I'd appreciate any references
thrown my way.

Please respond by E-mail, and if there is enough interest I will
summarize to the net.

------------------------------

End of AIList Digest
********************

∂12-Jul-88  0158	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #4   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Jul 88  01:58:20 PDT
Date: Tue 12 Jul 1988 00:35-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #4
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 12 Jul 1988       Volume 8 : Issue 4

Today's Topics:

 Queries:
  Soundex algorithm   (3 responses)
  Syllables of English   (3 responses)

----------------------------------------------------------------------

Date: 8 Jul 88 17:18:30 GMT
From: hubcap!shorne@gatech.edu  (Scott Horne)
Subject: Soundex algorithm

Does anyone have a reference to info on the design of the Soundex algorithm?
Source code (whatever language) would be helpful, too.

Advance thanks.  (BTW, it's probably best to post, as mail is at best shaky
at this site.)


                                --Scott Horne

BITNET:         PHORNE@CLEMSON (not working; please use another address)
uucp:           ....!gatech!hubcap!scarle!{hazel,citron,amber}!shorne
                (If that doesn't work, send to cchang@hubcap.clemson.edu)
SnailMail:      Scott Horne
                812 Eleanor Dr.
                Florence, SC   29501
VoiceNet:       803 667-9848

------------------------------

Date: 9 Jul 88 04:20:40 GMT
From: wesommer@athena.mit.edu  (William Sommerfeld)
Subject: Re: Soundex algorithm

Sorry for the length of this posting..

In article <2130@hubcap.UUCP> shorne@citron writes:
>Does anyone have a reference to info on the design of the Soundex algorithm?

This one is a somewhat superficial article; it contains a short
Apple ][+ BASIC program which implements the soundex algorithm.

@article{soundx,
        AUTHOR="Jacob R. Jacobs",
        TITLE="Finding Words That Sound Alike: The Soundex Algorithm",
        YEAR="1982",
        MONTH="March",
        JOURNAL="Byte"
}

Fortunately, it references the following, which talks about many
algorithms other than just Soundex:

@article{acmsoundex,
        AUTHOR="Patrick A. V. Hill and Geoff R. Dowling",
        TITLE="Approximate String Matching",
        JOURNAL="ACM Computing Surveys",
        VOLUME="12",
        MONTH="December",
        YEAR="1980"
}

>Source code (whatever language) would be helpful, too.

You asked for it, you got it.

Don't ask me why it's in BCPL; I didn't write it (but I'm going to
have to convert it to C Real Soon Now, before the DECSYSTEM-20 it
runs on turns into scrap metal).

structure
{ SoundXCode↑1↑4 char
}

SoundX(Str) := valof
{ let Value := 0
  let S := vec 40
  CopyString(Str, S)
  RaiseString(S)
  Value<<SoundXCode↑1 := S>>String.C↑1
  let N := 2
  and PreviousSoundX := -1
  for i := 2 to S>>String.N do
  { let Ch := S>>String.C↑i
    let ThisSoundX := selecton Ch into
    { default: 0

      case $F:
      case $V: 1

      case $C:
      case $G:
      case $J:
      case $K:
      case $Q:
      case $S:
      case $X:
      case $Z: 2

      case $B:
      case $P:
      case $D:
      case $T: 3

      case $L: 4

      case $M:
      case $N: 5

      case $R: 6
    }
    if ThisSoundX=0 \ ThisSoundX=PreviousSoundX loop
    Value<<SoundXCode↑N := ThisSoundX
    PreviousSoundX := ThisSoundX
    N := N+1
    if N=5 break
  }
  resultis Value
}
!
and SoundXCompare(DataBase, Attempt) := valof
{ let DBSoundX := SoundX(DataBase)
  for i := 1 to 4 do
  { let ThisAttempt := Attempt<<SoundXCode↑i
    if ThisAttempt=0 resultis true
    if ThisAttempt ne DBSoundX<<SoundXCode↑i resultis false
  }
  resultis true
}

------------------------------

Date: 10 Jul 88 16:14:00 GMT
From: leverich@rand-unix.arpa  (Brian Leverich)
Subject: Re: Soundex algorithm


If you aren't satisfied with the responses you've already received,
try posting to the genealogy newsgroup (rec.genealogy or whatever...).

Soundex is used to index many lists of names, and there are several PD
programs genealogists use for converting names to Soundex.

Incidentally, does anyone know if there's been any genealogy applications
built using Prolog or the like?  Looks like a logic programming approach
to maintaining relations between individuals might be a big win.  -B
--
  "Simulate it in ROSS"
  Brian Leverich                       | U.S. Snail: 1700 Main St.
  ARPAnet:     leverich@rand-unix      |             Santa Monica, CA 90406
  UUCP/usenet: decvax!randvax!leverich | Ma Bell:    (213) 393-0411 X7769

------------------------------

Date: 11 Jul 88 13:38:21 GMT
From: rochester!ur-tut!sunybcs!stewart@bbn.com  (Norman R. Stewart)
Subject: Re: Soundex algorithm


     The source I've used for Soundex (developed by the
Remington Rand Corp., I believe) is

     Huffman, Edna K. (1972) Medical Record Management.
        Berwyn, Illinois: Physicians' Record Company.

The algorithm is very simple:

1:  Assign number values to all but the first letter of the
word, using this table
   1 - B P F V
   2 - C S K G J Q X Z
   3 - D T
   4 - L
   5 - M N
   6 - R
   7 - A E I O U W H Y

2: Apply the following rules to produce a code of one letter and
   three numbers.
   A: The first letter of the word becomes the initial character
      in the code.
   B: When two or more letters from the same group occur together
      only the first is coded.
   C: If two letters from the same group are separated by an H or
      a W, code only the first.
   D: Group 7 letters are never coded (this does not include the
      first letter in the word, which is always coded).

Of course, this can be used without the numeric substitution to
produce abbreviations also, but the numbers indicate phonemic
similarity (e.g. Bear = Bare = B6) or rhymes (e.g. Glare = G46,
Flair = F46).  This can also be useful for finding duplicate entries
in a large database, where a name may be slightly misspelled (e.g.
Smith = Simth = S53).
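
For what it's worth, here is a minimal sketch in C of the rules above.
It follows this table (so B and P are grouped with F and V; note that
the BCPL code earlier in this digest uses a slightly different
grouping), it does not zero-pad short codes, matching the examples
given here, and it has not been checked against the Remington Rand
specification.

#include <ctype.h>
#include <stdio.h>

/* Map a letter to its group in the table above.  Group 7 (the vowels
   plus W, H, Y) is never coded. */
static int group(int c)
{
    switch (toupper(c)) {
    case 'B': case 'P': case 'F': case 'V':
        return 1;
    case 'C': case 'S': case 'K': case 'G':
    case 'J': case 'Q': case 'X': case 'Z':
        return 2;
    case 'D': case 'T':
        return 3;
    case 'L':
        return 4;
    case 'M': case 'N':
        return 5;
    case 'R':
        return 6;
    default:
        return 7;
    }
}

/* Produce a code of one letter and up to three digits, following
   rules A-D above.  Assumes a non-empty, alphabetic word. */
static void soundex(const char *word, char code[5])
{
    int i, n = 1;
    int prev = group((unsigned char)word[0]);

    code[0] = toupper((unsigned char)word[0]);        /* rule A */
    for (i = 1; word[i] != '\0' && n < 4; i++) {
        int c = toupper((unsigned char)word[i]);
        int g = group(c);
        if (g == 7) {                  /* rule D: group 7 is never coded */
            if (c != 'H' && c != 'W')  /* rule C: H and W do not break   */
                prev = 7;              /*         up a run of one group  */
            continue;
        }
        if (g != prev)                 /* rule B: code only the first of */
            code[n++] = '0' + g;       /*         a run from one group   */
        prev = g;
    }
    code[n] = '\0';
}

int main(void)
{
    static const char *names[] = { "Bear", "Bare", "Glare", "Flair",
                                   "Smith", "Simth" };
    char code[5];
    int i;

    for (i = 0; i < 6; i++) {
        soundex(names[i], code);
        printf("%-6s -> %s\n", names[i], code);
    }
    return 0;
}

It reproduces the examples above (Bear = Bare = B6, Glare = G46,
Flair = F46, Smith = Simth = S53).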



Norman R. Stewart Jr.             *  How much more suffering is
C.S. Grad - SUNYAB                *  caused by the thought of death
internet: stewart@cs.buffalo.edu  *  than by death itself!
bitnet:   stewart@sunybcs.bitnet  *                       Will Durant

------------------------------

Date: 6 Jul 88 14:14:26 GMT
From: ece-csc!ncrcae!gollum!rolandi@ncsuvx.ncsu.edu  (Walter Rolandi)
Subject: syllables of English


Can anyone provide me with a list of all the constituent syllables of English?
Any ideas as to how one could produce such a list would be greatly appreciated.

Thanks.

Walter Rolandi
rolandi@gollum.UUCP
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

------------------------------

Date: 7 Jul 88 02:03:31 GMT
From: hubcap!shorne@gatech.edu  (Scott Horne)
Subject: Re: syllables of English

From article <125@gollum.UUCP>, by rolandi@gollum.UUCP (Walter Rolandi):
>
> Can anyone provide me with a list of all the constituent syllables of English?

I've read that there are more than 8000 such syllables (DeFrancis, _The
Chinese Language:  Fact and Fantasy_, U. of Hawaii).  Good luck compiling a
list!  (N.B.:  Those are phonetically distinct syllables, not graphically
distinct.)

Incidentally, Japanese has just over 100 syllables.

                                --Scott Horne

BITNET:         PHORNE@CLEMSON (not working; please use another address)
uucp:           ....!gatech!hubcap!scarle!{hazel,citron,amber}!shorne
                (If that doesn't work, send to cchang@hubcap.clemson.edu)
SnailMail:      Scott Horne
                812 Eleanor Dr.
                Florence, SC   29501
VoiceNet:       803 667-9848

------------------------------

Date: 7 Jul 88 16:07:17 GMT
From: uhccux!stampe@humu.nosc.mil  (David Stampe)
Subject: Re: syllables of English

If it's possible, rather than occurring, English syllables you want, you
might look at diagrams for possible monosyllables, as in Zellig Harris,
Methods in Structural Linguistics, U. Chicago Press, 195?.  Stressed
syllables in polysyllables are a subset of those in monosyllables.
Unstressed syllables are a subset of stressed syllables, unless you take
the consonantal nuclei in rubber, rubble, ribbon, rub'm to be distinct
from the nuclei of brr, bull, bun, bum.  Such diagrams are approximations,
since the number of phonemes and especially the number of possible
combinations into syllables differs somewhat among dialects and
individuals.  They usually admit hundreds of pronounceable but very
peculiar syllables like trart, klilk, kwuw, smamp, oyj, awb.
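
To make the diagram idea concrete, here is a purely illustrative Python
sketch; the onset/nucleus/coda inventories are tiny placeholders of my own,
nothing like the real English sets, and the generator over-admits peculiar
forms just as the diagrams do:

    from itertools import product

    # Placeholder inventories, far smaller than the real English ones.
    ONSETS = ["", "b", "k", "s", "tr", "kl", "sm", "kw"]
    NUCLEI = ["a", "e", "i", "o", "u", "ar", "aw", "oy"]
    CODAS  = ["", "t", "k", "mp", "lk", "rt", "b", "j"]

    def candidate_syllables():
        # Every onset + nucleus + coda combination is a candidate syllable,
        # pronounceable or peculiar (cf. trart, klilk, smamp above).
        for onset, nucleus, coda in product(ONSETS, NUCLEI, CODAS):
            yield onset + nucleus + coda

    candidates = sorted(set(candidate_syllables()))
    print(len(candidates), "candidates, e.g.", candidates[:10])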

David (stampe@uhccux.uhcc.hawaii.edu)

------------------------------

Date: 8 Jul 88 19:22:34 GMT
From: att!ihlpa!krista@bloom-beacon.mit.edu  (Anderson)
Subject: Re: syllables of English

    To Walter R.:  I tried to send mail, but it bounced.  I don't
have a list of English syllables, but I do have a list of consonant
clusters and vowels.  If you want it, I'll post it; however, it is
about 250 lines.
    Actually, I made the list when I was trying to understand why
a Navajo friend was having trouble with some English words.
    I wrote all the English consonant clusters I could think
of, including those that occur only in the *final* positions of
words.  I came up with about 197 consonants and consonant clusters!
And the list is probably not conclusive.
    Since Navajo has only about 35 consonants and clusters, of which
about 15 intersect the English set, I gained a lot of sympathy for
anybody learning English as a second language.  I've heard that
Polish has a lot of clusters; anybody know how many?  Cherokee has
only 13 consonants (no clusters), I seem to recall.  Tlingit
(related to Navajo) is  reputed to have a great many phonemes (50
compared to English 35); but these figures do not include clusters.
By the way, Cherokee is about the prettiest language I've ever
heard. It was once a tonal language, but the tones lost their
meaning in most words, at least in the western dialect.  However, a
light, musical quality remains.
    Shut me up, please!  If you want the list, let me know.

Krista Anderson, ihnp4!ihlpa!krista, but we may be shutting down email?

------------------------------

End of AIList Digest
********************

∂12-Jul-88  0356	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #5   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Jul 88  03:55:44 PDT
Date: Tue 12 Jul 1988 00:44-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #5
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 12 Jul 1988       Volume 8 : Issue 5

Today's Topics:

 Philosophy:

  Re: Who else isn't a science?
  Metaepistemology & Phil. of Science
  Re: Bad AI: A Clarification
  Re: The Social Construction of Reality
  Generality in Artificial Intelligence
  Theoretical vs. Computational Linguistics

----------------------------------------------------------------------

Date: 3 Jul 88 08:07:20 GMT
From: agate!garnet!weemba@ucbvax.berkeley.edu  (Obnoxious Math Grad
      Student)
Subject: Re: Who else isn't a science?

I'm responding very slowly nowadays.  I think this will go over better
in s.phil.tech anyway, so I'm directing followups there.

In article <2663@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt  writes:
>In article <11387@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu  writes:

>>These researchers start by looking at *real* brains, *real* EEGs, they
>>work with what is known about *real* biological systems, and derive very
>>intriguing connectionist-like models.  To me, *this* is science.

>And working in the other direction is not SCIENCE? Oh please...

Indeed it isn't.  In principle, it could be, but it hasn't been.

Physics, for example, does work backwards.  All physical models are expected
to point to experiment.  Non-successful models are called "failures" or, more
politely, "mathematics".  They are not called "Artificial Reality" as a way
of hiding the failure.

(If it isn't clear, I do not consider mathematics to be a "science".  My
saying AI has not been science, in particular, is not meant as a pejorative.)

>>CAS & WJF have developed a rudimentary chaotic model based on the study
>>of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
>>actual parameters that describe actual rabbit brains, and get chaotic EEG
>>like results.

>There is still much that is not understood about how neurons work.
>Practically nothing is known about how structures of neurons work.

And theorizing forever won't tell us either.  You have to get your hands
dirty.

>                                                                   In
>50 years, maybe we will have a better idea. In the mean time,
>modelling incomplete and incorrect physical data is risky at best.

Incorrect???  What are you referring to?

Risky or not--it is "science".  It provides constraints that theory must
keep in mind.

>                                                                   In
>the mean time, synthesizing models is just as useful.

No.  Synthesizing out of thin air is mostly useless.  Synthesizing when
there is experiment to give theory feedback, and theory to give
experiment a direction to look, is what is useful.  That is what Edelman,
Skarda and Freeman are doing.

>>We've also got what I think are a lot of people here who've never studied
>>the philosophy of science.  Join the crowd.
>
>I took a course from Kuhn. Speak for yourself, chum.

Gee.  And I know Kuhn's son from long ago.  A whole course?  Just enough
time to memorize the important words.  I'm not impressed.

>>>May I also inform the above participants that a MAJORITY of AI
>>>research is centered around some of the following:
>>>[a list of topics]
>>Which sure sounded like programming/engineering to me.

>Oh excuse me. They're not SCIENCE. Oh my. Well, we can't go studying THAT.

What's the point?  Who said you had to study "science" in order to be
respectable?  I think philosophy is great stuff--but I don't call it
science.  The same for AI.

>>If your simulations have only the slightest relevance to ethology, is your
>>advisor going to tell you to chuck everything and try again?  I doubt it.

>So sorry to disappoint you. My coworkers and I are modelling real,
>observable behavior, drawn from fish and ants.

>Marvin Minsky, our advisor, warns that we should not get "stuck" in
>closely reproducing behavior,

That seems to be precisely what I said up above.

>The bottom line is that it is unimportant for us to argue whether or
>not this or that is Real Science (TM).

You do so anyway, I notice.

>What is important is for us to create new knowledge either
>analytically (which you endorse) OR SYNTHETICALLY (which is just as
>much SCIENCE as the other).

Huh??  Methinks you've got us backwards.  Heinously so.  And I strongly
disagree with this "just as much as the other".

>Just go ask Kuhn.

Frankly, I'm not all that impressed with Kuhn.

ucbvax!garnet!weemba    Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: Sun, 03 Jul 88 03:47:51 EST
From: Jeff Coggshall <KLEMOSG%YALEVM.BITNET@MITVMA.MIT.EDU>
Subject: Metaepistemology & Phil. of Science


>Date: Fri, 24 Jun 88 18:46 O
>From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
>It is obvious the agent only can have a representation of the Ding an
>Sich.  In this sense the reality is unknowable.  We only have
>descriptions of the actual world.

>It seems that for the most part evolution has been responsible for
>developing life forms which have good descriptions of the Ding an Sich

 I don't think that it's even obvious that we have anything that is in any
sense a "representation" of the Ding an Sich.  The argument from "well, hey,
look, we're doing pretty well at surviving, so therefore what we think is
really out there must have some sort of _correspondence_ with what's really
out there" just doesn't hold up.  It doesn't follow.  Yes, we are adapted to
survive, but that doesn't mean that our thoughts about what's going on are
even incomplete "representations" of some Ding an Sich.
    Some may say: "well - who cares if we've got a real _representation_ of
some "really real reality", so long as our theoretical assumptions continue to
yield fruitful results - then why worry?"
    The problem is that our theoretical assumptions will bias our empirical
results.  "Facts" are theory-laden and theories are value-laden.  It all comes
down to ethics.  Our human perceptual world is at least as real (and in many
ways more real) to us (and to whom else would it make sense to talk about its
being real?) as the world of theoretical physics.
    Once we assume that there is no privileged source of knowledge about the
way things really are, then, it seems, we are left with either saying that
"anything goes" (as in astrology is just as valid as physics and so is voodoo)
or with insisting that "reality", however it is construed, must constrain
cognitive activity and that one must cultivate an openness to this constraint.

------------------------------------------------------------------------

>Date: 27 Jun 88 00:18:24 GMT
>From: bc@media-lab.media.mit.edu  (bill coderre and his pets)
>Subject: Re: Who else isn't a science?

>The bottom line is that it is unimportant for us to argue whether or
>not this or that is Real Science (TM).

    Well, yes and no. It doesn't make any sense to go around accusing other
people of not doing science when you haven't established any criteria for
considering some kind of discovery/inquiry-activity as a science. Is math a
science? There doesn't seem to be any empirical exploration going on... Is
math just a conceptual tool used to model empirical insights? Could AI be the
same?
    I found a quote from Nelson Goodman that might interest you.  Tell me what
you think, y'all:
    "Standards of rightness in science do not rest on uniformity and constancy
of particular judgments.  Inductive validity, fairness of sample, relevance of
categorization, all of them essential elements in judging the correctness of
observations and theories, do depend upon conformity with practice - but upon
a tenuous conformity hard won by give-and-take adjustment involving extensive
revision of both observations and theories."  (from _Mind and other Matters_,
1984, p. 12)

>What is important is for us to create new knowledge either
>analytically (which you endorse) OR SYNTHETICALLY (which is just as
>much SCIENCE as the other).

     - are you using Kant's analytic/synthetic distinction? because if you are
(or want to be), then you should realize that all new knowledge is synthetic.
You might even be interested in an article by W. V. O. Quine in the book "From
a Logical Point of View". It is called "Two Dogmas of Empiricism" and therein
he convincingly trashes any possible analytic/synthetic distinction. I agree
with him. I think that the burden of proof lies on anyone who wants to claim
that there is any a priori knowledge. Believing this presupposes a God's eye
(or actually a no-eye) point of view on reality, which doesn't make sense.

                                                        Jeff Coggshall
                                    (Jcoggshall@hampvms or Klemosg@yalevm)

------------------------------

Date: 5 Jul 88 18:09:24 GMT
From: bc@media-lab.media.mit.edu  (bill coderre)
Subject: Re: Who else isn't a science?

I am going to wrap up this discussion here and now, since I am not
interested in semantic arguments or even philosophical ones. I'm sorry
to be rude. I have a thesis to finish as well, due in three
weeks.

First, the claim was made that there is little or no research in AI
which counts as Science, in a specific interpretation. This statement
is incorrect.

For example, the research that I and my immediate colleagues are doing
is "REAL" Science, since we model REAL animals, make very REALISTIC
behavior, and have REAL ethologists as critics of our work.

Next, the claim was made that synthesis as an approach to AI has not
panned out as Science. Well, wrong again. There's plenty of such.

Then I am told that few AI people understand the Philosophy of
Science. Well, gee. Lots of my colleagues have taken courses in such.
Most are merely interested in the fundamentals, and have taken survey
courses, but some fraction adopt a philosophical approach to AI.

If I were a better AI hacker, I would just append a list of references
to document my claims.  Unfortunately, my references are a mess, so let
me point you at The Encyclopedia of Artificial Intelligence (J. Wiley
and Sons), which is generally excellent.  Although it lacks specific
articles on AI as a Science (I think; I didn't find any on a quick
glance), there are plenty of references concerning the more central
philosophical issues in AI.  Highly recommended.  (Incidentally, there's
plenty of stuff in there on the basic approaches to and results from
AI research, so if you're a pragmatic engineer, you'll enjoy it too.)

Enough. No more followups from me.

------------------------------

Date: 6 Jul 88 15:00:57 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Bad AI: A Clarification

In article <1337@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:

I don't have time to respond to all of your articles that respond
to mine, but will try to say something.  I suggested that you give
specific criticism of specific research, but you have declined to do
so.  That's unfortunate, because as it is most people are just going
to ignore you, having heard such unsupported attacks before.

>In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>>>It would be nice if they followed good software engineering practices and
>>>structured development methods as well.

>>Are you trying to see how many insults can fit into one paragraph?

>No.

OK, I'll accept that.  But if so, you failed to make your intention
clear.  And of course it *would* be nice if they, etc., but do you
know that "they" don't?  My experience is that appropriate software
engineering practices are followed in many cases.  That doesn't mean
they all use JSP (or equivalent), but then it's not always appropriate
to do so.

>No-one in UK HCI research, as far as I know, objects to the criticism
>that research methodologies are useless until they are integrated
>with existing system development approaches.

That no one objects is not a valid argument.  They might all be wrong.

>On software engineering too, HCI will have to deliver its
>goods according to established practices.  To achieve this, some HCI
>research must be done in Computer Science departments in collaboration
>with industry.  There is no other way of finishing off the research
>properly.

There is a difference between research and delivering goods that can
be used by industry.  It is not the case that all research must be
delivered in finished form to industry.  Of course, the needs of
industry, including their desire to follow established practices, are
important when research will be so delivered, but in other cases such
needs are not so significant.

We must also consider that the results of research are not always
embodied in software.

>You've either missed or forgotten a series of postings over the last
>two years about this problem in AI.

Or perhaps I don't agree with those postings, or perhaps I don't agree
with your view of the actual state of affairs.

>Project managers want to manage IKBS projects like existing projects.

Of course, they do: that's what they know.  You've yet to give any
evidence that they're right and have nothing to learn.

>You must also not be talking to the same UK software houses as I am, as
>(parts of) UK industry feel that big IKBS projects are a recipe for
>burnt fingers, unless they can be managed like any other software project.

Big IKBS projects are risky regardless of how they're managed.  Part
of the problem is that AI research hasn't advanced far enough: it's
not just a question of applying some software engineering; and so
the difficulties with big IKBS projects are not necessarily evidence
that they must be managed like any other software project.

But this is all beside the point -- industrial IKBS projects and
AI research are not the same thing.

------------------------------

Date: 6 Jul 88 15:16:08 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Bad AI: A Clarification

In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
1 Are you really trying to oppose "bad AI" or are you opportunistically
1 using it to attack AI as a whole?  Why not criticise specific work you
1 think is flawed instead of making largely unsupported allegations in
1 an attempt to discredit the entire field?

In article <1336@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
2 No, I've made it clear that I only object to the comfortable,
2 protected privilege which AI gives to computational models of
2 Humanity.

If that is so, why don't you confine your remarks to that instead of
attacking AI's existence as a discipline?

2 Anything which could be called basic research is the province of other
2 disciplines, who make more progress with less funding per investigation (no
2 expensive workstations etc.).

Have you considered the costs of equipment in, say, Medicine or
Physics?

2 I do not think there is a field of AI.  There is a strange combination
2 of topic areas covered at IJCAI etc.  It's a historical accident, not
2 an epistemic imperative.

So are the boundaries of the UK.  Does that mean it should not exist
as a country?

2 My concern is with the study of Humanity and the images of Humanity
2 created by AI in order to exist.  Picking on specific areas of work is
2 irrelevant.

The question will then remain as to whether there is any work for
which your criticism is valid.

2 But when I read misanthropic views of Humanity in AI, I will reply.
2 What's the problem?

Perhaps you will have a better idea of the problem if you consider
that "responding to misanthropic views of Humanity in AI" is not an
accurate description of what you do.

-- Jeff

------------------------------

Date: 6 Jul 88 16:51:40 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: The Social Construction of Reality

In article <1332@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>The dislike of ad hominem arguments among scientists is a sign of their
>self-imposed dualism: personality and the environment stop outside the
>cranium of scientists, but penetrate the crania of everyone else.

No, it is a sign that they recognize that someone can be right despite
having qualities that might make their objectivity suspect.  They also
can remember relativity being attacked as "Jewish science" and other
ad hominem arguments of historical note.

>When people adopt a controversial position for which there is no convincing
>proof, the only scientific explanation is the individual's ideology.

Perhaps the person is simply mistaken and thinks there is convincing
proof.  (Suppose they misread a conclusion that said something was
"not insignificant", say.)  Or perhaps they are don't really hold
the position in question but are simply using it because others
find it hard to refute.

-- Jeff

------------------------------

Date: Thu, 7 Jul 88 10:24 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: Generality in Artificial Intelligence

Distribution-File:
        AILIST@AI.AI.MIT.EDU
        JMC@SAIL.Stanford.EDU

This entry was inspired by John McCarthy's Turing Award lecture in
Communications of the ACM, December 1987, Generality in Artificial
Intelligence.


> "In my opinion, getting a language for expressing general
> commonsense knowledge for inclusion in a general database is the key
> problem of generality in AI."


What is commonsense knowledge?


Here follows an example where commonsense knowledge plays its part.  A
human parses the sentence

"Christine put the candle onto the wooden table, lit a match and lit
it."

The difficulty which humans overcome with commonsense knowledge but
which is hard for a program is to determine whether the last word, the
pronoun "it", refers to the candle or to the table.  After all, you can
burn a wooden table.

Probably a human would reason, within less than a second, like this.

"Assume Christine is sane.  The event might have taken place at a
party or during her rendezvous with her boyfriend.  People who do
things such as taking part in parties most often are sane.

People who are sane are more likely to burn candles than tables.

Therefore, Christine lit the candle, not the table."


It seems to me that the inferences are not so demanding but the
inferencer utilizes a large amount of background knowledge and a good
associative access mechanism.


Thus, it would seem that in order for us to see true commonsense
knowledge exhibited by a program we need:

        * a vast amount of knowledge involving the world of a person
          in virtual memory.  The knowledge involves gardening,
          Buddhism, the emotions of an ordinary person and so forth -
          its amount might equal a good encyclopaedia.
        * a good associative access mechanism.  An example of such
          an access mechanism is the hashing mechanism of the
          Metalevel Reasoning System described in /1/.


What kind of formalism should we use for expressing the commonsense
knowledge?


Modern theoretical philosophy knows of a number of logics with
different expressive power /2/.  They form a natural scale for
evaluating different knowledge representation formalisms.  For
example, it would be very interesting to know whether Sowa's
Conceptual Structures correspond to a previously known logical system.
I remember having seen a paper which complained that to a certain
extent the KRL is just another syntax for first-order predicate logic.

In my opinion, it is possible that an attempt to express commonsense
knowledge with a formalism is analogous to an attempt to fit a whale
into a tin sardine can.  The knowledge of a person has so many nuances,
which are well reflected by the richness of the language used in
poetry and fiction (yes, a poem may contain nontrivial knowledge!).

Think of the Earthsea trilogy by Ursula K. LeGuin.  The climax of the
trilogy is when Sparrowhawk the wizard saves the world from Cob's evil
deeds by drawing the rune Agnen across the spring of the Dry River:

"'Be thou made whole!' he said in a clear voice, and with his staff
he drew in lines of fire across the gate of rocks a figure: the rune
Agnen, the rune of Ending, which closes roads and is drawn on coffin
lids.  And there was then no gap or void place among the boulders.
The door was shut."

Think of how difficult it would be to express that with a formalism,
preserving the emotions and the nuances.

I propose that the usage of *natural language* (augmented with
text-processing, database and NL understanding technology) for
expressing commonsense knowledge be studied.


> "Reasoning and problem-solving programs must eventually allow the
> full use of quantifiers and sets, and have strong enough control
> methods to use them without combinatorial explosion."

It would seem to me that one approach to this problem is the use of
heuristics, and a good way to learn to use heuristics well is to study
how the human brain does it.

Here follows a reference which you may now know and which will certainly
prove useful when studying the heuristic methods the human brain uses.

In 1946, the doctoral dissertation of the Dutch psychologist Adrian
D. de Groot was published.  The name of the dissertation is Het
Denken van den Schaker, The Thinking of a Chess Player.

In the 30's, de Groot was a relatively well-known chess master.
The material of the book was created by giving chess positions
to grandmasters, international masters, national masters,
first-class players and so forth for them to study.  Each player
told aloud how he made the decision about which move he thought was best.

Good players immediately start studying the right alternatives.
Weaker players usually calculate just as much, but they usually follow
the wrong ideas.

Later in his life, de Groot became the education manager of the
Philips Corporation and a professor of Psychology at Amsterdam
University.  His dissertation was translated into English in the
60's at the Stanford Institute as "Thought and Choice in Chess".


> "Whenever we write an axiom, a critic can say it is true only in a
> certain context.  With a little ingenuity, the critic can usually
> devise a more general context in which the precise form of the axiom
> does not hold.  Looking at human reasoning as reflected in language
> emphasizes this point."

I propose that the concept of a theory with a context be formalized.

A theory in logic has a set of true sentences (axioms) and a set of
inference rules which are used to derive theorems from axioms -
therefore, it can be described with a 2-tuple

        <axioms, inference_rules>.

A theory with a context would be a 3-tuple

        <axioms, inference_rules, context>

where "context" is a set of sentences.

Someone might create interesting theoretical philosophy or mathematical
logic research out of this.
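
One way to make the proposal concrete is a small sketch; the class name,
the string representation of sentences, the toy rule and the naive closure
loop below are my own assumptions, intended only to show that an empty
context recovers the plain <axioms, inference_rules> theory:

    from dataclasses import dataclass
    from typing import Callable, FrozenSet, Set, Tuple

    Sentence = str
    Rule = Callable[[Set[Sentence]], Set[Sentence]]   # derives new sentences

    @dataclass(frozen=True)
    class ContextTheory:
        axioms: FrozenSet[Sentence]
        inference_rules: Tuple[Rule, ...]
        context: FrozenSet[Sentence]   # extra sentences assumed true here

        def theorems(self, max_steps=100):
            # Close axioms plus context under the inference rules; with an
            # empty context this is just the ordinary <axioms, rules> theory.
            known = set(self.axioms) | set(self.context)
            for _ in range(max_steps):
                new = set()
                for rule in self.inference_rules:
                    new |= rule(known)
                if new <= known:
                    break
                known |= new
            return known

    def modus_ponens(known):
        # toy rule over string sentences of the form "A -> B"
        return {b for s in known if " -> " in s
                  for a, b in [s.split(" -> ", 1)] if a in known}

    t = ContextTheory(axioms=frozenset({"rain -> wet"}),
                      inference_rules=(modus_ponens,),
                      context=frozenset({"rain"}))
    assert "wet" in t.theorems()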


References:

/1/     Stuart Russell: The Compleat Guide to MRS, Stanford University

/2/     Antti Hautamaeki, a philosopher friend of mine, personal
        communication.

------------------------------

Date: 8 Jul 88 20:11:48 GMT
From: ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu  (Rick Wojcik)
Subject: Theoretical vs. Computational Linguistics (was Me and Karl
         Kluge...)

In article <1342@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>... but I don't know if a non-computational linguist
>working on semantics and pragmatics would call it advanced research work.

Implicit in this statement is the mistaken view that non-computational
linguists get to define 'advanced' research work.  Computational
linguists often are fully qualified theoretical linguists, not just
computer scientists with a few courses in linguistics.  But the concerns
of the computational linguist are not always compatible with those of 'pure'
theoretical linguists.  Since many linguistic theories do not attempt to
model the processes by which we produce and comprehend language (i.e. they
concern themselves primarily with the validation of grammatical form), they
fail to address issues that computational linguists are forced to ponder.
For example, non-computational linguists have largely ignored the questions
of how one disambiguates language or perceives meaning in ill-formed
phrases.  The question is not just how many possible meanings a form can
express, but how the listener arrives at the correct meaning in a given
context.  Given that theoretical linguists seldom have to demonstrate
concrete effects of their research, it is difficult to get them to focus
on these issues.  You should regard theoretical linguists as striving for
a partial theory of language, whereas computational linguists have to go
after the whole thing.  A major limitation for computational linguists is
that they must confine themselves to operations that they can get a
machine to perform.
--
Rick Wojcik   csnet:  rwojcik@boeing.com
              uucp:   uw-beaver!ssc-vax!bcsaic!rwojcik
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844

------------------------------

End of AIList Digest
********************

∂12-Jul-88  0630	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #6   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Jul 88  06:30:18 PDT
Date: Tue 12 Jul 1988 00:50-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #6
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 12 Jul 1988       Volume 8 : Issue 6

Today's Topics:

  Free Will

---------------------------------------------------------------------

Date: Wed, 22 Jun 88 18:37 MST
From: "James J. Lippard" <Lippard@BCO-MULTICS.ARPA>
Reply-to: Lippard@BCO-MULTICS.ARPA
Subject: Carlos Castaneda

In a couple of recent issues of AI-List (vol. 7 nos. 28 and 42), Andy
Ylikoski has recommended the works of Carlos Castaneda, stating that they
"approach the concept of will from Yaqui Indian knowledge point of view" and
that "The Yaqui have their own scientific tradition anthropologically studied
by Castaneda."
   I would like to advise caution in reading these works, and recommend a few
books which are highly skeptical of Castaneda.  These works present evidence
that Castaneda's "Don Juan" writings are neither autobiographical nor valid
ethnography.  E.N.  Anderson, then associate professor of anthropology at UCLA
(where Castaneda received his doctorate), wrote (in The Zetetic, Fall/Winter
1977, p. 122) that "de Mille exposed many inconsistencies that prove *either*
that Castaneda was a brilliant fraud *or* that he was an incredibly careless
and sloppy ethnographer in a disorganized department." (He believes the
latter.)

   de Mille, Richard.  _Castaneda's Journey: The Power and the Allegory_,
     Capra Press, 1976.
   ---, editor.  _The Don Juan Papers: Further Castaneda Controversies_,
     Ross Erikson, 1980.
   Noel, Daniel, editor.  _Seeing Castaneda: Reactions to the "Don Juan"
     Writings of Carlos Castaneda_, G.P. Putnam's Sons, 1976.

The Noel book contains some conjectures regarding Castaneda's works being
bogus, but the de Mille books give the hard evidence (e.g., internal
inconsistencies and contradictions, comparisons with other studies of Yaqui
culture, interviews with people familiar with the author and subject matter,
examination of Castaneda's background and influences, etc.)


  Jim Lippard
  Lippard at BCO-MULTICS.ARPA

------------------------------

Date: 3 Jul 88 04:38:11 GMT
From: mailrus!uflorida!novavax!proxftl!bill@ohio-state.arpa  (T.
      William Wells)
Subject: Re: Free Will & Self-Awareness

In article <2485@uvacs.CS.VIRGINIA.EDU>, Carl F. Huber writes:
) In article <306@proxftl.UUCP> T. William Wells writes:
) >Let's consider a relatively uncontroversial example.  Say I have
) >a hot stove and a pan over it.  At the entity level, the stove
) >heats the pan.  At the process level, the molecules in the stove
) >transfer energy to the molecules in the pan.
) > ...
) >Now, I can actually try to answer your question.  At the entity
) >level, the question "how do I cause it" does not really have an
) >answer; like the hot stove, it just does it.  However, at the
) >process level, one can look at the mechanisms of consciousness;
) >these constitute the answer to "how".
)
) I do not yet see your distinction in this example.
) What is the difference between saying the stove _heats_ or the
) molecules _transfer_energy_?  The distinction must be made in the
) way we describe what's happening.  In each case above, you seem to
) be giving the pan and the molecules volition.

[Minor nitpick: the pan and the molecules act, but volition and
action are not the same things. The discussion of the difference
belongs in a philosophy newsgroup, however.]

)                                                The stove does not
) heat the pan.  The stove is hot.  The pan later becomes hot.  Molecules do
) not transfer energy.  The molecules in the stove have energy s+e.  Then
) the molecules in the pan have energy p+e and the molecules in the
) stove have energy s.
)
) So it seems that both cases here are entity level, since the answer
) to "how do I cause it" is the same.  If I have totally missed the
) point, could you please try again?
)
) -carl

I think you missed the point.  Perhaps I can fill in some missing
information.  I think you got the idea that the process level
description could be made without reference to entities; this is
not the case.  The process level description MUST be made with reference to
entities; the main point is that these acting entities are not
the same as the entity involved in the entity level description.

Does that help? Also, could we move this discussion to another
newsgroup?

------------------------------

Date: 6 Jul 88 15:36:15 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: How to dispose of the free will issue (long)

In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>Whether or not we have free will, we should behave as if we do,
>because if we don't, it doesn't matter.

If that is true -- if it doesn't matter -- then we will do just as well
to behave as if we do not have free will.


------------------------------

Date: 6 Jul 88 17:04:13 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: Free Will-Randomness and Question-Structure

In article <304@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
] Actually, the point was just that: when I say that something is
] true in a mathematical sense, I mean just one thing: the thing
] follows from the chosen axioms;

"True" is not the same as "follows from the axioms".  See Godel et al.

------------------------------

Date: 8 Jul 88 16:18:37 GMT
From: cs.utexas.edu!sdcrdcf!markb@ohio-state.arpa  (Mark Biggar)
Subject: Re: How to dispose of the free will issue (long)

In article <488@aiva.ed.ac.uk> Jeff Dalton writes:
>In article <794@l.cc.purdue.edu> Herman Rubin writes:
>>Whether or not we have free will, we should behave as if we do,
>>because if we don't, it doesn't matter.
>If that is true -- if it doesn't matter -- then we will do just as well
>to behave as if we do not have free will.

Not so: believing in free will is a no-lose situation, while
believing that you don't have free will is a no-win situation.
In the first case either you're right or it doesn't matter; in the second
case either you're wrong or it doesn't matter.  Game theory (assuming
you put more value on being right than wrong; if it doesn't matter
there are no values anyway) says that believing and acting like you
have free will is the way that has the greatest expected return.
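
Spelled out as a toy decision matrix (the payoff numbers below are
arbitrary assumptions of mine, chosen only so that being right outranks
being wrong and nothing has value if there is no free will):

    # (what you believe, what is actually the case) -> value to you
    payoff = {
        ("free will", "free will"): 1,      # you're right
        ("free will", "no free will"): 0,   # doesn't matter
        ("no free will", "free will"): -1,  # you're wrong
        ("no free will", "no free will"): 0,
    }

    def expected_return(belief, p_free_will=0.5):
        return (p_free_will * payoff[(belief, "free will")]
                + (1 - p_free_will) * payoff[(belief, "no free will")])

    for belief in ("free will", "no free will"):
        print(belief, expected_return(belief))
    # For any nonzero chance of free will, believing in it has the
    # higher expected return under these assumed payoffs.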

Mark Biggar
{allegra,burdvax,cbosgd,hplabs,ihnp4,akgua,sdcsvax}!sdcrdcf!markb
markb@rdcf.sm.unisys.com

------------------------------

Date: 8 Jul 88 19:53:21 GMT
From: bc@media-lab.media.mit.edu  (bill coderre)
Subject: Re: How to dispose of the free will issue (long)

In article <5384@sdcrdcf.UUCP> markb@sdcrdcf.UUCP (Mark Biggar) writes:
>In article <488@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva Jeff Dalton writes:
>>In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>>Whether or not we have free will, we should behave as if we do,
>>>because if we don't, it doesn't matter.
>>If that is true -- if it doesn't matter -- then we will do just as well
>>to behave as if we do not have free will.
>Not so: believing in free will is a no-lose situation, while
>believing that you don't have free will is a no-win situation.




Whereas arguing about free will is a no-win situation.

Arguing about free will is also certainly not AI.

Thank you for your consideration.



mr bc

------------------------------

Date: 10 Jul 88 22:04:43 GMT
From: ukma!uflorida!novavax!proxftl!bill@husc6.harvard.edu  (T.
      William Wells)
Subject: Re: How to dispose of the free will issue (long)

In article <5384@sdcrdcf.UUCP>, markb@sdcrdcf.UUCP (Mark Biggar) writes:
> In article <488@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva Jeff Dalton writes:
> >In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
> >>Whether or not we have free will, we should behave as if we do,
> >>because if we don't, it doesn't matter.
> >If that is true -- if it doesn't matter -- then we will do just as well
> >to behave as if we do not have free will.
>
> Not so: believing in free will is a no-lose situation, while
> believing that you don't have free will is a no-win situation.
> In the first case either you're right or it doesn't matter; in the second
> case either you're wrong or it doesn't matter.  Game theory (assuming
> you put more value on being right than wrong; if it doesn't matter
> there are no values anyway) says that believing and acting like you
> have free will is the way that has the greatest expected return.

Pascal, I think it was, advanced essentially the same argument in
order to defend the proposition that one should believe in god.

However, both sides of the argument agree that the issue at hand
has no satisfactory resolution, and thus we are free to be
religious about it; both are also forgetting that the answer to
this question has practical consequences.

Pick your favorite definition of free will. Unless it is one
where the "free will" has no causal relationship with the rest
of the world (but then why does it matter?), the existence or
lack of existence of free will will have measurable consequences.

For example, my own definition of free will has consequences
that, among many other things, includes the proposition that,
under normal circumstances, an initiation of physical force is
harmful both to the agent and the patient. (Do not argue this
proposition in this newsgroup, PLEASE.) It also entails a
definition of the debatable terms like `normal' and `harm' by
means of which this statement can be interpreted. This means
that I can test the validity of my definition of free will by
normal scientific means and thus takes the problem of free will
out of the religious and into the practical.

------------------------------

Date: 11 Jul 88 01:47:57 GMT
From: pasteur!agate!gsmith%garnet.berkeley.edu@ames.arpa  (Gene W.
      Smith)
Subject: Re: How to dispose of the free will issue (long)

In article <445@proxftl.UUCP>, bill@proxftl (T. William Wells) writes:

>Pick your favorite definition of free will. Unless it is one
>where the "free will" has no causal relationship with the rest
>of the world (but then why does it matter?), the existence or
>lack of existence of free will will have measurable consequences.

  Having a causal connection to the rest of the world is not the
same as having measurable consequences, so this argument won't
work. One possible definition of free will (with problems, but
don't let that worry us) is that there is no function (from
possible internal+external states to behavior, say) which
determines what the free will agent will do. To to test this is
to test a negative statement about the lack of a function, which
seems hard to do, to say the least.

>For example, my own definition of free will has consequences
>that, among many other things, includes the proposition that,
>under normal circumstances, an initiation of physical force is
>harmful both to the agent and the patient. (Do not argue this
>proposition in this newsgroup, PLEASE.) It also entails a
>definition of the debatable terms like `normal' and `harm' by
>means of which this statement can be interpreted. This means
>that I can test the validity of my definition of free will by
>normal scientific means and thus takes the problem of free will
>out of the religious and into the practical.

  This is such a weak verification of your free will hypothesis
as to be nearly useless, even if I accept that you are able to
make the deduction you claim. Freud claimed that psychoanalysis
was a science, deducing all kinds of things from his egos and his
ids. But he failed to show his explanations were to be preferred
to the possible alternatives; in other words, to show his ideas
had any real explanatory power. You would need to show your
ideas, whatever they are, had genuine explanatory power to claim
you had a worthwhile scientific theory.
--
ucbvax!garnet!gsmith    Gene Ward Smith/Garnet Gang/Berkeley CA 94720
"Some people, like Chuq and Matt Wiener, naturally arouse suspicion by
behaving in an obnoxious fashion." -- Timothy Maroney, aka Mr. Mellow

------------------------------

Date: 11 Jul 88 19:08:42 GMT
From: ns!ddb@umn-cs.arpa  (David Dyer-Bennet)
Subject: Re: How to dispose of the free will issue (long)

In article <488@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
> >Whether or not we have free will, we should behave as if we do,
> >because if we don't, it doesn't matter.
> If that is true -- if it doesn't matter -- then we will do just as well
> to behave as if we do not have free will.
  While I would prefer to avoid *ALL* errors, I'll settle for avoiding
all *AVOIDABLE* errors.  If I do not have free will, none of my errors
are avoidable (I had no choice, right?); so I may as well remove the entire
no-free-will arena from my realm of consideration.
  The whole concept of "choosing to believe we have no free will" is
obviously bogus -- if we're choosing, then by definition we DO have free will.
  I understand, of course, that you all may be predestined not to comprehend
my arguments :-)
--
        -- David Dyer-Bennet
        ...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
        ddb@viper.Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
        Fidonet 1:282/341.0, (612) 721-8967 hst/2400/1200/300

------------------------------

Date: 11 Jul 88 19:16:19 GMT
From: ns!ddb@umn-cs.arpa  (David Dyer-Bennet)
Subject: Re: How to dispose of the free will issue (long)

In article <445@proxftl.UUCP>, bill@proxftl.UUCP (T. William Wells) writes:
> For example, my own definition of free will has consequences
> that,.... This means
> that I can test the validity of my definition of free will by
> normal scientific means and thus takes the problem of free will
> out of the religious and into the practical.
  Yep, that's what you'd need to have to take the debate out of the
religious and into the practical.  Not meaning to sound sarcastic, but
this is a monumental philosophical breakthrough.  But could you exhibit
some of the difficult pieces of this theory; in particular, what is
the measurable difference between an action taken freely, and one that
was pre-determined by other forces?
--
        -- David Dyer-Bennet
        ...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
        ddb@viper.Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
        Fidonet 1:282/341.0, (612) 721-8967 hst/2400/1200/300

------------------------------

Date: 11 Jul 88 20:13:04 GMT
From: ns!logajan@umn-cs.arpa  (John Logajan x3118)
Subject: Re: How to dispose of the free will issue (long)


The no-free-will theory is untestable.
The free-will theory is likewise untestable.
When the no-free-will theorists are not thinking about their lack of free will,
they invariably adopt free-will outlooks.
So go with the flow; why fight your natural instincts to believe in that which
is unprovable?  If you must choose between unprovable beliefs, take the one
that requires the least effort.

- John M. Logajan @ Network Systems; 7600 Boone Ave; Brooklyn Park, MN 55428 -
- ...amdahl!bungia!ns!logajan, {...uunet, ...rutgers} !umn-cs!ns!logajan     -

------------------------------

Date: 11 Jul 88 22:27:53 GMT
From: uhccux!lee@humu.nosc.mil  (Greg Lee)
Subject: Re: How to dispose of the free will issue (long)

From article <11906@agate.BERKELEY.EDU>, by Gene W. Smith:
" ids. But he failed to show his explanations were to be preferred
" to the possible alternatives; in other words, to show his ideas
" had any real explanatory power. You would need to show your
" ideas, whatever they are, had genuine explanatory power to claim
" you had a worthwhile scientific theory.

No one ever knows the possible alternatives, therefore no scientific
theory is worthwhile.
        Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

End of AIList Digest
********************

∂12-Jul-88  1026	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #7   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Jul 88  10:26:11 PDT
Date: Tue 12 Jul 1988 01:02-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #7
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 12 Jul 1988       Volume 8 : Issue 7

Today's Topics:

 Announcements:

  Annual Conceptual Graph Workshop '88
  SPEECH SCIENCE & TECH. CONFERENCE '88
  Fifth Israeli Symposium on Artificial Intelligence
  Directions and Implications of Advanced Computing (DIAC-88)
  SGAICO conference announcement
  OBJECT ORIENTED DATABASE WORKSHOP
  ECAI88 - Program

----------------------------------------------------------------------

Date: Wed, 06 Jul 88 10:08:10 +1000
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Annual Conceptual Graph Workshop '88

Dear Colleague,

Last year, the second Annual Conference on conceptual graphs was organised by
Jean Fargues at the IBM Paris Scientific Center.

In 1989, I shall organise the Annual Conference on conceptual graphs at
Deakin University on the 10th and 11th of March.

I wish to invite you to attend this workshop, and I am looking forward to a
possible contribution you could propose, such as a 30-minute presentation
with some handouts or an article. If you are interested in attending,
please notify

        Professor Brian J. Garner
        Division of Computing and Mathematics
        Deakin University
        Geelong, Victoria 3217
        Australia

        Phone: 61 52 471 383
        Telex: DUNIV AA35625
        FAX: 61 52 442 777
        Email: brian@aragorn.oz (CSNET)

Expenses will be the responsibility of the participants but there is no special
fee for attending the workshop. I am looking forward to your participation and
possible contribution.


                                        Brian J. Garner
                                        Professor of Computing
                                        Deakin University
                                        Geelong, Victoria 3217
                                        AUSTRALIA

Eric Tsui                               eric@aragorn.oz

------------------------------

Date: 6 Jul 88 05:20:06 GMT
From: munnari!csadfa.oz.au!miw@uunet.UU.NET (Michael Wagner)
Subject: SPEECH SCIENCE & TECH. CONFERENCE '88


The 2nd Australian International Conference on Speech Science and Technology,
SST-88, will be held at Macquarie University in Sydney from 29 November to
1 December 1988. The Conference will address all areas related to Speech
Science and Technology, specifically

Speech synthesis & voice response systems
Automatic speech recognition & understanding
Speaker verification & identification
Speech analysis & reconstruction
Speech coding, compression & encryption
Acoustic phonetics & speech production
Speech disorders & speech aids for the disabled
Speech technology applications

The 2 keynote speakers are:
Dr James Flanagan, Director, Information Principles Research Laboratory, AT&T
Dr Anthony Bladon, Director, Phonetics Laboratory, Oxford University

The Conference will be preceded by a 1-day speech science and technology
tutorial on 28 November.

Submission of papers: Prospective authors are invited to submit a 400-word
summary to Dr M. Wagner, SST-88 Program Coordinator, Dept of Computer Science,
University College/ADFA, Canberra ACT 2601, Australia, Tel. (062)688955,
Fax (062)688581, Telex AA62030adfadm, ACSnet: miw@csadfa.oz,
UUCP: ...!uunet!munnari!csadfa.oz!miw, ARPA: miw%csadfa.oz@uunet.uu.net,
JANET:  csadfa.oz!miw@ukc
to be RECEIVED by 8 August 1988. Authors will be notified of the acceptance of
their papers by 22 August and photo-ready papers are due by 17 October 1988.

Conference registration A$195 (A$230 after 17 October), Full-time students
A$65, Tutorial registration A$180 (A$215 after 17 October), Full-time students
A$10.

Further information from Prof. J.E. Clark, SST-88 Secretary, Speech Hearing &
Language Research Centre, Macquarie University, Sydney NSW 2109, Australia,
Tel. (02)8058784 or 8058782, Fax (02)8874752, Telex AA122377macuni,
ACSnet sr_mail@mqccvaxa.mq.oz

------------------------------

Date: 6 Jul 88 17:13:47 GMT
From: WISDOM.BITNET!udi@ucbvax.berkeley.edu
Subject: Fifth Israeli Symposium on Artificial Intelligence


                Call For Papers


Fifth Israeli Symposium on Artificial Intelligence
Tel-Aviv, Ganei-Hata`arucha,
December 27-28, 1988


The Israeli Symposium on Artificial Intelligence is the annual meeting
of the Israeli Association for Artificial Intelligence, which is a SIG
of the Israeli Information Processing Association.  Papers addressing
all aspects of AI, including, but not limited to, the following
topics, are solicited:

        - AI and education
        - AI languages, logic programming
        - Automated reasoning
        - Cognitive modeling
        - Expert systems
        - Image understanding, pattern recognition and analysis
        - Inductive inference, learning and knowledge acquisition
        - Knowledge theory, logics of knowledge
        - Natural language processing
        - Perception, machine vision
        - Planning and search
        - Robotics

This year, the conference is held in cooperation with the SIG on
Vision, Image Processing and Pattern Recognition, and in
conjunction with the Tenth Israeli Conference on CAD and Robotics.
There will be a special track devoted to Vision, Image Processing
and Pattern Recognition. Joint activities with the Conference on CAD
and Robotics include the opening session, a session on Robotics
and AI, and the exhibition.

Submitted papers will be refereed by the program committee, listed
below.  Authors should submit 4 _camera-ready_ copies of a full paper or
an extended abstract of at most 15 A4 pages.  Accepted papers will
appear without revision in the proceedings.  Submissions prepared on a
laser printer are preferred.  The first page should contain the title,
the author(s), affiliation, postal address, e-mail address, and
abstract, followed immediately by the body of the paper.  Page numbers
should appear in the bottom center of each page.  Use 1 inch margin
and single column format.

Submitted papers should be received at the following address by
October 1st, 1988:

        Ehud Shapiro
        5th ISAI
        The Weizmann Institute of Science
        Rehovot 76100, Israel

The conference program will be advertized at the end of October.  It
is expected that 30 minutes will be allocated for the presentation of
each paper, including question time.


Program Committee

Moshe Ben-Bassat, Tel-Aviv University
Martin Golumbic, IBM Haifa Scientific Center
Ehud Gudes, Ben-Gurion University
Tamar Flash, Weizmann Institute of Science
Yoram Moses, Weizmann Institute of Science
Uzzi Ornan, Technion
Gerry Sapir, ITIM
Ehud Shapiro (chair), Weizmann Institute of Science
Jeff Rosenschein, Hebrew University
Shimon Ullman, Weizmann Institute of Science
Hezy Yeshurun, Tel-Aviv University

Secretariat

Israeli Association for Information Processing
Kfar Hamacabia
Ramat-Gan 52109, Israel

------------------------------

Date: 8 Jul 88 20:51:17 GMT
From: ssc-vax!bcsaic!douglas@beaver.cs.washington.edu  (Douglas
      Schuler)
Subject: Directions and Implications of Advanced Computing (DIAC-88)


              DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING

             DIAC-88   Twin Cities, Minnesota   August 21, 1988

      Earle Browne Continuing Education Center, University of Minnesota

Computing technology in public and  private  institutions  poses  challenging
technical,  political,  and social dilemmas. Programmers, analysts, students,
and professors will face these dilemmas, either actively or unwittingly. Both
within  the  computing  profession  and  in the relation of our profession to
other institutions, we have much to consider.

The second annual  symposium  on  Directions  and  Implications  of  Advanced
Computing will be held at the University of Minnesota campus on Sunday August
21, 1988, the day before the American Association for Artificial Intelligence
(AAAI) conference.

Douglas Engelbart, the DIAC-88 plenary speaker, will share his perspective on
using  the  computer  to  address  global  problems.   Since the late 1950's,
Engelbart has worked with systems that augment the human intellect  including
his  NLS/Augment  system,  a  hypertext system that pioneered "windows" and a
"mouse."  The driving force behind Engelbart's professional career  has  been
his  recognition  of  social  impacts  of  computing technology.  The plenary
session  will  be followed by presentations of research papers  and  a  panel
discussion.  The panel, John Ladd (Brown University), Deborah Johnson
(Rensselaer Polytechnic), Claire McInerney (College of St. Catherine) and
Glenda Eoyang (Excel Instruction) will address the question, "How Should
Ethical Values be Imparted and Sustained in the Computing Community?"

                         Presented Papers

  Computer Literacy: A Study of Primary and Secondary Schools, Ronni
    Rosenberg

  Dependence Upon  Expert  Systems:   The  Dangers  of  the  Computer  as
    an Intellectual Crutch, Jo Ann Oravec

  Computerized Voting, Eric Nilsson

  Computerization and Women's Knowledge, Lucy Suchman and Brigitte Jordan

  Some Prospects for Computer Aided Negotiation, Douglas Schuler

  Computer Accessibility for Disabled Workers: It's the Law (invited paper)
    Richard E. Ladner

Send symposium registration to: DIAC-88, CPSR/Los Angeles,  P.O.   Box  66038
Los  Angeles,  CA   90066-0038.   Enclose  check payable to CPSR/DIAC-88 with
registration.  For additional information, call David Pogoff, 612-933-6431.

  NAME ___________________________________________________
  ADDRESS _________________________________________________
  ________________________________________________________
  ________________________________________________________
  Phone (home) _____________________ (work) ______________________

  Please check one:
  Symposium Registration           Regular             O $50
  (Includes Proceedings and Lunch) CPSR Member         O $35
                                   Student/Low Income  O $25

  I cannot attend, but want the symposium proceedings  O $15

There will be a reception following the symposium.  Proceedings will be
distributed  to  registrants  at  the  symposium.  Non-attendees will receive
proceedings by October 15, 1988.

------------------------------

Date: 11 Jul 88 11:53 +0100
From: Jiri Dvorak <dvorak%iam.unibe.ch@RELAY.CS.NET>
Subject: SGAICO conference announcement


  SWISS GROUP OF ARTIFICIAL INTELLIGENCE AND COGNITIVE SCIENCE (SGAICO)
  ---------------------------------------------------------------------

                      1988 ANNUAL CONFERENCE ON
                      -------------------------

   ARTIFICIAL INTELLIGENCE IN MANUFACTURING, ASSEMBLY, AND ROBOTICS
   ----------------------------------------------------------------

The conference will be held at the University of Berne on October 5, 1988.
Besides invited overview lectures presenting the state of the art and
future trends there will be a poster session and reports on current work
in Switzerland (and perhaps other European countries).


A                          ONE DAY TUTORIAL
                           ----------------

will precede the conference on October 4, 1988.


CONFERENCE PROGRAM:

H. Bunke, University of Berne: Opening
K. Kempf, Intel Corp., Santa Clara, CA: Practical Applications of Artificial
   Intelligence in Manufacturing
B. Neumann, University of Hamburg: Planning and Configuration Based on
   Knowledge Hierarchies
K. Konolige, SRI International, Menlo Park, CA: Integrating Perception,
   Action, and Intention
J. Troccaz, University of Grenoble: Automatic Robot Programming: On-going
   Research and Future Trends
R. Bless, M. Mueller, ETH Zuerich: ARM: Automated Robust Assembly
C.V. Rusca, EPF Lausanne: The IRMA Project: Towards a Robust Parallel Logic
   Programming Environment in Robotics
E. Gmuer, H. Bunke, University of Berne: PHI-1: A Robot Vision System Based
   on CAD Models
J.P. Mueller, University of Neuchatel: MARS: Mobile Autonomous Robot System
P. Rixhon, P. Rixhon AG, Basel: Integrated Configuration Systems - the
   "Mehrstrom" Experience
Panel discussion on Artificial Intelligence and Robotics in Switzerland.

TUTORIAL PROGRAM

A. Kak, Purdue University, West Lafayette, IN: Scene understanding with
   reflectance and range images
R. Dillmann, University of Karlsruhe: CAD-Oriented Programming of Robot
   Applications
Prof. J.-P. Mueller, University of Neuchatel: Planning for Artificial
   Intelligence Based Robotics
B. Neumann, University of Hamburg: Introduction to Configuration Expert
   Systems


For more information contact

Prof. H. Bunke
Program chairman SGAICO '88
Institut fuer Informatik und angew. Mathematik
Universitaet Bern
Laenggassstrasse 51
CH-3012 Bern / Switzerland
Tel. (+41 31) 65 44 51 or 65 86 81

or

J. Dvorak   (local organization)
at the same address
Tel. (+41 31) 65 49 02
Email: dvorak@iam.unibe.ch
    or dvorak%iam.unibe.ch@relay.cs.net
 UUCP: ..!uunet!mcvax!iam.unibe.ch!dvorak


Fees:
                          conference:   tutorial:
   -----------------------------------------------
   SI or SVI/FSI member    130.- Sfr    120.- Sfr
   non members             200.- Sfr    250.- Sfr
   registered students      50.- Sfr     40.- Sfr

   These fees include one copy of the conference proceedings and
   tutorial material.


For registration please contact the SI secretariat (Ms. A.-M. Nicolet),
Schwandenholzstrasse 286, 8052 Zuerich, Switzerland. Tel: (+41 1) 371 73 42.

------------------------------

Date: 11 Jul 88 14:34:46 GMT
From: fordyce@home.ti.com (David Fordyce)
Subject: OBJECT ORIENTED DATABASE WORKSHOP
Article-I.D.: ti-csl.53603


                  OBJECT-ORIENTED DATABASE WORKSHOP

                 To be held in conjunction with the

                             OOPSLA '88

              Conference on Object-Oriented Programming:
                 Systems, Languages, and Applications

                          26 September 1988

                    San Diego, California, U.S.A.


Object-oriented database systems combine the strengths of
object-oriented programming languages and data models, and database
systems.  This one-day workshop will expand on the theme and scope of a
similar OODB workshop held at OOPSLA '87.  The 1988 Workshop will
consist of the following four panels:

  Architectural issues: 8:30 AM - 10:00 AM

    Therice Anota (Graphael), Gordon Landis (Ontologic),
    Dan Fishman (HP), Patrick O'Brien (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Transaction management for cooperative work: 10:30 AM - 12:00 noon

    Bob Handsaker (Ontologic), Eliot Moss (Univ. of Massachusetts),
    Tore Risch (HP), Craig Schaffert (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Schema evolution and version management:  1:30 PM - 3:00 PM

    Gordon Landis (Ontologic), Mike Killian (DEC),
    Brom Mehbod (HP), Jacob Stein (Servio Logic),
    Craig Thompson (TI), Stan Zdonik (Brown University)

  Query processing: 3:30 PM - 5:00 PM

    David Beech (HP), Paul Gloess (Graphael),
    Bob Strong (Ontologic), Jacob Stein (Servio Logic),
    Craig Thompson (TI)


Each panel member will present his position on the panel topic in 10
minutes.  This will be followed by questions from the workshop
participants and discussions.  To encourage vigorous interactions and
exchange of ideas between the participants, the workshop will be limited
to 60 qualified participants.  If you are interested in attending the
workshop, please submit three copies of a single page abstract to the
workshop chairman describing your work related to object-oriented
database systems.  The workshop participants will be selected based on
the relevance and significance of their work described in the abstract.

Abstracts should be submitted to the workshop chairman by 15 August 1988.
Participants selected will be notified by 5 September 1988.

                        Workshop Chairman:

                       Dr. Satish M. Thatte
           Director, Information Technologies Laboratory
                Texas Instruments Incorporated
                   P.O. Box 655474, M/S 238
                        Dallas, TX 75265

                      Phone: (214)-995-0340
  Arpanet: Thatte@csc.ti.com   CSNet: Thatte%ti-csl@relay.cs.net

Regards, David

------------------------------

Date: Tue, 12 Jul 88 10:08:10
From: ecai88
      <ecai88%infovax.informatik.tu-muenchen.dbp.de@RELAY.CS.NET>
Subject: ECAI88 - Program


The 8th ECAI 1988 is sponsored by the Gesellschaft fuer Informatik e.V.
(GI), organized and hosted by the Institut fuer Informatik der
Technischen Universitaet Muenchen, under the auspices of the European
Coordinating Committee for Artificial Intelligence (ECCAI).

Industrial  Exhibition

The 8th ECAI will present the latest advances in the technology and
applications of AI.  The industrial exhibition takes place at the MENSA
during the conference, from Tuesday, August 2 to Friday, August 5,
1988.

Opening:
Tuesday  - Thursday     9:00 - 19:00
Friday                  9:00 - 15:00
_____________________________________
Invited Talks

Invited Talk I:
Deutsches Museum, Tuesday, August 2, 15:30 - 16:30
Jerry DeJong (Urbana-Champaign):
Some Thoughts on the Present and the Future of Explanation-Based Learning

Room A, B, Wednesday, August 3, 17:45 - 18:45
Tom Mitchell (Carnegie Mellon):
Commentator on DeJong's talk
Chairperson: Y. Kodratoff

Invited Talk II:
Room A, B, Thursday, August 4, 9:00 - 10:00
Chris Hogger (Imperial College):
PROLOG Programming Environments

Room A,B, Thursday, August 4, 17:45 - 18:45
Jean Rohmer (BULL Company):
Commentator on Hogger's talk
Chairperson: H. Gallaire

Invited Talk III:
Room A, B, Friday, August 5,  9:00 - 10:00
Erik Sandewall (Linkoping):
Future Developments in Artificial Intelligence - A Personal View
Chairperson: M.-J. Schachter-Radig
_____________________________________
Panel Sessions

Panel I:
Deutsches Museum, Tuesday, August 2, 16:30 - 18:00
Moderator: H. Coelho
Title: Interactions among Intelligent Agents
Panel members: J.G. Ganascia, G. Guida, G. Kiss, E. Werner, Y. Wilks

Panel II:
Room P, Thursday, August 4, 16:00 - 17:45
Moderator: M. Boden
Title:  What Is Computation?
Panel members: A. Clark, A. Sloman, J. Siekmann

Panel III:
Room P, Friday, August 5, 10:30 -12:30
Moderators: P. Smets and J. Campbell
Title: Applicability of Non-classical Logical Methods to Artificial
       Intelligence Problems.
Panel members: not known at editorial deadline.
_____________________________________
ESPRIT- Session

Room P, Wednesday, August 3, 14:00 - 16:30
Chairperson: B. Lepape

Subjects:
1. Introduction
2. Presentation of Projects:
   - Project ESB (P 96)
   - Project ESTEAM (P 316)
   - Project LOKI (P 107)
3. Research and ESPRIT
_____________________________________
Workshop "AI in Medicine"
Thursday, August 4, 14:00 - 18:00, Room E.

Dr. Rolf Engelbrecht
GSF-MEDIS  Muenchen
Ingolstaedter Landstrasse 1
D-8042 Neuherberg, FRG
Phone: ++49-89/ 31 87-53 30
EARN: ENGEL at DM0GSF11

Objective

The goal of this workshop is to promote intensive interaction between
the researchers in the field of medical applications of AI as well as
those working on central issues within AI.  Suitable topics include, but
are not limited to:

-       Representation of medical knowledge
-       Knowledge acquisition
-       Integration of AI and standard software
        (such as for statistical analysis, data base
        management systems)
-       Knowledge based systems and hospital
        information systems
-       Applications of AI in medicine
-       Evaluation of medical expert systems

Potential participants are invited to submit an abstract on issues
relating to the workshop's topics or describing their own work related
to AI in medicine.  Selected abstracts will be presented. Attendance is
limited to 70 participants and requires registration for ECAI.

The workshop is sponsored by
-       AIME European Society for Artificial
        Intelligence in Medicine
-       GMDS Gesellschaft fuer Medizinische
        Dokumentation, Informatik und Statistik

Organizing Committee

R. Engelbrecht, Muenchen (Chairman),
M. Fieschi, Marseille, J. Fox, London,
M. Stefanelli, Pavia, Th. Wetter, Heidelberg
_____________________________________

Sessions

Architectures and Languages 1
Chairperson: F. McCabe
Room A, Thursday, August 4, 16:00-17:45

C. Martin, K. Waldhoer
16:00   BASAR : A Blackboard Based Software Architecture

H. Laasri, R. Maitre, T. Mondot, F. Charpiller, J.P. Haton
16:15   ATOME:  A Blackboard Architecture with Temporal and Hypothetical
        Reasoning

R. Krickhahn, R. Nobis, A. Muehlmann, M.-J. Schachter-Radig
16:45   Applying the KADS Methodology to Develop a Knowledge Based System
        Nethandler

K. Masuda, H. Ishizuka, H. Ivayama, K. Taki, L. Sugino
17:15   Preliminary Evaluation of the Connection Network for the Multi-PSI
        System

Architectures and Languages 2
Chairperson: J.P. Sansonnet
Room B, Friday, August 5, 10:30-12:30

P. Dixneuf, A. Meiler, M. Pocheron
10:30   ELOISE's Heart: An Efficient Frame for Production System Execution

H. Boley
10:45   Iconic-Declarative Programming and Adaption Rules

I. I. Dimitrov
11:00   INEX:  Flexible and Efficient Objects

T. Wilmes
11:30   A Typed Unification of Functional and Logic Programming
        - Based on Many-Valued Functions

B. Barachini, N. Theuretzbacher
12:00   PAMELA:  An Expert System Technology for Real-time Control Applications
Cognition 1
Chairperson: B. Wielinga
Room A, Wednesday, August 3, 14:00-15:45

L. Steels
14:00   Steps towards Common Sense

D. Partridge, J. McDonald, V. Johnston, K. Paap
14:30   AI  Programs and  Cognitive Models: Models of Perceptual Processes

E. Plaza, R. Lopez de Mantaras
15:00   Model-based Knowledge Acquisition for Heuristic Classification Systems

A. M. Burton, N.R. Shadbolt, G. Rugg, A.P. Hedgerock
15:15   Knowledge Elicitation Techniques in Classification Domains

Cognition 2
Chairperson: D. Partridge
Room E, Thursday, August 4, 10:30-12:30

H. Lambert, L. Eshelman, Y. Iwasaki
10:30   Acquiring and Complementing the Model for Diagnostic Tasks

M.-C. Rousset
11:00   On the Consistency of Knowledge Bases: The COVADIS System

B.S. Doherty, J.J. Stuart
11:30   Induction and Dialogue in Specification Formalisation:
        an Object-based Approach

J. M. Slack
12:00   Linguistic Constraints and Memory Management

D. Pospelov
12:15   Modelling of Deeds and Normative Behaviour in Intelligent Systems

Cognition 3
Chairperson: D. Pospelov
Room A, Friday, August 5, 10:30-12:00

K. Tanaka, K. Kubota
10:30   Memory-based Learner Model and its Application to a Game Coach

D. Fum, P. Giangrandi, C. Tasso
10:45   Student Modelling Techniques in Foreign Language Teaching

J.H. Sumiga, B. Khazaei, J.J.A. Siddiqi
11:00   A Cognitive Model of Program Designer Behaviour

P. de Greef, J. Breuker, G. Schreiber, J. Wielemaker
11:15   StatCons - Knowledge Acquisition in a Complex Domain

J. Sandberg, J. Breuker, R. Winkels
11:45   Research On HELP-Systems: Empirical Study and Model Construction

Demos Of Academic AI Software
Demos are given on Wednesday afternoon by the authors, and by appointment
during the conference, in Rooms S1, S2.

X. Tong, Z. He, R. Yu
A Survey of the Expert System Tool ZDEST-2

J. E. Larsson, P. Persson
An Intelligent Help System for Idpac

J. Maree
ENIARC:  An Intelligent Explicative Expert System for Rhythm Analysis in
         Electro-cardiograms

N. Guarino
DRL: Terminologic and Relational Knowledge in Prolog

M. Franova
Fundamentals for a New Methodology for Inductive Theorem Proving:
CM-Construction of Atomic Formulae

E. Andre
On the Simultaneous Interpretation of Real World Image Sequences and their
Natural Language Description: The System SOCCER

Epistemology
Chairperson: M. Boden
Room A, Wednesday, August 3, 16:00-17:15

C. Thornton
16:00   Links Between Content and Information-Content

L.E. Janlert
16:30   Pictorial Knowledge Representation

S. Hagglund, I. Rankin
16:45   Investigating the Usability of Expert Critiquing in Knowledge-based
                Consultation Systems

A. Clark
17:00   Two Kinds of Cognitive Sciences ?

Industrial Applications 1
Chairperson: G. Guiho
Room B, Wednesday, August 3, 11:00-12:30

P. Prosser
11:00   A Hybrid Genetic Algorithm for Pallet Loading

T.J. Grant
11:30   An Algorithm for Obtaining Action Sequences from a Procedures Knowledge
        Base

E. Tulp
12:00   TRAINS, An Active Time-table Searcher

Industrial Applications 2
Chairperson: B. Lepape
Room D, Wednesday, August 3, 16:30-17:30

A. Carpentier, B. Solet, M.P. Branca, P.G. Kubansky
16:30   Escut:  An Expert System for Configuring Digital Telephone
        Switching  Equipments

M.-J. Schachter-Radig, D. Wermser
17:00   A Sales Assistant for Chemical Measurement Equipment - SEARCHEM

A. Huber, S. Becker
17:15   Production Planning using a Temporal Planning Component

Knowledge Representation 1
Chairperson: R. Lopez de Mantaras
Room D, Wednesday, August 3, 9:00-10:30

B. Bredeweg, B.J. Wielinga
9:00    Integrating Qualitative Reasoning Approaches

W. Van de Velde
9:30    Inference Structure as a Basis for Problem Solving

M. Frixione, S. Gaglio, G. Spinelli
10:00   Proper Names and Individual Concepts in SI-Nets

Knowledge Representation 2
Chairperson: E. Tyugu
Room A, Wednesday, August 3, 11:00-12:30

W. Wobcke
11:00   A Global Theory of Inheritance

M. Ayel
11:30   Protocols for Consistency Checking in Expert System Knowledge Bases

E. Chouraqui, P. Dugerdil
12:00   Conflicts Solving in a Frame-like Multiple Inheritance System

Knowledge Representation 3
Chairperson: J.P. Laurent
Room A, Thursday, August 4, 10:30-12:15

B. Elfrink, H. Reichgelt
10:30   The Use of Assertion-time Inference in Logic-based Knowledge Bases

J. Ferber, P. Volle
11:00   Using Coreference in Object-oriented Representations

K. Eberle
11:15   Extensions of Event-Structures

M. Poesio
11:45   Toward a Hybrid Representation of Time

Knowledge Representation 4
Chairperson: L. Steels
Room C, Friday, August 5, 10:30-12:00

A. Farquhar
10:30   A Qualitative Reasoning Approach to Fault Avoidance

J. Cuena
10:45   The Qualitative Modelling of Axis-based Flow Systems:
        Methodology and Examples

C. Popp
11:00   Answering WHY? HOW? And WHY-NOT? Questions in a Blackboard System

M. Porcheron
11:15   MILORE: a Meta-Level Knowledge Based Architecture for Production System
        Execution

M. Sharples, B. Du Boulay
11:45   Knowledge Representation for a Concept Tutoring System

Logic Programming 1
Chairperson: A. Martelli
Room B, Wednesday, August 3, 9:00-10:30

M. Ducasse
9:00    Opium: a Meta-debugger for Prolog

J. Chassin, J.-C. Syre, H. Westphal
9:30    Implementation of a Parallel Prolog System on a Commercial
        Multiprocessor

M. Cavalieri, F. Lamma, P. Mello
10:00   An Extended Prolog Machine for Dynamic Context Handling

Logic Programming 2
Chairperson: J. Rohmer
Room B, Wednesday, August 3, 16:00-17:30

M. Dincbas, H. Simonis, P. Van Hentenryck
16:00   Solving the Car-Sequencing Problem in Constraint Logic Programming

T. Hrycej
16:30   Temporal Prolog

P. Saint-Dizier
17:00   Foundations of DISLOG, Programming in Logic with Discontinuities

T. Conrad
17:15   A Many Sorted PROLOG based on Equational Unification

Logic Programming 3
Chairperson: M. Dincbas
Room D, Thursday, August 4, 14:00-15:30

S. Owen, R. Hull
14:00   The Use of Explicit Interpretation to Control Reasoning about Protein
        Topology

C.-K. Looi
14:30   Analysing Novices' Programs in a Prolog Intelligent Teaching System

J. Zhang, P.W. Grant
15:00   An Automatic Difference-List Transformation Algorithm for Prolog

Machine Learning 1
Chairperson: D. Sleeman
Room C, Wednesday, August 3, 14:00-15:30

M. Keane
14:00   Where's the Beef?  The Absence of Pragmatic Factors in Pragmatic
        Theories of Analogy

R. E. Stepp, B.L. Whitehall, L.B. Holder
14:30   Towards Intelligent Machine Learning Algorithms

J. L. de Siqueira, J.-F. Puger
15:00   Explanation-based Generalisation of Failures

Machine Learning 2
Chairperson: C. Rollinger
Room C, Wednesday, August 3, 16:00-17:30

J. Diederich
16:00   Connectionist Recruitment Learning

R. Goodman, P. Smyth
16:30   Information-Theoretic Rule Induction

B. Cestnik, I. Bratko
17:00   Learning Redundant Rules in Noisy Domains

J. Herrmann
17:15   A Machine Learning Approach to Estimation for IC Design

Machine Learning 3
Chairperson:  Y. Kodratoff
Room B, Thursday, August 4, 14:00 - 15:15

F. Bergadano, A. Giordana, L. Saitta
14:00   Concept Acquisition in an Integrated EBL and SBL Environment

M. Valtorta
14:30   Automating Rule Strengths in Expert Systems

P. P. Terpstra, M.W. van Someren
14:45   INDE:  A System for Heuristic Knowledge Refinement

Y. Takada
15:00   Grammatical Inference for Even Linear Languages Based on Control Sets

Machine Learning 4
Chairperson: K. Morik
Room B, Thursday, August 4, 16:00 - 17:30

J. Blythe
16:00   Constraining Search in a Hierarchical Discriminative Learning System

J. G. Ganascia
16:30   Improvement and Refinement of the Learning Bias Semantic

O. Gascuel, A. Danchin
17:00   Data Analysis using a Learning Program, a Case Study:
        An Application of PLAGE to a Biological Sequence Analysis

Machine Learning 5
Chairperson: T. Niblett
Room D, Friday, August 5, 10:30 - 12:00

G.D. Oosthuizen, D.R. McGregor
10:30   Induction Through Knowledge Base Normalisation

J. Nicolas
11:00   Consistency and Preference Criterion for Generalization Languages
        Handling Negation and Disjunction

W. Van de Velde
11:30   Quality of Learning

Multi-Agent Interaction 1
Chairperson: G. Kiss
Room A, Thursday, August 4, 14:00 - 15:15

J. R. Galliers
14:00   A Strategic Framework for Multi-Agent Cooperative Dialogue

J. Ayel
14:30   A Conceptual Supervision Model in Computer Integrated Manufacturing

D. Connah , M. Shiels, P. Wavish
15:00   A Testbed for Research on Cooperating Agents

Multi-Agent Interaction 2
Chairperson: H. Coelho
Room C, Thursday, August 4, 16:00 - 17:15

N.R. Seel
16:00   Modelling Iterated Strategies: A Case Study

S. Adey
16:30   High Level Control of Simulated Ships and Aircraft

C.A. Fields, M.J. Coombs, E.S. Dietrich, R.T. Hartley
16:45   Incorporating Dynamic Control into the Model Generative Reasoning System

Natural Language Understanding 1
Chairperson:  W. Wahlster
Room A, Wednesday,  August 3, 9:00 - 10:30

E. Andre, G. Herzog, T. Rist
9:00    On the Simultaneous Interpretation of Real World Image Sequences and
        their Natural Language Description: the System SOCCER

G. Retz-Schmidt
9:30    A REPLAY of SOCCER: Recognizing Intentions in the Domain of Soccer
                Games

M. Otani, J.M. Lancel
9:45    Sentence Generation: From Semantic Representation to Sentences
        through Linguistic Definitions and Lexicon-Grammar

A.J.H. Simons
10:15   A Qualitative Model of the Articulators

Natural Language Understanding 2
Chairperson:  U. Zernik
Room D,  Thursday,  August 4, 16:00 - 17:30

T. Nakazawa, L. Neher, E.W. Hinrichs
16:00   Unification with Disjunctive and Negative Values for GPSG Grammars

B. Dunin-Keplicz
16:30   Partial Reconstruction of Coreferential Structure of Discourse

L. Lesmo, M. Berti, P. Terenziani
17:00   A Network Formalisation for Representing Natural Language Quantifiers

Non-Standard Approaches 1
Chairperson:  P. Smets
Room D, Wednesday, August 3, 11:00- 12:15

P. Jackson, H. Reichgelt
11:00   A Modal Proof Method for Doxastic Reasoning in Incomplete Theories

M.-O. Cordier
11:30   SHERLOCK:  Hypothetical Reasoning in an Expert System Shell

N. Bidoit, C. Froidevaux
12:00   More on Stratified Default Theories

Non-Standard Approaches 2
Chairperson:  J. Campbell
Room B, Wednesday, August 3,  14:00 - 15:30

P. Smets
14:00   Transferable Belief Model Versus Bayesian Model

D. Dubois, H. Prade, C. Testemale
14:30   In Search of a Modal System for Possibility Theory

A. L. Brown
15:00   Logics of Justified Belief

Non-Standard Approaches 3
Chairperson: R. Turner
Room B, Thursday, August 4, 10:30 - 12:15

Y. Moinard
10:30   Computing Circumscription of Horn Theories

P. Besnard, J. Houdebine, R. Rolland
10:45   A Formula Circumscriptively both Valid and Unprovable

P.J. de la Quintana
11:00   Computing Quantifiers in Predicate Modal Logics

M.R.B. Clarke
11:30   Intuitionistic Non-Monotonic Reasoning - Further Results

A. D'Angelo, C. Mirolo, E. Pagello
11:45   A Multiagent Planner for Reasoning with Incomplete Knowledge in a
        Distributed Environment

Reasoning and Theorem Proving 1
Chairperson:  L. Aiello
Room C, Wednesday, August 3, 9:00 - 10:30

O. Dressler
9:00    Extending the Basic ATMS

E. Lafon, C.B. Schwind
9:30    A Theorem Prover for Action Performance

G. M. Provan
10:00   Solving Diagnostic Problems Using Extended Truth Maintenance Systems

Reasoning and Theorem Proving 2
Chairperson:  W. Walther
Room C, Wednesday, August 3, 11:00 - 12:30

S. Biundo
11:00   Automated Synthesis of Recursive Algorithms as a Theorem Proving Tool

M. Franova
11:30   An Implementation of Program Synthesis from Formal Specifications:
        PRECOMAS

A. Stevens
17:00   A Rational Reconstruction of Boyer and Moore's Technique for
        Constructing Induction Formulas

Reasoning and Theorem Proving 3
Chairperson:  L. Wallen
Room E, Wednesday, August 3, 14:00 - 15:30

E.P.K. Tsang
14:00   Elements in Temporal Reasoning in Planning

W. Lukaszewicz
14:15   Chronological Minimization of Abnormality: Simple Theories of Action

M. Lenzerini, D. Nardi
14:30   Belief Revision as Meta-Reasoning

T. Hrycej
14:45   Intelligent Backtracking with Structured Contexts

H. Tuominen
15:15   Translations from Epistemic into Dynamic Logic

Reasoning and Theorem Proving 4
Chairperson:  C. Hogger
Room E, Wednesday, August 3, 16:00 - 17:30

B. Liu
16:00   A Reinforcement Approach to Scheduling

I. Niemela
16:30   Autoepistemic Predicate Logic

H. Freitag, M. Reinfrank
17:00   A Non-Monotonic Deduction System Based on (A)TMS

Reasoning and Theorem Proving 5
Chairperson:  E. Omedeo
Room D, Thursday, August 4, 10:30 - 12:30

F. Giunchiglia, E. Giunchiglia
10:30   Building Complex Derived Inference Rules: A Decider for the Class
        of Prenex Universal-Existential Formulas

J. Paredis
10:45   Qualified Logic as a Means of Integrating Conceptual Formalisms

K. Ammon
11:00   Discovering a Proof for the Fixed Point Theorem:  A Case Study

J. Treur
11:30   Completeness and Definability in Diagnostic Expert Systems

G. Charminade
12:00   Some Computational Aspects of an Order-Sorted Calculus: Order-Sorted
        Unification Using Compact Representation of Clauses

Robotics
Chairperson: M. Brady
Room C, Thursday, August 4, 14:00 - 15:30

S. Bocionek
14:00   Computer-Aided Configuration of Gantry Robots

P. Levi
14:30   TOPAS:  A Task Oriented Planner for Optimized Assembly Sequences

G. Adorni, A. Camurri, A. Poggi, R. Zaccaria
15:00   Integrating Spatio Temporal Knowledge: A Hybrid Approach

Vision 1
Chairperson: B. Radig
Room D, Wednesday, August 3, 14:00 - 15:45

R. Mohr, G. Masini
14:00   Good Old Discrete Relaxation

C. Sielaff
14:30   Hierarchies Over Relational Structures

K. Ammon, S. Stier
15:00   Constructing Polygon Concepts from Line Drawings

G. Vivo, P. Cosoli, R. Salonna
15:15   An Environment for Expert Image Processing

A. Saroldi
15:30   Successive Grouping: Adding Knowledge to Improve Segmentation

Vision 2
Chairperson: R. Mohr
Room C, Thursday, August 4, 10:30 - 12:00

W. Menhardt
10:30   Image Analysis Using Iconic Fuzzy Sets

V. Johnston, P. Lopez, D. Partridge
10:45   A Biologically Based Algorithm for Rapid Scene Analysis

E. Thirion, R. Mohr
11:00   Matching 3-D Images Without Backtracking Through Feature Grouping



E. Grosso, G. Sandini, C. Frigato
11:30   Extraction Of 3D Information and Volumetric Uncertainty from Multiple
                Stereo Images
_____________________________________
Tutorials

The 8th ECAI is preceded by tutorials, organized by the GI-DIA.
More information is available from the GI-DIA.

Tutorial 1: Expert Systems
Monday, August 1, Room T1
Jean-Gabriel Ganascia, Jim Hunter

Tutorial 2: Logic Programming
Monday, August 1, Room T2
Ulrich Furbach, Klaus Estenfeld, Franz Kurfess

1. Methods of Logic Programming

Besides the usual SLD-resolution approach to logic programming with Horn
clauses, we will describe various other methods, e.g. programming using
full first order predicate logic and equational logic. Furthermore it
will be demonstrated that narrowing, a commonly used inference method
for equational logic programming, can be related very closely to the
paramodulation rule used in theorem proving.
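
[Editor's note: as a rough, hypothetical illustration of the SLD-resolution
idea mentioned above (and of the kind of naive interpreter discussed in
part 2 below), here is a minimal Python sketch of backward chaining over
Horn clauses with unification.  It is not part of the tutorial material;
the clause database, the '?'-variable convention and all names are invented
for illustration only.

def is_var(t):
    # Variables are strings starting with '?'; compound terms are tuples.
    return isinstance(t, str) and t.startswith('?')

def walk(t, s):
    # Follow variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(x, y, s):
    # Return an extended substitution unifying x and y, or None on failure.
    # (No occurs check, as in most Prologs.)
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

def rename(t, n):
    # Give clause variables a fresh suffix so each clause use is independent.
    if is_var(t):
        return t + '_' + str(n)
    if isinstance(t, tuple):
        return tuple(rename(a, n) for a in t)
    return t

def solve(goals, s, db, depth=0):
    # SLD resolution: depth-first, left-to-right proof of the goal list.
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for head, body in db:
        head = rename(head, depth)
        body = [rename(b, depth) for b in body]
        s2 = unify(goal, head, s)
        if s2 is not None:
            yield from solve(body + rest, s2, db, depth + 1)

def resolve(t, s):
    # Substitute bindings back into a term for printing an answer.
    t = walk(t, s)
    return tuple(resolve(a, s) for a in t) if isinstance(t, tuple) else t

# Tiny illustrative program: ancestor/2 defined from parent/2 facts.
DB = [
    (('parent', 'tom', 'bob'), []),
    (('parent', 'bob', 'ann'), []),
    (('ancestor', '?X', '?Y'), [('parent', '?X', '?Y')]),
    (('ancestor', '?X', '?Y'), [('parent', '?X', '?Z'), ('ancestor', '?Z', '?Y')]),
]

if __name__ == '__main__':
    query = ('ancestor', 'tom', '?Who')
    for s in solve([query], {}, DB):
        print(resolve(query, s))   # ('ancestor', 'tom', 'bob'), then (..., 'ann')

Renaming by recursion depth gives each clause application fresh variables; a
real Prolog would add an occurs check, clause indexing and proper backtracking
machinery, all omitted here for brevity.]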

2. Implementation

With PROLOG as the main representative of the logic programming
paradigm, we will discuss the implementation of an interpreter realizing
the PROLOG computation rule. Starting from this classic technique, we
will present mechanisms for the efficient compilation of PROLOG.

3. Parallel Logic Programming

The execution speed of logic programs on conventional machines has
improved a lot in the last few years, partly due to the topics mentioned
in the previous part, but also to more powerful computer systems.
Nevertheless there is still an urgent need for more efficiency in logic
programming. Here, the exploitation of parallelism promises considerable
increases. Parallelism can be introduced on various levels; first, on
the implementation level by executing independent parts of the program
in parallel; second, on the language level by a set of constructs to
control the parallel execution of logic programs, and third, by
meta-level features like various evaluation strategies. The last part of
this tutorial gives a brief overview of methodologies under investigation
for the exploitation of parallelism in logic programming.
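
[Editor's note: to make implementation-level OR-parallelism concrete, here is
a small, hypothetical Python sketch in which the alternative branches for a
single goal are explored in separate worker processes and every successful
answer is collected.  The toy "clauses" and all names are invented for
illustration; this is an analogy for the branching structure only, not a
parallel Prolog system.

from multiprocessing import Pool

# Toy alternative "clauses" for one goal classify(N): each returns a result
# on success or None on failure, like one branch of an OR-choice point.
ALTERNATIVES = [
    lambda n: ('small', n) if n < 10 else None,
    lambda n: ('even', n) if n % 2 == 0 else None,
    lambda n: ('square', n) if round(n ** 0.5) ** 2 == n else None,
]

def try_branch(args):
    # Worker: explore one alternative branch independently of the others.
    index, n = args
    return ALTERNATIVES[index](n)

def or_parallel_solve(n):
    # Explore all alternatives for classify(n) in parallel, keep the successes.
    with Pool() as pool:
        results = pool.map(try_branch, [(i, n) for i in range(len(ALTERNATIVES))])
    return [r for r in results if r is not None]

if __name__ == '__main__':
    print(or_parallel_solve(16))   # prints [('even', 16), ('square', 16)]

Because the branches share no bindings, they can run in any order or in
parallel; AND-parallelism (running the goals of one clause body in parallel)
is harder, since those goals share variables.]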

Ulrich Furbach

received his master's degree from the Technische Universitaet Muenchen in 1976
and his Ph.D. in theoretical computer science from the Universitaet der
Bundeswehr Muenchen in 1983. As an assistant in the Computer Science
Department of the Universitaet der Bundeswehr, he worked on semantics of
programming languages, logic and functional programming, and on knowledge
representation. In 1987 he joined the Artificial Intelligence Group at
the Technische Universitaet Muenchen, where he is involved in the
Esprit-Project 973 (Advanced Logic Programming Environments). He is now
the leader of the AI-Group.

Klaus Estenfeld

1972-1976 Study of Computer Science and Mathematics at the University of
          Saarbruecken
1976-1981 Assistant in the Theoretical Computer Science Dept. at the University
          of Saarbruecken
1/80 Ph.D. with a thesis in Formal Language Theory
10/81-12/83 Research leader in the Logic Programming group of ECRC (European
            Computer-Industry Research Centre), responsible for ECRC-Prolog
1/84-9/86 Project leader for AI tools in the Siemens data division (among
          others, development of a Prolog-DB connection)
11/87- Group leader in Siemens Corporate Applied Computer Sciences with special
       interests in Prolog implementation and in extending Prolog for special
       applications (e.g. electronic CAD)

Franz Kurfess

is a member of the Artificial Intelligence Group at the Technical
University of Munich. His current work is concentrated on
the design of a parallel inference machine based on W. Bibel's
connection method, and is carried out within the framework of the
ESPRIT-Project 415 on Advanced Information Processing. He received his
diploma in computer science from the Technical University of Munich in
1984. His research interests include exploration of parallel and
distributed systems in general, trying to make use of the knowledge
gathered there for the design of parallel computer architectures.



Tutorial 3: Machine Learning
Monday, August 1, Room T3
Yves Kodratoff, Katharina Morik

Tutorial 4: Modal Logics
Monday, August 1, Room T4
Allan Ramsay, Raymond Turner

Tutorial 5: Natural Language Processing
Monday, August 1, Room T5
Uri Zernik, Koenraad De Smedt

Tutorial 6: Deductive Data Bases
Tuesday, August 2, Room T6
Herve Gallaire, Jean-Marie Nicolas

Tutorial 7: Intelligent Tutoring Systems
Tuesday, August 2, Room T7
Benedict du Boulay, Peter Ross

Tutorial 8: Theorem Proving
Tuesday, August 2, Room T8
Jieh Hsiang, Jean-Pierre Jouannaud

------------------------------

End of AIList Digest
********************

∂17-Jul-88  2121	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #8   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 17 Jul 88  21:21:19 PDT
Date: Sun 17 Jul 1988 23:55-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #8
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988        Volume 8 : Issue 8

Today's Topics:

  Guidelines for posting (please read this)

  Computer Games Olympiad
  Spang Robinson AI Report, Vol. 4, No. 5
  Summary of 'Canadian Artificial Intelligence'  July 1988
  Franz Inc. makes available Allegro CL/GNU Emacs Interface

----------------------------------------------------------------------

Date: Sun, 1 Jul 88 22:41 EDT
From: AILIST-REQUEST@AI.AI.MIT.EDU
Subject: Guidelines for posting (please read this)


        When posting messages to AILIST, please try to include
descriptive Subject: lines.  Things like 'Submission to AILIST', 'Please
Post This', and the like are not really very helpful.  They cause me the
extra labor of replacing them with something (hopefully) more
appropriate.

        I assume that anything sent to AILIST@AI.AI.MIT.EDU is intended
as a submission.  If you want to communicate something that shouldn't be
posted (questions, comments, requests to be added/deleted), use
AILIST-REQUEST@AI.AI.MIT.EDU.

        When posting a query about a subject, that alone should be your
Subject: line.  For instance, if you want to know about the FOOBAR
software package, avoid 'Need to know about FOOBAR' or 'Need help with
FOOBAR' in favor of simply 'FOOBAR'.  If you are responding to a query, use
the form 'Response to FOOBAR'.

        When posting Calls for Papers or Announcements, the Subject:
line should simply be the name of the conference or whatever.  Leave it
to me to figure out precisely what category it fits into.  Announcements
should be short and to the point.  Registration forms and unnecessary
verbiage will be edited out without notice.

        For seminar announcements, a Subject: line more descriptive than
'Reminder -- AI Industries Revolving Seminar Tuesday' is nice.  Perhaps
a paraphrase of the title of the talk, followed by the name of the
author would be more informative.

        Please resist the urge to fiddle with the Subject: line when
replying to someone else's message.  It's true that conversations tend
to drift off-subject.  But not everyone in the world has good
conversation-following mail software (in particular, I don't) and
keeping semi-consistent subject lines will help immensely.

        If anyone has any other suggestions or comments on this, please
send them to AILIST-REQUEST@AI.AI.MIT.EDU


        Many thanks,


                - nick

------------------------------

Date: 11 Jul 88 16:36:33 GMT
From: eagle!icdoc!qmc-cs!liam@bloom-beacon.mit.edu  (William Roberts)
Subject: Computer Games Olympiad

There is to be a Computer Games Olympiad in London in 1989.  The details
are as follows:

"The world's first Olympiad for computer programs will take place at the
Park Lane Hotel, London, from August 9th to 15th 1989.  This unique event
will feature tournaments for chess, bridge, backgammon, draughts, poker,
Go, and many other classic "thinking" games.  In every tournament all of
the competitors will be computer programs.  The role of the human operators
will merely be to tell their own programs what moves have been made by
their opponents.
The Computer Olympiad is organised by International Chess Master David Levy,
who is President of the International Computer Chess Association.
Anyone wanting more information on the event should send a large stamped
addressed envelope to:  Computer Olympiad, 11 Loudoun Road, London NW8 OLP,
England.

                        CALL FOR PAPERS

The 1st London Conference on Computer Games will take place as part of the
Computer Olympiad during the period August 9th to 15th 1989.  Papers are
invited on any aspect of programming computers to play "thinking" games
such as chess, bridge, Go, backgammon, etc.
The conference Chair will be Professor Tony Marsland, from the Computing
Science Department at the University of Alberta, Edmonton, Canada.  The
editor of the conference proceedings will be Don Beal, from the Computer
Science Department at Queen Mary College, London University.
Papers should preferably be 3000 to 4000 words in length, and if possible,
should be submitted with an IBM-PC format disk containing the text as a
file for a widely-used word-processor (e.g. Wordstar).  The closing date
for submissions is May 9th 1989.  Papers should be sent to: Computer
Olympiad, 11 Loudoun Road, London NW8 OLP, England.

--

William Roberts         ARPA: liam@cs.qmc.ac.uk  (gw: cs.ucl.edu)
Queen Mary College      UUCP: liam@qmc-cs.UUCP
LONDON, UK              Tel:  01-975 5250

------------------------------

Date: Tue, 12 Jul 88 19:06:31 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Spang Robinson AI Report, Vol. 4, No. 5

Spang Robinson Report on Artificial Intelligence, May 1988,
Vol. 4, No. 5

Lead article is on AI and telecommunications

Bell Atlantic of Morgantown, WV sells a C-based expert system
development tool called LASER.

The earliest telecom expert systems were in diagnosis and maintenance of
switches, cables and trunks.  New ventures are in planning, network
management, and help-desk systems.  Ameritech is working with Bellcore
to develop an "Intelligent Network project."  Illinois Bell has
developed SLEEK to configure subscriber lines and NetDesk to respond to
service needs.
BBN uses DesigNet internally to prototype customer networks.

Network Equipment Technologies is developing a product to do
real-time diagnosis and monitoring of private long-distance networks.

The centerfold is a table of telecommunications applications, listing
nature of product, hardware, cost to develop, etc.


_________________________________________________________________________

The next article is about the marketing of pre-built expert systems.

Right Writer and Gramentek II (grammar and style checkers) use expert
system technology, have sold 100,000 copies each, and have not been
marketed as expert systems.

___________________________________________________________________________

The next article is on Symbolic Math Systems

Symbolics is porting Macsyma to run under Gold Hill Common Lisp.
There have been several delays.

Wolfram, developer of SMP, is releasing Mathematica, a new symbolic
math system.
The manual is being sold by Addison-Wesley.  The program runs on the
Mac Plus, not on MS-DOS, and perhaps on other systems.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Training: Neurocomputing System/ Expert Systems

They review the MIT product, Explorations in Parallel Distributed Processing:
A Handbook of Models, Programs and Exercises, which consists of a lab
manual and two disks for $27.50.  It is "unquestionably the best learning
tool we have found so far in this area.  We give this solid MIT product
two thumbs up."  3,300 copies were sold in the first seven
weeks.  Fifteen thousand copies of Explorations in the Microstructure
of Cognition have been sold.

Gold Hill Computers announced AXLE for $1995.  It comes with
a stand-alone introductory tool plus an expert system development tool.
The latter requires GoldWorks.  It also needs a PC/AT with five
to seven megabytes of extended memory and 7.5 to 11 megabytes of available
hard disk space.  Gold Hill has eight million dollars in revenue and 11,000
customers, and is the 38th largest software firm.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Shorts:

Dialog Designer allows users to generate Macintosh dialogs
and generates LISP code.  It works with Pearl Lisp and is sold
by Coral Software.

(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((
Technology Applications sells Keystone,
which is a development and delivery tool with
a format compatible with KEE.  Running on a 286 or 386, it costs $4,000.

Perceptics sells Knowledge Shapers, which converts rules into decision
trees to improve efficiency.  It then generates C or Ada code and costs
$5,000.  It is aimed at developers requiring high speed, e.g. vision
and real-time applications.

Symbolics now sells
Joshua        $15,000  expert system development and delivery
Concordia     $10,000  document developer
Statice       $10,000 object oriented database

Symbolics expects that in three years 40% of its revenue will be
software.

Symbolics third-quarter revenue was $17.4 million, with a net loss of $4.8 million.

Intellicorp - Third quarter net loss of $269,000

Teknowledge  Revenue 3.5 million  Losses 1.1 million

MCC is now abandoning four years of LISP code for VLSI CAD due to
performance problems.  Work will be redone in C on Sun workstations.

Transform Logic's quarter had $1.5 million revenue and a $0.6 million net loss.
It is a competitor of Bachman.  It does automatic COBOL programming.

Gold Hill will port GoldWorks to the Sun 386i.

Chestnut Software has released dBLISP, a tool to interface dBASE III to
Golden Common Lisp  (cost $295 without source, $495 with source)

NLI's Datatalker (natural language-database interface) is now on
the Sun386i.

Texas Instruments sells Procedure Consultant which uses fault trees
and a visual interface similar to expert systems.  It sells for $495.00.

Hecht-Nielsen Neurocomputers got $50,000 to identify battlefield
problems and to design a neurocomputer.

Intelligent Technology Group and Starwood Corp will sell Intelligent
Portfolio Manager.

Computer Sciences Corporation STAR*LAB has released an expert system to help
with software development.  It runs under Smalltalk/V286 and costs $15,000
for the first copy, $1,000 for each additional copy.

------------------------------

Date: Fri, 15 Jul 88 21:10:39 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Summary of 'Canadian Artificial Intelligence'  July 1988

Summary of Canadian Artificial Intelligence, July 1988

Short:
Xerox will stop manufacturing the 1100 Series Lisp machine and its
descendants.  It will be porting its InterLisp environment to Sun machines,
and the port should be available by the end of the summer.

Proposal for creating Special Interest Group on Educational Technology
______________________________
Discussion of AI applications to Canadian resource industries.

Calgary D&S Knowledge Systems has a well-log analysis system which
runs on a PC.

A system uses vision to monitor the joint angles on the boom and
stick of a Caterpillar excavator.  The operator can now be at a
remote location, and this prevents accidents.

CAIP has finished an expert system to evaluate the impact of forestry
operations on fish habitats.

Acquired Intelligence has a project to analyze oilseed commodities markets
and to provide a veterinary expert system to Third World dairy farmers.

The British Columbia Department of Forestry will be using an Analyst Advisor
to update forest inventory maps.  Another system helps correlate maps
with photoimages.

SWIFTG provides expert advice for severe weather forecasting.

ROKEY is an expert system to track pollution spills into groundwater.  Another
one will help locate drilling waste sumps.

_____________________________________________________________________________

AI in Education:

Simon Fraser University -  Expert System to help develop Lesson Plans
Alberta Research Council and      - computer managed learning system
Computer-Based Training Systems
Softwords                - production rules in CAI
Laval University -  BIOMEC, videodiscs and production rules
University of Saskatchewan  -  SCENT intelligent tutor for LISP
Laval University  -            geography tutor, nutrition tutor
                               simulation based optics
                     -  SCARABEE, help students write adventure stories
                     - natural language based writing package
University of Montreal - HERON, intelligent advisor for word processing
                       - data modelling in data base design
________________________________________
AI research at Applied Systems, INc.

Scheduler for municipal transit operation
Expert System for highway transportation
Contract Evaluation Expert System
Expert Systems for use in ferries and similar ships, including test runs
   on the ocean-going ferry, M.V. Carrie.
Expert System for telecommunications project
Sixth Generation Research
Project to build a translating telephone between Japan, the US and other countries.
Prototype Expert System Decision Aid for Truck Dispatchers
Spacecraft autonomy system for space satellites
Expert system for dealing with problems such as drivers becoming sick

________________________________________
AI research at MacDonald Dettwiler and Associates

PREFORMA is a system to assist in weather forecasting.
It does twelve-hour prediction of precipitation for
a single site using rules of thumb used by forecasters.  Unfortunately
it is not yet as good as human forecasters.

SWIFT is a predictor of severe weather in Canada's prairie provinces.
It uses the SC4 statistical index of potential weather activity
and observed weather.  It parses the remarks sections of SA weather
reports.  It has spatial and temporal reasoning.

FEX automatically determines rivers, lakes, roads, railways and bridges
from remote sensing imagery.  It uses spectral data, whether lines
are parallel (indicating road shapes), whether lines intersect
other potential roads, and knowledge that roads are not located
in water and that buildings are frequently near roads.

STAR is a system for predicting faults in off-highway logging trucks
and planning preventive maintenance.
________________________________________
Acquired Intelligence Inc.

This company was incubated at the University of Victoria.

They have a Knowledge Acquisition methodology that will be developed
for PC/AT type machines.  It will be interfaced with NEXPERTs.
Test systems include clinical neuropsychology
  diagnosis of children's learning disabilities and brain dysfunctions
  haematology
  interpretation of geophysical instrument readings for mineral exploration
  commodity market analysis

________________________________________
Report on the Social Conference

Report on the 1988 Distributed Artificial Intelligence Workshop

Report on the Fourth IEEE Conference on Artificial Intelligence Applications

Book Reviews
Visual Reconstruction by Andrew Blake and Andrew Zisserman
Logics for Artificial Intelligence by Raymond Turner
Manufacturing Intelligence by Paul Kenneth Wright and David Allan Bourne

------------------------------

Date: 16 Jul 88 17:46:08 GMT
From: akbar.UUCP!layer@ucbvax.berkeley.edu  (Kevin Layer)
Subject: Franz Inc. makes available Allegro CL/GNU Emacs Interface


                             Introduction
                             ------------

Franz Inc. is offering to the Lisp community version 1.2 of their
interface between Allegro Common Lisp (previously known as "Extended
Common Lisp") and GNU Emacs.  This interface will also work between
GNU Emacs and Franz Lisp Opus 43 (the current release from Franz Inc.)
or Franz Lisp Opus 38 (the last public domain version).

The goal of this interface is to offer the Lisp programmer a tightly
integrated Lisp environment.  The interface can be broken down into
three parts:

        * Lisp editing modes for Common and Franz Lisp,
        * Inferior modes for Common and Franz Lisp subprocesses, and
        * TCP Common Lisp modes for socket communication with Common
          Lisp.

The first two are available for both Common and Franz Lisp, but the
third feature, which enables the tight coupling of the environments,
is only available for Allegro CL.  It uses multiprocessing in Allegro
CL and a UNIX domain socket to communicate information between the GNU
Emacs and Allegro CL worlds.
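
[Editor's note: the following is a generic, hypothetical Python sketch of this
kind of editor-to-Lisp link over a UNIX domain socket.  The socket path and
the one-line query format are invented for illustration; they are not the
actual protocol used by the Franz Inc. interface.

import os, socket, threading

SOCK_PATH = "/tmp/demo-lisp-link.sock"   # hypothetical path (UNIX systems only)

def serve_one(srv):
    # The "Lisp world" side: accept one connection and answer a single query.
    conn, _ = srv.accept()
    with conn:
        query = conn.recv(1024).decode().strip()
        # Pretend to look up the arglist of the named function.
        reply = "(function &rest lists)" if query == "arglist mapcar" else "unknown"
        conn.sendall(reply.encode())

if __name__ == "__main__":
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(SOCK_PATH)
        srv.listen(1)
        worker = threading.Thread(target=serve_one, args=(srv,))
        worker.start()
        # The "editor" side: connect, ask a question, print the answer.
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
            cli.connect(SOCK_PATH)
            cli.sendall(b"arglist mapcar")
            print(cli.recv(1024).decode())
        worker.join()
    os.unlink(SOCK_PATH)

Both ends live in one process here for brevity; in the real interface the
Lisp world and GNU Emacs are separate processes, with Emacs Lisp on the
editor side.]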

The features of the interface are:

        * enhanced subprocess modes, including
          - file name completion
          - an input ring to allow fetching of previously typed input
        * macroexpansion of forms in a Lisp source file
        * `find-tag' for Lisp functions (the Allegro CL world is
          queried for the location of Lisp functions)
        * who-calls: find all callers of a Lisp function
        * Arglist, `describe' and function documentation are available
          in Lisp source buffers (again, the information comes
          dynamically from Allegro CL)
        * automatic indentation of forms entered to an inferior Lisp

The interface is written entirely in GNU Emacs Lisp, with the
exception of a replacement for the standard `open-network-stream' in
src/process.c.  Some of the advanced features use UNIX domain sockets
and also use features specific to the implementation of Allegro CL
(multiprocessing).

The interface is fully documented on-line.


                              Ownership
                              ---------

The Lisp/GNU Emacs interface is the property of Franz Incorporated.
The Emacs Lisp source code is distributed and the following notice is
present in all source files for which it applies:

;;
;; copyright (C) 1987, 1988 Franz Inc, Berkeley, Ca.
;;
;; The software, data and information contained herein are the property
;; of Franz, Inc.
;;
;; This file (or any derivation of it) may be distributed without
;; further permission from Franz Inc. as long as:
;;
;;      * it is not part of a product for sale,
;;      * no charge is made for the distribution, other than a tape
;;        fee, and
;;      * all copyright notices and this notice are preserved.
;;
;; If you have any comments or questions on this interface, please feel
;; free to contact Franz Inc. at
;;      Franz Inc.
;;      Attn: Kevin Layer
;;      1995 University Ave
;;      Suite 275
;;      Berkeley, CA 94704
;;      (415) 548-3600
;; or
;;      emacs-info%franz.uucp@Berkeley.EDU
;;      ucbvax!franz!emacs-info

Some files contain GNU Emacs derived code, and those files contain
the GNU Emacs standard copyright notice.


                        Obtaining the Software
                        ----------------------

To obtain version 1.2 of this interface either:

1) copy it from ucbarpa.berkeley.edu or ucbvax.berkeley.edu via FTP
   (login `ftp', password your login name) from the directory
   pub/fi/gnudist-1.2-tar.Z, or

2) send a check (sorry, no PO's accepted) in the amount of $50 for a
   US address and $75 for a foreign address to Franz Inc. to the
   following address:

        Franz Inc.
        Attn: Emacs/LISP Interface Request
        1995 University Ave
        Suite 275
        Berkeley, CA 94704

   Please specify the medium (`tar' format only), which is one of:

        * 1/2", 1600 bpi, 9-track
        * 1/4", cartridge tape--specify the machine type (ie, TEK, SUN)


                             Future Work
                             -----------

Improvements to this interface will be made in the future.  To
facilitate the exchange of information about it, and of users'
experiences, questions and suggestions, a mailing list will be created
as a forum for discussion on topics relating to this interface.  If
you would like to be on this mailing list (local redistribution is
encouraged), please drop me a note.  If you have trouble with one of
the addresses below, try one of:

                          layer@Berkeley.EDU
                                 -or-
                             ucbvax!layer

----------------------------------------------------------------------------

        D. Kevin Layer                  Franz Inc.
        layer%franz.uucp@Berkeley.EDU   1995 University Avenue, Suite 275
        ucbvax!franz!layer              Berkeley, CA  94704
                                        (415) 548-3600, FAX: (415) 548-8253

------------------------------

End of AIList Digest
********************

∂18-Jul-88  0249	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #10  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Jul 88  02:49:25 PDT
Date: Mon 18 Jul 1988 00:09-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #10
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 10

Today's Topics:

 Calls for Papers:

  ACM SIGIR 1989 Annual Conference - information retrieval
  Israeli AI Conference
  AI and the Simulation of Behaviour 89
  2nd Conf. on AI and Law
  AI Research for the Battlefield Environment

----------------------------------------------------------------------

Date: Wed, 13 Jul 88 18:31:31 EDT
From: Edward A. Fox <fox@fox.cs.vt.edu>
Subject: ACM SIGIR 1989 Annual Conference


                            SIGIR '89
                         CALL FOR PAPERS

                       12th INTERNATIONAL
                          CONFERENCE ON

          RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL

                    Cambridge, Massachusetts
                       June 25 - 28, 1989

                     Sponsored by ACM SIGIR
                      In cooperation with:
                       AICA - GLIR (Italy)
                   BCS - IRG (United Kingdom)
                GI (Federal Republic of Germany)
                         INRIA (France)

                      Information Retrieval

Information retrieval is one of the most exciting areas of
research and development in the computer and information sciences
today.  Research in this field is becoming increasingly important
in areas as diverse as hypertext, natural language processing,
knowledge representation, expert systems, database and multi-
media object management systems, software engineering and office
information systems.  Similarly, techniques developed in these
and other areas have strong impact on work in information
retrieval, even in its traditional applications in document and text
retrieval systems.  The Annual ACM SIGIR Conference is the prem-
ier forum for presentation and discussion of current research in
this multidisciplinary area.  The 12th Annual Conference will
focus especially on the relationships between information
retrieval and other fields.  The technical program will consist
of contributed research papers and panel presentations. In addi-
tion, there will be a program of tutorials on Sunday 25 June.


                      TOPICS FOR SIGIR '89

Original research papers and panel proposals are solicited on
topics including, but not limited to the following:

Information retrieval theory
e.g. Retrieval models
     Evaluation
     Document and query representation

Artificial Intelligence and Information Retrieval
e.g. Knowledge representation
     Natural language processing
     Connectionism
     Expert systems

Interface issues
e.g. User modelling
     Human-computer interaction
     Intelligent interfaces

Hypertext and Multimedia Systems
e.g. Automatic construction of links
     Search and navigation

Applications
e.g. Software reuse
     Office information systems
     Case-based retrieval

Implementation issues
e.g. Parallel processing
     File organization
     Text searching hardware
     Storage devices, e.g. optical storage



                  INSTRUCTIONS FOR CONTRIBUTORS

                       Contributed Papers

Persons wishing to contribute original research papers should
send four copies of either: a ten to twelve page (double-spaced)
extended abstract; or, a twenty page full paper, to the appropri-
ate program chair, as indicated below.  Papers will be published
in the conference proceedings, and authors will be required to
sign an ACM copyright release form.  Submissions are due 14
December 1988.

                       Panel Presentations

Suggestions for panels should consist of descriptions of the
topic to be covered, the names of proposed speakers and modera-
tor, brief abstracts of the proposed presentations, and the
desired length of time for the panel.  Four copies of proposals,
of no more than three pages, should be sent to the appropriate
program chair. Proposals are due 14 December 1988.  Email may
be used for panel proposals, but must be backed up by hard copy.

                            Tutorials

Proposals for tutorials should consist of the topic to be dis-
cussed, the name(s) and brief biographies of the presenter(s),
and an outline of the tutorial.  Four copies of proposals, of no
more than three pages, are due 16 January 1989. Email may be used
for tutorial proposals, but must be backed up by hard copy.  Proposals
should be sent to the tutorial chair:

     Paul Gandel
     AT&T Bell Laboratories
     Room 2J-501
     Holmdel NJ 07733
     ihnp4!hoqam!pbg



                         IMPORTANT DATES





14 December 1988    Papers and panel proposals due to Program
                    Chairs

16 January 1989     Tutorial proposals due to Tutorial Chair

17 February 1989    Authors informed of acceptance of papers and
                    proposals

20 March 1989       Final versions of papers due





                         Program Chairs

Prof. N.J. Belkin
4 Huntington Street
School of Communication, Information & Library Studies
Rutgers University
New Brunswick, NJ 08903
USA
njb@flash.bellcore.com (internet)
belkin@zodiac (bitnet)
(Americas & Asia)


Prof. C.J. van Rijsbergen
Department of Computing Science
Glasgow University
Lilybank Gardens
Glasgow G12 8QQ
Scotland
cjvr@cs.glasgow.ac.uk
(Europe, Africa, Australia)


                        Program Committee

Robert B. Allen, Bell              Vijay Raghavan, University of
Communications Research            Southwestern Louisiana

Abraham Bookstein, University      Stephen Robertson, The City
of Chicago                         University, London

Alex Borgida, Rutgers              Gerard Salton, Cornell
University                         University

Christine Borgman, UCLA            Karen Sparck Jones, Cambridge
                                   University
Giorgio Brajnik, Universita
degli Studi di Udine               Craig Stanfill, Thinking
                                   Machines Corporation
Yves Chiaramella, Laboratoire
Genie Informatique - IMAG          Jean Tague, University of
                                   Western Ontario
Stavros Christodoulakis,
University of Waterloo             Carlo Tasso, Universita degli
                                   Studi di Udine
Paul Cohen, University of
Massachusetts                      C.J. van Rijsbergen, Glasgow
                                   University
Edward A. Fox, Virginia Poly-
technic Institute and State        Clement Yu, University of
University (VPI&SU)                Illinois at Chicago Circle

William B. Frakes, AT&T Bell           Conference Committee
Laboratories
                                   Conference Chair:
Norbert Fuhr, Technische            Bruce Croft, University of
Hochschule Darmstadt                Massachusetts

Peter Ingwersen, Royal Danish      Program Chairs:
School of Librarianship             Nick Belkin, Rutgers
                                      University
Janet Kolodner, Georgia Tech        C.J. van Rijsbergen, Glasgow
                                      University
Donald H. Kraft, Louisiana State
University                         Tutorials Chair:
                                    Paul Gandel, AT&T Bell
Michael J. McGill, OCLC                  Laboratories

Norman Meyrowitz, Brown            Local Arrangements Chair:
University                          Candy Schwartz, Simmons
                                    College
Erich J. Neuhold, Institute for
Integrated Publication and         Publicity Chair:
Information Systems                 Edward A. Fox, VPI&SU

Fausto Rabitti, IEI-CNRS           Treasurer:
                                    Donna Harman, National Bureau
Roy Rada, Liverpool University      of Standards

------------------------------

Date: Thu, 14 Jul 88 22:34:57
From: Shmuel Peleg <peleg%humus.Huji.AC.IL@MITVMA.MIT.EDU>
Subject: Israeli AI Conference

=====================================================================

                Call For Papers


Fifth Israeli Conference on Artificial Intelligence
Tel-Aviv, Ganei-Hata`arucha,
December 27-28, 1988


The Israeli Conference on Artificial Intelligence is the annual meeting
of the Israeli Association for Artificial Intelligence, which is a SIG
of the Israeli Information Processing Association.  Papers addressing
all aspects of AI, including, but not limited to, the following
topics, are solicited:

        - AI and education
        - AI languages, logic programming
        - Automated reasoning
        - Cognitive modeling
        - Expert systems
        - Image processing and pattern recognition
        - Inductive inference, learning and knowledge acquisition
        - Knowledge theory, logics of knowledge
        - Natural language processing
        - Computer vision and visual perception
        - Planning and search
        - Robotics

This year, the conference is held in cooperation with the SIG on
Computer Vision and Pattern Recognition, and in conjunction with the
Tenth Israeli Conference on CAD and Robotics.  There will be a special
track devoted to Computer Vision and Pattern Recognition.  Joint
activities with the Conference on CAD and Robotics include the
opening session, a session on Robotics and AI, and the exhibition.

Submitted papers will be refereed by the program committee, listed
below.  Authors should submit 4 camera-ready copies of a full paper or
an extended abstract of at most 15 A4 pages.  Accepted papers will
appear without revision in the proceedings.  Submissions prepared on a
laser printer are preferred.  The first page should contain the title,
the author(s), affiliation, postal address, e-mail address, and
abstract, followed immediately by the body of the paper.  Page numbers
should appear in the bottom center of each page.  Use 1-inch margins
and a single-column format.

Submitted papers should be received at the following address by
October 1st, 1988:

        Ehud Shapiro
        5th ICAI
        The Weizmann Institute of Science
        Rehovot 76100, Israel

The conference program will be advertised at the end of October.  It
is expected that 30 minutes will be allocated for the presentation of
each paper, including question time.


Program Committee

Moshe Ben-Bassat, Tel-Aviv University (B25@taunivm.bitnet)
Martin Golumbic, IBM Haifa Scientific Center
Ehud Gudes, Ben-Gurion University (ehud@bengus.bitnet)
Tamar Flash, Weizmann Institute of Science
Yoram Moses, Weizmann Institute of Science
Uzzi Ornan, Technion
Shmuel Peleg, Hebrew University (peleg@humus.bitnet)
Gerry Sapir, ITIM
Ehud Shapiro (chair), Weizmann Institute of Science (udi@wisdom.bitnet)
Jeff Rosenschein, Hebrew University (jeff@humus.bitnet)
Shimon Ullman, Weizmann Institute of Science (shimon@wisdom.bitnet)
Hezy Yeshurun, Tel-Aviv University (hezy@taurus.bitnet)

Secretariat

Israeli Association for Information Processing
Kfar Hamacabia
Ramat-Gan 52109, Israel

=====================================================================

------------------------------

Date: Fri, 15 Jul 88 13:30:24 +0100
From: Tony Cohn <agc%snow.warwick.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: AISB89


AISB89      Call for Papers
===========================

AISB (The Society for the Study of Artificial Intelligence and
Simulation of Behaviour) will hold its seventh biennial conference at
the University of Sussex, April 17-21 1989.  The occasion will also
mark the first 25 years of AISB's existence.

Papers of not more than 5000 words are invited on any aspect of Artificial
Intelligence or the Simulation of Behaviour including

Vision                    Knowledge Representation
Knowledge Acquisition     Automated Reasoning
Cognitive Modelling       Commonsense Reasoning
Learning                  Psychological, Philosophical or Social Implications
Search                    Languages, Machines and Environments for AI
Planning                  Natural Language Understanding

Papers may describe theoretical or practical work but should make a
significant and original contribution to knowledge about the field of
Artificial Intelligence.  A prize will be awarded for the best paper.
It is expected that the proceedings will be published as a book.

Requirements for Submission:

        Each paper should contain an abstract of not more than 200
        words and a list of up to four keywords or phrases describing the
        content of the paper.  Authors should give an electronic mail address
        where possible, but all submissions should be in hardcopy in
        letter-quality print.  Papers should be written in 12 point or pica
        typewriter face on A4 or 8.5" x 11" paper.  Five copies should be
        submitted.  Papers must be written in English.  Submission of a paper
        implies that all authors have obtained all necessary clearances from
        their institution and that an author will attend the conference to
        present the paper if it is accepted.  Papers should describe work that
        will be unpublished on the date of the conference.

Deadline for submission: 1 November 1988

Notification of acceptance mailed by: 7 December 1988

Deadline for camera ready copy: 24 January 1989

Papers and all queries regarding the programme should be sent to
the programme chairman:

Dr Tony Cohn
Dept Computer Science
University of Warwick
COVENTRY
CV4  7AL
UK                      email: agc@uk.ac.warwick.cs
                        arpa:  agc%uk.ac.warwick.cs@nss.cs.ucl.ac.uk

Programme committee:
        Tony Cohn, David Hogg, Alison Kidd,
        Chris Mellish, Mike Sharples, Sam Steel

All other correspondence and queries regarding the conference
should be addressed to:

        Judith Dennison
        SSAISB Executive Officer
        Arts E
        University of Sussex
        BRIGHTON
        BN1 9QN                 Tel: +44 (273) 678379
                                Email: judithd@uk.ac.sussex.cvaxa

------------------------------

Date: Fri, 15 Jul 88 13:22:42 EDT
From: carole hafner <hafner%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: 2nd Conf. on AI and Law


                          CALL FOR PAPERS

                   Second International Conference on
                     ARTIFICIAL INTELLIGENCE and LAW

                          June 13-16, 1989
                    University of British Columbia
                  Vancouver, British Columbia, Canada


The field of AI and Law -- which seeks both to understand fundamental mechanisms
of legal reasoning and to develop useful applications of AI to law --
is burgeoning with accomplishments in both basic research and practical
applications. This increased activity is due in part to more widely available
AI technology, advances in fundamental techniques in AI and increased interest
in the law as an ideal domain for studying certain issues central to AI.
The activities range from development of classic expert systems, intended as
aids to lawyers and judges, to investigation of canonical elements of case-based
and analogical reasoning. The study of AI and law both draws on and contributes
to progress in basic concerns in AI, such as representation of common sense
knowledge, example-based learning, explanation, and non-monotonic reasoning,
and in jurisprudence, such as the nature of legal rules and the doctrine
of precedent.

The Second International Conference on Artificial Intelligence and
Law (ICAIL-89) seeks to stimulate further collaboration between workers in
both disciplines, provide a forum for sharing information at the cutting
edge of research and applications, spur further research on fundamental
problems in both the law and AI, and provide a continuing focus for the
emerging AI and law community.

Authors are invited to contribute papers on topics such as the following:

        -- Legal Expert Systems
        -- Conceptual Information Retrieval
        -- Case-Based Reasoning
        -- Analogical Reasoning
        -- Representation of Legal Knowledge
        -- Computational Models of Legal Reasoning

In addition, papers on relevant theoretical issues in AI (e.g., concept
acquisition, mixed paradigm systems using rules and cases) and in
jurisprudence/legal philosophy (e.g., open-textured predicates, reasoning
with precedents and rules) are also invited provided that the relationship
to both AI and Law is clearly demonstrated. It is important that all authors
identify the original contributions presented in their papers, exhibit
understanding of relevant past work, discuss the limitations as well as
the promise of their ideas, and demonstrate that the ideas have matured
beyond the proposal stage. Each submission will be reviewed by at least three
members of the Program Committee and judged as to its originality, quality,
and significance.

Authors should submit six (6) copies of an Extended Abstract, which must include
a full list of references, by January 10, 1989 to the Program Chair:
      Edwina L. Rissland
      Department of Computer and Information Science
      University of Massachusetts, Amherst, MA 01003, USA;
      (413) 545-0332, rissland@cs.umass.edu.
Submissions should be 6 to 8 pages in length, not including references.
No electronic submissions can be accepted. Notification of acceptance or
rejection will be sent out by early March. Final camera-ready copy of the
complete paper (up to 15 pages) will be due by April 15, 1989.

Program Chair: Edwina L. Rissland, University of Massachusetts/Amherst and
Harvard Law School

General Co-Chairs: Robert T. Franson, Joseph C. Smith, Faculty of Law,
University of British Columbia

Secretary-Treasurer: Carole D. Hafner, Northeastern University

 Program     Kevin D. Ashley            IBM Thomas J. Watson Research Center
 Committee:  Trevor J.M. Bench-Capon    University of Liverpool
             Donald H. Berman           Northeastern University
             Jon Bing                   University of Oslo
             Michael G. Dyer            UCLA
             Anne v.d.L. Gardner        Palo Alto, California
             L. Thorne McCarty          Rutgers University
             Marek J. Sergot            Imperial College London

------------------------------

Date: Sat, 16 Jul 88 12:48:12 EDT
From: John Benton <john@ai.etl.army.mil>
Subject: AI Research for the Battlefield Environment

*******************************CALL FOR ABSTRACTS****************************
The U.S. Army Symposium/Workshop on Artificial Intelligence Research for the
Battlefield Environment will be held on November 15-18, 1988 at the Westin
Hotel in El Paso, Texas.  The Symposium/Workshop is being held under the
auspices of the Assistant Secretary of the Army for Research, Development and
Acquisition and is co-sponsored by the U.S. Army Engineer Topographic
Laboratories, The Atmospheric Sciences Laboratory and the Ballistic Research
Laboratory.  No classified papers will be presented at the Symposium.  Extended
abstracts (of 200 to 300 words) addressing the issues listed in the attached
Symposium Program are being solicited.  Abstracts which most closely address
these issues will be given preference for acceptance.  The abstracts must be
submitted to the Session Chairs listed below by September 1, 1988.  Contractors
are reminded to include a clearance from their Contracting Officer with the
abstract.  Government authors must include a clearance for the abstract from
their agency.  Authors of abstracts accepted for inclusion in the symposium
will be notified not later than September 30 that their abstract has been
accepted and that a camera-ready manuscript must be submitted no later than the
first day of the Symposium.  Letters indicating that the papers have been
cleared by the relevant authorities must be included with the submitted paper.


The chairman for the Session on Automated Terrain Reasoning is John R. Benton,
tel: (202)355-2717, Autovon 345-2717, ARPANET: john@etl.arpa.  His address is

        Commander and Director
        U.S. Army Engineer Topographic Laboratories
        ATTN: ETL-RI-I (John Benton)
        Fort Belvoir, Virginia 22060-5546

The chairman for the Session on The Realistic Battlefield is Dr. Howard Holt.
tel: (505)678-2412 or Autovon 258-2412.  His address is

        Commander and Director
        U.S. Army Atmospheric Sciences Lab
        ATTN:  SLCAS-AS (DR. E. Howard Holt)
        White Sands Missile Range, NM 88002-5501

The chairman for the Session on State-of-the-Art Applications is Morton
Hirschberg. tel:  (301)278-6661 Autovon 298-6661, ARPANET: mort@brl.arpa

        Director
        Ballistic Research Laboratory
        ATTN:  SLCBR-SE-C (M. Hirschberg)
        Aberdeen Proving Ground, MD 21005-5066

(Note: Abstracts may be sent by ARPANET to Mr. Benton or Mr. Hirschberg
accompanied by a statement that the abstract has been cleared and that the
clearance has been mailed.)


                                                John Benton
                                                Program Committee Chairman

****************************************************************************

                               Symposium Program
                                      for
       U.S. Army Symposium/Workshop on Artificial Intelligence Research
                       for the Battlefield Environment

            Session I:  Introduction and Military Requirements

                   Session II: Automated Terrain Reasoning
  Session Chair: John R. Benton, U.S. Army Engineer Topographic Laboratories

Assuming the existence of a topographic data base, can automated terrain
reasoning systems be developed to provide support for operations in the
Battlefield Environment?  Submitted papers should address the following
questions:

     a.  What are the relevant military doctrines?  Can we identify them and
     convert them to computer representation?

     b.  What current research on spatial reasoning has been done that is
     relevant to exploiting the battlefield environment?  What additional
     research needs to be done?  Can cold weather factors be incorporated into
     the research efforts?

     c.  How will the Condensed Army Mobility Model System (CAMMS) be
     integrated into automated terrain reasoning?  Are there inadequacies in
     the model?

     d.  What special requirements do terrain reasoning systems put on
     Geographic Information Systems (GIS)?  Are present GIS's adequate?

     e.  Do Expert Systems (ES) have a role in spatial reasoning - fundamentally,
     or only as an interface to the military doctrine representation?

     f.  How can we make the information usable to the GI in the field?  Will
     it be used at platoon, company, battalion, division or corps level?  Is it
     premature to distinguish applications along these lines?

                      Session III:  The Realistic Battlefield
 Session Chairman: Dr. Howard Holt, U.S. Army Atmospheric Sciences Laboratory

How can we apply artificial intelligence techniques for exploitation of the
realistic battlefield environment, with multiple sources of smoke, dust and
obscurants?  Papers will address the following questions:

     a.  Is relevant military doctrine subjective?  Can it easily be converted
     to a computer representation?

     b.  How can information on smoke and obscurants be usefully presented to
     the GI in the field?

     c.  Can Geographic Information Systems be used to represent obscurants
     which move as a function of time?  How can obscurant data be made to
     interact with a GIS?

     d.  What role will Expert Systems (ES) play?

                     Session IV: State-of-the-Art Applications
     Session Chair: Morton Hirschberg, U.S. Army Ballistic Research Laboratory

Are there any state-of-the-art applications?  What are the best candidates for
automating terrain reasoning?  Submitted papers should address these questions.

------------------------------

End of AIList Digest
********************

∂18-Jul-88  0549	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #11  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Jul 88  05:49:23 PDT
Date: Mon 18 Jul 1988 00:16-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #11
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 11

Today's Topics:

 Free Will

  K. Godel.
  Carlos Castaneda
  How to dispose of the free will issue

----------------------------------------------------------------------

Date: Tue, 12 Jul 88 17:53:05 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: K. Godel.

>From AIList Digest   V8 #6

[ Date: 6 Jul 88 17:04:13 GMT
[ >From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net  (Jeff Dalton)
[ Subject: Re: Free Will-Randomness and Question-Structure
[
[ In article <304@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
[ ] Actually, the point was just that: when I say that something is
[ ] true in a mathematical sense, I mean just one thing: the thing
[ ] follows from the chosen axioms;
[
[ "True" is not the same as "follows from the axioms".  See Godel et al.

Mathematical axioms spawn proofs (not truths). Godel's theorem says you
may find a proposition you cannot prove with your "chosen axioms".

One way out: make the statement you cannot prove an axiom, taking care not
to end up with an unbounded number of axioms.
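
For reference, the usual textbook statement (in the Rosser form, with Q
denoting Robinson arithmetic) makes that worry precise:

  \[
  T \supseteq Q \ \text{consistent and recursively axiomatizable}
  \;\Longrightarrow\; \exists\, G_T:\ T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T .
  \]

Since T' = T + G_T is again consistent (T does not prove \lnot G_T) and
recursively axiomatizable, the theorem applies to T' in turn, so adding the
unprovable sentence as an axiom never yields a complete theory.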

I look forward to axiomatic engineering and axiomatic political
science.

Gordon Joly.

Surface mail: Dr. G.C.Joly, Department of Statistical Sciences,
      University College London, Gower Street, LONDON WC1E 6BT, U.K.
E-mail:                                            | Tel: +44 1 387 7050
 JANET (U.K. national network) gcj@uk.ac.ucl.stats |      extension 3636
         (Arpa/Internet form: gcj@stats.ucl.ac.uk) |
Relays: ARPA,EAN: @nss.cs.ucl.ac.uk                |
        CSNET: %nss.cs.ucl.ac.uk@relay.cs.net      |
        BITNET: %ukacrl.bitnet@cunyvm.cuny.edu, @ac.uk
        EARN: @ukacrl.bitnet, @AC.UK
By uucp/Usenet: ....!uunet!mcvax!ukc!stats.ucl.ac.uk!gcj

------------------------------

Date: 12 Jul 88 19:08:25 GMT
From: hartung@nprdc.arpa (Jeff Hartung)
Reply-to: hartung@nprdc.arpa (Jeff Hartung)
Subject: Re: Carlos Castaneda


In a previous article, James J. Lippard writes:
>   I would like to advise caution in reading these works, and recommend a few
>books which are highly skeptical of Castaneda.  These works present evidence
>that Castaneda's "Don Juan" writings are neither autobiographical nor valid
>ethnography.  E.N.  Anderson, then associate professor of anthropology at UCLA
>(where Castaneda received his doctorate), wrote (in The Zetetic, Fall/Winter
>1977, p. 122) that "de Mille exposed many inconsistencies that prove *either*
>that Castaneda was a brilliant fraud *or* that he was an incredibly careless
>and sloppy ethnographer in a disorganized department." (He believes the
>latter.)
>...

I noticed that the most recent Castaneda book in the series, "The Fire From
Within," was published as a work of fiction, unlike the previous six books.  I
took this to be a confession that the works were largely fictitious even prior
to it.  Furthermore, the later books state that what Castaneda believed to be
a Yaqui philosophy initially was in fact a view belonging to a small cult of
"sorcerers" and not to the Yaqui in general, even if you *do* believe the
first six books' claim to being non-fiction.

--Jeff Hartung--

ARPA - hartung@nprdc.arpa
       hartung@sdics.ucsd.edu
UUCP - !ucsd!nprdc!hartung
       !ucsd!sdics!hartung

------------------------------

Date: 13 Jul 88 18:03:32 GMT
From: ns!logajan@umn-cs.arpa  (John Logajan x3118)
Subject: Re: How to dispose of the free will issue (long)


Since we are asked to believe in unprovable things, such as the no-free-will
theory (or the free-will theory, for that matter), why not believe in every
unprovable theory?

Just try combining the deterministic theory with the many-worlds theory.  In
many worlds, at each instant the universe splits into an infinite number of
alternate universes, each one taking a slightly different 'turn', e.g. in
one universe I get killed, in another I don't, etc.  Each sub-universe further
splits into an infinite number, and so on.

You can argue determinism both ways here.  After all, every possibility is
addressed, and so it is deterministic in some sense and yet it isn't.

My point is that unprovable theories aren't very useful.

- John M. Logajan @ Network Systems; 7600 Boone Ave; Brooklyn Park, MN 55428 -
- {...rutgers!umn-cs, ...amdahl!bungia, ...uunet!rosevax!bungia} !ns!logajan -

------------------------------

Date: 15 Jul 88 12:47:23 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: How to process of the free will question.

Suppose we build an intelligent system who is able to access
information from the outside world and thereby acquire knowledge
and abilities.

We ask it, "Do you have Free Will?"

Suppose it answers, "Gee, I'm not sure.  Can you suggest an
experiment I can do on myself to find out?"

What do we reply?

--Barry Kort

------------------------------

End of AIList Digest
********************

∂18-Jul-88  0824	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #12  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Jul 88  08:23:53 PDT
Date: Mon 18 Jul 1988 00:20-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #12
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 12

Today's Topics:

  does AI kill? - Responses to the Vincennes downing of an Iranian Jetliner

----------------------------------------------------------------------

Date: 12 Jul 88 22:21:17 GMT
From: amdahl!nsc!daisy!klee@ames.arpa  (Ken Lee)
Subject: does AI kill?

This appeared in my local paper yesterday.  I think it raises some
serious ethical questions for the artificial intelligence R&D community.
------------------------------
COMPUTERS SUSPECTED OF WRONG CONCLUSION
from Washington Post, July 11, 1988

Computer-generated mistakes aboard the USS Vincennes may lie at the root
of the downing of Iran Air Flight 655 last week, according to senior
military officials being briefed on the disaster.

If this is the case, it raises the possibility that the 280 Iranian
passengers and crew may have been the first known victims of "artificial
intelligence," the technique of letting machines go beyond monitoring to
actually making deductions and recommendations to humans.

The cruiser's high-tech radar system, receivers and computers - known as
the Aegis battle management system - not only can tell the skipper what
is out there in the sky or water beyond his eyesight but also can deduce
for him whether the unseen object is friend or foe and say so in words
displayed on the console.

This time, said the military officials, the computers' programming could
not deal with the ambiguities of the airline flight and made the wrong
deduction, reached the wrong conclusion and recommended the wrong
solution to the skipper of the Vincennes, Capt. Will Rogers III.

The officials said Rogers believed the machines - which wrongly
identified the approaching plane as hostile - and fired two missiles at
the passenger plane, knocking it out of the sky over the Strait of
Hormuz.
------------------------------

The article continues with evidence and logic the AI should have used to
conclude that the plane was not hostile.

Some obvious questions right now are:
        1.  is AI theory useful for meaningful real world applications?
        2.  is AI engineering capable of generating reliable applications?
        3.  should AI be used for life-and-death applications like this?

Comments?

Ken
--
uucp:  {ames!atari, ucbvax!imagen, nsc, pyramid, uunet}!daisy!klee
arpanet:  atari!daisy!klee@ames.arc.nasa.gov

STOP CONTRA AID - BOYCOTT COCAINE

------------------------------

Date: 13 Jul 88 15:23:00 GMT
From: uxe.cso.uiuc.edu!morgan@uxc.cso.uiuc.edu
Subject: Re: does AI kill?


I think these questions are frivolous. First of all, there is nothing in
the article that says AI was involved. Second, even if there were, the
responsibility for using the information and firing the missile is the
captain's. The worst you could say is that some humans may have oversold
the captain and maybe the whole navy on the reliability of the information
the system provides. That might turn out historically to be related to
the penchant of some people in AI for grandiose exaggeration. But that's
a fact about human scientists.

And if you follow the reasoning behind these questions consistently,
you can find plenty of historical evidence to substitute 'engineering'
for 'AI' in the three questions at the end.
I take that to suggest that the reasoning is faulty.

Clearly the responsibility for the killing of the people in the Iranian
airliner falls on human Americans, not on some computer.

At the same time, one might plausibly interpret the Post article as
a good argument against any scheme that removes human judgment from the
decision process, like Reagan's lunatic fantasies of SDI.

------------------------------

Date: 13 Jul 88 19:18:47 GMT
From: ucsdhub!hp-sdd!ncr-sd!ncrlnk!rd1632!king@ucsd.edu  (James King)
Subject: RE: AI and Killing (Long and unstructured)


Does AI Kill

Some of my opinions:

In my understanding of the Aegis system there is NO direct connection between
the fire control and the detection systems.  The indirect connection between
these two modules is now (and hopefully will be for some time) a human
being.  A knowledge-based system interface to sensor units on the Vincennes
was not responsible for killing anyone.  It was a human's
position to make the decision, based on multiple sensors and decision
paths (experience), whether or not to fire on the jet.

The computer system onboard the Vincennes was responsible for evaluating
a sensor reading(s) and providing recommendations as to what the reading was
and possibly what action to take.  Is this fundamentally any different
than employing a trained human to interpret radar signals and asking for
their opinion on a situation?

There are two answers to the above.  One is that there is little difference
in the input or the output of the human or the computer system.  Two, the
difference IS in how each arrives at the output.  The computer is limited by the rules
in the computer system and the fact that a computer system may not possess
a complete understanding of the situation (i.e. area tensions, ongoing
stress in battle situations, prior situational experience that
includes "gut" reactions or intuition, etc.)  Of course speed of
execution is critical but that's another topic.  The human can rely on
emotions, peripheral knowledge, intuition, etc. to make a decision or
a recommendation.  But the point is that each is concluding something
about the sensor output and reporting it to the commander. (period)

I feel that the system contributed to the terrible mistake by providing an
inaccurate interpretation of the sensor signal(s) and making incorrect
recommendation(s) to the commander - BUT it was still a human decision
that caused the firing.  BTW, I do not (at present) condemn the commander
for the decision he made - as best I know (as he did), the ship's captain
made the best decision under the circumstances (danger to his ship) and
through interpretation of his sensor evaluations (Aegis as one type).
But I disagree with placing oneself in that position of killing another.

Our problems (besides having our fingers on the trigger) stem from the
facts that we have to have a trigger in the first place and secondly that
we have to rely on sensory input to make a decision - where certain
sensory inputs may be in error.  The "system" is at fault - not just
one component of that system.  To correct the system we fix the components.
So we make the Aegis system "smarter" so it contains knowledge about the
patterns that an Airbus gives as a cross section (or signature) on a
radar sensor - what then?  We have refined the system somewhat.  Do we
keep refining these systems continuously so they grow through experience
as we do?

Oh no!  We've started another discussion topic.

--------------------------------------------------------------------------

A couple thoughts on Mr Lee's questions:

- Whether AI can produce real applications:
  - The conventional/commercial view of AI is that of expert systems.
    Expert systems in and of themselves are just software systems which
    were programmed using a computer language(s) like any other software
    system.  Some pseudo-differences are based on how we view the
    development of this software.  We are now more aware of the need for
    embedding expert's knowledge in these systems and providing automated
    methods for "reasoning" through this information.
  - Through this awareness, people ARE developing systems that use expert
    knowledge in very focused domains (i.e. fault diagnosis, etc.).
    - The same problem exists in a nuclear plant.  Did the fault diagnosis
      system properly diagnose the problem and report it?  More
      importantly did the operator act/reason on the diagnosis properly?
    - Where would the blame end?  Is it the expert system's fault or is it
      the sensor's fault - or is it human error?
  - Systems that learn and reason based on non-original programmed
    functionalities have not been developed/deployed but we'll see ...


- Whether AI can be used in life-death situations:

  - If you were in a situation in which a decision had to be made within
    seconds, i.e. most life and death situations, would you:
    1.  Rely on a toss of a coin?
    2.  Make a "shot in the dark" decision?
    3.  Make a quickly reasoned decision based on two or three inferences in
        your mind?
    OR
    4.  Use the decision of a computer (if it had knowledge of the situation's
        domain and could perform thousands of logical inferences/second)?
  - One gets you even odds.  Two gets you a random number for the odds.
    Three gives you slightly better odds based on minimal decision making.
    And four provides you with a recommendation based on the knowledge of
    maybe a set of experts and with the speed of computer processing.
  - If you're an emotional person you probably pick two.  Maybe if you
    have a quick, "accessible" hunch you pick three.  But if you're a
    logical, disciplined person you would go with the greatest backing
    which is four (and a combination of one through three if the commander
    is experienced!).
  - Which one characterizes a commander in the Navy?
  - Personally I'm not sure which I'd take.


Jim King
NCR Corporation
j.a.king@dayton.ncr.com

Remember these ideas do not represent the position or ideas of my
employer.

------------------------------

Date: 13 Jul 88 22:07:19 GMT
From: ssc-vax!ted@beaver.cs.washington.edu  (Ted Jardine)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:
> This appeared in my local paper yesterday.  I think it raises some
> serious ethical questions for the artificial intelligence R&D community.
> ... [Washington Post article about Aegis battle management computer
>       system being held responsible for providing erroneous conclusions
>       which led to the downing of the Iranian airliner -- omitted]
>
> The article continues with evidence and logic the AI should have used to
> conclude that the plane was not hostile.
>
> Some obvious questions right now are:
>       1.  is AI theory useful for meaningful real world applications?
>       2.  is AI engineering capable of generating reliable applications?
>       3.  should AI be used for life-and-death applications like this?
> Comments?

First, to claim that the Aegis Battle Management system has an AI component
is patently ridiculous.  I'm not suggesting that this is Ken's claim, but it
does appear to be the claim of the Washington Post article.  What Aegis is
capable of doing is deriving conclusions based on an analysis of the radar
sensor information.  Its conclusions, while I wouldn't consider them AI, may
be so considered by someone else, but without a firm basis.  Let's first
agree that a mistake was made.  And let's also agree that innocent lives were
lost, and tragicly so.  What the key issue here is, I believe, that tentative
conclusions were generated based on partial information.  There is nothing
but good design and good usage that will prevent this from occurring, regardless
whether the system is AI or not.  I believe that a properly created AI system
would have made visible the conclusion and it tentative nature.  I believe
that AI systems can be built for meaningful real world application.  But there
is a very real pitfall waiting along the road to producing such a system.

It's the pitfall that permits us to invest some twenty years of time and some
multiple thousands of dollars (hundreds of thousands?) into the training and
education of a person with a Doctorate in a scientific or engineering discipline
but not to permit a similar investment into the creation of the knowledge base
for an AI system.  Most of the people I have spoken to want AI, are convinced
that they need AI, but when I say that it costs money, time and effort just
like anything else, the backpedaling speed goes from 0 to 60 mph in less time
than a Porsche or Maserati!

I think we need to be concerned about this issue.  But I hope we can avoid
dumping AI technology down the drain because it won't give us GOOD answers
unless we INVEST some sweat and some money.  (Down off the soap box, boy! :-)

--
TJ {With Amazing Grace} The Piper
aka Ted Jardine  CFI-ASME/I
Usenet:         ...uw-beaver!ssc-vax!ted
Internet:       ted@boeing.com

------------------------------

Date: 14 Jul 88 00:06:10 GMT
From: nyser!cmx!billo@itsgw.rpi.edu  (Bill O)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>Computer-generated mistakes aboard the USS Vincennes may lie at the root
>...
>If this is the case, it raises the possibility that the 280 Iranian
>passengers and crew may have been the first known victims of "artificial
>intelligence," the technique of letting machines go beyond monitoring to
>actually making deductions and recommendations to humans.
>...
>is out there in the sky or water beyond his eyesight but also can deduce
>for him whether the unseen object is friend or foe and say so in words
>displayed on the console.
>

This is an interesting question -- it touches on the whole issue of
accountability in the use of AI.  If an AI system makes a
recommendation that is followed (by humans or machines), and the end
result is that humans get hurt (physically, financially,
psychologically, or whatever), who is accountable:

1) the human(s) who wrote the AI system

2) the human(s) who followed the injurious recommendation (or
   who, by inaction, allowed machines to follow the
   recommendation)

3) (perhaps absurdly) the AI program itself, considered as a
   responsible entity (in which case I guess it will have to
   be "executed" -- pun intended).

4) no one

However, as interesting as this question is (and I'm not sure where
the discussion of it should be), let's not jump to the conclusion that
AI was involved in the Vincennes incident.  We cannot assume that the
Post's writers know what AI is, and the "top military officials" may
also be confused, or may have ulterior motives for blaming AI. Maybe
AI is involved, maybe it isn't. For instance, a system that simply
matches radar image size and/or characteristics -- probably coupled
with information from the IFF transponder signal -- to the words
friend or foe "printed on the screen" is very likely not AI by most
definitions. Perhaps the Iranians were the first victims of "table
look-up", (although I have my doubts about "first").  Does anyone out
there know about Ageis -- does it use AI? (Alas, this is probably
classified).

Bill O'Farrell, Northeast Parallel Architectures Center at Syracuse University
(billo@cmx.npac.syr.edu)

------------------------------

Date: 14 Jul 88 06:32:58 GMT
From: portal!cup.portal.com!tony_mak_makonnen@uunet.uu.net
Subject: Re: does AI kill?

No, AI does not kill, but AI people do.  The very people who can
write computer programs for you should be the last to decide how,
when and how much the computer should compute.  The reductive process
coexists in the human brain with the synthetic process.  The human in the
loop could override a long computation by bringing in factors that could
not practically be foreseen: "why did the Dubai tower say ...?", "why is the
other cruiser reporting a different altitude?"; these small doubts from all
over the environment could trigger experiences in the brain which could
countermand the neat calculated decision.  Ultimately the computer is equipped
with a sensory system that is poor compared to the human brain.  From an extremely
small slice of what is out there we expect a real-life conclusion.

------------------------------

Date: 14 Jul 88 14:17:52 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu  (Tom Holroyd)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:
> Computer-generated mistakes aboard the USS Vincennes may lie at the root
> of the downing of Iran Air Flight 655 last week, according to senior
> military officials being briefed on the disaster.
> ...
> The officials said Rogers believed the machines - which wrongly
> identified the approaching plane as hostile - and fired two missiles at
> the passenger plane, knocking it out of the sky over the Strait of
> Hormuz.
> ...
> Some obvious questions right now are:
>       1.  is AI theory useful for meaningful real world applications?
>       2.  is AI engineering capable of generating reliable applications?
>       3.  should AI be used for life-and-death applications like this?

Let's face it.  That radar system *was* designed to kill.  It was only
doing its job.  In a real combat situation, you can't afford to make the
mistake of *not* shooting down the enemy, so you err on the side of
shooting down friends.  War zones are dangerous places.

Now, before y'all start firing cruise missiles at me, I am *NOT*, I repeat
NOT praising the system that killed 280 people.  What I am saying is that
it wasn't the fault of the computer program that incorrectly identified
the airliner as hostile.  The blame lies entirely on the captain of the
USS Vincennes and his superiors for using the system in a zone where
commercial flights are flying.

The question is not whether AI should be used for life-and-death applications,
but whether it should be switched on in a situation like that.

In my opinion.

P.S.  And it could have been done on purpose, by one side or the other.

Tom Holroyd
UUCP: {uflorida,uunet}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 14 Jul 88 15:20:22 GMT
From: ockerbloom-john@yale-zoo.arpa  (John A. Ockerbloom)
Subject: Re: AI and Life & Death Situations

In article <496@rd1632.Dayton.NCR.COM> James King writes:
>- Whether AI can be used in life-death situations:
>
>  - If you were in a situation in which a decision had to be made within
>    seconds, i.e. most life and death situations, would you:
>    1.  Rely on a toss of a coin?
>    2.  Make a "shot in the dark" decision?
>    3.  Make a quickly reasoned decision based on two or three inferences in
>        your mind?
>    OR
>    4.  Use the decision of a computer (if it had knowledge of the situation's
>        domain and could perform thousands of logical inferences/second)?
>  - One gets you even odds.  Two gets you a random number for the odds.
>    Three gives you slightly better odds based on minimal decision making.
>    And four provides you with a recommendation based on the knowledge of
>    maybe a set of experts and with the speed of computer processing.
>  - If you're an emotional person you probably pick two.  Maybe if you
>    have a quick, "accessable" hunch you pick three.  But if you're a
>    logical, disciplined, person you would go with the greatest backing
>    which is four (and a combination of one through three if the commander
>    is experienced!).

I don't think this is a full description of the choices.  If you indeed
have a great deal of expertise in the area, you will have a very large set
of explicit and implicit inferences to work from, fine-tuned over years
of experience.  You will also have a good idea about the relative
importance of different facts and rules, and can thereby find the relevant
decision paths very quickly.  In short, your mental decision would be
based on "deep" knowledge of the situation, and not just on "two or three
inferences."

Marketing hype aside, it is very difficult to get a computer program
to learn from experience and pick out relevant details in a complex
problem.  It's much easier just to give it a "shallow" form of
knowledge in a set of inference rules.  In a high-pressure situation,
I would not have time to find out *how* a given computer program arrived
at a decision, unless I was very familiar with its workings to begin with.
So if I were experienced myself, I'd trust my own judgment over the
program's in a life-or-death scenario.

John Ockerbloom
------------------------------------------------------------------------------
ockerbloom@cs.yale.EDU              ...!{harvard,cmcl2,decvax}!yale!ockerbloom
ockerbloom@yalecs.BITNET            Box 5323 Yale Station, New Haven, CT 06520

------------------------------

Date: 14 Jul 88 15:27:56 GMT
From: rti!bdrc!jcl@mcnc.org  (John C. Lusth)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
<This appeared in my local paper yesterday.  I think it raises some
<serious ethical questions for the artificial intelligence R&D community.
<------------------------------
<COMPUTERS SUSPECTED OF WRONG CONCLUSION
<from Washington Post, July 11, 1988
<
<Computer-generated mistakes aboard the USS Vincennes may lie at the root
<of the downing of Iran Air Flight 655 last week, according to senior
<military officials being briefed on the disaster.
<
   text of article omitted...
<
<The article continues with evidence and logic the AI should have used to
<conclude that the plane was not hostile.
<
<Some obvious questions right now are:
<       1.  is AI theory useful for meaningful real world applications?
<       2.  is AI engineering capable of generating reliable applications?
<       3.  should AI be used for life-and-death applications like this?
<
<Comments?

While I am certainly no expert on the Aegis system, I have my doubts
as to whether Artificial Intelligence techniques (rule-based technology,
especially) were used in constructing the system.  While certain parts
of the military are hot on AI, overall I don't think they actually trust
it enough to deploy it as yet.

If what I suspect is correct (does anyone out there know otherwise?), perhaps
the developers of Aegis could have avoided this disaster by incorporating
a rule-based system rather than a less robust methodology such as decision
trees or decision support systems.

John C. Lusth
Becton Dickinson Research Center
RTP, NC

...!mcnc!bdrc!jcl

------------------------------

Date: 14 Jul 88 17:24:37 GMT
From: ut-emx!juniper!wizard@sally.utexas.edu  (John Onorato)
Subject: Re: AI and Killing


The situation is perfectly clear to me.  See, the Aegis system (which the
Vincennes used to track the airbus) does not tell the operator the size
or any other identifying information about the planes that it is tracking.  It
just gives the operator information that it gets from the transponder in
the belly of the plane.  The jet that got shot down had both civilian and
military transponders, as it was also being used as a troop transport from
time to time.  Therefore, the captain of the Vincennes DID NOT KNOW that
the object was a civilian one -- the pilot of the airbus did not answer any
hailings, he was deviating from the normal flight path for civilian planes
crossing the gulf, and the airbus was late (there were no planes scheduled
to be overhead at the time the airbus was flying; it was 27 minutes late,
I think).  Therefore, the captain, seeing that this COULD POSSIBLY BE a hostile
plane, blew it out of the sky.  I think he did the right thing... I feel
remorse and regret for the lives that were lost, but Captain Will Rogers
was just acting to save the lives of himself and his crew.

The computer is not at fault here.  Of course, SOME of the blame can be put
on it by saying that it didn't show the size, etc. of the airbus (which
would then identify it as a civilian plane).  However, this oversight is
a design fault, not an equipment fault.  It (Aegis) was never designed
to be used in a half-peace/half-war type situation like today's Gulf.
I do not feel that the blame can be placed on Captain Will Rogers' head,
either, for he was just acting on what information he had.  If it is
anyone's fault, it was the pilot of the airbus's fault...  he was late,
he deviated from the accepted flight path, he was carrying two sets of
transponders, and he never answered any hails from the Vincennes.

That's MY view, anyway.  If anyone cares to differ, go right ahead.


wizard


--
|--------------------|------------------------------| Joy is in the
|wizard@juniper.UUCP | juniper!wizard@emx.utexas.edu|   ears that hear...
|juniper!wizard      | ut-emx!juniper!wizard        |
|--------------------|------------------------------| -- Stephen R Donaldson

------------------------------

End of AIList Digest
********************

∂18-Jul-88  1651	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #15  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Jul 88  16:51:00 PDT
Date: Mon 18 Jul 1988 00:28-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #15
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 15

Today's Topics:

  More on the Soundex algorithm

----------------------------------------------------------------------

Date: 11 Jul 88 18:21:16 GMT
From: sundc!netxcom!sdutcher@seismo.css.gov  (Sylvia Dutcher)
Subject: Re: Soundex algorithm

In article <12520@sunybcs.UUCP> stewart@sunybcs.UUCP (Norman R. Stewart) writes:
>
>     The source I've used for Soundex (developed by the
>Remington Rand Corp., I believe), is
>
>     Huffman, Edna K. (1972) Medical Record Management.
>        Berwyn, Illonois: Physicians' Record Company.

I've written a soundex program based on the rules in Knuth's _Sorting_and_
Searching_.  These are also the rules used at the National Archives to sort
census data.  These rules differ slightly from the ones posted by Mr. Stewart.
If you don't need to match anyone else's soundex, the most important rule is
to be consistent.  I will insert Knuth's rules below.
>The algorithm is very simple,

1. Retain the first letter of the name, and drop all occurrences of a, e, h,
i, o, u, w, y in other positions.

>1:  Assign number values to all but the first letter of the
>word, using this table
>   1 - B P F V                                    2 - C S K G J Q X Z
>   3 - D T                                        4 - L
>   5 - M N                                        6 - R
>   7 - A E I O U W H Y

2. Assign number values as above, except for 7.

>2: Apply the following rules to produce a code of one letter and
>   three numbers.
>   A: The first letter of the word becomes the initial character
>      in the code.
>   B: When two or more letters from the same group occur together
>      only the first is coded.
>   C: If two letters from the same group are seperated by an H or
>      a W, code only the first.

3. If two or more letters with the same code were adjacent in the original
name (before step 1), omit all but the first.

>   D: Group 7 letters are never coded (this does not include the
>      first letter in the word, which is always coded).

4. Convert to the form "letter, digit, digit, digit" by adding trailing
zeros or dropping rightmost digits.


BTW according to the reference in Knuth's book, this algorithm was
developed by Margaret Odell and Robert Russell in 1922.
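
For concreteness, here is a minimal sketch of the four rules above in Python
(illustrative only, not taken from any particular Soundex package; it
deliberately omits the H-or-W rule from the quoted posting, so its output can
differ from other Soundex variants):

# Minimal sketch of Knuth's four Soundex rules as stated above.
CODES = {}
for digit, letters in enumerate(("bpfv", "cskgjqxz", "dt", "l", "mn", "r"), 1):
    for ch in letters:
        CODES[ch] = str(digit)

def soundex(name):
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    # Rule 3: letters with the same code that were adjacent in the
    # original spelling are collapsed to a single letter.
    kept = [name[0]]
    for ch in name[1:]:
        if CODES.get(ch) is None or CODES.get(ch) != CODES.get(kept[-1]):
            kept.append(ch)
    # Rule 1: keep the first letter; drop a, e, h, i, o, u, w, y elsewhere.
    first, rest = kept[0], [c for c in kept[1:] if c not in "aehiouwy"]
    # Rules 2 and 4: code the remaining letters, then pad with zeros or
    # truncate to the form "letter, digit, digit, digit".
    return (first.upper() + "".join(CODES[c] for c in rest) + "000")[:4]

# Examples: soundex("Robert") == "R163", soundex("Tymczak") == "T522".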


>Norman R. Stewart Jr.             *  How much more suffering is
>C.S. Grad - SUNYAB                *  caused by the thought of death
>internet: stewart@cs.buffalo.edu  *  than by death itself!
>bitnet:   stewart@sunybcs.bitnet  *                       Will Durant


--
Sylvia Dutcher                      *   The likeliness of things
NetExpress Communications, Inc.     *   to go wrong is in direct
1953 Gallows Rd.                    *   proportion to the urgency
Vienna, Va. 22180                   *   with which they shouldn't.

------------------------------

Date: Wed, 13 Jul 88 10:35:06 EDT
From: "William J. Joel" <JZEM%MARIST.BITNET@MITVMA.MIT.EDU>
Subject: Re: Soundex algorithm

/* The following is source code for a Soundex algorithm written in */
/* Waterloo Prolog. */
/* William J. Joel*/
/* Marist College */
/* Poughkeepsie, NY */
/* jzem@marist.bitnet */
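/* Overview of the predicates below:                                 */
/*   key(L,N)        - digit group N for letter L; vowels, h, w and  */
/*                     y get a negative code so that eliminate/2     */
/*                     can discard them later.                       */
/*   soundex(Name,C) - C is the one-letter, three-digit code.        */
/*   soundex1/2      - codes the letters with keycode/2, collapses   */
/*                     adjacent equal codes (including equal codes   */
/*                     separated by an h or y, coded -2) with        */
/*                     reduce/2, discards the negative codes with    */
/*                     eliminate/2, and pads with zeros via append/3. */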

key(a,-1).
key(b,1).
key(c,2).
key(d,3).
key(e,-1).
key(f,1).
key(g,2).
key(h,-2).
key(i,-1).
key(j,2).
key(k,2).
key(l,4).
key(m,5).
key(n,5).
key(o,-1).
key(p,1).
key(q,2).
key(r,6).
key(s,2).
key(t,3).
key(u,-1).
key(v,1).
key(w,-3).
key(x,2).
key(y,-2).
key(z,2).

soundex(Name,Code)<-
   string(Name,Code1) & write(Code1) &
   soundex1(Code1,A.B.C.D.Rem) &
   string(Code,A.B.C.D.nil).

soundex1(Head.Code1,Head.Code)<-
   keycode(Head.Code1,Code2) & write(Code2) &
   reduce(Code2,T.Code3) & write(T.Code3) &
   eliminate(Code3,Code4) & write(Code4) &
   append(Code4,0.0.0.nil,Code).

reduce(X.(-2).X.Rem,List)<-
   reduce(X.Rem,List).
reduce(X.X.Rem,List)<-
   reduce(X.Rem,List).
reduce(X.Y.Z.Rem,X.List)<-
   ↑X==Z &
   reduce(Y.Z.Rem,List).
reduce(X.Y.Rem,X.List)<-
   ↑X==Y &
   reduce(Y.Rem,List).
reduce(X.nil,X.nil).
reduce(nil,nil).

eliminate(X.Rem,List)<-
   lt(X,0) &
   eliminate(Rem,List).
eliminate(X.Rem,X.List)<-
   gt(X,0) &
   eliminate(Rem,List).
eliminate(nil,nil).

keycode(H.T,N.CodeList)<-
   key(H,N) &
   keycode(T,CodeList).
keycode(nil,nil).


append(Head.Tail,List,Head.NewList)<-
   append(Tail,List,NewList).
append(nil,List,List).

------------------------------

Date: 13 Jul 88 17:02:31 GMT
From: ihnp4!twitch!anuck!jrl@bloom-beacon.mit.edu  (j.r.lupien)
Subject: Re: Soundex algorithm

From article <1552@randvax.UUCP>, by leverich@randvax.UUCP (Brian Leverich):
> Incidentally, does anyone know if there's been any genealogy applications
> built using Prolog or the like?  Looks like a logic programming approach
> to maintaining relations between individuals might be a big win.  -B

Not really a C question anymore, but there is such a beast. The
United Kingdom immigration and naturalization department has a
Prolog implementation for their citizenship status analysis system.

Apparently the rules involved are very complicated and perversely
interconnected, having to do with date and place of birth, date and
place of birth of both parents and their citizenship at the time of
your birth as well as now, and probably the phase of the moon etc.

I can't give you a reference for this offhand, since I heard about
it in a keynote lecture by the MIT AI lab. They should be able to
point you in the right direction.

John Lupien
twitch!mvuxe!anuxh!jrl

------------------------------

End of AIList Digest
********************

∂18-Jul-88  1908	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #16  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Jul 88  19:08:27 PDT
Date: Mon 18 Jul 1988 00:30-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #16
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 16

Today's Topics:

 Queries:

  Commercial Machine Learning Programs
  LISP Implementation: Clarification of the Question
  Proceedings of Conf. on Intelligent Tutoring Systems
  Computer Algebra System Tutor
  summary: Computer Vision and Logic Programming
  muLISP documentation
  AI & Software Engineering (Soloway)

----------------------------------------------------------------------

Date: Fri, 15 Jul 88 11:18:56 EDT
From: Larry Hunter <hunter-larry@YALE.ARPA>
Subject: Commercial Machine Learning Programs


I'm trying to find out about commercial machine learning programs.
I'm particularly interested in hearing about vendors of machine
learning products, but I'm also interested in hearing from any
commercial users of any form of machine learning.  Please
send mail directly to Hunter@Yale.edu, as I do not read this list.

                                         Thanks!

                                         Larry Hunter
                                         Hunter@Yale.edu

------------------------------

Date: Tue, 12 Jul 88 12:46:15 CDT
From: drl@vuse.vanderbilt.edu (David R. Linn)
Subject: LISP Implementation: Clarification of the Question

"What???  Oh, I see; we use the term 'interpreter' to mean different
things."

From the initial response to my question about noninterpretive LISP
implementations, it appears obvious that I misphrased the question.
Perhaps a bit of context will help.

My project involved porting the Franz LISP system (38.87) to two non-BSD
environments.  The core of the FL system is a C and assembly language
program called _rawlisp_. This program is, by my definition, a LISP
interpreter but it supplies more than just the evaluation service.  It
supplies the memory management, i/o primitives, stack/memory
management and basic LISP environment maintenance. All FL programs,
whether interpreted or compiled, run within the environment provided
by the interpreter (usually by _lisp_, a version of _rawlisp_ whose
environment has been heavily augmented by loading (compiled) LISP
code). Perhaps a better name for what I call a "LISP interpreter"
would be a "LISP environmental support program" (LISP ESP).

Now, hopefully, I can restate my question in clearer terms: Does anyone
know of a LISP implementation that does not support the language by
loading (possibly compiled) LISP code into a LISP ESP in order to
execute it?  I disqualify setups like the FL autorun facility, since
these implicitly involve loading code into a LISP ESP, as well as preloaded
systems like the FL LISP compiler _liszt_.  Although it would be
difficult, I can see that it might be possible to support LISP like C:
compile the source to object code and link the object with a startup
routine and (probably hefty) library.

Again, please reply directly to me and I will summarize the responses.

         David

David Linn - drl@vuse.vanderbilt.edu (should now agree with header)

------------------------------

Date: Wed, 13 Jul 88 17:22:12 -0400
From: howell%community-chest.mitre.org@gateway.mitre.org
Subject: Proceedings of Conf. on Intelligent Tutoring Systems

Can someone please tell me how to obtain copies of the proceedings of the
International Conference on Intelligent Tutoring Systems, held 1 .. 3 June
in Montreal?

Thanks

     Chuck Howell
     The MITRE Corporation, Mail Stop Z645
     7525 Colshire Drive, McLean, VA 22102
     NET:  howell@mitre.arpa or
           howell%community-chest.mitre.org@gateway.mitre.org

------------------------------

Date: 13 Jul 88 13:22:55 GMT
From: mcvax!lifia.imag.fr!gb@uunet.UU.NET (Guilherme Bittencourt)
Reply-to: mcvax!lifia.imag.fr!gb@uunet.UU.NET (Guilherme Bittencourt)
Subject: Computer Algebra System Tutor


        I would like any pointers (references, system names, addresses or
even source code :-) to any knowledge-based system to aid in the
utilization of a computer algebra system (Reduce, Macsyma, Maple, etc).

        I am planning to build such a system as an application of my thesis
work (a system to aid in the design of knowledge-based systems) and am very
interested in any previous work.

        Please answer by e-mail; I will summarize if there is enough
interest.  Thanks in advance.

--
 Guilherme BITTENCOURT             +-----+      gb@lifia.imag.fr
 L.I.F.I.A.                        | <0> |
 46, Avenue Felix Viallet          +-----+
 38031 GRENOBLE Cedex - FRANCE                     (33) 76574668

------------------------------

Date: 14 Jul 88 06:53:49 GMT
From: ucsbcsl!cornu.ucsb.edu!nosmo@bloom-beacon.mit.edu
Subject: summary: Computer Vision and Logic Programming

Well,

Some of you may remember my earlier posting regarding the intersection of
computer vision and logic programming.  For those of you who replied, many
thanks.

What was found:

Alan J. Vayda (vayda@ee.ecn.purdue.edu) is using Prolog for high-level
object recognition on range data. Ref: Kak, Vayda, Cromwell, Kim and Chen,
"Knowledge-Based Robotics", Proc. of the 1987 Conf. on Robotics and
Automation.

Ray Reiter and Alan Mackworth at Univ. of British Columbia have a paper,
"The Logic of Depiction" (UBC TR 87-24), which proposes a theory of vision
based on first-order logic. Net address: mack%grads.cs.ubc.ca@RELAY.CS.NET.
(note: Dr. Mackworth, I could never get mail back to you. My address is at
the end of this message.)

In vol. 1 of "Concurrent Prolog: Collected Papers", edited by E. Shapiro,
MIT Press, 1987, S. Edelman and E. Shapiro have "Image Processing in
Concurrent Prolog"; this deals with algorithms for low-level vision.
(edelman or udi, @wisdom.BITNET)

Denis Gagne's thesis, from U. of Alberta, in Edmonton, home of the Oilers
and the West Ed. Mall, describes a reasoning approach to scene analysis.
The person who referred this to me didn't know the title, but did give me
a lead on how to find out. Randy Goebel (goebel@alberta.uucp) and David
Poole (dlpoole@waterloo.csnet) were Denis's advisors.

Mulgaonkar, Shapiro and Haralick describe a rule-based approach for
determining shape-from-perspective in "Shape from Perspective: a Rule Based
Approach", CVG&IP 36, pp. 289-320, 1986.

That's about all that I've heard.  I'm still interested in hearing more,
though.

Vince Kraemer (nosmo@cornu.ucsb.edu)
4946 La Ramada Dr.
Santa Barbara, CA 93111

"Always look on the bright side of life!" - man on the cross, in
                                                The Life of Brian
(note : our postnews has been ill, hence the delay.)

------------------------------

Date: Thu, 14 Jul 88 12:53 P
From: F61104%BARILAN.BITNET@MITVMA.MIT.EDU
Subject: muLISP documentation

Hello,
  We are using an ES shell called AQUAINT which allows user defined
functions in muLISP. However, the documentation for muLISP is
not included with the system. Could anybody please help with
some pointers on how to get hold of it? (Reply to me or the net).
Thanks,
  Gilly
e-mail; F61104@BARILAN.BITNET
s-mail; Gilly Chukat
        Dept. of Life Sciences
        Bar-Ilan University
        52100 Ramat-Gan
        Israel

------------------------------

Date: Fri, 15 Jul 88 11:27:31 -0400
From: howell%community-chest.mitre.org@gateway.mitre.org
Subject: AI & Software Engineering (Soloway)

I've just read an overview of the Yale AI Project in the Winter 87 issue of
AI Magazine (shows you how deep my "to read" stack has become!).  I was
particularly interested in the work being done by Elliot Soloway on
applying story understanding techniques to program "comprehension".  Any
pointers to additional information on this work would be greatly
appreciated.  Pointers to other work on applying AI tools & techniques to
Software Engineering (esp. to qualitative assessment of large volumes of
PDL and code) would be appreciated.  If the volume of responses is big
enough, I'll mail a summary to the AIList.

Thanks,

     Chuck Howell
     The MITRE Corporation, Mail Stop Z645
     7525 Colshire Drive, McLean, VA 22102
     NET:  howell@mitre.arpa or
           howell%community-chest.mitre.org@gateway.mitre.org

------------------------------

End of AIList Digest
********************

∂19-Jul-88  0128	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #9   
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Jul 88  01:27:46 PDT
Date: Mon 18 Jul 1988 00:03-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #9
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988        Volume 8 : Issue 9

Today's Topics:

 Philosophy

  Reproducing the brain in low-power analog CMOS (LONG!)
  ASL notation
  Smart money
  AI (GOFAI) and cognitive psychology
  Metaepistemology & Phil. of Science
  Generality in Artificial Intelligence
  Critique of Systems Theory

----------------------------------------------------------------------

Date: 2 Jul 88 15:03:55 GMT
From: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Reply-to: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Subject: Re: Reproducing the brain in low-power analog CMOS (LONG!)


In a previous article, jbn@GLACIER.STANFORD.EDU (John B. Nagle) writes:
>Date: Wed, 29 Jun 88 11:23 EDT
>From: John B. Nagle <jbn@glacier.stanford.edu>
>Subject: Reproducing the brain in low-power analog CMOS
>To: AILIST@ai.ai.mit.edu
>
>
>      Forget Turing machines.  The smart money is on reproducing the brain
>with low-power analog CMOS VLSI.  Carver Mead is down at Caltech, reverse
>engineering the visual system of monkeys and building equivalent electronic
>circuits.  Progress seems to be rapid.  Very possibly, traditional AI will
>be bypassed by the VLSI people.
>
>                                       John Nagle

     There is one catch: you do need to know what the Brain (be it
human, monkey, or insect) is doing at a neuronal level.  After two
courses in Neurobiology, I am convinced that humans are quite far away
from understanding even a fraction of what is going on.

     Although I do not know much about the research happening at
Caltech, I would suspect that they are reverse engineering the visual
system from the retina and working their way back to the visual
cortex.  From what I was taught, the first several stages in the
visual system consist of "preprocessing circuits" (in a Professor's
words), and serve to transform the signals from the rods and cones
into higher-order constructs (i.e. lines, motion, etc.).  If this is
indeed what they are researching at Caltech, then it is a good choice,
although a rather low level one.  I would bet that these stages lend
themselves well to being implemented in hardware, although I don't
know how many "deep" issues about the Brain and Mind that
implementation of these circuits will solve.

    One of my Professors pointed out that studying higher order visual
functions is hard because of all of the preprocessing stages that
occur before the visual cortex (where higher order functions are
thought to occur).  Since we do not yet know exactly what kinds of
"data abstractions" are reaching the visual cortex, it is hard to come
up with any hardcore theories of the visual cortex because noone
really knows what the input to that area looks like.  Sure, we know
what enters the retina, and a fair bit about the workings of the rods
and cones, but how the signals are transformed as they pass through
the bipolar, horizontal, amacrine, and ganglion cells is not known to
any great certainty.  [NOTE: a fair bit seems to be known about how
individual classes of neurons transform the signals, but the overall
*picture* is what is still missing].  Maybe this is what the folks at
Caltech are trying to do; I am not sure.  It would help a great deal
in later studies of the visual system to know what kinds of inputs are
reaching the visual cortex.

     However, there are other areas of research that do address higher
order functions in cortex.  The particular area that I am thinking of
is the Olfactory System, specifically the Pyriform Cortex.  This
cortex is only one synapse away from the olfactory sensory apparatus,
so the input into the Pyriform Cortex is *not* preprocessed to any
great degree; much less so than the > 4 levels of preprocessing that
occur in the visual system.  Furthermore, the Pyriform Cortex seems to
be similar in structure to several Neural Net models, including some
proposed by Hopfield (who does much of his work in hardware).
Therefore, it is much easier to figure out what sort of inputs are
reaching the Pyriform Cortex, though even this has been elusive to a
large degree.  The current research indicates that certain Neural Net
models effectively duplicate characteristics of the cortex to a good
degree.  I have read several interesting articles on the storage and
retrieval of previous odors in the Pyriform Cortex; these papers use
Content-Addressable Memory (also called Associative Memory) as the
predominant model.  There is even some direct modelling being done by
a couple of Grad. students on a Sun workstation down in California.
[If anyone wants these references, send me email; if there is enough
interest, I will post them].
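
[Aside: the content-addressable (associative) memory idea mentioned above
can be sketched, in very stripped-down form, as a Hopfield-style net.  The
Common Lisp below is purely illustrative -- the names are invented, the
patterns are bipolar (+1/-1) vectors, and nothing is taken from the papers
in question:

(defun cam-train (patterns n)
  ;; Build an N x N Hebbian weight matrix from a list of bipolar
  ;; patterns, each a vector of N elements drawn from {+1, -1}.
  (let ((w (make-array (list n n) :initial-element 0)))
    (dolist (p patterns w)
      (dotimes (i n)
        (dotimes (j n)
          (unless (= i j)
            (incf (aref w i j) (* (aref p i) (aref p j)))))))))

(defun cam-recall (w probe n &optional (sweeps 5))
  ;; Starting from a noisy or partial probe vector, repeatedly set each
  ;; unit to the sign of its weighted input; the state settles toward the
  ;; stored pattern closest to the probe (content-addressed recall).
  (let ((s (copy-seq probe)))
    (dotimes (k sweeps s)
      (dotimes (i n)
        (let ((h 0))
          (dotimes (j n)
            (incf h (* (aref w i j) (aref s j))))
          (setf (aref s i) (if (minusp h) -1 1)))))))

Given a weight matrix from CAM-TRAIN, (cam-recall w noisy-probe n) settles
the probe onto the nearest stored pattern, which is roughly the flavor of
"recalling a previous odor" that those papers model.]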

     My point is: do not write off AI (especially Neural Net) theories
so fast; there is much interesting work being done right now that has
as much potential to be important in the long run as the work being
done at Caltech.  Just because they are implementing Brain *circuits*
and *architecture* in hardware does not mean they will get any closer
than AI researchers.  I still believe that AI researchers are doing
much higher-order research than anything that has been
implemented in hardware.

     Furthermore, the future of AI does not lie only in Artificial
Intelligence and Computer Science.  You bring up a good point: look at
other disciplines as well; your example was hardware.  But look even
further: Neurobiology is a fairly important area of research in this
area.  So are the Computational sciences: Hopfield does quite a bit of
research here.  Furthermore, Hopfield is (or was, at least) a
Physicist; in fact I saw him give a lecture here at the UW about his
networks where he related his work to AI, Neurobiology, Physics,
Computational Sciences, and more...and the talk took place (and was
sponsored by) the Physics department.  Research into AI and Brain
related fields is being performed in many disciplines, and each will
influence all others in the end.

     Whew!  Sorry to get up on my soapbox, but I had to let it out.
Remember, these are just my humble opinions, so don't take them too
seriously if you do not like them.  I would be happy to discuss this
further over email, and I can give you references to some interesting
articles to read on ties between AI and Neurobiology (almost all
dealing with the Olfactory System, as that looks the most promising
from my point of view).

                                        -Chris--
Christopher Lishka                | lishka@uwslh.uucp
Wisconsin State Lab of Hygiene    | lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617 | ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."
                          - Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

------------------------------

Date: 3 Jul 88 18:47:04 GMT
From: doug@feedme.UUCP (Doug Salot)
Reply-to: doug@feedme.UUCP (Doug Salot)
Subject: ASL notation

hayes.pa@XEROX.COM writes:
>On dance notation:
>A quick suggestion for a similar but perhaps even thornier problem:  a notation
>for the movements involved in deaf sign language.

I'm not sure how this would benefit the deaf community, as most signers
are quite capable of reading written language; however, work has been
done to "quantize" ASL.  Look to the work of Poizner, Bellugi, et al
at the Salk Institute for suggested mappings between several movement
attributes (planar locus, shape, iteration freq., etc) and language
attributes (case, number, etc.).
---
Doug Salot || doug@feedme.UUCP || ...{trwrb,hplabs}!felix!dhw68k!feedme!doug
                    "Thinking: The Thinking Man's Sport"

------------------------------

Date: Thu, 7 Jul 88 09:09:00 pdt
From: Ray Allis <ray@BOEING.COM>
Subject: Smart money

John Nagle says:

>       Forget Turing machines.  The smart money is on reproducing the brain
>with low-power analog CMOS VLSI.  Carver Mead is down at Caltech, reverse
>engineering the visual system of monkeys and building equivalent electronic
>circuits.  Progress seems to be rapid.  Very possibly, traditional AI will
>be bypassed by the VLSI people.

Hear, hear!  Smart money indeed!  I don't know whether Carver prefers to work
low-profile, but this has to be the most undervalued AI work of the century.
This is the most likely area for the next breakthrough toward creating
intelligent artifacts.  And the key concept is that these devices are
ANALOG.


Ray Allis
Boeing Computer Services-Aerospace Support
CSNET: ray@boeing.com

------------------------------

Date: Tue, 12 Jul 88 12:48:25 EDT
From: Ralf.Brown@B.GP.CS.CMU.EDU
Subject: Re: Generality in Artificial Intelligence


In a previous article, YLIKOSKI@FINFUN.BITNET writes:
}Thus, it would seem that in order for us to see true commonsense
}knowledge exhibited by a program we need:
}
}        * a vast amount of knowledge involving the world of a person
}          in virtual memory.  The knowledge involves gardening,
}          Buddhism, the emotions of an ordinary person and so forth -
}          its amount might equal a good encyclopaedia.

Actually, it would probably put the combined text of several good encyclopedias
to shame.  Even encyclopedias and dictionaries leave out a lot of "common-sense"
information.

The CYC project at MCC is a 10-year undertaking to build a large knowledge
base of real world facts, heuristics, and methods for reasoning over the
knowledge base.[1]  The current phase of the project is to carefully represent
400 articles from a one-volume encyclopedia.  They expect their system to
contain about 10,000 frames once they've encoded the 400 articles, about half
of them common-sense concepts.


[1] CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge
Acquisition Bottlenecks, AI Magazine v6 #4.
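
[Aside: "frame" here just means a structured bundle of slots and values
attached to a concept.  A toy Common Lisp sketch, with invented names and
content -- not CYC's actual representation language:

(defparameter *candle-frame*
  ;; One hand-built frame: a concept name followed by slot/value pairs.
  '(candle (isa         physical-object)
           (made-of     wax)
           (flammable   t)
           (typical-use illumination)))

(defun frame-slot (frame slot)
  ;; Fetch the value stored under SLOT in a frame of the above shape.
  (second (assoc slot (cdr frame))))

;; (frame-slot *candle-frame* 'made-of)  =>  WAX

Scaling this kind of structure to some 10,000 frames, plus inference rules
that reason over the slots, gives a sense of what the project is attempting.]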

--
UUCP: {ucbvax,harvard}!cs.cmu.edu!ralf -=-=-=- Voice: (412) 268-3053 (school)
ARPA: ralf@cs.cmu.edu  BIT: ralf%cs.cmu.edu@CMUCCVMA  FIDO: Ralf Brown 1:129/31
Disclaimer? I     |Ducharm's Axiom:  If you view your problem closely enough
claimed something?|   you will recognize yourself as part of the problem.

------------------------------

Date: 13 Jul 88 04:48:34 GMT
From: pitt!cisunx!vangelde@cadre.dsl.pittsburgh.edu  (Timothy J Van)
Subject: AI (GOFAI) and cognitive psychology

What with the connectionist bandwagon, everyone seems to be getting a lot
clearer about just what AI is and what sort of a picture of cognition
it embodies.  The short story, of course, is that AI claims that thought
in general and intelligence in particular is the rule governed manipulation
of symbols.  So AI is committed to symbolic representations with a
combinatorial syntax and formal rules defined over them.  The implementation
of those rules is computation.

Supposedly, the standard or "classical" view in cognitive psychology is
committed to exactly the same picture in the case of human cognition, and
so goes around devising models and experiments based on these commitments.


My question is - how much of cognitive psychology literally fits this kind
of characterization?  Some classics, for example the early Shepard and
Metzler experiments on image rotation don't seem to fit the description
very closely at all.  Others, such as the SOAR system, often seem to
remain pretty vague about exactly how much of their symbolic machinery
they are really attributing to the human cognizer.

So, to make my question a little more concrete - I'd be interested to know
what people's favorite examples of systems that REALLY DO FIT THE
DESCRIPTION are?  (Or any other interesting comments, of course.)

Tim van Gelder

------------------------------

Date: 14 Jul 88 09:55:40 EDT
From: David Chess <CHESS@ibm.com>
Subject: Metaepistemology & Phil. of Science

>In this sense the reality is unknowable.  We only have
>descriptions of the actual world.

This "only" seems to be the key to the force of the argument.  If
it were "we have descriptions of the actual world", it would sound
considerably tamer.   The "only" suggests that there is something
else (besides "descriptions") that we *might* have, but that we
do not.   What might this something else be?   What, besides
"descriptions", could we have of the actual world?   I certainly
wouldn't want the actual world *itself* in my brain (wouldn't fit).

Can anyone complete the sentence "The actual world is unknowable to
us, because we have only descriptions/representations of it, and not..."?

(I would tend to claim that "knowing" is just (roughly) "having
 the right kind of descriptions/representations of", and that
 there's no genuine "unknowability" here; but that's another
 debate...)

Dave Chess
Watson Research

* Disclaimer: Who, me?

------------------------------

Date: Thu, 14 Jul 88 15:06:53 BST
From: mcvax!doc.ic.ac.uk!sme@uunet.UU.NET
Reply-to: sme@doc.ic.ac.uk (Steve M Easterbrook)
Subject: Re: Generality in Artificial Intelligence

In a previous article, YLIKOSKI@FINFUN.BITNET writes:
>> "In my opinion, getting a language for expressing general
>> commonsense knowledge for inclusion in a general database is the key
>> problem of generality in AI."
>...
>Here follows an example where commonsense knowledge plays its part.  A
>human parses the sentence
>
>"Christine put the candle onto the wooden table, lit a match and lit
>it."
>
>The difficulty which humans overcome with commonsense knowledge but
>which is hard to a program is to determine whether the last word, the
>pronoun "it" refers to the candle or to the table.  After all, you can
>burn a wooden table.
>
>Probably a human would reason, within less than a second, like this.
>
>"Assume Christine is sane.  The event might have taken place at a
>party or during her rendezvous with her boyfriend.  People who do
>things such as taking part in parties most often are sane.
>
>People who are sane are more likely to burn candles than tables.
>
>Therefore, Christine lit the candle, not the table."

Aren't you overcomplicating it a wee bit? My brain would simply tell me
that in my experience, candles are burnt much more often than tables.
QED.

This, to me, is very revealing. The concept of commonsense knowledge that
McCarthy talks of is simply a huge base of experience built up over a
lifetime. If a computer program were switched on for long enough, with a
set of sensors similar to those provided by the human body, and a basic
ability to go out and do things, to observe and experiment, and to
interact with people, it would be able to gather a similar set of
experiences to those possessed by humans. The question is then whether
the program can store and index those experiences, in their totality, in
some huge episodic memory, and whether it has the appropriate mechanisms
to fire useful episodic recalls at useful moments, and apply those
recalls to the present situation, whether by analogy or otherwise.

From this, it seems to me that the most important task that AI can
address itself to at the present is the study of episodic memory: how it
can be organised, how it can be accessed, and how analogies with past
situations can be developed. This should lead to a theory of experience,
ready for when robotics and memory capacities are advanced enough for
the kind of experiment I described above. With all due respect to McCarthy
et al, attempts to hand code the wealth of experience of the real world
that adult human beings have accumulated are going to prove futile.
Human intelligence doesn't gather this commonsense by being explicitly
programmed with rules (formal OR informal), and neither will artificial
intelligences.

>It seems to me that the inferences are not so demanding but the
>inferencer utilizes a large amount of background knowledge and a good
>associative access mechanism.

Yes. Work on the associative access, and let the background knowledge
evolve itself.
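
[Aside: a deliberately naive Common Lisp sketch of what "associative access"
to episodes could look like -- recall whichever stored episode shares the
most features with the current situation.  The episodes, features and
function names are invented for illustration; this is nobody's actual
proposal:

(defparameter *episodes*
  ;; Each episode: a name followed by the features noticed at the time.
  '((birthday-party  candle cake match flame singing)
    (furniture-store table wood varnish price-tag)))

(defun recall-episode (situation)
  ;; Return the stored episode sharing the most features with the
  ;; current situation, given as a list of feature symbols.
  (let ((best nil)
        (best-score -1))
    (dolist (e *episodes* best)
      (let ((score (length (intersection situation (cdr e)))))
        (when (> score best-score)
          (setf best e
                best-score score))))))

;; (recall-episode '(candle match lit))
;;   =>  (BIRTHDAY-PARTY CANDLE CAKE MATCH FLAME SINGING)

A real episodic memory would need indexing, partial matching and analogy on
top of this, which is of course the hard part being argued for here.]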

>...
>What kind of formalism should we use for expressing the commonsense
>knowledge?

Try asking: what kind of formalism should we use for expressing episodic
memories? Later on you suggest natural language. Is this suitable?
Do people remember things by describing them in words to themselves?
Or do they just create private "symbols", or brain activation patterns,
which only need to be translated into words when being communicated to
others? Note: I am not saying people don't think in natural language,
only that they don't store memories as natural language accounts.

I don't think this kind of experience can be expressed in any formalism,
nor do I think it can be captured by natural language. It needs to
evolve as a private set of informal symbols, which the brain (human
or computer) does not need to consciously realise are there. All it needs
to do is associate the right thought with the symbols when they are
retrieved, i.e. to interpret the memories. Again I think this kind of
ability evolves with experience: at first, symbols (brain activity
patterns) would be triggered which the brain would be unable to interpret.

If this is beginning to sound like advocacy of neural
nets/connectionist learning, then so be it. I feel that a conventional
AI system coupled to a connectionist net for its episodic memory might
be a very sensible architecture. There are probably other ways of
achieving the same behaviour, I don't know.

One final note. Read the first chapter of the Dreyfus and Dreyfus book
"Mind over Machine", for a thought-provoking account of how experts
perform, using "intuition". Logic is only used in hindsight to support
an intuitive choice. Hence "heuristics is compiled hindsight". Being
arch-critics of AI, the Dreyfuses conclude that the intuition that experts
develop is intrinsically human and can never be reproduced by machine.
Being an AI enthusiast, I conclude that intuition is really the
unconscious application of experience, and all that's needed to
reproduce it is the necessary mechanisms for storing and retrieving
episodic memories by association.

>In my opinion, it is possible that an attempt to express commonsense
>knowledge with a formalism is analogous to an attempt to fit a whale
>into a tin sardine can.  ...

I agree.
How did you generate this analogy? Is it because you had a vast
amount of common sense knowledge about whales and sardines, and tins,
(whether expressed in natural language (vast!), or some formal system)
through which you had to wade to realise that sardines will fit
in small tins but whales will not, and eventually synthesis this particular
concept, or did you just try to recall (holistically) an image that
matched the concept of forcing a huge thing into a rigid container?

Steve Easterbrook.

------------------------------

Date: Fri, 15 Jul 88 13:55:04 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Philosophy: Critique of Systems Theory

--
Using Gilbert Cockton's references to works critical of systems theory, over
the last month I've spent a few afternoons in the CalTech and UCLA libraries
tracing down those and other criticisms.  The works I was able to get and
study are at the end of this message.  I also examined a number of introduc-
tory texts on psychology and sociology from the last 15 years or so.

General Systems Theory was founded by biologist Ludwig von Bertalanffy in
the late '40s.  It drew heavily on biology, borrowed from many areas, and
promised a grand unified theory of all the sciences.  The ideas gained
momentum until, by the early '70s, they had reached fad proportions in the
"humanics" or "soft sciences."  Bertalanffy was made an honorary psychoanalyst,
for instance, and a volume appeared containing articles by various prominent
analysts discussing the effects of GST on their field.  After that peak,
interest died down.  Indexes in social-sciences textbooks carried fewer
references to smaller discussions.  The GST journal became thinner and went
from annual to biannual; the latest issue was in 1985.  Interest still
exists, however, and in various bibliographies of English publications for
1987 I found a total of seven new books.

What seems to have happened is that the more optimistic promises of GST
failed and lost it the support of most of its followers.  Its more success-
ful ideas were pre-empted by several fields.  These include control theory
in engineering, taxonomy of organizations in management, and the origins of
psychosis in social psychology.

For me the main benefit of GST has been a personally satisfactory resolution
of the reduction paradox, which follows.

Because of the limits of human intelligence, we simplify and view the
universe as various levels of abstraction, each level fairly independent of
the levels "above" and "below" it.  This gives rises to arguments, however,
about whether, for instance, physics or psychology is "truer."  The simple
answer is that both are only approximations of an ultimately unknowable
reality and, since both views are too useful to give up, their incompatibility
is inevitable and we just have to live with it.  This is what many physi-
cists have done with the conflict between quantum and wave views of energy.

The GST view is that each higher level of organization is built on a
previous one via systems, which are new kinds of units made from binding
together more elementary units.  New kinds of systems exhibit synergy:
attributes and abilities not observed in any of their constituent elements.
But where do these attributes/abilities come from?  I found no answer in the
writings on system theory that I read, so I had to make my own answer.

I finally settled on what I call interaction effects.  Two atoms bound together
chemically affect each other's electron shrouds, forming a new shroud around
them both.  This gives the resulting molecule a color, solubility, conduc-
tivity, and so on that neither solitary atom has.

Similarly, two people can cross a wall neither can alone by one standing on
the other's shoulders to reach the top, then pulling the bottom partner up.
A "living" machine can repair and reproduce itself.  And consciousness can
arise from elements that don't have it -- memories, processors, sensors, and
effectors -- though no amount of study of individual elements will find life
or consciousness.  They are the result of interaction effects.

     _System Analysis in Public Policy: a Critique_, I. Hoos, 1972
     _Central Problems in Social Theory_, Anthony Giddens, 1979
     _System Theory, System Practice_, P. B. Checkland, 1981
     _On Systems Analysis_, David Berlinski, 1976
     _The Rise of Systems Theory_, R. Lilienfeld, 1978

     The last two are reviewed in _Futures_, 10/2, p. 159 and 11/2, p. 165.
     They also contain criticisms of system theory which, they complain,
       Berlinski and Lilienfeld overlooked.

------------------------------

End of AIList Digest
********************

∂19-Jul-88  1158	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #13  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Jul 88  11:57:44 PDT
Date: Mon 18 Jul 1988 00:23-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #13
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 13

Today's Topics:

  does AI kill? - Continued

----------------------------------------------------------------------

Date: 15 Jul 88 07:03:30 GMT
From: heathcliff!alvidrez@columbia.edu  (Jeff Alvidrez)
Subject: Re: AI and Killing


In the case of the Iranian airliner snafu, to say that AI kills just opens
up the whole "Guns don't kill people..." can of worms.

We all know the obvious:  even if someone does point the finger at an expert
system, EVEN if it has direct control of the means of destruction (which I
don't believe is the case with Aegis), the responsibility will always be
passed to some human agent, whether it be those who put the gun in the
computer's power or those who designed the system.  No one wants to take
this kind of responsibility; who would?  So it is passed on until there
is no conceivable way it could rest with anyone else (something like
the old Conrad cartoon with Carter pointing his finger at Ford, Ford
pointing to Nixon, Nixon pointing to (above, in the clouds), Johnson,
and so on, until the buck reaches Washington, and he points back at
Carter).

With a gun, it is quite clear where the responsibility lies, with the
finger on the trigger.  But with machines doing the work for us, it is no
longer so evident.  For the Iranian airliner, who goes up against the wall?
Rogers?  "I was just going on what the computer told me..."  The people
behind Aegis?  "Rogers used his own judgment; Aegis was intended only as
an advisory system".  The machine adds a level of indirection which makes
way for a lot of finger-pointing but no real accountability.  Though
I don't think we'll see much of that this time around, with public opinion
of the Iranians what it is, just wait until it's someone else's plane
("Mr. President, we've got this Air-Traffic Controller expert system we'd
like you to see... an excellent opportunity to put this strike to rest...").

That is why the issue of the DOD use of AI is important: like all the other
tools of war we have developed through centuries of practice, AI allows
us to be even more detached from the consequences of our actions.  Soon,
we will not even need a finger to push the button, and our machines will
do the dirty work.  No blood on my fingers, so why should I care?

Now that we have established our raw destructive capabilities quite
clearly (and to the point of absurdity, as we talk of kill-ratios and
measure predicted casualties of a single weapon in the millions), AI is
the next logical step.  And like gun control, trying to avoid this is just
sticking your head in the sand.  If the technology is there, SOMEONE will
manage to use it, no matter how much effort is put into suppressing it.

One thing I have noticed is the vast amount of AI research funded by
DOD grants, more so (at least from what I have seen) than in other CS
fields.  Is there any doubt as to what use the fruits of this research
will go?  Certainly the possibilities for peaceful uses of any knowledge
gained are staggering, but I think it is quite clear what the DOD wants
it for.

They are, after all, in only one business.


-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Jeff Alvidrez
alvidrez@heathcliff.cs.columbia.edu

The opinions expressed in this article are fictional.  Any resemblence
they may bear to real opinions (especially those of Columbia University)
is purely coincidental.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

------------------------------

Date: 15 Jul 88 13:02:28 GMT
From: mailrus!uflorida!beach.cis.ufl.edu!tws@ohio-state.arpa  (Thomas
      Sarver)
Subject: Did AI kill? (was Re: does AI kill?)

In article <2091@ssc-vax.UUCP> ted@ssc-vax.UUCP (Ted Jardine) writes:
>
>First, to claim that the Aegis Battle Management system has an AI component
>is patently ridiculous.  I'm not suggesting that this is Ken's claim, but it
>does appear to be the claim of the Washington Post article.
>
>It's the pitfall that permits us to invest some twenty years of time and some
>multiple thousands of dollars (hundreds of thousands?) into the training and
>education of a person with a Doctorate in a scientific or engineering
>discipline
>but not to permit a similar investment into the creation of the knowledge base
>for an AI system.
>
>TJ {With Amazing Grace} The Piper
>aka Ted Jardine  CFI-ASME/I
>Usenet:                ...uw-beaver!ssc-vax!ted
>Internet:      ted@boeing.com
--

The point that everyone is missing is that there is a federal regulation that
makes certain that no computer has complete decision control over any
military component.  As the article says, the computer RECOMMENDED that the
blip was an enemy target.  The operator was at fault for not verifying the
computer's recommendation.

I was a bit surprised Ted Jardine from Boeing didn't bring this up in his
comment.

As for the other stuff about investing in an AI program:  I think there need
to be sound, informed guidelines for determining whether a program can enter
a particular duty.  1) People aren't given immediate access to decision-making
procedures; neither should a computer be.  2) However, there are certain
assumptions one can make about a person that one can't make about a computer.
3) The most important module of an AI program is the one that says "I DON'T
KNOW, you take over."  4) The second most important is the one that says,
"I think it's blah blah WITH CERTAINTY X" (a sketch of 3 and 4 follows below).
5) Just as there are military procedures for relieving humans of their
decision-making status, there should be some way to do so for the computer.
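
As a rough sketch of what guidelines 3 and 4 could amount to in code (my own
illustration, with made-up names and thresholds, not a description of any
fielded system):

    # Hypothetical sketch only: an advisory-only classifier interface that
    # reports a label WITH CERTAINTY X, says "I DON'T KNOW" below a
    # threshold, and never acts on its own.

    def advise(candidates, dont_know_threshold=0.6):
        """candidates: list of (label, certainty) pairs, certainty in [0, 1]."""
        if not candidates:
            return "I DON'T KNOW, you take over."
        label, certainty = max(candidates, key=lambda pair: pair[1])
        if certainty < dont_know_threshold:
            return "I DON'T KNOW, you take over."
        return "I think it's %s WITH CERTAINTY %.2f" % (label, certainty)

    # The human operator always makes the final call:
    print(advise([("commercial airliner", 0.55), ("hostile aircraft", 0.45)]))
    # -> I DON'T KNOW, you take over.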

Summary: No, AI did not kill.  The operator didn't look any farther than the screen.


+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
But hey, it's the best country in the world!
Thomas W. Sarver

"The complexity of a system is proportional to the factorial of its atoms.  One
can only hope to minimize the complexity of the micro-system in which one
finds oneself."
        -TWS

Addendum: "... or migrate to a less complex micro-system."

------------------------------

Date: 15 Jul 88 13:29:00 GMT
From: att!occrsh!occrsh.ATT.COM!tas@bloom-beacon.mit.edu
Subject: Re: does AI kill?


>no AI does not kill, but AI-people do. The very people that can
 ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑

        Good point!

        Here is a question.  Why blame the system when the human in
        the loop makes the final decision?  I could understand if the Aegis
        system had interpreted the incoming plane as hostile AND fired the
        missiles, but it did not.

        If the captain relied solely on the information given to him by the
        Aegis system, then why have the human in the loop?  The idea, as
        I always thought, was for the human to be able to add in unforeseen
        factors not accounted for in the programming of the Aegis system.

        Let's face it, I am sure ultimately it will be easier to place the
        blame on a computer program (and thus on the supplier) than on a
        single individual.  Isn't that kind of the way things work, or am
        I being cynical?

Tom

------------------------------

Date: 15 Jul 88 15:58:48 GMT
From: fluke!kurt@beaver.cs.washington.edu  (Kurt Guntheroth)
Subject: Re: does AI kill?

I am surprised that nobody (in this newsgroup anyway) has pointed out yet
that AEGIS is a sort of limited domain prototype of the Star Wars software.
Same mission (identify friend or foe, target selection, weapon selection and
aiming, etc.)  Same problem (identifying and classifying threats based on
indirect evidence like radar signatures perceived electronically at a
distance).  And what is scary, the same tendency to identify everything as a
threat.  Only when Star Wars perceives some Airbus as a threat, it will
initiate Armageddon.  Even if all it does is shoot down the plane, there
won't be any human beings in the loop (though a lot of good they did on the
Vincennes (sp?)).

I will believe in Star Wars only once they can demonstrate that AEGIS works
under realistic battlefield conditions.  The history of these systems is
really bad.  Remember the Sheffield, smoked in the Falklands War because
their Defense computer identified an incoming Exocet missile as friendly
because France is a NATO ally?  What was the name of our other AEGIS cruiser
that took a missile in the gulf, because they didn't have their guns turned on,
because their own copter pilots didn't like the way the guns tracked them in
and out?

------------------------------

Date: 15 Jul 88 19:21:13 GMT
From: l.cc.purdue.edu!cik@k.cc.purdue.edu  (Herman Rubin)
Subject: Re: does AI kill?

In article <4449@fluke.COM>, kurt@tc.fluke.COM (Kurt Guntheroth) writes:

                        ............

>       Only when Star Wars perceives some Airbus as a threat, it will
> initiate Armageddon.  Even if all it does is shoot down the plane, there
> won't be any human beings in the loop (though a lot of good they did on the
> Vincennes (sp?)).

                        .............

There are some major differences.  One is that the time scale will be a little
longer.  Another is that it is very unlikely that _one_ missile will be fired.
One can argue that that is a possible ploy, but a single missile from the USSR
could bring retaliation with or without SDI.  A third is that I believe that
one can almost guarantee that commercial airliners will identify themselves when
asked by the military.

The AI aspects of the AEGIS system are designed for an open ocean war with many
possible enemy aircraft.  That it did not avoid a situation not anticipated
in a narrow strait is not an adequate criticism of the designers.  However, it
is not AI, nor do I think that there are many AI systems, although many are
so called.  The mode of failure shows this precisely.  I define intelligence to be
the ability to handle a totally unforeseen situation.  I see no way that a
deterministic system can be intelligent.  SDI will not be intelligent; it will
do what it is told, not what one thinks it has been told.  This is the nature
of computers at the present time.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

------------------------------

Date: 15 Jul 88 19:35:00 GMT
From: smythe@iuvax.cs.indiana.edu
Subject: Re: does AI kill?


/* Written 10:58 am  Jul 15, 1988 by kurt@fluke in iuvax:comp.ai */

-  [star wars stuff deleted]

-I will believe in Star Wars only once they can demonstrate that AEGIS works
-under realistic battlefield conditions.  The history of these systems is
-really bad.  Remember the Sheffield, smoked in the Falklands War because
-their Defense computer identified an incoming Exocet missile as friendly
-because France is a NATO ally?  What was the name of our other AEGIS cruiser
-that took a missile in the gulf, because they didn't have their guns turned on
-because their own copter pilots didn't like the way the guns tracked them in
-and out.
-/* End of text from iuvax:comp.ai */

Let's try to get the facts straight.  In the Falklands conflict the
British lost two destroyers.  One was lost because they never saw the missile
coming until it was too late.  It is very hard to shoot down an
Exocet.  In the other case, the problem was that the air defense
system was using two separate ships, one to do fire control
calculations and the other to actually fire the missile.  The ship that
was lost had the fire control computer.  It would not send the command
to shoot down the missile because there was another British ship in the
way.  The ship in the way was actually the one that was to fire the
surface-to-air missile.  Screwy.  I don't know which event involved the
Sheffield, but there was no misidentification in either case.

The USS Stark, the ship hit by the Iraqi-fired Exocet, is not an AEGIS
cruiser at all but a missile frigate, a smaller ship without the
sophisticated weapons systems found on the cruiser.  The captain did
not activate the close-support system because he did not think the
Iraqi jet was a threat.  Because of this some of his men died.  This
incident is now used as a training exercise for ship commanders and
crew.  In both the Stark's case and the Vincennes' case the captains
made mistakes and people died.  In both cases the captains (or
officers in charge, in the case of the Stark) had deactivated the
automatic systems.  On the Stark, it may have saved lives.  On the
Vincennes, the tragedy would have occurred sooner.

I don't really think that AI technology is ready for either of these
systems.  Both decisions involved weighing the risks of losing human
lives against conflicting and incorrect information, something that AI
systems do not yet do well.  It is clear that even humans will make
mistakes in these cases, and will likely continue to do so until the
quality and reliability of their information improves.

Erich Smythe
Indiana University
smythe@iuvax.cs.indiana.edu
iuvax!smythe

------------------------------

Date: 16 Jul 88 00:06:21 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: AI and Killing

Many years ago, Vaughn Bode' wrote about machines that took over the
world and exterminated humanity (The Junkwaffel Papers, et cetera).

Of course, they were just cartoons.

------------------------------

Date: 16 Jul 88 07:26:46 GMT
From: portal!cup.portal.com!tony_mak_makonnen@uunet.uu.net
Subject: Re: AI and Killing

responding to a response by David Gagliano: I already disavow every word
I wrote, since in committing to a few lines I had to compromise a lot.
What is permanent is what we can learn from the experience.  It is easy to
accept that there is a problem in the human-binary relation; we do not
have neat guides on the proper mix.  What is apparent to me is that the
expanded discrete input the computer allows expands, rather than reduces,
the qualitative input required of the human component.  You can say the
human is required to become a faster and more intelligent thinker.
While the computer takes up the discursive or calculative function from
the brain, one has a sense that it shifts a much heavier burden to the
brain's synthetic functions: imagination, intuition, that which seems to
go on subliminally.  I feel this area must be addressed before we conclude
that computers are getting too fast for us to allow practical use in
real-time decision making.  While we wonder what aspect of mind the
computer is pushing us to understand, the problem will remain.
We can assume that the human component must form a bond, a feel
for what the info on the display means, just as a police officer
might with a police dog: the formation of tacit knowledge which
may or may not be capable of explication.  To use an expression,
the processing of display info must become second nature.  We are
back to the immensely obvious requirement of first-hand experience
and to the following questions, among others.  Can the skipper take
the attitude that there is always someone else who will read the
machine for him?  What happens when he is really the receptor of
info processed by the radarman?  Can the Navy afford the type
of individual it will take to effectively integrate computer info
into decisions in real time?  Gee, at this point I am not really
sure this means anything, but what the heck....

------------------------------

Date: 16 Jul 88 13:37:58 GMT
From: ruffwork@cs.orst.edu  (Ritchey Ruff)
Subject: Re: does AI kill? (Human in the loop)

In article <7333@cup.portal.com> tony_mak_makonnen@cup.portal.com writes:
>[...] The human in the
>loop could override a long computation by bringing in factors that could
>not practically be foreseen: 'why did the  Dubai tower say..?, 'why is the...

Ah, yes!  The DoD says "sure, we have humans in the loop."  But most
of the programs (like IFF) are of the type where the computer says
"press the red button" and the human in the loop presses the red
button!  (In this case: IFF says it's a foe, so shoot it down...)
Everything I've read thus far (and I might have missed something ;-)
implies the IFF was the deciding factor...

--ritchey ruff  ruffwork@cs.orst.edu -or- ...!tektronix!orstcs!ruffwork

------------------------------

Date: 16 Jul 88 14:29:29 GMT
From: sunybcs!stewart@rutgers.edu  (Norman R. Stewart)
Subject: Re: does AI kill?


Is anybody saying that firing the missile was the wrong decision under
the circumstances?   The ship was, after all, under attack by Iranian
forces at the time, and the plane was flying in an unusual manner
for a civilian aircraft (though not for a military one).  Is there
any basis for claiming the Captain would have (or should have) made
a different decision had the computer not even been there?

While I'm at it, the Iranians began attacking defenseless commercial
ships in international waters, killing innocent crew members, and
destroying non-military targets (and dumping how much crude oil into
the water?).  Criticizing the American Navy for coming to defend these
ships is like saying that if I see someone getting raped or mugged
I should ignore it if it is not happening in my own yard.

The Iranians created the situation, let them live with it.



Norman R. Stewart Jr.             *  How much more suffering is
C.S. Grad - SUNYAB                *  caused by the thought of death
internet: stewart@cs.buffalo.edu  *  than by death itself!
bitnet:   stewart@sunybcs.bitnet  *                       Will Durant

------------------------------

Date: 17 Jul 88 14:07:08 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu  (Carl F. Huber)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>This appeared in my local paper yesterday.  I think it raises some
>serious ethical questions for the artificial intelligence R&D community.
>------------------------------
>COMPUTERS SUSPECTED OF WRONG CONCLUSION
>from Washington Post, July 11, 1988
>
>Computer-generated mistakes aboard the USS Vincennes may lie at the root

[other excerpts from an article in Ken's local paper deleted]


This sounds more like the same old story of blaming the computer.  Also,
it is not clear where the "intelligence" comes into play here,
artificial or otherwise (not-so-:-).  It really sounds like the user was
not very well trained to use the program, and the program may not have been
informative enough, but this also is not presented in the article.

I don't see any new ethical questions being raised at all.  I see a lot of
organic material going through the air-conditioning at the Pentagon.


-carl huber

------------------------------

End of AIList Digest
********************

∂19-Jul-88  1158	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #14  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Jul 88  11:58:18 PDT
Date: Mon 18 Jul 1988 00:25-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #14
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 14

Today's Topics:

 Seminars:

  Bilingual Children as Translators
  Computer Modelling of Child Language Learning
  User Interface Strategies '88 (Satellite Course)
  Case memory for a case-based reasoner

----------------------------------------------------------------------

Date: Wed 6 Jul 88 09:41:31-EDT
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Bilingual Children as Translators


                       BBN Science Development Program
                     Language & Cognition Seminar Series

      BILINGUAL CHILDREN AS TRANSLATORS: RECOGNIZING AND CAPITALIZING
           ON NATURAL ABILITIES IN LANGUAGE MINORITY STUDENTS

                            Sheila M. Shannon
               Research Associate, Department of Psychology
                            Yale University


                            BBN Laboratories
                           10 Moulton Street
                     Large Conference Room, 2nd Floor

                   10:30 a.m., Tuesday, July 12, 1988



Abstract:  Recent research in psycholinguistics, sociolinguistics,
and language pedagogy has looked at translation (oral) and
interpretation (written) activities and skills in bilingual children.
Earlier work on translation strictly dealt with the professional field
of translation and interpretation, and not with the spontaneous kinds
of translating in which bilinguals engage.  This presentation reviews
the more recent work in the three disciplines with a focus on the
author's own work in sociolinguistics and pedagogy.  I examine the
nature of translation skills and ways they naturally emerge as a
benefit to being bilingual; explore ways that cognitive, linguistic,
and social abilities are involved in translation activity; consider
how these abilities may be integrated into language classroom
experiences; and assess a pilot program based on translation exercises
implemented in one classroom.  The work presented here fundamentally
concerns itself with bilingual children of language minority
communities in this country--those who require our greatest efforts to
insure their academic success.  I present work carried out with one
Mexican American community in California and a Puerto Rican community
in Connecticut.

------------------------------

Date: Sat, 9 Jul 88 10:54:02 EDT
From: dlm@research.att.com
Subject: Computer Modelling of Child Language Learning


               How Do Children Learn to Judge Grammaticality?
                                     or
      Research Issues for Computer Modelling of Child Language Learning

                       Thursday, July 14, 1988, 10:30 am
                  AT&T Bell Laboratories - Murray Hill 3D-436

                              Mallory Selfridge
                        The University of Connecticut

    Development of a successful computer model of child language learning
    would  have  important  implications  for  the development of natural
    language interfaces to computers.  However, no such fully  successful
    model  has yet been developed, and ongoing research is taking several
    different approaches.  The purpose of this talk is  to  identify  the
    most  promising  approach  and  the most important research issues it
    suggests.  This talk first discusses the problem of developing a com-
    puter  model  of  child language learning and argues that the primary
    questions are those of accounting  for  empirical  data  rather  than
    abstract  questions from theoretical linguistics.  It then identifies
    a set of several linguistically-motivated  questions,  including  the
    question of how children learn to judge grammaticality, and suggests
    that they should be answered as side-effects of computational mechan-
    isms  required  to account for empirical data.  The "grammar acquisi-
    tion" approach to child language learning is then  reviewed,  and  is
    judged to be undesirably abstract and of uncertain promise.  Then, an
    example of a "semantic" approach  to  child  language  learning,  the
    CHILD  program,  is considered, and its performance in accounting for
    empirical data is described.  Further, CHILD's ability  to  learn  to
    judge grammaticality is described, and answers to a set of
    linguistically-motivated questions are proposed  as  side-effects  of
    CHILD's mechanisms.  This talk concludes that the "semantic" approach
    to computer models of child language learning is the most  promising,
    and  identifies  as important research issues a) the investigation of
    the relationship  between  language  and  memory  processes;  b)  the
    development of non-linguistic representations of syntactic knowledge;
    c) the investigation of the process  whereby  the  child  infers  the
    meaning of an incompletely understood utterance; and d) the identifi-
    cation and  investigation  of  additional  empirical  data  on  child
    language learning.

    SPONSOR:  Bruce Ballard -  allegra!bwb

------------------------------

Date: Tue, 12 Jul 88 11:26:45 EDT
From: hendler@dormouse.cs.umd.edu (Jim Hendler)
Subject: User Interface Strategies '88 (Satellite Course)


A two-day national satellite TV course October 5 and 12, 1988



Organized by Ben Shneiderman, University of Maryland



Presenting

Thomas Malone, MIT

Donald Norman, University of California, San Diego

James Foley, George Washington University



This course is produced by the University of Maryland Instructional
Television (ITV) System and broadcast nationwide at more than 200 sites
on the AMCEE/NTU (National Technological University) Satellite
Network.  For a copy of the full brochure and information on attending at
an AMCEE site in your area or at an ITV site in the Washington, DC
area, call the University of Maryland ITV office at (301) 454-8955.  You
may consider arranging a private showing as a special event for your
organization, university, or company.



Overview: New user interface ideas have engaged many researchers,
designers, programmers, and users in the past year.  These four leaders of
the field offer their perspectives on why the user interface is a central
focus for expanding the application of computers.  Each will offer his
vision and suggest exciting opportunities for next year's developments.
Demonstrations, new software tools, guiding principles, emerging
theories, and empirical results will be presented.



Intended audience: User interface designers, programmers, software
engineers, human factors specialists, managers of computer, information,
and communications projects, trainers, etc.

---- October 5, 1988 -------------------------------------------------

Ben Shneiderman, University of Maryland

Lecture 1: INTRODUCTION: User Interfaces Strategies

Lecture 2: HYPERTEXT: Hype or Help?

Thomas W. Malone, MIT

Lecture 3:  COMPUTER-SUPPORTED COOPERATIVE WORK:
Using information technology for coordination

Lecture 4: COMPUTER-SUPPORTED COOPERATIVE WORK:
Design principles and applications

Discussion Hour


---- October 12, 1988 -----------------------------------------------

Donald A. Norman, University of California, San Diego

Lecture 5: USER CENTERED SYSTEM DESIGN:
Emphasizing usability and understandability

Lecture 6: Practical principles for designers


Jim Foley, George Washington University

Lecture 7: Software tools for designing and implementing user-computer
interfaces

Lecture 8: User Interface Management Systems (UIMSs)

Discussion Hour



-----  October 5, 1988 ----------------------------------

INTRODUCTION: NEW USER INTERFACE STRATEGIES
AND HYPERTEXT

    Ben Shneiderman, University of Maryland


Why user interface issues are now recognized as the vital force

The three pillars: (1) Usability labs & interactive testing, (2) User interface
management systems, (3) Guidelines documents & standards

New menus, clever input devices, sharper displays, more color,
teleoperation, collaboration

UI vs AI: User interface goes a step beyond artificial intelligence

Hypertext: Hype or Help?  Understanding new media: When and how to
use hypertext.  User interface design for hypertext; Automatic importing and
exporting


Ben Shneiderman is an Associate Professor in the Department of
Computer Science, Head of the Human-Computer Interaction Laboratory,
and Member of the Institute for Advanced Computer Studies, all at the
University of Maryland at College Park.  Dr. Shneiderman is the author of
Software Psychology: Human Factors in Computer and Information
Systems (1980) and Designing the User Interface: Strategies for Effective
Human-Computer Interaction (1987).


COMPUTER SUPPORTED COOPERATIVE WORK:

USING INFORMATION TECHNOLOGY FOR COORDINATION

    Thomas W. Malone, MIT

New applications have begun to appear that help people work together more
productively. Organizations are beginning to use new systems to (a)
increase coordination of design teams, (b) solicit input on new projects from
diverse sources, and (c) display and manipulate information more
effectively in face-to-face meetings.  These new applications (often called
computer supported cooperative work or groupware) are likely to become
widespread in the next few years.

Types of groupware (face-to-face vs. remote; simultaneous vs. delayed).

Electronic meeting rooms (e.g., Xerox Colab, MCC, Univ. of Arizona,
Univ. of Michigan).

Asynchronous coordination tools (e.g., electronic messaging, collaborative
authoring, Information Lens (demo will be made), Coordinator).

Guidelines for designing organizational interfaces: (importance of
semi-formal systems, incremental adoption paths, user autonomy,
social and political factors).


Thomas W. Malone is the Douglas Drane Career Development Associate
Professor of Information Technology and Management at the MIT School
of Management. He serves on the editorial boards of Human-Computer
Interaction, Information Systems Research, MIS Quarterly, and
Organizational Science.  Before joining the MIT faculty, Professor Malone
was a research scientist at the Xerox Palo Alto Research Center (PARC).


-----  October 12, 1988  ---------------------------------------------

USER CENTERED SYSTEM DESIGN:
EMPHASIZING USABILITY AND UNDERSTANDABILITY

    Donald A. Norman, University of California, San Diego

The emphasis is on ways to make new devices easy to understand and easy
to use.  This is done, to a large extent, by making the information necessary
to do the task available, thus minimizing the memory burden and learning
time.  The ideal is that when one does a task, the knowledge required
should be that of the task: as much as possible, the tool itself should be
invisible.

The Seven Stages of Action:  (1) Forming the goal; (2) Forming the
intention; (3) Specifying an action; (4) Executing the action; (5) Perceiving
the system state; (6) Interpreting the system state; (7) Evaluating the
outcome.


Direct Manipulation and the Model World Metaphor

   Making the computer invisible -- letting the user work directly on the task.

Seven Principles for Transforming Difficult Tasks Into Simple Ones:  (1)
Use Both Knowledge in the World and in the Head. (2) Simplify the
Structure of Tasks. (3) Make Things Visible: Bridge the Gulfs of Execution
and Evaluation. (4) Get the Mappings Right. (5) Exploit the Power of
Constraints, Both Natural and Artificial. (6) Design for Error. (7) When All
Else Fails, Standardize.


Donald A. Norman is Professor of Psychology at the University of
California, San Diego, where he is Director of the Institute for Cognitive
Science and chair of the interdisciplinary PhD program in Cognitive
Science.  Prof. Norman received a BS degree from MIT and an MS degree
from the University of Pennsylvania, both in Electrical Engineering.  His
doctorate, from the University of Pennsylvania, is in Psychology.  He has
published extensively in journals and books, and is the author or co-author
of eight books.  His most recent book (published in Spring, 1988), is The
Psychology of Everyday Things.

TOOLS FOR DESIGNING AND IMPLEMENTING
USER-COMPUTER INTERFACES

    James D. Foley, The George Washington University


Design and implementation of successful user interfaces is facilitated by
appropriate software tools.  The tools enhance designer and programmer
productivity, and simplify user interface refinement as experience is gained
with early users.  The tools can also enforce user interface design precepts
by incorporating design decisions into the interface.

Graphics subroutine packages.

Window managers - client-server model of X Windows, services to the
application programmer.

Interaction technique libraries - procedures for presenting menus, dialogue
boxes, scroll bars, etc.

Application frameworks, such as Apple's MacApp.

Rapid prototyping systems - quick design of interactive system prototypes
by non-programmers.

User Interface Management Systems - higher-level specification, automatic
implementation.

Expert system tools - to give designer guidance/feedback on design, to give
user help and guidance.

Several system-building tools will be demonstrated (GWU's UIDE,
Help-by example, either Prototyper on the Mac or Bricklin's demo program
on the PC).

James Foley is Professor and chairman-elect of the Department of EE &
CS, George Washington University.  He is co-author, with A. vanDam, of
Fundamentals of Interactive Computer Graphics.  His article "Interfaces for
Advanced Computing" appeared in the October 1987 Scientific American.
Foley is an associate editor of Transactions on Graphics, and a fellow of the
IEEE.

------------------------------

Date: Thu 14 Jul 88 16:48:40-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Case memory for a case-based reasoner

                    BBN Science Development Program
                       AI Seminar Series Lecture

                 CASE MEMORY FOR A CASE-BASED REASONER

                             JANET KOLODNER
                    Georgia Institute of Technology,
                      & MIT (AI in Medicine Group),
                        & Thinking Machines Corp.
                      (janetk@zermatt.lcs.mit.edu)

                                BBN Labs
                           10 Moulton Street
                    3rd floor large conference room
                      10:30 am, Tuesday July 19th

                    *** NOTE: NOT THE USUAL ROOM ***

Perhaps the most important support process a case-based reasoner needs
is a memory for cases.  Analysis of observations of physicians using
cases during problem solving have led us to derive requirements for a
case memory.  We then created representations, retrieval algorithms, and
selection heuristics that support these requirements.  In this talk, I
first present observations of physicians using cases during problem
solving and then present the requirements on memory that arise from
analyzing doctors' behavior.  I will also present the representations,
retrieval algorithms, and selection heuristics that derive from those
requirements.  The memory model is implemented in a computer program
called PARADYME (Parallel Dynamic Memory) and runs on the Connection
Machine.  Research was done in conjunction with physicians at New
England Medical Center and programmers at Thinking Machines.
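
For readers unfamiliar with the area, here is a toy sketch of the generic
retrieval step a case memory performs (my own illustration of the basic idea,
not PARADYME or its actual representations and algorithms):

    # Toy case retrieval: score stored cases by feature overlap with the new
    # problem and return the k best matches.  Purely illustrative.

    def retrieve(case_library, new_problem, k=3):
        """case_library: list of feature dicts; new_problem: feature dict."""
        def overlap(case):
            return sum(1 for f, v in new_problem.items() if case.get(f) == v)
        return sorted(case_library, key=overlap, reverse=True)[:k]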

------------------------------

End of AIList Digest
********************

∂19-Jul-88  2145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #17  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Jul 88  21:45:24 PDT
Date: Wed 20 Jul 1988 00:27-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #17
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 20 Jul 1988     Volume 8 : Issue 17

Today's Topics:

  Does AI kill?  --  Third in a series ...

----------------------------------------------------------------------

Date: 18 Jul 88 12:53:30 GMT
From: linus!marsh@gatech.edu  (Ralph Marshall)
Subject: Re: does AI kill?

In article <4449@fluke.COM> kurt@tc.fluke.COM (Kurt Guntheroth) writes:
>I am surprised that nobody (in this newsgroup anyway) has pointed out yet
>that AEGIS is a sort of limited domain prototype of the Star Wars software.
>Same mission (identify friend or foe, target selection, weapon selection and
>aiming, etc.)  Same problem (identifying and classifying threats based on
>indirect evidence like radar signatures perceived electronically at a
>distance).  And what is scary, the same tendency to identify everything as a
>threat.  Only when Star Wars perceives some Airbus as a threat, it will
>initiate Armageddon.  Even if all it does is shoot down the plane, there
>won't be any human beings in the loop (though a lot of good they did on the
>Vincennes (sp?)).
>
        I had the same reaction when I heard about the chain of events that
caused the AEGIS system to screw up. Basically it was a system DESIGN problem,
in that it is not really designed to handle a mix of civilian and military
targets in the same area.  Obviously, in the type of large scale war for
which AEGIS was designed nobody is going to be flying commercial jets
into a battle zone, so you don't have to account for that when you decide
how to identify unknown targets.  I see much the same problem with Star Wars,
in that even if you can actually build the thing so that it works properly
without any system-level testing, you are going to have a hell of a time
deciding what the operational environment is going to be like.  If in fact
the US/USSR combined mission to Mars goes forward, can you imagine the
number of orbiting and launched "targets" the Soviet Union will have for the
system to deal with?

        Another point that I haven't heard discussed is one that was raised
in the current Time magazine: Captain Rogers received permission from the
Admiral (I think) who was in charge of the battle group _before_ firing the
missiles.  This is a pretty decent example of proper decision-making procedures
for lethal situations, since several humans had to concur with the analysis
before the airplane was shot down.

---------------------------------------------------------------------------
Ralph Marshall (marsh@mitre-bedford.arpa)

Disclaimer:  Often wrong but never in doubt...  All of these opinions
are mine, so don't gripe to my employer if you don't like them.

------------------------------

Date: 18 Jul 88 13:43:51 GMT
From: att!whuts!spf@bloom-beacon.mit.edu  (Steve Frysinger of Blue
      Feather Farm)
Subject: Re: Re: does AI kill?

> Is anybody saying that firing the missile was the wrong decision under
> the circumstances?   The ship was, after all, under attack by Iranian
> forces at the time, and the plane was flying in an unusual manner
> for a civilian aircraft (though not for a military one).  Is there
> any basis for claiming the Captain would have (or should have) made
> a different decision had the computer not even been there?

I think each of us will have to answer this for ourselves, as the
Captain of the cruiser will for the rest of his life.  Perhaps one
way to approach it is to consider an alternate scenario.

Suppose the Captain had not fired on the aircraft.  And suppose the
jet was then intentionally crashed into the ship (behavior seen in
WWII, and made plausible by other Iranian suicide missions and the
fact that Iranian forces were already engaging the ship).  Would
we now be criticizing the Captain for the death of his men by NOT
firing?

As I said, we each have to deal with this for ourselves.

Steve

------------------------------

Date: 18 Jul 88 16:32:05 GMT
From: fluke!kurt@beaver.cs.washington.edu  (Kurt Guntheroth)
Subject: Re: does AI kill?

>> In article <4449@fluke.COM>, kurt@tc.fluke.COM (Kurt Guntheroth) writes:

>>      Only when Star Wars perceives some Airbus as a threat, it will
>> initiate Armageddon.  Even if all it does is shoot down the plane, there
>> won't be any human beings in the loop (though a lot of good they did on the
>> Vincennes (sp?)).

> Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907

> There are some major differences.  One is that the time scale will be a little
> longer.  Another is that it is very unlikely that _one_ missile will be fired.
> A third is that I believe that one can almost guarantee that commercial
> airliners will identify themselves when asked by the military.

1.  Huh?  Time scale longer?  That's not what the Star Wars folks are saying.
2.  What about a power with a limited first strike capability?  Maybe they
    launch ten missiles on a cloudy day, and we only see one.
3.  The Airbus didn't respond.  KAL 007 didn't respond.  History is against you.

> The AI aspects of the AEGIS system are designed for an open ocean war.
> That it did not avoid a situation not anticipated...is not an adequate
> criticism of the designers.

This is exactly a valid criticism of the design.  For the AEGIS system to
make effective life-and-death decisions for the battle units it protects, it
must be flexible about the circumstances of engagement.  Why is AEGIS
specified for open ocean warfare?  Because that is the simplest possible
situation.  Nothing in the entire radar sphere but your ships, your planes,
and hostile targets.  No commercial traffic, no land masses concealing
launchers, reducing time scales, and limiting target visibility.  What does
this have to do with Star Wars?  No simplifying assumptions may be made for
Star Wars (or any effective computerized area defense system).  It has to
work under worst case assumptions.  It has to be reliable.  And for Star
Wars at least, it must not make mistakes that cause retaliatory launches.
I don't know enough about defense to know that these requirements are not
contradictory.  Can a system even be imagined that could effectively
neutralize threats under all circumstances, and still always avoid types of
mistakes that kill large numbers of people accidentally?  What are the
tradeoff curves like?  What do the consumers (military and civilian)
consider reasonable confidence levels?

It also does no good to say "The AEGIS isn't really AI, so the question
is moot."  Calling a complex piece of software AI is currently a marketing
decision.  Academic AI types have at least as much stake as anyone else in
NOT calling certain commercial or military products AI.  It would cheapen
their own work if just ANYBODY could do AI.  In fact, I don't even think
AI-oid programs should be CALLED AI.  It should be left to the individual
user to decide if the thing is intelligent, the way we evaluate the
behavior of pets and politicians, on a case by case basis.

------------------------------

Date: 18 Jul 88 18:17:32 GMT
From: otter!sn@hplabs.hp.com  (Srinivas Nedunuri)
Subject: Re: Did AI kill? (was Re: does AI kill?)

/ otter:comp.ai / tws@beach.cis.ufl.edu (Thomas Sarver) /
Thomas Sarver writes:
>The point that everyone is missing is that there is a federal regulation that
>makes certain that no computer has complete decision control over any
>military component.  As the article says, the computer RECOMMENDED that the
>                                                       ~~~~~~~~~~~
>blip was an enemy target.  The operator was at fault for not verifying the
>computer's recommendation.

        Perhaps this is also undesirable, given the current state of AI
technology.  Even a recommendation amounts to the program having taken some
decision.  It seems to me that the proper place for AI (if AI was used) is
in filtering the mass of information that would normally overwhelm a human;
in fact, not only filtering this information but collecting it and presenting
it in a more amenable form, based on simple, robust won't-usually-fail
heuristics.  In this way it is clear that AI is offering an advantage: a
human simply could not take in all the information in its original form and
come to a sensible decision in a reasonable time.
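
A minimal sketch of that kind of filtering, assuming invented track fields
and weights (this is only an illustration of the idea, not anything from a
real combat system): the heuristics rank what the operator should look at
first and decide nothing themselves.

    # Illustrative only: rank radar tracks for human attention with a few
    # simple, conservative heuristics.  Field names and weights are invented.

    def attention_score(track):
        score = 0
        if track.get("closing", False):        # heading toward own ship
            score += 2
        if track.get("range_km", 9999) < 50:   # close by
            score += 2
        if not track.get("iff_reply", False):  # no transponder reply received
            score += 1
        if not track.get("on_airway", True):   # off a published air corridor
            score += 1
        return score

    def triage(tracks):
        """Return tracks sorted so the most attention-worthy come first."""
        return sorted(tracks, key=attention_score, reverse=True)
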
        We don't know yet what actually happened on the Vincennes but the
computer's recommendation could well have swayed the Captain's decision,
psychologically.

------------------------------

Date: 19 Jul 88 00:36:35 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu 
      (Stephen Smoliar)
Subject: Re: does AI kill?

In article <143100002@occrsh.ATT.COM> tas@occrsh.ATT.COM writes:
>
>       Let's face it, I am sure ultimately it will be easier to place the
>       blame on a computer program (and thus on the supplier) than on a
>       single individual.  Isn't that kind of the way things work, or am
>       I being cynical?
>
If you want to consider "the way things work," then I suppose we have to
go back to the whole issue of blame which is developed in what we choose
to call our civilization.  We all-too-humans are not happy with complicated
answers, particularly when they are trying to explain something bad.  We
like our answers to be simple, and we like any evil to be explained in
terms of some single cause which can usually be attributed to a single
individual.  This excuse for rational thought probably reached its nadir
of absurdity with the formulation of the doctrine of original sin and the
principal assignment of blame to the first woman.  Earlier societies realized
that it was easier to lay all blame on some dispensable animal (hence, the
term scapegoat) than to pick on any human . . . particularly when any one
man or woman might just as likely be the subject of blame as any other.
Artificial intelligence has now given us a new scapegoat.  We, as a society,
can spare ourselves all the detailed and intricate thought which goes into
understanding how a plane full of innocent people can be brought to a fiery
ruin by dismissing the whole affair as a computer error.  J. Presper Eckert,
who gave the world both Eniac and Univac, used to say that man was capable
of going to extraordinary lengths just to avoid thinking.  When it comes to
thinking about disastrous mistakes, the Aegis disaster has demonstrated, if
nothing else, just how right Eckert was.

------------------------------

Date: 19 Jul 88 03:02:06 GMT
From: anand@vax1.acs.udel.edu  (Anand Iyengar)
Subject: Re: does AI kill?

In article <4471@fluke.COM> kurt@tc.fluke.COM (Kurt Guntheroth) writes:
>contradictory.  Can a system even be imagined that could effectively
>neutralize threats under all circumstances, and still always avoid types of
>mistakes that kill large numbers of people accidentally?  What are the
>tradeoff curves like?  What do the consumers (military and civilian)
>consider reasonable confidence levels?
        It seems that even people can't do this effectively: look at the
justice system.

                                                        Anand.

------------------------------

Date: 19 Jul 88 14:23:48 GMT
From: att!whuts!spf@bloom-beacon.mit.edu  (Steve Frysinger of Blue
      Feather Farm)
Subject: Re: AI kills?

>In article <4563@whuts.UUCP> you write:
>>Suppose the Captain had not fired on the aircraft.  And suppose the
>>jet was then intentionally crashed into the ship (behavior seen in
>>WWII, and made plausible by other Iranian suicide missions and the
>>fact that Iranian forces were already engaging the ship).  Would
>>we now be criticizing the Captain for the death of his men by NOT
>>firing?
>
>Do you really think the captain and all his men would have stood there
>mesmerized for the several minutes it would take for an Irani suicide
>flight to come in and crash?

Yes, if you accept the hypothesis that they had decided not to fire on
the (apparently) civilian aircraft, especially if a deceptive ploy was
used (e.g. disabled flight, &c.).  They wouldn't have realized its
hostile intent until several SECONDS before the aircraft would
impact.  Put all the missiles you want into it, the fuel (and possibly
explosives) laden fuselage of the jet airliner would have a high
probability of impact, with attendant loss of life.

>I mean, *think* for at least 10 seconds
>about the actual situation before posting such obvious nonsense.

Apart from the needlessly offensive tone of your remarks, I believe you
should pay some attention to this yourself.  Every recent Naval surface
casualty has exactly these elements.  Your blind faith in Naval weapon
systems is entirely unjustified - ask someone who knows, or maybe go
serve some time in the Med. yourself.

>Kamikazis might have worked in WWII, but the AEGIS, even if can't tell
>the difference between a commerical airliner on a scheduled flight and
>an F-14 in combat, has proven its ability to blow the hell out of nearby
>aircraft.

No.  It has proven its ability to disable/destroy the aircraft, not
vaporize the aircraft.  If you research Kamikaze casualties in WWII
you'll find that significant damage was done by aircraft which had
been hit several times and whose pilots were presumed dead in the air.
I cannot comment on the capabilities of the weapons available to the
AEGIS fire control; I only encourage you to research the open literature
(at least) before you make such assumptions.

> There was absolutely nothing for him to fear.

Right.  Stand on the bridge, in the midst of hostile fire, with an
adversary known to use civilian and military resources in deceptive
ways, and THEN tell me there was nothing for him to fear.  You might
be able to critique football this way, but don't comment on war.

At first blush I categorized your position as bullshit, but then it
occurred to me that it might only be wishful thinking.  The uninitiated
public in general WANTS to believe our technology is capable of meeting
the threats. You're not to blame for believing it; our leaders try to
make you believe it so you'll keep paying for it.  Your ignorance is,
therefore, innocent.  In the future, however, please do not corrupt
a constructive discussion with hostile remarks.  If you can't
participate constructively, please keep your thoughts to yourself.

Steve

P.S. Incidentally, we haven't even touched on the possibility of the
airliner having been equipped with missiles, quite an easy thing
to do.  I don't claim the Captain did the RIGHT thing; I just think
the populace (as usual) has 20/20 hindsight, but hasn't got the
slightest idea about what the situation was like in REAL-TIME.  It's
always a good idea to look at both sides, folks.

------------------------------

Date: 19 Jul 88 09:13:24 PDT (Tuesday)
From: Rodney Hoffman <Hoffman.es@Xerox.COM>
Subject: Re: does AI kill?

The July 18 Los Angeles Times carries an op-ed piece by Peter D. Zimmerman, a
physicist who is a senior associate at the Carnegie Endowment for International
Peace and director of its Project on SDI Technology and Policy:

        MAN IN LOOP CAN ONLY BE AS FLAWLESS AS COMPUTERS.

  [In the Iranian Airbus shootdown,] the computers aboard ship use
  artificial intelligence programs to unscramble the torrent of infor-
  mation  pouring from the phased array radars.  These computers decided
  that the incoming Airbus was most probably a hostile aircraft, told
  the skipper, and he ordered his defenses to blast the bogey (target)
  out of the sky.  The machine did what it was supposed to, given the
  programs in its memory.  The captain simply accepted the machine's
  judgment, and acted on it....

  Despite the fact that the Aegis system has been exhaustively tested at
  the RCA lab in New Jersey and has been at sea for years, it still failed
  to make the right decision the first time an occasion to fire a live
  round arose.  The consequences of a similar failure in a "Star Wars"
  situation could lead to the destruction of much of the civilized world.
  [Descriptions of reasonable scenarios ....]

  The advocates of strategic defense can argue, perhaps plausibly, that
  we have now learned our lesson.  The computers must be more sophisticated,
  they will say.  More simulations must be run and more cases studied so
  that the artificial intelligence guidelines are more precise.

  But the real lesson from the tragedy in the Persian Gulf is that
  computers, no matter how smart, are fallible.  Sensors, no matter how
  good, will often transmit conflicting information.  The danger is not
  that we will fail to prepare the machines to cope with expected situa-
  tions.  It is the absolute certainty that crucial events will be ones
  we have not anticipated.

  Congress thought we could prevent a strategic tragedy by insisting that
  all architectures for strategic defense have the man in the loop.  We
  now know the bitter truth that the man will be captive to the computer,
  unable to exercise independent judgment because he will have no indepen-
  dent information, he will have to rely upon the recommendations of his
  computer adviser.  It is another reason why strategic defense systems
  will increase instability, pushing the world closer to holocaust --
  not further away.

  - - - - -

I'm not at all sure that Aegis really uses much AI.  But many lessons implicit
in Zimmerman's piece are well-taken.  Among them:

  * The blind faith many people place in computer analysis is rarely
    justified.  (This of course includes the hype the promoters use to
    sell systems to military buyers, to politicians, and to voters.
    Perhaps the question should be "Does hype kill?")

  * Congress's "man in the loop" mandate is an unthinking palliative,
    not worth much, and it shouldn't lull people into thinking the problem
    is fixed.

  * To have a hope of being effective, "people in the loop" need additional
    information and training and options.

  * Life-critical computer systems need stringent testing by disinterested
    parties (including operational testing whenever feasible).

  * Many, perhaps most, real combat situations cannot be anticipated.

  * The hazards at risk in Star Wars should rule out its development.

  -- Rodney Hoffman

------------------------------

Date: 19 Jul 88 09:34 PDT
From: Harley Davis <HDavis.pa@Xerox.COM>
Subject: Does AI Kill?

I used to work as the artificial intelligence community of the Radar Systems
Division at Raytheon Co., the primary contractor for the Aegis detection radar.
(Yes, that's correct - I was the only one in the community.)  Unless the use of
AI in Aegis was kept extremely classified from everyone there, it did not use
any of the techniques we would normally call AI, including rule-based expert
systems.  However, it probably used statistical/Bayesian techniques to interpret
and correlate the data from the transponder, the direct signals, etc. to come up
with a friend/foe analysis.   This analysis is simplified by the fact that our
own jets give off special transponder signals.
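
For what it is worth, here is a toy sketch of what statistical/Bayesian
correlation of such evidence can mean in principle; the evidence names,
priors, and likelihoods are invented for illustration and have nothing to do
with the actual Aegis software.

    # Toy naive-Bayes combination of independent pieces of evidence into a
    # friend/foe posterior.  All numbers are invented for illustration.

    LIKELIHOODS = {                      # evidence: (P(e | foe), P(e | friend))
        "no_iff_reply":     (0.7, 0.10),
        "closing_course":   (0.8, 0.30),
        "military_emitter": (0.9, 0.05),
    }

    def posterior_foe(evidence, prior_foe=0.5):
        p_foe, p_friend = prior_foe, 1.0 - prior_foe
        for e in evidence:
            l_foe, l_friend = LIKELIHOODS[e]
            p_foe *= l_foe
            p_friend *= l_friend
        return p_foe / (p_foe + p_friend)

    print(posterior_foe(["no_iff_reply", "closing_course"]))  # roughly 0.95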

But I don't think this changes the nature of the question - if anything, it's
even more horrifying that our military decision makers rely on programs ~even
less~ sophisticated than the most primitive AI systems.

-- Harley Davis
   HDavis.PA@Xerox.Com

------------------------------

Date: Tue, 19 Jul 88 17:50 PDT
From: Gavan Duffy <Gavan@SAMSON.CADR.DIALNET.SYMBOLICS.COM>
Subject: Re: does AI kill?

     no AI does not kill, but AI-people do.

Only if they have free will.

------------------------------

End of AIList Digest
********************

∂20-Jul-88  1934	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #18  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 20 Jul 88  19:34:22 PDT
Date: Wed 20 Jul 1988 22:16-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #18
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 21 Jul 1988      Volume 8 : Issue 18

Today's Topics:

 Query:

  Undergrad programs in cognitive science

----------------------------------------------------------------------

Date: 16 Jul 88 04:55:59 GMT
From: mind!ghh@princeton.edu  (Gilbert Harman)
Subject: undergrad programs in cognitive science

I am trying to get an up to date list of undergraduate
programs in cognitive science or cognitive studies.
--
                       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
                       221 Nassau Street, Princeton, NJ 08542

                       ghh@princeton.edu
                       HARMAN@PUCC.BITNET

------------------------------

Date: 17 Jul 88 01:59:46 GMT
From: att!chinet!mcdchg!clyde!watmath!utgpu!utcsri!jarvis.csri.toronto
      .edu!neat.ai.toronto.edu!tjhorton@bloom-beacon.mit.edu 
      ("Timothy J. Horton")
Subject: Re: programs in cognitive science


ghh@confidence.princeton.edu (Gilbert Harman) writes:
>I am trying to get an up to date list of undergraduate
>programs in cognitive science or cognitive studies.


This is a summary of responses to a question posed in comp.ai a few months ago
about (university) programs in cognitive science.  The original question in-
cluded the following (slightly fixed) information (and some misinformation):

MIT: Department of Brain and Cognitive Science

Brown: Department of Linguistics and Cognitive Science, 12 Faculty
Fields of study: Linguistics, Vision, Reasoning, Neural Models, Animal Cognition

UCSD: interdisciplinary PhD in Cognitive Science exists
a Dept of Cognitive Science is in the works
undergraduate program in Cog Sci currently offered by psychology
emphases in Connectionism, Psychology, AI, Linguistics, Neuroscience,
            Philosophy, Social Cognition

Stanford: Graduate Program in Cognitive Science
Psychology (organizing dept), Linguistics, Computer Science, Philosophy

Rochester: interdisciplinary PhD in Cognitive Science

UC Berkeley: Cognitive Science Program, focus on linguistics

Princeton: interdisciplinary program in Cognitive Science

Toronto: Undergraduate Major in Cognitive Science and Artificial Intelligence

Michigan: no current program in Cognitive Science, but some opportunities

University of Western Ontario: Center for Cognitive Science

Edinburgh: department of Cognitive Science (formerly School of Epistemics)
focus on linguistics

Sussex: School of Cognitive Science


--------------------- RESPONSES (partially EDITED) ---------------------------

From: "Donald A. Norman" <norman%ics@sdcsvax.ucsd.edu> at UCSD

>At UCSD, we are indeed in the process of establishing a Department of Cognitive
>Science.  We are now hiring, but formal classes will not start until the Fall
>of 1989.  We will have both an undergraduate and a PhD program.  We now have
>an Interdisciplinary PhD program:  students enter some department, X, and join
>the interdisciplinary program after completing the first year requirements of
>X.  They then receive a "PhD in X and Cognitive Science."  We have about 20
>students now and have given out about 3 PhDs.
>  The strengths are in the computational understanding of cognition, with
>strong emphasis in psychology, AI, linguistics, neuroscience, philosophy, and
>social cognition.  PDP (connectionism) is one of the strengths at UCSD, and
>the approach permeates all of the different areas of Cognitive Science, even
>among those of us who do not directly do work on weights, algorithms, or
>connectionist architectures
>  Yes, there is a Cognitive Science Society.  It hosts an annual conference
>(the next one will be in Montreal).  It publishes the journal "Cognitive
>Science."  You can find out about it by writing the secretary treasurer:
>    Kurt Vanlehn                       vanlehn@a.psy.cmu.edu
>    Department of Psychology
>    Carnegie-Mellon University
>    Pittsburgh, PA 15213

-----
From: Jeff Elman <elman@amos.ling.ucsd.edu> at UCSD (taken from comp.ai)

>The University of California, San Diego is considering the establishment of a
>Department of Cognitive Science ...  The Department will take a broadly-based
>approach to the study of cognition.  It will be concerned with the neurological
>basis of cognition, individual cognition, cognition in social groups, and
>machine intelligence.  It will incorporate methods and theories from a wide
>variety of disciplines including Anthropology, Computer Science, Linguistics,
>Neuroscience, Philosophy, Psychology, and Sociology.

-----
From: Tom Olson <olson@cs.rochester.edu> at Rochester

>The University of Rochester has an interdisciplinary Ph. D. in Cog Sci,
>basically a bridge between Comp. Sci., Psych and Philosophy.  I don't know
>much about how it is organized.  If you're interested, you might write to
>alice@cs.rochester.edu or lachter@cs.rochester.edu who are among the first
>students in the program.  Presumably we're strong in linguistics, vision,
>connectionism, and inexact ("probabilistic") reasoning.
>PS Connectionism is not fading at San Diego as far as I know.

-----
From: Michael McInerny <mcinerny@cs.rochester.edu> at Rochester

>Here at the UofRochester (Hi Neighbor!), we have an "interdisciplinary"
>Cog Sci dept. that includes fac. from Comp Sci, Psych, Philosophy, and
>Neuroscience.  I'm a grad student enrolled in the program, via the Comp
>Science dept., which means I have to get my own committee together,
>and build my own program, on top of passing regular CS stuff like Quals.
>I understand there is an undergraduate major in the dept too.

-----
From: William J. Rapaport <rapaport@cs.buffalo.edu> at SUNY

>State University of New York at Buffalo has several active cognitive science
>programs.  What follows is a slightly outdated on-line information sheet on
>two of them.
   [contact the author (or myself) for the full text.  The description reads
   in part: "(the group's) activities have focused upon language-related
   issues and knowledge representation... "]
>The newest is the SUNY Buffalo Graduate Studies and Research Initiative in
>Cognitive and Linguistic Sciences, whose Steering Committee is currently
>planning the establishment of a Cog and Ling Sci Center and running a
>colloquium series.  For more information, please contact me.  In addition,
>let me know if you wish to be on my on-line mailing list for colloquium
>announcements.

-----
From: Marie Bienkowski <bienk@spam.istc.sri.com>

>Princeton University has an excellent Cognitive Science program, although
>there is no department by that name.  They have active research programs
>on automated tutoring, vocabulary acquisition, reasoning, belief revision,
>connectionism (with Bellcore), computational linguistics, cognitive
>anthropology, and probably more that I've missed.  The main sponsoring
>departments are Psychology, Philosophy and Linguistics.
>  A good person to contact is bjr@mind.princeton.edu, who is, in real life,
>a professor in the Psychology Dept.  His p-mail address is:
>       Brian Reiser
>       Cognitive Science Laboratory
>       21 Nassau St.
>       Princeton, NJ  08542

-----
From: Rodney Hoffman <Hoffman.es@xerox.com>

>There is an undergraduate program in Cognitive Science at Occidental College
>(Los Angeles).  The director is Saul Traiger <oxy!traiger@CSVAX.Caltech.edu>;
>write to him for more information.

-----
From: "Saul P. Traiger" <oxy!traiger@csvax.caltech.edu> at Occidental College

>The following appeared in Ailist Digest last summer. Let me know if you'd
>like more information.
>  Occidental College,  a liberal arts college which enrolls approximately
>1600 students, is pleased to announce a new Program in Cognitive
>Science. The Program offers an undergraduate major and minor in Cognitive
>Science. Faculty participating in this program include members of the
>departments of mathematics, linguistics, psychology, and philosophy.
>[...]  The undergraduate major in Cognitive Science at Occidental College
>includes courses in mathematics, philosophy, psychology and linguistics.
>Instruction in mathematics introduces students to computer languages,
>discrete mathematics,  logic, and the mathematics of computation.
>Philosophy offerings  cover the philosophy of mind, with emphasis on
>computational models of the mind, the theory of knowledge, the philosophy
>of science, and the philosophy of language. Psychology courses include
>basic psychology, learning, perception, and cognition. Courses in
>linguistics provide a theoretical foundation in natural languages, their
>acquisition, development, and structure.  For more information about
>Occidental College's Cognitive Science Program:
>  Professor Saul Traiger    ARPANET: oxy!traiger@CSVAX.Caltech.EDU
>  Cognitive Science Program BITNET:  oxy!traiger@hamlet
>  1600 Campus Road          CSNET:   oxy!traiger%csvax.caltech.edu@RELAY.CS.NET
>  Occidental College        UUCP:    {seismo,rutgers,ames}!cit-vax!oxy!traiger
>  Los Angeles, CA 90041

-----
From: Roy Eagleson <deepthot.UWO.CDN!elroy@julian.uucp> at Western Ontario

>"The Centre for Cognitive Science" at UWO is a community of professors,
>research assistants, and graduate students from: Psychology, Computer Science,
>Philosophy, Neurobiology, Engineering, and Library Science.  In addition to
>the related graduate and undergraduate courses offered by those faculties
>and departments, there is an undergraduate course in Cognitive Science
>offered through Psychology.  We can send you more info if you want it.
>
>As for the Cognitive Science Society, you can drop them a line at:
>       Cognitive Science Society,
>       Department of Psychology
>       Carnegie-Mellon University
>       Schenley Park
>       Pittsburgh, PA 15213
>Zenon Pylyshyn was their President for 1985-86.

-----
From: John Laird <laird@caen.engin.umich.edu> at Michigan

>There is no formal undergraduate or graduate program in Cognitive Science
>at this time.  We will be offering an undergraduate course in Cognitive Science
>next term, co-taught by AI, Psych., Ling., and Philosophy.  We also have the
>Cognitive Science and Machine Intelligence Lab.   It is supported by three
>colleges: Engineering; Business; and Literature, Sciences and the Arts.
>The Lab sponsors a variety of Cognitive Science activities: talks, workshops,
>research groups, etc.  I expect that in a few years we will have undergraduate
>and graduate programs in Cognitive Science, but for now, students must be in
>a specific department and take cross-listed courses.

-----
From: Professor Tom Perry, Simon Fraser University, Vancouver

>The Cognitive Science Program does not yet have a graduate program, but one is
>planned for the near future.  At present, qualified students can do advanced
>degrees under Special Arrangements.
[...]
>   Cognitive Science Program
>   Department of Philosophy
>   Simon Fraser University
>   Burnaby, BC, Canada V5A 1S6

[Special Arrangements means: "Exceptionally able applicants, who wish to work
for a Master's or Doctoral degree outside or between existing programs at Simon
Fraser University, may apply to work under Special Arrangements.  (The student)
must have a well-developed plan of studies in an area which can be shown to
have internal coherence and academic merit, and for which the University has
appropriate expertise and interests among its faculty members ..."]

-----
From: Donald H. Mitchell of Bendix Aero. Tech. Ctr <DON@atc.bendix.com>

>In 1985, the president of Northwestern University set aside a decent pot of
>money and charged the Cognitive Psychology program to find a chairman for an
>interdisciplinary Cognitive Science program.  They aggressively set out and
>brought dozens of big names in for show-and-tell.  They made offers to
>several; however, as far as I know, they never caught one.  Maybe they have
>one now?  I do not know.
>Northwestern has a small but high-quality group of Cognitive Psychologists
>[...] The work is primarily on human cognition: verbal information processing
>... human decision making... human expertise in game-playing, ... heuristic
>search, and machine learning (genetic algorithms).

------------------------------

Date: 19 Jul 88 01:34:52 GMT
From: mind!ghh@princeton.edu  (Gilbert Harman)
Subject: Re: programs in cognitive science

I have been preparing an up-to-date list of undergraduate
programs in cognitive science or cognitive studies.  So far,
I have the following list.  I would appreciate hearing of
any additions or corrections to this information.

Campuses with undergraduate degree programs:

Brown University (Department of Cognitive and Linguistic Sciences)
Carnegie Mellon University (Cognitive Science Program)
Hampshire College (Cognitive Science Program)
MIT (Department of Brain and Cognitive Sciences)
Occidental College (Cognitive Science Program)
Simon Fraser University (Cognitive Science Program)
UCLA (Psychology and Cognitive Science)
University of Edinburgh (Department of Cognitive Science)
University of Pennsylvania (Computer Science and Cognitive Science)
University of Rochester (Cognitive Science Program)
University of Sussex (School of Cognitive Science)
University of Toronto (Major in Cognitive Science and Artificial Intelligence)
University of Western Ontario (Center for Cognitive Science)
Vassar College (Cognitive Science Program)
Wesleyan University (Cognitive Science Program)

Campuses with undergraduate minor tracks and independent majors:

Brandeis University
Columbia University
Dartmouth College
Gustavus Adolphus College
Harvard University
Northeastern University
Occidental College
Smith College
Stanford University
SUNY Binghamton
Tufts University
UC Berkeley
UC San Diego
University of Florida
University of Maryland
University of Massachusetts at Amherst
University of Oregon
Wellesley College

Almost all courses in these programs are not specifically
courses in "Cognitive Studies" or Cognitive Science" but are
instead courses in a particular department such as
Psychology or Computer Science.  Almost always there is a
one semester (or occasionally a full year) introductory course
in Cognitive Studies.  Often there are one or two advanced
seminars on "Topics in Cognitive Studies" where the content
of the seminar is expected to vary from year to year
depending on the interests of the instructor.  Almost all
the courses in any of these programs are courses in
particular established departments.  Occasionally, some of
these courses in other departments are also given a
Cognitive Studies designation in the school catalog.

The main exceptions seem to be MIT, where there is no
Department of Psychology but instead a Department of Brain
and Cognitive Sciences, and UC San Diego, where an
ambitious array of courses in Cognitive Science has been
proposed for its new Department of Cognitive Science.
--
                       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
                       221 Nassau Street, Princeton, NJ 08542

                       ghh@princeton.edu
                       HARMAN@PUCC.BITNET

------------------------------

Date: 19 Jul 88 16:40:41 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: programs in cognitive science


      One wonders what all these cognitive science graduates are going to
do after graduation.

                                        John Nagle

------------------------------

End of AIList Digest
********************

∂20-Jul-88  2249	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #19  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 20 Jul 88  22:49:35 PDT
Date: Wed 20 Jul 1988 22:37-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #19
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 21 Jul 1988      Volume 8 : Issue 19

Today's Topics:

 Queries:
  Expert system shells
  Expert System Shells (for the Amiga)
  Allegro Common Lisp
  Neural Networks
  GRAPHAEL GBASE

----------------------------------------------------------------------

Date: 15 Jul 88 19:19:08 GMT
From: pyramid!oliveb!intelca!mipos3!mipos2.intel.com!rajeevc@decwrl.de
      c.com  (Rajeev Chandrasekhar)
Subject: Expert System Shells


Hi,


     I am not exactly sure if this is the right newsgroup, but anyway
here goes...

     I am looking for the names of commercially available expert
system shells to be used in a Unix environment.  I'd appreciate it if
anyone in Netland would mail me some info.

                  -- Rajeev

Voice : (W) 408-765-4632
        (H) 408-249-0418
e-mail   {..hplabs,oliveb}\!intelca\!mipos3\!mipos2\!rajeevc


-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
/       If you think Unix is great, you ought to try Parallel Unix      /
/                                                                       /
/       e-mail : rajeevc@mipos2.intel.com                               /

------------------------------

Date: 14 Jul 88 14:25:58 GMT
From: mcvax!ukc!reading!onion!cf-cm!cybaswan!cslaurie@uunet.uu.net 
      (Laurie Moseley )
Subject: Expert system shells for the Amiga


##########################################################################

Does anyone know of any good (or just any) expert system shells for the
Amiga ?

                                Laurie Moseley

###########################################################################

------------------------------

Date: Tue, 19 Jul 88 13:15:28 EDT
From: Andrew_Simms@um.cc.umich.edu
Subject: Allegro Common Lisp

 Allegro Common Lisp Users (Coral Software's CL for Macintosh):

 I am  interested in getting people's opinions about the Allegro 1.2
 upgrade. This upgrade is available to registered users for around
 $100 (plus shipping, consult the order form sent by Coral for
 details).

 The main thing I wish to know is:  Is it worth it and why?

 There are also a number of new products listed.  If you are
 using any of these in conjunction with CL, your comments would be
 appreciated.

 -- Andrew Simms

 respond via email:

        Andrew_Simms@um.cc.umich.edu              internet
        USERW03V@UMICHUM                          bitnet

------------------------------

Date: 15 Jul 88 13:35:00 GMT
From: uxe.cso.uiuc.edu!richman@uxc.cso.uiuc.edu
Subject: Neural Networks


Could someone recommend a good introductory text which deals with
Neural Networks?  Please e-mail responses to:
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Mike Richman
uucp:  {ihnp4,seismo,puree,convex,uunet}!uiucdcs!uiucuxc!uiucuxe!richman
arpanet:  richman%uiucuxe@a.cs.uiuc.edu     bitnet: richman@uiucuxe
csnet:    richman%uiucuxe@uiuc.csnet        icbm:   40 05 N  /  88 14 W
internet: richman@uiucuxe.cso.uiuc.edu      milnet: richman@uiucuxe.arpa
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Thanks!

------------------------------

Date: Sun, 17 Jul 88 23:37:13 EDT
From: Deeptendu Majumder <MEIBMDM%GITVM2.BITNET@MITVMA.MIT.EDU>

I have been trying to get information about the GRAPHAEL GBASE
object-oriented system.  An address would be a great help.

Thanks
Deeptendu Majumder
Box 30963
Georgia Tech
Atlanta, GA 30332
<MEIBMDM @ GITVM2>

------------------------------

End of AIList Digest
********************

∂21-Jul-88  2124	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #20  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 21 Jul 88  21:24:19 PDT
Date: Fri 22 Jul 1988 00:01-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #20
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 22 Jul 1988       Volume 8 : Issue 20

Today's Topics:

 Queries:
  on-line ai abstracts?
  Robotics mailing list
  Carver Mead and Analog VLSI
  Graphael address?
  Abstract Operators and Efficient Algorithms
  Space Station expert systems

 Responses:
  Prolog & UK Immigration [was Re: Soundex algorithm]
  Frame systems; and net addresses.
  muLISP documentation
  Pointers on Scheduling systems with preferential attributes

----------------------------------------------------------------------

Date: 15 July 1988 0928-PDT (Friday)
From: stanonik@nprdc.arpa (Ron Stanonik)
Reply-to: stanonik@nprdc.arpa
Subject: on-line ai abstracts?

Hello,

Is an on-line AI abstracts service available?  Here's how one
of our users described what they wanted:

        "What I had in mind was something more on the order of a computer-
        based literature search for AI topics like what is provided by
        Psychological Abstracts or ERIC on-line systems for their content
        domains."

Thanks,

Ron Stanonik
stanonik@nprdc.arpa




        [Editor's note - Several people here at MIT (including myself)
and elsewhere have discussed the possibility of doing something like
this (either abstracts, an online journal, or full text) and making
it available over the internet.  As far as I know, no one has implemented
one yet.

        Recently, a reader of the physics list asked a similar
question for that domain, pointing out that since most people in the
research community already use something like TEX or SCRIBE for
formatting papers, it wouldn't be very difficult to provide a pre-print
service to distribute them.

        I am considering indexing the AIList archives and making these
available in some way.  Any interest?

                - nick]

------------------------------

Date: Sat, 16 Jul 88 20:32:51 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Robotics mailing list


      I've suggested the establishment of a robotics mailing list, and I
now have a list of about 25 people who would like to subscribe.  Unfortunately,
the machine I'm on is for various reasons not a good place to put the list
distribution.  So I'd appreciate a volunteer with good network connections,
including BITNET and the UK nets, to take up the task.

                                        John Nagle

------------------------------

Date: Monday, 18 Jul 88 14:37:50 EDT
From: boyle (Franklin Boyle) @ a.psy.cmu.edu
Subject: Carver Mead and Analog VLSI

   There have been several postings recently about Carver Mead and his
work on analog VLSI and how he's applying it to the visual system, etc.
Does anyone have pointers to his work as it applies to the visual system,
neural networks, AI in general, etc.?

Thanks,
Frank

------------------------------

Date: 19 Jul 88 15:44:51 GMT
From: mcvax!ukc!strath-cs!glasgow!pp@uunet.uu.net  (Mr Paul Philbrow)
Subject: Graphael address?

I've seen Graphael's G-Base described as an AI OODBMS.
I'm evaluating OODBMSs for engineering applications.
Does someone have a contact address (electronic
or paper) for Graphael?  Please email.

Thanks, Paul
Persistent Programming Research Group
--
Paul Philbrow, Glasgow University,
Computing Science Department,                               pp@uk.ac.glasgow.cs
Glasgow G12 8QQ, Scotland         (or try pp%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk)

------------------------------

Date: 19 Jul 88 23:33:35 GMT
From: ndsuvax!ncthangi@uunet.uu.net  (sam r. thangiah)
Subject: Abstract Operators and Efficient Algorithms

Could anyone give me pointers to references on work done on intelligently
combining abstract operators to produce efficient algorithms?

Thanks in advance

Sam


--
Sam R. Thangiah,  North Dakota State University.
300 Minard Hall     UUCP:       ...!uunet!plains.nodak.edu!ncthangi
NDSU, Fargo         BITNET:     ncthangi@plains.nodak.edu.bitnet
ND 58105            ARPA,CSNET: ncthangi%plains.nodak.edu.bitnet@cunyvm.cuny.edu

------------------------------

Date: 20 Jul 88 15:09:28 GMT
From: att!chinet!mcdchg!clyde!watmath!utgpu!jarvis.csri.toronto.edu!me
      !ecf!apollo@bloom-beacon.mit.edu  (Vince Pugliese)
Subject: Space Station expert systems


    I am posting this item for someone without mail privileges.
The preferred address is the physical address contained in the
item below, but I will be glad to relay messages if e-mailed
to me.  Thanks in advance.



*********************************************************************

    Our company is involved in the development of an autonomous
operations planner for space station applications using AI
techniques (specifically Expert Systems).  We are currently
investigating the purchase of software/hardware to
accomplish this task.  We are looking at ES Shells as well
as the possibility of using various AI languages to write
our own ES.

    For those companies viewing this request, product
information can be sent to us at the address below.  We
are also interested in a list of developers/manufacturers,
if anyone has one.  Any useful information would be
appreciated.

                            Wayne Sincarsin

                            Dynacon Enterprises Limited
                            5050 Dufferin St.
                            Suite 200
                            Downsview, Ontario
                            Canada, M3H 5T5

                            (416) 667-0505

*********************************************************************

------------------------------

Date: 18 Jul 88 08:57:24 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net  (Simon Brooke)
Subject: Response to - Prolog & UK Immigration [was Re: Soundex
         algorithm]

In article <1180@anuck.UUCP> jrl@anuck.UUCP (j.r.lupien) writes:
>
>Not really a C question anymore, but there is such a beast. The
>United Kingdom immigration and naturalization department has a
>Prolog implementation for their citizenship status analysis system.
>
Marek Sergot wrote a paper in 1986:

Sergot, M. et al: _The_British_Nationality_Act_as_a_Logic_Program_ [in:
Communications of the ACM, May 1986, vol 29, no 5]

I can't find my copy of this just now, but no claim was made that this
system had ever been applied by the immigration people - indeed, as I
remember, no-one outside the 'gosh wow negation-as-failure is the answer
to all the world's problems' school of prolog enthusiasts was all that
impressed. The paper has (of course) been attacked by such opponents of
legal applications of AI as Philip Leith [e.g]:

Leith, P: _Legal_Expert_Systems:_Misunderstanding_the_Legal_Process_ [in:
Computers and the Law, Number 49, September 1986]

to which you may well say 'so what', since Leith would never admit to any
value in any knowledge-based approach to the law. Anyway - the system has
never been used to decide real world cases.


** Simon Brooke *********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                      *
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Neural Nets: "It doesn't matter if you don't know how your program   *
***************  works, so long as it's parallel" - R. O'Keefe **********

------------------------------

Date: Mon, 18 Jul 88 08:53:13 PDT
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: Response to - Frame systems; and net addresses.

In response to ncthangi's query (my direct email response was returned):

You might consider Intelligence/Compiler (IntelligenceWare, Inc., 213-417-
8896; $500), and perhaps First Class Fusion (Programs in Motion, 617-653-5093;
>$500?).

David R. Lambert
lambert@nosc.mil

------------------------------

Date: Mon, 18 Jul 88 09:57:05 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Response to - muLISP documentation


      muLISP and muMATH come from

        Soft Warehouse, Inc.
        3615 Harding Avenue, Suite 505
        Honolulu, Hawaii  96816
        USA

        808-734-5801

Although a bit dated (the original version was for the Apple II, in 1979),
it remains a useful symbolic math system.  Most LISP users will find the
dialect rather strange, both in syntax and semantics, but not inconsistent.

                                        John Nagle

------------------------------

Date: 19 Jul 88 09:55:36 GMT
From: Pat Prosser <mcvax!cs.strath.ac.uk!pat@uunet.UU.NET>
Reply-to: mcvax!cs.strath.ac.uk!pat@uunet.UU.NET
Subject: Response to - Pointers on Scheduling systems with
         preferential attributes


To quote roughly "a scheduling system that takes account of
preferential attributes ... such as worker/boss relationships ...".

In essence what you have are preference constraints that may be
satisfied or relaxed.  ISIS and OPIS address this (Smith, Fox, Ow,
LePape).  ISIS and OPIS are now (how can we say) mature?  For more
recent work on factory scheduling (and in particular dynamic/reactive
scheduling) see Ellerby (the SemiMan project) and also the work done
by us (DAS, a distributed asynchronous scheduler).

Nice wee problem, scheduling.

------------------------------

End of AIList Digest
********************

∂22-Jul-88  0005	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #21  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 22 Jul 88  00:05:03 PDT
Date: Fri 22 Jul 1988 00:06-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #21
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 22 Jul 1988       Volume 8 : Issue 21

Today's Topics:

  Does AI kill?   --  Fourth in a series ...

----------------------------------------------------------------------

Date: 19 Jul 88 16:47:02 GMT
From: mcvax!ukc!etive!aiva!ken@uunet.uu.net  (Ken Johnson)
Subject: Re: does AI kill?

In article <12400014@iuvax> smythe@iuvax.cs.indiana.edu writes:

> I don't know which event involved the
> Sheffield, but there was no misidentification in either case.

I had also heard the story that the Exocet, having been sold to the
Argentinians by the Europeans, was not identified as a hostile missile.
Subsequently the computers were `reprogrammed' (media talk for giving
them a wee bit of new data).  Presumably if you sell arms to your enemies
this is what you must expect.

--
------------------------------------------------------------------------------
From Ken Johnson, AI Applications Institute, The University, EDINBURGH
Phone 031-225 4464 ext 212
Email k.johnson@ed.ac.uk

------------------------------

Date: 20 Jul 88 06:31:00 EDT
From: "CUGINI, JOHN" <cugini@ecf.icst.nbs.gov>
Reply-to: "CUGINI, JOHN" <cugini@ecf.icst.nbs.gov>
Subject: Does AI kill?


[ please excuse if this is repetition - our mailer has been berserk lately ]

Two points I haven't seen made so far...

1. Are AI systems to be held to a standard of perfection? I don't know
of *any* kind of system, constructed by humans, that doesn't fail -
airplanes crash, walkways collapse, nuclear power plants explode.
And, yes, people die ... so the issue isn't whether AI/computer
systems will ever fail, causing loss of life  - they will, be assured.
But so would the non-computer systems which are the alternative.

Moreover, there will be instances (maybe the Vincennes was one, maybe not)
in which a non-AI system would've made a better choice.  Big deal.
The serious question is, on average, will the performance of a system
be enhanced (wrt whatever criteria you like - saving lives, etc.)
or degraded by use of AI components.

The Vincennes critics should make the case (if they can) that the AEGIS
system caused the shoot-down, that it wouldn't have occurred otherwise,
and that AEGIS has no compensating effects (maybe it's already saved
291 lives by deterring Iranian attacks on American ships...).

2. For what it's worth, I think it's a cheap shot to use this incident
as an excuse to hold forth on one's political views (on AI list).  The
AI-component debate is proper, but I'd urge some self-restraint before
we start pontificating on SDI, etc etc.  The "larger lesson" of the
Vincennes, like many other "lessons" of recent history, seems to depend
much more on one's prior political views than on any unambiguous
interpretation of events.


John Cugini  <Cugini@ecf.icst.nbs.gov>

------------------------------

Date: Wed, 20 Jul 88 10:02 EST
From: <INS_ATGE%JHUVMS.BITNET@mitvma.mit.edu>
Subject: Vincennes and AI

Let's examine the faults of the recent airbus downing in the Gulf.
A commercial airbus supposedly carrying innocent civilians was shot
down by an Aegis cruiser.  Evidently, there is reason to suspect that
the airbus did not carry proper IFF transponder identification.
The Aegis was not able to properly identify the aircraft as friendly,
and since it was recently involved in a skirmish, shot the aircraft down.
   Did the Navy realize that a commercial airbus with incorrect IFF
transponders could be shot down in the Persian Gulf?  If not, then
there was a severe problem in the Navy's understanding of how their
equipment operates.  From what I have heard, the Aegis is designed to
shoot down large numbers of unfriendly aircraft, not necessarily
designed to avoid shooting down nearby unidentified aircraft.
I don't see how the Navy could have placed such a killing system in the
Gulf without the realization that there would be danger to civilian
aircraft in such tight waters.
   Furthermore, it has often been noted in the literature that in wars,
innocent civilians are killed.  Allow me to define war as a state of
hostility between states which allows at least one of those states
to injure some part of the other state's population.
   It would be nice if we could have nice clean wars, but unfortunately,
they do not exist.

    But I have strayed off the subject.  I find this incident a possible
example of misunderstanding how a system works in a situation,
if this incident did indeed surprise anyone in the Pentagon.
The systems in the Persian Gulf are going to have a hard time identifying
friend and foe, and there may indeed be increased use of commercial
transponders on military aircraft in the region if this is possible.
  --As we noted in Vietnam, even "real intelligence" cannot always
discover who is friend, and who is foe.

Thomas Edwards
 ins_atge@jhuvms

------------------------------

Date: 20 Jul 88 22:38:03 GMT
From: smithj@marlin.nosc.mil  (James Smith)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:
> Computer-generated mistakes aboard the USS Vincennes may lie at the root
> of the downing of Iran Air Flight 655 last week, according to senior
> military officials being briefed on the disaster.
> ...
> The officials said Rogers believed the machines - which wrongly
> identified the approaching plane as hostile - and fired two missiles at
> the passenger plane, knocking it out of the sky over the Strait of
> Hormuz.
> ...
> Some obvious questions right now are:
>       1.  is AI theory useful for meaningful real world applications?
>       2.  is AI engineering capable of generating reliable applications?
>       3.  should AI be used for life-and-death applications like this?

>The blame lies entirely on the captain of the
>USS Vincennes and his superiors for using the system in a zone where
>commercial flights are flying.

In all of this debating over whether-or-not AEGIS should be used in the gulf,
and whether-or-not Captain Rogers erred in his decision to shoot at
IAF 655, one crucial point has been overlooked - there is _no other_
combat direction system (in our, or any other navy) which can even begin
to cope with the volume of information that is efficiently processed
and displayed by AEGIS. The commanding officer of a pre-AEGIS ship would
have had far less time from target detection to shoot decision; he
would also have had a less-precise radar track and IFF information, and
would have had to make the shoot/no-shoot decision at a greater range
than Captain Rogers.

This is not a problem of applying AI but, rather, a problem requiring
an immediate life-or-death decision made, not in the laboratory or
the office, but in what Clausewitz referred to as the *fog of war*.


Jim Smith
UUCP: smithj!marlin!nosc
DDN:  smithj@marlin.nosc.mil

If we weren't crazy, we'd all go insane

                 - Jimmy Buffet

------------------------------

Date: 21 Jul 88 01:14:44 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu  (Carl F. Huber)
Subject: Re: does AI kill?

In article <470@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
> ...
>Now, before y'all start firing cruise missiles at me, I am *NOT*, I repeat
>NOT praising the system that killed 280 people.

Let's be careful.  That system didn't kill anyone.

------------------------------

Date: 21 Jul 88 11:30:12 GMT
From: amelia!prandtl.nas.nasa.gov!msf@ames.arpa  (Michael S.
      Fischbein)
Subject: Re: does AI kill?

In article <1054@marlin.NOSC.MIL> James Smith writes:
>In all of this debating over whether-or-not AEGIS should be used in the gulf,
>and whether-or-not Captain Rogers erred in his decision to shoot at
>IAF 655, one crucial point has been overlooked - there is _no other_
>combat direction system (in our, or any other navy) which can even begin
>to cope with the volume of information that is efficiently processed
>and displayed by AEGIS.

Nonsense.  The Navy Tactical Data System (NTDS), the pre-AEGIS computerized
sensor/fire-control system, had very similar capabilities as
far as tracking and individuals' displays go.  AEGIS supports the large
`wallboard' displays which were not supported by NTDS until recently;
budget constraints have prevented retrofitting NTDS ships with the large
screens.  AEGIS is only present on the Ticonderoga-class AAW cruisers
with the SPY-1 radar; other primary AAW combatants use NTDS and other radar
systems, such as the SPS-48E, and will continue to do so.

> The commanding officer of a pre-AEGIS ship would
>have had far less time from target detection to shoot decision; he
>would also have had a less-precise radar track and IFF information, and

IFF has little or nothing to do with AEGIS; I would appreciate any reference
that compares the SPY-1 to the SPS-48 (current models of each) and shows
significantly greater precision to either.  The SPY-1 is faster as it
doesn't rotate, but from my (admittedly slightly out-of-date) personal
experience, they can offer comparable performance.

>would have had to make the shoot/no-shoot decision at a greater range
>than Captain Rogers.
>
>This is not a problem of applying AI but, rather, a problem requiring
>an immediate life-or-death decision made, not in the laboratory or
>the office, but in what Clausewitz referred to as the *fog of war*.

Absolutely true.  If you haven't been there, people, try to think about
what it was like on the ship.

                mike
Michael Fischbein                 msf@ames-nas.nas.nasa.gov
                                  ...!seismo!decuac!csmunix!icase!msf
These are my opinions and not necessarily official views of any
organization.

------------------------------

Date: 21 Jul 88 17:24:59 GMT
From: lakesys!tomk@csd1.milw.wisc.edu  (Tom Kopp)
Subject: AI...Shoot/No Shoot

It seems a lot of this argument about the AI systems (or whatever you wish to
call them: information sorters, AI routines, whatever) stems from how the
Captain acted upon it in the situation.

Someone brought up the phrase "Shoot/No Shoot" and that reminded me that
Police officers in many areas go through a special shoot/no shoot test
regarding the use of their firearms.  Does anybody know if Naval Command
Candidates go through similar testing on a simulated ship, or what?

Looked at in this light, I can't see where he had any choice BUT to shoot,
based upon what his computers were telling him.  If it were indeed a loaded
passenger get w/ civilians on board, then I of course, regret the action, but
that doesn't change the situation.  He couldn't possibly get a visual fix on
the target, his computers were warning him of a threat, and an unidentified
aircraft on a 100% direct course to his ship was descending at an angle that
would very soon bring it into attack position.

I still don't understand WHY the computers misled the crew as to the type of
aircraft, especially at that range.  I know that the military has heavily
tested some proposed radar gear for the Tomcat (and possibly other planes)
that is capable of target identification based upon the radar signature, and
having the target head-on only helps.  It counts the number of blades on the
turbofan and thus knows what kind of engine it is, and thus narrows the
possibility down to a very few aircraft, often even to one.  Either this is
not installed on the AEGIS ships, or it was malfunctioning, or it may not
even be completed yet.  I read about it in Aviation Week & Space Technology
or something a year or so ago; I forget just where I saw it.

------------------------------

Date: Thu 21 Jul 88 11:51:54-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Re: Does AI kill?


Overautomation can kill if the task is to decide when to kill.  We have
all experienced programs that try to be too smart.  Both industry and
military institutions would like to take the human out of the loop
(more control for the people at the top), hence overautomation.  I
believe this is the critique of Dreyfus and Dreyfus.

Conrad Bock

------------------------------

End of AIList Digest
********************

∂23-Jul-88  2314	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #22  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 23 Jul 88  23:14:19 PDT
Date: Sun 24 Jul 1988 01:54-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #22
To: AIList@AI.AI.MIT.EDU


AIList Digest            Sunday, 24 Jul 1988       Volume 8 : Issue 22

Today's Topics:

 Free Will:

  How to dispose of naive science types (short)
  Carlos Castaneda
  Goedel's Theorem

----------------------------------------------------------------------

Date: 18 Jul 88 10:43:22 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: How to dispose of naive science types (short)

In article <442@ns.ns.com> logajan@ns.ns.com (John Logajan x3118) writes:
>My point is that unproveable theories aren't very useful.

        a) most of your theories of interpersonal interaction, which
           you use whenever you interact with someone, will be unproven,
           and unproveable, if only for practical reasons.

        b) as a lapsed historian, may I recommend that you study the
           history of ideas and religion.  You will find that scientific
           theories, sanctified by science's notions of "proof" don't
           even account for 1% of the "theories" which have driven
           historical changes.

I don't know what you mean by "useful", and suspect that you have not
spent too long worrying about it either.  I suggest you reflect over
your last few days and list the decisions you have made as a result of
scientific theory, and the decisions which you've had to make by magic
because the scientists have not sorted out all the world for you yet.  I
think most of your decisions will fall into the non-scientific,
unproveable category.  Now are the theories which are guiding you each
day really that useless?
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

             The proper object of the study of humanity is humans, not machines

------------------------------

Date: 18 Jul 88 23:07:32 GMT
From: pur-ee!mendozag@uunet.UU.NET (Grado)
Reply-to: pur-ee!mendozag@uunet.UU.NET (Victor M Grado)
Subject: Re: Carlos Castaneda


>In a previous article, James J. Lippard writes:
>>   <caution words (and references) in reading Castan~eda's books>

>>And then Jeff Hartung adds:
>I noticed that the most recent Castaneda book in the series, "The Fire From
>Within," was published as a work of fiction, unlike the previous six books.  I
>took this to be a confession that the works were largely fictitious even prior
>to it.  Furthermore, the later books state that what Casteneda believed to be
>a Yaqui philosophy initially was in fact a view belonging to a small cult of
>"sorcerers" and not to the Yaqui in general, even if you *do* believe the
>assertion that the first six books make of being non-fiction.

 I think that the last book is "The Power of Silence" (which I have not read).
 Anyway, thanks to Mr. Lippard for posting the controversy references.
 Having grown up in the Yaqui Valley and heard many sorcery stories,
 I always took Castan~eda's books with a grain of salt.  It was not until
 I read a Stanford CS Memo by Avron Barr (1977, MetaCognition or some such),
 giving as reference "Tales of Power", that I tried to go back to those books.
 Although my views about these books were always the ones reflected in the last
 sentence Jeff wrote above, I found "The Teachings of Don Juan" to be very
 believable (I need to read the posted references). On the other hand,
 the fact that the last books are published as fiction (although in the Foreword
 to "The Fire From Within" Castaneda asserts that he "had no other choice but to
 render his teachings [Don Juan's] in the form of a narrative, a narrative of
 what happened, as it happened.") does not imply a confession that the
 previous books were fiction (although it could be fraud). Castaneda is a
 prolific writer but he might be using a Sly Stallone tactic to keep his income
 secure.

   Maybe next time I go back to the Yaqui Valley I'll go look for a sorcerer
 teacher :-). At least I will finish reading those books and the references.

   Victor M. Grado

------------------------------

Date: 20 Jul 88 09:41:03 GMT
From: mailrus!uflorida!novavax!proxftl!bill@husc6.harvard.edu  (T.
      William Wells)
Subject: Re: How to dispose of the free will issue (long)

In article <11906@agate.BERKELEY.EDU> Gene W. Smith writes:
> In article <445@proxftl.UUCP>, bill@proxftl (T. William Wells) writes:
>
> >Pick your favorite definition of free will. Unless it is one
> >where the "free will" has no causal relationship with the rest
> >of the world (but then why does it matter?), the existence or
> >lack of existence of free will will have measurable consequences.
>
>   Having a causal connection to the rest of the world is not the
> same as having measurable consequences, so this argument won't
> work.

This may be true in general; however, it is not relevant to the
issue at hand.  Should something which affects ourselves be not
measurable, by what means can we assert that it be causal?

>       One possible definition of free will (with problems, but
> don't let that worry us) is that there is no function (from
> possible internal+external states to behavior, say) which
> determines what the free will agent will do.

That does not agree with my idea of `causally related'.  In fact,
I could almost use your phrase as a description of `causally
unrelated'.
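
[One way to put the quoted definition in symbols, purely for illustration:
write $s_t$ for the agent's combined internal and external state at time $t$
and $b_t$ for its behavior.  The agent is deterministic iff there exists a
function $f$ with $b_t = f(s_t)$ for every $t$; on the quoted definition it
has free will iff no such $f$ exists.  The disagreement above is then over
whether the existence or non-existence of such an $f$ has measurable
consequences.]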

---

I think that I shall bow out of the debate on free will.  My
original intent was to inject a few ideas which I had not seen
discussed before and to see what was done with them, not to
debate my own view on the subject.  I do not really have the time
for that.

Worse, I find that I have the choice between spouting vague
generalities and making definite assertions based on my own
philosophy.  Since my philosophy is derived from Objectivism my
doing the latter is guaranteed to generate lots of smoke and very
little light.

For example, ddb@ns.ns.com (David Dyer-Bennet) writes:

>  Yep, that's what you'd need to have to take the debate out of the
>religious and into the practical.  Not meaning to sound sarcastic, but
>this is a monumental philosophical breakthrough.  But could you exhibit
>some of the difficult pieces of this theory; in particular, what is
>the measurable difference between an action taken freely, and one that
>was pre-determined by other forces?

Should I answer him in vague generalities?  To do so would not be
responsive.  Should I give him my views?  Should I suggest that
there is an invalid premise in the way that his question is
phrased?  (It seems that he would like me to show what the
difference is in the action, but the difference is not in the
action but in the cause.) If I do so without explaining the
philosophical positions on which they are based, I'll fail to
demonstrate my point.  If I do try to explain my philosophy,
we'll get completely off the subject.

So, bye for now and happy debating!

------------------------------

Date: 20 Jul 88 14:10:01 GMT
From: icc!dswinney@afit-ab.arpa  (David V. Swinney)
Subject: Re: How to dispose of the free will issue

In article <407@ns.ns.com> logajan@ns.ns.com (John Logajan x3118) writes:
>
>The no-free-will theory is untestable.
>The free-will theory is like-wise untestable.
>When the no-free-will theorists are not thinking about their lack of free will
>they invariably adopt free-will outlooks.
>So go with the flow, why fight your natural instincts to believe in that which
>is un-provable.  If you must choose between un-provable beliefs, take the one
>that requires the least effort.
>
I contend that the use of the phrase "free will" is misleading.  No one
(at least no one I know of) believes in *FREE* will.
The real question is  "To what extent is the universe deterministic?".

We all (?) believe that our decisions are based on our past experience
and our personality (read genetics or spirit depending on where you are
arguing from).  Thus the question is *not* whether or not we make choices,
but rather whether or not our decision is partially or completely
determined by our prior training and nature.

The "free-will" theorists hold that are choices are only partially
deterministic and partially random.

The "no-free-will" theorists hold that are choices are completely
deterministic with no random component.

The shadings along the way tell you whether to punish crime (add negative
experiences to change behavior) or to ignore it completely (past input
makes no difference to a fully free will).

As I said before, I know no one who believes in completely free will,
but the previous example indicates that the question cannot be eliminated
by pretending that only two sides of the argument exist.



The opinions I express are my own...unless they prove to be wrong (in which
case I didn't really write this.)
D.V.Swinney     dswinney@galaxy.afit.af.mil

------------------------------

Date: Wed, 20 Jul 88 11:02 EST
From: <PGOETZ%LOYVAX.BITNET@mitvma.mit.edu>
Subject: Goedel's Theorem

    Shame on you, professor! Goedel's Theorem showed that you WILL have an
unbounded number of axioms following the method you propose. That is why most
mathematicians consider it an important theorem - it states you can never have
an axiomatic system "as complex as"
arithmetic without having true statements which are unprovable.

Phil Goetz
PGOETZ@LOYVAX.bitnet
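
[For reference, one standard precise form of the theorem (the turnstile
$T \vdash \varphi$, read "$T$ proves $\varphi$", is notation introduced only
for this note): for every consistent, recursively axiomatizable theory $T$
that interprets elementary arithmetic there is a sentence $G_T$ with
$T \nvdash G_T$ and $T \nvdash \neg G_T$; if $T$ is moreover arithmetically
sound, then $G_T$ is true in the standard model yet unprovable in $T$.]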

------------------------------

Date: 22 Jul 88 16:02:20 GMT
From: ns!logajan@umn-cs.arpa  (John Logajan x3118)
Subject: Re: How to dispose of naive science types (short)

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> logajan@ns.UUCP (John Logajan x3118) writes:
> >unproveable theories aren't very useful.

> most of your theories [...] will be unproven,
> and unproveable, if only for practical reasons.

Theories that are by their nature unproveable are completely different from
theories that are as yet unproven.  Unproveable theories are rather
special in that they usually only occur to philosophers, and have little to
do with day-to-day life.  You went on and on about unproven theories but failed
to deal with the actual subject, namely unproveable theories.

Please explain to me how an unproveable theory (one that makes no unique
predictions) can be useful?

- John M. Logajan @ Network Systems; 7600 Boone Ave; Brooklyn Park, MN 55428 -
- {...rutgers!dayton, ...amdahl!ems, ...uunet!rosevax} !umn-cs!ns!logajan    -

------------------------------

Date: 23 Jul 88 17:43:44 GMT
From: uflorida!novavax!maddoxt@gatech.edu  (Thomas Maddox)
Subject: Re: How to dispose of naive science types (short)

In article <531@ns.UUCP> logajan@ns.UUCP (John Logajan x3118) writes:
>gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>> logajan@ns.UUCP (John Logajan x3118) writes:
[Cockton]
>> most of your theories [...] will be unproven,
>> and unproveable, if only for practical reasons.
[Logajan]
>Theories that are by their nature unproveable are completely different from
>theories that are as of yet unproven.

        I believe that Gilbert Cockton is not discriminating between
assumptions (and their close relatives, hypotheses) and theories,
proven or otherwise.  John Logajan's comment comes in at a higher
conceptual level where one presumes the assumption/theory distinction
has been made.

------------------------------

Date: 23 Jul 88 18:46:25 GMT
From: bunny!rjb1@husc6.harvard.edu  (Richard J. Brandau)
Subject: Re: How to dispose of naive science types (short)

> logajan@ns.UUCP (John Logajan x3118) writes:
> Please explain to me how an unproveable theory (one that makes no unique
> predictions) can be useful?

Perhaps you mean a NONDISPROVABLE theory.  An "unproveable" theory is
a very special thing, often much harder to find than a "proveable"
theory.  If you can show that a theory is unprovable (in some axiom
set), you've done a good day's science.

No theories make "unique predictions" about the real (empirical)
world.  Are quarks the ONLY way to explain the proliferation of
subnuclear particles?  Perhaps a god of the cyclotron made them
appear.  The difference between the scientific and religious theories
is that the scientific one can be DISproven: it makes predictions that
can be TESTED.

You may, if you like, apply this distinction to the beliefs that
determine your behavior.  Since you can't disprove the existence of
God, you may choose to chuck out all religion.  Since you CAN think of
ways to disprove f=ma, you may avoid being run over by a bus.

-- Rich Brandau

|  I take no responsibility for the words or deeds of my employer, and
|  vice versa.  Symbolics is a trademark of Symbolics, Inc.  UNIX is a
|  trademark of AT&T.  Edsel is a trademark of the Ford Motor Company.

------------------------------

End of AIList Digest
********************

∂24-Jul-88  0126	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #23  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 Jul 88  01:26:09 PDT
Date: Sun 24 Jul 1988 02:00-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #23
To: AIList@AI.AI.MIT.EDU


AIList Digest            Sunday, 24 Jul 1988       Volume 8 : Issue 23

Today's Topics:

 Philosophy:

  lim(facts about the world) -> ding an sich ?
  Critique of Systems Theory
  Are all Reasoning Systems Inconsistent?
  Generality in Artificial Intelligence
  Metaepistemology and unknowability

----------------------------------------------------------------------

Date: Sun, 17 Jul 88 11:23:37 EDT
From: George McKee <mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: lim(facts about the world) -> ding an sich ?

In AIList Digest v7 #41 John McCarthy wrote
> There is no reason to expect a mathematical theorem about
> cellular automata in general or the Life cellular automaton
> in particular that says that a physicist program will be able
> to discover that the fundamental physics of its world is the
> Life cellular automaton.

This may be true if one thinks in terms of cellular automata, but
if one thinks in other terms that are convertible to statements
about automata, say the lambda calculus, the possible existence of
such a theorem is not such a far-fetched idea.  I'm enough of a
newcomer to this kind of thinking myself that I won't pretend
to understand this in exhaustive detail, but the concepts seem
to fit together well enough at low resolution...

One of the most fascinating results to come out of work in
denotational semantics is that one can look at the sequence of
statements that represent each step in the evaluation of lambda
expressions and see that not only do the steps follow in a way
that makes it proper to say that the expressions are related by
a partial order, but that this partial order has enough additional
structure that it can be called a continuous lattice, where the
definition of "continuous" is closely related to the definition
that topologists use to describe more familiar sorts of mathematical
things, like surfaces in space.  How close "closely related" has to
be in order to be convincing is unclear to me at this time.

It's this property of continuity that makes people much more
comfortable with calling the lambda calculus a "calculus" than
they used to be, and forms the basis for the rest of this argument.
(Duke Briscoe's remarks in v8 #2 suggest that he may be thinking
along these lines as well.)  It means that a reasoning system based
on the lambda calculus is halfway to being able to model real systems.
Without going into quantum mechanics, which would lead to a discussion
of a different aspect of computability, real systems in addition to
being continuous, are also dense.  That is, given an appropriate
definition of "nearby", it's always possible to find or generate
a new element between any two nearby elements.  In this sense,
real systems contain an infinite amount of detail.

The real numbers, of course, contain infinitely many values,
like pi or the square root of 2, that fail to be computable
functions in the sense that they can only be fully actualized by
nonterminating computations.  But a system like the lambda
calculus that is able to evaluate data as programs doesn't have
to compute forever in order to be able to claim to know about
irrational numbers.  Such a system can represent dense structures
even though the representational system itself may not be dense.

The issue of density is not so important in a cellular automaton
universe as it is in our own, where at human scales of measurement
the world is in fact dense, and a physics of partial differential
equations based on real numbers has been marvelously successful.

Things become really interesting when one considers a device made
of dense physical material, functioning as a digital, non-dense
computer system, attempting to discover and represent its own structure.
The device, at the physical level, is itself a ding an sich.
At the representational level, a finite representation can exist that
is not the ding an sich, but can approximate its behavior and explain
its structure to whatever degree of accuracy and detail you want,
given enough time.  Can a device (or evolutionary series of devices)
that starts out without a representation of the world devise a
series of progressively more accurate representations?  As long as
the structure of the world (the ding an sich, not the (tentative)
representation) is consistent from time to time and place to place,
I can't see why not.  After all, this is just an abstract, semantical
way of looking at learning.

But what about the last step, recognizing the convergence of the
series of world-models and representing the limit of that series,
i.e. representing *the* ding an sich rather than a set of
approximations?  The properties of continuity and density
in the lambda calculus suggest that enough analogies with the
differential calculus might exist to make this plausible, and
that farther on, a sufficiently self-referential analog computer
(the human brain may be one of this type) might be able to "compile"
the representation back into a form suitable for direct action.
My knowledge of either kind of calculus is not deep enough to
allow me to do much more than guess about how to express this
rigorously.

        In other words, even though it may not be possible to
duplicate the universe in calculo (why bother, when the world is
there to be examined?), it seems to me that it's likely to be
possible to _understand_ its principles of organization, no
matter how discrete your logic is.  Acting congruently with that
understanding is a different question.
        - George McKee
          NU Computer Science

------------------------------

Date: 19 Jul 88 05:40:06 GMT
From: bingvaxu!vu0112@cs.buffalo.edu
Reply-to: vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn)
Subject: Re: Philosophy: Critique of Systems Theory


In a previous article, larry@VLSI.JPL.NASA.GOV writes:
>Using Gilbert Cockton's references to works critical of systems theory, over
>the last month I've spent a few afternoons in the CalTech and UCLA libraries
>tracing down those and other criticisms.

I'm very sorry to have missed the original discussion. Gilbert: could
you re-mail to me?

>General Systems Theory was founded by biologist Ludwig von Bertalanffy in
>the late '40s.  It drew heavily on biology, borrowed from many areas, and
>promised a grand unified theory of all the sciences.

The newer term is "Systems Science."  For example, I study in a Systems
Science department (one of a very few in the country), and the
International Society for General Systems Research is changing its name
to the Int. Soc. for the Systems Sciences.

>The ideas gained
>momentum till in the early '70s in the "humanics" or "soft sciences" it had
>reached fad proportions.

Sad but true.

>What seems to have happened is that the more optimistic promises of GST
>failed and lost it the support of most of its followers.  Its more success-
>ful ideas were pre-empted by several fields.  These include control theory
>in engineering, taxonomy of organizations in management, and the origins of
>psychosis in social psychology.

It should not be lost sight of that "Systems Science" and "Cybernetics"
are different views of the same field.  They showed the same course of
development, especially in Norbert Wiener's career.  With the rise of
Chaos theory, fractals, connectionism, family therapy, global politics,
and so many other things, GST/Cybernetics is implicitly achieving the
kinds of results they always claimed.  The body of GST work stands as a
testament to the vision of those who could see the future of science,
even though they couldn't claim a corner for themselves.

>For me the main benefit of GST has been a personally satisfactory resolution
>of the reduction paradox, which follows.
> [ excellent description omitted ]

It is a very difficult task to defend the discipline, which I do,
because it is not clear that it is a discipline in the traditional
sense.  While it has a body of knowledge and a variety of specific
claims about the world, and especially about dialectical philosophy, it
is inherently interdisciplinary.  George Klir, one of my teachers,
describes it as a "second dimension" of science, studying the
similarities of systems across systems types.  This in itself is
addressing the problem of reduction by talking about systems at
different scales.


>This is what many physicists have done with the conflict between quantum
>and wave views of energy.

I refer you to an article I am currently reading, by another of my
professors, Howard Pattee, "The Complementarity Principle in Biological
and Social Structures," in _Journal of Social and Biological Structures_,
vol. 1, 1978.

>New kinds of systems exhibit synergy:
>attributes and abilities not observed in any of their constituent elements.
>But where do these attributes/abilities come from?

Some Systems Scientists claim emergent phenomena in the traditional
sense.  Others say that that concept is not necessary, but rather that
"emergent" phenomena are just a matter of observing at multiple scales.
The physical unity of a rock is a physical property of the electrical
"synergy" of its constituent atoms.  The same holds for a hurricane, an
organism, an economy, or a society, only with different constituents.  In
dynamical systems it is common for there to be a complex interplay
between global and local effects and phenomena.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: Tue, 19 Jul 88 10:16:06 EDT
From: jon@XN.LL.MIT.EDU (Jonathan Leivent)
Subject: Are all Reasoning Systems Inconsistent?


Within any (finite) reasoning system, it is possible to construct a sentence S
from any proposition A such that S = (S -> A) using Godel-like methods to
establish the recursion.  However, such a sentence leads to the inconsistent
conclusion that A is true - any A!

1. S = (S -> A)         ; the definition of S, true by construction
2. S -> (S -> A)        ; a weaker form of 1.
                                [U = V, so U -> V]
3. S -> S               ; an obvious tautology
4. S -> (S ↑ (S -> A))  ; from 2. and 3. by conjunction of the consequents
                                [U -> V and U -> W, so U -> (V ↑ W)]
5. (S ↑ (S -> A)) -> A  ; modus ponens
6. S -> A               ; from 4. and 5. by transitivity of ->
                                [U -> V and V -> W, so U -> W]
7. S                    ; from 1. and 6.
                                [U = V and V, so U]
8. A                    ; from 6. and 7. by modus ponens

Am I doing something wrong, or did logic just fall down the rabbit hole?
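
The sentence S = (S -> A) is the classical Curry construction, and the
eight steps above are the usual Curry-paradox derivation.  Read with the
defining equation as a biconditional, they compress to a few lines; the
following Lean 4 sketch (the theorem name and hypothesis names are
illustrative only, not taken from any library) checks the same argument:

-- Curry's paradox: from a sentence S equivalent to (S -> A),
-- an arbitrary proposition A follows.
theorem curry {S A : Prop} (h : S ↔ (S → A)) : A :=
  -- steps 2-6: assuming S, the definition gives (S -> A); applying it to S gives A
  have imp : S → A := fun s => (h.mp s) s
  -- step 7: S follows from the definition (step 1) applied to step 6
  have hS : S := h.mpr imp
  -- step 8: modus ponens
  imp hS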



-- Jon Leivent

------------------------------

Date: Thu, 21 Jul 88 16:05:59 EDT
From: jon@XN.LL.MIT.EDU (Jonathan Leivent)
Subject: more 'Are all Reasoning Systems Inconsistent?'


I have found a mistake in my original proof.  Here is a revised version that
should more aptly be titled "Are all Reasoning Systems Impractical?":


Godel's method of creating self-referential sentences allows us to create the
sentence S such that S = (P(n) -> A), where A is any proposition and P(x) is
true if the sentence represented by the Godel number x is provable in this
reasoning system.  The self-reference comes from the fact that S can be so
constructed that n is the Godel number of S, hence P(n) asserts the
provability of S.  Originally, I managed to induce inconsistency in a
reasoning system by using the sentence S = (S -> A), the inconsistency being
that A is proven true regardless of what proposition it is (even a false one
would do).  The subtle mistake with that proof is that such a sentence is not
constructable by Godel numbering techniques.  The statement S = (P(n) -> A),
where n is the Godel number of S itself, is constructable, and yields rather
grave consequences:

1.  S = (P(n) -> A)			; definition of S, true by
					  construction
					  [n is the Godel number of S itself]
2.  P(n) -> S				; if something is provable, then it is
					  true
					  [the definition of P]
3.  P(n) -> (P(n) -> A)			; from 1 and 2 by substitution for S
4.  P(n) -> P(n)			; tautology
5.  P(n) -> (P(n) ↑ (P(n) -> A))	; conjunction of the consequents in 3
					  and 4
6.  (P(n) ↑ (P(n) -> A)) -> A		; modus ponens
7.  P(n) -> A				; transitivity of -> from 5 and 6
8.  S					; from 1 and 7
9.  P(n)				; the fact that steps 1 thru 8 prove S
					  is sufficient to prove P(n)
10. A					; from 7 and 9 by modus ponens

So, it seems on the surface that the same inconsistency is constructable: that
regardless of what A is, it can be proven to be true.  However, the conclusion
in step 9 that P(n) is true based on the derivation of S in steps 1 thru 8,
combined with the axiom P(n) -> S used in step 2, may be the source of the
inconsistency.  Perhaps, in order for P(n) to imply S, it must not lead to
inconsistency (a proof is not a proof if it leads to a contradiction).  This
insistence seems to be quite self-serving, but it does the trick - the
derivation of S in steps 1 thru 8 is not a proof of S because it "eventually"
leads to inconsistency in step 10, hence step 9 is not valid (note that if A is
instantiated to be a true proposition, then no contradiction is reached - only
if A is false or uninstantiated is there a contradiction).  We seem to have
saved the day, except for one thing: we are requiring that a true proof of any
statement involve the exhaustive search for inconsistency (contradictions).
The penalty is that this forces reasoning systems to take infinite time to
generate "true" theorems (otherwise, there may be a contradiction lurking
under the next stone).  There is no simple heuristic to use to determine that
the search for inconsistency can end (some would suggest that only theorems
that make statements about a reasoning system's own proof procedure are in
doubt, but the above theorem can be transformed isomorphically into a theorem
entirely about number theory with no reference to a proof procedure (using
Godel numbering techniques again) - the new theorem would still have the same
problems).  So, any reasoning system that can do things in finite time should
be doubted (the theorems may be true, but then again ...).
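
In provability-logic notation (a sketch only: write \Box for the predicate
P, so that \Box S abbreviates P(n)), the derivation reads:

\begin{align*}
1.\;& S \leftrightarrow (\Box S \rightarrow A) && \text{diagonal construction (step 1)}\\
2.\;& \Box S \rightarrow S && \text{reflection: ``provable implies true''}\\
7.\;& \Box S \rightarrow A && \text{from 1 and 2, as in steps 3--6}\\
8.\;& S && \text{from 1 and 7}\\
9.\;& \Box S && \text{necessitation, since 1--8 prove } S\\
10.\;& A && \text{from 7 and 9}
\end{align*}

Lob's theorem states that a consistent theory proves an instance of the
reflection schema \Box X \rightarrow X only when it already proves X
itself, which is one standard way of locating the trouble in step 2 rather
than in the notion of finite proof.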

Leivent's Theorem: Doubt all theorems (including this one).

There's something rather sinister about this: how can this theorem be
disproven?  If one succeeds in proving the contrary in finite time, perhaps
the theorem is still true, and the proof to the contrary would eventually lead
to a contradiction.  Perfect proof is, in practice, impossible.

-- Jon Leivent

------------------------------

Date: 20 Jul 88 08:32 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Generality in Artificial Intelligence

Steve Easterbrook gives us an old idea: that the way to get all the knowledge
our systems need is to let them experience it for themselves.  It doesn't work
like that, however, for several reasons.

First, most of our common sense isn't in episodic memory.  That candles burn
more often than tables isn't something from episodic memory, for example.  Or
is the suggestion that we only store episodes and do the inductive
generalisations whenever we need to by remarkably efficient internal access
machinery?  Apart from the engineering difficulties (I can imagine 1PC being
reinvented as a handy device to save memory), this has the problem that lots
of what we know CAN'T be induced from experiences.  I've never been to Africa
or ancient Rome, for example, but I know a fair bit about them.

But the more serious problem is that the very idea of storing experiences
assumes that there is some way to encode the experiences, some episodic memory
which can represent episodes.  I don't mean one which knows how to index them
efficiently, just one that puts them into memory in the first place.  You know
that, in your `..experience, candles are burnt much more often than tables.'
How are all those experiences of candle-combustion represented in your head?
Put your wee robot in front of a candle, and what happens in its head?  Now
think of all the other stuff your episodic memory has to be able to represent.
How is this representing done?  Maybe after a while following this thought you
will begin to see McCarthy's footsteps on the trail in front of you.

Pat Hayes

------------------------------

Date: Thu, 21 Jul 88 21:19 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: metaepistemology and unknowability

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In AIList Digest   V8 #9, David Chess <CHESS@ibm.com> writes:

>Can anyone complete the sentence "The actual world is unknowable to
>us, because we have only descriptions/representations of it, and not..."?

I may have misused the word "unknowable".  I'm applying a mechanistic
model of human thinking: it is an electrochemical process, neuron
activation patterns representing objects which one thinks of.  The
heart of the matter is whether you can say a person or a robot *knows*
something if all it has is a representation, which may be right or
wrong, with no way for it to get absolute knowledge.  Well,
the philosophy of science has a lot to say about describing
reality with a theory or a model.

Note that there are two kinds of models here.  The human brain
utilizes electrochemical, intracranial models without us being aware
of it; the philosophy of science involves written theories and
models which are easy to examine, manipulate and communicate.

I would say that the actual world is unknowable to us because we have
only descriptions of it, and not any kind of absolutely correct,
totally reliable information involving it.

>(I would tend to claim that "knowing" is just (roughly) "having
> the right kind of descriptions/representations of", and that
> there's no genuine "unknowability" here; but that's another
> debate...)

The unknowability here is uncertainty about the actual state of the
world very much in the same sense as scientific theories are theories,
not pure, absolute truths.


Andy Ylikoski

------------------------------

End of AIList Digest
********************

∂25-Jul-88  2251	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #24  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 25 Jul 88  22:51:38 PDT
Date: Tue 26 Jul 1988 01:28-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #24
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 26 Jul 1988      Volume 8 : Issue 24

Today's Topics:

  Does AI kill?    Fifth in a series ...

----------------------------------------------------------------------

Date: 20 Jul 88 19:31:09 GMT
From: hpl-opus!hpccc!hp-sde!hpfcdc!hpislx!hpmtlx!ed@hplabs.hp.com 
      ($Ed Schlotzhauer)
Subject: Re: Re: AI kills?

Steve Frysinger brings a valuable viewpoint into this discussion.  It sure
is easier to be a self-righteous Monday-morning quarterback when *I'm* not
the one in a battle zone having to make an immediate, life-and-death decision.

I am reading the book "Very Special Intelligence" by Patrick Beesly.  It is
the story of the British Admiralty's Operational Intelligence Center during
WWII.  It shows vividly how a major part of the process of making sense of
the bits and pieces of intelligence data relies on the experience, intuition,
hunches, and just plain luck of the intelligence officers.  Granted our
"modern" techniques are not nearly so "primitive" (I wonder), but the same
must be true today.

I haven't finished the book yet, but two observations that immediately come to
mind are:

        1. In a war situation, you almost *never* have complete or even
        factual information at the time that decisions *must* be made.

        2. I would *never* trust a machine to make operational recommendations.

Ed Schlotzhauer

------------------------------

Date: Wed, 21 Jul 1988 20:00
From: Ferhat DJAVIDAN <DJAVI@TRBOUN>
Subject: Re: Re: does AI kill? by <stewart@sunybcs.bitnet>

In article V8 #13 <stewart@sunybcs.bitnet> Norman R. Stewart "Jr"  writes:

>While I'm at it, the Iranians began attacking defenseless commercial
>ships in international waters, killing innocent crew members, and
>destroying non-military targets (and dumping how much crude oil into
>water?).  Criticizing the American Navy for coming to defend these
>ships is like saying that if I see someone getting raped or mugged
>I should ignore it if it is not happening in my own yard.
>
>The Iranians created the situation, let them live with it.
>
>
>
>Norman R. Stewart Jr.             *  How much more suffering is
>C.S. Grad - SUNYAB                *  caused by the thought of death
>internet: stewart@cs.buffalo.edu  *  than by death itself!
>bitnet:   stewart@sunybcs.bitnet  *                       Will Durant

I am very sorry, but this is not the subject for AIList.
We are not discussing the politics, economics or strategies of the USA
in this list.
According to you (I mean NORMAN), the USA sees herself as a bodyguard
of the world and has the power to kill. In some cases this is true.
But only a fanatic person may say
>The Iranians created the situation, let them live with it.
First of all this is wrong. Iraq started the war, including the situation.
I think you heard wrong; correct yourself.
Also, both sides are still buying some of their guns directly from the USA.
I don't want to say "killing innocent people is the job of the USA".
Please don't push me to do this.
Humanity will never forget Vietnam, Cuba, Nicaragua, Palestine, ...
and the indirect results that occurred because of the guns made in the U.S.A.

Please don't discuss anything except AI in this list, else God knows!?

P.S. Good (or bad for you) news about the acceptance of peace by Iran.

Ferhat DJAVIDAN
<djavi@trboun.bitnet>

------------------------------

Date: Fri, 22 Jul 88 10:26 EDT
From: <MORRISET%URVAX.BITNET@MITVMA.MIT.EDU> (Cow Lover)
Subject: Re: Does AI kill?


  Thinking on down the line...

  Suppose we eventually construct an artificially intelligent
  "entity."  It thinks, "feels", etc...  Suppose it kills someone
  because it "dislikes" them.  Should the builder of the entity
  be held responsible?

  Just a thought...

  Greg Morrisett
  BITNET%"MORRISET@URVAX"

------------------------------

Date: 22 Jul 88 18:29:35 GMT
From: smithj@marlin.nosc.mil  (James Smith)
Subject: Re: does AI kill?

In article <754@amelia.nas.nasa.gov> Michael S. Fischbein writes:
>Nonsense. The Navy Tactical Data System (NTDS)... had very similar
>capabilities as far as tracking and individual's displays go.

Yes and no.  While the individual display consoles are similar in
function, the manner in which the data is displayed is quite different.
With an NTDS display, you get both radar video and computer-generated
symbols; AEGIS presents computer-generated symbology ONLY. Therefore,
the operator doesn't have any raw video to attempt to classify a target
by size, quality, etc. (As a side note, it is VERY difficult to
accurately classify a target solely on the basis of its radar return -
the characteristics of a radar return depend on so many factors
(e.g. target orientation, target altitude and range (fade zones),
external target loads (fuel tanks, ordnance, etc.)) that it is simply
not true to say 'an Airbus 300 will have a larger radar blip than an
F-14'.)

>I would appreciate any reference that compares the SPY-1 to the SPS-48
>and shows significantly greater precision to either.

I can find no _specific_ references in the unclassified literature;
however, it is commonly accepted that the SPY-1 radar has significantly
greater accuracy in both bearing (azimuth and elevation) and range
than other operational systems; in addition, the SPY-1 provides the
weapon control systems with several target updates per second, as
opposed to the roughly 5 second update intervals for the SPS-48 (Jane's
Weapon Systems, 1987-1988). The SPS-48(E), while providing an improvement
in performance over the older SPS-48(C), does not significantly alter
this.


+-----------------------------------------------------------------+
|    Jim Smith                  | If we weren't crazy,            |
| (619) 553-3704                |    we'd all go insane           |
|                               |                                 |
| UUCP: nosc!marlin!smithj      |            - Jimmy Buffett      |
| DDN:  smithj@marlin.nosc.MIL  |                                 |
+-----------------------------------------------------------------+

------------------------------

Date: 22 Jul 88 22:19:22 GMT
From: smithj@marlin.nosc.mil  (James Smith)
Subject: Re: AI...Shoot/No Shoot

In article <854@lakesys.UUCP> tomk@lakesys.UUCP (Tom Kopp) writes:
>Does anybody know if Naval Command Candidates go through similar
>testing (re: Shoot/No-Shoot decision)?

Yes, they do, starting as junior officers 'playing' a naval warfare
simulation known as "NAVTAG". NAVTAG is used to teach basic tactics of
sensor and weapon employment. Though somewhat restricted in terms of
the types of platforms it can simulate, within these limitations
NAVTAG does a fairly good job of duplicating the characteristics of
existing shipboard systems, and provides a useful tool in the
education of young naval officers.

Before someone becomes the commanding officer of a warship (e.g.
USS Vincennes), an officer will have been a "Tactical Action
Officer" during a previous shipboard tour. This individual has the
authority to release (shoot) ship's weapons in the absence of the
CO. TAO school involves a lot of simulated combat, and forces the
candidate to make numerous shoot/no-shoot decisions under conditions
as stressful as can be attained in a school (simulator) environment.
In addition, before assuming command, a prospective commanding officer
goes through a series of schools designed to hone his knowledge/decision-
making skills in critical areas.

>I still don't understand WHY the computers mis-lead the crew as to the
>type of aircraft...

Read article 660 on comp.society.futures.

Also, IAF 655 was 'squawking' MODE-2 IFF, which is used EXCLUSIVELY
for military aircraft/ships. The code they had set had, apparently, been
previously correlated with an Iranian F-14; thus, the computer made
the correlation between the aircraft taking off from Bandar Abbas
(which is known to be used as a military airbase) and the MODE-2 IFF.
This, coupled with the lack of response to radio challenges on both
civil and military channels, resulted in the aircraft being declared
'hostile' and subsequently destroyed.

------------------------------

Date: Sat, 23 Jul 88 17:36 EST
From: steven horst 219-289-9067           
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Does AI kill? (long)

I must say I was taken by surprise by the flurry of messages about
the tragic destruction of a commercial plane in the Persian Gulf.
But what made the strongest impression was the *defensive* tone
of a number of the messages.  The basic defensive themes seemed to be:
  (1) The tracking system wasn't AI technology,
  (2) Even if it WAS AI technology, IT didn't shoot down the plane
  (3) Even if it WAS AI and DID shoot down the plane, we mustn't
      let people use that as a reason for shutting off our money.
Now all of these are serious and reasonable points, and merit some
discussion.  I think we ought to be careful, though, that we
don't just rationalize away the very real questions of moral
responsibility involved in designing systems that can affect (and
indeed terminate) the lives of many, many people.  These questions
arise for AI systems and non-AI systems, for military systems and
commercial expert systems.
     Let's start simple.  We've all been annoyed by design flaws
in comparatively simple and extensively tested commercial software,
and those who have done programming for other users know how hard
it is to idiotproof programs much simpler than those needed by the
military and by large private corporations.  If we look at expert
systems, we are faced with additional difficulties: if the history of
AI has shown anything, it has shown that "reducing" human reasoning
to a set of rules, even within a very circumscribed domain, is much
harder than people in AI 30 years ago imagined.
     But of course most programs don't have life-and-death consequences.
If WORD has bugs, Microsoft loses money, but nobody dies.  If SAM
can't handle some questions about stories, the Yale group gets a grant
to work on PAM.  But much of the money that supports AI research
comes from DOD, and the obvious implication is that what we design
may be used in ways that result in dire consequences.  And it really
won't do to say, "Well, that isn't OUR fault....after all, LOTS of
things can be used to hurt people.  But if somebody gets hit by a car,
it isn't the fault of the guy on the assembly line."  First of all,
sometimes it IS the car company's fault (as was argued against Audi).
But more to the point, the moral responsibility we undertake in
supplying a product increases with the seriousness of the consequences
of error and with the uncertainty of proper performance.  (Of course
even the "proper" performance of weapons systems involves the designer
in some moral responsibility.)  And the track record of very large
programs designed by large teams - often with no one on the team
knowing the whole system inside and out - is quite spotty, especially
when the system cannot be tested under realistic conditions.
     My aim here is to suggest that lots of people in AI (and other
computer-related fields) are working on projects that can affect lots
of people somewhere down the road, and that there are some very
real questions about whether a given project is RIGHT - questions
which we have a right and even an obligation to ask of ourselves, and
not to leave for the people supplying the funding.  Programs that
can result in the death or injury of human beings are not morally
neutral.  Nor are programs that affect privacy or the distribution
of power or wealth.  We won't all agree on what is good, what is
necessary evil and what is unpardonable, but that doesn't mean we
shouldn't take very serious account of how our projects are
INTENDED to be used, how they might be used in ways we don't intend,
how flaws we overlook may result in tragic consequences, and how
a user who lacks our knowledge or uses our product in a context it
was not designed to deal with can cause grave harm.
     Doctors, lawyers, military professionals and many other
professionals whose decisions affect other people's lives have ethical
codes.  They don't always live up to them, but there is some sense
of taking ethical questions seriously AS A PROFESSION.  It is good to
see groups emerging like Computer Professionals for Social
Responsibility.  Perhaps it is time for those of us who work in AI
or in computer-related fields to take a serious interest, AS A
PROFESSION, in ethical questions.
     --Steve Horst     BITNET address....gkmarh@irishmvs
                       SURFACE MAIL......Department of Philosophy
                                         Notre Dame, IN  46556

------------------------------

Date: 25 Jul 88 17:35:13 GMT
From: rochester!ken@cu-arpa.cs.cornell.edu  (Ken Yap)
Subject: Re: does AI kill?

I find this discussion as interesting as the next person but it is
straying from the charter of these newsgroups. Could we please continue
in soc.politics.arms-d or talk.politics?

        Ken




[Editor's note:

        I couldn't agree more.

        There is a time and a place for discussions like this.  It is
*not* in a large multi-network mailing list (gatewayed to several
thousand readers world-wide) intended for discussion of AI related
topics.


                - nick]

------------------------------

End of AIList Digest
********************

∂26-Jul-88  0236	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #25  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Jul 88  02:36:34 PDT
Date: Tue 26 Jul 1988 01:41-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #25
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 26 Jul 1988      Volume 8 : Issue 25

Today's Topics:

 Philosophy:

  thesis + antithesis = synthesis
  Turing machines and brains (long)
  What can we learn from computers
  Metaepistemology and unknowability

----------------------------------------------------------------------

Date: Fri, 22 Jul 88 17:37 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: thesis + antithesis = synthesis

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In AIList Digest   V8 #9, sme@doc.ic.ac.uk (Steve M Easterbrook)
writes:

>In a previous article, YLIKOSKI@FINFUN.BITNET writes:
>>> "In my opinion, getting a language for expressing general
>>> commonsense knowledge for inclusion in a general database is the key
>>> problem of generality in AI."
>>...
>>Here follows an example where commonsense knowledge plays its part.  A
>>human parses the sentence
>>
>>"Christine put the candle onto the wooden table, lit a match and lit
>>it."
>>
>> ... LOTS of stuff by me deleted ...
>>
>>Therefore, Christine lit the candle, not the table."
>
>Aren't you overcomplicating it a wee bit? My brain would simply tell me
>that in my experience, candles are burnt much more often than tables.
>QED.

There could be some kind of default reasoning going on.

Candles are burnt, not tables, unless there is something out of the
ordinary about the situation.  Christine is an ordinary person and the
situation is ordinary, therefore the candle was lit.
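
A minimal sketch of that default rule (the lexicon and the "abnormal"
marking below are invented purely for illustration, not a serious
discourse model):

# Resolving "it" in "... lit a match and lit it" by default reasoning:
# prefer the referent that is burnt by default, unless the situation is
# explicitly marked as out of the ordinary.

DEFAULTS = {
    "candle": True,    # candles are burnt by default
    "table":  False,   # tables are not
}

def resolve_lit_referent(candidates, abnormal=frozenset()):
    """Return the candidate the default rule says was lit, or None."""
    for obj in candidates:
        burns = DEFAULTS[obj]
        if obj in abnormal:      # something out of the ordinary: default withdrawn
            burns = not burns
        if burns:
            return obj
    return None                  # no default conclusion; deeper reasoning needed

print(resolve_lit_referent(["candle", "table"]))                       # -> candle
print(resolve_lit_referent(["candle", "table"], abnormal={"candle"}))  # -> None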

--- Andy

"When someone comes to you and shows you are wrong, don't start an
argument, try to create a synthesis."

------------------------------

Date: Fri, 22 Jul 88 16:39 EST
From: steven horst 219-289-9067           
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Turing machines and brains (long)


   I don't recall how the discussion of Turing machines and their
suitability for "imitating" brains started in recent editions of
the Digest, but perhaps the following considerations may prove
useful to someone.
    I think that you can go a long way towards answering the question
of whether Turing machines can "imitate" brains or minds by rephrasing
the question in ways that make the right sorts of distinctions.
There have been a number of attempts to provide more perspicuous
terminology for modeling/simulation/duplication of mental
processes, brains -- or for that matter any other sorts of objects,
systems or processes.  (I'll use "system" as a general term for things
one might want to model.)  Unfortunately, none of these attempts
at supplying a technical usage has become standard.  (Though personally
I am partial to the one Martin Ringle uses in his anthology of
articles on AI.)
    The GENERAL question, however, seems to be something like this:
WHAT FEATURES CAN COMPUTERS (or programs, or computers running programs,
it really doesn't matter from the perspective of functional
description) HAVE IN COMMON WITH OTHER OBJECTS, SYSTEMS AND PROCESSES?
Of course some of these properties are not relevant to projects of
modeling, simulation or creation of intelligent artifacts.  (The fact
that human brains and Apple Macintoshes running AI programs both exist
on the planet Earth, for example, is of little theoretical interest.)
A second class of similarities (and differences) may or may not be
relevant, depending on one's own research interests.  The fact that
you cannot build a universal Turing machine (because it would require
an infinite storage tape) is irrelevant to the mathematician, but
for anyone interested in fast simulations, it is important to have
a machine that is able to do the relevant processing quickly and
efficiently.  So it might be the case that a Turing machine could
run a simulation of some brain process that was suitably accurate,
but intolerably slow.
    But the really interesting questions (from the philosophical
standpoint) are about (1) what it is, in general, for a program to
count as a model of some system S, and (2) what kinds of features
Turing machines (or other computers) and brains can share.  And I
think that the general answer to these questions goes something like
this: what a successful model M has in common with the system S of
which it is a model is an abstract (mathematical) form.  What, after
all, does one do in modeling?  One examines a system S, and attempts
to analyze it into constituent parts and rules that govern the
interactions of those parts.  One then develops a program and data
structures in such a manner that every unit in the system modeled
has a corresponding unit in the model, and the state changes of
the system modeled are "tracked" by the state changes of the model.
The ideal of modeling is isomorphism between the model and the
system modeled.  But of course models are never that exact, because
the very process of abstraction that yields a general theory
requires one to treat some factors as "given" or "constant" which
are not negligible in the real world.
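
As a toy illustration of this "tracking" relation (the system, the model,
and the abstraction map below are all invented for the example), one can
check a homomorphism condition: abstracting a system state and then
stepping the model should give the same result as stepping the system and
then abstracting.  A minimal Python sketch:

# Toy example of a model "tracking" a system.
# System: a counter that advances by 3 (standing in for a detailed process).
# Model:  only the parity of the counter is represented.

def system_step(x: int) -> int:
    """One state change of the (detailed) system."""
    return x + 3

def model_step(p: int) -> int:
    """One state change of the (abstract) model; p is a parity bit."""
    return (p + 1) % 2

def h(x: int) -> int:
    """Abstraction map from system states to model states."""
    return x % 2

# The model tracks the system iff h(system_step(x)) == model_step(h(x))
# for every state x we care about.
for x in range(1000):
    assert h(system_step(x)) == model_step(h(x))
print("the model's state changes track the system's on all tested states")
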
    With respect to brains and Turing machines, it might be helpful
to ask questions in the following order:
  (1) What brain phenomena are you interested in studying?
  (2) Can these phenomena be described by lawlike generalizations?
  (3) Can a Turing machine function in a manner truly isomorphic
      to the way the brain phenomenon functions?  (I take it that
      for many real world processes, such ideal modeling cannot be
      accomplished because, to put it crudely, most of reality
      is not digital.)
  (4) Can a Turing machine function in a manner close enough to being
      isomorphic with the way the brain processes function so as to be
      useful for your research project?  (Whatever it may be....)
  (5) Just what is the relationship between corresponding parts of
      the model and the system modeled?  Most notably, is functional
      isomorphism the only relevant similarity between model and
      system modeled, or do they have some stronger relationships
      as well- e.g., do they respond to the same kinds of input
      and produce the same kinds of output?

What is interesting from the brain theorist's point of view, I should
think, is the abstract description that program and brain process
share.  The computer is just a convenient way of trying to get at that
and testing it.  Of course some people think that minds or brains
share features with (digital) computers at some lower level as well,
but that question is best kept separate from the questions about
modeling and simulation.
     Sorry this has been so long.  I hope it proves relevant to
somebody's interests.

Steven Horst
   BITNET address....gkmarh@irishmvs
   SURFACE MAIL......Department of Philosophy
                     Notre Dame, IN  46556

------------------------------

Date: Sun, 24 Jul 88 11:06:43 -0200
From: Oded Maler <oded%WISDOM.BITNET@CUNYVM.CUNY.EDU>
Subject: What can we learn from computers


What Can We Learn From Computers and Mathematics?
=================================================

A few digests ago Gilbert Cockton raised the above rhetorical question as
part of an attack on the computationally-oriented methodologies of AI
research.  As a side remark let us recall that many AIers see their main
question as "What can we teach computers?" or "What are the limits to
the application of Mathematics to real-world modeling?"

Although I consider a lot of Cockton's criticism as valid, his claim
about the non-existent contribution of Mathematics and Computers to the
understanding of intelligence is at least as narrow-minded as the
practice of those AI researchers who ignore the relevance of Psychology,
Sociology, Philosophy and disciplines alike.

I claim that experience with computers, machines, formal systems, in
all levels, (starting from typing, programming and hacking, through
computer science and up to pure mathematics) can teach a person (and a
scientist) a lot, and may build relevant intuitions, metaphors and
perspectives.  Some of it cannot be gained through a life-long traditional
humanistic scholarship.

Just imagine what a self-aware person can learn from using a
word-processor by introspecting his own performance: Context-sensitive
interpretation of symbols, learning by analogy (when you move from one
WP to another), reversible and irreversible operations, eye-brain-hand
coordination. I'm sure Mr. Cockton's thoughts have benefited from such
an experience, so think of those who were using or designing such devices
for some decades.

And what about programming? Algorithms, nested loops, procedures and
hierarchical organization, stacks and recursive calls, data-structures
in general, input and output, buffers, parameter passing and communication,
efficiency of programs, top-down refinement, and the most important
experience: debugging.  Never before in history had so many people been
so often involved in the process of making theories (about the
behavior of programs), watching them being refuted, fixing them and
testing again.  It is very easy to criticize the simplistic incorporation
of such paradigms into models for human thinking, as many hackers and
so-called AI researchers often do, but to think that these metaphors are
useless and irrelevant to the study of intelligence is just making an
ideology out of one's ignorance.

And let's proceed to theoretical computer science: the limits of
what is computable by certain machines, the results from
complexity theory about what can be performed in principle and yet
is practically impossible, the paradigm of a finite-state machine
with input, output and internal states, mathematical logic and its
shortcomings, the theory and practice of databases, the research
in distributed systems, the formal treatment of knowledge and belief:
is none of these relevant to the humanities?

Not the mention the mathematical ideas concerning sets, infinity,
order relations, distance functions, convergence, algebraic structures,
the foundations of probability, dynamical systems, games and strategies
and many many others.

Mr. Cockton was implicitly concerned with the following question:
"What is the best selection of formal and informal experience, a
scientist must have during his/her (hopefully ongoing) development, in
order to contribute to the advance of the cognitive sciences?" Excluding
experiences of the above kind from the candidate list is not what I
expect from adherents of the scholarly tradition.  Every scientist grows
within some sub-sub-discipline, learns its methodologies, tricks, success
criteria and its most influential bibliographic references.  When we
turn later to inter-disciplinary areas, we cannot go back to kindergarten,
and start learning a new discipline as undergraduates do. We must learn
to discover those parts of other disciplines that are essential and
relevant to our object of study. Because of our inherent limitations
(complexity..) we are doomed to neglect and ignore a lot of work done
within other disciplines and even within our own. C'est la vie.

I agree with Mr. Cockton that by reading some AI work one gets the impression
that history began just around the production of the paper: no references
to prior work and to past philosophical and psychological treatments of
the same issues. But going to the other extreme, by imposing the scholarly
(and sometimes almost snobbish) tradition on the modern cognitive sciences,
is ridiculous too. Does a physicist or a chemist need to give references to
pre-Newtonian works, to astrology or alchemy? (I'm not speaking of
the historian or philosopher of science.)

I got the impression that Mr. Cockton views informaticians and
mathematicians in a rather stereotyped way: technocrats, misanthropes who
can only deal with machines, persons who want to put the lively world
into their cold formulae and programs, individuals who are insensitive
to the arts and to human-human interaction.  All this might be partially
true with respect to a certain fraction, but to generalize to the whole
community is like saying that humanists are nothing but a bunch of guys,
incapable of clear and systematic thinking, who use the richness and
ambiguity of natural language in order to hide their either self-evident
or vacuous and meaningless ideas.  I don't want to continue this
local-patriotic type of propaganda, but one can fill several screens
with similar deficiencies of the current traditions in the humanities.
The applicability of such descriptions to some fraction of the ongoing
work in the humanities still does not justify a claim such as "What
Can We Learn from (say) Sociology?".

To conclude, I think that a good cognitive scientist can learn A LOT
from mathematics and computers. A humanist may still do important
work in spite of his mathematical ignorance, but I suspect that
in some fields this will become harder and harder.

Oded Maler
Dept. of Applied Mathematics
Weizmann Institute
Rehovot 76100
Israel

oded@wisdom.bitnet

------------------------------

Date: 25 Jul 88 04:26:25 GMT
From: steve@comp.vuw.ac.nz (Steve Cassidy)
Reply-to: steve@comp.vuw.ac.nz (Steve Cassidy)
Subject: Re: metaepistemology and unknowability


In a previous article  YLIKOSKI@FINFUN.BITNET (Andy Ylikoski) writes:

>I would say that the actual world is unknowable to us because we have
>only descriptions of it, and not any kind of absolutely correct,
>totally reliable information involving it.

This seems like a totally useless definition of Knowing; what have you
gained by saying that I do not *know* about chairs because I only have
representations of them?  This seems to be a problem people have in
making definitions of concepts in cognition.

Dan Dennett tries, in Brainstorms, to provide a useful definition of
something like what we mean by "intelligence". To avoid the problems of
emotional attachment to words he uses the less emotive "intentionality". He
develops a definition that could be useful in deciding how to make systems
act like intelligent actors by restricting that definition to accurate
concepts.  (As yet I don't claim to understand what he means, but I think I
get his drift.)

Now, we can argue whether Dennett's 'intentionality' corresponds to
'intelligence' if we like, but what will it gain us? It depends on what your
goals as an AI researcher are. I'm interested in building models of cognitive
processes - in particular, reading - and my premise in doing this is that
cognitive processes can be modelled computationally, and that by building
computational models we can learn some more about the real processes. I am not
interested in whether, at the end of the day, I have an intelligent system, a
simulation of an intelligent system or just a dumb computer program. I will
judge my performance on results -- does it behave in a similar way to humans?
If so, my model, and the theory it is based upon, is good.

Is there anyone out there whose work will be judged good or bad depending on
whether it can be ascribed `intelligence'?  It seems to me that it is only
useful to make definitions to some end, rather than for the sake of making
definitions; we are, after all, Applied Epistemologists and not
Philosophers (:-)


Steve Cassidy                               domain: steve@comp.vuw.ac.nz|
Victoria University, PO Box 600,   -------------------------------------|
Wellington, New Zealand              path: ...seismo!uunet!vuwcomp!steve|

"If God had meant us to be perfect, He would have made us that way"
                                             - Winston Niles Roomford III

------------------------------

End of AIList Digest
********************

∂26-Jul-88  0656	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #26  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Jul 88  06:56:22 PDT
Date: Tue 26 Jul 1988 01:46-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #26
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 26 Jul 1988      Volume 8 : Issue 26

Today's Topics:

 Announcements:

  Knowledge Acquisition Workshop Proceedings
  Connectionism Conference
  late volunteers for AAAI-88???
  Chomsky awarded Kyoto Prize in basic sciences
  Prolog Source Library

----------------------------------------------------------------------

Date: 21 Jul 88 16:08:27 GMT
From: bcsaic!john@june.cs.washington.edu (John Boose)
Subject: Knowledge Acquisition Workshop Proceedings


KNOWLEDGE ACQUISITION WORKSHOP PUBLICATIONS

We have received numerous requests for proceedings information from
recent knowledge acquisition workshops; publication availability is
noted below.

John Boose, Brian Gaines, Co-Chairs

--------------------------------------------------------------------------
Knowledge Acquisition Workshop Publications
July 1988

Knowledge Acquisition Workshop, Banff, Canada, November 1986
Preprints distributed to attendees only.
Revised and updated papers published in the International Journal of
Man-Machine Studies, January, February, April, August and September 1987
special issues.
Papers plus editorial material and index collected in two books:
Gaines, B.R. & Boose, J.H. (Eds) Knowledge Acquisition for Knowledge-Based
Systems. London: Academic Press, 1988 (due for release Fall 1988).
Boose, J.H. & Gaines, B.R. (Eds) Knowledge Acquisition Tools for Expert
Systems.  London: Academic Press, 1988 (due for release Fall 1988).

European Knowledge Acquisition Workshop, EKAW87, Reading, UK, September 1987
Proceedings available as:
Proceedings of the First European Workshop on Knowledge Acquisition for
Knowledge-Based Systems.
Send sterling money order or draft for 39.00 payable to
University of Reading to:
Prof. T.R.Addis, Department of Computer Science, University of Reading,
Whiteknights, PO Box 220, Reading RG6 2AX, UK.

Knowledge Acquisition Workshop, Banff, Canada, October 1987
Preprints distributed to attendees only.
Revised and updated papers being published in the International Journal of
Man-Machine Studies, 1988 regular issues (in press).
Papers plus editorial material and index will be collected in book form,
together with other knowledge acquisition papers from IJMMS in 1989.

European Knowledge Acquisition Workshop, EKAW88, Bonn, West Germany, June 1988
Proceedings available as:
Proceedings of the European Workshop on Knowledge Acquisition for
Knowledge-Based Systems (EKAW88).
Send order to (the GMD will invoice you for DM68.00 plus postage):
Marc Linster, Institut für Angewandte Informationstechnik der Gesellschaft für
Mathematik und Datenverarbeitung mbH, Schloss Birlinghoven,
Postfach 1240, D-5205 Sankt Augustin 1, West Germany.

Knowledge Acquisition Workshop, Banff, Canada, November 1988
Preprints available (400-500 pages, early November 1988) as:
Proceedings of the 3rd Knowledge Acquisition for Knowledge-Based Systems
Workshop.
Send money order, draft, or check drawn on US or Canadian bank for US$65.00 or
CDN$85.00 to:
SRDG Publications, Department of Computer Science, University of Calgary,
Calgary, Alberta, Canada T2N 1N4.
Revised and updated papers being published in the International Journal of
Man-Machine Studies, 1989 regular issues.
Papers plus editorial material and index will be collected in book form,
together with other knowledge acquisition papers from IJMMS in 1990.

We are planning to hold the 3rd European Knowledge Acquisition Workshop
in Paris in July, 1989, and the 4th AAAI-sponsored Knowledge Acquisition
Workshop in Banff, Canada, in the fall of 1989. Watch the net and AI
Magazine for details.
--
John Boose, Boeing Artificial Intelligence Center
  arpa: john@boeing.com     uucp: uw-beaver!uw-june!bcsaic!john

------------------------------

Date: 22 Jul 88 16:34 +0200
From: Rolf Pfeifer <pfeifer%ifi.unizh.ch@RELAY.CS.NET>
Subject: Connectionism Conference

*****************************************************************************

SGAICO Conference

*******************************************************************************

Program and Call for Presentation of Ongoing Work

       C O N N E C T I O N I S M   I N   P E R S P E C T I V E

                University of Zurich, Switzerland
                     10-13 October 1988

Tutorials:              10 October 1988
Technical Program:      11 - 12 October 1988
Workshops and
  Poster/demonstration
  session               13 October 1988

******************************************************************************
Organization:           - University of Zurich, Dept. of Computer Science
                        - SGAICO (Swiss Group for Artificial Intelligence and
                                Cognitive Science)
                        - Gottlieb Duttweiler Institute (GDI)

About the conference
____________________

Introduction:
Connectionism has gained much attention in recent years as a paradigm for
building models of intelligent systems in which interesting behavioral
properties emerge from complex interactions of a large number of simple
"neuron-like" elements. Such work is highly relevant to fields such as
cognitive science, artificial intelligence, neurobiology, and computer
science and to all disciplines where complex dynamical processes and
principles of self-organization are studied. Connectionist models seem to be
suited for solving many problems which have proved difficult in the past
using traditional AI techniques. But to what extent do they really provide
solutions? One major theme of the conference is to evaluate the import of
connectionist models for the various disciplines. Another one is to see
in what ways connectionism, being a young discipline in its present form,
can benefit from the influx of concepts and research results from other
disciplines. The conference includes tutorials, workshops, a technical program
and panel discussions with some of the leading researchers in the field.

Tutorials:
The goal of the tutorials is to introduce connectionism to people who are
relatively new to the field. They will enable participants to follow the
technical program and the panel discussions.

Technical Program:
There are many points of view to the study of intelligent systems. The
conference will focus on the views from connectionism, artificial
intelligence and cognitive science, neuroscience, and complex dynamics.
Along another dimension there are several significant issues in the study
of intelligent systems, some of which are "Knowledge representation and
memory", "Perception, sequential processing, and action", "Learning", and
"Problem solving and reasoning". Researchers from connectionism, cognitive
science, artificial intelligence, etc. will take issue with the ways
connectionism is approaching these various problem areas. This idea is
reflected in the structure of the program.

Panel Discussions:
There will be panel discussions with experts in the field on specialized
topics which are of particular interest to the application of connectionism.

Workshops and Presentations of Ongoing Work:
The last day of the conference is devoted to workshops with the purpose of
identifying the major problems that currently exist within connectionism,
to define future research agendas and collaborations, to provide a
platform for the interdisciplinary exchange of information and experience,
and to find a framework for practical applications. The workshop day will
also feature presentation of ongoing work (see "Call for presentation of
ongoing work").

*******************************************************************************
*                                                                             *
* CALL FOR PRESENTATION OF ONGOING WORK                                       *
*                                                                             *
* Presentations are invited on all areas of connectionist research. The focus *
* is on current research issues, i.e. "work in progress" is of highest        *
* interest even if major problems remain to be resolved. Work of RESEARCH     *
* GROUPS OR LABORATORIES is particularly welcome. Presentations can be in the *
* form of poster, or demonstration of prototypes. The goal is to encourage    *
* cooperation and the exchange of ideas between different research groups.    *
* Please submit an extended abstract (1-2 pages).                             *
*                                                                             *
* Deadline for submissions:     September 2, 1988                             *
* Notification of acceptance:   September 20, 1988                            *
*                                                                             *
* Contact: Zoltan Schreter, Computer Science Department, University of        *
* Zurich, Switzerland, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland   *
* Phone: (41) 1 257 43 07/11                                                  *
* Fax: (41) 1 257 40 04                                                       *
* or send mail to                                                             *
* pfeifer@ifi.unizh.ch                                                        *
*                                                                             *
*******************************************************************************



Tutorials


MONDAY, October 10, 1988
___________________________________________________________________________

08.30   Tutorial 1: Introduction to neural nets.
        F. Fogelman
                - Adaptive systems: Perceptrons (Rosenblatt) and Adalines
                  (Widrow & Hoff)
                - Associative memories: linear model (Kohonen),
                  Hopfield networks, Brain state in a
                  box model  (BSB; Anderson)
                - Link to other disciplines

09.30   Coffee

10.00   Tutorial 2: Self-organizing Topological maps.
        T. Kohonen
                - Theory
                - Application: Speech-recognizing systems
                - Tuning of maps for optimal recognition accuracy
                  (learning vector quantization)

11:30   Tutorial 3: Multi-layer neural networks.
        Y. Le Cun
                - Elementary learning mechanisms (LMS and Perceptron) and
                  their limitations
                - Easy and hard learning
                - Learning in multi-layer networks: The back-propagation
                  algorithm (and its variations)
                - Multi-layer networks:
                        - as associative memories
                        - for pattern recognition (a case study)
                - Network design techniques; simulators and software tools

13.00   Lunch

14.00   Tutorial 4: Parallel Distributed Processing of symbolic structure.
        P. Smolensky
                Can Connectionism deal with the kind of complex highly
                structured information characteristic of most AI domains?
                This tutorial presents recent research suggesting that
                the answer is yes.

15.30   Coffee

16.00   Tutorial 5: Connectionist modeling and simulation in neuroscience and
                psychology.
        R. Granger
                Biological networks are composed of neurons with a range of
                biophysical and physiological properties that give rise to
                complex learning and performance rules embedded in
                anatomical architectures with complex connectivity.
                Given this complexity it is of interest to identify which
                of the characteristics of brain networks are central and
                which are less salient with respect to behavioral function.
                "Bottom-up" biological modeling attempts to identify the
                crucial learning and performance rules and their
                appropriate level of abstraction.

17.30   End of tutorial sessions
_______________________________________________________________________________

Technical Program


TUESDAY, October 11, 1988
___________________________________________________________________________

Introduction

09:00   Connectionism: Is it a new paradigm?            M. Boden

09:45   Discussion

10:00   Coffee


1. Knowledge Representation & Memory.   Chair: F. Fogelman

        The perspective of:

10:30   -       Connectionism.  P. Smolensky    Dealing with structure in
                                                Connectionism

11:15   -       AI/             N.N.
                Cognitive Science

12:00   -       Neuroscience/   C. v. der Malsburg
                Connectionism                   A neural architecture  for
                                                the  representation of
                                                structured objects


12:45   Lunch


2. Perception, Sequential Processing & Action.  Chair:  T. Kohonen

        The perspective of:

14:30   -       Connectionism   M. Kuperstein   Adaptive sensory-motor
                                                coordination using neural
                                                networks

15:15   -       Connectionism/  M. Imbert       Neuroscience and Connectionism:
                Neuroscience                    The case of orientation
                                                coding.

16:00   Coffee

16:30   -       AI/             J. Bridle       Connectionist approaches to
                Connectionism                   artificial perception:
                                                A speech pattern  processing
                                                approach

17:15   -       Neuroscience    G. Reeke        Synthetic neural modeling:
                                                A new approach to Brain Theory

18:00   Intermission/snack


18.30 - 20.00  panel discussion/workshop on

Expert Systems and Connectionism. Chair: S. Ahuja

                D. Bounds       D. Reilly
                Y. Le Cun       R. Serra

___________________________________________________________________________


WEDNESDAY, October 12, 1988
___________________________________________________________________________

3. Learning. Chair: R. Serra

        The perspective of:

9:00    -       Connectionism   Y. Le Cun       Generalization  and network
                                                design strategies

9:45    -       AI              Y. Kodratoff    Science of explanations versus
                                                science of numbers

10:30   Coffee

11:00   -       Complex Dynamics/
                Genetic Algorithms
                                H. Muehlenbein  Genetic algorithms and
                                                parallel computers

11:45   -       Neuroscience    G. Lynch        Behavioral effects of learning
                                                rules for long-term
                                                potentiation

12:30   Lunch


4. Problem Solving & Reasoning. Chair:  R. Pfeifer

        The perspective of:

14:00   -       AI/             B. Huberman     Dynamical perspectives on
                Complex Dynamics                problem solving and reasoning

14:45   -       Complex Dynamics
                                L. Steels       The Complex Dynamics of common
                                                sense

15:30   Coffee

16:00   -       Connectionism   J. Hendler      Problem solving and reasoning:
                                                A Connectionist perspective

16:45   -       AI              P. Rosenbloom   A cognitive-levels perspective
                                                on  the role of Connectionism
                                                in symbolic goal-oriented
                                                behavior

17:30   Intermission/snack


18:00 - 19:30 panel discussion/workshop on

Implementation Issues & Industrial Applications. Chair:  P. Treleaven

        B. Angeniol     G. Lynch
        G. Dreyfus      C. Wellekens

__________________________________________________________________________


Workshops and presentation of ongoing work



THURSDAY, October 13, 1988
___________________________________________________________________________



9:00-16:00  Workshops in partially parallel sessions. There will be a separate
poster/demonstration session  for the presentation of ongoing work. The
detailed program will be based on the submitted work and will be available at
the beginning of the conference.


The workshops:

1. Knowledge Representation & Memory
        Chair: F. Fogelman

2. Perception, Sequential Processing & Action
        Chair: F. Gardin

3. Learning
        Chair: R. Serra

4. Problem Solving & Reasoning
        Chair: R. Pfeifer

5. Evolutionary Modelling
        Chair: L. Steels

6. Neuro-Informatics in Switzerland: Theoretical and technical neurosciences
        Chair: K. Hepp

7. European Initiatives
        Chair: N.N.

8. Other


16:10   Summing up:  R. Pfeifer

16:30   End of the conference


___________________________________________________________________________

Program as of June 29, 1988, subject to minor changes

___________________________________________________________________________



THE SMALL PRINT

Organizers
Computer Science Department, University of Zurich
Swiss Group for Artificial Intelligence and Cognitive Science  (SGAICO)
Gottlieb Duttweiler Institute (GDI)

Location
University of Zurich-Irchel
Winterthurerstrasse 190
CH-8057 Zurich, Switzerland

Administration
Gabi Vogl
Phone: (41) 1 257 43 21
Fax: (41) 1 257 40 04

Information
Rolf Pfeifer
Zoltan Schreter
Computer Science Department, University of Zurich
Winterthurerstrasse 190, CH-8057 Zurich
Phone: (41) 1 257 43 23 / 43 07
Fax: (41) 1 257 40 04

Sanjeev B. Ahuja, Rentenanstalt (Swiss Life)
General Guisan-Quai 40, CH-8022 Zurich
Phone: (41) 1 206 40 61 / 33 11

Thomas Bernold, Gottlieb Duttweiler Institute, CH-8803 Ruschlikon
Phone: (41) 1 461 37 16
Fax: (41) 1 461 37 39


Participation fees
Conference 11-13 October 1988:
Regular                         SFr.    350.--
ECCAI/SGAICO/
        SI/SVI-members          SFr.    250.--
Full time students              SFr.    100.--

Tutorials 10 October 1988:
Regular                         SFr.    200.--
ECCAI/SGAICO/
        SI/SVI-members          SFr.    120.--
Full time students              SFr.     50.--

For graduate students / assistants a limited  number of reduced
fees are available.

Documentation and refreshments are included.
Please remit the fee only upon receipt of invoice by the
Computer Science Department.

Language
The language of the conference is English.

Cancellations
If a registration is cancelled, there will be a  cancellation charge of
SFr. 50.-- after 1st October 1988, unless you name a replacement.

Hotel booking
Hotel booking will be handled separately.
Please indicate on your registration form
whether you would like information on hotel
reservations.

Proceedings
Proceedings of the conference will be published in book form.
They will become available in early 1989.

------------------------------

Date: 25 Jul 88 16:55:29 GMT
From: feifer@locus.ucla.edu
Subject: late volunteers for AAAI-88???

Some people are still asking whether it is possible to volunteer at
AAAI-88 even though the deadline has passed.

We are now accepting names for a waiting list.  We cannot confirm
positions before the conference.

If you would like to volunteer but have not yet been accepted,
please send us your information (as per the instructions
below).  Then be at the volunteer orientation on Saturday,
August 20, at 10 a.m. in Room C-6 of the St. Paul Civic Center.
If there are any open positions, they will be given out on a
first-come, first-served basis.

Thank you for your interest in AAAI-88

-Richard Feifer

-----------------------



ANNOUNCEMENT:  Last Call: Student Volunteers Needed for AAAI-88

DEADLINE:      July 1, 1988

AAAI-88 will be held August 20-26, 1988 in beautiful St. Paul,
Minnesota.  Student volunteers are needed to help with local
arrangements and staffing of the conference.  To be eligible for
a Volunteer position, an individual must be an undergraduate or
graduate student in any field at any college or university.

This is an excellent opportunity for students to participate
in the conference.   Volunteers receive FREE registration at
AAAI-88, conference proceedings, "STAFF" T-shirt, and are
invited to the volunteer party. More importantly, by
participating as a volunteer, you become more involved and
meet students and researchers with similar interests.

Volunteer responsibilities are varied, including conference
preparation, registration, staffing of sessions and tutorials and
organizational tasks.  Each volunteer will be assigned
twelve (12) hours.

If you are interested in participating in AAAI-88 as a
Student Volunteer, apply by sending the following information:

Name
Electronic Mail Address (for mailing from arpa site)
USMail Address
Telephone Number(s)
Dates Available
Student Affiliation
Advisor's Name

to:

valerie@SEAS.UCLA.EDU

or

Valerie Aylett
3531-K Boelter Hall
Computer Science Dept.
UCLA
Los Angeles, California  90024-1596



Thanks, and I hope you join us this year!


Richard Feifer
Student Volunteer Coordinator
AAAI-88 Staff


-----------------------------------------------------------------
Richard G. Feifer                 feifer@cs.ucla.edu
UCLA
145 Moore Hall  --  Los Angeles  --  Ca  90024

------------------------------

Date: 25 Jul 88 19:05:48 GMT
From: mind!bob@princeton.edu  (Bob Freidin)
Subject: Chomsky awarded Kyoto Prize in basic sciences

On June 24th the Inamori Foundation of Japan announced the recipients of
this year's Kyoto Prizes.

        Basic Sciences:  Noam Chomsky (for contributions to Linguistics)

        Advanced Technology:  John McCarthy (for pioneering in Artificial
                                   Intelligence)

        Creative Arts and Moral Sciences:  Paul Thieme (for contributions
                                        to the history of Indian philosophy)

Each recipient will receive a prize of 45 million yen (approx. $350,000).
This is the fourth year these prizes, characterized as Japan's version of
the Nobel, have been awarded.  Kyoto Prize Laureates in the first two areas
include:

                Basic Sciences          Advanced Technology

        1985:   Claude E. Shannon       Rudolf Emil Kalman
        1986:   George E. Hutchinson    Nicole M. Le Douarin
        1987:   Jan H. Oort             Morris Cohen

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Robert Freidin
Director
Program in Linguistics
Princeton University

------------------------------

Date: 25-JUL-1988 22:22:16 GMT
From: POPX%VAX.OXFORD.AC.UK@MITVMA.MIT.EDU
Subject: Prolog Source Library

From: Jocelyn Paine,
      St. Peter's College,
      New Inn Hall Street,
      Oxford OX1 2DL.
      (mail address only; please don't phone).

Janet Address: POPX @ OX.VAX


                          PROLOG SOURCE LIBRARY


About 6  months ago,  I announced  the Prolog  Source Library  of public
domain software.  Anyone can send or  contribute. I haven't had  as many
entries as I'd hoped (gentle  hint: several subscribers said they'd send
things, and haven't  yet); and there must be new  readers who don't know
of it. So this is a reminder.


I shall  follow with details  of how the  library works, and  end with a
summary of the  catalogue so far. If you want  to contribute entries, or
to request them, please read on...


SENDING AND GETTING:


How to send contributions.

  Please send  source for  the library,  to user  POPX at  Janet address
  OX.VAX (the Vax-Cluster at  Oxford University Computing Service). If a
  file occupies more than about 1  megabyte, please send a short message
  about it  first, but don't  send the large  file itself until  I reply
  with  a  message  requesting  it.  This will  avoid  the  problems  we
  sometimes  have where  large files  are rejected  because there  isn't
  enough space for them.

  I accept  source on the understanding  that it will be  distributed to
  anyone who asks for  it. I intend that the contents  of the library be
  treated  in the  same way  as (for  example) proofs  published in  the
  mathematical literature, and algorithms  published in computer science
  textbooks -  as publicly available  ideas which anyone  can experiment
  with, criticise, and improve.

  I will try to put an entry into the library within one working week of
  its arrival.

Catalogue of entries.

  I will keep a catalogue of  contributions available to anyone who asks
  for it.

  The catalogue will  contain for each entry: the  name and geographical
  address of the entry's  contributor (to prevent contributors receiving
  unwanted  electronic  mail,  I  won't include  their  electronic  mail
  addresses unless  I'm asked to  do so);  a description of  the entry's
  purpose; and  an approximate  size in kilobytes  (to help  those whose
  mail systems can't receive large files easily).

  I  will  also include  my  evaluations  of its  ease  of  use, of  its
  portability and  standardness (by the standards  of Edinburgh Prolog);
  and my evaluation of any documentation included.

Quality of entries.

  Any contribution may be useful to  someone out there, so I'll start by
  accepting anything. I'm not just  looking for elegant code, or logical
  respectability.  However, it  would  be  nice if  entries  were to  be
  adequately documented, to come with examples  of their use, and to run
  under  Edinburgh Prolog  as described  in "Programming  in Prolog"  by
  Clocksin and Mellish. If you can, therefore, I'd like you to follow the
  suggestions below.

    The main predicate  or predicates in each entry  should be specified
    so that someone who knows nothing about how they work can call them.
    This means specifying: the type and mode of each argument, including
    details of  what must be  instantiated on  call, and what  will have
    become instantiated  on return; under what  conditions the predicate
    fails, and  whether it's resatisfiable; any  side-effects, including
    transput  and clauses  asserted  or retracted;  whether any  initial
    conditions    are   required,    including   assertions,    operator
    declarations,  and  ancillary  predicates.  In  some  cases,  other
    information,  like  the  syntax  of   a  language  compiled  by  the
    predicate, may be useful.
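
    As an illustration of such a specification, a header comment for a
    hypothetical predicate (my own sketch, not taken from any actual
    entry) might read:

        %   delete_all(+Element, +List, -Residue)
        %
        %   Element must be bound to a term and List to a proper list on
        %   call; on return, Residue is bound to a copy of List with every
        %   occurrence of Element removed.  The predicate succeeds exactly
        %   once (it is not resatisfiable), has no side-effects, and needs
        %   no prior assertions or operator declarations.

        delete_all(_, [], []).
        delete_all(X, [X|Tail], Residue) :-
            !,
            delete_all(X, Tail, Residue).
        delete_all(X, [Head|Tail], [Head|Residue]) :-
            delete_all(X, Tail, Residue).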

    A set  of example calls would  be useful, showing the  inputs given,
    and the outputs expected. Use  your discretion: if you contribute an
    expert system shell  for example, I'd like a  sample rulebase, and a
    description  of  how  to  call  the  shell  from  Prolog,  and  some
    indication  of what  questions  I can  ask the  shell,  but I  don't
    require that the  shell's dialogue be reproduced down  to every last
    carriage return and indentation.

    For programmers who want to  look inside an entry, adequate comments
    should be  given in the  source code,  together perhaps with  a more
    general description of  how the entry works,  including any relevant
    theory.

    In the documentation, references to  the literature should be given,
    if this is helpful.

    Entries should be  runnable using only the  predicates and operators
    described in "Programming in Prolog" (if  they are not, I may not be
    able to test them!). I don't object to add-on modules being included
    which are only runnable under certain implementations - for example,
    an add-on with  which a planner can display its  thoughts in windows
    on  a high-resolution  terminal -  but they  will be  less generally
    useful.

    As mentioned earlier, I will  evaluate entries for documentation and
    standardness, putting  my results  into the  catalogue. If  I can, I
    will also  test them, and  record how easy I  found them to  use, by
    following the instructions given.

  I emphasise that I will accept all entries; the comments above suggest
  how to improve the quality of entries, if you have the time.

Requesting entries.

  I can't  afford to copy  lots of discs, tapes,  papers, etc, so  I can
  only deal with requests to send files along the network. Also, I can't
  afford to send along networks that I have to pay to use from Janet.

  You may  request the catalogue,  or a particular  entry in it.  I will
  also  try  to satisfy  requests  like  "please  send all  the  natural
  language parsers which you have" -  whether I can cope with these will
  depend on the size of the library.

  I will  try to answer each  request within seven working  days. If you
  get no reply within fourteen working  days, then please send a message
  by  paper  mail  to  my  address. Give  full  details  of  where  your
  electronic mail  message was  sent from,  the time,  etc. If  a message
  fails to  arrive, this may help  the Computing Service  staff discover
  why.


Although I  know Lisp,  I haven't  used it  enough to  do much  with it,
though I'm  willing just to  receive and pass on  Lisp code, and  to try
running it under VAX Lisp or Poplog version 13 Lisp.

I will also accept Pop-11 code.  Depending on what happens to the Poplog
Users' Group library (policy is being  decided now), I'll either add the
code to a Pop-11 section of my library, or pass it on to PUG.


THE CATALOGUE SO FAR:


                             TURTLE GRAPHICS
        Contributed by Salleh Mustaffa, University of Manchester
           Copyright (c) 1987 by David Lau-Kee, Univ. of York.

Simple Prolog turtle package, for VT100 or similar.


                              TYPE-CHECKER
                       Contributed by R.A.O'Keefe
                   Authors: Alan Mycroft & R.A.O'Keefe

(This program  was sent to  the Prolog Digest  on the 14th  of November,
1987.)

Defines a way  to specify a type for each  predicate, and a type-checked
"consult" which checks goals against the type of the predicate called.


                         GRAMMAR-RULE TRANSLATOR
                              Jocelyn Paine

Defines a  predicate, 'grexpand', for expanding  Definite Clause Grammar
Rules into Prolog clauses. These are the standard form of DCG rules, for
which a translator is built into many Prologs.

The translator is essentially the same as that published in "Programming
in Prolog", by Clocksin and Mellish.
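
By way of illustration (this sketch is mine, not the 'grexpand' code, and
it handles only non-terminals, terminal lists and conjunction), the kind
of expansion involved looks like this:

    % Illustrative sketch only; 'grexpand' itself covers more of the
    % DCG notation.  Each rule gets two extra arguments threading the
    % input list and the remainder left after parsing.
    % (Uses the standard append/3 as defined in Clocksin and Mellish.)

    expand_rule((Head --> Body), (NewHead :- Goals)) :-
        add_args(Head, S0, S, NewHead),
        expand_body(Body, S0, S, Goals).

    expand_body((A, B), S0, S, (GoalsA, GoalsB)) :-
        !,
        expand_body(A, S0, S1, GoalsA),
        expand_body(B, S1, S, GoalsB).
    expand_body([], S0, S, S0 = S) :-
        !.
    expand_body([T|Ts], S0, S, S0 = Whole) :-
        !,
        append([T|Ts], S, Whole).
    expand_body(NonTerminal, S0, S, Goal) :-
        add_args(NonTerminal, S0, S, Goal).

    add_args(Term, S0, S, NewTerm) :-
        Term =.. [Name|Args],
        append(Args, [S0, S], NewArgs),
        NewTerm =.. [Name|NewArgs].

For example, expand_rule((greeting --> [hello], name), Clause) binds
Clause to (greeting(S0, S) :- S0 = [hello|S1], name(S1, S)).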


                       DOUBLY-LINKED LIST PACKAGE
            Contributed by Philip Dart, Melbourne University

Doubly-linked list-handling package sent to the Prolog Digest on 14th of
November by Philip Dart, Melbourne University. Creates cyclic structures
along which you can move in either direction.


                             FILE SEPARATOR
                              Jocelyn Paine

Allows one  to separate text files  which have been  concatenated into a
larger file.  Mainly useful  to separate files  belonging to  the Prolog
Library which have been packed in this way.


                         OBJECT-ORIENTED PACKAGE
                       Author: Ben Staveley-Taylor

This program -  called POEM by its  author - comes from  a Poplog Users'
Group tape, received in early 1987.

POEM  makes available  some  of  the features  found  in languages  like
Simula-67. Classes  may be defined, objects  (instantiations of classes)
created and operated on as high-level entities.


                                UTILITIES
               Contributed by Bert Shure, SUN Microsystems
          Written by John Cugini, National Bureau of Standards

Various utility predicates, some commonly used, some not.

Predicates  include:   member;  append;  maplist;   other  list-handling
predicates;  predicates   for  handling   sets  represented   as  lists;
type-testing predicates; sorting and  merging; readline; a predicate for
getting  the  printable  representation   of  a  term;  rational  number
predicates; meta-logical  predicates for  dealing with  true disjunction
and negation.
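
For readers who have not met the first two, the usual textbook
definitions of member/2 and append/3 run as follows (the versions in
this entry may differ in detail):

    % member(X, List) succeeds when X unifies with an element of List.
    member(X, [X|_]).
    member(X, [_|Tail]) :-
        member(X, Tail).

    % append(Front, Back, Whole) succeeds when Whole is the result of
    % appending Back to Front.
    append([], List, List).
    append([Head|Tail], Back, [Head|Rest]) :-
        append(Tail, Back, Rest).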


                          PREDICATE AUTO-TESTER
                              Jocelyn Paine

Reads a file or files of Prolog goals, where each goal is accompanied by
a specification saying whether it  should succeed, fail, cause an error,
or pass some tests on its bound variables.

For  each  goal/specification pair,  the  program  calls the  goal,  and
compares  its effect  with  the specification.  If  they differ,  then a
warning  message   is  displayed.   Useful  for   automatically  testing
predicates against their expected outputs.
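
As a very rough sketch of the idea only (not the entry itself, whose
goal/specification file format is richer than this), a tester along
these lines might be written:

    % Illustrative sketch only.  run_test(Goal, Expected) calls Goal and
    % compares the outcome with Expected, which here is just the atom
    % 'succeed' or 'fail'; a warning is printed if they differ.
    run_test(Goal, Expected) :-
        ( call(Goal) -> Outcome = succeed ; Outcome = fail ),
        ( Outcome = Expected ->
            true
        ;   write('Unexpected result for '), write(Goal),
            write(': '), write(Outcome), nl
        ).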


                       INTERVAL-ALGEBRA PREDICATES
                              Jocelyn Paine
                  Shelved on the 21st of December 1987

Predicates for  manipulating sets of  integers, represented as  lists of
disjoint intervals.  This is a compact  way of representing  large sets,
provided that they contain few gaps between intervals.
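
To illustrate the representation (the notation below is my own for the
sketch; the entry's actual predicates and interval notation may differ),
membership in such a set might be tested as follows:

    % Illustrative sketch: a set represented as an ordered list of
    % disjoint Low-High intervals, e.g. [1-3, 10-12] for the set
    % {1,2,3,10,11,12}.
    interval_member(X, [Low-High|_]) :-
        X >= Low,
        X =< High,
        !.
    interval_member(X, [_-High|Rest]) :-
        X > High,
        interval_member(X, Rest).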


                      CURSOR-ADDRESSING PREDICATES
                              Jocelyn Paine
                  Shelved on the 21st of December 1987

There are two sets of predicates, one for VT100s and one for VT52s.

They: move to X,Y; clear a line or page; set inverse or normal video.
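
By way of illustration only (the entry's own predicate names may
differ), moving the cursor on a VT100 amounts to writing the escape
sequence ESC [ Row ; Col H:

    % Illustrative sketch, not the library code.  put/1 writes a
    % character by its ASCII code; 27 is ESC.
    cursor_move(Row, Col) :-
        put(27),
        write('['),
        write(Row), write(';'), write(Col),
        write('H').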


                        LIST-HANDLING PREDICATES
              Contributed by J.G. Forero, Reading University

The predicates'  functions are:  test for  list-ness; test  for sublist;
find element  at known  position, or position  of known  element; remove
duplicates; flatten a  list; add element after known  element; find that
part of a list following a given element.
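
For instance (an illustrative sketch of my own, not the contributed
code), finding the element at a known position might be written:

    % element_at(N, List, X) succeeds when X is the N-th element of
    % List, counting from 1; N must be bound to a positive integer.
    element_at(1, [X|_], X).
    element_at(N, [_|Rest], X) :-
        N > 1,
        N1 is N - 1,
        element_at(N1, Rest, X).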


                  EXPERT SYSTEM FOR FORESTRY MANAGEMENT
             Contributed by Steve Jones, Reading University

A  small expert  system for  forestry  management.


                                 LINGER:
       A  TOOL FOR GRAMMAR ANALYSIS OF WESTERN EUROPEAN LANGUAGES
    Contributed by Paul O'Brien and Masoud Yazdani, Exeter University

LINGER  is a  language-independent  system to  analyse natural  language
sentences and  report and correct  grammatical errors  encountered.   An
important objective is that the system should be easily configured for a
particular natural  language by an  expert in  that language but  not in
computer science.

Only a French grammar is available.


                            EDINBURGH  TOOLS
   Contributed by the AI Applications Institute, Edinburgh University

The DEC-10 Prolog Library was  an  extraordinary  and  catholic  collection  of
Prolog  routines, largely written by research workers and students in Professor
Alan Bundy's Mathematical Reasoning  Group  at  the  Department  of  Artificial
Intelligence  at the University of Edinburgh.  In summer 1987 we sifted through
the enormous amount of material in  this  library,  grouping  similar  material
together and converting some of the more used programs into Edinburgh Prolog.

These programs are all examples of Prolog programming to deal with objects  and
problems of many kinds.  (Some of these examples are very good examples, others
are not so; some are well commented, some  have  separate  documentation,  some
have  none.)  You  may be able to load tools for low-level operations into your
code ready-made, or you may gain insight into how to write good Prolog  (as  we
did) through just browsing amongst the source code here.

------------------------------

End of AIList Digest
********************

∂26-Jul-88  1239	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #27  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Jul 88  12:38:47 PDT
Date: Tue 26 Jul 1988 01:50-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #27
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 26 Jul 1988      Volume 8 : Issue 27

Today's Topics:

 Queries:

  ES technology in RightWriter or Grammatik
  Journal of AI and Engineering
  Pattern Matching
  Memory Based Reasoning
  Response to - Graphael
  Expert System Applications in Government
  Response to - programs in cognitive science
  text-to-speech, text-to-phoneme, or text-to-syllable algorithms
  Small on-line dictionary (or English nouns & verbs) sought
  Modeling Trends

----------------------------------------------------------------------

Date: 20 Jul 88 12:00:20 GMT
From: Robert Dale <rda%epistemi.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: ES technology in RightWriter or Grammatik


I notice that, according to the recently posted summary of Spang 4:5,

        "Right Writer and Gramentek II (grammar and style checkers)
        use expert system technology, sold 100,000 copies each and
        have not been sold as expert systems."

I've never used either of these programs, but have read reviews of
them (and their promotional stuff) fairly closely, and I can't see the
ES technology.  As far as I could tell, both simply search the text
for "bad strings" and suggest corresponding replacements, presumably
using a simple table lookup mechanism (although the newer Grammatik
III may be more sophisticated).  Anyone care to comment?  Does the
Spang article say anything more illuminating?

R




--
Robert Dale        Phone: +44 31 667 1011 x6470 | University of Edinburgh
UUCP:   ...!uunet!mcvax!ukc!its63b!epistemi!rda | Centre for Cognitive Science
ARPA:   rda%epistemi.ed.ac.uk@nss.cs.ucl.ac.uk  | 2 Buccleuch Place
JANET:  rda@uk.ac.ed.epistemi                   | Edinburgh EH8 9LW Scotland

------------------------------

Date: Wed, 20 Jul 88 17:29:55 EDT
From: Subbarao Kambhampati <rao@alv.umd.edu>
Subject: Journal of AI and Engineering

I have heard about a journal called The Journal of AI and Engineering.  I
want to know who is on the Editorial Board of the journal, whether it has
started publishing and how I can get more info and sample copies etc.   Any
comments regarding the quality of the journal will also be appreciated.

Please reply by e-mail to rao@alv.umd.edu

Thanks in advance

-rao
Subbarao Kambhampati

------------------------------

Date: 22 Jul 88 02:28:28 GMT
From: att!chinet!mcdchg!clyde!watmath!watcgl!kppicott@bloom-beacon.mit
      .edu  (Dewey Duck)
Subject: Pattern Matching

Hi,

I am working on computer graphics and I recently ran into a problem that some
of you might be able to help me with.  It is basically a problem in pattern
matching.  I want to take as input a collection of x,y points.  From this
collection I would like to be able to decompose the points into maximal
curve sections.  In general, I want to take a line drawing, digitize it using
some standard picture digitizing device, section the bitmap into groups of
points which represent single curves, represent these curves in a different
form and process those curves further.  I can do all but the sectioning.
As I see it I have two options at this point.  Either section the curves as
I've described above, or have a user sit there and mark the dots to be included
in each curve.  I do not want to do the latter if it can be avoided at all
(mainly because the user will end up being me).

So, here is the question part.  Is there any research out there that has
addressed and/or solved this class of problem?  I would appreciate it if
anyone could either tell me how it's done, tell me that it can't be done, or
point me in the correct direction to look into this further.  As I said
before, I am a graphics person and haven't run into this class of problem
so I am totally unaware of what has been done in this area.  Any little bit
of information would help greatly.

Thanks.  I will summarize any responses,

--- Kevin Picott, Dept. of CS, U. of Waterloo

 _____________________
(_)________________   \
  ________________|\   \UUCP:  [uunet!]watmath!watcgl!kppicott
 (_)______________\_\   \
   ______________________\*-NET:  kppicott@watcgl.waterloo.{edu, ca, cdn}
  (_)____________________|


example:

bitmap is:

                    .
                     .
                      .
                       ..                .                   ...
                         ..              .                 ..
                           ...           .                .
                              ....        .             ..
                                  ......  .       ......
                                        ..........
                                           .
                                           .
                                           .
                                           .
                                            .
                                            .
                                            .
                                            .
                                            .

Resultant patterns are:

                    .
                     .
                      .
                       ..                *                   ...
                         ..              *                 ..
                           ...           *                .
                              ....        *             ..
                                  ......  *       ......
                                        ..*.......
                                           *
                                           *
                                           *
                                           *
                                            *
                                            *
                                            *
                                            *
                                            *
where one curve is '*' and another is '.'.  These points are fed into a bezier
curve-maker as a unit.  (I have a program that can do this.)

Am I correct in my assumption above?  If not, I would appreciate any further
references you could give me.  I do not have access to a Mac, so looking at
the program myself is not an option at the moment.

Thanks,
--- Kevin Picott --- kppicott@watcgl.waterloo.edu

------------------------------

Date: 22 Jul 88 11:44:50 GMT
From: dowjone!gregb@uunet.uu.net  (Greg_Baber)
Subject: Memory Based Reasoning

Can anybody point me to references to something called
Memory Based Reasoning? (Books, articles, netnews, etc)
It would even be better if someone could (post,email)
a capsule summary.  Thanks a lot.

gregb
--
Reply to: Gregory S. Baber              Voice:  (609) 520-5077
          Dow Jones & Co., Inc.         E-mail: ..princeton!dowjone!gregb
          Box 300
          Princeton, New Jersey 08543-0300

------------------------------

Date: Fri, 22 Jul 88 10:56:21 pdt
From: purcell@loki.hac.com (ed purcell)
Subject: Response to - Graphael


Recent issues of "AI Expert" (and probably AAAI's "AI Magazine" also)
have advertisements for Graphael and their G-Base oodbms.  Their U.S.
address is:
    Graphael
    255 Bear Hill Road
    Waltham, MA 02154
    617-890-7055
Graphael's headquarters are in France.
(I have their President's business card, but it's from AAAI-84,
so I don't know if this address in France is still current.)

                Ed Purcell
                purcell%loki@hac2arpa.hac.com
                213-607-0793

------------------------------

Date: 23 Jul 88 03:28:52 GMT
From: koen@locus.ucla.edu
Subject: Expert System Applications in Government


I am looking for articles, books or summaries which give a good overview
of expert system applications currently being used or worked on in the US
government agencies. Any pointers will be appreciated.

 -- Koenraad Lecot

------------------------------

Date: 24 Jul 88 20:48:00 GMT
From: mcvax!ukc!warwick!cvaxa!aarons@uunet.uu.net  (Aaron Sloman)
Subject: Response to - programs in cognitive science

From Aaron Sloman Sun Jul 24 21:41:56 BST 1988
To: ghh@princeton.edu
Subject: Cognitive Sciences Programmes

Just saw your message
>Subject: programs in cognitive science
>Message-ID: <GHH.88Jul16135559@confidence.princeton.edu>
>Date: 16 Jul 88 04:55:59 GMT
>I am trying to get an up to date list of undergraduate
>programs in cognitive science or cognitive studies.

At Sussex University we started, in 1973, a Programme that included both
undergraduate and graduate studies in Cognitive Studies, including
Psychology, Linguistics, Philosophy, Social Anthropology and AI.

Originally this was based as a sub-school in the School of Social
Sciences, but in 1987 it was deemed big enough to be a separate school,
and from August 1988 will be joined by Computer Science, leading to a
new name:
    School of Cognitive and Computing Sciences

Undergraduate majors now include
    Psychology (various versions)
    Linguistics
    Computational Linguistics
    Philosophy
    Computing and AI
        (includes some of all the other disciplines)
    Computer Science
    Economics and Computing

There are also the following graduate courses:
    MA
    MSc conversion course in Knowledge Based Systems
    MPhil
    DPhil

The school is one of the few things still growing in UK Universities.

I hope this information is of some use.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
    JANET     aarons@cvaxa.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac

As a last resort (it costs us more...)
    UUCP:     ...mcvax!ukc!cvaxa!aarons
            or aarons@cvaxa.uucp

Phone:  University +(44)-(0)273-678294

------------------------------

Date: 24 Jul 88 23:31:57 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (Walter Rolandi)
Subject: text-to-speech, text-to-phoneme, or text-to-syllable
         algorithms

Thanks to all those who responded to my request for ways to identify
the syllables of English.  Several people suggested text-to-speech
algorithms but no one has offered to provide one.  Does anyone have a
text-to-speech algorithm that they would be willing to post?  I am sure
many people would be interested.

Thanks.

Walter Rolandi
rolandi@ncrcae.UUCP
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

------------------------------

Date: Mon, 25 Jul 88 15:57:06 +1000
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Small on-line dictionary (or English nouns & verbs) sought

Does anyone have access to an electronic copy of English nouns and verbs?
A small (500 entries) to medium collection (5000 entries) would be appropriate.
It would be ideal if the verbs (and/or nouns) are grouped into various
categories. I am also prepared to work with a small on-line dictionary and
manually extract the required knowledge.

The knowledge is sought for the design of lexicon and semantic knowledge
for a restricted NL front end (for encoding rules).

Eric Tsui                               eric@aragorn.oz
Division of Computing and Mathematics
Deakin University
Geelong, Victoria 3217
AUSTRALIA

------------------------------

Date: Mon, 25 Jul 88 19:35 N
From: <RCSTBW@HEITUE5>
Subject: Modeling Trends

Dear AIlisters,

At the Eindhoven University of Technology, the Bureau for Biomedical and
Health Technology will try to model trends in Medical Technology. They are
thinking of using an expert-system (shell) to do so, but is that the right
way to do it?

Do any of you out there know of earlier attempts to characterize trends?
Has anyone developed software to do so?

Can anyone tell me if there has been an attempt in cognitive psychology to
use artificial intelligence to recognize trends? I know for example of one
attempt to predict the actions of persons by eliciting their goals (C.F.
Schmidt, N.S. Sridharan and J.L. Goodson, 1978, in Artificial Intelligence).
The system they used was called BELIEVER. But it seems impossible to track
down more recent research on that. The BELIEVER system was developed at
Rutgers University, New Brunswick. It is a psychological theory of how
people understand actions of others. In my opinion recognizing trends is
understanding and predicting actions of groups of people (or companies).

At our department of Industrial Engineering people are working on research
in Human Factors and Automation in the process industry. We are especially
interested in how to use artificial intelligence (or expert systems, if anyone
prefers that) to support process operators in their task as controller and
guard of (chemical) processes. There has already been research on this
subject at Hoechst and DSM (two fairly large chemical companies) by Harmen
Kragt and Tjerk van der Schaaf.
A colleague and very good friend of mine, Arjen den Boer, will do research
on this subject, starting on 1 September 1988. His subject will be:
Diagnostic support of the process operator. Can anyone tell me about recent
research on that?

Summarizing:
I am looking for software for IBM PC/XT/AT or VAX/VMS, if possible public
domain, for the following applications:

        1. Characterization and recognition of trends,
        2. Recognition of actions of groups of people, can anyone tell me
           where I can find Schmidt, Sridharan or Goodson,
        3. Diagnostic support for process operators in (chemical)
           industries.

I would appreciate it if you would send me a copy of your response by
(direct) electronic mail, because that is much easier for me to process
and react to.

You can send your response to:

                                RCSTBW@HEITUE51 (Ben Willems)

Thanks in advance!

                    Ben.

------------------------------

End of AIList Digest
********************

∂27-Jul-88  0244	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #28  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 27 Jul 88  02:44:25 PDT
Date: Tue 26 Jul 1988 23:01-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #28
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 27 Jul 1988     Volume 8 : Issue 28

Today's Topics:

 Free Will:

  Undecidability
  How to dispose of the free will issue
  How to dispose of naive science types (short)
  Free Will (long)

----------------------------------------------------------------------

Date: Sun, 24 Jul 88 09:42:23 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Re undecidability

Goetz writes:

>                              Goedel's Theorem showed that you WILL have an
> unbounded number of axioms following the method you propose. That is why most
> mathematicians consider it an important theorem - it states you can never have
> an axiomatic system "as complex as"
> arithmetic without having true statements which are unprovable.

      Always bear in mind that this implies an infinite system.  Neither
undecidability nor the halting problem apply in finite spaces.  A
constructive mathematics in a finite space should not suffer from either
problem.  Real computers, of course, can be thought of as a form of
constructive mathematics in a finite space.

      There are times when I wonder if it is time to displace infinity from
its place of importance in mathematics.  The concept of infinity is often
introduced as a mathematical convenience, so as to avoid seemingly ugly
case analysis.  The price paid for this convenience may be too high.

      Current thinking in physics seems to be that everything is quantized
and that the universe is of finite size.  Thus, a mathematics with infinity
may not be needed to describe the physical universe.

      It's worth considering that a century from now, infinity may be looked
upon as a mathematical crutch and a holdover from an era in which people
believed that the universe was continuous and developed a mathematics to
match.

                                        John Nagle

------------------------------

Date: 24 Jul 88  1526 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: free will

[In reply to message sent Sun 24 Jul 1988 02:00-EDT.]

Almost all the discussion is too vague to be a contribution.  Let me
suggest that AI people concentrate their attention on the question of how
a deterministic robot should be programmed to reason about its own free
will, as this free will relates both to its past choices and to its future
choices.  Can we program it to do better in the future than it did in the
past by reasoning that it could have done something different from what it
did, and this would have had a better outcome?  If yes, how should it be
programmed?  If no, then doesn't this make robots permanently inferior to
humans in learning from experience?

Philosophers may be excused.  They are allowed to take the view that
the above questions are too grubbily technical to concern them.

------------------------------

Date: 24 Jul 88 23:20:24 GMT
From: amdahl!pyramid!thirdi!metapsy!sarge@ames.arpa  (Sarge Gerbode)
Subject: Re: How to dispose of the free will issue

In article <421@afit-ab.arpa> dswinney@icc.UUCP (David V. Swinney) writes:
>The "free-will" theorists hold that are choices are only partially
>deterministic and partially random.
>
>The "no-free-will" theorists hold that are choices are completely
>deterministic with no random component.

If my actions were random, I would not consider myself to have "free will".
Only if my actions were self-determined would I so consider myself.  As Bohm
pointed out: "The laws of chance are just as necessary as the causal laws
themselves." [*Causality and Chance in Modern Physics*]

I think most would agree that we have at least some degree of self-determinism,
and beyond that, we have some degree of causativeness over our own natures,
e.g. our habits and our understanding.  That is the basis upon which laws
concerning negligence rest.

How far this "second-order" self-determinism extends is an open question, but
the issue of randomness doesn't, I think, enter into it.
--
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
950 Guinda St.  Palo Alto, CA 94301

------------------------------

Date: 24 Jul 88 23:36:25 GMT
From: buengc!bph@bu-cs.bu.edu  (Blair P. Houghton)
Subject: Re: How to dispose of naive science types (short)

In article <531@ns.UUCP> logajan@ns.UUCP (John Logajan x3118) writes:
>gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>> logajan@ns.UUCP (John Logajan x3118) writes:
>> >unproveable theories aren't very useful.
>
>> most of your theories [...] will be unproven,
>> and unproveable, if only for practical reasons.
>
>Theories that are by their nature unproveable are completely different from
>theories that are as of yet unproven.  Unproveable theories are rather
>special in that they usually only occur to philosophers, and have little to
>do with day to day life.  You went on and on about unproven theories but failed
>to deal with the actual subject, namely unproveable theories.
>
>Please explain to me how an unproveable theory (one that makes no unique
>predictions) can be useful?
>

Rudy Carnap wrote _The Logical Syntax of Language_ in 1937.  In it
he described the development of an all-encompassing, even recursive
syntax that could be used to implement logic without bound.

One of the simplest examples of unproveability is the paradox

"This sentence is false."

It drives you nuts if you analyze it semantically; but, it's blithering
at a very low level if you hit it with logic:  call the sentence S;
the sentence then says

"If S then not-S."

Even a little kid can see that such a thing is patent nonsense.

The words in the sentence--the semantics--confuse the issue; while
both sentences say exactly the same thing in different semantics.

Carnap's thesis in the book was of course that the logic of communication
is in the syntax, not the semantics.

I'm correcting myself: now that I look at it, the paradox really says

"S = not-S."

Carnap's mistake (what makes him horribly obscure these days) is
that he did all of this amongst a sea of bizarre symbolic definitions
designed as an example of the derivation of his syntactical language;
but he did it, and it's a definition of _everything_ necessary to
carry on a logical calculus without running into walls of description.
It even defines itself without resorting to outside means; sort of
like writing a C compiler in C without ever having to write one in
assembly, and running it on itself to produce the runnable code.
Of course, the computer is a semantic thing...

I would hope some stout-hearted scientists would apply this sort of thing
to unproveable theories; we might find out about god, after all.

                                --Blair
                                  "To be, or not to be;
                                   that requires one TTL gate
                                   at a minimum, but you could
                                   do it with three NAND-gates,
                                   or just hook the output
                                   to Vcc."

------------------------------

Date: 25 Jul 88 12:02:06 GMT
From: l.cc.purdue.edu!cik@k.cc.purdue.edu  (Herman Rubin)
Subject: Re: How to dispose of naive science types (short)

In article <6032@bunny.UUCP>, rjb1@bunny.UUCP (Richard J. Brandau) writes:
> > gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> > Please explain to me how an unproveable theory (one that makes no unique
> > predictions) can be useful?

< Perhaps you mean a NONDISPROVABLE theory.  An "unproveable" theory is
< a very special thing, often much harder to find than a "proveable"
< theory.  If you can show that a theory is unprovable (in some axiom
< set), you've done a good day's science.

> No theories make "unique predictions" about the real, (empirical)
> world.  Are quarks the ONLY way to explain the proliferation of
> subnuclear particles?  Perhaps a god of the cyclotron made them
> appear.  The difference between the scientific and religious theories
> is that the scientific one can be DISproven: it makes predictions that
> can be TESTED.
>
> You may, if you like, apply this distiction to the beliefs that
> determine your behavior.  Since you can't disprove the existence of
> God, you may choose to chuck out all religion.  Since you CAN think of
> ways to disprove f=ma, you may avoid being run over by a bus.

It is recognized that any non-trivial complete theory cannot be exactly true.
If I say that there will be some history of the universe, this is a trivial
untestable theory, and is completely useless.  If I say that the motion of
the planets is describable by Newton's law of gravity, this is clearly false,
but is quite adequate for spaceship navigation.  Even with relativistic
corrections it is false, because it ignores such things as tidal friction.
Furthermore, we do not know the precise form of gravitation in a relativistic
framework, and even less the modifications due to quantum mechanical
considerations.

In the strict sense, we will never have a correct theory.  The proper question
about a theory is whether its errors should be ignored at the present time.
And it is quite possible that for some purposes they should and for others
they should not.  But unless the theory provides predictive power or insight,
its accuracy is unimportant.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

------------------------------

Date: Mon, 25 Jul 88 13:22 EST
From: steven horst 219-289-9067           
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Free Will (long)

A few quibbles about some characterizations of free will and
related problems:
(1) D.V.Swinney (dsinney@galaxy.afit.af.mil) writes:
> The "free-will" theorists hold that are (sic) choices are only
> partially deterministic and partially random.
>
> The "no-free-will" theorists hold that are (sic) choices are
> completely deterministic with no random component.

I'm not really sure whether Swinney means to equate free will with
randomness, but if he does he is surely mistaken.  On the one hand,
there are some kinds of randomness that are of no use to the free
will theorist: the kind of randomness suggested by quantum physics,
for example, does not give the free will theorist what he wants.
One can believe in quantum indeterminism without believing in
free will.  On the other hand, the term "choice" is ambiguous between
(a) the ACT OF CHOOSING and (b) THAT WHICH IS CHOSEN (in this case,
let's say the behavior that results from the choosing).  It's not
clear which of these Swinney means.  I think that what the free will
theorist (at least some free will theorists, at any rate) would say
is that the CHOOSING is not determined (in the sense of being the
inevitible result of a previous state of affairs governed by a
universal law), but the resulting behavior IS, in a sense, determined:
it is determined by the act of choosing and the states of the
organism and its environment that allow what is chosen to be carried
out.  (There is a fairly large philosophical corpus on the subject
of "agent causation".)
    What the advocate of free will (we'll exclude compatibilists for
the moment) must not say is that choices freely made can receive
an adequate explanation in terms of natural laws and states of
affairs prior to the free act.  So Swinney is right that
(non-compatibilist) free will theorists are not determinists.  But
randomness just doesn't capture what the free will theorist is after.
And I think the reason is something like this: human actions can be
looked at from an "external" perspective, just like any other events
in the world.  As such, they either fall under laws covering causal
regularities or they do not, and so from this perspective they are
either determined or random.  But unlike other events in nature,
the actions (and mental states) of thinking beings can also be
understood from an "internal" or "first-person" perspective.  It is
only by understanding this perspective that the notion of FREEDOM
becomes intelligible.  Moreover, it is not clear that the two
perspectives are commensurable - so it isn't really clear that one
can even ask coherent questions about freedom and determinism.
At any rate, the notions of "freedom" and "bondage" of the will
are not reducible to indeterminism and determinism.

(2) John Logajan (logajan@ns.uucp) writes that
> Unproveable theories aren't very useful.
   and that
> Unproveable theories are rather special in that they usually only
> occur to philosophers.

     If we were talking about logic or mathematics, Logajan's assertions
might be correct, though even there some of the most interesting
"theories" are not known to be proveable.  But in the sciences, NO
interesting theories are proveable, as Karl Popper argued so
persuasively (and frequently and loudly) for many years.  The nature
of the warrant for scientific theories is a complicated thing.
(For those interested, I would recommend Newton-Smith's book on
the subject, which as I recall is entitled "Rationality in Science".)
Perhaps Logajan did not mean to conjure visions of the logical
positivists when he used the word "proveable", in which case I
apologize for conjuring Popper in return.  But the word "proof"
really does bring to mind a false, if popular, picture of the nature
of scientific research.

Steven Horst
   BITNET.......gkmarh@irishmvs
   SURFACE......Department of Philosophy
                Notre Dame, IN  46556

------------------------------

End of AIList Digest
********************

∂28-Jul-88  2017	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #29  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 28 Jul 88  20:16:55 PDT
Date: Thu 28 Jul 1988 22:57-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #29
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 29 Jul 1988       Volume 8 : Issue 29

Today's Topics:

 Mathematical Logic:

  (This began as part of the 'free will' discussion, but
   seems to have branched out since ...)

  Are all reasoning systems inconsistent?
  Bounded systems are too limited
  Goedel's Theorem
  self reference paradox

----------------------------------------------------------------------

Date: Wed, 27 Jul 88 08:47:38 EDT
From: PAUL_MALENFANT@VOID.CEO.DG.COM
Reply-to: <PAUL_MALENFANT%VOID.CEO.DG.COM@adam.DG.COM>
Subject: Re: Are all reasoning systems inconsistent?

Jon, after reading your proof beginning with S = (S -> A), I constructed
the truth table for it.

     S    A  |  (S -> A) |  S = (S -> A)
  =========================================
     T    T  |     T     |      T
     F    T  |     T     |      F
     T    F  |     F     |      F
     F    F  |     T     |      F

As you can see, it is true only when both S and A are true, so
your proof isn't saying anything new.  A must be true by definition,
not by deduction.

Similarly, S = (P(n) -> A) can be constructed in the same way.

  S  P(n)  A |  (P(n) -> A) | S = (P(n) -> A)
==============================================
  T   T    T |       T      |      T
  F   T    T |       T      |      F
  T   F    T |       T      |      T             *
  F   F    T |       T      |      F
  T   T    F |       F      |      F
  F   T    F |       F      |      T             *
  T   F    F |       T      |      T             *
  F   F    F |       T      |      F

Here, there are more true combinations, but the ones that are marked
by * must be discarded because they violate (S = P(n)) which was not
in the original expression - if it were, then the only instance of
truth for the expression would require S, P(n) and A all to be true.

So in answer to your question, the reasoning system wasn't inconsistent,
it just revealed something which had to be true about the expression
in the first place.
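
(As a small aside, one can check such tables mechanically; the Prolog
fragment below, purely an illustration, enumerates the rows of the
first table by brute force:)

   % Illustrative sketch: enumerate truth values and evaluate
   % S = (S -> A) for every combination.
   truth_value(true).
   truth_value(false).

   implies(false, _, true).
   implies(true, A, A).

   equivalent(P, P, true) :- !.
   equivalent(_, _, false).

   table_row(S, A, Result) :-
       truth_value(S),
       truth_value(A),
       implies(S, A, SA),
       equivalent(S, SA, Result).

   % ?- table_row(S, A, true).
   % succeeds only with S = true, A = true, matching the table above.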

Paul

------------------------------

Date: 27 Jul 88 14:30:34 GMT
From: cik@l.cc.purdue.edu (Herman Rubin)
Subject: Bounded systems are too limited


In a previous article, John B. Nagle writes:

> Goetz writes:
>
> >                              Goedel's Theorem showed that you WILL have an
> > unbounded number of axioms following the method you propose. That is why
> > most mathematicians consider it an important theorem - it states you can
> > never have an axiomatic system "as complex as"
> > arithmetic without having true statements which are unprovable.
>
>       Always bear in mind that this implies an infinite system.  Neither
> undecidability nor the halting problem apply in finite spaces.  A
> constructive mathematics in a finite space should not suffer from either
> problem.  Real computers, of course, can be thought of as a form of
> constructive mathematics in a finite space.
>
>       There are times when I wonder if it is time to displace infinity from
> its place of importance in mathematics.  The concept of infinity is often
> introduced as a mathematical convenience, so as to avoid seemingly ugly
> case analysis.  The price paid for this convenience may be too high.
>
>       Current thinking in physics seems to be that everything is quantized
> and that the universe is of finite size.  Thus, a mathematics with infinity
> may not be needed to describe the physical universe.
>
>       It's worth considering that a century from now, infinity may be looked
> upon as a mathematical crutch and a holdover from an era in which people
> believed that the universe was continuous and developed a mathematics to
> match.
>
>                                       John Nagle

The finiteness of size of the universe is irrelevant to the question of
whether an infinite system is needed.  The number of points in the smallest
interval in one dimension is the same as the number needed for any finite-
dimensional model of the universe.  The resolution of the various paradoxes
requires a concept of infinity, but not an unbounded universe.

And even if the universe is finite and quantized, so that at any physical
time there are only finitely many points in space (with the appropriate
relativistic modifications), and any history has only a finite number of
time points, the probabilistic considerations require that the parameter
space is infinite.

Mathematics does not allow infinitely long arguments in most of its branches.
A proof must have finite length in its language.  However, bounded length
cannot be required.  I cannot imagine a remotely reasonable system which
would allow proofs to have 65535 lines but not 65536.  Or a system which
would allow an argument to use 1988 variables but not 1999.  The postulates
about the integers state that for every integer there is a next integer--
a next _larger_ integer.  Do not confuse unboundedness with infinity.

I know of no mathematical system in which the objects and axioms are not
recursively enumerable.  That means that they can be listed.  I have
referred above to having arbitrarily many variables.  The rules also
require a listing.  The ability to substitute an expression of an appropriate
type for a variable is actually an axiom schema, a separate axiom for each
variable and each expression.  Thus there must be an infinite number of
rules.

A Turing machine can do all mathematics in principle.  So can any computer
with an infinite tape if it is of even moderate size.  But remember the
infinity.  That a particular computer with a particular memory, including
externals, cannot do a particular job does not mean that the job cannot be
done by computers.

Also do not confuse the size of an object within a mathematical system
with the size as seen from a model.  There are also different notions of
size, and a competent mathematician has no problems in using the appropriate
notion in a given situation.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

------------------------------

Date: Wed, 27 Jul 88 15:42:05 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Re: Goedel's Theorem

> From AIList Vol 8 # 22
>    Shame on you, professor! Goedel's Theorem showed that you WILL
> have an unbounded number of axioms following the method you propose.
> That is why most mathematicians consider it an important theorem - it
> states you can never have an axiomatic system "as complex as"
> arithmetic without having true statements which are unprovable.
> Phil Goetz

Shame on who? Anyway, the theorem is important to *pure*
mathematicians, in particular number theorists, as is the concept of
infinity. But does that worry *applied* computer scientists?  I
believe you need to have a system as complex as the real numbers
(countable infinity) to get into the "Goedel domain".

> op. cit.
> Since you CAN think of ways to disprove f=ma, you may avoid being
> run over by a bus.
> -- Rich Brandau

Einstein showed that f=ma was a first approximation. He also showed
that it was fundamentally incorrect to think of the apple as being
pulled; it falls since nothing holds it up. f=ma was DISproven in
this sense.

Gordon Joly.

P.S. Is cosmology a science?

------------------------------

Date: Wed, 27 Jul 88 11:35:48 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: self reference paradox


In AIList Digest for Wednesday, 27 Jul 1988 (Volume 8, Issue 28), in his
message of 24 Jul 88 23:36:25 GMT, buengc!bph@bu-cs.bu.edu (Blair P. Houghton)
writes:

BH> One of the simplest examples of unproveability is the paradox
  |
  | "This sentence is false."
  |
  | It drives you nuts if you analyze it semantically; but, it's blithering
  | at a very low level if you hit it with logic:  call the sentence S;
  | the sentence then says . . .
  |
  | "S = not-S."

The syntactic nexus of this and related paradoxes is that there is no
referent for the deictic phrase "this sentence" at the time when it is
uttered, nor even any basis for believing that the utterance in progress
will in fact be a sentence when (or if!) it does end.  A sentence cannot
be legitimately referred to qua sentence until it is a sentence, that
is, until it is ended.  Therefore, it cannot contain a legitimate
reference to itself qua sentence.  For this reason, the above
translation into symbolic notation is not licit.  It does not even
capture what we suppose the sentence to be saying, precisely because it
does not capture the paradox.  And why does it fail in this?  Because it
is a metalanguage statement referring to the sentence abbreviated by S.
It is not a self-reference by a sentence to itself, it is a reference to
a separate sentence, abbreviated S.

This failure of self-reference is a limitation, if you want to think of
it that way, due to the fact that natural languages contain their own
metalanguages, rather than having a separate metalanguage.  See Harris
_Mathematical Structures of Language_ and _Language and Information_ for
discussion.

The logical apparatus of deduction and other forms of inference are
required only for various uses to which language may be put, rather than
being the semantic basis for natural language, as has been sometimes
claimed.  Translation of natural language texts into logical notations
is always and necessarily incomplete.  (Same references, for starters.)

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

End of AIList Digest
********************

∂28-Jul-88  2313	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #30  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 28 Jul 88  23:13:12 PDT
Date: Thu 28 Jul 1988 23:07-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #30
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 29 Jul 1988       Volume 8 : Issue 30

Today's Topics:

 Queries:

  Response to - References to MBR?
  Ronald Brachman's address
  Response to - programs in cognitive science
  Chomsky
  Response to - online ai abstracts

----------------------------------------------------------------------

Date: 25 Jul 88 16:00:58 GMT
From: hpl-opus!hpccc!hp-sde!hpfcdc!hpfclp!jorge@hplabs.hp.com  (Jorge
      Gautier)
Subject: Response to - References to MBR?


David L. Waltz, "The Prospects for Building Truly Intelligent Machines,"
Daedalus, Winter 1988.

Craig Stanfill and David L. Waltz, "Toward Memory-Based Reasoning,"
Communications of the ACM 29 (12) (December 1986).

------------------------------

Date: 26-JUL-1988 12:06:45 GMT
From: POPX%VAX.OXFORD.AC.UK@MITVMA.MIT.EDU
Subject: Ronald Brachman's address

From: Jocelyn Paine,
      St. Peter's College,
      New Inn Hall Street,
      Oxford OX1 2DL.
      (mail address only; please don't phone).

Janet Address: POPX @ UK.AC.OX.VAX


                          E-MAIL ADDRESS WANTED


Could someone tell me the address of Ronald Brachman? I'm writing a book
on the implementation of frame  languages, and my editor has suggested I
get in touch with him. Thanks.

------------------------------

Date: 26 Jul 88 14:50:00 GMT
From: port@iuvax.cs.indiana.edu
Subject: Response to - programs in cognitive science


        A Program in Cognitive Science is under development at
Indiana University.  We have permission to make some faculty appointments
this year but official state funding won't be received until spring '89.
We already have a number of students unofficially in such a program.
        Our program will serve both graduates and undergraduates.
All degrees will be joint degrees with another department
(especially Psychology, Linguistics, Computer Science and Philosophy,
although any other dept is possible).  Faculty appts will also
generally be joint appts.  A full curriculum proposal was prepared
and approved by faculty committees last year.
        Current well-known faculty include (from Psych) Rich Shiffrin,
David Pisoni, Esther Thelen, Ron Kettner, John Castellan,
(from Computer Science) Doug Hofstadter, Michael Gasser, Gregory
Rawlins, Dirk Van Gucht, (Linguistics) Dan Dinnsen,
(Philosophy) Jon Michael Dunn and many others.

        For further information contact Richard Shiffrin,
Acting Director of Cog Sci Prog, Dept of Psychology, Indiana
University, Bloomington, IN 47405 (812-335-4972)   -- or contact me.
                Robert Port
                Dept of Linguistics
                Dept of Computer Science
                Indiana University, Bloomington, IN, 47405
                812-335-9217, 812-335-6458

------------------------------

Date: 26 Jul 88 16:54:20 GMT
From: ucsdhub!hp-sdd!ncr-sd!ncrcae!gollum!rolandi@ucsd.edu  (Walter
      Rolandi)
Subject: Chomsky


>I was delighted to see Bob Freidin's posting on the winners of
>the Kyoto prizes.  Congratulations to Professors Chomsky, McCarthy,
>and Thieme!  I hope the world-wide attention will lead to better
>understanding and deeper appreciation of their contributions to
>Basic Science, Advanced Technology, and Creative Arts and Moral Sciences.
>
>--Barry Kort



Where can one find a good summary of Chomsky's experiments, his experimental
design preferences, measurement techniques, research methodology, and data
analysis procedures?


Walter Rolandi
rolandi@ncrcae.UUCP
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

------------------------------

Date: Tue, 26 Jul 88 14:18:07 CDT
From: niels@visual1.tamu.edu (Niels Bauer)
Subject: Response to - on-line ai abstracts

>Is an on-line AI abstracts service available?  Here's how one
>of our users described what they wanted:
>
>       "What I had in mind was something more on the order of a computer-
>        based literature search for AI topics like what is provided by
>        Psychological Abstracts or ERIC on-line systems for their content
>        domains."

There is an AI abstract service available, but only in printed form (as
far as I know).  It is "The Artificial Intelligence Compendium:  Abstracts
and Index to Research on AI Theory and Applications", published by Scientific
DataLink.  Again, as far as I know, this is available neither as an on-line
service nor on CD-ROM.  Many of the articles indexed are available through
other databases; however, this is obviously not as useful as if they were
all together in a single database.

Niels K. Bauer
Dept. of Computer Science
Texas A&M University
niels@visual1.tamu.edu



[Editor's note:

        It is my understanding that Scientific DataLink's products are
provided for a fee.  I think an on-line service would be more useful if
it were made available to the research community at no charge, possibly
via the internet.

        Finding funding for such a project, of course, is a problem.
There is a precedent for funding free access to information:  the Public
Library.  When these were first proposed about 150 years ago, the idea
must have sounded as un-workable and utopian as a no-charge search and
abstract service does today.  Nevertheless, it managed to work.

        The benefits of free access to information far outweigh the
short-term gains of restricting it, especially within the research
community.  This is why many journals have a copyright policy that
permits copying as long as it is for 'research purposes' or 'not for
direct commercial advantage'.  I applaud such journals, and do not
subscribe to any with a dissimilar policy.

        I invite readers of AIList to post ideas and comments on how an
online search/abstract/full text service could be made most useful.


                - nick]

------------------------------

End of AIList Digest
********************

∂29-Jul-88  0154	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #29  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 29 Jul 88  01:54:48 PDT
Date: Thu 28 Jul 1988 22:57-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #29
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 29 Jul 1988       Volume 8 : Issue 29

Today's Topics:

 Mathematical Logic:

  (This began as part of the 'free will' discussion, but
   seems to have branched out since ...)

  Are all reasoning systems inconsistent?
  Bounded systems are too limited
  Goedel's Theorem
  self reference paradox

----------------------------------------------------------------------

Date: Wed, 27 Jul 88 08:47:38 EDT
From: PAUL_MALENFANT@VOID.CEO.DG.COM
Reply-to: <PAUL_MALENFANT%VOID.CEO.DG.COM@adam.DG.COM>
Subject: Re: Are all reasoning systems inconsistent?

Jon, after reading your proof beginning with S = (S -> A), I constructed
the truth table for it.

     S    A  |  (S -> A) |  S = (S -> A)
  =========================================
     T    T  |     T     |      T
     F    T  |     T     |      F
     T    F  |     F     |      F
     F    F  |     T     |      F

As you can see, it can only be true iff both S and A are true, so
your proof isn't saying anything new.  A must be true by definition,
not by deduction.

A truth table for S = (P(n) -> A) can be constructed in the same way.

  S  P(n)  A |  (P(n) -> A) | S = (P(n) -> A)
==============================================
  T   T    T |       T      |      T
  F   T    T |       T      |      F
  T   F    T |       T      |      T             *
  F   F    T |       T      |      F
  T   T    F |       F      |      F
  F   T    F |       F      |      T             *
  T   F    F |       T      |      T             *
  F   F    F |       T      |      F

Here, there are more true combinations, but the ones that are marked
by * must be discarded because they violate (S = P(n)), which was not
in the original expression - if it were, then the only instance of
truth for the expression would require S, P(n), and A all to be true.

So in answer to your question, the reasoning system wasn't inconsistent;
it just revealed something that had to be true about the expression
in the first place.
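
For readers who want to check tables like these mechanically, here is a
minimal sketch (in Python, with the helper name "implies" chosen purely
for illustration) that enumerates the assignments by brute force and
confirms both tables above:

from itertools import product

def implies(p, q):
    return (not p) or q

# S = (S -> A): true only when S and A are both true.
for S, A in product([True, False], repeat=2):
    print(S, A, S == implies(S, A))

# S = (P(n) -> A): the rows that come out true with S != P(n) are the
# ones marked '*' in the table above.
for S, P, A in product([True, False], repeat=3):
    value = (S == implies(P, A))
    print(S, P, A, value, '*' if value and S != P else '')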

Paul

------------------------------

Date: 27 Jul 88 14:30:34 GMT
From: cik@l.cc.purdue.edu (Herman Rubin)
Subject: Bounded systems are too limited


In a previous article, John B. Nagle writes:

> Goetz writes:
>
> >                              Goedel's Theorem showed that you WILL have an
> > unbounded number of axioms following the method you propose. That is why
> > most mathematicians consider it an important theorem - it states you can
> > never have an axiomatic system "as complex as"
> > arithmetic without having true statements which are unprovable.
>
>       Always bear in mind that this implies an infinite system.  Neither
> undecidability nor the halting problem apply in finite spaces.  A
> constructive mathematics in a finite space should not suffer from either
> problem.  Real computers, of course, can be thought of as a form of
> constructive mathematics in a finite space.
>
>       There are times when I wonder if it is time to displace infinity from
> its place of importance in mathematics.  The concept of infinity is often
> introduced as a mathematical convenience, so as to avoid seemingly ugly
> case analysis.  The price paid for this convenience may be too high.
>
>       Current thinking in physics seems to be that everything is quantized
> and that the universe is of finite size.  Thus, a mathematics with infinity
> may not be needed to describe the physical universe.
>
>       It's worth considering that a century from now, infinity may be looked
> upon as a mathematical crutch and a holdover from an era in which people
> believed that the universe was continuous and developed a mathematics to
> match.
>
>                                       John Nagle

The finiteness of size of the universe is irrelevant to the question of
whether an infinite system is needed.  The number of points in the smallest
interval in one dimension is the same as the number needed for any finite-
dimensional model of the universe.  The resolution of the various paradoxes
requires a concept of infinity, but not of an unbounded universe.

And even if the universe is finite and quantized, so that at any physical
time there are only finitely many points in space (with the appropriate
relativistic modifications), and any history has only a finite number of
time points, the probabilistic considerations require that the parameter
space be infinite.

Mathematics does not allow infinitely long arguments in most of its branches.
A proof must have finite length in its language.  However, bounded length
cannot be required.  I cannot imagine a remotely reasonable system which
would allow proofs to have 65535 lines but not 65536.  Or a system which
would allow an argument to use 1988 variables but not 1999.  The postulates
about the integers state that for every integer there is a next integer--
a next _larger_ integer.  Do not confuse unboundedness with infinity.

I know of no mathematical system in which the objects and axioms are not
recursively enumerable.  That means that they can be listed.  I have
referred above to having arbitrarily many variables.  The rules also
require a listing.  The ability to substitute an expression of an appropriate
type for a variable is actually an axiom schema, a separate axiom for each
variable and each expression.  Thus there must be an infinite number of
rules.

A Turing machine can do all mathematics in principle.  So can any computer
with an infinite tape, if it is of even moderate size.  But remember the
infinity.  That a particular computer with a particular memory, including
externals, cannot do a particular job does not mean that the job cannot
be done by computers.

Also do not confuse the size of an object within a mathematical system
with the size as seen from a model.  There are also different notions of
size, and a competent mathematician has no problems in using the appropriate
notion in a given situation.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

------------------------------

Date: Wed, 27 Jul 88 15:42:05 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Re: Goedel's Theorem

> From AIList Vol 8 # 22
>    Shame on you, professor! Goedel's Theorem showed that you WILL
> have an unbounded number of axioms following the method you propose.
> That is why most mathematicians consider it an important theorem - it
> states you can never have an axiomatic system "as complex as"
> arithmetic without having true statements which are unprovable.
> Phil Goetz

Shame on who? Anyway, the theorem is important to *pure*
mathematicians, in particular number theorists, as is the concept of
infinity. But does that worry *applied* computer scientists?  I
believe you need to have a system as complex as the real numbers
(an uncountable infinity) to get into the "Goedel domain".

> op. cit.
> Since you CAN think of ways to disprove f=ma, you may avoid being
> run over by a bus.
> -- Rich Brandau

Einstein showed that f=ma was a first approximation. He also showed
that it was fundamentally incorrect to think of the apple as being
pulled; it falls since nothing holds it up. f=ma was DISproven in
this sense.

Gordon Joly.

P.S. Is cosmology a science?

------------------------------

Date: Wed, 27 Jul 88 11:35:48 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: self reference paradox


In AIList Digest for Wednesday, 27 Jul 1988 (Volume 8, Issue 28), in his
message of 24 Jul 88 23:36:25 GMT, buengc!bph@bu-cs.bu.edu (Blair P. Houghton)
writes:

BH> One of the simplest examples of unproveability is the paradox
  |
  | "This sentence is false."
  |
  | It drives you nuts if you analyze it semantically; but, it's blithering
  | at a very low level if you hit it with logic:  call the sentence S;
  | the sentence then says . . .
  |
  | "S = not-S."

The syntactic nexus of this and related paradoxes is that there is no
referent for the deictic phrase "this sentence" at the time when it is
uttered, nor even any basis for believing that the utterance in progress
will in fact be a sentence when (or if!) it does end.  A sentence cannot
be legitimately referred to qua sentence until it is a sentence, that
is, until it is ended.  Therefore, it cannot contain a legitimate
reference to itself qua sentence.  For this reason, the above
translation into symbolic notation is not licit.  It does not even
capture what we suppose the sentence to be saying, precisely because it
does not capture the paradox.  And why does it fail in this?  Because it
is a metalanguage statement referring to the sentence abbreviated by S.
It is not a self-reference by a sentence to itself, it is a reference to
a separate sentence, abbreviated S.

This failure of self-reference is a limitation, if you want to think of
it that way, due to the fact that natural languages contain their own
metalanguages, rather than having a separate metalanguage.  See Harris
_Mathematical Structures of Language_ and _Language and Information_ for
discussion.

The logical apparatus of deduction and other forms of inference is
required only for various uses to which language may be put, rather than
being the semantic basis for natural language, as has sometimes been
claimed.  Translation of natural language texts into logical notations
is always and necessarily incomplete.  (Same references, for starters.)

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

End of AIList Digest
********************

∂31-Jul-88  1448	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #31  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 31 Jul 88  14:48:11 PDT
Date: Sun 31 Jul 1988 17:31-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #31
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 1 Aug 1988       Volume 8 : Issue 31

Today's Topics:

 Queries:

  Expert Systems in computer design/configuration/administration
  FRL
  boolean reasoning
  neurocomputing software on a SUN
  computer chess
  Large corpora of English text
  a grammar for English

----------------------------------------------------------------------

Date: Mon, 25 Jul 88 17:21:30 CDT
From: Frank W Peters <PETERS%MSSTATE.BITNET@MITVMA.MIT.EDU>
Subject: Expert Systems in computer
         design/configuration/administration

Hello,

     I am beginning research into expert systems being used in the
design of computers, configuration of systems (that is, tailoring
a system to a specific user's needs), and system administration
(such as problem diagnosis, resource allocation scheme advice, and
so on).

     I am seeking the following information:

    1)  Names and vendors of any such systems.
    2)  References to any articles or papers
        relevant to this topic.

    I would be grateful for any assistance you can give me.  Please
email any info directly to me and I will summarize to the list if
interest warrants.

                      Thank You
                     Frank Peters

*******************************************************************
*     Frank Wayne Peters      *     Phone:  (601) 325-2942        *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
*     Electronic Address:     *      Snail Mail Address:          *
*    PETERS@MSSTATE.BITNET    *           Drawer CC               *
*                             *     Miss State, Ms. 39762         *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
*  Disclaimers:                                                   *
*     I make it a point to speak only for myself.  If my boss     *
*     wants you to know what he thinks he'll tell you.            *
*                                                                 *
*                             ...it is a tale                     *
*     Told by an idiot, full of sound and fury                    *
*     Signifying nothing.                                         *
*                               - Bill Somebody-or-Another        *
*******************************************************************

------------------------------

Date: 27 Jul 88 05:11:06 GMT
From: mcvax!inria!crin!napoli@uunet.uu.net  (Amedeo NAPOLI)
Subject: FRL

Who can tell me the semantics of the slot "classification" in the frame based
language FRL (Goldstein, Roberts 1977) ?
This slot may be assigned with either "generic" or "individual".
What do these values actually mean?  In particular, is it possible to
instantiate again a frame that has "individual" in its classification slot?
Moreover, is the language still available and, if so, how can I get it ?

Thanks in advance,

Amedeo Napoli


--
--- Amedeo Napoli @ CRIN / Centre de Recherche en Informatique de Nancy
EMAIL : napoli@crin.crin.fr - POST : BP 239, 54506 VANDOEUVRE CEDEX, France

------------------------------

Date: Thu, 28 Jul 88 15:47:20 BST
From: mcvax!duttnph!zepp@uunet.UU.NET (Frank Zeppenfeldt)
Subject: boolean reasoning

I'm a graduate student and currently looking into some problems
concerning low-level and boolean reasoning for (real-time) process
control.  So far the only literature I have been able to find on this
topic is the article:

  Logical Controls via Boolean Rule Matrix Transformations
  By : Carl G.Looney and Abdulrudah A.Alfize
  IEEE Trans. on Systems, Man and Cybernetics ,
  Vol SMC-17 no.6 Nov/Dec 1987

It describes how from an initial state of a truth-vector all the
possible truths can be deduced in one step, given a matrix  which
represents the rules. This is done via the transitive closure of
the graph that is represented by the matrix.

Example 1 : initial rulebase    a --> b  /* a,b and c are boolean */
                                b --> c  /* or 8 bits values */
            becomes after computing the transitive closure ,
                                a --> b  /* still the same */
                                b --> c  /* idem */
                                a --> c  /* new ! */

In this case, some matrix multiplications are sufficient.
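
(For the single-literal case above, the closure can also be computed
directly with Warshall's algorithm in O(n^3) boolean operations; a
minimal sketch in Python follows, with all names chosen for illustration
and not taken from the Looney/Alfize paper:)

# M[i][j] == True means there is a rule i --> j (single-literal LHS only).
# After closure, M[i][j] is True whenever j is derivable from i.
def transitive_closure(M):
    n = len(M)
    M = [row[:] for row in M]              # work on a copy
    for k in range(n):
        for i in range(n):
            if M[i][k]:
                for j in range(n):
                    if M[k][j]:
                        M[i][j] = True
    return M

# Example 1: a --> b, b --> c  yields the extra rule a --> c.
a, b, c = 0, 1, 2
M = [[False] * 3 for _ in range(3)]
M[a][b] = M[b][c] = True
assert transitive_closure(M)[a][c]         # the new rule a --> c appears

(Rules with conjunctive left-hand sides, as in the next example, need an
AND/OR-graph closure and are not covered by this sketch.)
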
Now suppose we have      a + b  --> c
                         ac + d --> e;
it becomes more complex to express the LHS of e in terms of
a, b and d.  The reason for determining the transitive closure of the
rulebase is that I want to load these rules in their Conjunctive
Normal Form in a logic array and apply them in parallel.  Every use
of some RHS result in the LHS of another rule would generate a
recursive entry, and I want to avoid that in hardware.
My Prolog solution is of complexity O(#rules↑4)!

My questions:
1.      The e-mail addresses of Mr. Looney or Mr. Alfize at
        the University of Nevada.
2.      Does somebody know of algorithms to compute the transitive
        closure of such a rulebase or AND/OR graph (using the
        available boolean instructions on microprocessors)?
3.      Is there anyone who can give me examples of or references to
        this kind of parallel production system or low-level
        reasoning?

I thank you in advance for any reactions,

        Frank Zeppenfeldt  ..mcvax!dutrun!duttnph!zepp
        University of Technology Delft
        Department of Applied Physics
        Pattern Recognition Group
        Lorentzweg 1
        2628 CJ Delft
        The Netherlands.

------------------------------

Date: Thu, 28 Jul 88 11:24:16 EDT
From: Allen Wilkinson <urt@nav.icst.nbs.gov>
Subject: neurocomputing software on a SUN

Hello,

I am not a reader of this news group so if you can help me
with my problem please mail your responses to the address
below.

I am in need of information about neurocomputing software
and hardware for a SUN 3. Any information would be greatly
appreciated.

Thanks in advance,

/----------------------------------------------------------------------\
|                                   -------                            |
|  R. Allen "Urt" Wilkinson       / o   o /| "Reality is only someone  |
|  National Bureau of Standards  / o   o /o|    else's Fantasy."       |
|  Bldg 225  Room A216           ------- oo|                           |
|  Gaithersburg, MD 20899       | o   o |oo                ---[        |
|  ARPA: urt@icst-cmr.arpa      |   o   |o/     -============( ]---O   |
|  DOMAIN: urt@cmr.icst.nbs.gov | o   o |/                 ---[        |
|                                -------                               |
\----------------------------------------------------------------------/

------------------------------

Date: 28 Jul 88 15:41:00 GMT
From: uxe.cso.uiuc.edu!gupta@uxc.cso.uiuc.edu
Subject: computer chess


I will be starting my Master's this fall and am fascinated by Artificial
Intelligence - especially in computer chess.

Does anyone know of any good info (books, papers, authors, professors,
articles, research projects) on this subject?

Thanks


---
Rohit Gupta               Internet:   gupta%uxe.cso.uiuc.edu@uxc.cso.uiuc.edu
Champaign, Illinois       UUCP: uunet!uiucuxc!uxe!gupta

------------------------------

Date: Thu, 28 Jul 88 12:30 PDT
From: Bagley.PA@Xerox.COM
Subject: Large corpora of English text

I am looking for public domain or commercially available corpora of
either written English or transcriptions of spoken English, preferably
significantly longer than a million characters.  If it is tagged with
part-of-speech that would be great, but it isn't necessary.  Thanks for
all assistance.

Steve Bagley
System Sciences Laboratory
Xerox PARC
3333 Coyote Hill Road
Palo Alto CA 94301
Bagley.pa@xerox.com
415-494-4331

------------------------------

Date: Thu, 28 Jul 88 18:01
From: "H.Ludwig Hausen +49-2241142426"           
      <HAUSEN%DBNGMD21.BITNET@MITVMA.MIT.EDU>
Subject: a grammar for English

We are also interested in obtaining
grammars for English.  Any computer-readable form is welcome.
Thanks for any help.
                               H A N S - L U D W I G  H A U S E N
GMD Schloss Birlinghoven       Telefax   +49-2241-14-2618
D-5205 Sankt Augustin 1        Teletex   2627-224135=GMD VV
       West  GERMANY           Telex     8 89 469 gmd d
                               E-mail    hausen@dbngmd21.BITNET
                               Telephone +49-2241-14-2440 or 2426
P.S.:GMD (Gesellschaft fuer Mathematik und Datenverarbeitung)
     German National Research Institute of Computer Science
     German Federal Ministry of Research and Technology (BMFT)

------------------------------

End of AIList Digest
********************

∂31-Jul-88  1850	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #32  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 31 Jul 88  18:50:21 PDT
Date: Sun 31 Jul 1988 17:35-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #32
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 1 Aug 1988       Volume 8 : Issue 32

Today's Topics:

 Query Responses:

  Cognitive Science Programs
  FRL
  Ronald Brachman's address
  computer chess
  Expert System Applications in Government

----------------------------------------------------------------------

Date: Fri, 29 Jul 88 15:10:23 BST
From: Ian Pratt
      <ipratt%research2.computer-science.manchester.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Cognitive Science Programs


There is also a Cognitive Science Master's program at Manchester
University, organized jointly by the Departments of Psychology, Computer
Science, Medical Biophysics, General Linguistics, and the Centre for
Computational Linguistics at UMIST.

------------------------------

Date: 28 Jul 88 23:18:06 GMT
From: finin%antares@burdvax.prc.unisys.com  (Tim Finin)
Subject: FRL

In article <576@crin.crin.fr>, napoli@crin (Amedeo NAPOLI) writes:
>Who can tell me the semantics of the slot "classification" in the frame based
>language FRL (Goldstein, Roberts 1977) ?
>This slot may be assigned with either "generic" or "individual".
>What do these values actually mean ? In particuliar, is it possible to
>instantiate again a frame that has "individual" in its classification slot.
>Moreover, is the language still available and, if so, how can I get it ?

As I recall, FRL didn't assign any semantics to the classification
slot.  It was, however, commonly used by applications to differentiate
frames which represented generic objects from those representing
individual ones.  What this meant, exactly, was usually very
application dependent.  In fact, the difficulty in deciding what such
a distinction should mean in general in FRL and similar early frame
representation languages was the focus of a lot of discussion and
research in the early eighties.  One result was the architecture used
in many representation systems which includes an object oriented
descriptive component and an expression oriented assertional
component.  Information about individuals is then relegated to the
assertional component.
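
(A very loose sketch of that split, in Python rather than FRL, with all
names invented for illustration:)

# Generic frames live in a descriptive (terminological) component;
# facts about individuals go into a separate assertional component.
generic_frames = {
    "Elephant": {"isa": "Mammal", "legs": 4},
}
assertions = [
    ("instance-of", "Clyde", "Elephant"),
    ("color", "Clyde", "gray"),
]

def individuals_of(frame):
    return [x for rel, x, f in assertions
            if rel == "instance-of" and f == frame]

print(individuals_of("Elephant"))          # ['Clyde']
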
--
  Tim Finin                     finin@prc.unisys.com
  Paoli Research Center         ..!{psuvax1,sdcrdcf,cbmvax}!burdvax!finin
  Unisys                        215-648-7446 (office)  215-386-1749 (home)
  PO Box 517, Paoli PA 19301    215-648-7412 (fax)

------------------------------

Date: Fri, 29 Jul 88 11:49:57 EDT
From: rjb@research.att.com
Subject: Ronald Brachman's address

Ronald Brachman is alive and well at Bell Labs, in the beautiful Garden State.
His net address is rjb@research.att.com.

--Ron Brachman

------------------------------

Date: 29 Jul 88 12:50:34 GMT
From: ksr!breakpoint!richt@uunet.uu.net  (Rich Title)
Subject: computer chess

>... computer chess.
>Does anyone know of any good info (books, papers, authors, professors,
>articles, research projects) on this subject?
>---
>Rohit Gupta              Internet:   gupta%uxe.cso.uiuc.edu@uxc.cso.uiuc.edu
>Champaign, Illinois       UUCP: uunet!uiucuxc!uxe!gupta

There's a Carnegie Mellon PhD thesis by Carl Ebeling that was
published (by MIT Press, I think) under the title "All the Right
Moves".  It describes HiTech,
the current world computer chess champion. That thesis in turn
points to other papers on computer chess.

Carnegie Mellon seems to be *the* place for computer chess.
Hans Berliner, former postal chess champion, is a comp sci
professor there.

The techniques used in the top machines such as HiTech represent
impressive engineering, but aren't what most people think of
as "AI". Very fast searching, aided by hardware that generates
and evaluates moves in parallel and evaluates positions
in parallel.
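
(For the curious, the core of that searching is plain alpha-beta; here
is a minimal negamax sketch in Python over an abstract game interface,
whose names are illustrative and have nothing to do with HiTech's
actual code:)

# evaluate(pos) must score the position from the side to move's point
# of view; moves(pos) returns the legal moves; apply(pos, m) returns
# the successor position.
def alphabeta(pos, depth, alpha, beta, moves, apply, evaluate):
    if depth == 0 or not moves(pos):
        return evaluate(pos)
    for m in moves(pos):
        score = -alphabeta(apply(pos, m), depth - 1, -beta, -alpha,
                           moves, apply, evaluate)
        if score >= beta:
            return beta                    # cutoff: this line is refuted
        alpha = max(alpha, score)
    return alpha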

    - Rich

------------------------------

Date: Wed, 27 Jul 88 09:52:52 EDT
From: Wm A. Carpenter <WCARPENT%MDF@MITRE.ARPA>
Subject: Expert System Applications in Government
     
 The IEEE Computer Society (1730 Massachusetts Ave., N.W., Washington, D.C.
 20036-1903) and The MITRE Corporation have sponsored the Expert Systems in
 Government (ESIG) Conference for the past three years (1985, 1986, 1987).  In
 1989, the conference title is being changed to AI Systems in Government
 (AISIG).  Copies of the previous conference proceedings can be obtained from
 the IEEE.
     
 A call for papers for AISIG'89 has been issued.  Papers are due 1 Sept 1988
 (address:  AISIG'89, MS W418, The MITRE Corporation, 7525 Colshire Dr.,
 McLean, VA 22102).  AISIG'89 will be held in Washington, D.C. on 27-31 March
 1989.  The theme of AISIG'89 will be:  Intelligent Systems--Realizing the
 Payoff.
 *
 *        Wm A. Carpenter <WCARPENT%MDF@MITRE.ARPA>
 *        The MITRE Corporation
::

------------------------------

End of AIList Digest
********************

∂01-Aug-88  1145	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #33  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 1 Aug 88  11:45:08 PDT
Date: Mon  1 Aug 1988 14:22-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #33
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 2 Aug 1988       Volume 8 : Issue 33

Today's Topics:

 Announcements:

  Object-Oriented Database Workshop
  Network Computing Forum - call for papers
  ACL 1989 Annual Meeting Call for Papers; Vancouver, 26-29 June
  First annual meeting of the International Neural Network Society
  New Special Interest Group - INFO-FRAME

----------------------------------------------------------------------

Date: 26 Jul 88 20:05:21 GMT
From: root@mips.ti.com (Super-user)
Reply-to: thatte@ti-csl.ti.com (Satish Thatte)
Subject: Object-Oriented Database Workshop
Article-I.D.: ti-csl.54967


                  OBJECT-ORIENTED DATABASE WORKSHOP

                 To be held in conjunction with the

                             OOPSLA '88

              Conference on Object-Oriented Programming:
                 Systems, Languages, and Applications

                          26 September 1988

                    San Diego, California, U.S.A.


Object-oriented database systems combine the strengths of
object-oriented programming languages and data models, and database
systems.  This one-day workshop will expand on the theme and scope of a
similar OODB workshop held at OOPSLA '87.  The 1988 Workshop will
consist of the following four panels:

  Architectural issues: 8:30 AM - 10:00 AM

    Therice Anota (Graphael), Gordon Landis (Ontologic),
    Dan Fishman (HP), Patrick O'Brien (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Transaction management for cooperative work: 10:30 AM - 12:00 noon

    Bob Handsaker (Ontologic), Eliot Moss (Univ. of Massachusetts),
    Tore Risch (HP), Craig Schaffert (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Schema evolution and version management:  1:30 PM - 3:00 PM

    Gordon Landis (Ontologic), Mike Killian (DEC),
    Brom Mehbod (HP), Jacob Stein (Servio Logic),
    Craig Thompson (TI), Stan Zdonik (Brown University)

  Query processing: 3:30 PM - 5:00 PM

    David Beech (HP), Paul Gloess (Graphael),
    Bob Strong (Ontologic), Jacob Stein (Servio Logic),
    Craig Thompson (TI)


Each panel member will present his position on the panel topic in 10
minutes.  This will be followed by questions from the workshop
participants and discussions.  To encourage vigorous interactions and
exchange of ideas between the participants, the workshop will be limited
to 60 qualified participants.  If you are interested in attending the
workshop, please submit three copies of a single page abstract to the
workshop chairman describing your work related to object-oriented
database systems.  The workshop participants will be selected based on
the relevance and significance of their work described in the abstract.

Abstracts should be submitted to the workshop chairman by 15 August 1988.
Participants selected will be notified by 5 September 1988.

                        Workshop Chairman:

                       Dr. Satish M. Thatte
           Director, Information Technologies Laboratory
                Texas Instruments Incorporated
                   P.O. Box 655474, M/S 238
                        Dallas, TX 75265

                      Phone: (214)-995-0340
  Arpanet: Thatte@csc.ti.com   CSNet: Thatte%ti-csl@relay.cs.net

------------------------------

Date: 29 Jul 88 12:31 PDT
From: William Daul / McAir / McDonnell-Douglas Corp 
      <WBD.MDC@OFFICE-8.ARPA>
Subject: NETWORK COMPUTING FORUM - CALL FOR PAPERS

NETWORK COMPUTING FORUM

   CALL FOR PAPERS

   OCTOBER 5-8, 1988

   HOLIDAY INN WESTPORT, ST. LOUIS, MISSOURI

The next meeting of the Network Computing Forum will be held on October 5-7 in
St. Louis, Missouri.  This will be the fourth meeting of the Forum, and will
focus on the role of the Forum as a catalyst for change in the industry.  The
Forum is an industry group chartered to lead the way for rapid adoption of
multi-vendor network computing concepts and technologies.  Forum meetings allow
representatives from users and vendors to work together on common issues in an
open, informal atmosphere.  The Forum has over 100 member organizations, and
more than 220 representatives attended the May 1988 meeting.

Forum meetings are organized into three sessions:  a conference featuring
invited papers and panel sessions, meetings of interest groups and working
groups, and a policy making executive committee meeting.  Some areas of
interest to the Forum member organizations are listed, to suggest possible
topics for papers:

   Definition of user requirements for network computing

   Practical experiences using network computing concepts & technologies

   Partitioning and/or integration of applications across networks

   Remote procedure calls and other core services for network computing

   System and network administration for networks of heterogeneous computers

   User interfaces and user environments for network computing

   Software licensing in a network environment

   Data representation and command scripting across heterogeneous networks

   Use of network computing with IBM mainframes (MVS and VM)

Invited Papers

   As part of each Forum meeting, papers are invited from the community at
   large for presentation and discussion.  These papers should address the use
   or development of network based applications and services.  Emphasis should
   be placed on creating and using tightly coupled links between multiple,
   heterogeneous computer systems.  Technical descriptions of research
   projects, user experiences, as well as commercially available products are
   welcome.  Invitations are also extended for more informal talks on practical
   experience in administering heterogeneous computer networks.  All
   presentations should be 35 minutes in length, with 15 minutes of discussion
   following each presentation.

   Abstracts must be received by August 10, 1988.  Abstracts should summarize
   the paper in two or three paragraphs and include the mailing address,
   affiliation, and phone number of the author(s).  Notification of abstracts
   selected will be sent on August 19, 1988 and papers must be submitted no
   later than September 20, 1988.  Papers can be copyrighted, but must include
   authorization for unrestricted reproduction by the Network Computing Forum.
   Papers can be marked as working papers to allow future publication.

SEND ABSTRACTS BY AUGUST 10, 1988 TO the Program Chairman for the October 1988
meeting:

   T.D.  Carter
   c/o Jan McPherson
   McDonnell Douglas Travel Company
   944 Anglum Drive, Suite A
   Hazelwood, MO 63042
   (314)  233-2951
   Internet Address:  TDC.MDC@OFFICE-8.ARPA

------------------------------

Date: 1 Aug 88 14:30:14 GMT
From: flash.bellcore.COM!walker@ucbvax.berkeley.edu  (Donald E Walker)
Subject: ACL 1989 Annual Meeting Call for Papers; Vancouver, 26-29
         June


                           CALL FOR PAPERS

  27th Annual Meeting of the Association for Computational Linguistics

                           26-29 June 1989
                   University of British Columbia
                Vancouver, British Columbia, Canada

TOPICS OF INTEREST:  Papers are invited on substantial, original,
and unpublished research on all aspects of computational linguistics,
including, but not limited to, pragmatics, discourse, semantics,
syntax, and the lexicon; phonetics, phonology, and morphology;
interpreting and generating spoken and written language; linguistic,
mathematical, and psychological models of language; machine translation
and translation aids; natural language interfaces; message
understanding systems; and theoretical and applications papers of every
kind.

REQUIREMENTS:  Papers should describe unique work that has not been
submitted elsewhere; they should emphasize completed work rather than
intended work; and they should indicate clearly the state of completion
of the reported results.

FORMAT FOR SUBMISSION:  Authors should submit twelve copies of an
extended abstract not to exceed eight double-spaced pages (exclusive of
references) in a font no smaller than 10 point (elite).  The title page
should include the title, the name(s) of the author(s), complete
addresses, a short (5 line) summary, and a specification of the topic
area.  Submissions that do not conform to this format will not be
reviewed.  Send to:

                Julia Hirschberg
                ACL89 Program Chair
                AT&T Bell Laboratories, 2D-450
                600 Mountain Avenue
                Murray Hill, NJ 07974, USA
                (201)582-7496; julia@btl.att.com

SCHEDULE:  Papers are due by 6 January 1989.  Authors will be notified
of acceptance by February 20.  Camera-ready copies of final papers
prepared in a double-column format, either on model paper or in a
reduced font size using laserprinter output, must be received by 20
April along with a signed copyright release statement.

OTHER ACTIVITIES:  The meeting will include a program of tutorials
organized by Martha Pollack, AI Center, SRI International, 333
Ravenswood Avenue, Menlo Park, CA 94025, USA; (415)859-2037;
pollack@ai.sri.com.  Anyone wishing to arrange an exhibit or present a
demonstration should send a brief description together with a
specification of physical requirements (space, power, telephone
connections, tables, etc.) to Richard Rosenberg at the address below.

CONFERENCE INFORMATION:  Local arrangements are being handled by
Richard Rosenberg, Department of Computer Science, University of
British Columbia, Vancouver, BC, CANADA V6T 1W5; (604)228-4142;
rosen%cs.ubc.ca@relay.cs.net.  For other information on the conference
and on the ACL more generally, contact Don Walker (ACL), Bellcore, MRE
2A379, 445 South Street, Box 1910, Morristown, NJ 07960-1910, USA;
(201)829-4312; walker@flash.bellcore.com or bellcore!walker.

PROGRAM COMMITTEE:  Joyce Friedman, Barbara Grosz, Julia Hirschberg,
Bob Kasper, Richard Kittredge, Beth Levin, Steve Lytinen, Len Schubert,
Martha Palmer, Fernando Pereira, Carl Pollard, Mark Steedman.

------------------------------

Date: Mon, 1 Aug 88 12:08:42 EDT
From: mike%bucasb.bu.edu@bu-it.BU.EDU (Michael Cohen)
Subject: FIRST ANNUAL MEETING OF THE INTERNATIONAL NEURAL NETWORK
         SOCIETY

-----Meeting Update-----
September 6--10, 1988
Park Plaza Hotel
Boston, Massachusetts

The first annual INNS meeting promises to be a historic event. Its program
includes the largest selection of investigators ever assembled to present
the full range of neural network research and applications.

The meeting will bring together over 2000 scientists, engineers, students,
government administrators, industrial commercializers, and financiers. It
is rapidly selling out. Reserve now to avoid disappointment.

Call J.R. Shuman Associates, (617) 237-7931 for information about registration
For information about hotel reservations, call the Park Plaza Hotel at
(800) 225-2008 and reference "Neural Networks." If you call
from Massachusetts, call (800) 462-2022.

There will be 600 scientific presentations, including tutorials, plenary
lectures, symposia, and contributed oral and poster presentations. Over 50
exhibits are already reserved for industrial firms, publishing houses, and
government agencies.

The full day of tutorials presented on September 6 will be given by Gail
Carpenter, John Daugman, Stephen Grossberg, Morris Hirsch, Teuvo Kohonen,
David Rumelhart, Demetri Psaltis, and Allen Selverston. The plenary lecturers
are Stephen Grossberg, Carver Mead, Terrence Sejnowski, Nobuo Suga, and Bernard
Widrow. Approximately 30 symposium lectures will be given, 125 contributed oral
presentations, and 400 poster presentations.

Fourteen professional societies are cooperating with the INNS meeting. They
are:

     American Association of Artificial Intelligence
     American Mathematical Society
     Association for Behavior Analysis
     Cognitive Science Society
     IEEE Boston Section
     IEEE Computer Society
     IEEE Control Systems Society
     IEEE Engineering in Medicine and Biology Society
     IEEE Systems, Man and Cybernetics Society
     Optical Society of America
     Society for Industrial and Applied Mathematics
     Society for Mathematical Biology
     Society of Photo-Optical Instrumentation Engineers
     Society for the Experimental Analysis of Behavior

DO NOT MISS THE FIRST BIRTHDAY CELEBRATION OF THIS IMPORTANT NEW
RESEARCH COALITION!

------------------------------

Date: Mon, 1 Aug 88 10:04:34 pdt
From: mcdonald@loki.edsg (louis mcdonald)
Subject: New Special Interest Group

New Special Interest Group


INFO-FRAME          -------     System Frameworks
-------------------------------------------------

This group is designed to provide information for software tool
developers who are responsible for integrating heterogeneous
software products, both in-house and vendor-supplied.
Usually, the integration of the products is designed to provide
an environment that makes using the tools easier. The basic
issue is to build a `framework' around the tools that provides
a common and consistent view of the system.

The framework is not limited to homogeneous environments, but
can also span heterogeneous systems. Companies like EDA
and government sponsored projects like EIS are trying to
tackle this problem. This group can be viewed as a forum
for users and developers to voice their opinions on this subject.
Frameworks are common in the area of CAD/CAE, CASE and office
automation; but they are not limited to only these areas.

Topics open for discussion are:

Tool encapsulation    Data Management             Network Computing
User Interface        Data Transfer Languages     Tool Portability
Process Control/Flow  Object Programming          Anything Else

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Moderator: Louis McDonald; Hughes Aircraft
           mcdonald%loki.edsg@hac2arpa.hac.com
           Digest format; the aim is to release a digest a week, but this
                is dependent on the amount of input.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To be added to/deleted from/corrections made to list, send message to:

            info-frame-request%loki.edsg@hac2arpa.hac.com
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All other messages should be sent to:

            info-frame%loki.edsg@hac2arpa.hac.com
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

------------------------------

End of AIList Digest
********************

∂01-Aug-88  1535	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #34  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 1 Aug 88  15:35:41 PDT
Date: Mon  1 Aug 1988 14:37-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #34
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 2 Aug 1988       Volume 8 : Issue 34

Today's Topics:

 Free Will:

  How to dispose of naive science types (short)
  The deterministic robot determines that it needs to become nondeterministic.
  Root issue of free will and problems in war zones

----------------------------------------------------------------------

Date: 27 Jul 88 09:09:44 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: How to dispose of naive science types (short)

In article <531@ns.UUCP> logajan@ns.UUCP (John Logajan x3118) writes:
>Please explain to me how an unproveable theory (one that makes no unique
>predictions) can be useful?
>
Because people use them.  Have a look at the social cognition
literature.

I understood your argument as saying that non-scientific theories
(a.k.a assumptions) cannot be useful, and conversely, that the only
useful theories are scientific ones.

If my understanding is correct, then this is very narrow-minded and
smacks of epistemological bigotry which no one can possibly match up
to in their day-to-day interactions.

Utility must not be confounded with one text-book epistemology.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: 27 Jul 88 15:34:09 GMT
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Reply-to: bwk@mbunix (Kort)
Subject: The deterministic robot determines that it needs to become
         nondeterministic.

In article <19880727030413.0.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
JMC@SAIL.STANFORD.EDU (John McCarthy) writes:
>Almost all the discussion is too vague to be a contribution.  Let me
>suggest that AI people concentrate their attention on the question of how
>a deterministic robot should be programmed to reason about its own free
>will, as this free will relates both to its past choices and to its future
>choices.  Can we program it to do better in the future than it did in the
>past by reasoning that it could have done something different from what it
>did, and this would have had a better outcome?  If yes, how should it be
>programmed?  If no, then doesn't this make robots permanently inferior to
>humans in learning from experience?

To my mind, the robot's problem becomes interesting precisely when it
runs out of knowledge to predict the outcome of the choices open to it.

The classical metaphors for this state are "The Lady or the Tiger?",
the Parable of Buridan's Ass, and "Dorothy meets the Scarecrow at
the fork in the road."  The children's game of Rock, Scissors, Paper
illustrates the predicament faced by a deterministic robot.

In the above scenarios, the resolution is to pick a path at random
and pursue it first.  To operationalize the decision, one needs to
implement the Axiom of Choice.  One needs a random number generator.
Fortunately, it is possible to build one using a Quantum Amplifier.
(Casting lots will do, if you live in a low-tech society.)
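
(A minimal sketch of that move, in Python; the library pseudo-random
generator stands in here for the quantum amplifier or the casting of
lots, and the names are illustrative only:)

import random

# When the ranking function cannot separate the options, break the tie
# at random instead of stalling like Buridan's ass.
def choose(options, estimated_value):
    best = max(estimated_value(o) for o in options)
    tied = [o for o in options if estimated_value(o) == best]
    return random.choice(tied)

print(choose(["the lady", "the tiger"], lambda door: 0))  # a coin flip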

Thus, I conclude that a deterministic robot will perceive itself at
a disadvantage relative to a robot who can implement the Axiom of
Choice, and will decide (of its own free will) that it must evolve
to include nondeterministic behavior.

Note, by the way, that decision in the face of uncertainty entails a
risk, so a byproduct of such behavior is anxiety.  In other words,
emotion is the expression of vanishing ignorance.

--Barry Kort

------------------------------

Date: Wed 27 Jul 88 13:53:59-PDT
From: Leslie DeGroff <DEGROFF@INTELLICORP.ARPA>
Subject: root issue of free will and problems in war zones


        The new AI-in-the-war-zone discussion and the ongoing free will
discussion both seem to skirt around one of the fundamental cruxes of
intelligence, natural and artificial (and even of "non-intelligent"
decision-making processes).
   There is a pair of quantities that appear in all decision processes:
one is the information/knowledge in the "system/agent/individual", and
the other is the scale and variation of the universe to be modeled.  For
the real world the latter is always much, much greater than the former:
Universe >> some subsystem.  Even if we take out infinities, there is
this many-orders-of-magnitude scale problem.  This inequality holds
regardless of how faithfully the internal representation matches the
external "facts".  Engineers, programmers, and line managers get their
noses rubbed in this fact pretty often (but perhaps not often enough to
prevent horrible/scary/dumb mistakes from being made).  This ratio more
or less means that systems working in the real world can always be
surprised and/or make mistakes.  The universe does have regularities
that allow the causal and structural mapping of a smaller "mind" or
"representation" to cover a lot of ground, but it also remains filled
with places where you need to know the specifics to know what is
happening.  Even the simple Newtonian physics of multiple orbiting
bodies becomes a combinatorial problem very quickly.
   In regard to the war zone, we have a similar case (the Russians and
KAL) which had no particular computer component... just missed or
missing communications/information and a human decision.  There is a
limit to the precision and availability of knowledge, and an even lower
limit to the amount of processing that can be done.  The universe and
Murphy will get us every time we start thinking "it's ALL under control".
        Related to this fundamental fact is that in many cases "WILL"
turns out to be a concept used by humans to represent the immediate
uncomputability/unpredictability of pieces of the real universe,
including our own actions and consciousness.  I find WILL a much more
productive concept to contemplate than FREE WILL.  I can be
scientifically educated and still talk and think of inanimate objects
like a truck or a storm as having willful behavior.  Even simple
physical systems with unsensed or unpredictable variability will often
be treated as if decisions are being made: will my door handle give me
a static spark today?  Much of the discussion on determinism vs.
nondeterminism simply misses the point that neither our brains nor our
computers will be able to "compute" in real time all that might be of
importance to a given situation, and no realistic set of sensors can
gather enough information to be "complete".
      From an AI perspective these issues are at the heart of the
hardness of the problems: how can we have an open-ended learning system
without catatonic behavior (computation of all derivations from an
ever-increasing fact base)?  And what kind of knowledge representation
is efficient for learning from sensors, effective at cutting off
computation so that time-critical decisions can be made, and able to
recognize when the knowledge it contains doesn't apply (the classic
case of the potential infinity of negations)?
(Trick question for the brain modelers: does sleep act like a Lisp
garbage collector, i.e., is part of the sleep process an elimination of
material that is not to be stored and a reorganization of the rest of
the material?)
      Much of applied statistics and measurement theory is oriented to
METRICs for comparing systems and models and determining "predicts
correctly and fails to predict" where the models are parametric
equations.  The question is how to evaluate a model for "surprise
potential" or "unincluded critical factors".
Les Degroff   DeGroff@intellicorp.com






(I disclaim all blame, I aint paid to think but I have this bad habit,
 neither parents, schools or employers have been able to cure it)

------------------------------

Date: 28 Jul 88 18:20:16 GMT
From: umix!umich!eecs.umich.edu!itivax!dhw@uunet.UU.NET (David H.
      West)
Subject: Re: free will


In a previous article, John McCarthy writes:
> Let me
> suggest that AI people concentrate their attention on the question of how
> a deterministic robot should be programmed to reason about its own free
> will, as this free will relates both to its past choices and to its future
> choices.  Can we program it to do better in the future than it did in the
> past by reasoning that it could have done something different from what it
> did, and this would have had a better outcome?  If yes, how should it be
> programmed?  If no, then doesn't this make robots permanently inferior to
> humans in learning from experience?

At time t0, the robot has available a partial (fallible) account of:
the world-state, its own possible actions, the predicted
effects of these actions, and the utility of these
effects.  Suppose it wants to choose the action with maximum
estimated utility, and further suppose that it can and does do this.
Then its decision is determined.  Free-will (whatever that is)
is merely the freedom to do something that doesn't maximize its
utility, which is ex hypothesi not a freedom worth exercising.

At a later time t1, the robot has available all of the above, plus
the outcome of its action.  It is therefore not in the same state as
previously. It would make no sense to ignore the additional
information.  If the outcome was as expected, then there is no
reason to make a different choice next time unless some other
element of the situation changes.  If the outcome was not as
predicted, the robot needs to update its models.  This updating is
another choice-of-action-under-incomplete-information problem, so
again the robot can only maximize its own fallibly-estimated
utility, and again its behavior is determined, not (just) by its
physical structure, but by the meta-goal of acting coherently.
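
(Read procedurally, the two steps above amount to something like the
following minimal sketch in Python; every function name here is a
hypothetical stand-in for the robot's fallible models:)

# t0: pick the action with maximum estimated utility; t1: revise the
# models only when the observed outcome departs from the prediction.
def decide_and_learn(actions, predict, utility, execute, update_models):
    action = max(actions, key=lambda a: utility(predict(a)))
    outcome = execute(action)
    if outcome != predict(action):         # a surprise
        update_models(action, outcome)
    return action, outcome

# Toy usage: two actions with known payoffs and a perfect predictor.
payoff = {"wait": 0, "act": 1}
decide_and_learn(["wait", "act"], predict=payoff.get, utility=lambda u: u,
                 execute=payoff.get, update_models=lambda a, o: None)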

If the robot thought about its situation, it would presumably
conclude that it felt no impediment to doing what was obviously the
correct thing to do, and that it therefore had free will.

-David West        dhw%iti@umix.cc.umich.edu

------------------------------

Date: 29 Jul 88 18:18:48 GMT
From: well!sierch@lll-lcc.llnl.gov  (Michael Sierchio)
Subject: Re: How to dispose of naive science types (short)


Theories are not for proving!

A theory is a model, a description, an attempt to preserve and describe
phenomena -- science is not concerned with "proving" or "disproving"
theories. Proof may have a slightly different meaning for attorneys than
for mathematicians, but scientists are closer to the mathematician's
definition -- when they use the word at all.

A theory may or may not adequately describe the phenomena in question, in
which case it is a "good" or "bad" theory -- of two "good" theories, the
theory that is "more elegant" or "simpler" may be preferred -- but this
is an aesthetic or performance judgement, and again has nothing to do with
proof.

Demonstration and experimentation show (to one degree or another) the value
of a particular theory in a particular domain -- but PROOF? bah!
--
        Michael Sierchio @ Small Systems Solutions

        sierch@well.UUCP
        {pacbell,hplabs,ucbvax,hoptoad}!well!sierch

------------------------------

End of AIList Digest
********************

∂02-Aug-88  1541	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #35  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 2 Aug 88  15:41:34 PDT
Date: Tue  2 Aug 1988 18:19-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #35
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 3 Aug 1988      Volume 8 : Issue 35

 Queries:

  MACSYMA
  English grammar
  Data fusion and correlation expert systems
  Ornithology as an AI domain?
  Bridge bidding/playing expert system
  Response to - Manchester Cognitive Science Course

----------------------------------------------------------------------

Date: Wed, 27 Jul 88 13:50:26 +0200
From: Johan Buelens <FGCBA11%BLEKUL11.BITNET@MITVMA.MIT.EDU>
Subject: MACSYMA

Does anyone know about an ES called MACSYMA?

All information about the product and its (potential) uses is welcome.

Johan.

/ / / / / / / / / / /
/ Johan BUELENS
/ KUL / Dept. Scheikunde / Celestijnenlaan  200 F / B - 3030  Heverlee
/ tel. (+32) (16) 20 06 56 ext. 3595
/ e-mail : fgcba11@blekul11.bitnet
/          mzzzc13@blekul21.bitnet
/


[Editor's Note:

        Although MACSYMA was considered an AI program when it was first
being written, many people today would say that it does not really fit
into the category of 'expert systems', since too much of its knowledge
is represented procedurally.

        One version is available from Symbolics, Inc. (617) 621-7770.

        Another version, considerably cheaper (I am told), comes from the
Department of Energy (DOE).

        A good intro to MACSYMA's capabilities is available from the
Naval Underwater Systems Center, Newport RI 02840 as 'Technical Document
6401'.


                - nick]

------------------------------

Date: Sun, 31 Jul 88 17:16:44 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: English grammar


     I understand that there is an approach to English grammar based on
the following assumptions.

      1.  There are four main categories of words, essentially nouns,
          verbs, adjectives, and adverbs.  These categories are
          extensible; new words can be added.

      2.  There are about 125 "special" words, not in one of the four
          main categories.  This list is essentially fixed.  (New
          nouns appear all the time, but new conjunctions and articles
          never.)

Does anyone have a reference to this, one that lists all the "special"
words?
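[Editorial illustration, not part of the query.  The split being described is
the usual open-class / closed-class distinction; the handful of "special"
words below are examples only, not the list of roughly 125 that John is
asking for.

    OPEN_CLASSES = {"noun", "verb", "adjective", "adverb"}   # freely extensible

    CLOSED_CLASS = {                       # essentially fixed inventories
        "article":     {"a", "an", "the"},
        "conjunction": {"and", "or", "but", "if", "because"},
        "preposition": {"of", "in", "on", "to", "with"},
        "pronoun":     {"i", "you", "he", "she", "it", "we", "they"},
    }

    def is_special(word):
        """True if the word belongs to one of the (closed) special classes."""
        return any(word.lower() in members for members in CLOSED_CLASS.values())
]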

                                        John Nagle

------------------------------

Date: Mon, 1 Aug 88 08:26:40 EDT
From: sharon%mwcamis@mitre.arpa
Subject: Data fusion and correlation expert systems

--------

Does anyone have information about expert systems for the correlation
and fusion of data or situation monitoring and evaluation that have
been built OUTSIDE of the U.S.??     Thanks,

                                Sharon Laskowski

                                laskowsk@mitre.arpa

------------------------------

Date: 1 Aug 88 17:58:35 GMT
From: paul.rutgers.edu!pratt@rutgers.edu  (Lorien Y. Pratt)
Subject: Ornithology as an AI domain?

I am a PhD student in computer science at Rutgers and will do my thesis
in AI.  I am also a bird watcher and am concerned with environmental issues.
This is a long shot, but does anyone know of any problems in ornithology
which are in need of an AI approach?  I am particularly interested in
problems relating ornithology to larger ecological issues, and would
also be interested in pointers to NJ area people who might be willing to talk
with me.  Thanks!
--
-------------------------------------------------------------------
Lorien Y. Pratt                            Computer Science Department
pratt@paul.rutgers.edu                     Rutgers University
                                           Busch Campus
(201) 932-4714                             Piscataway, NJ  08854

------------------------------

Date: Tue, 2 Aug 88 08:18 EDT
From: "David S. Gibson" <DSGibson@DOCKMASTER.ARPA>
Subject: Bridge bidding/playing expert system


     Does anyone know of any public domain expert systems that bid
and/or play hands of contract bridge?  I would like to get the source
code for such a system, preferably written in Lisp or Prolog.  Any
pointers would be greatly appreciated.

David Gibson

EMAIL:  DSGibson@DOCKMASTER.ARPA

------------------------------

Date: Tue, 2 Aug 88 15:38:39 BST
From: Ian Pratt
      <ipratt%research2.computer-science.manchester.ac.uk@NSS.Cs.Ucl.A
      C.UK>
Subject: Response to - Manchester Cognitive Science Course


My apologies for the rather terse notice I originally sent out. Herewith a
fuller advertisement.

Manchester University offers a one-year MSc programme in Cognitive Science.
The first two terms consist of taught courses in the following areas:
        Artificial Intelligence (2  one-term courses)
        Topics in Cognitive Psychology
        Psycholinguistics
        Theoretical Linguistics (2 one-term courses)
        Computational Linguistics
        Psychology of Vision
        Computer Vision (1 one-term course + 1/2 term course on relevant math)
        Human-Computer Interaction
The third term (and summer `vacation') is devoted to extended projects. These
projects may be theoretical, experimental (e.g. in cognitive psychology)
or programming projects; however, the hope is that students' projects will
draw on several of the contributing disciplines.

In addition, there is a series of seminars to discuss philosophical and
foundational issues, to which staff and students contribute.

The programme is heavily computational: students will be expected to master
at least prolog and pascal, as well as other languages if needed for projects.
There is also a considerable bias towards computer vision and computational
linguistics.

The programme should prove suitable to students with good honours degrees in
psychology, philosophy, mathematics, natural science, computer science and
linguistics. We expect that all students will arrive already possessing a
reasonable facility in one or two of the taught subjects; the workload is
set accordingly.  (The backgrounds of next year's intake of 15 students
are pretty evenly distributed over the above disciplines.)

For details, contact:
                Dr. Ian Pratt,
                Department of Computer Science,
                The University of Manchester,
                Manchester, M13 9PL,
                United Kingdom.

------------------------------

End of AIList Digest
********************

∂03-Aug-88  1704	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #36  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Aug 88  17:03:54 PDT
Date: Wed  3 Aug 1988 15:10-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest   V8 #36
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 4 Aug 1988      Volume 8 : Issue 36

 Mathematics and Logic:

  Non-r.e. systems, Godel, and Zermelo
  Self-reference and the Liar
  Re: undecidability

----------------------------------------------------------------------

Date: Fri, 29 Jul 88 21:45 CDT
From: <CMENZEL%TAMLSR.BITNET@MITVMA.MIT.EDU>
Subject: Non-r.e. systems, Godel, and Zermelo

In AIList vol. 8, no. 29 (July 29, 1988), Herman Rubin writes:

> I know of no mathematical system in which the objects and axioms are not
> recursively enumerable.

I'm not sure what Rubin means by a mathematical system here, since the notion
of an object suggests systems are mathematical structures like the natural
numbers or the finite sets, while the notion of an axiom suggests systems are
axiomatic theories.  There is a counterexample to his claim in either case.  If
he means the former, consider the real numbers.  Since no uncountable set is
r.e., the set of reals isn't.  If you want a countable set, consider the set of
Godel numbers of sentences of the first-order language of arithmetic that are
true in the natural numbers (relative to some coding for the language). Godel's
theorem says that this set is not r.e.  For a non-r.e. axiomatic theory, take
as axioms the set of the above true sentences of arithmetic.  Same result.
Granted the theory ain't good for much; but that's another kettle of fish.
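[Editorial restatement of the two counterexamples, in standard notation that
does not appear in the posting:

    \[ \mathbb{R} \text{ is uncountable, hence not r.e.}; \qquad
       \mathrm{Th}(\mathbb{N}) = \{\, \ulcorner\varphi\urcorner : \mathbb{N} \models \varphi \,\}
       \text{ is not r.e. (G\"odel/Tarski).} \]

Taking Th(N) itself as the axiom set then gives an axiomatic theory whose
axioms are not recursively enumerable. ]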

> A Turing machine can do all mathematics in principle.

Certainly you don't mean that every mathematical truth is provable; cf. Godel
again.  But if not, what?  Zermelo constructed an apparently--but not provably
(Godel yet again)--consistent, and very powerful, set of axioms for set theory.
Surely he was doing mathematics.  Now write me a program for generating set
theoretic axioms that avoid the paradoxes of naive set theory, and preserve
arithmetic, classical analysis, and transfinite number theory.


Chris Menzel
Dept. of Philosophy/Knowledge Based Systems Lab
Texas A&M University

BITNET:  cmenzel@tamlsr
ARPANET: chris.menzel@lsr.tamu.edu

------------------------------

Date: Sat, 30 Jul 88 17:09 CDT
From: <CMENZEL%TAMLSR.BITNET@MITVMA.MIT.EDU>
Subject: Self-reference and the Liar

In AIList vol. 8 no. 29 Bruce Nevin provides the following analysis of the liar
paradox arising from the sentence "This sentence is false":

> The syntactic nexus of this and related paradoxes is that there is no
> referent for the deictic phrase "this sentence" at the time when it is
> uttered, nor even any basis for believing that the utterance in progress
> will in fact be a sentence when (or if!) it does end.  A sentence cannot
> be legitimately referred to qua sentence until it is a sentence, that
> is, until it is ended.  Therefore, it cannot contain a legitimate
> reference to itself qua sentence.

There are type-token problems here, but never mind.  If what Nevin says
is right, then there is something semantically improper in general about
referring to the sentence one is uttering; note there is nothing about the
liar per se that appears in his analysis.  If so, however, then there is
something semantically improper about an utterance of "This sentence is
in English", or again, "This sentence is grammatically well-formed."  But
both are wholly unproblematic, aren't they?  Wouldn't any English speaker
know what they meant?  It won't do to trash respectable utterances like
this to solve a puzzle.

Nevin's analysis gets whatever plausibility it has by focusing on *English
utterances*, playing on the fact that, in the utterance of a self-referential
sentence, the term allegedly referring to the sentence being uttered has no
proper referent at the time of the term's utterance, since the sentence isn't
yet all the way out of the speaker's mouth.  But, first, it's just an accident
that noun phrases usually come first in English sentences; if they came last,
then an utterance of the liar or one of the other self-referential sentences
above would be an utterance of a complete sentence at the time of the utterance
of the term referring to it, and hence the term would have a referent after
all.  Surely a good solution to the liar can't depend on anything so contingent
as word order in English.  Second, the liar paradox arises just as robustly for
inscriptions, where the ephemeral character of utterances has no part.  About
these, though, Nevin's analysis has nothing to say.  A proper solution must
handle both cases.

Recommended reading:  R. L. Martin, {\it Recent Essays on Truth and the Liar
                           Paradox}, Oxford, 1984.
                      J. Barwise and J. Etchemendy, {\it The Liar:  An Essay
                           on Truth and Circularity}, Oxford, 1987.


---Chris Menzel
   Dept. of Philosophy/Knowledge Based Systems Lab
   Texas A&M University

            BITNET:  cmenzel@tamlsr
            ARPANET: chris.menzel@lsr.tamu.edu

------------------------------

Date: 2 Aug 1988  10:43 EDT
From: pyuxf!asg
Subject: Re: undecidability


In a previous article, John B. Nagle writes:
> Goetz writes:
> >                              Goedel's Theorem showed that you WILL have an
> > unbounded number of axioms following the method you propose. That is why
> > most mathematicians consider it an important theorem - it states you can
> > never have an axiomatic system "as complex as"
> > arithmetic without having true statements which are unprovable.
>       Always bear in mind that this implies an infinite system.  Neither
> undecidability nor the halting problem apply in finite spaces.  A
> constructive mathematics in a finite space should not suffer from either
> problem.  Real computers, of course, can be thought of as a form of
> constructive mathematics in a finite space.
>       There are times when I wonder if it is time to displace infinity from
> its place of importance in mathematics.  The concept of infinity is often
> introduced as a mathematical convenience, so as to avoid seemingly ugly
> case analysis.  The price paid for this convenience may be too high.
>       Current thinking in physics seems to be that everything is quantized
> and that the universe is of finite size.  Thus, a mathematics with infinity
> may not be needed to describe the physical universe.
>       It's worth considering that a century from now, infinity may be looked
> upon as a mathematical crutch and a holdover from an era in which people
> believed that the universe was continuous and developed a mathematics to
> match.
>                                       John Nagle


Actually, infinity arises in basic set theory, long before any notion
of 'finite space' is introduced (viewing mathematics as an inverted
pyramid, from lowest-level set theory and logic up).  Two axioms suffice
to introduce infinity:  the axiom of the null set, which says that there
exists a set 0, which is empty; and the axiom of construction (usually
called pairing, or whatever you prefer to call it), which says that if a and b
are sets, then so is {a, b}.  These two axioms allow one to construct
0, {0}, {{0}}, etc., which is an infinite sequence.  In fact, it is possible
to create models of set theory which are constructed using only sets of
this form.
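[Editorial sketch, not part of Alan's posting: the construction 0, {0}, {{0}},
... can be mimicked directly with Python frozensets (names mine).

    def nested_sets(n):
        """Return [0, {0}, {{0}}, ...] up to depth n, with 0 the empty set."""
        s = frozenset()                  # the null set
        out = [s]
        for _ in range(n):
            s = frozenset({s})           # {s}, i.e. pairing s with itself
            out.append(s)
        return out

    for s in nested_sets(3):
        print(s)                         # frozenset(), frozenset({frozenset()}), ...

Each element of the sequence is distinct, which is all the argument above
needs. ]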

In physics, 'quantization' does not mean 'granularization', despite
the popular understanding that this is so.  While there are physicists
who work on theories of granular space, mainstream quantum physics
interprets space as a continuum.  Indeed, even quantized measurables
such as energy levels are seen as selected values 'chosen' out of
a continuum by being the eigenvalues of some operator.

Also, the notion that the universe is finite is still controversial;
while most cosmologists seem to believe that the universe is closed
(i.e., finite), there is still no experimental evidence to support
this view (this is why cosmologists talk about the 'missing mass',
which is needed to close the universe gravitationally; nobody's found
it yet).

Alan Geller
Bellcore

Nobody at Bellcore takes me seriously.

------------------------------

End of AIList Digest
********************

∂03-Aug-88  1956	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #37  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Aug 88  19:48:36 PDT
Date: Wed  3 Aug 1988 17:55-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #37
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 4 Aug 1988      Volume 8 : Issue 37

 Queries:

  Moral Sciences?
  Attendees of ECCE
  Church's Y-operator
  D.Goldberg Address

----------------------------------------------------------------------

Date: 2 Aug 88 20:18:42 GMT
From: sdcc6!calmasd!jnp@ucsd.edu  (John Pantone)
Subject: Moral Sciences?


Re: the recent Kyoto prizes. (Japanese "Nobel"s)

I notice that one category was Creative Arts and Moral Sciences.
I understand the Creative Arts part - but I cannot imagine what Moral
Sciences could mean.  Would someone who knows what the Kyoto
prize-givers are describing please enlighten me?

Please e-mail.

--
These opinions are solely mine and in no way reflect those of my employer.
John M. Pantone @ GE/Calma R&D, 9805 Scranton Rd., San Diego, CA 92121
...{ucbvax|decvax}!sdcsvax!calmasd!jnp   jnp@calmasd.GE.COM   GEnie: J.PANTONE

------------------------------

Date: 2 Aug 88 21:27:22 GMT
From: mcvax!unido!cosmo!JS%cosmo.UUCP@uunet.uu.net  (Juergen Seeger)
Subject: Attendees of ECCE

Has anyone participated in the ECCE-Conference in Switzerland
last weekend?
If so, please send a message to
JS@cosmo

------------------------------

Date: Wed, 3 Aug 88 09:16 EDT
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: Church's Y-operator

In the new >Lisp and Symbolic Computation< vol.1, no.1 Gabriel and Pitman
make reference to "the Y operator" (p.85).  There is also a reference
to it in a footnote in "The Art of the Interpreter" by Steele and Sussman
where they supply a pointer to McCarthy "History of LISP", ACM SIGPLAN
Notices, Aug. 78.  McCarthy refers to "Church's Y-operator".  I've been
scanning through Church's >The Calculi of Lambda-Conversion< but have
been unable to find any mention of it (alas, Church has no index).  Can
anyone help direct me to where this is originally discussed by Church?
Perhaps it appears in some other work of Church?  FYI: The Y operator,
defined in a Scheme-like language is:

  (defun y (f)
   ((lambda (g) (lambda (h) ((f (g g)) h)))
    (lambda (g) (lambda (h) ((f (g g)) h)))))

Interesting, huh?  You can use it to implement recursive procedures
even when your interpreter does not explicitly support recursion.
Thus, to calculate 6! recursively, it could be invoked as

  ((y (lambda (fn)
       (lambda (x)
        (if (zerop x) 1 (* x (fn (- x 1)))))))
   6)
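[Editorial note, not part of Kurt's message: the same definition translates
directly into Python, where the eta-expanded inner lambda again delays the
self-application so that it works under applicative order.

    Y = lambda f: (lambda g: (lambda h: f(g(g))(h)))(
                  lambda g: (lambda h: f(g(g))(h)))

    fact = Y(lambda fn: lambda x: 1 if x == 0 else x * fn(x - 1))
    print(fact(6))    # 720, with no explicitly recursive definition anywhere
]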

-Kurt Godden
 godden@gmr.com

------------------------------

Date: Wed, 03 Aug 88 12:28:40
From: Perfecto Herrera Boyer <D4PBPHB2%EB0UB011.BITNET@MITVMA.MIT.EDU>
Subject: D.Goldberg Address

Dear Colleagues:

      Could anybody send me the address of Dr. D. Goldberg?  The only
      available information I possess to identify him is that his Ph. D.
      dissertation was called "Computer-aided gas pipeline operation
      using genetic algorithms and rule learning" (1983 at the University
      of Michigan). (I am interested in receiving that thesis). Please,
      send the address to my e-mail address D4pbphb2@eb0ub011.


                                             Thank you.

------------------------------

End of AIList Digest
********************

∂04-Aug-88  2130	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #38  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 4 Aug 88  21:30:45 PDT
Date: Fri  5 Aug 1988 00:13-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #38
To: AIList@AI.AI.MIT.EDU


AIList Digest             Friday, 5 Aug 1988       Volume 8 : Issue 38

 Today's Topics:

  Dual encoding, propositional memory and the epistemics of imagination
  Are all Reasoning Systems Inconsistent?
  AI and the future of the society
  global self reference

----------------------------------------------------------------------

Date: Tue, 26 Jul 88 10:10:40 BST
From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Dual encoding, propositional memory and the epistemics of
         imagination


>Now think of all the
>other stuff your episodic memory has to be able to represent.  How is this
>representing done?  Maybe after a while following this thought you will begin
>to see McCarthys footsteps on the trail in front of you.
>
>Pat Hayes

Watch out for the marsh two feet ahead though :-)
Computationalists who are bound to believe in propositional
representations (yes, I encode all my knowledge of a scene into
little FOPC like tuples, honest) have little time for dual (or more)
coding theories of memory.

The dual coding theory, which normally distinguishes between iconic
and semantic memory, has caused endless debate, more often than not
because of the tenacity of researchers who MUST, rationally or
otherwise, believe in a single propositional encoding, or else admit
limitations to computational paradigms.

Any competent text book on cognitive psychology, and especially ones
on memory, will cover the debate on episodic, iconic and semantic
memory (as well as short term memory, working memory and other
gatherings of angels in restricted spaces).  These books will lay
several trails in other directions to McCarthy's.  The barbeque spots
on the way are better too.

Pat's argument hinges on the demand that we think about something
called representation (eh?) and then describe the encoding.  The
minute you are tricked into thinking about bit level encoding
protocols, the computationalists have you.  Sure enough, the best
thing you can imagine is something like formal logic.  PDP networks
will work, of course, but you can't IMAGINE the contents of
the network, and thus they cannot be a representation :-)

Since when did reality have anything to do with the quality of our
imagination, especially when the imaginands are rigged from the outset?
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: Tue, 26 Jul 88 10:42:06 EDT
From: mclean@nrl-css.arpa (John McLean)
Subject: Are all Reasoning Systems Inconsistent?

In AIList vol 8. issue 23, Jonathan Leivent presents the following argument
where P(x) asserts that the formula with Godel number x is provable
and the Godel number of S is n where S = (P(n) -> A):

   >1.  S = (P(n) -> A)
   >2.  P(n) -> S
   >3.  P(n) -> (P(n) -> A)
   >4.  P(n) -> P(n)
   >5.  P(n) -> (P(n) ∧ (P(n) -> A))
   >6.  (P(n) ∧ (P(n) -> A)) -> A
   >7.  P(n) -> A
   >8.  S
   >9.  P(n)
   >10. A

What Jonathan is pointing out was proven by Tarski in the 30's:  a theory
is inconsistent if it contains arithmetic and has the property that for
all formulae A we can prove P("A") --> A, where "A" is the Godel number
of A.  [Tarski actually proved the theorem for any predicate T such that
T("A") <--> A, but it is easy to show that the provability predicate P
has the property that A --> P("A").]  This is not so strange if we
realize that P(n) is really an existential formula (Ex)Bew(x,n),
where Bew(x,y) is derivable iff x is the Godel number of a proof whose
last line is the formula whose Godel number is y.  It follows that if y
is the Godel number of a theorem then Bew(x,y) is derivable and hence,
so is P(y) by existential generalization.  However, the converse is false.
(Ex)Bew(x,y) may be derivable when the formula corresponding to y is not.
In other words, arithmetic is not omega-complete.  This does not affect our
proof theory, however, beyond showing that we cannot have a general proof
rule of the form P("A") --> A.  We can assert P("A") --> A as a theorem
only when we derive it from the basic theorems of number theory and logic.
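[Editorial note, not part of John's message: the provability-predicate version
of this point is usually packaged as L\"ob's theorem, which in standard
notation reads

    \[ \text{if } T \vdash P(\ulcorner A \urcorner) \rightarrow A
       \text{ then } T \vdash A . \]

So a theory containing enough arithmetic in which P("A") --> A is provable
for *every* A proves every A, i.e. is inconsistent. ]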

John McLean

------------------------------

Date: Wed, 27 Jul 88 15:55 O
From: Antti Ylikoski tel +358 0 457 2704
      <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: AI and the future of the society

I once heard an (excellent) talk by a person working with Symbolics.
(His name is Jim Spoerl.)

One line by him especially remained in my mind:

"What we can do, and animals cannot, is to process symbols.
(Efficiently.)"


In the human brain, there is a very complicated real-time symbol
processing activity going on, and the science of Artificial
Intelligence is in the process of getting to know and to model this
activity.

A very typical example of the human real-time symbol processing is
what happens when a person drives a car.  Sensory input is analyzed
and symbols are formed of it: a traffic sign; a car driving in the
same direction and passing; the speed being 50 mph.  There is some
theory building going on: that black car is in the fast lane and
drives, I guess, some 10 mph faster than me, therefore I think it's
going to pass me after about half a minute.  To a certain extent, the
driver's behaviour is rule-based: there is for example a rule saying
that whenever you see a red traffic light in front of you you have to
stop the car.  (I remember someone said in AIList some time ago that
rule-based systems are "synthetic", not similar to human information
processing.  I disagree.)


How about a very wild 1984-like fantasy: if there were people who knew
the actual functioning of the human mind as a real-time symbol
processor very well, then they would have unbelievable power over the
souls of the poor, ignorant people, whose feelings and thoughts could
be manipulated without them being aware of it.  (Is this kind of thing
being done in the modern world or is this mere wild imagination?
Listen to the Californian band Death Angel and their piece Mind
Rape, on the LP Frolic through the Park!)  And, of course, anyone
possessing this kind of knowledge certainly would do everything in his
power to prevent others from inventing it ...  and someone might make
it public to prevent minds from being manipulated ... oh, isn't this
farfetched.


Whether that fantasy of mine is interesting or just plain ridiculous,
it is a fact that AI opens frightening possibilities for those who
want to use the knowledge involving the human symbol processor as a
tool to manipulate minds.

Perhaps we will have to take care that AI will lead the future of the
human race to a democratic, not to a 1984-like society.


--- Andy Ylikoski

Disclaimer: just don't take me too seriously.

------------------------------

Date: Thu, 4 Aug 88 19:19:10 PDT
From: kk@Sun.COM (Kirk Kelley)
Subject: global self reference

To those who are familiar with literature on self reference,
I am curious about the theoretical nature of global self reference.
Consider the following question.

What is the positive rate of change to all models of the fate of that
rate?

Assume a model can be an image or a collection of references that
represent some phenomenon.  A model of the fate of a phenomenon is a
model that can be used to estimate the phenomenon's lifetime.  A
change to the model is any edit that someone can justify, to users of
the model, improves the model's validity.  A positive rate of change
to a model is a rate such that each discrete unit of time contains
at least one change to the model.  Hence, if the rate goes to 0, it
is the end of the lifetime of that positive rate.

Surely an interesting answer to this question falls in the realm of
what might be called global self reference: a reference R that refers
to all references to R.  In our case, an interesting answer would be
a model M of all models that model M.

I have implemented such a model as a game that runs in STELLA (on the
Mac).  I have played with it as a foundation for analyzing decisions
in the development of emerging technologies such as published
hypertext, and such as itself.  So I have some practical experience
with its nature.  My question is, what is the theoretical nature of
global self reference?  What has been said about it?  What can
be said about it?

I can show that the particular global self reference in the question
above has the curious property of including anything that attempts
to answer it.

 -- kirk

------------------------------

End of AIList Digest
********************

∂06-Aug-88  2033	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #39  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 6 Aug 88  20:32:40 PDT
Date: Sat  6 Aug 1988 23:16-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #39
To: AIList@AI.AI.MIT.EDU


AIList Digest             Sunday, 7 Aug 1988       Volume 8 : Issue 39

 Mathematics and Logic:

  Re: undecidability
  Liar's paradox, AI .vs. human error
  Are all Reasoning Systems Inconsistent?
  Re: Self-reference and the Liar
  paradox and metalanguage gaffes

----------------------------------------------------------------------

Date: 4 Aug 88 13:09:36 GMT
From: steve@hubcap.UUCP ("Steve" Stevenson)
Subject: Re: undecidability

From a previous article, by asg@pyuxf.UUCP:
> In a previous article, John B. Nagle writes:
>> Goetz writes:
>>> [Goedel's incompleteness ... unbounded number of axioms]

>>       Always bear in mind that this implies an infinite system.
>>       There are times when I wonder if it is time to displace infinity from
>> its place of importance in mathematics....

> Actually, infinity arises in basic set theory, ...

But isn't this the point?  The nominalists/finitists won't let you get
to that step.  Take for example your mythical perfect(?) computer programmer.
To such a person, the discussion of infinity in any guise is lost: there
just aren't any infinite processes, by almost anybody's definition.
Intuitionists are a little better - only countable infinity is allowed.

The foundational issue is whether or not it is legitimate to propose successor
and related things as bases for mathematics.  That's John's point:
The canon of infinity may not be all that good an idea.
--
Steve (really "D. E.") Stevenson           steve@hubcap.clemson.edu
Department of Computer Science,            (803)656-5880.mabell
Clemson University, Clemson, SC 29634-1906

------------------------------

Date: Thu 4 Aug 88 09:03:31-PDT
From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
Subject: Liar's paradox, AI .vs. human error

1.  Bruce Nevin suggests that the solution to the "liar's paradox" lies in the
    self reference to an incomplete utterance.  How would that analysis apply to
the following pair of sentences?
                  The next sentence I write will be true.
                  The previous sentence is false.

2.  Back when it appeared that the shooting down of the airliner in the Gulf
    was the result of an "AI" system error, there were a series of digests
using the incident as a proof of the dangers of relying on computers to make
decisions.  Now that the latest analysis seems to show that the computer
correctly identified the airliner but that the human operators erroneously
interpreted the results, I look forward to an equally extensive series of
postings pointing out that we should not leave such decisions to fallible
humans but should rely on the AI systems.
---  Or were the previous postings based more on presuppositions and political
considerations than on an analysis of evidence?

------------------------------

Date: Fri, 5 Aug 88 15:52:04 EDT
From: jon@XN.LL.MIT.EDU (Jonathan Leivent)
Subject: Are all Reasoning Systems Inconsistent?


A while ago, I posted a proof I had stumbled upon that seemed
to lead to inconsistencies.  After some more thought about
it, and after reading some replies, I decided to reformulate
it.  Originally, I claimed that the construction of the
sentence S = P(n) -> A leads to a contradiction - what I meant
is that a particular sentence could be constructed in a reasoning
system such that the mere assertion of the existence of that
sentence leads to a contradiction.  I phrased the sentence as
indicated above because I had thought of it while reading
something about Lob's Theorem in a paper by Raymond Smullyan
in the 1986 conf proceedings of Theoretical Aspects of
Reasoning about Knowledge (title: "Logicians who reason about
Themselves" - an interesting paper for people into Godel's
theorem).  Anyway, I decided that the sentence I was interested
in was really:

(En)[P(n) = ~P(n)]

Where P(m) means "m is the Godel number of a theorem in this
reasoning system".  This theorem is actually a corollary of
Godel's theorem - it is proven by constructing the sentence
with Godel number n that satisfies the above theorem.  The thing
that bothers me is that the statement (En)[P(n) = ~P(n)] is
contradictory itself, yet obviously true (by construction).

I'm sorry about the delay in reposting: I was away from work for
a week.

-- Jon Leivent

------------------------------

Date: 5 Aug 88 21:33:47 GMT
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Reply-to: bwk@mbunix (Kort)
Subject: Re: Self-reference and the Liar

While we are having fun with self-referential sentences, perhaps
we can have a go at this one:

        My advice to you is: Take no advice from me,
        including this piece.

(At least the self referential part comes at the end, so that
the listener has the whole sentence before parsing the deictic
phrase, "this piece".)

--Barry Kort

------------------------------

Date: Sat, 6 Aug 88 09:30:16 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: paradox and metalanguage gaffes

In AIList Digest for Thursday, 4 Aug 1988 (Volume 8, Issue 36)
Chris Menzel <CMENZEL%TAMLSR.BITNET@MITVMA.MIT.EDU> writes:

CM>| In AIList vol. 8 no. 29 Bruce Nevin provides the following analysis of the
   | liar paradox arising from the sentence "This sentence is false":

   | > The syntactic nexus of this and related paradoxes is that there is no
   | > referent for the deictic phrase "this sentence" at the time when it is
   | > uttered, nor even any basis for believing that the utterance in progress
   | > will in fact be a sentence when (or if!) it does end.  A sentence cannot
   | > be legitimately referred to qua sentence until it is a sentence, that
   | > is, until it is ended.  Therefore, it cannot contain a legitimate
   | > reference to itself qua sentence.

   | . . . If what Nevin says is right, then there is . . .
   | something semantically improper about an utterance of "This sentence is
   | in English", or again, "This sentence is grammatically well-formed."  But
   | both are wholly unproblematically, aren't they?  Wouldn't any English spkr
   | know what they meant?  It won't do to trash respectable utterances like
   | this to solve a puzzle.

They are not "wholly unproblematical," they engender a double-take kind
of reaction.  Of course people can cope with paradox, I am merely accounting
for the source of the paradox.

CM>| Nevin's analysis gets whatever plausibility it has by focusing on *English
   | utterances*, playing on the fact that, in the utterance of a self-ref
   | sentence, the term allegedly referring to the sentence being uttered has no
   | proper referent at the time of the term's utterance, since the sentence yet
   | isn't all the way of the speaker's mouth. . . .
   | it's just an accident that noun phrases usually come first in English
   | sentences; if they came last, then an utterance of the liar or one of
   | the other self-referential sentences above would be an utterance of a
   | complete sentence at the time of the utterance of the term referring to
   | it, and hence the term would have a referent after all.  Surely a good
   | solution to the liar can't depend on anything so contingent as word
   | order in English.

If I say it in Modern Greek, where the noun followed by deictic can
come last, the normal reading is still for "this" to refer to a nearby
prior sentence in the discourse.  The paradoxical reading has to be
forced by isolating the sentence, usually in a discourse context like
"The sentence /psema ine i frasi afti/, translated literally 'Falsehood
it is the sentence this', is paradoxical because if I suppose that it
is false, then it is truthful, and if I suppose it is truthful, then it
is false." These are metalanguage statements about the sentence.  The
crux of the matter (which word order in English only makes easier to
see) is that a sentence (or any utterance) cannot be a metalanguage
statement about itself--cannot be at the same time a sentence in the
object language (English or Greek) and in the metalanguage (the
metalanguage that is a sublanguage of English or of Greek).

CM>| Second, the liar paradox arises just as robustly for
   | inscriptions, where the ephemeral character of utterances has no part.

When you are reading the words "this sentence" or /frasi afti/ the
thing referred to is not complete as an object for reference until you
have finished reading it and have resolved all the referentials in it.
But to resolve the reference of the deictic "this" or /afti/, the
sentence must be complete.  This is the bind.  The metalanguage
information necessary to understand a sentence must be in that sentence
itself, else it could not be understood.  One may make this
metalanguage information explicit in the form of conjoined
metalinguistic sentences that refer to already-completed *parts* of the
sentence in process, but such conjuncts may not refer to the *whole*
sentence, which includes themselves still in process.

Having read the paradoxical sentence, and in the attempt to resolve the
referentials, one mentally supplies the additional metalinguistic
context indicated above in order to appreciate the paradox.  One
rereads the sentence as object language sentence and rereads it again
as metalanguage sentence, mentally treating them as two tokens with one
referring to the other.  But they are not two, and to act as though
they were is to step on the mental banana peel and take the pratfall of
paradox.

CM>| note there is nothing about the liar per se that appears in his analysis.

I'm sorry, did I promise to say something about the liar paradox?  I
can't find any explicit reference prior to this.  Blair Houghton didn't
mention the liar paradox either.  But since Chris Menzel brings it up,
and since it is closely related. . . .  To appreciate the paradox of
the sentence "I am [always] a liar" one must adduce further contextual
sentences, such as:

"This_0 implies that everything I say is false; this_1 is something I
say therefore this_1 is false; when something_0 is false then the
opposite of that something_0 is true; the opposite of this_1 is
'Everything I say is true'; this_2 is something I say [because it is
implied by . . .]; therefore this_2 is true; furthermore, therefore
this_1 is true [[but this_1 contradicts this_2, TILT!  And the
preceding, this_3, contradicts the prior sentence, this_4:  'Therefore
this_1 is false,' TILT!]]; when something is false . . .

As many have noted, we are looping here, loops which would or could
come to a halt when your (imputed) inference engine comes up with the
metalanguage observations in doubled [[brackets]], but we might never get
that far because within it the portion in single [braces] is also
occasion for an infinite subloop.  As usual, dualism gets you into
a hall of mirrors.  The dizzying effect is the titillating pleasure of
paradox.  (The benefit is or can be a release from dualism, but that
is another tale.)

To repeat one of the points that Chris Menzel ignored, the translation into
logical symbolism as S <=> ~S and the like fails to capture the paradox
precisely because it is unequivocally and only a *separate*
metalanguage statement about the sentence symbolized S, comparing it
with its negation symbolized ~S.  ~S represents a conclusion reached at
a certain point in the loop of metalinguistic conjuncts, and so is part
of the metalinguistic context; <=> is the metalanguage assertion that
they are equivalent.  This formula captures a small part of the
problem.  Try to formulate a metalanguage proposition in logical
formalism such that it is also the object language proposition to which
it refers.  Not only can it not be done it is also improper to do, and
it is that impropriety to which I refer.  (In logical formalisms with
which I am familiar, the metalanguage is separate from the object
language, partly to prevent such errors.  It has been observed that the
ultimate metalanguage for mathematics and logic is the natural language
shared by the mathematicians or logicians.)

The subscripts of course are just a notational convenience.  Natural
language doesn't have a mechanism for addressing particular words by
counting or the like.  It can do it by next adjacency in an
interrupting conjoined sentence, as follows:

 Our old friend Fred--Fred you always liked for his brinksmanship--
 typically carries a dozen unclosed parens in his head when he writes
 anything.

This reduces by elision and other commonplace operations to:

 Our old friend Fred, who you always liked for his
 brinksmanship, typically carries . . .

This reduction from an interrupting subsentence with subordinated
intonation under paratactic conjunction is the source for all the
modifiers.  (There is historical as well as synchronic evidence for
this.  This is formalized in the operator grammar of
construction-reduction theory, see references cited earlier and S.
Johnson's 1987 NYU dissertation implementing an analyzer for
information content.)  Words that cannot be made adjacent in this way
at some point in the construction of a sentence, if necessary by
topicalization as above, showing that the second occurrence carries
very little information, cannot be reduced to pronouns, deictics, and
so on.  The condition of next adjacency, however, is not possible for a
sentence that seems to refer to itself as a whole.  I won't try, but
just to illustrate the problem and display the loops in another form:

  Everything I say is false
 ↑ ('something
 |  ('something' is this present sentence
 |   (the opposite of this present sentence is 'Everything I say
 |                                         |
 +-----------------------------------------+
 |    ('Everything I say' includes this present sentence
 +-----which is
 |    )
 |   is true')
 |  )
 | is false' means the opposite of something
 |  ('something' is this present sentence
 |   (the opposite of this present sentence is 'Everything I say
 |                                         |
 +-----------------------------------------+
 |    ('Everything I say' includes this present sentence
 +-----which is
      )
     is true')
    )
   is true')

One never can finish resolving the deictics.  The reason again is that
the metalanguage information necessary to understand a sentence must
obviously be in that sentence itself, and finitely, else it could not
be understood.  And again, the only way we try to resolve this sentence
and come to see it as paradoxical is to read it repeatedly, taking the
second reading and each even-numbered reading thereafter as a separate
token referring to the prior odd-numbered reading, and when we get
tired of that loop to then turn around and say that these
reading-tokens are not separate, that there is but one sentence.  It
is, as Chris Menzel points out, an error of logical typing to confuse
metalanguage readings with object-language readings.  The pratfall is
to suppose that the sentence can as a whole at one and the same time be
both.  It cannot.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

End of AIList Digest
********************

∂07-Aug-88  2142	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #40  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 7 Aug 88  21:41:47 PDT
Date: Mon  8 Aug 1988 00:22-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #40
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 8 Aug 1988       Volume 8 : Issue 40

 Queries:

  Consistent Labelling Problem
  Response to - Ornithology as an AI domain
  Response to - Gardening ES
  Response to - Dave Goldberg's address
  Response to - RightWriter and Grammatik II (AILIst v8 #27)

----------------------------------------------------------------------

Date: 2 Aug 88 16:14:39 GMT
From: mcvax!ukc!reading!onion!isg.cs.reading.ac.uk!rmb@uunet.uu.net 
      (Rob Bodington)
Subject: Consistent Labelling Problem


I am researching into the Consistent Labelling Problem.
Does anyone have any references on this topic?
I will summarise the response on the net.
Thanks.
Intelligent Systems Group             JANET: Rob.Bodington@reading.ac.uk
Dept. Computer Science
University of Reading
Whiteknights, Berks. U.K

------------------------------

Date: Wed,  3 Aug 88 00:15:03 EDT
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: Response to - Ornithology as an AI domain


Have you considered ethology, rather than ecology?  There is a world
of things to do in that realm, because the traditional models of
animal behavior are based on virtually no methodology.  For example,
Tinbergen and later workers have described hypothetical mechanisms
involved in building bird nests - but no one knows how adequate they
are.  A great research thesis would be to see if the sorts of
computational structures proposed by ethologists could actually build
a respectable nest, using some sensors, robot manipulators, and
suitable behavioral control structures.  I have a couple of students
simulating various sorts of animal behavior, but there is a whole
unexplored universe there.
 - minsky

------------------------------

Date: Fri, 05 Aug 88 16:58:20 GMT
From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
Subject: Response to - Gardening ES

Re. parvis@gitpyr.gatech.edu  and
    otter!ijd@hplabs.hp.com  (Ian Dickinson)

Parvis wanted details of gardening packages, and Ian Dickinson replied
that there was a farming ES by ICI plc.

The system was called COUNSELLOR, and provided advice to farmers via a
ViewData system on the use of fungicides.  It predicted the disease
profile and then recommended sprays throughout the season.  It allowed
the farmer to do what-iffing for cost-benefit analysis.  I worked on it
in the early days, and am pleased to see that it is one of the ESs that
went into use (about 3 years ago) and, I believe, is still being used.

Looking back, I rather wish I had built an ES to advise on organic
farming instead!

For fuller details see

Jones M.J., Crates D.T. (1985) 'Expert Systems and Videotext: an
application in the marketing of agrochemicals' in 'Research and
Development in Expert Systems - proceedings of the Fourth Technical
Conference of the British Computer Society Specialist Group on Expert
Systems' ed. Bramer M.A., Cambridge University Press.

Andrew Basden.

------------------------------

Date: 5 Aug 88 13:47:00 EDT
From: "NRL::MEISENBACHER" <meisenbacher%nrl.decnet@nrl.arpa>
Subject: Response to - Dave Goldberg's address


David Goldberg is now at the University of Alabama
                             Dept of Engineering Mechanics
                             210 Hardaway Hall
                             University, Alabama 35486
        Phone: (205) 348-1618


His dissertation is available from
                Dissertation Abstracts International,
                44(10), 3174b.
                (University Microfilms No. 8402282)

------------------------------

Date: 5 Aug 88 14:17:00 EDT
From: Nahum (N.) Goldmann <ACOUST%BNR.CA@MITVMA.MIT.EDU>
Subject: Response to - RightWriter and Grammatik II (AILIst v8 #27)

In the following text I'll give both packages every chance to
demonstrate their capabilities.

I was using both packages for several years.  Both are strong on
the evaluation and improvement of readibility (sentence length,
average word duration, presence of esoteric words), and not as
strong on suggesting alternative grammar rules (several thousand
rules in each package are obviously not sufficient for a
thorough evaluation of a typical technical text).  User
interfaces in both packages could also be somewhat improved.

I can't comment whether they use expert system technology in
both packages, simply because I do not know how one defines what
is proper ES and what is not.  My impression is that they use
something similar.

As far as readibility evaluation goes, their results are more or
less consistent.  In grammar analysis I found them
complimentary, with very little correlation between their
suggestions for text improvements.

I found both packages useful for two reasons:

    1.   They impose on a writer the discipline of writing prose
         in short sentences, and help to eliminate complex
         worlds, making text more readable.  This, however, is
         important only for a short period, after which an
         average writer does it more or less automatically.

    2.   They are invaluable in dealing with colleagues and
         students of mine, who otherwise would not believe that
         their latest report is totally unreadable (for some
         reason or another everybody believes it when told by
         computer!).

Overall, both packages provide good value for money (about
US$100-125 per package).  I have not seen any announcement on
Grammatik III so far, but when it comes I will probably order
it.

Greetings and love.

Nahum Goldmann

e-mail: <acoust@bnr.CA>
(613)763-2329


Analysis by Grammatik II:

Subject: In response to Robert Dale's message on RightWriter
[#Capitalization              : don't mix cases]
and
Grammatik II (AILIst v8 #
[#Capitalization              : don't mix cases]
27)

In the following text I'll give both packages every chance to
demonstrate
[#Overstated or pretentious   : show or prove]
their capabilities.

I was using both packages for several years.  Both are strong on
the evaluation and improvement of readibility (sentence length,
average word duration, presence of esoteric words), and not as
strong on suggesting alternative grammar rules (several thousand
rules in each package are obviously
[#Hackneyed, Cliche, or Trite : use this word sparingly]
not sufficient for a
thorough evaluation of a typical technical text).  User
interfaces in both packages could also be somewhat improved.

I can't comment whether they use expert system technology in
both packages, simply because I do not know how one defines what
is proper ES and what is not.  My impression is that they use
something similar.

As far as readibility evaluation goes, their results are more or
less
[#Misused often               : use less for nonnumerical quantity, fewer for nu
mber]
consistent.  In grammar analysis I found them
complimentary,
[#Misused often               : this means flattering or free; complementary is
completing]
 with very little correlation between their
suggestions for text improvements.

I found both packages useful for two reasons:

    1.   They impose on a writer the discipline of writing prose
         in short sentences, and help to eliminate complex
         worlds, making text more readable.  This, however, is
         important only for a short period, after which an
         average writer does it more or less
[#Misused often               : use less for nonnumerical quantity, fewer for nu
mber]
automatically.

    2.   They are invaluable in dealing with colleagues and
         students of mine, who otherwise would not believe that
         their latest report is totally unreadable (for some
         reason or another everybody believes it when told by
         computer!).

Overall,
[#Hackneyed, Cliche, or Trite : total or general]
 both packages provide good value for money (about
US$100-125 per package).  I have not seen any announcement on
Grammatik III so far, but when it comes I will probably order
it.

Greetings and love.

Nahum Goldmann

e-mail: <acoust@bnr.C
[#Punctuation                 : add space after punctuation]
A>
(613)763-2329

SUMMARY FOR gram.out                     Suspect problems marked:    9

---------------------------------------------------------------------------
Grade School        High School      College           Graduate School
3  4  5  6  7  8    9  10  11  12    Fr  So  Jr  Sr    +1  +2  +3  +4  PhD
---------------------------*-----------------------------------------------
                           * - Flesch Grade Level (Reading Ease: 55)

Sentence Statistics
   Number of Sentences:   16           Short (< 14 words):    8 (50%)
   Average Length:        18.0 words   Long  (> 30 words):    3 (19%)
   End with ?:           0 ( 0%)     Shortest (#  12):      1 words
   End with !:           1 ( 6%)     Longest  (#   3):     47 words

Word Statistics
   Number of Words:      288           Average Length:        5.0 letters

Special Statistics  (as estimated % of Words or Sentences)
   Passive voice:          0 ( 0% S)   Prepositions:         33 (11% W)


RightWriter Analysis:

Subject: In response to Robert Dale's message on RightWriter and
Grammatik II (AILIst<<*+36. UNUSUAL CAPITALIZATION? *>> v8 #27)

In the following text I'll give both packages every chance to
demonstrate their capabilities.<<*+17. LONG SENTENCE: 28 WORDS
*>>

I was using both packages for several years.  Both are strong on
the evaluation and improvement of readibility (sentence length,
average word duration, presence of esoteric words), and not as
strong on suggesting alternative grammar rules (several thousand
rules in each package are obviously not sufficient for a
thorough evaluation of a typical technical text)<<*+17. LONG
SENTENCE: 47 WORDS *>><<*+31. COMPLEX SENTENCE *>><<*+39. CAN
SIMPLER TERMS BE USED? *>>.  User interfaces in both packages
could also be somewhat improved<<*+21. PASSIVE VOICE: be
somewhat improved *>>.

I can't comment whether they use expert system technology in
both packages, simply because I do not know how one defines what
is proper ES and what is not.<<*+17. LONG SENTENCE: 29 WORDS
*>>  My impression is that they use something similar.

As far as readibility evaluation goes, their results are more or
less consistent.  In grammar analysis I found them
complimentary, with very little correlation between their
suggestions for text improvements.

I found both packages useful for two reasons:

    1.   They impose on a writer the discipline of writing prose
         in short sentences, and help to eliminate<<*+7. REPLACE
         eliminate BY SIMPLER cut out *>> complex worlds, making
         text more readable.<<*+17. LONG SENTENCE: 32 WORDS *>>
         This, however, is important only for a short period,
         after which an average writer does it more or less
         automatically.

    2.   They are invaluable in dealing with colleagues and
         students of mine, who otherwise would not believe that
         their latest report is totally unreadable (for some
         reason or another everybody believes it when told by
         computer!<<*+17. LONG SENTENCE: 36 WORDS *>><<*+31.
         COMPLEX SENTENCE *>>).

Overall, both packages provide good value for money (about
US$100-125 per package).  I have not seen any announcement on
Grammatik III so far, but when it comes I will probably order
it.

Greetings and love.

Nahum Goldmann

e-mail: <acoust@bnr.CA>
(613)763-2329

                        <<** SUMMARY **>>

     OVERALL CRITIQUE FOR: g:cocos.doc

     READABILITY INDEX: 12.12
 Readers need a 12th grade level of education to understand.

       Total Number of Words in Document: 290
       Total Number of Words within Sentences: 285
       Total Number of Sentences:  15
       Total Number of Syllables: 499

     STRENGTH INDEX: 0.32
 The writing can be made more direct by using:
                       - the active voice
                       - shorter sentences
                       - more common words

     DESCRIPTIVE INDEX: 0.92
 The writing style is overly descriptive.
 Many adverbs are being used.

     JARGON INDEX: 0.26

  SENTENCE STRUCTURE RECOMMENDATIONS:
            1. Most sentences contain multiple clauses.
               Try to use more simple sentences.
           14. Consider using more predicate verbs.


                   << WORDS TO REVIEW >>
Review the following list for negative words (N), colloquial
words (C), jargon (J), misspellings (?), misused words (?),
or words which your reader may not understand (?).
      ACOUSTBNR(?)  1                   AILIST(?)  1
     COLLEAGUES(?)  1            COMPLIMENTARY(?)  1
    CORRELATION(J)  1                   DALE'S(?)  1
             ES(?)  1                 ESOTERIC(J)  1
       GOLDMANN(?)  1              GRAMMATIKII(?)  1
   GRAMMATIKIII(?)  1                    NAHUM(?)  1
       READABLE(?)  1              READIBILITY(J)  2
       US100125(?)  1                       V8(?)  1
              << END OF WORDS TO REVIEW LIST >>
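[Editorial note, not part of Nahum's message: neither package documents its
readability formula here, but the indices reported above are in the family of
the standard Flesch measures, which work from the same counts (words,
sentences, syllables) that RightWriter prints in its summary.  A sketch:

    def flesch_reading_ease(words, sentences, syllables):
        return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

    def flesch_kincaid_grade(words, sentences, syllables):
        return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    # RightWriter's counts above: 285 words in 15 sentences, 499 syllables.
    print(flesch_kincaid_grade(285, 15, 499))   # about 12.5, near the reported 12.12
]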

------------------------------

End of AIList Digest
********************

∂07-Aug-88  2358	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #41  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 7 Aug 88  23:57:58 PDT
Date: Mon  8 Aug 1988 00:39-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #41
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 8 Aug 1988       Volume 8 : Issue 41

 Queries:

  Sigmoid transfer function
    (3 responses)
  refs. for stochastic relaxation
    (1 response)

----------------------------------------------------------------------

Date: 4 Aug 88 20:28:13 GMT
From: amdahl!pacbell!hoptoad!dasys1!cucard!aecom!krishna@ames.arpa 
      (Krishna Ambati)
Subject: Sigmoid transfer function


I am looking for a "black box" circuit that has the product transfer
function:

Output voltage = 0.5 ( 1 + tanh ( Input voltage / "Gain" ) )

               = 1 / ( 1 + exp ( -2 * Input voltage / "Gain" ) )

When plotted, this function looks like an elongated S

When IV = - infinity, OV = 0
When IV = + infinity, OV = 1
When IV = 0         , OV = 0.5
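
For what it's worth, the two expressions above are the same function; a
quick numeric check (Python, used here purely as a sketch) confirms the
identity for any value of "Gain":

    import math

    def sigmoid_tanh(v, gain=1.0):
        return 0.5 * (1.0 + math.tanh(v / gain))

    def sigmoid_exp(v, gain=1.0):
        return 1.0 / (1.0 + math.exp(-2.0 * v / gain))

    # The two forms agree to floating-point precision.
    for v in (-10.0, -1.0, 0.0, 1.0, 10.0):
        assert abs(sigmoid_tanh(v) - sigmoid_exp(v)) < 1e-12
        print("IV = %6.1f   OV = %.6f" % (v, sigmoid_exp(v)))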

By the way, this question arose in connection with a neural network
problem.

Thanks.

Krishna Ambati
krishna@aecom.uucp

------------------------------

Date: 6 Aug 88 16:58:05 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Sigmoid transfer function


     Recognize that the transfer function in a neural network threshold unit
doesn't really have to be a sigmoid function.  It just has to look roughly
like one.  The behavior of the net is not all that sensitive to the
exact form of that function.  It has to be continuous and monotonic,
reasonably smooth, and rise rapidly in the middle of the working range.
The hyperbolic-tangent form of the transfer function is really just a
notational convenience.

     It would be a worthwhile exercise to come up with some other forms
of transfer function with roughly the same graph, but better matched to
hardware implementation.  How do real neurons do it?
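
One shape that meets those constraints and is cheap to build is a
piecewise-linear ramp clipped to [0, 1].  The sketch below (Python,
purely for illustration; the sample points are arbitrary) puts it next
to the logistic form for comparison:

    import math

    def logistic(v, gain=1.0):
        # The usual sigmoid, 1 / (1 + exp(-2v/gain)).
        return 1.0 / (1.0 + math.exp(-2.0 * v / gain))

    def ramp(v, gain=1.0):
        # Piecewise-linear stand-in: same slope at the origin, clipped to [0, 1].
        return min(1.0, max(0.0, 0.5 + v / (2.0 * gain)))

    for v in (-3.0, -1.0, -0.5, 0.0, 0.5, 1.0, 3.0):
        print("v = %4.1f   logistic = %.3f   ramp = %.3f"
              % (v, logistic(v), ramp(v)))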

                                        John Nagle

------------------------------

Date: 7 Aug 88 00:04:26 GMT
From: ankleand@athena.mit.edu  (Andy Karanicolas)
Subject: Re: Sigmoid transfer function

In article <1945@aecom.YU.EDU> krishna@aecom.YU.EDU (Krishna Ambati) writes:
>
>I am looking for a "black box" circuit that has the product transfer
>function:
>
>Output voltage = 0.5 ( 1 + tanh ( Input voltage / "Gain" ) )
>
>              = 1 / ( 1 + exp ( -2 * Input voltage / "Gain" ) )
>
>When plotted, this function looks like an elongated S
>
>When IV = - infinity, OV = 0
>When IV = + infinity, OV = 1
>When IV = 0         , OV = 0.5
>
>By the way, this question arose in connection with a neural network
>problem.
>
>Thanks.
>
>Krishna Ambati
>krishna@aecom.uucp

The function you are looking for is not too difficult to synthesize from
a basic analog circuit building block; namely, a differential amplifier.
The accuracy of the circuit will depend on the matching of components, among
other things.  The differential amplifier is discussed in many textbooks
concerned with analog circuits (analog integrated circuits especially).

You can try:

Electronic Principles, Gray and Searle, Wiley 1969.
Bipolar and MOS Analog IC Design, Grebene, Wiley 1984.
Analysis and Design of Analog Integrated Circuits, Gray and Meyer, Wiley 1984.

Unfortunately, drawing circuits on a text editor is a pain; I'll
attempt it if these or other books are not available or helpful.
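
In the meantime, a rough numerical sketch may help: for an ideal bipolar
differential pair, the fraction of the tail current steered into one
collector is 1 / (1 + exp(-Vin/VT)), which is exactly the function asked
for, with "Gain" equal to 2*VT (roughly 52 mV at room temperature).  A
small sketch of that idealized relation (Python here just for
illustration; mismatch, base resistance and the Early effect are ignored,
and the tail current value is made up):

    import math

    VT = 0.02585        # thermal voltage kT/q near room temperature, volts
    I_TAIL = 1.0e-3     # assumed tail current of the pair, 1 mA

    def collector_fraction(v_diff):
        # Fraction of the tail current in one collector of an ideal
        # bipolar differential pair driven by differential voltage v_diff.
        return 1.0 / (1.0 + math.exp(-v_diff / VT))

    for v in (-0.20, -0.05, -0.01, 0.0, 0.01, 0.05, 0.20):
        frac = collector_fraction(v)
        print("Vin = %+.2f V   Ic1 = %.3f mA   (fraction %.3f)"
              % (v, frac * I_TAIL * 1.0e3, frac))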


Andy Karanicolas
Microsystems Technology Laboratory

ankleand@caf.mit.edu

------------------------------

Date: 7 Aug 88 19:55:49 GMT
From: icsia!munro@ucbvax.berkeley.edu  (Paul Munro)
Subject: Re: Sigmoid transfer function

In article <17615@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
[JN]Recognize that the transfer function in a neural network threshold unit
[JN]doesn't really have to be a sigmoid function.  It just has to look roughly
[JN]like one.  The behavior of the net is not all that sensitive to the
[JN]exact form of that function.  It has to be continuous and monotonic,
[JN]reasonably smooth, and rise rapidly in the middle of the working range.
[JN]The hyperbolic-tangent form of the transfer function is really just a
[JN]notational convenience.
[JN]
[JN]   It would be a worthwhile exercise to come up with some other forms
[JN]of transfer function with roughly the same graph, but better matched to
[JN]hardware implementation.  How do real neurons do it?
[JN]
[JN]                                    John Nagle


Try this one :   f(x) = x / (1 + |x|)


It is continuous and differentiable:

f'(x)  =  1 / (1 + |x|) ** 2   =   ( 1 - |f|) ** 2 .
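
A quick finite-difference check of both equalities (Python, purely as a
sketch):

    def f(x):
        return x / (1.0 + abs(x))

    def f_prime(x):
        return 1.0 / (1.0 + abs(x)) ** 2

    h = 1.0e-6
    for x in (-5.0, -1.0, 0.5, 2.0, 10.0):
        numeric = (f(x + h) - f(x - h)) / (2.0 * h)
        assert abs(numeric - f_prime(x)) < 1.0e-6           # matches 1/(1+|x|)^2
        assert abs(f_prime(x) - (1.0 - abs(f(x))) ** 2) < 1.0e-12

Note that f ranges over (-1, 1) rather than (0, 1), so an affine rescale
such as 0.5 * (1 + f(x)) is needed if the usual 0-to-1 outputs are wanted.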

- Paul Munro

------------------------------

Date: 5 Aug 88 18:38:42 GMT
From: tness7!tness1!nuchat!moray!uhnix1!sun2.cs.uh.edu!rmdubash@bellco
      re.bellcore.com
Subject: refs. for stochastic relaxation

I am currently working on stochastic relaxation and relaxation algorithms for
finely grained  parallel  architectures.  In particular, I am  studying their
implementation on neural and connectionist models, with emphasis on  inherent
fault tolerance property of such implementations.

I will be grateful if any of you can provide me with pointers, references etc.
on this ( or related ) topics.

Thanks.




_______________________________________________________________________________
Rumi Dubash, Computer Science, Univ. of Houston,
Internet : rmdubash@sun2.cs.uh.edu
U.S.Mail : R.M.Dubash, Computer Science Dept., Univ. of Houston,

------------------------------

Date: 6 Aug 88 18:02:44 GMT
From: brand.usc.edu!manj@oberon.usc.edu  (B. S. Manjunath)
Subject: Re: refs. for stochastic relaxation

In article <824@uhnix1.uh.edu> rmdubash@sun2.cs.uh.edu () writes:
>I am currently working on stochastic relaxation and relaxation algorithms for
>finely grained  parallel  architectures.  In particular, I am  studying their
>implementation on neural and connectionist models, with emphasis on  inherent
>fault tolerance property of such implementations.
>
>I will be grateful if any of you can provide me with pointers, references etc.
>on this ( or related ) topics.

>Rumi Dubash, Computer Science, Univ. of Houston,

 Geman and Geman (1984) is an excellent paper to start with.  It also
contains a lot of references.  The paper mainly deals with Markov Random
Fields and applications to image processing.

S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and
the Bayesian restoration of images", IEEE Transactions on Pattern
Analysis and Machine Intelligence, PAMI-6, Nov. 1984, pp. 721-742.

Another reference that I feel might be useful is J. L. Marroquin,
"Probabilistic Solution of Inverse Problems", Ph.D. thesis, M.I.T., 1985.
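
For a concrete feel for the stochastic relaxation in that paper: the
basic move is to sweep the field, resampling each site from its
conditional (Gibbs) distribution given its neighbours while slowly
lowering a temperature.  A toy sketch for a binary, Ising-style Markov
random field with no data term (Python, written only as an illustration;
the grid size, coupling and annealing schedule are invented):

    import math, random

    SIZE, BETA = 16, 1.0          # grid size and coupling (made-up values)
    field = [[random.choice((-1, 1)) for _ in range(SIZE)] for _ in range(SIZE)]

    def neighbour_sum(f, i, j):
        # Sum of the four neighbours, with toroidal wrap-around.
        return sum(f[(i + di) % SIZE][(j + dj) % SIZE]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    def gibbs_sweep(f, temperature):
        # One pass of stochastic relaxation: resample every site from its
        # conditional distribution given the current values of its neighbours.
        for i in range(SIZE):
            for j in range(SIZE):
                e = BETA * neighbour_sum(f, i, j)
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * e / temperature))
                f[i][j] = 1 if random.random() < p_plus else -1

    temperature = 4.0
    for sweep in range(100):
        gibbs_sweep(field, temperature)
        temperature *= 0.97           # crude annealing schedule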

bs manjunath.

------------------------------

End of AIList Digest
********************

∂08-Aug-88  2031	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #42  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Aug 88  20:31:03 PDT
Date: Mon  8 Aug 1988 23:07-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #42
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 9 Aug 1988       Volume 8 : Issue 42


  Does AI kill?   Sixth in a series ...

----------------------------------------------------------------------

Date: 26 Jul 88 21:33:29 GMT
From: bph@buengc.bu.edu (Blair P. Houghton)
Reply-to: bph@buengc.bu.edu (Blair P. Houghton)
Subject: Re: Does AI kill?


In a previous article, MORRISET@URVAX.BITNET writes:
>
>  Suppose we eventually construct an artificially intelligent
>  "entity."  It thinks, "feels", etc...  Suppose it kills someone
>  because it "dislikes" them.  Should the builder of the entity
>  be held responsible?
>
You have to keep it strictly "artificial."  I've always thought
of advanced neuromorphic devices as an artificial medium in
which *real* intelligence can occur.  In which case the entity
would be held responsible.

Out of artificial-land:  real children aren't held responsible,
unless they are.

                                --Blair

------------------------------

Date: 27 Jul 88 11:52:16 GMT
From: munnari!goanna.oz.au!jlc@uunet.UU.NET (Jacob L. Cybulski)
Subject: Re: does AI kill?


The Iranian airbus disaster teaches us one thing about "AI Techniques",
and this is that most of the AI companies forget that the end product
of AI research is just a piece of computer software that needs to be
treated like one, i.e. it needs to go through a standard software
life-cycle and proper software engineering principles still apply to
it, no matter how much intelligence is buried in its intestines.

I don't even mention the need to train the system users.

Jacob

------------------------------

Date: 30 Jul 88 03:49:24 GMT
From: uplherc!sp7040!obie!wsccs!dharvey@gr.utah.edu  (David Harvey)
Subject: Re: AI...Shoot/No Shoot

In article <854@lakesys.UUCP>, tomk@lakesys.UUCP (Tom Kopp) writes:
>
> I still don't understand WHY the computers mis-lead the crew as to the type of
> aircraft, especially at that range.  I know that the Military has tested

I don't know what news sources you have been following lately, but it
was NOT the computer that misled the crew as to what kind of airplane
it was.  They were switching the signals between one known to be from
a military aircraft and the one from the civilian plane.
Also, whoever decided that it was not important to determine the size
of a plane, et al,  when they made the system really made an error in
judgement.  You are bound to have civilian aircraft in and around battle
areas eventually.  Don't expect any system to perform any better than
the designer has given it capabilities to perform!

                                dharvey@wsccs  (Dave Harvey)

------------------------------

Date: 1 Aug 88 19:53:38 GMT
From: cs.utexas.edu!sm.unisys.com!ism780c!logico!david@ohio-state.arpa
        (David Kuder)
Subject: Re: AI...Shoot/No Shoot

In article <603@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>Also, whoever decided that it was not important to determine the size
>of a plane, et al,  when they made the system really made an error in
>judgement.  You are bound to have civilian aircraft in and around battle
>areas eventually.
        Radar cross-section doesn't correlate well to visual cross section.
This is the main idea behind "stealth" aircraft -- they have a small radar
cross-section.

Also in article <854@lakesys.UUCP> tomk@lakesys.UUCP (Tom Kopp) writes:
> I still don't understand WHY the computers mis-lead the crew as to the type of
> aircraft, especially at that range.

        A common trick (tactic) is to hide behind a commercial aircraft.
Whether this is what the Iranians did or not, I don't know.  If New York
is ever bombed it'll look like a 747 did it, though.

--
David A. Kuder                                   Logicon, O.S.D.
{amdahl,uunet,sdcrdcf}!ism780c! \                6300 Variel Ave. Suite H,
        {ucbvax,decvax}!trwrb! -> !logico!david Woodland Hills, Ca. 91367
{genius,jplgodo,psivax,slovax}! /                (818) 887-4950

------------------------------

End of AIList Digest
********************

∂08-Aug-88  2236	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #43  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Aug 88  22:36:21 PDT
Date: Mon  8 Aug 1988 23:17-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #43
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 9 Aug 1988       Volume 8 : Issue 43

 Seminars:

  Describing Program Transformers with Higher-order Unification
  Call for Commentators: Control of Voluntary Movements
  Call for Commentators: Primate Tool Use
  Object-Oriented Database Workshop - OOPSLA 88

----------------------------------------------------------------------

Date: Thu, 21 Jul 88 14:27:56 EDT
From: finin@PRC.Unisys.COM
Subject: Unisys AI Seminar: Describing Program Transformers with
         Higher-order Unification


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER

    Describing Program Transformers with Higher-order Unification

                            John J. Hannan
                   Computer and Information Science
                      University of Pennsylvania


Source-to-source program transformers belong to the class of
meta-programs that manipulate programs as objects. It has previously
been argued that a higher-order extension of Prolog, such as
Lambda-Prolog, makes a suitable implementation language for such
meta-programs. In this paper, we consider this claim in more detail.
In Lambda-Prolog, object-level programs and program schemata can be
represented using simply typed lambda-terms and higher-order
(functional) variables. Unification of these lambda-terms, called
higher-order unification, can elegantly describe several important
meta-level operations on programs. We detail some properties of
higher-order unification that make it suitable for analyzing program
structures. We then present (in Lambda-Prolog) the specification of
several simple program transformers and demonstrate how these can be
combined to yield more general transformers. With the depth-first
control strategy of Lambda-Prolog for both clause selection and
unifier selection, all of the above-mentioned specifications can be, and
have been, executed and tested.



                     2:00 pm Wednesday, August 3
                      Unisys Paoli Research Center
                         BIC Conference Room
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: 4 Aug 88 05:42:16 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Behav. Brain Sci: Call for Commentators


Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international journal of "open
peer commentary" in the biobehavioral and cognitive sciences, published
by Cambridge University Press. For information on how to serve as a
commentator or to nominate qualified professionals in these fields as
commentators, please send email to:         harnad@mind.princeton.edu
or write to:          BBS, 20 Nassau Street, #240, Princeton NJ 08542
                                                  [tel: 609-921-7771]

Strategies for the Control of Voluntary Movements with One Degree of Freedom

Gerald L. Gottlieb (Physiology, Rush Medical Center),
Daniel M. Corcos (Physical Education, U. Illinois, Chicago),
Gyan C. Agarwal (Electr. Engineering & Computer Science, U. Illinois, Chicago)

A theory is presented to explain how people's accurate single-joint
movements are controlled. The theory applies to movements across
different distances, with different inertial loads, toward targets of
different widths over a wide range of experimentally manipulated
velocities. The theory is based on three propositions:
(1) Movements are planned according to "strategies," of which there are at
least two: a speed-insensitive (SI) and a speed-sensitive (SS) strategy.
(2) These strategies can be equated with sets of rules for performing
diverse movement tasks. The choice between (SI) and (SS) depends on
whether movement speed and/or movement time (and hence appropriate
muscle forces) must be constrained to meet task requirements.
(3) The electromyogram can be interpreted as a low-pass filtered
version of the controlling signal to motoneuron pools. This
controlling signal can be modelled as a rectangular excitation pulse
in which modulation occurs in either pulse amplitude or pulse width.
Movements with different distances and loads are controlled by the SI
strategy, which modulates pulse width. Movements in which speed must
be explicitly regulated are controlled by the SS strategy, which
modulates pulse amplitude. The distinction between the two movement
strategies reconciles many apparent conflicts in the motor control literature.
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: 4 Aug 88 05:55:48 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Behav. Brain Sci. Call for Commentators: Primate Tool Use


Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international journal of "open
peer commentary" in the biobehavioral and cognitive sciences, published
by Cambridge University Press. For information on how to serve as a
commentator or to nominate qualified professionals in these fields as
commentators, please send email to:         harnad@mind.princeton.edu
or write to:          BBS, 20 Nassau Street, #240, Princeton NJ 08542
                                                  [tel: 609-921-7771]

Spontaneous tool use and sensorimotor intelligence in Cebus and other
monkeys and apes

Suzanne Chevalier Skolnikoff
Human Interaction Laboratory
Department of Psychiatry
University of California, San Francisco

Spontaneous tool use and sensorimotor intelligence in Cebus
were observed to determine whether tool use is discovered
fortuitously and learned by trial and error or, rather, advanced
sensorimotor abilities (experimentation and insight) are critical in
its development and evolution. During 62 hours of observing captive
groups of cebus monkeys (a total of 12 animals), 38 series and 66 acts
of spontaneous tool use were recorded. Nine monkeys (75%) used tools;
14 kinds of tool use were observed. Of seven captive spider monkeys
observed for 21 hours, none used tools. Comparative observations of
sensorimotor intelligence were made using Piaget's model. The
sensorimotor basis of tool use was also analyzed. Cebus showed all six
of Piaget's levels of sensorimotor intelligence, whereas spider
monkeys showed only the first four stages. Besides these correlations
between tool use and advanced sensorimotor ability, 37 of the 38
tool-use series and 65 of the 66 individual acts involved Stage 5 and
6 sensorimotor mechanisms in Cebus; only one series involved Stage 3
fortuitous discovery and Stage 4 coordinations. This study and a
literature survey suggest that high tool-using propensity among
primates is based on advanced sensorimotor ability rather than
fortuitous discovery.
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: 2 Aug 88 15:14:35 GMT
From: killer!pollux!ti-csl!keith%tilde.csc.ti.com@ames.arpa  (Keith
      Sparacin)
Subject: Object-Oriented Database Workshop


                  OBJECT-ORIENTED DATABASE WORKSHOP

                 To be held in conjunction with the

                             OOPSLA '88

              Conference on Object-Oriented Programming:
                 Systems, Languages, and Applications

                          26 September 1988

                    San Diego, California, U.S.A.


Object-oriented database systems combine the strengths of
object-oriented programming languages and data models, and database
systems.  This one-day workshop will expand on the theme and scope of a
similar OODB workshop held at OOPSLA '87.  The 1988 Workshop will
consist of the following four panels:

  Architectural issues: 8:30 AM - 10:00 AM

    Therice Anota (Graphael), Gordon Landis (Ontologic),
    Dan Fishman (HP), Patrick O'Brien (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Transaction management for cooperative work: 10:30 AM - 12:00 noon

    Bob Handsaker (Ontologic), Eliot Moss (Univ. of Massachusetts),
    Tore Risch (HP), Craig Schaffert (DEC),
    Jacob Stein (Servio Logic), David Wells (TI)

  Schema evolution and version management:  1:30 PM - 3:00 PM

    Gordon Landis (Ontologic), Mike Killian (DEC),
    Brom Mehbod (HP), Jacob Stein (Servio Logic),
    Craig Thompson (TI), Stan Zdonik (Brown University)

  Query processing: 3:30 PM - 5:00 PM

    David Beech (HP), Paul Gloess (Graphael),
    Bob Strong (Ontologic), Jacob Stein (Servio Logic),
    Craig Thompson (TI)


Each panel member will present his position on the panel topic in 10
minutes.  This will be followed by questions from the workshop
participants and discussions.  To encourage vigorous interactions and
exchange of ideas between the participants, the workshop will be limited
to 60 qualified participants.  If you are interested in attending the
workshop, please submit three copies of a single page abstract to the
workshop chairman describing your work related to object-oriented
database systems.  The workshop participants will be selected based on
the relevance and significance of their work described in the abstract.

Abstracts should be submitted to the workshop chairman by 15 August 1988.
Participants selected will be notified by 5 September 1988.

                        Workshop Chairman:

                       Dr. Satish M. Thatte
           Director, Information Technologies Laboratory
                Texas Instruments Incorporated
                   P.O. Box 655474, M/S 238
                        Dallas, TX 75265

                      Phone: (214)-995-0340
  Arpanet: Thatte@csc.ti.com   CSNet: Thatte%ti-csl@relay.cs.net

------------------------------

End of AIList Digest
********************

∂11-Aug-88  2254	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #44  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 11 Aug 88  22:53:41 PDT
Date: Fri 12 Aug 1988 00:07-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #44
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 12 Aug 1988       Volume 8 : Issue 44

 Today's Topics:

  Spang Robinson Reports
  Will computers dominate chess? (EURISKO)

----------------------------------------------------------------------

Date: Wed, 3 Aug 88 21:23:47 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: bm940

Summary of Spang Robinson Report on Artificial Intelligence
Volume 4, No. 6, June 1988

The lead article is on research directions.

Randy Davis at MIT is developing deep-knowledge based systems for
dealing with relationships between devices.  The work focusses on
digital circuits to do circuit design, test generation and
diagnosis.    AI Squared is a new company using this technology
for medical instrumentation.  They are delivering a system for
CAT scanners called FELIX.

The article discusses Xerox PARC and how they bring such disciplines
as anthropology and psychology into AI efforts and study how people
"actually do design work."  Xerox PARC is also looking at office systems
to keep track of office documents.

Price Waterhouse, a Big Eight accounting firm, is doing research into
auditing, tax planning and consulting.  Richard Fikes, now at Price
Waterhouse, is working on a project in international corporate tax
planning.  They are also working on integrating textual material that
does not fit into a structured domain model, but which is applicable,
using a hypertext-like technique.

Lockheed is adding spatial and temporal systems to their expert system
tool, LES.  They are also working on result explanation by means other
than backtracking through rules and rule-base validation.

________________________________________

Neural Networks:

Robert Hecht-Nielsen, et al. will have a proof that a three-layer
back error propagation neural network will always converge, under
certain conditions.  The conditions are the use of 32-bit floating
point math, a square-integrable mapping function, and mapping
regions that are compact and bounded.

Stephen Gallant has constructed a mechanism for neural-network
explanation.  It uses an input vector with only three
values (false, unknown and true) and is faster than Back error Propagation.
The system has been patented.

++++++++++++++++++++++++++++++++++++++++

DARPA has appropriated $60 million for AI research, which is 80 percent
of total AI research funding.  Jack Schwartz, the new DARPA Information
Science and Technology Office director, is favoring AI research
which lasts two to three years and has clearly definable results.  Areas like
logic and those needing "intensive computation" are "considered overly
ambitious."

Cuts of between ten and thirty-three percent are expected for AI research.
There will be emphasis on robotics and algorithms including AI.

________________________________________

Shorts:

Gold Hill Computers let go 20 out of 105 employees, most in sales.
1988 sales flat after a tripling in 1987.  No cuts in development
staff.

Financia is a new company in England that will develop packaged
PC expert systems to advise on equities and futures markets.

Intellicorp has been selected to be an Authorized Marketing Aid
for IBM RT's.

Geosource and Knowledge Systems are joining forces to develop and market
geophysical and geological applications for the energy industry.

Coopers and Lybrand has created Insurance ExperTAX, which helps insurance
companies identify tax accrual issues and tax planning opportunities.

Neuron Data announced that its product now runs on HP 9000 series 300
and series 800 technical work stations.

Inference has made its ART available for the TI MicroExplorer and Sun 4.

------------------------------

Date: Mon, 8 Aug 88 11:12:13 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: bm954

Summary of
Spang Robinson Report on Artificial Intelligence, July 1988, Volume 4, No 7

Lead Article is on Knowledge-Based System Methodology and teaching of
same.
It describes training efforts at various firms such as IBM, TI, DEC
and accounting firms.  Some of these training programs provide "automated
methodologies" and Arthur D. Little provides automated assistants
for these items.  Some include sample systems, e. g. Cullinet's database
performance analyzer.
((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((
Learning Systems

System           Technique      Price
VP-Expert        Karnaugh maps  $124.95
Mac Smarts       Karnaugh maps  $250.00
Super Expert     ID3            $195.00
Rule Master      ID3            $495.00  generates C code
KnowledgeMaker   ID3            $ 95.00  generates Prolog rules, M.1, Insight 2
1st Class        ID3            $495.00  tree can be edited
Fusion           ID3           $1295.00  produces C or Pascal code
IXL              ID3            $495.00  uses statistical methods to predict
                                         relationships, produces confidence
                                         factors
Beagle           genetic        $200.00  produces Fortran, C or Pascal
                 learning

DuPont has used 1st Class and VP-Expert.  An example-based prototype
for a Mylar manufacturing machine was up in an afternoon after
conventional rule-based approaches failed.  An insurance company
achieved expert-level performance in two weeks with 400 examples
and is now in "beta".
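
Most of the tools above are based on Quinlan's ID3, which grows a
decision tree by repeatedly splitting the training examples on the
attribute with the greatest information gain.  A toy sketch of that
attribute-selection step (Python, for illustration only; the example
data are invented):

    import math
    from collections import Counter

    def entropy(labels):
        n = float(len(labels))
        return -sum((c / n) * math.log(c / n, 2) for c in Counter(labels).values())

    def information_gain(examples, labels, attribute):
        # Expected entropy reduction from splitting on one attribute.
        total = entropy(labels)
        by_value = {}
        for ex, lab in zip(examples, labels):
            by_value.setdefault(ex[attribute], []).append(lab)
        remainder = sum(len(subset) / float(len(labels)) * entropy(subset)
                        for subset in by_value.values())
        return total - remainder

    # Invented examples: does a machine run fail, given two settings?
    examples = [{"temp": "high", "speed": "fast"}, {"temp": "high", "speed": "slow"},
                {"temp": "low",  "speed": "fast"}, {"temp": "low",  "speed": "slow"}]
    labels = ["fail", "fail", "ok", "ok"]
    best = max(("temp", "speed"), key=lambda a: information_gain(examples, labels, a))
    print(best)    # "temp" carries all the information here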

()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
AI Software/International Marketing.

Crystal is a system that supports an inductive approach and costs about
a thousand dollars.
Systems Designers International sells SD-Prolog for $499.

Gold Hill percentage of sales in:
  Japan 12 percent
  Europe 10 percent

*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(
Shorts:

Hecht-Nielsen (neural networks) received $3 million in a second
financing round.

Gensym and GigaMos settled their lawsuit confidentially.

A survey of large financial services firms shows 43 percent doing something
regarding expert systems.  Among banks the figure is 60 percent.

Texas Instruments will merge the Data Systems Group into the Computer
Systems Group.  AI activities were in the former.

Gensym will offer G2 (a real-time expert system) on the HP 9000 system.

Lucid has a joint marketing agreement to sell products in Japan.

Aion Corporation and Cincom Systems have a cooperative marketing
agreement.

Intelligent Technology will be distributing the ClienTrak
relationship management system.  It manages key sales activities.

A bridge between V. I. Dataviews interactive graphics and Neuron
Data's NEXPERT Object will be developed by the two companies.

Canadian Artificial Intelligence Products has received a grant from
Telecom Canada to develop a hypertext system.

------------------------------

Date: Mon, 8 Aug 88 11:42:58 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: bm953

Summary of
Spang Robinson Report on Supercomputing and Parallel Processing
July 1988, Volume 2, No. 7

Lead Issue is on "Linda"

14 VAXen loosely coupled outperformed a CRAY-1 on a compute-intensive
task at Sandia.  Linda is a Yale-developed
package being enhanced by Scientific Computing Associates.
They have versions for Encore and Sequent.  Implementations have been
developed primarily for shared-memory systems but can be run
on Intel Scientific systems.  Implementations of Linda exist for Fortran,
Lisp and Pascal.

U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7U7

Applied Intelligent Systems has developed a massive machine vision system
using a proprietary chip featuring eight single-bit processors.  It is similar
to the Thinking Machines design.  The system ranges from a one-board system
with sixty-four processing elements to one with 1024 processing elements.
A new system will have 32-processor chips.  A new system will offer
10 MFLOPS at $500.00.  They also have a product called LAYERS
which provides an object-oriented C-based system.

()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()

The next article is on the Kartashevs, who hold various supercomputing
conferences.  However, some people find the "Kartashev style" abrasive.
A competing conference, "Supercomputing World 1989", is a reaction
against the system.

%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%↑%

Shorts:

Cray received orders for its CRAY 2S from:
  National Center for Supercomputer Applications,  University of Illinois
  National Test Bed Facility at the Falcon Air Force installation
Cray YMP
  Ohio State University Ohio Supercomputer Center
  Shell Research B. V., Exploration Lab, Netherlands

Saxpy will be selling its technology and assets.

Sequent will be joining with Franz and Quintus to offer versions of
LISP and Prolog for their system.

Parasoft will be reselling Definicon's add-in Transputer boards.

Floating Point Systems reported a quarterly loss of $7.3 million.

Multiflow laid off 25 people in manufacturing.

MASSCOMP reported $250,000 revenues.

ETA announces "native UNIX" System V on its system.

Encore announced a fully parallel ADA for the Multimax configuration.  It
received the fastest completion time for the Ada Validation Suite on record.

Sequent announced a version of X-Windows for its systems.

Pacific Cyber/Metrix announced a 250 MIPS VMEbus dataflow machine.
The basic system, consisting of four processors, costs $20,300.
The Japanese government has announced a 640 MFLOP dataflow system.
It has 128 processors and 128 microprocessors for data reads and stores.

------------------------------

Date: 3 Aug 88 18:17:44 GMT
From: Martin-Charles@yale-zoo.arpa  (Charles Martin)
Subject: Re: Will computers dominate chess? (was Re: Computers &
         Chess)

In article <35187@aero.ARPA>, srt@aero (Scott R. Turner) writes:

    Lenat's EURISKO program was innovative enough in Starfleet Battles that
    it was eventually barred from tournament play - after having invented a
    new winning strategy two years running.

Surely you mean /Trillion Credit Squadron/.  Also, my impression is that
EURISKO was not solely responsible for the strategies; considerable editing
was required by Lenat, adjustment of weights, etc.  I believe he cited some
figure such as 60/40 Lenat/EURISKO, which if nothing else at least reflects
his own estimation of the limitations of this program applied to this task.

TCS, while requiring large amounts of data for the various weapon and
defensive systems, is an extremely simple game.  It is the large amount of
data which makes it difficult for humans to "grasp" the game.  The TCS
system was designed to be simple---as the previous three-dimensional game
of maneuver was too complex for people to play with more than a couple of
ships.  Concepts at the level of "fork," basic to tic-tac-toe and chess, do
not play a role in TCS.

The EURISKO line of research was not pursued (as far as I am aware) into
more complex games with less human intervention.

Charles Martin // INTERNET: martin@cs.yale.edu // BITNET: martin@yalecs
UUCP: {cmcl2,harvard,decvax}!yale!martin

------------------------------

Date: 3 Aug 88 19:47:36 GMT
From: att!alberta!jonathan@bloom-beacon.mit.edu  (Jonathan Schaeffer)
Subject: Re: computer chess

In article <376@ksr.UUCP>, richt@breakpoint.ksr.com (Rich Title) writes:
> There's a Carnegie Mellon PhD thesis by Carl Ebeling,
> that was published (by MIT press
> I think) under the title "All the Right Moves". It describes HiTech,
> the current world computer chess champion. That thesis in turn
> points to other papers on computer chess.

Hitech is NOT the World Computer Chess Champion.  In the last championship
in 1986, there was a 4-way tie for first place between Cray Blitz,
Hitech, Bebe, and Phoenix.  Cray Blitz was awarded first place on tiebreak.

"All the Right Moves" is a good thesis, but is not the best place to
look for references.  The International Computer Chess Journal is published
quarterly with the latest in research results, tournaments, games, etc.
That is the best place to look.  Also, several computer chess bibliographies
have been published.  Perhaps the most comprehensive, albeit slightly out of
date, is Tony Marsland's (available as a technical report from the University
of Alberta).

> Carnegie Mellon seems to be *the* place for computer chess.
> Hans Berliner, former postal chess champion, is a comp sci
> professor there.

CMU is only one of a number of places with active computer chess groups.
Others include University of Alberta, McGill University, University of
Limburg, Bell Labs, Los Alamos National Lab, etc.

> The techniques used in the top machines such as HiTech represent
> impressive engineering, but aren't what most people think of
> as "AI". Very fast searching, aided by hardware that generates
> and evaluates moves in parallel and evaluates positions
> in parallel.

True, but that is not all that people are doing in computer chess.
As it stands right now, the strongest chess playing machines are more
engineering than science.  But do not underestimate the scientific
component of computer chess.  A lot of this work may not be high profile
unless it is incorporated as part of a winning chess program, but it
is still important, core AI research.

>     - Rich

      - Jonathan

------------------------------

End of AIList Digest
********************

∂12-Aug-88  1032	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #45  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Aug 88  10:32:36 PDT
Date: Fri 12 Aug 1988 13:13-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #45
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 13 Aug 1988      Volume 8 : Issue 45

 Announcements:

  NSF Robotics and Machine Intelligence Funding
  NSF-DARPA Program in Image Understanding and Speech Recognition
  Announcement of new Journal on Data and Knowledge Engineering
  Neural Computation

----------------------------------------------------------------------

Date: Mon, 08 Aug 88 09:16:26 -0400
From: "Kenneth I. Laws" <klaws@note.nsf.gov>
Subject: NSF Robotics and Machine Intelligence Funding


I am now back on the network, after having a month at NSF to
learn my new job.  (The hardest part was growing an extra finger
on my left hand to accommodate an IBM PC's keyboard.  I can now
hit the shift key almost every time, and the control key better
than seven times out of ten.)

My new address is

  Kenneth I. Laws
  Director, Robotics & Machine Intelligence
  National Science Foundation
  1800 G St. NW, Room 310
  Washington, DC  20550

I intend to conduct as much business as possible over the net,
but, if you must call, the number is (202) 357-9586.  My Bitnet
address is klaws@nsf; Arpanet/Internet klaws@note.nsf.gov.

The ampersand in Robotics & Machine Intelligence can be taken as
a union or an intersection.  Many of the current grants are for
computer vision; a few are for robotic control.  I am
funding some work in acoustic analysis, prosody, and other
aspects of speech recognition.  I also support research on
architecture-related algorithms for vision and AI, since robots
have to deal with the world in real time.  The research that I
most want to encourage, however, is that which will add
intelligence to the golem.  NSF has other programs that include
AI, NL, interactive systems, expert systems, and man/machine
interfaces, as well as manufacturing, control theory, cognitive
modeling, etc., but my program is the one most focused on AI
interacting with the real world.

I have a subprogram on Automated Reasoning and Problem Solving
that includes many traditional AI topics:  knowledge
representation, intelligent databases, heuristic search,
constraint satisfaction, commonsense reasoning, theorem proving,
problem solving, hierarchical reasoning, fuzzy logic, approximate
reasoning, automated design, analogical reasoning, decision
theory, evidential reasoning, machine learning, concept inference,
etc.  Other parts of my program include neural networks,
parameter nets, distributed systems, connectionist expert
systems, and genetic algorithms.  AI work with an analog,
perceptual, or applied flavor is likely to end up in my program,
although other NSF program directors do share these interests.

The available funds can only cover the very best proposals in
so broad a field.  I hope to fund a large number of small grants
rather than a few large ones, so tailor your requests
accordingly.  I will also be asking sharp questions about
"Where's the Science?" and "If this pays off, who will benefit?".
I'm charged with supporting the scientific infrastructure,
and will give top priority to research that could open new areas
of study, transfer promising techniques to new labs, or close out
an unproductive (but seductive) approach.

I am also likely to favor proposals that are crisply and concisely
written, although the peer review system can occasionally uncover
merit in proposals full of equations and jargon.  (Thanks, all
you volunteers!)  Keep in mind that I can't approve your grant
unless you convince me that the scientific community needs the
work done and that you are capable of doing it.  The burden is
on the author!

Even if you win a grant, it is unlikely to support you
continuously for any lengthy period of time.  Our grants are to
fund specific research efforts, not specific researchers or
laboratories.  (You may want to contact me before writing a
proposal, in order to tailor your pitch to the current needs of
the RMI program.)  The review-and-action cycle takes six to nine
months -- sometimes longer -- and there are no guarantees until
the day you get the award letter.  We can provide some continuity
via multiyear "continuing grants", but I would like to reduce the
number of such grants being made by my program.

You can write to me if you would like more information about my
program or the submission of NSF proposals.  You may also want to
check with your local grants officer, who can identify the
sections of our somewhat daunting literature that apply to your
situation. (We have several types of grants, and sending for all
the literature may get you more than you want to know.)  The
actual submission requirements are fairly minor: 15 copies of
a cover sheet, budget, and your proposal.  Short proposals probably
get fastest consideration by reviewers.

See also the following announcement of a joint NSF/DARPA
program in Image Understanding and Speech Recognition, designed
for team efforts and somewhat larger awards than I can manage
from my regular program.

                                        -- Ken Laws

------------------------------

Date: Mon, 08 Aug 88 10:47:06 -0400
From: "Kenneth I. Laws" <klaws@note.nsf.gov>
Subject: NSF-DARPA Program in Image Understanding and Speech
         Recognition


NSF and DARPA are initiating joint support of research in AI,
beginning with Image Understanding and Speech Recognition.  This
year's proposals should be submitted to NSF by November 1, 1988.
Announcements will be distributed through all the usual NSF and
DARPA channels, but you can contact me if you want to be sure of
receiving a copy.

This program is designed to support interdisciplinary,
experimental team research with potential for transfer to
industry or other national use.  Institutions outside the current
NSF and DARPA programs are particularly encouraged to apply.
Initial screening will be done through NSF peer review, with a
joint panel selecting the final grantees.  Funding of up to
$350,000 per year for three years may be available.

Technical inquiries may be made to any of the following:

  NSF   - Dr. Kenneth I. Laws; Division of Information, Robotics,
          and Intelligent Systems; 1800 G St. NW, Room 310;
          Washington, DC  20550; (202) 357-9586.

  DARPA - IU:  Lt. Col. Robert L. Simpson, Jr., Ph.D.;
          (202) 694-4002.

          SR:  Dr. J. Allen Sears; (202) 694-5921.

Submission requirements for this program (NSF-DARPA Initiative,
IRIS/CISE) are described in the standard NSF grant publications.
The announcement sheet will list relevant publications in
somewhat more detail.

                                        -- Ken Laws
                                           klaws@note.nsf.gov
                                           klaws@nsf.bitnet

------------------------------

Date: Tue, 9 Aug 88 19:01:09 EDT
From: Benjamin W. Wah <bwah@large.CISE.NSF.GOV>
Subject: Announcement of new Journal on Data and Knowledge Engineering

              ANNOUNCING THE NEW IEEE TRANSACTIONS
               ON KNOWLEDGE AND DATA ENGINEERING

                              AIM
     The new Transactions on Knowledge and Data Engineering aims
to  provide an international and interdisciplinary forum to pub-
lish  results  on  the  research,  design,  and  development  of
knowledge  and  data  engineering  methodologies, strategies and
systems.

                             FOCUS
     This  Transactions  will  focus  on  knowledge   and   data
engineering.   The  key  technical  issues  it  will address are
related to (a) the acquisition and management of  knowledge  and
data  in the development and utilization of information systems;
(b) strategies to capture and store new knowledge and data;  (c)
methods  to  lessen the burden of software and hardware develop-
ment and maintenance; (d) mechanisms to provide system modeling,
design,  access,  and security and integrity control; (e) archi-
tectures, systems, and components to provide knowledge and  data
services within centralized and distributed information systems;
(f) designs to provide increased intelligence and  ease  of  use
through speech, voice, graphics, images and documents; (g) tech-
niques to provide improved overall functions and performance  to
meet  new  social needs; and (h) the development of ways to pro-
long the useful life of knowledge  and  data  and  its  graceful
degradation.

               SOME PERTINENT AREAS TO BE COVERED
(a)  Knowledge and data engineering aspects of  knowledge  based
     and expert systems
(b)  Artificial Intelligence techniques  relating  to  knowledge
     and data management
(c)  Knowledge and data engineering tools and techniques
(d)  Parallel   and  distributed  knowledge  base  and  database
     processing
(e)  Real-time knowledge bases and databases
(f)  Architectures for knowledge and data based systems
(g)  Data management methodologies
(h)  Database design and modeling
(i)  Query, design, and implementation languages
(j)  Integrity, security, and fault tolerance
(k)  Distributed database control
(l)  Statistical databases
(m)  System integration and  modeling  of  data  and  knowledge
     engineering systems
(n)  Algorithms for data and knowledge management
(o)  Performance evaluation of data and  knowledge  engineering
     algorithms
(p)  Data   communications   aspects   of  data  and  knowledge
     engineering systems
(q)  Applications of data and knowledge engineering systems
(r)  Experience in knowledge and data engineering


                    FREQUENCY OF PUBLICATION
     Quarterly.  The first issue is scheduled to appear in March
1989.

                          SUBSCRIBERS
     Researchers,  developers,  managers,  strategic   planners,
users,  and  others interested in state-of-the-art and state-of-
the-practice activities in the knowledge  and  data  engineering
area.

                  ARTICLE SELECTION PROCEDURES
     This new periodical is  at  the  Transactions  level.   The
selection of articles for publication will follow the guidelines
used by other IEEE Computer Society Transactions,  such  as  the
IEEE  Transactions on Software Engineering and the IEEE Transac-
tions on Computers.  A minimum of three reviews will be required
for a decision to be made on each submitted or solicited paper.

                   TYPE OF ARTICLES PUBLISHED
     The proposed periodical is a Transactions intended to  pub-
lish  original  results  in  research  and  development in areas
relevant to knowledge and data engineering.
     Papers that can  be  submitted  for  consideration  include
those  that  have not previously been published in another jour-
nal, or are not currently being published, as well as those that
have  been  published  in  Conference  Proceedings, Digests, and
Records and that have undergone substantial  revision.   Invited
papers  from  leading  authorities  in  the  knowledge  and data
engineering area will also be published.
     Three types of papers will be published:
(1) Regular  technical  articles  (25-35  double  spaced  pages,
    including  figures, tables, and references): (a) papers with
    extensive original results and (b) in-depth  surveys,  which
    contribute   to   the  understanding  and  advances  in  the
    knowledge and data engineering area;
(2) Concise short articles (maximum  12  double  spaced  pages):
    papers  with results that are important and original and are
    presented in a concise form;
(3) Correspondence articles (maximum  3  double  spaced  pages):
    comments  on previously published articles, short extensions
    to current results, critiques on previous results, responses
    from  authors, and corrections to previously published arti-
    cles.
An effort will be made to shorten the turnaround time  for  con-
cise papers and correspondence articles.

GUIDELINES FOR SUBMITTING PAPERS AND PROPOSALS ON SPECIAL ISSUES
(1) For invited papers and proposals for special issues, send  6
    copies to
                    C. V. Ramamoorthy, Editor-in-Chief
                    Computer Science Division
                    University of California, Berkeley
                    Berkeley, CA 94720
                    ram@ernie.berkeley.edu
(2) For all other submissions, including regular articles,  con-
    cise articles, and correspondence articles, send 6 copies of
    manuscript, complete with illustrations, abstract, and index
    terms, to
                    Benjamin W. Wah, Associate Editor-in-Chief
                    Coordinated Science Laboratory
                    University of Illinois at Urbana-Champaign
                    1101 West Springfield Avenue
                    Urbana, IL 61801
                    (217) 333-3516, (217) 244-7175
                    wah%aquinas@uxc.cso.uiuc.edu
    IEEE copyright transfer form and similar guidelines for sub-
    missions  can  be  found  in  the January 1988 issue of IEEE
    Transactions on Software Engineering.

                      FURTHER INFORMATION
Any questions regarding the journal can be  directed  to  either
the Editor-in-Chief or the Associate Editor-in-Chief.

------------------------------

Date: Thu, 11 Aug 88 17:41:04 edt
From: terry@cs.jhu.edu (Terry Sejnowski <terry@cs.jhu.edu>)
Subject: Neural Computation

                       Announcement and
                        Call for Papers

                      NEURAL COMPUTATION

                   First Issue:  Spring 1989



Editor-in-Chief

Terrence Sejnowski
The Salk Institute and
The University of California at San Diego


Neural Computation will provide a unique interdisciplinary forum
for the dissemination of important research results and for
reviews of research areas in neural computation.

Neural computation is a rapidly growing field that is attracting
researchers in neuroscience, psychology, physics, mathematics,
electrical engineering, computer science, and artificial
intelligence.  Researchers within these disciplines address,
from special perspectives, the twin scientific and engineering
challenges of understanding the brain and building computers.
The journal serves to bring together work from various
application areas, highlighting common problems and techniques
in modeling the brain and in the design and construction of
neurally-inspired information processing systems.

By publishing timely short communications and research reviews,
Neural Computation will allow researchers easy access to
information on important advances and will provide a valuable
overview of the broad range of work contributing to neural
computation.  The journal will not accept long research
articles.

The fields covered include neuroscience, computer science,
artificial intelligence, mathematics, physics, psychology,
linguistics, adaptive systems, vision, speech, robotics, optical
computing, and VLSI.

Neural Computation is published quarterly by The MIT Press.


                       Board of Editors


Editor-in-Chief:  Terrence Sejnowski, The Salk Institute and
                      The University of California at San Diego

Advisory Board:

Shun-ichi Amari, University of Tokyo, Japan
Michael Arbib, University of Southern California
Jean-Pierre Changeux, Institut Pasteur, France
Leon Cooper, Brown University
Jack Cowan, University of Chicago
Jerome Feldman, University of Rochester
Teuvo Kohonen, University of Helsinki, Finland
Carver Mead, California Institute of Technology
Tomaso Poggio, Massachusetts Institute of Technology
Wilfrid Rall, National Institutes of Health
Werner Reichardt, Max-Planck-Institut fur Biologische Kybernetik
David A. Robinson, Johns Hopkins University
David Rumelhart, Stanford University
Bernard Widrow, Stanford University

Action Editors:

Joshua Alspector, Bell Communications Research
Richard Andersen, MIT
James Anderson, Brown University
Dana Ballard, University of Rochester
Harry Barrow, University of Sussex
Andrew Barto, University of Massachusetts
Gail Carpenter, Northeastern University
Gary Dell, University of Rochester
Gerard Dreyfus, Paris, France
Jeffrey Elman, University of California at San Diego
Nabil Farhat, University of Pennsylvania
Francois Fogelman-Soulie, Paris, France
Peter Getting, University of Iowa
Ellen Hildreth, Massachusetts Institute of Technology
Geoffrey Hinton, University of Toronto, Canada
Bernardo Huberman, Xerox, Palo Alto
Lawrence Jackel, AT&T Bell Laboratories
Scott Kirkpatrick, IBM Yorktown Heights
Christof Koch, California Institute of Technology
Richard Lippmann, Lincoln Laboratories
Stephen Lisberger, University of California San Francisco
James McClelland, Carnegie-Mellon University
Graeme Mitchison, Cambridge University, England
David Mumford, Harvard University
Erkki Oja, Kuopio, Finland
Andras Pellionisz, New York University
Demetri Psaltis, California Institute of Technology
Idan Segev, The Hebrew University
Gordon Shepherd, Yale University
Vincent Torre, Universita di Genova, Italy
David Touretzky, Carnegie-Mellon University
Roger Traub, IBM Yorktown Heights
Les Valiant, Harvard University
Christoph von der Malsburg, University of Southern California
David Willshaw, Edinburgh, Scotland
John Wyatt, Massachusetts Institute of Technology
Steven Zucker, McGill University, Canada

Instructions to Authors

The journal will consider short communications, having no more
than 2000 words of text, 4 figures, and 10 citations; and area
reviews which summarize significant advances in a broad area of
research, with up to 5000 words of text, 8 figures, and 100
citations.  The journal will accept one-page summaries for
proposed reviews to be considered for solicitation.

All papers should be submitted to the editor-in-chief.  Authors
may recommend one or more of the action editors.  Accepted
papers will appear with the name of the action editor that
communicated the paper.

Before January 1, 1989, please address submissions to:

                Dr. Terrence Sejnowski
                Biophysics Department
                Johns Hopkins University
                Baltimore, MD  21218

After January 1, 1989, please address submissions to:

                Dr. Terrence Sejnowski
                The Salk Institute
                P.O. Box 85800
                San Diego, CA  92138

Subscription Information

Neural Computation

Annual subscription price (four issues):

                $90.00 institution
                $45.00 individual
                (add $9.00 surface mail or $17.00 airmail postage
                         outside U.S. and Canada)

Available from:

                MIT Press Journals
                55 Hayward Street
                Cambridge, MA  02142
                USA
                617-253-2889

------------------------------

End of AIList Digest
********************

∂12-Aug-88  1424	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #46  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Aug 88  14:24:02 PDT
Date: Fri 12 Aug 1988 15:30-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #46
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 13 Aug 1988      Volume 8 : Issue 46

 Query Responses:

  Feigenbaum's citation
  Sigmoid transfer function

----------------------------------------------------------------------

Date: 11 Aug 88 06:07:38 GMT
From: mcvax!inria!crin!napoli@uunet.uu.net  (Amedeo NAPOLI)
Subject: Feigenbaum's citation

Is there anybody to tell me the title of the book in which E. Feigenbaum
says:

``AI focused its attention most exclusively on the development of clever
  inference methods. But the power of its systems does not reside in the
  inference methods; almost any inference method will do. The power
  resides in the knowledge''

Many thanx in advance,
--
--- Amedeo Napoli @ CRIN / Centre de Recherche en Informatique de Nancy
EMAIL : napoli@crin.crin.fr - POST : BP 239, 54506 VANDOEUVRE CEDEX, France

------------------------------

Date: 11 Aug 88 16:29:22 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Feigenbaum's citation


      I heard him say things very similar to that around Stanford in 1983.
In the early days of expert systems, that was a common remark.  It reflects
a turf battle with the logicians that was taking place at the time, theorem-
proving having been a dominant paradigm in AI in the late 1970s and early
1980s.

      It's not clear that such a remark has relevance today.  The optimistic
hope that dumping lots of rules into a dumb inference engine would produce
something profound has faded.  Experience with that approach has produced
more understanding of what can and cannot be accomplished in that way.
More work is taking place on the underlying machinery again.  But now,
there is the realization that the machinery exists to process the knowledge
base, not to implement some interesting logical function.  In retrospect,
both camps (and there were camps, at Stanford, in separate buildings)
were taking extreme positions, neither of which turned out to be entirely
satisfactory.  Work today lies somewhere between those poles.

      Plans are underway, amusingly, to get both groups at Stanford under
one roof again in a new building.

                                        John Nagle

------------------------------

Date: 11 Aug 88 17:17:26 GMT
From: pasteur!agate!garnet.berkeley.edu!ked@ames.arpa  (Earl H.
      Kinmonth)
Subject: Re: Feigenbaum's citation

As I remember, Feigenbaum achieved notoriety for his (probably
largely ghosted) book on the Japanese "Fifth Generation
Project."  Did anything ever come out of the Fifth Generation
project other than big lecture fees for Feigenbaum to go around
warning about the Japanese peril?

Is he really a pompous twit (the impression given by the book) or
is that due to the scatter-brained ghost writer?

------------------------------

Date: 11 Aug 88 23:35:44 GMT
From: prost.llnl.gov!daven@lll-winken.llnl.gov  (David Nelson)
Subject: Re: Feigenbaum's citation

In article <17626@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>

[stuff about Feigenbaum's remark omitted]

>      It's not clear that such a remark has relevance today.  The optimistic
>hope that dumping lots of rules into a dumb inference engine would produce
>something profound has faded. ....

To be replaced by the optimistic hope that dumping lots of examples into a
dumb neural net will produce something profound :-)

daven


daven (Dave Nelson)
arpa:  daven @ lll-crg.llnl.gov
uucp:  ...{seismo,mordor,sun,lll-lcc}!lll-crg!daven

------------------------------

Date: 8 Aug 88 03:00:35 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Sigmoid transfer function

In article <25516@ucbvax.BERKELEY.EDU> munro@icsia.UUCP (Paul Munro) writes:
>
>Try this one :   f(x) = x / (1 + |x|)
>

      The graph looks OK, although some scaling is needed to make it comparable
to the sigmoid.  Someone should try it in one of the popular neural net
simulators and see how the results change.

                                        John Nagle

------------------------------

Date: 10 Aug 88 17:46:01 GMT
From: amdahl!pyramid!prls!philabs!aecom!krishna@ames.arpa  (Krishna
      Ambati)
Subject: Re: Sigmoid transfer function

In a previous article, John B. Nagle writes:
> In article <25516@ucbvax.BERKELEY.EDU> munro@icsia.UUCP (Paul Munro) writes:
> >
> >Try this one :   f(x) = x / (1 + |x|)
> >
>
>       The graph looks OK, although some scaling is needed to make it
> comparable
> to the sigmoid.  Someone should try it in one of the popular neural net
> simulators and see how the results change.
>
>                                       John Nagle




I did try it out in a simulation of the Traveling Salesman Problem using
the Hopfield-Tank model.  Unfortunately, it yields pretty poor results,
probably because it does not rise quickly in the middle region and,
furthermore, its convergence to 1 (after scaling) is pretty slow.
I would be happy to hear of more positive results.


Krishna Ambati
krishna@aecom.uucp

------------------------------

Date: 11 Aug 88 16:14:21 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Sigmoid transfer function

In article <1960@aecom.YU.EDU> krishna@aecom.YU.EDU (Krishna Ambati) reports
that

        f(x) = x / (1 + |x|)

is a poor transfer function for neural net units, not rising steeply enough
near the transition point.  This seems reasonable.

What we may need is something that looks like a step function fed through
a low-pass filter.  The idea is to come up with a function that works but
can be computed with less hardware (analog or digital) than the sigmoid.
Try again?

                                        John Nagle
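
For anyone who wants to compare the candidates numerically, here is a minimal
sketch in Python (not part of the original posting; the rescaling to the 0..1
range and the gain constant are illustrative assumptions only):

import math

def logistic(x):
    # Standard sigmoid: smooth, monotonic, 0..1, steepest near x = 0.
    return 1.0 / (1.0 + math.exp(-x))

def rational_squash(x, gain=1.0):
    # Munro's f(x) = x / (1 + |x|), rescaled from (-1, 1) to (0, 1) so it
    # can be compared directly with the logistic function.  'gain' is an
    # illustrative knob for steepening the middle region.
    y = (gain * x) / (1.0 + abs(gain * x))
    return 0.5 * (y + 1.0)

for x in [-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0]:
    print("x=%5.1f  logistic=%.3f  rational=%.3f"
          % (x, logistic(x), rational_squash(x, gain=2.0)))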

------------------------------

Date: 11 Aug 88 20:09:22 GMT
From: phri!cooper!gene@nyu.edu  (Gene (the Spook) )
Subject: Re: Sigmoid transfer function

in article <17615@glacier.STANFORD.EDU>, John B. Nagle says:
> Xref: cooper sci.electronics:2943 comp.ai:1519 comp.ai.neural-nets:158
>
>
>      Recognize that the transfer function in a neural network threshold unit
> doesn't really have to be a sigmoid function.  It just has to look roughly
> like one.  The behavior of the net is not all that sensitive to the
> exact form of that function.  It has to be continuous and monotonic,
> reasonably smooth, and rise rapidly in the middle of the working range.
> The trigonometric form of the transfer function is really just a notational
> convenience.
>
>      It would be a worthwhile exercise to come up with some other forms
> of transfer function with roughly the same graph, but better matched to
> hardware implementation.  How do real neurons do it?

Oooh, yeah! Why not make a differential amplifier out of two transistors
or so? Just look up how ECL gates are constructed to get the basic design.
If you want, you can take a basic op amp and "compare" the input against a
reference at the median voltage, then scale up/down and level shift if necessary.
I'm assuming that you mostly care about just the three input levels you
mentioned. Try this:

Vi = -oo        Vo = 0.0
Vi = 0.0        Vo = 0.5
Vi = +oo        Vo = 1.0

So take an op amp and run it off of, say, +-10V.
With a gain of around +10, an input with an absolute value of around 1V will
saturate the output at the respective supply rail. For all practical purposes,
you'll have a linear output within +-1.0V, with maximum output being the
supply voltages. To increase the linear "spread", lower the gain; to
decrease it, increase the gain.

So fine, that'll get you a +-10V output. Now use two resistors to make a
voltage divider. A 1k and 9k will give you a /10 divider, now giving you
a +-1.0V output, for example. Use a trimmer if you want to and get the
right voltage swing for your purposes. In your case, a /20 divider will
get you a +-0.5V swing. Use a second op amp as a level shifter, set the
shift at 0.5V, and voila! Now you have a 0.0 to 1.0 voltage swing!

If that's acceptable for your purposes, fine. If you want to "soften" the
corners, use a pair of inverse-parallel diodes which will start to
saturate as you get near their corner- or knee-voltage (V-sub-gamma).
In short, just play around with whatever comes to mind, and see if it
suits your purpose. Have fun!

                                                Spookfully yours,
                                                Gene

                                                ...!cmcl2!cooper!gene
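
Purely as an illustration of the arithmetic in the recipe above, here is a
small Python sketch of the chain (an ideal op amp with a gain of 10 clipped at
the +-10V rails, a /20 divider, and a 0.5V level shift); the idealized model
and the test voltages are editorial assumptions, not a circuit simulation:

def opamp_stage(vin, gain=10.0, rail=10.0):
    # Ideal op amp with finite gain, clipped at the +/-10 V supply rails.
    return max(-rail, min(rail, gain * vin))

def squash(vin):
    # Chain from the post: gain-of-10 stage, /20 resistive divider,
    # then a 0.5 V level shift, giving a 0.0 .. 1.0 output swing.
    v = opamp_stage(vin)       # +/-10 V, linear only for |vin| < 1 V
    v = v / 20.0               # divider: +/-0.5 V
    return v + 0.5             # level shift: 0.0 .. 1.0

for vin in [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]:
    print("Vin=%5.2f V  Vout=%.2f" % (vin, squash(vin)))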

------------------------------

Date: 11 Aug 88 21:30:52 GMT
From: ankleand@athena.mit.edu  (Andy Karanicolas)
Subject: Re: Sigmoid transfer function (long)


      THE VIEWS AND OPINIONS HERE ARE NOT NECESSARILY THOSE OF M.I.T.

Here is a schematic of a circuit that should perform the "sigmoid" transfer
function talked about.  The op-amp could be replaced with current mirrors to
perform a subtraction but this circuit is easier (to draw!).  I'm sure there
are plenty of other (better, simpler) ways to accomplish the task. Maintaining
voltage as the analog variable adds to circuit complexity.

              * PLEASE, NO FLAMES; THIS IS JUST A SUGGESTION *

NOTE:  The 'X'  indicates a connection where ambiguity with
       a crossing may exist.
       The 'N' on the transistors indicates emitter for NPN device.


           ___________________VC1
          |          |
          /          /
      R0  \          \  R0
          /          /                                 R1
          \          \                            ___/\/\/\______
          |          |                  R1       |               |
          |          X _______________/\/\/\_____X___|\ --VC2    |
          |          | V2                            |- \ _______X____
          X _________|________________/\/\/\_____ ___|+ /             |
          | V1       |                  R1       X   |/ --(-VC2)      |
     Q1   |          |   Q2                      |      A2            |
        |/            \|                         /                    /
    ----|              |----                  R1 \                RP2 \___VOUT
   |    |\N     RP0  N/|    |                    /                    /
   |      |___/\/\___|      |                    \                    \
   |            |           |                    | VREF               |
   |            |           |                RP1 |                    |
   |            |           |        GND____/\/\/\/\/\____VC1        GND
   |            |           |
   |            |           |
   |            |___________|__________________        ______ __/\/\/\__GND
   |                        |     --> IX       |      |      X   R4
   |          R2  ___/\/\___X                  |      |      |
   |      R2     |          |                   \|    |    |/
   | ____/\/\____X__|\--VC2 |                Q3  |----X----|  Q4
   X                |- \ ___|                  N/|         |\N
   |            ____|+ /                       |             |
   VIN       GND    |/--(-VC2)                 |             |
                      A1                       /             /
                                               \ R3          \ R3
                                               /             /
                                               \             \
                                               |_____________X____(-VC2)


The transfer function of this circuit is:

        VOUT ~= B2 * { B1 * VC1   +   IX * R0 * TANH[ VIN / VTH ] }

        where VTH = kT/q and is about 25mV at room temp.

The constants B2 and B1 are less than unity and are set by potentiometers
RP2 and RP1 respectively.

Circuit description:

        Q1 and Q2 form a differential amplifier that provides the tanh
        function.  The potentiometer RP0 helps to equalize transistor
        mismatches.  RP0 should be as small as possible to maintain the
        tanh function of this amplifier.  Choosing a large RP0 will cut
        down the gain at the midpoint of the 'S'; the tanh function gets
        'linearized' and the above transfer equation becomes invalid.


        The op-amp A1 provides an inverted version of the input voltage;
        together with the input itself, the input to the diff. amp is a
        differential mode signal (within component tolerances) equal to
        2 * VIN.  The input should be from a low impedance source or an
        input buffer will be needed.

        The op-amp A2 is configured in a differencing mode.  The transfer
        function of this amplifier is:  VOUT = V1 - V2 + VREF.  The
        adjustable reference VREF adjusts the constant B1 and the attenuation
        pot. on the output of A2 adjusts the constant B2.

        Q3 and Q4 form a current source IX.  It can be replaced by a simple
        resistor but the current source should help maintain the tanh
        function for large input signals.  IX is set by R3, R4 and VC2.

One design example:

        VC1 = 5V
        VC2 = 15V   (typical supply is +5, +15, -15)
        set IX * R0 = 0.5
        set B1 * VC1 = 0.5  (VREF = 0.5)
        use RP0 = 25 ohms
        set IX = 0.5mA   so that R0 = 1K
        to cut down loading effects, set R1 = 47K (arbitrarily)
        set RP1 to 1K (much smaller than R1)
        set RP2 to 1K
        for IX = 0.5mA, set R3 = 100 ohms; R4 then is about (15 - .65)/.5mA
        set R4 to 27K
        the choice of R2 is not critical; use 1K
        Q1-4 can be standard 2N2222 or 2N3904 NPN's
        A1 and A2 can be LM301s (or, UGHH!!, 741's even..)
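
Plugging the design-example numbers into the transfer function quoted above
gives a quick feel for the curve.  The following Python sketch is only an
editorial illustration; B2 = 1.0 is an assumed setting of RP2:

import math

VTH = 0.025          # thermal voltage kT/q, about 25 mV at room temperature

def sigmoid_circuit(vin, b2=1.0, b1_vc1=0.5, ix_r0=0.5):
    # Transfer function from the post, with the design-example values
    # IX*R0 = 0.5 and B1*VC1 = 0.5; B2 = 1.0 is an assumption (in the
    # circuit it is set by the RP2 potentiometer).
    return b2 * (b1_vc1 + ix_r0 * math.tanh(vin / VTH))

for mv in [-100, -50, -25, 0, 25, 50, 100]:
    vout = sigmoid_circuit(mv / 1000.0)
    print("VIN=%5d mV  VOUT=%.3f V" % (mv, vout))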


Have fun and good luck!

Andy Karanicolas
Microsystems Technology Laboratory
ankleand@caf.mit.edu

------------------------------

End of AIList Digest
********************

∂12-Aug-88  1753	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #47  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Aug 88  17:53:09 PDT
Date: Fri 12 Aug 1988 15:35-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #47
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 13 Aug 1988      Volume 8 : Issue 47

 Philosophy:

  AI and the future of the society
  Dual encoding, propositional memory and...
  Self-reference in Natural Language
  point of metalanguage in language
  The Godless assumption

----------------------------------------------------------------------

Date: 5 Aug 88 17:24:30 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: AI and the future of the society


Antti Ylikoski (YLIKOSKI@FINFUN.BITNET) writes:
>I once heard an (excellent) talk by a person working with Symbolics.
>(His name is Jim Spoerl.)
>
>One line by him especially remained in my mind:
>
>"What we can do, and animals cannot, is to process symbols.
>(Efficiently.)"
>
>
>In the human brain, there is a very complicated real-time symbol
>processing activity going on, and the science of Artificial
>Intelligence is in the process of getting to know and to model this
>activity.
>
>A very typical example of the human real-time symbol processing is
>what happens when a person drives a car.  Sensory input is analyzed
>and symbols are formed of it: a traffic sign; a car driving in the
>same direction and passing; the speed being 50 mph.  There is some
>theory building going on: that black car is in the fast lane and
>drives, I guess, some 10 mph faster than me, therefore I think it's
>going to pass me after about half a minute.  To a certain extent, the
>driver's behaviour is rule-based: there is for example a rule saying
>that whenever you see a red traffic light in front of you you have to
>stop the car.  (I remember someone said in AIList some time ago that
>rule-based systems are "synthetic", not similar to human information
>processing.  I disagree.)

      As someone who works on automatic driving and robot navigation,
I have to question this.  One notable fact is that animals are quite
good at running around without bumping into things.  Horses are capable
of running with the herd over rough terrain within hours of birth.
("Horses of the Camargue" has some beautiful pictures of this.)  This
leads one to suspect that the primary mechanisms are not based on
symbols or rules.  Definitely, learning is not required.  Horses are
born with the systems for walking, obstacle avoidance, running, standing up,
motion vision, foot placement, and small-obstacle jumping fully functional.

      More likely, the basics of navigation are based on geometric
processing, or what some people like to call "spatial reasoning".
See Witkin and Kass's work at Schlumberger for some
idea of what this means.  Oussama Khatib's approach to path planning
(1979) is also very relevant.  Geometry has the advantage of being
compatible with the real world without abstraction.  Recognize that
abstraction is not free.  In real-world situations, as faced by robots,
the processing necessary to put the sensory data into a form where rule-based
approaches can even begin to operate is formidable, and in most non-trivial
cases is beyond the state of the art.

     I would encourage people moving into the AI field to work in the
vision, spatial, and geometric domains.  There are many problems that
need to be solved, and enough computational power is becoming available
to address them.  Much of the impetus for the past concentration on highly
abstract domains came from the need to find problems that could be
addressed with modest computational resources.  This is much less of a
problem today.  We are beginning to have adequate tools.

     Personally, I suspect that horse-level performance in navigation
and squirrel-level performance in manipulation can be achieved without
any component of the system using mathematical logic.

                                        John Nagle

------------------------------

Date: 5 Aug 88 10:52 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Dual encoding, propositional memory and...


> (yes, I encode all my knowledge of a scene into
>little FOPC like tuples, honest) ......
>...  thinking about bit level encoding
> protocols, ....

No: that's not the claim. You are here talking about the implementation level of
encoding: we are talking about the semantic level.  You might encode your
epistemologically adequate (see McCarthy and Hayes 1969) representation in all
sorts of ways, perhaps as states of a connectionist network (although I haven't
yet seen a way in which it could really be done), probably not as lots of
little n-tuples (very inefficient).  The point at issue was whether the
knowledge is encoded or not, and whether, if it is, we can make much progress
without thinking about how it does its representing.

>The dual coding theory, which normally distinguishes
>between iconic and semantic memory, has caused
>endless debate

Yes, but much of this debate has been between psychologists, and so has little
relevance to the issues we are discussing here.   These trails aren't in
different directions to McCarthy's, they are in a different landscape.  I've had
interesting arguments with some of them, about such terms as `iconic memory'.
Are there iconic and propositional representations in the head?  Of course, the
psychologist says; if we can visualise, producing different behaviour than when
we remember: that's what `different' MEANS.  That's not what the AI modeller means
by `different', though.  If one takes the iconic/semantic distinction to refer
to different ways in which information can be encoded, then it isn't at all
obvious that different behavior means different representations (though it
certainly suggests different implementations).

>Pat's argument hinges on the demand that we think
>about something called representation (eh?) and then
>describe the encoding.  The minute you are tricked...

Well now, let's be clear.  The argument goes like this. People know things -
facts, let's say, but use a different word if you like - and their behavior is
influenced in important ways by the things they know and what they are able to
conclude from them.  It seems reasonable to conclude that these facts that they
know are somehow encoded in their heads, i.e. a change of knowledge-state is a
change of physical state.  That's all the trickery involved in talking about
`representation', or being concerned with how knowledge is encoded.  All the
rest is just science: guesses about how this encoding is done, observations
about good and bad ways to describe it, etc.
Do you disagree with any of this, Gilbert?  If so, what alternative account
would you suggest for describing, for example, whatever it is that we are doing
sending these messages to one another?

>PDP networks will work of course,...

Well, will they?  Let's see them do some cognitive task of the sort McCarthy has
been aiming at.   But it must be possible, I agree, to implement it all this
way, since we are ourselves walking, talking networks.

> ....but you can't of
>course IMAGINE the contents of the network, and thus
>they cannot be a representation

Sure they can be a(n implementation of a) representation. And sure we can
imagine the contents of the network: people do it all the time.  When someone
shows me a network doing a bit of semantic memorising, you can bet they are
explaining to me how to imagine what's in the network.

If people who attack AI or the Computational Paradigm simultaneously tell me
that PDP networks are the answer, I know they haven't understood the point of the
representational idea.  Go back and (re)read that old 1969 paper CAREFULLY,
Gilbert, before you find yourself writing a book like Dreyfus's.

Pat Hayes

------------------------------

Date: Tue, 9 Aug 88 10:55 CDT
From: <CMENZEL%TAMLSR.BITNET@MITVMA.MIT.EDU>
Subject: Self-reference in Natural Language

In AIList Digest vol. 8 num. 29 I claimed that Bruce Nevin's
(bnevin@cch.bbn.com) analysis of self-reference in natural language
entailed there was something semantically improper about sentences like
"This sentence is in English" and "This sentence is grammatical."  My claim
was that they are wholly unproblematic, and hence that there was something
wrong with the analysis. In his lengthy and very interesting reply, Nevin
demurs:

> They are not "wholly unproblematical," they engender a double-take kind of
> reaction.  Of course people can cope with paradox, I am merely accounting
> for the source of the paradox.

First, I'm dubious about whether they do engender the sort of double-take
Nevin refers to here.  But even so, it's not at all clear to me what that's
supposed to signal.  People sometimes have a similar reaction to sentences
containing several negatives, but for all that we wouldn't want to cast
aspersions on them.  Second, Nevin seems to be implying that people have to
"cope with paradox" when they are confronted with the self-referential
sentences above.  But again, even if we grant that they are problematic,
there's surely no paradox lurking anywhere nearby.

> If I say it in Modern Greek, where the noun followed by deictic can
> come last, the normal reading is still for "this" to refer to a nearby
> prior sentence in the discourse.  The paradoxical reading has to be
> forced by isolating the sentence, usually in a discourse context like
> "The sentence /psema ine i frasi afti/, translated literally 'Falsehood
> it is the sentence this', is paradoxical because if I suppose that it
> is false, then it is truthful, and if I suppose it is truthful, then it
> is false." These are metalanguage statements about the sentence.  The
> crux of the matter (which word order in English only makes easier to
> see), is that a sentence (or any utterance) cannot be a metalanguage
> statement about itself--cannot be at the same time a sentence in the
> object language (English or Greek) and in the metalanguage (the
> metalanguage that is a sublanguage of English or of Greek).

The assumption that there is a metalanguage/object language distinction in
ordinary language is carrying an awful lot of weight here.  There's no
doubt we have to make and heed such a distinction when we're doing formal
semantics--where we've actually got a rigorously defined formal language,
and we're describing how it's to be interpreted in a formal model--but it's
not clear there is such a distinction to be made in natural language.  It
seems to me far more natural just to say that English (for example), in
addition to containing terms that refer to planets, numbers, and the like,
also has terms that refer to elements of the language itself, and in the
limiting case, to expressions that contain those very terms.  Granted, this
is precisely the capacity that leads us to paradox; but I would like to see
some evidence that this account is wrong other than the fact that it gets
us into trouble.  That is, is there any linguistic intuition that Nevin can
cite to justify his account in addition to his claim that he's got a
solution to the paradoxes?  Consider an analogy in set theory.  Russell's
paradox initially engendered all sorts of confusion and consternation--the
intuitive assumption that for every property there is a corresponding set
of things that have the property leads to contradiction.  Eventually, there
came something of an explanation in the form of the so-called {\it
iterative} conception of set--sets are collections that are "built up" from
some initial collection of atomic elements by certain operations.  Some
properties pick out collections that could never be the result of any such
building-up process, and hence are not SETS.  Not exactly airtight, but
there is an appealing intuition there.  Is there anything analogous for
Nevin's account?  I'm not being rhetorical here; there may well be, I just
haven't been able to think of any.

Chris Menzel
cmenzel@tamlsr.bitnet
chris.menzel@lsr.tamu.edu

ps:  In my first reply to Nevin I included two books for "Recommended
reading."  This I fear came off looking as if I were patronizing him, which
was most definitely not my intent, and I apologize for the misimpression.
I have learned a great deal from both books (Martin's {\it Recent Essays on
Truth and the Liar Paradox} and Barwise and Etchemendy's {\it The Liar: An
Essay on Truth and Circularity}), and my recommendations were sincere, and
intended as a genuine contribution for the benefit of the readers of
AIList Digest.

------------------------------

Date: Tue, 9 Aug 88 16:07:24 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: point of metalanguage in language

We are talking about self-referential sentences of the type:

        This sentence is { English | grammatical | long | . . . }

I agree that these sentences are paradoxical only if the adjective is
something like `false, a lie'.  You get paradox only when successive
readings contradict each other and must be reconciled because it is
after all but one sentence.  The contradiction makes it impossible to
ignore the semantic problem of not being able to resolve referentials.

In the paradoxical case, the infinite regress of reading-tokens cannot
be ignored because of the contradiction.  But even without the
contradiction between successive readings, you always get an effect that
you might call `referential reverberation', since to evaluate the truth
or appropriateness of a sentence containing a deictic you naturally
examine the thing to which it refers, which in these cases happens to be
the sentence itself.  The rereading (checking out the referent) refers
again to itself.  Most language users don't continue doing this for very
many iterations.  Stopping runaway loops presumably has some adaptive
value for intelligent entities!  Contrast the following case:

        Cesuwi tini:maCQati.  The preceding sentence is in Achumawi.

No reverberation, just one glance back.  (Uppercase letters are for
glottalized stops.)

I think self-referential sentences (sentences that refer to themselves
as a whole, not just to words or constructions in themselves) are
perhaps initially amusing and then later annoying to people because they
are rather a perversion of the machinery of deixis.  Such sentences
occur only in the most artificial circumstances.  Virtually anything
they can say about themselves is self-evident and therefore redundant.
Except for illustrating some oddities about language, they are
pointless, whether true or false.  I mean, who cares that `Afti i frasi
ine st anglika' or `This sentence is in Greek' is false?  To say such a
self-evident falsehood is foolishness without even the point of humor in
any circumstances that I can think of.  This surely contributes to the
feeling of anomaly and is another reason why they are not `wholly
unproblematic,' but the real tale is in the process of resolving
referentials, as described previously.

There is a somewhat similar case in which simple falsehood usually goes
unnoticed, an oversight which is certainly not characteristic of
non-self-referential situations:  I am thinking here of the familiar
notice on an otherwise blank page that says `this page intentionally
left blank'.  This is like +-------------------+
                           + This box is empty +
                           +-------------------+.  Is the preceding a
sentence in English?  It would not go unnoticed.  One can construct
cases like

        This sentence, about `psemata sta anglika,' is in English.
        Afti i frasi, epi `falsehoods in Greek,' sta elinika ine.

but they too would scarcely go unnoticed.  Hofstadter plays these
recursion games very nicely.

Multiple negatives do indeed involve similar processes, since negation
is a metalinguistic operator, one of denial.  Such cases are problematic
because language users try to resolve them to a simple assertion or
denial.  (Easier to do in languages or dialects admitting multiple
negation for intensive expression, as in West African languages and
Black English.)  Things like `I don't disagree' and `not unlike the
denial of' are considered stylistically bad but are not semantically
flawed in the way that self-referential sentences are, because the
multiple rereadings have a limit rather than implying infinite regress.

One might argue that inability to resolve referentials is a matter of
performance rather than competence.  That limb is open if anyone wants
to go out on it.

The point about natural language containing its own metalanguage is not
that it can be abused in degenerate cases, but rather that we need no
other a prioristic metalanguage for grammar and semantics.  Indeed,
language users have recourse to no such external metalanguage for
learning and using their native language.  If a description of a
language cannot be stated in the language itself (that is, in its
intrinsic, built-in metalanguage), then it is incorrect.  It is bound to
introduce redundancy into the description over and above the redundancy
used by the language for informational purposes, and this has the status
of noise obscuring any account of the information in texts.  Formal
notations may be convenient for computational and other purposes, but
they must be straightforward graphic variants of words and constructions
in the metalanguage that is a part of the language that they describe.
The question of the status of the metalanguage thus points to a
criterion for comparison of different descriptions as to their adequacy
for representing language and what it does.  See e.g. Z. S. Harris
_Language and Information_.  My review of this book should appear in
_Computational Linguistics_ 14.4, scheduled to be mailed in January
1989.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

PS:  I have not read the books you cite but will look for them.

------------------------------

Date: Thu, 11 Aug 88 11:58:35 GMT
From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
Subject: The Godless assumption

In going through my backlog of AI Mail I found two rather careless
statements.

In article <445@proxftl.UUCP>, bill@proxftl.UUCP (T. William Wells) writes:
> that,.... This means
> that I can test the validity of my definition of free will by
> normal scientific means and thus takes the problem of free will
> out of the religious and into the practical.

Why should 'religious' not also be 'practical'?  Many people - especially
ordinary people, not AI researchers - would claim their 'religion' is
immensely 'practical'.  I suggest the two things are not opposed.  It may
be that many correspondents *assume* that religion is a total falsity or
irrelevance, but this assumption has not been proved correct, and many
people find strong empirical evidence otherwise.


Date: Sun, 03 Jul 88 03:47:51 EST
Jeff Coggshall <KLEMOSG%YALEVM.BITNET@MITVMA.MIT.EDU> writes
>Subject: Metaepistemology & Phil. of Science
>    Once we assume that there is no privileged source knowledge about
>the way things really are, then, it seems, we are left with either
>saying that "anything goes" ...

That there is 'no privileged source knowledge' is a mere assumption that
has very little evidence to support it.  And there are many who do not
make that assumption, believing in religious revelation.  Many would
claim the Bible, for instance, is God's revelation to humankind.  Some
other religions would make equivalent claims.
Therefore we cannot *assume* such things without the danger of writing
off a huge section of reality which our theories should fit.

Since the non-existence/irrelevance of God has not yet been proved, and many
claim to have strong empirical evidence of God's existence and
effectiveness in their lives, may I ask that correspondents think more
carefully before making statements like the two above.

Thank you,

Andrew Basden.

------------------------------

End of AIList Digest
********************

∂13-Aug-88  1627	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #48  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Aug 88  16:27:08 PDT
Date: Sat 13 Aug 1988 19:06-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #48
To: AIList@AI.AI.MIT.EDU


AIList Digest            Sunday, 14 Aug 1988       Volume 8 : Issue 48

 Queries:

  Category Theory in AI
  What is Category Theory
  Ada Shells
  Diagnosing plant diseases
  Looking for a Cognitive Science Society
  IEEE Conf. on AI Applications (CAIA) 1989 ?
  Camera Stabilization (1 response)
  Simple ES tools for the Sun

----------------------------------------------------------------------

Date: 8 Aug 88 21:25:42 GMT
From: vsi1!wyse!mips!prls!philabs!dpb@ames.arpa  (Paul Benjamin)
Subject: Category Theory in AI

Some of us here at Philips Laboratories are using universal
algebra, and more particularly category theory, to formalize
concepts in the areas of representation, inference and
learning.  We are interested in finding others who are taking
a similar approach.  If you are already working in this
area, please respond by email or USmail (please do not
post), so that we can form an informal network, and
interchange information.

Paul Benjamin
Philips Laboratories
345 Scarborough Rd.
Briarcliff, NY 10510

{uunet,decvax}!philabs!dpb

------------------------------

Date: 12 Aug 88 01:30:57 GMT
From: kddlab!atr-la!geddis@uunet.uu.net  (Donald F. Geddis)
Subject: Re: Category Theory in AI

In a previous article, Paul Benjamin writes:
> Some of us here at Philips Laboratories are using  universal
> algebra, and more particularly category theory, to formalize
> concepts in  the  areas  of  representation,  inference  and
> learning.
>
> Paul Benjamin
>
> {uunet,decvax}!philabs!dpb

I'm familiar with those areas of AI, but not with category theory (or
universal algebra, for that matter).  Can anyone give a short summary for
the layman of those two mathematical topics?  And perhaps a pointer as to
how they might be useful in formalizing certain AI concepts.  Thanks!

    -- Don
--
"You lock the door, and throw away the key
 There's someone in my head, but it's not me."   -- Pink Floyd
Internet: Geddis@Score.Stanford.Edu (which is forwarded to Japan...)
USnail:   P.O. Box 4647, Stanford, CA  94309  USA

------------------------------

Date: Mon 8 Aug 88 15:16:57-PDT
From: VERNE@ECLA.USC.EDU
Subject: Ada Shells

Do you know of any Ada Expert System Shells currently available (either
Public domain or commercial)??

Please reply directly to me as I normally do not monitor this list. If there
are sufficient replies I will summarize for the list.
        Howard Verne
        VERNE@ECLA

------------------------------

Date: 9 Aug 88 14:03:32 GMT
From: mcvax!ukc!reading!onion!cf-cm!cybaswan!cslaurie@uunet.uu.net 
      (Laurie Moseley )
Subject: Diagnosing plant diseases

############################################################################

I am looking for {references | names | addresses | phone numbers } of people
who have developed {expert | intelligent knowledge-based | decision support}
systems in the area of diagnosing plant diseases. Can anyone help ?

I am familiar with the soy bean work of Michalski et al, but know of nothing
since then.

                        With thanks

                        Laurence Moseley


JANET: cslaurie@uk.ac.swan.pyr

SMAIL: Computer Science, University College, Swansea SA2 8PP, UK

TEL:   44 - 792 - 295399

#############################################################################

------------------------------

Date: 9 Aug 88 09:45 EST
From: STERRITT%SDEVAX.decnet@ge-crd.arpa
Subject: Looking for a Cognitive Science Society


Hello,
        I'm posting this request for a friend who doesn't have net access.
Is there any Cognitive Science Society?  If so, what is their address, either
e-mail or snail, and what do they publish, and how often?
        thanks very much,
        chris sterritt
        sterritt%sdevax.decnet@ge-crd.arpa

------------------------------

Date: 9 Aug 88 15:32:21 GMT
From: mcvax!tuvie!tuhold!gfl@uunet.uu.net  (Gerhard Fleischanderl)
Subject: IEEE Conf. on AI Applications (CAIA) 1989 ?

Does anybody out there know whether the

        IEEE Conference on Artificial Intelligence Applications (CAIA)

will be held in 1989?  If there will be a CAIA again in 1989,
what is the deadline for the submission of papers?

The 4th CAIA was held March 14-18, 1988 at the
Sheraton Harbor Island Hotel, San Diego, California, and
was sponsored by 'The Computer Society of the IEEE'.
I couldn't find any announcement or call for papers for a
5th CAIA in the June-1988-issues of both 'Comm.ACM'
and 'IEEE Computer'.

Sorry for this posting if I have missed a message about that.
Thanks in advance,

Gerhard Fleischanderl

e-mail (UUCP):   ...!mcvax!tuvie!tuhold!gfl

------------------------------

Date: 10 Aug 88 15:04:59 GMT
From: mcvax!ukc!reading!onion!cf-cm!cybaswan!eederavi@uunet.uu.net 
      (f.deravi)
Subject: Camera Stabilization


I am looking for information on camera stabilization and sensors for
this purpose suitable for moving vehicles. In particular
could you please answer the following questions:

1) What is the Steadicam Gyro? How can I find one?? How does it work???
2) What is Sorbothane shock mounting?
3) What are accelerometers and rate gyros? Any suppliers??

Any other related information would be greatly appreciated.

                                   - - - - - - - - - - - - - - - - - - - -
Farzin Deravi,                   | UUCP  : ...!ukc!pyr.swan.ac.uk!eederavi|
Image Processing Laboratory,     | JANET : eederavi@uk.ac.swan.pyr        |
Electrical Engineering Dept.,    | voice : +44 792 295583                 |
University of Wales,             | Fax   : +44 792 295532                 |
Swansea, SA2 8PP, U.K.           | Telex : 48358                          |

------------------------------

Date: 13 Aug 88 06:21:03 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Camera Stabilization

In article <49@cybaswan.UUCP> eederavi@cybaswan.UUCP (f.deravi) writes:
>
>I am looking for information on camera stabilization and sensors for
>this purpose suitable for moving vehicles. In particular
>could you please answer the following questions:
>
>1) What is the Steadicam Gyro? How can I find one?? How does it work???
     The Steadicam (TM) is a device used by professional  movie photographers
when it is necessary to photograph with a hand-held camera but high stability
is required.  It consists of one or more gyros on poles, attached to a harness
worn by the cameraman.  It is rather expensive, for no good reason, is patented,
and is not particularly useful for autonomous vehicle operation.

>2) What is Sorbothane shock mounting?
     Sorbothane is an interesting material used for gymnastic mats and such.
It's a soft plastic with some of the properties of a liquid.


        [The only other place I have seen it used is as shock-absorbing
         pads in (somewhat expensive) bicycling gloves.   -- nick]


>3) What are accelerometers and rate gyros? Any suppliers??

     ETAK, Inc. (Menlo Park, CA USA) is coming out with a low-cost two-axis
rate gyro in a few months.  Expensive ones can be obtained from makers of
military and aircraft navigation equipment.  I suggest you read up on
inertial guidance and aircraft autopilots before acquiring anything.

     Panasonic showed a gyroscopically stabilized consumer-grade camcorder
at the Consumer Electronics Show this summer.  It should be available at
Japanese retailers by the end of the year.  This may be a promising approach.

                                        John Nagle

------------------------------

Date: 11 Aug 88 23:11:08 GMT
From: hefley@rand-unix.arpa  (Charlene Hefley)
Subject: simple ES tools for the Sun

I am looking for a very simple Expert System development tool for a Sun 3
running Franz Lisp.  I don't want anything terribly expensive or terribly
complex.  The systems that I write will probably eventually be ported
elsewhere, so I don't plan to anything very exotic that might be difficult
to port.  Public domain stuff is acceptable if it's not *too* buggy.

Does anyone know of something suitable?  You may reply directly to me.

Thanks,
Charlene Hefley
inet:  rand-unix.arpa
uucp:  ...!randvax!hefley

------------------------------

End of AIList Digest
********************

∂13-Aug-88  1834	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #49  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Aug 88  18:34:06 PDT
Date: Sat 13 Aug 1988 19:22-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #49
To: AIList@AI.AI.MIT.EDU


AIList Digest            Sunday, 14 Aug 1988       Volume 8 : Issue 49

 Query Responses:

  How to compile a psychologists' email directory?
  Ornithology as an AI domain
  AISB Proceedings
  Church's Y-operator
  PCES
  Sigmoid transfer function
  Feigenbaum's Citation
  English grammar (open/closed classes)

----------------------------------------------------------------------

Date: 6 Aug 88 21:39:24 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: How to compile a psychologists' email directory?


This is a copy of letter to Bob Morecock, Editor of Psychnet:

Bob,

Here is a thought I had: You could perform a double service, to Psychnet
as well as the psychological community if you systematically put together
a psychologists' email address directory. I heard (perhaps from you) that
APA will be publishing members' email addresses, but that's in hard copy and
a while away. If you could get the addresses in an electronic file/listserver
it would be a great service to the field AND would give Psychnet an automatic
broad subscribership.

I don't know what official policy and rules are on this. I suspect
that they're only now being improvised on the fly. But it seems to me
that a newsletter and email directory are sufficiently non-invasive so
you can probably treat email address information as public-domain --
like (listed) phone numbers. People who want to be "unlisted" could
easily put up software that returned unwanted messages unread, and
they could even distribute passwords to the only ones they want to
hear from; but most psychologists, I suspect, would like to see email
used more widely and imaginatively, at least for the time being. Once
we reach the junk mail threshold we can start putting in safeguards.

A method for compiling such a directory might be this: Besides requesting
APA's cooperation (i.e., asking them to give you all the email lists
they've gotten as they go along) you could send email queries to all the
major universities and research institutions, either requesting their
directories of psychologists' email addresses, if possible, or else requesting
that your appeal be posted on the local electronic bboards and msgs, asking
psychologists to send their email addresses for the directory and newsletter
directly to you. In exchange
you could promise to provide email addresses to those who inquire --
this would not have to be done by you personally, but by software, if
the directory were set up properly.

This is EXACTLY the right time to set up such a psychologists' email
directory; it will get already-emailing psychologists more actively involved
and it will encourage others to get email addresses. You might even be able
to get a grant to help you do this from APA, NSF or NIMH.

What do you think? [You may want to post this on Psychnet to get
readers' reactions, but really the Psychnet readership is still far
too small and unrepresentative, so in talking to ourselves now we are
just preaching to the converted. This also needs to be posted to a much
larger population. I'm going to put it on some of the USENET groups to
see whether there is other information on compiling such a directory,
perhaps from experience in other fields, and also to beat the bushes
to see whether this has already been begun or done by anyone else for
psychology or related fields.]

Stevan
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: 7 Aug 88 00:20:58 GMT
From: sunybcs!dmark@rutgers.edu  (David Mark)
Subject: Re: How to compile a psychologists' email directory?

In article <2721@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>This is a copy of letter to Bob Morecock, Editor of Psychnet:
>
>Bob,
>
>Here is a thought I had: You could perform a double service, to Psychnet
>as well as the psychological community if you systematically put together
>a psychologists' email address directory. I heard (perhaps from you) that
>APA will be publishing members' email addresses, but that's in hard copy and
>a while away. If you could get the addresses in an electronic file/listserver
>it would be a great service to the field AND would give Psychnet an automatic
>broad subscribership.

I have had good success in compiling such a directory for geographers and other
spatial scientists.  Three of us began the project by merging our own lists
about 3 years ago.  Then, we ran a workshop on e-mail at the Association
of American Geographers' national meeting in May 1986.  Periodically, I send
the file to all in the file, asking them to confirm their entries and
suggest colleagues to add.  It just grows and grows.  I did a mass e-mailing
in June to about 240 users, and got about 60-70 new ones back.  I even
have a few spatial psychologists!   We have suggested that a field for
email address be added to the AAG's membership form.

David Mark, Chair, AAG Geographic Information Systems Specialty Group

geodmm@ubvmsc.cc.buffalo.edu
geodmm@ubvms.BITNET

------------------------------

Date: 8 Aug 88 17:01:21 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Ornithology as an AI domain


     Rod Brooks at MIT has been addressing this problem with his "artificial
insects".  He is referring to the size of the brain, though, rather than an
attempt to emulate real insect behavior.  His most advanced "insect" to date is
supposed to wander around the AI lab searching for empty aluminum cans.

     I suspect that the time has come to make more detailed studies of low-level
animal behavior than have usually been made in the past.  It might be useful,
for example, to study grasping behavior in squirrels by videotaping their
activities as they are presented with food made up in specific shapes, and
reducing the videotape data into kinematic models, then trying to find
control equations that produce similar behavior.  Studies of animal locomotion,
from Muybridge to Raibert, have used similar techniques, and the most
recent work has resulted in just such control equations.  Raibert now has
machines that walk, run, and most recently, turn flips.  The state of the
art in grasping is much worse; most of the work is based on very elaborate
computational geometry and still doesn't work too well with complex hands.

     I have a conjecture that animals do grasping by moving the hand into
a relatively standard configuration for the type of grasp and then turning
control over to a feedback process that can be modelled by energetic means
along the lines of Khatib or Witkin.  One could validate or refute a
conjecture of this type with properly analyzed photographic studies.

     Trying to actually build nests or emulate other sorts of low-level
animal manipulative behavior will be very difficult until the simpler
tasks of basic manipulation are achievable routinely under varied conditions.

                                        John Nagle

------------------------------

Date: Tue, 9 Aug 88 14:03:53 +0100
From: Tony Cohn <agc%snow.warwick.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: AISB Proceedings

Further to the posting of the call for papers for AISB89 (due by November
1 1988), I have been asked about the availability of past AISB conference
proceedings.

The first AISB conference was in 1974, with conferences thereafter held
biennially until 1982, which was the first ECAI. AISB conferences restarted
biennially in 1985.

The availability of past proceedings is as follows:

1974 (Sussex): not available.
1976 (Edinburgh): not available.
1978 (Hamburg, joint with GI): small numbers available from AISB office
1980 (Amsterdam): small numbers available from AISB office
1982 (Paris, retrospectively became the first ECAI): small numbers available
        from the AISB office; selected and revised papers available as
        "Progress in Artificial Intelligence, Steels and Campbell (eds),
        Ellis Horwood, 1985".
1985 (Warwick): selected papers published as "Artificial Intelligence and
        its Applications, Cohn and Thomas (eds) Wiley, 1986".
1987 (Edinburgh): proceedings published as "Advances in Artificial
        Intelligence, Hallam and Mellish (eds) Wiley, 1987".

The address of the AISB office is

Judith Dennison, AISB Executive Officer, School of Cognitive Sciences,
        University of Sussex, Brighton, BN1  9QN, UK
        (email: judithd@uk.ac.sussex.cvaxa).

Please contact the AISB office for details of how to join the society.
Benefits currently include the AISB quarterly, AI Communications (European
members only), and reduced entry to AISB events.

AISB is currently considering publication of selected papers from
the unpublished proceedings (1974 to 1980).  If you have any comments on this
project, including suggestions for papers to be included, please contact me.


_______________________________________________________________________________
|UUCP:   ...!ukc!warwick!agc                    | Tony Cohn                   |
|JANET:  agc@uk.ac.warwick.cs                   | Dept. of Computer Science   |
|ARPA:   agc%uk.ac.warwick.cs@nss.cs.ucl.ac.uk  | University of Warwick       |
|BITNET: agc%uk.ac.warwick.cs@UK.AC             | Coventry, CV4 7AL           |
|PHONE:  +44 203 523088/(secretary: 523193)     | ENGLAND                     |

------------------------------

Date: 10 Aug 88 07:38:56 GMT
From: munnari!banana.cs.uq.oz.au!farrell@uunet.UU.NET (Friendless
      Farrell)
Reply-to: farrell%banana.OZ@uunet.UU.NET (Friendless Farrell)
Subject: Church's Y-operator


In a previous article, GODDEN@gmr.COM writes:
>Subject:  Church's Y-operator

  Maybe not Church's Y-operator, but Curry's. I know for certain (since I have
it in front of me) that it is defined on p178 of

        Combinatory Logic Volume 1
        Curry, Feys and Craig
        (QA9.C84 v1 1958 in our library system)

  Y is commonly called the paradoxical or fixpoint combinator. Its important
property is that

        Y f = f (Y f)

Look up the ACM Guide to Computing Literature under combinators, lambda calculus
or possibly applicative languages if you're really interested.
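
As a quick illustration of the fixpoint property Y f = f (Y f), here is a
sketch in Python (not from the posting); since Python is strict, the
eta-expanded "Z" form of the combinator is used:

# Strict languages need the eta-expanded ("Z") form of the fixpoint
# combinator; in a lazy calculus, Y f = f (Y f) can be used directly.
def Z(f):
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" functional whose fixpoint is factorial.
fact_step = lambda rec: (lambda n: 1 if n == 0 else n * rec(n - 1))

fact = Z(fact_step)
print(fact(5))   # prints 120, i.e. Z fact_step behaves as fact_step (Z fact_step)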

                                        Friendless

farrell@banana.cs.uq.oz - mail me if you can !

------------------------------

Date: Thu, 11 Aug 88 11:57:39 GMT
From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
Subject: PCES

Subject: expert systems on PCs
From: Andrew Basden

In July -- parvis@gitpyr.gatech.edu asked for info on the usability of
ESs on PCs.  He said:

>I'm doing research on the usability and feasibility of expert systems on
>personal computers such as the Apple Macintosh and the IBM PC.
>
>There are certainly limitations due to memory size and time efficiency.
>What are typical problems when developing and/or using a PC based expert
>system? What do users (not only developers) think about expert systems
>on PCs? What domain solutions are successfully realized on a PC? Are
>the users satisfied with the features and efficiency or are such
>systems 'just expensive toys'?

We developed ELSIE during 1986-7 as one of the Alvey Community Club projects.
It consists of four ESs, linked into one system via a common database, to
give Quantity Surveyors advice when acting in a Lead Consultant role
(hence its name! - work it out if you don't get it; finding the name took
6 months of intensive research!).  In this role they help clients who
want to build, say, offices, at the initial stages of planning.  At this
stage the client wants to know, among other things, how much the building
will cost, so as to set a budget, how long it will take, what the
development appraisal over the life of the building will be (taking into
account interest, inflation, maintenance, etc.) and how they should go
about organising the building project.

ELSIE has 4 modules to cover these:
    Budget module,
    Time module,
    Development Appraisal module,
    Procurement module.
We started the project in Jan 86, with the aim of creating awareness of
ES technology capability, but in fact found that by June 87 we had
produced four truly usable systems.  These have now been packaged for
sale, and are selling at a rate of over two per week.  Some companies are
coming back for second copies.  Therefore we feel there is evidence that
ELSIE is NOT just an expensive toy, and that users (not just developers)
think it a good thing.

We developed it in Savoir, which is a mature and flexible shell.  Savoir
is better than some because it is designed to be fast, and performs
good checking of the KB, such as for loops.  The Budget module, the
biggest, for instance, has around 2500 rule equivalents.  (Savoir is an
inference net system rather than rule based.)   At this size it is
getting near the limits of Savoir on PCs, but we did not hit any such
limits during the whole project.  We developed on PCs.

This actually gave an advantage during validation, in that we could send
out copies of the ES for testers to run on their own PCs.

The knowledge acquisition and other aspects of building the four modules
were based on a methodology mentioned (in an early version) in Attarwala
and Basden (1985)

References:

Savoir, from ISI Ltd., 11 Oakdene Road, Redhill, Surrey, UK.  ≤1000 on PC

For description of ELSIE, see
Brandon P.S., Basden A., Hamilton I., Stockley J. (1988) 'Expert Systems
- the strategic planning of construction projects', The Royal Institution
of Chartered Surveyors, London, UK.

Attarwala, F.T., Basden. A. (1985) 'A methodology for building Expert
Systems', R&D Management.

Trust this info helps.  I can give more if you ask me specific questions.

Andrew Basden,
I.T. Institute,
University of Salford, UK.

------------------------------

Date: 12 Aug 88 17:48:58 GMT
From: pacbell!eeg!marcus@ames.arpa  (Mark Levin)
Subject: Sigmoid transfer function

> >Try this one :   f(x) = x / (1 + |x|)

I followed up before by claiming that this function is used in physiology
and psychophysics to describe neural properties.  However, the function I
gave showed only one half of the curve and had an obvious
discontinuity, which you may find distasteful.

If the function is graphed with the x-axis in log coordinates the
function becomes your favorite Sigmoid Function.

FOOTNOTE:
I used this in my thesis, which was on the psychophysics of light adaptation.
This is a convenient form for displaying it in the area of adaptation since the
changing of sigma (the constant 1 above) will shift the function along the
axis without changing the shape of the function (change the *threshold*). And
this is what we want adaptation to do.  But, remember that the real function
has the discontinuity at 0 (or Threshold). For those who are interested, a
better model of adaptation is obtained by scaling the inputs to the function
and keeping sigma constant.  This looks the same, but hypothesizes that
pre-processing accounts for adaptation rather than changes in neural
properties.
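
A quick numerical check of the two claims above (editorial, not from the
posting): for x > 0, x/(x + sigma) is exactly the logistic function of ln x
centered at ln sigma, so plotting against a log axis gives the familiar
sigmoid, and changing sigma slides the curve along that axis without changing
its shape.  The function names below are mine:

import math

def half_saturation(x, sigma):
    # f(x) = x / (x + sigma), for x > 0 (sigma plays the role of the
    # constant 1 in the earlier postings).
    return x / (x + sigma)

def logistic(u):
    return 1.0 / (1.0 + math.exp(-u))

# Identity: x/(x+sigma) = 1/(1 + sigma/x) = 1/(1 + exp(-(ln x - ln sigma))),
# i.e. the logistic function of ln x, centered at ln sigma.
sigma = 2.0
for x in [0.01, 0.1, 1.0, 10.0, 100.0]:
    lhs = half_saturation(x, sigma)
    rhs = logistic(math.log(x) - math.log(sigma))
    assert abs(lhs - rhs) < 1e-12
    print("x=%7.2f   f(x)=%.4f" % (x, lhs))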

marcus@eeg.com
Mark Levin RA at the EEG Systems Lab.
1855 Folsom St., San Francisco, CA  94103
{pacbell,lll-winken,ucsfcgl}!eeg!marcus

------------------------------

Date: Fri 12 Aug 88 22:02:23-CDT
From: Charles Petrie <AI.PETRIE@MCC.COM>
Subject: Feigenbaum's Citation

No help for the citation.  But I offer the suggestion that Feigenbaum's
suggestion has been bumped up a level (and more, recursively) by
research into the explicit control of reasoning.  It isn't enough
simply to have a lot of rules executed by a "dumb" interpreter, any more
than it is to have a clever, domain-independent theorem-proving strategy.
Either way you can get pretty far, but you soon bump up against the complexity
barrier.  In the former case, it's because rules interact in
complicated ways that need to be controlled to produce useful behavior.
In the case of theorem proving, the community is awaiting
the explicit representation of mathematicians' expertise rather than
depending upon clever encodings and syntactic search strategies.

"Dumb" interpreters have built-in strategies: for instance, OPS. Users
take advantage of built-in strategies by hiding control strategies in
domain data, e.g., ad hoc control predicates and extra rule antecedents.
Genesereth's, de Kleer's, and others' work in making reasoned control explicit
has the potential to make systems much smarter.  That's where I'd
place bets on rule-based success these days: research in the
representation of reasoned but explicit control knowledge.
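
As a minimal illustration (a toy forward-chainer in Python with made-up
facts, not OPS syntax), this is what hiding control in domain data looks
like: the "phase" facts and antecedents below carry no domain meaning and
exist only to make the dumb interpreter fire the rules in the right order.

    # Working memory and rules are just sets of tuples.
    facts = {("phase", "collect"), ("reading", 42)}

    rules = [
        # (antecedents, consequents); the "phase" entries are pure control.
        ({("phase", "collect"), ("reading", 42)},
         {("validated", 42), ("phase", "interpret")}),
        ({("phase", "interpret"), ("validated", 42)},
         {("diagnosis", "ok")}),
    ]

    changed = True
    while changed:
        changed = False
        for antecedents, consequents in rules:
            if antecedents <= facts and not consequents <= facts:
                facts |= consequents
                changed = True

    print(sorted(facts))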

------------------------------

Date: Sat, 13 Aug 88 00:50:25 -0700
From: mcguire@aerospace.aero.org
Subject: English grammar (open/closed classes)


John B. Nagle <jbn@glacier.stanford.edu> writes:
>     I understand that there is an approach to English grammar based on
>the following assumptions.
>      1.  There are four main categories of words, essentially nouns,
>         verbs, adjectives, and adverbs.  These categories are
>         extensible; new words can be added.
>      2.  There are about 125 "special" words, not in one of the four
>         main categories.  This list is essentially fixed.  (New
>         nouns appear all the time, but new conjunctions and articles
>         never.)
>Does anyone have a reference to this, one that lists all the "special"
>words?

The proper technical term for what I think you are referring to is the
distinction between "open class" vs. "closed class" words.  Certain
classes of words (where a class is defined by its members in some way
behaving the same) contain a finite number of members, while other
classes contain a potentially infinite number.  If you want to construct
a list of all closed-class words in English you might start with the
prepositions, determiners, articles, auxiliary verbs, conjunctions,
numerals, verb features, etc. - though your ultimate list depends upon
how you define your classes, what "behave the same" means, and what
counts as a word.
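
As a rough illustration (a toy Python sketch; the word lists are
deliberately tiny and hypothetical, nowhere near a complete inventory),
one way to operationalize the distinction is to enumerate the closed
classes and treat everything unlisted as open class:

    # Illustrative only: a tiny, incomplete closed-class lexicon.
    CLOSED_CLASS = {
        "preposition": {"of", "in", "on", "to", "with", "from"},
        "determiner/article": {"the", "a", "an", "this", "that", "some"},
        "conjunction": {"and", "or", "but", "because", "although"},
        "auxiliary": {"be", "have", "do", "will", "might", "must"},
        "pronoun": {"i", "you", "he", "she", "it", "we", "they"},
    }

    def classify(word):
        w = word.lower()
        for label, members in CLOSED_CLASS.items():
            if w in members:
                return "closed (" + label + ")"
        return "open (noun/verb/adjective/adverb or new coinage)"

    for w in "John might love the modem".split():
        print(w, "->", classify(w))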

While I'm familiar with this distinction, and think that it may have
been around in linguistics for quite some while (Bernard Bloch maybe?),
I don't remember it being used much.  The only references that spring to
mind are some studies in speech production and slips of the tongue done
in the 70s by Anne Cunningham (she's a Brit, though I'm not sure of her
last name) and maybe Victoria Fromkin, claiming that fewer errors are
associated with closed-class words and that they play some privileged role
in speech_production/syntax/lexical_access/the_architecture_of_the_mind.

I can't think of any explicit influence the "open/closed" distinction has
had on generative grammar.  I feel, however, that implicit awareness of
this distinction has led people to construct and prefer theories where
closed classes correspond to atomic linguistic categories.  Coupled with
the generativist bias on how classes are defined, this preference has
left most current theories analyzing the examples:

   "John loved Mary"
   "John has loved Mary"
   "John might love Mary"
   "John seems to love Mary"

as having practically nothing in common.

------------------------------

End of AIList Digest
********************

∂14-Aug-88  2356	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #50  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Aug 88  23:56:15 PDT
Date: Mon 15 Aug 1988 02:38-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #50
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 15 Aug 1988       Volume 8 : Issue 50

 Philosophy:

  AIList Digest   V8 #46
  The Godless assumption
  Symbolic Processing
  Re: AI and the future of the society
  Re: The Godless assumption
  Can we human being think two different things in parallel?

----------------------------------------------------------------------

Date: Fri, 12 Aug 88 21:41:26 EDT
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: AIList Digest   V8 #46

There is a splendid irony in the whole of AI Digest V8 #46.  Consider the
table of contents:

  Feigenbaum's citation
  Sigmoid transfer function

 There are several remarks on each topic.  The first set discusses a
controversy between "general methods" and methods that use specific
knowledge.  No one mentions that it is not an either/or but an issue
that depends on the nature of the domain - in particular, that in
certain domains it is necessary to make controlled searches and that
knowledge helps but so do good general heuristics.

Next, in #46, we see the discussion of what smoothing functions to use
for making neural nets learn by estimating derivatives and using
hill-climbing.  The irony lies in how that discussion ignores that
very same knowledge/generality issue.  Specifically, hill-climbing is
a weak general method to use when there is little knowledge.  But even
a little knowledge should then make a large difference.  We ought
usually to be able to guess when a solution to an unknown pattern
recognition problem will require a neural net that has large numbers
of connections with small coefficients - or when the answer lies in
more localized solutions with fewer, larger coefficients -
that is, in effect, the problem of finding tricky combinational
circuits.  Let's see more sophisticated arguments and experiments to
see which problem domains benefit from which types of quasilinear
threshold functions, rather than proposing this or that function
without any analysis at all of when it will have an advantage. More
generally, let's see more learning from the past.
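
As a minimal sketch of the weak general method in question (a toy
one-parameter Python example, not any particular network): hill-climbing
on a derivative estimated by finite differences, with no knowledge of the
domain beyond the choice of step size.

    def objective(w):
        return -(w - 3.0) ** 2            # toy objective with its peak at w = 3

    def hill_climb(w, steps=100, lr=0.1, eps=1e-4):
        for _ in range(steps):
            # estimate the derivative numerically and move uphill
            slope = (objective(w + eps) - objective(w - eps)) / (2 * eps)
            w += lr * slope
        return w

    print(hill_climb(0.0))                # converges toward 3.0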

------------------------------

Date: Sat, 13 Aug 88 01:47:07 EDT
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: The Godless assumption


Andrew Basden warns us

> Why should 'religious' not also be 'practical'?  Many people -
> especially ordinary people, not AI researchers - would claim their
> 'religion' is immensely 'practical'.  I suggest the two things are not
> opposed.  It may be that many correspondents *assume* that religion is
> a total falsity or irrelevance, but this assumption has not been
> proved correct, and many people find strong empirical evidence
> otherwise.

Yes, enough to justify what those who "knew" that they were right did
to Bruno, Galileo, Joan, and countless other such victims.  There is
no question that people's beliefs have practical consequences; or did
you mean to assert that, in your philosophical opinion, they simply
may have been perfectly correct?

I hope this won't lead to an endless discussion but, since we have an
expert here on religious belief, I wonder, Andrew, if you could
briefly explain something I never grasped: namely, even if you were
convinced that God wanted you to burn Bruno, why that would lead you
to think that that makes it OK?

------------------------------

Date: Sat, 13 Aug 88 16:41:41 +0300
From: amirben%TAURUS.BITNET@MITVMA.MIT.EDU
Reply-to: <amirben%TAURUS.BITNET@MITVMA.MIT.EDU>
Subject: Symbolic Processing

>
> I once heard an (excellent) talk by a person working with Symbolics.
> (His name is Jim Spoerl.)
>
> One line by him especially remained in my mind:
>
> "What we can do, and animals cannot, is to process symbols.
> (Efficiently.)"
>
  "Symbolic processing" is usually contrasted with numerical or character
  processing, the "common" use of computers.  It has been pointed out that
  it is this area where machines are superior to humans: no man can process
  numbers in the rate of a computer.  However, he can process symbolic
  information much more successfully.
    On the contrary, I see no reason to believe that animals think
  numrically, or represent the scene they see as an array of numbers and
  applying a computation to it decide which way to go...
    So it seem to me that processing symbols (efficiently) is "what we can do,
 and machines cannot" - not animals.
    As for "intelligence" or "thinking" - I think a bird is still superior
 to any computer.


    Amir Ben-Amram

------------------------------

Date: 13 Aug 88 23:11:50 GMT
From: dmocsny@uceng.uc.edu (daniel mocsny)
Subject: Re: AI and the future of the society


In a previous article, John B. Nagle writes:
>
> Definitely, learning is not required.  Horses are
> born with the systems for walking, obstacle avoidance, running, standing up,
> motion vision, foot placement, and small-obstacle jumping fully functional.
>
>                                       John Nagle

If horses are born with these remarkable skills, and no information transfers
from mare to foal across the placenta, then the skills have only one source:
genes. This is quite encouraging, because the genetic code contains a
manageable amount of information (~750 MB for a human, I believe). If the
information content of the brain comes from life experiences, then it could
be inconveniently large. Here we have the machinery for a wonderfully
complex behavior, and the complete logical specification must be sitting
right there on one molecule, waiting for us to decode it. And it could all
fit on a 5.25'' hard disk...
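
The ~750 MB figure is consistent with a back-of-the-envelope check,
assuming roughly 3 billion base pairs stored at 2 bits each and ignoring
compression or the question of how much of the sequence is actually used:

    base_pairs = 3e9              # approximate size of the human genome
    bits = base_pairs * 2         # four possible bases -> 2 bits per base
    megabytes = bits / 8 / 2**20
    print(round(megabytes))       # about 715 MB, in the ballpark of ~750 MB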

Dan Mocsny, u. of cincinnati ** standard disclaimer **

------------------------------

Date: 14 Aug 88 09:04:23 GMT
From: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Reply-to: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Subject: Re: The Godless assumption


In a previous article, IT21@SYSB.SALFORD.AC.UK writes:
: Date: Thu, 11 Aug 88 07:58 EDT
: From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
: To: ailist@AI.AI.MIT.EDU
: Subject:  The Godless assumption
:
: In going through my backlog of AI Mail I found two rather careless
: statements.
:
: In article <445@proxftl.UUCP>, bill@proxftl.UUCP (T. William Wells)
: writes: > that,.... This means
: > that I can test the validity of my definition of free will by
: > normal scientific means and thus takes the problem of free will
: > out of the religious and into the practical.
:
: Why should 'religious' not also be 'practical'?  Many people - especially
: ordinary people, not AI researchers - would claim their 'religion' is
: immensely 'practical'.  I suggest the two things are not opposed.

There was nothing careless about what I said there, nothing at
all.  Whether you like it or not, the religious entails something
which ultimately is outside of reason.  Arguments on religious
topics generate much heat but little light.  These are the
characteristics of debates on free will which I had in mind when
I labeled certain beliefs and discussions about free will as
`religious'.

:                                                                    It may
: be that many correspondents *assume* that religion is a total falsity or
: irrelevance,

Here, however, you have changed the subject; proposing not only
that religion is practical, but that it might be `true'.
However, the religious `true' is antithetical to any rational
`true': religion and reason entail diametrically opposed views of
reality: religion requires the unconstrained and unknowable as
its base; reason requires the constrained and knowable as its
base.

: Since the non-existence/irrelevance of God has not yet been proved, and many
: claim to have strong empirical evidence of God's existence and
: effectiveness in their lives, may I ask that correspondents think more
: carefully before making statements like the two above.

This is utter silliness: religion rejects the ultimate validity
of reason; 700 and more years of attempting to reconcile the
differing metaphysics and epistemology of the two has utterly
failed to accomplish anything other than the gradual destruction
of religion.

Science, though not scientists (unfortunately), rejects the
validity of religion: it requires that reality is in some sense
utterly lawful, and that the unlawful, i.e. god, has no place.

Religious argument, and tolerance for religious argument, has
absolutely no place in scientific discussion, and that includes
AI discussion.  You *may not* ask me to think carefully, if your
"reasons" for doing so are religious.

------------------------------

Date: Sun, 14 Aug 88 15:54:08 CDT
From: ywlee@p.cs.uiuc.edu (Youngwhan Lee)
Subject: Can we human being think two different things in parallel?

Can we human beings think two different things in parallel?  Does anyone know?
One of my friends said that there should be no problem in doing that.  He
said we are trained to think linearly, but, considering the structure of the
brain, we should be able to think things in parallel if we can train ourselves
to do that.  Is he correct?
                     Thanks. ywlee@p.cs.uiuc.edu.

------------------------------

End of AIList Digest
********************

∂15-Aug-88  2048	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #51  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 15 Aug 88  20:48:12 PDT
Date: Mon 15 Aug 1988 23:30-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #51
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 16 Aug 1988      Volume 8 : Issue 51

 Mathematics and Logic:

  Nitpicking about the Axiom of Choice
  Are all Reasoning Systems Inconsistent?
  Ineluctable self reference

----------------------------------------------------------------------

Date: 1 Aug 1988 1819-PDT (Monday)
From: aspnes@decwrl.dec.com (Jim Aspnes)
Subject: Nitpicking about the Axiom of Choice

> From: bwk@mitre-bedford.ARPA (Barry W. Kort)

> In the above scenarios, the resolution is to pick a path at random
> and pursue it first.  To operationalize the decision, one needs to
> implement the Axiom of Choice.  One needs a random number generator.
> Fortunately, it is possible to build one using a Quantum Amplifier.
> (Casting lots will do, if you live in a low-tech society.)

Technically speaking, the Axiom of Choice claims only that choice
functions exist for infinite collections of nonempty sets.  These choice
functions _must_ be deterministic (although what the function maps
each set to is not specified by the axiom), and they always exist for
finite collections with or without the Axiom of Choice, since one can
simply pick an element from each set in turn.

So Quantum Amplifiers would not be terribly useful in the
operationalization of the Axiom of Choice, unless your scenario included
a prior operationalization of a representation of arbitrary
infinite sets.

Jim Aspnes <asp@cs.cmu.edu>

------------------------------

Date: Mon, 8 Aug 88 13:16:12 EDT
From: jon@XN.LL.MIT.EDU (Jonathan Leivent)
Subject: Are all Reasoning Systems Inconsistent?

It seems that I keep making the same mistakes.

Here is a full version of the contradiction that I am claiming exists.  I'm
making this as complete as possible because previous versions of my assertion
have all suffered from my own clumsiness (abstractly, they were on the right
track; formally, they were all wrong).  So here goes:

Definition of functions and predicates:

Q(a,b) : equality of numbers predicate (so we don't get mixed up with =)
         Aa[Q(a,a)], AaAb[Distinct(a,b) -> ~Q(a,b)]

"X" : the Godel number of X

s("X",a) : the Godel number of X with all occurrences of * replaced by the
Godel number of the number a : s("R(*)",14) is "R(14)"

P(a) : the predicate of provability within this reasoning system

I won't go into proofs of the existence of s and P.  Chapter 24 of Doug
Hofstadter's book _Godel, Escher, Bach_ has the best explanations of these I've
seen (Hofstadter's versions of the functions are a bit different from mine -
I've chosen mine for conciseness, Hofstadter probably chose his to simplify
the proofs of their existence).

Theorems:

T1. AaAb[Q(a,b) -> (P(a) = P(b))] ; just says that P behaves normally

T2. Aa[P(s("~P(*)",a)) -> ~P(a)] ; If I can prove that I can't prove X, then I
                                   can't prove X

T3. If X can be proven within this reasoning system, then P("X") is true


Definitions of numbers:

Let F such that Q(F,"~P(s(*,*))")

Let G such that Q(G,s(F,F)) ; this means that G is s("~P(s(*,*))",F), which is
"~P(s(F,F))" :  so Q(G,"~P(G)") and G is our Godel sentence

(remember that F and G are numbers, not variables)


The inconsistency itself:

1. P(G) = P("~P(G)")    ; from T1 and the definition of G
2. P("~P(G)") -> ~P(G)  ; from T2
3. P(G) -> ~P(G)        ; from 1 and 2 by substitution of P(G) for P("~P(G)")
4. ~P(G)                ; from 3 (3 is equivalent to ~P(G) v ~P(G))

5. P("~P(G)")           ; from T3 and the fact that 1 thru 4 proves ~P(G)
6. P(G)                 ; from 1 and 5

4 and 6 contradict.


All that I have done is to show that the existence of a Godel sentence within
a reasoning system equipped to reason about itself is inconsistent.  This
differs from Godel's theorem in that Godel shows that a Godel sentence cannot
be proven within a reasoning system in which it is true.

Perhaps the weak link in the contradiction is step 5, which is somewhat of a
"meta" step.  What bothers me most is that there seems to be no formal way of
writing T3, even though it seems to be obviously true (it is only asserting
the equivalence of P with the implied proof predicate in this reasoning
system, which is true by definition of P).  Note that T3 does not claim that
if X is true, then P("X") is true - it requires X to be proven within this
reasoning system, which is a stronger requirement than truth (because of the
incompleteness of P).

Well, I hope I haven't goofed up here as I did previously.  This finally seems
to be a formal statement of what was on my mind after reading Smullyan's
article "Logicians who Reason about Themselves".

-- Jon Leivent

------------------------------

Date: Mon, 8 Aug 88 14:26:16 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: ineluctable self reference


In AIList Digest for Sunday, 7 Aug 1988 (Volume 8, Issue 39),
Mike Dante <DANTE@EDWARDS-2060.ARPA> writes:

MD>| Bruce Nevin suggests that the solution to the "liar's paradox" lies in
   | the self reference to an incomplete utterance.  How would that analysis
   | apply to the following pair of sentences?

   |                   The next sentence I write will be true.
   |                   The previous sentence is false.

These are both metalanguage sentences.  Each is talking about a
metalanguage sentence (the other).  Part of the construal of the
metalanguage predicate `false' is the inference that the negation of its
argument is true.  Thus, in the metalinguistic baggage that sentence 2
carries with it is the negation of sentence 1.  But since sentence 1
incorporates a reference to sentence 2, sentence 2 incorporates (as part
of the metalanguage baggage required to understand it) a reference to
itself.

In more intuitive terms:  what's wrong here is that somewhere it is
being asserted that sentence 1 is both true and false.  Thus, there must
be two distinct series of readings of sentence 1:  an odd-numbered
series in which it is true in what it asserts of sentence 2, and an
even-numbered series in which it is false because of what sentence 2
asserts.  (The same goes, ceteris paribus, for sentence 2.)  When you
get tired of that loop, you fall back on assuming that the two readings
are but one sentence, even while trying to reconcile the two readings
(which leads into the next two reading-tokens, and so on, each round
`meta-' wrt the prior).

  +---+==>[I affirm that] the next sentence I write will be true.
  | +-|-->[I affirm that] the previous sentence is false [and so I deny that]+
  | | |                                                                      |
  | | +----------------------------------------------(loop 1)----------------+
  | |     [and so I deny that] +
  | |                          |                     (loop 2)
  | +--------------------------+
  |       [and so] +
  |                |                                 (loop 3)
  +----------------+

Added metalanguage material is in square brackets.  The loop is perhaps
clearer (although misrepresented a bit) this way:

     +->I affirm S1
     |  I affirm S2,
     |    and so I deny S1      (loop 1)
     |      and so I deny S2    (loop 2)
     |        and so +
     |               |          (loop 3)
     +---------------+

All of the looping is part of S2, even though it is by reference to both
S1 and S2, because it is a succession of conjunctions to S2 under `and
so' or `therefore' or the like.  And until you have finished construing
all the metalanguage conjuncts that you need to understand the sentence
you haven't finished understanding the sentence, and so it is not
available to you as an object of reference as regards its meaning.  (As
regards spelling, or handwriting, or depth of incision in stone, or
authorship, etc., it is of course available for reference, but that is
reference with different ends, and that is why it does not matter
whether the referential comes at the end or not.)

It is the metalanguage performative predicates of assertion that bring
about self-referentiality.  (You see the need for these in e.g.
`Clearly, John has left' where the adverb cannot apply to the verb
`leave', still less to its argument `John', but can only apply to an
elided `I infer' or the like.  See _A Grammar of English on Mathematical
Principles_ for discussion.)

Again, self reference (where one reading-token of the sentence is
`meta-' with respect to the other) is the syntactic core of these
paradoxes.  And this is why the obvious (but incomplete) translation
into symbolic logic is something like S <=> ~S:  you have to represent
both readings separately to represent the fallacy.

This is not a claim that all inferencing is part of the `metalinguistic
baggage' that language users presume as though overtly conjoined when
construing sentences and texts.  Only that which can be thought of as
obvious common knowledge.  I realize this is a very large can of worms,
but I won't take credit for opening it--the problem of coping with
common knowledge in language understanding and language generation is
hardly a new one.

Barry W. Kort (bwk@mitre-bedford.ARPA) writes:

BK>| While we are having fun with self-referential sentences, perhaps
   | we can have a go at this one:

   |         My advice to you is: Take no advice from me,
   |         including this piece.

   | (At least the self referential part comes at the end, so that
   | the listener has the whole sentence before parsing the deictic
   | phrase, "this piece".)

This is straightforwardly self-referential.  I addressed the confusion
about `having the whole sentence' vs `resolving all the referentials in
the sentence' in my contribution to AIList 8.39.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

End of AIList Digest
********************

∂15-Aug-88  2321	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #52  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 15 Aug 88  23:21:26 PDT
Date: Mon 15 Aug 1988 23:34-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #52
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 16 Aug 1988      Volume 8 : Issue 52


 Does AI Kill?  

     and
 
 Free Will - How to dispose of naive science types

----------------------------------------------------------------------

Date: 9 Aug 88 14:05:30 GMT
From: Wayne Mesard <mesard@BBN.COM>
Subject: Re: does AI kill?


From a previous article, by jlc@goanna.OZ.AU (Jacob L. Cybulski):
> The Iranian airbus disaster teaches us one thing about "AI Techniques",
> and this is that most of the AI companies forget that the end product
> of AI research is just a piece of computer software that needs to be
> treated like one, i.e. it needs to go through a standard software
> life-cycle and proper software engineering principles still apply to
> it no matter how much intelligence is burried in its intestines.
>
> I don't even mention the need to train the system users.

That's right, you don't.  Neither do you mention the need to make the
specs and limitations of the system crystal clear to the customer as
well as to management and the people who will be touting the system and
its capabilities to potential users.

The Aegis was being expected to perform beyond specifications.

As another example:

The Challenger accident is almost universally recognized as the result of a
politically motivated (in both the governmental and organizational
sense) decision to employ a device in an environment for which it was
never designed.  The shuttle's booster joints are not blamed, because
they were indeed performing to specification.

Why do I get the feeling that if the shuttle tragedy had resulted from a
software failure as opposed to a mechanical failure caused by the same
type of managerial errors, the public outcry would be against the
"homicidal software?"

Just because computer software possesses some rudimentary decision
making ability does not mean that it isn't bound by the limitations of
its human designers.

AI doesn't kill people.  Politicians kill people.

> Jacob

--
unsigned *Wayne_Mesard();        MESARD@BBN.COM           BBN, Cambridge, MA

    You're living in a sick world when an album of cover songs by a
    bunch of RAISINS can go platinum.

------------------------------

Date: 9 Aug 88 13:41:56 GMT
From: dswinney@afit-ab.arpa  (David V. Swinney)
Subject: Re: How to dispose of naive science types (short)

In article <6657@well.UUCP> sierch@well.UUCP (Michael Sierchio) writes:
>
>Theories are not for proving!
>
>A theory is a model, a description, an attempt to preserve and describe
>phenomena -- science is not concerned with "proving" or "disproving"
>theories.
>A theory may or may not adequately describe the phenomena in question, in
>which case it is a "good" or "bad" theory
[deletion]
>Demonstration and experimentation show (to one degree or another) the value
>of a particular theory in a particular domain -- but PROOF? bah!
>--

I agree that theories are not for proving.

I would like to add, however, that a theory which does not adequately
describe a phenomenon should be called an "old theory" and not a "bad theory".

To be a theory in the first place, a hypothesis (at one time or another)
must have been shown to adequately describe a phenomenon or we would refer
to it as a discarded hypothesis.

It is becoming fairly common these days for scientists to present hypothetical
work to the public under the title "Theory".
This tends to dilute public trust in science and scientists in general.
We should all be a little more careful in our use of "theory", and more
mindful of the scientific effort required by many parties to achieve that status.

------------------------------

Date: 12 Aug 88 12:37:26 GMT
From: ulysses!gamma!pyuxp!u1100s!castle@bloom-beacon.mit.edu 
      (Deborah Smit)
Subject: Re: How to dispose of naive science types (short)

In article <495@afit-ab.arpa>, dswinney@afit-ab.UUCP writes:
> In article <6657@well.UUCP> sierch@well.UUCP (Michael Sierchio) writes:
> >
> >Theories are not for proving!
> >
> >A theory is a model, a description, an attempt to preserve and describe
> >phenomena -- science is not concerned with "proving" or "disproving"
> >theories.
>
> It is becoming fairly common these days for scientists to present hypothetical
> work to the public under the title "Theory".
> This tends to dilute public trust in science and scientists in general.
> We should all be a little more careful in our use of "theory", and more
> mindful of the scientific effort required by many parties to achieve that status.

Another big mistake is when scientists present hypothetical OR theoretical
work under the title "FACT", e.g., evolution.  The 'theories' of evolution
(of which there are many, many, and conflicting), do not even fit under
the title theory, since they are not demonstrable, and do not fit with
the facts shown by the fossil record (no intermediate forms -- before
you flame, examine current facts, fossils previously believed to be
intermediate have been debunked).  It certainly cannot be called FACT,
though in college courses, some professors insist on speaking of
'the fact of evolution'.  When evolutionists cannot support their
hypothesis by showing agreement with known facts, they resort to
emotional mind-bashing (only foolish, gullible people don't believe
in evolution).  Just my two cents.  I enjoy reasonable theories,
they truly unify what we observe, but I don't appreciate emotional
outbursts on the part of those who can't give up their inaccurate
hypotheses to go on to something better.

                - Deborah Smit

------------------------------

End of AIList Digest
********************

∂16-Aug-88  0157	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #53  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 16 Aug 88  01:56:52 PDT
Date: Mon 15 Aug 1988 23:36-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #53
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 16 Aug 1988      Volume 8 : Issue 53

 Responses:

  Sigmoid transfer function
  Feigenbaum's citation

----------------------------------------------------------------------

Date: 14 Aug 88 04:51:18 GMT
From: hubcap!shorne@gatech.edu  (Scott Horne)
Subject: Re: Sigmoid transfer function

From article <517@eeg.UUCP>, by marcus@eeg.UUCP (Mark Levin):
>> >Try this one :   f(x) = x / (1 + |x|)
>
> If the function is graphed with the x-axis in log coordinates the
> function becomes your favorite Sigmoid Function.

                            /    x    \
Then why not use f(x) = exp | ------- | ?
                            \ 1 + |x| /



                                --Scott Horne

uucp:           ....!gatech!hubcap!scarlett!{hazel,citron,amber}!shorne
                (If that doesn't work, change "scarlett" to "scarle")
                (If *that* doesn't work, send to cchang@hubcap.clemson.edu)
                (If *that* doesn't work, wait until January & write me at Yale)
SnailMail:      Scott Horne, 812 Eleanor Dr., Florence, SC   29501
VoiceNet:       803 667-9848 (home); 803 669-1912 (office)

------------------------------

Date: 14 Aug 88 16:13:15 GMT
From: buengc!bph@bu-cs.bu.edu  (Blair P. Houghton)
Subject: Re: Sigmoid transfer function

In article <2628@hubcap.UUCP> shorne@citron writes:
>From article <517@eeg.UUCP>, by marcus@eeg.UUCP (Mark Levin):
>>> >Try this one :   f(x) = x / (1 + |x|)
>
>                            /    x    \
>Then why not use f(x) = exp | ------- | ?
>                            \ 1 + |x| /

I always thought the real sigmoid function was erf(x), which
arises as the integral of a normal distribution, which is
the all-time fave among scientists for describing natural
occurrences.
It's not much fun to calculate... but then, neither is exp(-x^2/sigma^2).

Ya gotta run into Taylor's theorem somewhere...

It (erf(x)) is available in most math-library packages, though.
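
For a concrete comparison, here is a short sketch (illustrative only) of
three sigmoid-shaped candidates rescaled to the range (-1, 1): the logistic
function, erf from the standard math library, and the x/(1+|x|) form
quoted above.

    import math

    def logistic(x):
        return 2.0 / (1.0 + math.exp(-x)) - 1.0    # logistic, rescaled to (-1, 1)

    def erf_sigmoid(x):
        return math.erf(x)                         # already ranges over (-1, 1)

    def ratio_sigmoid(x):
        return x / (1.0 + abs(x))                  # the form quoted above

    for x in (-4.0, -1.0, 0.0, 1.0, 4.0):
        print(x, logistic(x), erf_sigmoid(x), ratio_sigmoid(x))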

                                --Blair
                                  "MacLaurin and Taylor and
                                   Dirac are sitting in a leaking
                                   balloon over the ocean, discussing
                                   which will be the one to leap
                                   into the sea..."

------------------------------

Date: 13 Aug 88 22:43:25 GMT
From: uccba!uceng!dmocsny@ohio-state.arpa  (daniel mocsny)
Subject: Re: Feigenbaum's citation

In a previous article, David Nelson writes:
> To be replaced by the optimistic hope that dumping lots of examples into a
> dumb neural net will produce something profound :-)
>
> daven (Dave Nelson)
> arpa:  daven @ lll-crg.llnl.gov
> uucp:  ...{seismo,mordor,sun,lll-lcc}!lll-crg!daven

 I for one would be happy to forego the profound to realize the usefully
mundane. I can scribble a diagram in one minute that I need thirty minutes
to write a LaTeX picture description for. If my computer was not so
severely retarded, I might be able to make my book deadline.

Neural nets may emerge as quick-and-dirty translators to aid us in slicing
through the Babel out there (and in here :-) ). Give me a computer that
has some idea of what I am saying, and I will handle the profundities. (Or
I will at least try...)

While we are dreaming, how close are you net.AI types to realizing a device
that can obey commands like this: `Examine all the applied math papers
relating to subjects x, y, ..., z, and find the ones bearing on my
current problem q.' I realize that's pretty vague, but I'm not talking about
a straight database query here. What we scientists and engineers really need
is some rational way for us to post our work so our peers can quickly and
transparently access it. How many times have each of us worked for months
on something, only to find that Dr. So-and-so already published in some
obscure conference proceedings not in our local library?  Or worse, how many
times did we _not_ find out?  If a solution exists, I see it including (1) hardware
advances (terabytes of storage to handle all the scientific literature),
(2) political reform (intellectual property laws need to serve the
scientific community, not the science publishers), (3) standards (before
a computer can get at the technical literature, we need to agree on how
we plan to specify our pages), and (4) of course, the AI (the system must
understand NL queries from techno-types, and retrieve references by
content).

I know I'm probably out to lunch here, but I can't help thinking about
those research $$$ evaporating while I play librarian (and not well, at
that). Not to mention my altogether-too-brief lifespan allotment...

Dan Mocsny, u. of cincinnati *** standard disclaimer ***

------------------------------

Date: 12 Aug 88 08:12:12 GMT
From: kddlab!atr-la!geddis@uunet.uu.net  (Donald F. Geddis)
Subject: Re: Feigenbaum's citation

In a previous article, Earl H. Kinmonth writes:
> As I remember, Feigenbaum achieved notoriety for his (probably
> largely ghosted book) on the Japanese "Fifth Generation
> Project."  Did anything ever come out of the Fifth Generation
> project other than big lecture fees for Feigenbaum to go around
> warning about the Japanese peril?
>
> Is he really a pompous twit (the impression given by the book) or
> is that due to the scatter-brained ghost writer?

This kind of posting really bothers me.  Given the current silence surrounding
results from the Fifth Generation Project, it may be fair to argue that the
"Japanese peril" was overblown.  But it is crucial to remember that things are
always much clearer in hindsight, and we have a great advantage today over
Feigenbaum when the book was written.  (I leave out whether Feigenbaum actually
wrote the book himself, as I have no information on the matter.  But then,
obviously, neither does Earl.)

Even given current results of the Project, it is even more difficult to be
sure that the book's predictions were poor for its time.  Japan was beginning
a massive, well-funded AI effort, and the U.S. lacked (still lacks?) any
similar project.  It is almost always impossible to predict what will happen
to research on the cutting edge of technology, and there is a very real
danger that ignoring such a "threat" can significantly worsen the U.S.'s
position in AI.

And of course, none of the preceding is any excuse for suggesting that
Feigenbaum is a "pompous twit".  Earl, you really ought to show more
maturity than that.  From personal contact (he has given me informal
advice over the last three months), I can say that he has been very helpful
and informative to at least one graduate student peon.  He has a deep interest
in the Japanese, and is firmly convinced that the efforts of Japan are
largely ignored in the United States, much to the future regret of the U.S.
He has given me pages of unsolicited travel advice (I'm spending the summer
in Japan), and offered tours of his research lab at the Knowledge Systems
Lab at Stanford.  At no time have I sensed anything that might be termed
"pompous".

Now while I know that my limited and personal experience tells little about
Feigenbaum's overall character, it seems that unless Earl has very good
backing, comments such as he made are better left unsaid.

    -- Don

(Dis-Disclaimer:  I am a Ph.D. student at Stanford in AI, so my opinions are
biased and probably to be disregarded on this subject.)
--
"You lock the door, and throw away the key
 There's someone in my head, but it's not me."   -- Pink Floyd
Internet: Geddis@Score.Stanford.Edu (which is forwarded to Japan...)
USnail:   P.O. Box 4647, Stanford, CA  94309  USA

------------------------------

Date: 15 Aug 88 23:00:52 GMT
From: cck@deneb.ucdavis.edu  (Earl H. Kinmonth)
Subject: Re: Feigenbaum's citation

In a previous article, Donald F. Geddis writes:
>In a previous article, Earl H. Kinmonth writes:
>> As I remember, Feigenbaum achieved notoriety for his (probably
>> largely ghosted book) on the Japanese "Fifth Generation
>> Project."  Did anything ever come out of the Fifth Generation
>> project other than big lecture fees for Feigenbaum to go around
>> warning about the Japanese peril?
>>
>> Is he really a pompous twit (the impression given by the book) or
>> is that due to the scatter-brained ghost writer?
>
>This kind of posting really bothers me.  Given the current silence surrounding

It should bother you more that a silly book like the Fifth Generation
Project would get taken seriously, testimony to the low level of
knowledge about Japan prevailing in this country.

>results from the Fifth Generation Project, it may be fair to argue that the
>"Japanese peril" was overblown.  But it is crucial to remember that things are

That appears to be putting it mildly. My impression is that the gap
between hype and delivery is such that in almost any other context,
people would be screaming fraud and calling their lawyers.

>always much clearer in hindsight, and we have a great advantage today over
>Feigenbaum when the book was written.  (I leave out whether Feigenbaum actually
>wrote the book himself, as I have no information on the matter.  But then,
>obviously, neither does Earl.)

Not quite; I've read it.  Have you?  I've also heard (second
hand) that Feigenbaum has blamed the hype on McCorduck (his "joint
author").

>Even given current results of the Project, it is even more difficult to be
>sure that the book's predictions were poor for its time.  Japan was beginning
>a massive, well-funded AI effort, and the U.S. lacked (still lacks?) any

Is that bad? The whole course of postwar Japanese economic development
and that of the US prior to World War II shows that there is a great
advantage to letting others do the big ticket, high risk research, and
concentrating instead on commercialization and mass production.

>similar project.  It is almost always impossible to predict what will happen
>to research on the cutting edge of technology, and there is a very real
>danger that ignoring such a "threat" can significantly worsen the U.S.'s
>position in AI.

So what? Name the areas in which the Japanese have been at the cutting
edge of science (science, not technology). Name the areas in which the
US has been at the cutting edge. Compare the size of the lists. Now
compare the growth in real GNP, growth in real income, etc. for the two
countries.

It has been reported that the Fifth Generation project was so oversold
that the gap between the hype for it and the results has soured the
climate for funding other basic research in Japan....

There is also an even greater danger that blowing up a trivial or even
totally non-existent threat will divert scarce resources into
irrelevant or counter-productive areas. The US-USSR arms race provides
many examples of this.

I vaguely remember reading that the Fifth Generation Project (the book)
was in part responsible for the Pentagon asking for a 650 megabuck
boondoggle to meet the Japanese challenge in AI.

>And of course, none of the preceding is any excuse for suggesting that
>Feigenbaum is a "pompous twit".  Earl, you really ought to show more

I agree, but read what I originally wrote. I did not use the hype
concerning the Fifth Generation as the basis for deciding that he was a
"pompous twit," and did not in fact say that. I said that's the way he
comes over in the book! Read the book. I'm willing to bet that "modest"
or "self-effacing" are not the adjectives you would apply to the
personality depicted there.

>maturity than that.  From personal contact (he has given me informal

READ THE BOOK.  He comes over as a "legend in his own mind."  As
I indicated, that may be due to the ghost-writer ("joint
author"), but I find it hard to take anyone seriously who'd allow
their name on such a puff piece unless they're running for public
office....

>advice over the last three months), I can say that he has been very helpful
>and informative to at least one graduate student peon.  He has a deep interest
>in the Japanese, and is firmly convinced that the efforts of Japan are

But little real knowledge, again judging from the book.

>largely ignored in the United States, much to the future regret of the U.S.
>He has given me pages of unsolicited travel advice (I'm spending the summer
>in Japan), and offered tours of his research lab at the Knowledge Systems
>Lab at Stanford.  At no time have I sensed anything that might be termed
>"pompous".

That may well be the case, and if it is, that is precisely what I
asked - specifically, was Feigenbaum in the flesh, the "pompous
twit" presented in the book.  An adequate response in this context
would have been "No."

>Now while I know that my limited and personal experience tells little about
>Feigenbaum's overall character, it seems that unless Earl has very good
>backing, comments such as he made are better left unsaid.

My knowledge is inherently limited to Feigenbaum, the public
personality.  Aside from the book, I've read reports of a couple
of his talks.  Frankly, he comes over like a silicon snake oil
salesman.  I get the same feeling from his pronouncements that I
get from each Pentagon report on Soviet military superiority: it
may be true, but I'd much rather hear the message from someone
who does not stand to gain from drum beating.

PS:

If you'd like some unartificial intelligence about Japan, I may
be able to help.  I've lived there a total of six years including
three years as a graduate researcher (modern Japanese social and
economic history) at the University of Tokyo.

>    -- Don
>
>(Dis-Disclaimer:  I am a Ph.D. student at Stanford in AI, so my opinions are
>biased and probably to be disregarded on this subject.)
>--

I don't know about Stanford, but my graduate school experience
was such that any faculty person who even acknowledged the
existence of students appeared kind and loving....

>Internet: Geddis@Score.Stanford.Edu (which is forwarded to Japan...)
>USnail:   P.O. Box 4647, Stanford, CA  94309  USA

E H. Kinmonth Hist. Dept.  Univ. of Ca., Davis Davis, Ca. 95616
916-752-1636/0776

Internet:  ehkinmonth@ucdavis.edu
           cck@deneb.ucdavis.edu
BITNET:    ehkinmonth@ucdavis
UUCP:      {ucbvax, lll-crg}!ucdavis!ehkinmonth
           {ucbvax, lll-crg}!ucdavis!deneb!cck

------------------------------

End of AIList Digest
********************

∂17-Aug-88  2344	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #54  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 17 Aug 88  23:44:16 PDT
Date: Thu 18 Aug 1988 02:13-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #54
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 18 Aug 1988      Volume 8 : Issue 54

 Philosophy:

  Human symbol processing
  Navigation and symbol manipulation
  Can we human being think two different things in parallel?
  The Godless assumption

----------------------------------------------------------------------

Date: Sun, 14 Aug 88 13:57:17 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: human symbol processing

In AIList Digest V8 #47, jbn@glacier.stanford.edu (John B. Nagle)
writes:

>Antti Ylikoski (YLIKOSKI@FINFUN.BITNET) writes:
>>I once heard an (excellent) talk by a person working with Symbolics.
>>(His name is Jim Spoerl.)
>>
>>One line by him especially remained in my mind:
>>
>> ...
>>
>>A very typical example of the human real-time symbol processing is
>>what happens when a person drives a car.  Sensory input is analyzed
>>and symbols are formed of it: a traffic sign; a car driving in the
>>same direction and passing; the speed being 50 mph.  There is some
>>theory building going on: that black car is in the fast lane and
>>drives, I guess, some 10 mph faster than me, therefore I think it's
>>going to pass me after about half a minute.  To a certain extent, the
>>driver's behaviour is rule-based: there is for example a rule saying
>>that whenever you see a red traffic light in front of you you have to
>>stop the car.  (I remember someone said in AIList some time ago that
>>rule-based systems are "synthetic", not similar to human information
>>processing.  I disagree.)
>
>      As someone who works on automatic driving and robot navigation,
>I have to question this.  One notable fact is that animals are quite
>good at running around without bumping into things.  Horses are capable
>of running with the herd over rough terrain within hours of birth.
>("Horses of the Camargue" has some beautiful pictures of this.)  This
>leads one to suspect that the primary mechanisms are not based on
>symbols or rules.  Definitely, learning is not required.  Horses are
>born with the systems for walking, obstacle avoidance, running, standing up,
>motion vision, foot placement, and small-obstacle jumping fully functional.
>
> ...
>
>In real-world situations, as faced by robots, the processing necessary
>to put the sensory data into a form where rule-based approaches can even
>begin to operate is formidable, and in most non-trivial cases is beyond
>the state of the art.
>
> ...
>
>Personally, I suspect that horse-level performance in navigation and
>squirrel-level performance in manipulation can be achieved without any
>component of the system using mathematical logic.

I won't disagree.

I expressed my thoughts badly; my point was that driving a car is a
different activity from a horse running over uneven terrain, and it
requires symbol processing, even if animals can navigate without logic.

Driving a car could perhaps be described as a combination of already
existing geometric etc. reasoning capabilities and learned symbol
processing skills.

--- Andy Ylikoski



------------------------------

Date: Mon, 15 Aug 88 07:08:54 PDT
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: navigation and symbol manipulation

John Nagle offered the following observations:

>      I would encourage people moving into the AI field to work in the
> vision, spatial, and geometric domains.  There are many problems that
> need to be solved, and enough computational power is becoming available
> to address them.  Much of the impetus for the past concentration on highly
> abstract domains came from the need to find problems that could be
> addressed with modest computational resources.  This is much less of a
> problem today.  We are beginning to have adequate tools.
>
>      Personally, I suspect that horse-level performance in navigation
> and squirrel-level performance in manipulation can be achieved without
> any component of the system using mathematical logic.

It is also worth noting that Chapter 8 of Gerald Edelman's NEURAL DARWINISM
includes a fascinating discussion of the possible role of interaction between
sensory and motor systems.  I think it is fair to say that Edelman shares
Nagle's somewhat jaundiced view of mathematical logic, and his alternative
analysis of the problem makes for very interesting, and probably profitable,
reading.

------------------------------

Date: Mon, 15 Aug 88 09:26:06 PDT
From: norman%ics@ucsd.edu (Donald A Norman-UCSD Cog Sci Dept)
Reply-to: danorman@ucsd.edu
Subject: Can we human being think two different things in parallel?


The question is: Can we human beings think two different things in parallel?
The answer is, it all depends on what you mean.  This is one of my
research areas, so let me try an answer.

There is a vast literature in psychology on the topic of simultaneous
activity (the area is called the field of "attention.")  But the
question is ill-formed, for to answer it requires the definition of
three terms, none of which are well defined:
        think
          What do you mean by "think"?  Any mental activity?
          Well, clearly we can normally walk and talk at the same
          time, but if I am walking over a slippery, dangerous
          mountain peak, I can't: I have to stop talking.  And I can
          listen to you and watch television.  And I can shadow text
          coming in one ear (an old, once-favorite experimental method)
          while doing visual tasks at the same time, but not while doing
          verbal tasks.
        thing
          What constitutes separate things?  If the things are on
          closely related topics, are they separate?  If I do mental
          multiplication, is keeping track of the carries (a working
          memory task) a separate thing from doing the table lookup
          for the products, or from telling you what I am doing?

          Or if I am thinking about tomorrow's dinner, is that
          different from navigating my car through heavy traffic
          (both require decision making, memory, and planning)?
        "in parallel."
          There are lots of ways of doing things in parallel,
          depending upon your definition: rapid time switching
          (round-robin time sharing) or independent processing
          circuits.  Not clear which the human does -- probably both.
          But there is probably interaction among the things done in
          parallel, so that for many combinations of tasks, although
          they are indeed done at the same time, at least one is done
          more slowly, less efficiently, or with more errors than were
          it being done alone-- so how does this qualify in answer to
          the question.

          Note that connectionist circuits so far only do one task at
          a time, and serially (that is, they can settle into only one
          meaningful state at any one time), although they do that
          task in a highly parallel fashion.  This, again, shows the
          difficulty of interpreting the question.

So, the question is ill formed and maybe unanswerable.  There are,
however, clear and unmistakable limits on how much a person can do
consciously at any one time.  But if the skill is highly practiced,
it becomes "automated" and can then evidently be done at the same time
as other things, with either no degradation in either task, or only
small degradation.

If I do 2 things at once, and one is degraded as a result, is this 2
things in parallel?

don norman
     (Schedule: I will be away Aug 17 - Sep 2 -- mostly at the
     International Congress of Psychology in Sydney, Australia.)
Donald A. Norman
Department of Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
INTERNET: danorman@ucsd.edu     INTERNET: norman@ics.ucsd.edu
BITNET:   danorman@ucsd.bitnet  UNIX:{decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
     (If you reply directly to me, please include your postal
     mail address and all possible e-mail addresses.  I often
     can't answer people because their e-mail paths fail.)

------------------------------

Date: 15 Aug 88 11:53 PDT
From: hayes.pa@Xerox.COM
Subject: Re: think two different things in parallel

YWLee writes

>Can we human being think two different things in parallel?..
>One of my friends said that there should be no problem
> in doing that.

It all depends on what your friend meant by `think'.  In one sense, we are
thinking lots of things in parallel all the time.  For example, visual
processing is going on (when your eyes are open) while you are choosing a form
of words to express what you want to communicate, and something in your head is
listening as well, because if you hear a tiger roar behind you, you will move
really fast.  Even quite simple skills seem to require whole lots of parallel
mental activity.  You probably didn't mean that, though: you meant something more
like the intuitive sense of think.  I don't think we have any clear account of
what that amounts to in terms of cognitive machinery.
But in any case, you wouldn't get the answer to the question by considering the
structure of brains.  The brain is clearly a highly parallel machine, but that
doesn't entail anything about the structure of conscious thought.
Pat Hayes

------------------------------

Date: Mon 15 Aug 88 14:20:03-PDT
From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
Subject: Re: The Godless assumption.

     I am more than a little surprised by Marvin Minsky's ad hominem attack on
Andrew Basden.  Would it be equally fair to turn the argument around and
replace "religion" with "science"?   For example, would Dr. Minsky feel
that his support for science can be fairly attacked by saying:

>Yes, enough to justify what those who "knew" that they were right did
>to the Kulaks in the name of "Scientific" socialism, or the atrocities
>carried out by Nazi "scientists" on concentration camp victims.  There is
>no question that people's beliefs have practical consequences; or did
>you mean to assert that, in your philosophical opinion, they simply
>may have been perfectly correct?

    I would hope, that on second thought, Dr. Minsky might agree that Andrew
Basden is no more responsible for burning Bruno than Marvin Minsky is for
experimenting on Jews.  And even more, that neither religion nor science has
any justification for being self righteous.   Both science and religion have
been used to justify atrocities.  I don't see that as any excuse for being a
Luddite in either field.

------------------------------

Date: Wed, 17 Aug 88 09:24 CDT
From: T. Michael O'Leary <HI.OLeary@MCC.COM>
Subject: Assumptions


     >Science, though not scientists (unfortunately), rejects the
     >validity of religion: it requires that reality is in some sense
     >utterly lawful, and that the unlawful, i.e. god, has no place.

To me this requirement is unnecessarily strict.  Science does not
require that reality be utterly lawful, but merely that it be possible
for scientists to observe patterns in nature.

When asked (apparently by Napoleon) where God fit into his equations,
Laplace is said to have replied, "I have no need of that hypothesis."
To my way of thinking, if he had been confronted by the assertion of Mr.
Wells shown above, his reply should have been the same.

        Michael O'Leary

------------------------------

End of AIList Digest
********************

∂18-Aug-88  2202	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #55  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Aug 88  22:02:13 PDT
Date: Fri 19 Aug 1988 00:38-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #55
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 19 Aug 1988       Volume 8 : Issue 55

 Today's Topics:

  Spang Robinson Report on Supercomputing
  Computer Bridge

----------------------------------------------------------------------

Date: Wed, 17 Aug 88 10:44:30 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: bm957

Spang Robinson Report on Supercomputing and Parallel Processing
June 1988, Volume 2, No. 6

The lead article is on high-performance networking:

Scientific Computer Systems announced a 1.4 gigabit-per-second token
net.  HYPERchannel-DX and Canstar's Super-Network are 100 megabits
per second, and Ultra Computer is rumored to have something competitive
with Scientific Computer Systems.
______________________________________________________________________________
The next article is on "Network Computing Using Linda".

Sandia showed that 14 VAX processors were twice as powerful as Sandia's
Cray-1 on a rocket plume analysis.  The VAXen were more than a thousand
miles apart; Linda coordinates the processes.  Similar results
were achieved comparing a Cray-1 with eleven VAXen networked
on a semiconductor application.  However, a
thermal analysis achieved only six percent of Cray performance.
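Linda-style coordination works through a shared tuple space: processes deposit
tuples with out() and withdraw matching tuples with in(), instead of addressing
one another directly.  The fragment below is only a rough single-machine sketch
of that idea in Python; the class and the "task"/"result" tags are invented for
illustration and are not taken from the actual Linda system used in the Sandia
work.

    # Toy tuple space in the spirit of Linda's out()/in(); illustration only.
    import threading

    class TupleSpace:
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, *tup):                 # deposit a tuple into the space
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def take(self, tag):                 # in(): remove a tuple whose first field matches
            with self._cond:
                while True:
                    for t in self._tuples:
                        if t[0] == tag:
                            self._tuples.remove(t)
                            return t
                    self._cond.wait()

    def worker(space):
        while True:
            _, x = space.take("task")
            if x is None:                    # stop signal
                return
            space.out("result", x, x * x)    # stand-in for real work

    if __name__ == "__main__":
        space = TupleSpace()
        threads = [threading.Thread(target=worker, args=(space,)) for _ in range(4)]
        for t in threads:
            t.start()
        for x in range(10):
            space.out("task", x)
        answers = sorted(space.take("result")[1:] for _ in range(10))
        for t in threads:
            space.out("task", None)          # one stop signal per worker
        for t in threads:
            t.join()
        print(answers)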

((((((((((((((((((((((((((((((((((((((((

Active Memory Technology has announced a 1024-processor system in
a twenty-five-inch-tall enclosure.  The system is based on single-board
chips.  The DAP510 costs $120,000.

++++++++++++++++++++++++++++++++++++++++
Review of the Kartashev Boston Supercomputing Conference.

It had 3,500 registrants and featured Gene Amdahl, Alan Perlis,
Erich Bloch and Marvin Minsky.

****************************************
SHORTS:

Network Systems Corporation had a 78 percent increase in hardware
sales, 49 percent more revenue and a 250 percent increase in profit
over the previous year.

Encore had its third consecutive profitable quarter, with revenues
of $9 million and profits of $649,000.  It has installed 170
systems.  Berita Information Systems will be installing systems
in Malaysia, with a system already at the New Straits Times Press.

Stellar Computer announced it has had $48 million invested since
inception.

Cray had a drop in revenues to $145 million this first quarter from
$214 million in the first quarter of last year.  Cray did
get $37 million in two orders: an X-MP/48 for Bettis Atomic
Power Lab, and a Cray-2 ordered by the Korean Advanced Institute
of Science and Technology.

Convex's first-quarter revenues jumped to $22.1 million, double those of
a year ago.  Net income was $1.1 million.

Multiflow delivered 14 TRACE systems this quarter with 33 sold total.
It will be concentrating on computational chemistry.

Concurrent Computer Corporation's profits increased to $10.9 million
from $4.7 million a year earlier, with revenues going from $201.8 million
to $179.2 million.

Cray Research announced the X-MP Extended Architecture.
It will run both the X-MP and Y-MP instruction sets and allows up to
four times as much memory.  DEC has software that will allow
development of Cray software under VMS.

Cydrome has expanded its Cydra 5 supercomputer system.  The model 1205 is
priced at $495,000.  It now achieves 14.5 megaflops on the LINPACK benchmark
and 4.5 megaflops on the Livermore Loops.

Parasoft announced a version of dbx for parallel systems.  It runs on
NCUBE and Transputer systems.

Ardent introduced packaged systems for computational chemistry.
The systems are BIOGRAF, which handles proteins, nucleic acids, lipids
and carbohydrates, and POLYGRAF, which is designed for polymer and
materials chemistry and handles amorphous or crystalline solids.

William H. Wallace left Convex to go to Colorado Springs startup
Prisma Corporation.

------------------------------

Date: 10 Aug 88 17:50:14 GMT
From: tness7!tness1!nuchat!moray!uhnix1!ceick.cs.uh.edu!ceick@bellcore
      .bellcore.com  (C. F. Eick)
Subject: COMPUTER BRIDGE


                              COMPUTER BRIDGE  ??


Bridge is one of the few games that are known all over the world, as
confirmed by the fact that usually about 50 nations participate in
Bridge world championships.  The American Contract Bridge League (ACBL)
has about 200,000 members.

Computer science research has always been attracted to simulating human
capabilities in computers.  Writing computer programs that play "good"
Bridge is a challenging research project, because:

*  Bridge requires strategic planning of a high degree of complexity:
   a "good" Bridge player uses complex criteria to select, from a set of
   applicable plans, the plan that is most likely to succeed in the current
   situation; sometimes, dynamic events force the player to refine his plan
   (for example, if he didn't get a trick he was hoping for).  That is, the
   game requires planning in uncertain environments.
   Furthermore, two independent players have to cooperate in defense
   and bidding; that is, Bridge requires multi-agent planning.
*  In Bridge, as in all other card games, the distribution of the cards
   is unknown.  However, during the bidding and play additional clues become
   available for locating which cards are held by which opponent.  That is, a
   good Bridge program has to be capable of drawing inferences and making
   guesses based on vague and uncertain knowledge that evolves with time; that
   is, it requires reasoning capabilities involving fuzzy knowledge.
*  There is a large amount of expert knowledge on how to play Bridge.
   Fortunately, this knowledge can be accessed easily:  it is described
   in about 1000 Bridge books and an even larger number of Bridge magazines.
   To represent and organize this huge amount of knowledge in such a way that
   it can be easily processed, changed, retrieved and refined by computers is
   a very challenging knowledge engineering task.
*  Bridge knowledge is usually represented in the form of rules.  Therefore,
   a rule-based programming style seems very attractive for automating
   the game.
*  Brute-force algorithms -- very popular in computer chess -- do not seem
   to be suitable for card games, because of the large number of potential
   hands (which additionally occur with different probabilities).
   Therefore, different approaches have to be chosen, especially for the early
   stages of the game, when the distribution of the cards is still uncertain.

In summary, research on automating Bridge has to address the following topics,
which are of major interest for Artificial Intelligence research:
reasoning and planning under uncertainty, multi-agent planning, rule-based
programming paradigms, and efficient search algorithms for incompletely
specified problems.
                            QUESTIONS:

Who is interested in the topic?  What can we do to increase the popularity of
Computer Bridge?  Who is currently developing programs for Computer Bridge?
Is Computer Bridge a good paradigmatic example application for AI research?
Will computers be successful in Bridge?

              Development of Bridge Programs at the University of Houston

We have been working on Bridge programs since 1986.  Currently, 6 students are
involved in the development of Bridge programs.  A first prototype of a Bridge
bidding program, Cougar, was finished in March 1988.  The program uses (more
or less) Kantar's version of the Standard American bidding system.  The program
is complete: it also includes defensive bidding, competitive bidding, and
cue-bidding in slam tries.  The program uses a rule-based approach.  Rule sets
are selected to make the appropriate bid in a given context.  Writing a Bridge
bidding program that can compete at club level is quite a challenging task.
Some figures will illustrate this point: the program is written in LISP and
consists of about 19000 lines of symbolic code.  About 9000 lines are required
by the 900 Bridge bidding rules currently used by the program; the
interpretation of partner's and opponents' bids, hand-evaluation functions and
I/O require about 8000 lines; finally, the rule-based inference
engine requires 1000 lines of code.  These numbers might look frightening;
however, if we were to rewrite the program -- in my opinion -- fewer than
12000 lines of symbolic code would be required.  Furthermore, the project is a
good test for knowledge engineering techniques and rule-based programming.
Currently, the Cougar program already has about 50% more rules than the
MYCIN system.
The program's offensive bidding part works quite well.  It has already been
tested extensively and outperforms the existing commercial Bridge programs.
The program's component for defensive and competitive bidding
still has a lot of bugs.  Because of the size of the program and the large
number of combinations, it is very time-consuming to fix these bugs.  The
program's hand-evaluation component works reasonably well, but many
improvements can be made.  A weakness of the current program is its inability
to make inferences from partner's and opponents' bids (e.g., if I have 16 HCP
and each opponent has at least 10 HCP, then my partner has at most 4 HCP).
Since the beginning of 1988 we have also started to automate Bridge play.  We
are using a rule-based approach for defensive play and a planning approach for
declarer play.  We hope to finish a first prototype of a complete Bridge
playing program by Spring 1989.
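To make the rule-based approach concrete, here is a minimal sketch of how a
rule set can select a bid for a given context.  The hand representation, the
rules and their priorities are invented for illustration only; they are not
taken from Cougar or from Kantar's system.

    # Minimal sketch of rule-based bid selection (illustrative only).
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Hand:
        hcp: int                     # high-card points
        suits: Dict[str, int]        # suit lengths, e.g. {"S": 5, "H": 3, "D": 3, "C": 2}

    @dataclass
    class Rule:
        name: str
        applies: Callable[[Hand, List[str]], bool]   # condition on hand + auction so far
        bid: str

    # Rules are examined in priority order; the first applicable rule fires.
    RULES = [
        Rule("strong 2C opening", lambda h, a: not a and h.hcp >= 22, "2C"),
        Rule("1NT opening",       lambda h, a: not a and 15 <= h.hcp <= 17, "1NT"),
        Rule("open 1 of a major", lambda h, a: not a and h.hcp >= 12 and
             max(h.suits["S"], h.suits["H"]) >= 5, "1 of a major"),
        Rule("pass",              lambda h, a: True, "PASS"),    # default rule
    ]

    def choose_bid(hand: Hand, auction: List[str]) -> str:
        for rule in RULES:
            if rule.applies(hand, auction):
                return rule.bid
        return "PASS"

    # Example: a balanced 16-count with no prior bidding is opened 1NT.
    print(choose_bid(Hand(16, {"S": 4, "H": 3, "D": 3, "C": 3}), []))

A real bidding program needs rule sets keyed to the bidding context (opening,
response, competitive auction, and so on) rather than one flat list, which is
where most of the knowledge engineering effort goes.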

To my knowledge, a number of students at US universities are
writing programs automating special aspects of Bridge.  However, most of these
students are working in relative isolation.  In my experience, at least 3 to 4
students are needed to develop a Bridge program that plays and bids at a
moderate level.  In order to be successful, such a project needs support from
the corresponding Computer Science departments.

In Summer 1989 there will be a Computer Games Olympiad in London.  Furthermore,
an IEEE Computer Bridge Contest for students will very likely be scheduled
for the second half of 1989.

If you have any questions concerning our projects, or suggestions for
the future development of Computer Bridge, please let me know!

                                           Christoph F. Eick
                                           Assistant Professor
                                           Department of Computer Science
                                           University of Houston
                                           email: ceick@ceick.cs.uh.edu

------------------------------

Date: 14 Aug 88 09:45:41 GMT
From: csli!rustcat@labrea.stanford.edu  (Vallury Prabhakar)
Subject: Re: COMPUTER BRIDGE

In article <834@uhnix1.uh.edu> ceick@ceick.cs.uh.edu (C. F. Eick) writes:
#
#                               COMPUTER BRIDGE  ??
#
[...Overview of bridge and related material deleted...]

I recall having played a few games of computer bridge at Dartmouth College
during 1986-87.  A few things that I remember about this program are:

1) It was written in BASIC for a Honeywell machine which had an operating
   system called DCTS.

2) It could take up to 4 players, meaning that even 1 player could
   play if so desired, in which case the other team would be handled
   by the computer.

3) The multi-player capability was handled by using an inter-terminal
   communication program.  I don't know what or how.

4) The user interface was not outstanding, but it did the job quite
   well.  There were some neat features like auto-playing, obvious
   moves, and maybe even a post-game analysis (I'm not sure).

5) The program did not seem terribly intelligent in playing, but did
   a reasonably good job when playing with an on-and-off not-too-good
   player like me.

6) There was some talk during that time about porting it over to a Unix
   machine.  I left soon after that, so I don't have an update.


Hopefully this will help in some way.  Perhaps someone currently at
Dartmouth and familiar with this program could provide more information.

Enjoy.

                                                -- Vallury Prabhakar
                                                -- rustcat@csli.stanford.edu

------------------------------

End of AIList Digest
********************

∂19-Aug-88  0037	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #56  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Aug 88  00:36:44 PDT
Date: Fri 19 Aug 1988 00:43-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #56
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 19 Aug 1988       Volume 8 : Issue 56

 Announcements:

  Language and Language Acquisition Conference
  Preliminary Program of ISIIS'88 (abstract) (In Japanese/Kanji)
  Call for Panels for IJCAI-89
  AAAI-88 workshop on AI and Music.

----------------------------------------------------------------------

Date: Fri, 12 Aug 88 13:59:40 GMT
From: Francis LOWENTHAL <PLOWEN%BMSUEM11.BITNET@MITVMA.MIT.EDU>
Subject: Language and Language Acquisition Conference


 ANNOUNCING A CONFERENCE : LANGUAGE AND LANGUAGE ACQUISITION 4
 =============================================================

     This will be an interdisciplinary seminar.


Dear colleague,

               I have the pleasure of inviting you to the fourth
conference we are organizing on Language and Language Acquisition
at the University of Mons, Belgium.

        The specific theme of this conference will be :
      "LANGUAGE DEVELOPMENT  AND  COGNITIVE DEVELOPMENT"


Date : From August 22 to August 27, 1988
Place : Mons University.

        The aim of this meeting is to further an interdisciplinary
and international collaboration among researchers connected in one
way or another with the field of communication and its subjacent
logic: this includes studies concerning normal children as well as
handicapped subjects.

        Five topics have been chosen: Mathematics, Philosophy,
Logic and Computer Sciences, Psycholinguistics, Psychology and
Medical Sciences.  During the conference, each morning will be
devoted to two 45-minute lectures on one of these domains, and
to a wide discussion concerning all the papers already presented.
The afternoon will be devoted to short presentations by
panelists and to further discussions concerning the panel and
everything that preceded it.

        There will be no parallel sessions and, as the organizers
want to favour as much as possible discussion between the
participants, it has been decided to limit the number of
participants to 70.  The selection procedure will be supervised by
an international committee.

        Further information and registration forms can be
obtained by old-fashioned mail or by e-mail from:

                F. LOWENTHAL
                Universite de l'Etat a Mons
                Laboratoire N.V.C.D.
                Place du Parc, 20
                B-7000   MONS (Belgium)
                tel : (32)65.37.37.41
                TELEX  57764 - UEMONS B
                bitnet : PLOWEN@BMSUEM11

         Please feel free to communicate this call for papers
to other potentially interested researchers.


                             F. LOWENTHAL

------------------------------

Date: 17 Aug 88 07:29:20 GMT
From: kddlab!icot32!nttlab!gama!etlcom!kato@uunet.uu.net  (Toshikazu
      Kato)
Subject: Preliminary Program of ISIIS'88 (abstract) (In
         Japanese/Kanji)


                      Preliminary Program
               Second International Symposium on
               Interoperable Information Systems
                            ISIIS'88

                 Nov.10 (Thu.), 11 (Fri.) 1988
              Science Museum (Kagaku Gijutsu Kan),
                    Chiyoda-ku, Tokyo, Japan

                   Organized and sponsored by:
          Interoperability Technology Association for
             Information Processing, Japan (INTAP)
                With the support of (tentative):
          Ministry of International Trade and Industry
                      In cooperation with:
         Information Processing Society of Japan (IPSJ)
           The Institute of Electronics, Information
          and Communication Engineers of Japan (IEICE)
           Association for Computing Machinery (ACM)
                IEEE Computer Society (IEEE/CS)
       Japan Electronic Industry Development Association
        Japan Information Processing Development Center
                 Japanese Standards Association


[1] SCOPE OF THE SYMPOSIUM

This is the second ISIIS international symposium, and follows the
initial  event   of  ISIIS'87.   The  symposium  will   focus  on
interoperability  technology  for information  processing.  There
will be  technical sessions for presentation  of selected papers,
including  reports on  the  national  R&D program  "Interoperable
Database Systems."  On the symposium site,  OSI-based information
network systems  will also  be demonstrated at  the Interoperable
Networking Event (INE'88).

The goal of the symposium is  to explore both the theoretical and
practical aspects of interoperable information systems.

[2] SCHEDULE

Nov. 10 (Thu.)
 9:00  Registration
 9:30  Opening Session
       * Symposium Chairperson
       * Guest Speaker
10:40  (Coffee Break)
11:10  Session 1A:                        Session 1B:
       General Session                    ASN.1
12:30  (Lunch)
13:50  Session 2A:                        Session 2B:
       Implementation                     Formal Description Techniques
15:30  (Coffee Break)
16:00  Session 3A:                        Session 3B:
       Gateway and Network Architecture   Multimedia Database Architecture
17:50
18:30  (Reception at Takebashi Hall)

Nov. 11 (Fri.)
 9:30  Session 4A:                        Session 4B:
       Multimedia Database Systems        Conformance Testing (1)
10:30  (Coffee Break)
11:00  Session 5A:                        Session 5B:
       Conformance Testing (2)            Distributed Database Systems
12:20  (Lunch)
13:40  Session 6:
       Protocol Verification
15:10  (Coffee Break)
15:40  Session 7: Conformance Testing Service
17:30

[3] REGISTRATION

Registration Fee:
Fee [Yen]:     Before Oct. 14       After  Oct. 15
Regular:      20,000  (26,000)     25,000  (31,000)
Member**:     15,000  (21,000)     20,000  (26,000)
Student:       8,000  (14,000)     10,000  (16,000)

        * () includes the Reception Fee.
        ** The member rate applies to members of IEEE, ACM and INTAP.

For more information, please contact:
        ISIIS Secretariat
        Shin-ichiro Yokomizo,
        INTAP
        Sumitomo Gaien Bldg., 24-Daikyo-cho,
        Shinjuku-ku, Tokyo 160, Japan
        Phone: +81 3 358 2721
        Facsimile: +81 3 358 4753
        E-mail: isiis%etl.jp@relay.cs.net

        * Please REPLY to this news article, and your message (e-mail)
        will be delivered to the mailbox of the ISIIS secretariat.

[4] RELATED EVENT: INE'88

As a part  of the interim evaluation activities  for the national
R&D  program  "Interoperable  Database  Systems,"  more  than  10
leading  companies  in  information technology  are  planning  to
demonstrate the OSI-based information network system.

The demonstration will be held Nov. 8-11, 1988 at the symposium site.

--
Toshikazu KATO
Information Systems Section, Electrotechnical Laboratory, Japan
JUNET(domestic): kato@etl.junet
CSNET(over-sea): kato%etl.jp@relay.cs.net


------------------------------

Date: Thu, 18 Aug 88 09:15:45 EDT
From: schmolze%cs.tufts.edu@RELAY.CS.NET
Subject: Call for Panels for IJCAI-89

The IJCAI committee requests the submission of proposals for panel sessions
to be presented at IJCAI-89.  A panel session allows from three to five
people to present their views and/or results on a common theme, issue or
question.  The panel topic must be both relevant and interesting to the AI
community.  The panel members must have substantive experience with the
topic.  However, the members need not be members of the AI community.
Preference will be given to panels that demonstrate broad, preferably
international, participation.

A panel topic must be specified clearly and narrowly so it can be adequately
addressed in a single session.  Panel sessions run for 75 minutes.  The
format usually consists of an introduction by the chairperson with the
purpose of providing the audience with a background for the ensuing
discussion.  The panel members, including possibly the chairperson, then
present their views and/or results, followed by interchange between the
participants and, finally, by interchange between the panelists and the
audience.  Preferably, the session ends with an overview by the chairperson.

Panels may primarily serve to present information on a specific topic, such
as recent important results or the status of important projects.  Panels may
focus on alternative approaches or views to a common question, where
panelists present their approaches or views and the results they produced.
Also, panels may be critical, where some members present an approach or view
and other members criticize them, allowing time for rebuttals.

REQUIREMENTS FOR SUBMISSION

A proposal consists of a cover page, an overall summary and a summary of each
member's presentation.

The cover page should contain the following.

  o At the top of the first page, write "PANEL PROPOSAL".

  o Title of panel: The length should be similar to the lengths of titles of
    papers.

  o Chairperson: Name, affiliation, phone number, postal mailing address and
    electronic mailing address.  Please give phone number and address for
    correspondence from the United States.

  o Members: Names, affiliations, phone numbers, postal mailing addresses and
    electronic mailing addresses.  Please give phone numbers and addresses for
    correspondence from the United States.

The overall summary should be brief, giving a clear description of the panel
topic such that members of the general AI community can understand and
appreciate it.  It should explain how the members' presentations will be
integrated.  In addition, it should address the following questions.

  o What is the relevance and/or significance of the panel, including both
    the topic and the members?

  o What is the general AI interest in the topic?  Please give evidence, such
    as recent important papers, workshops, etc.

  o How does the panel membership demonstrate broad, preferably
    international, participation?  If it does not, why is narrow
    participation preferable?

  o If your topic has been discussed by another panel in a recent national or
    international AI conference, how will your panel differ from it?

The overall summary should be from 500 to 1000 words in length.

The final part of the proposal should be a brief summary of each member's
presentation.  This includes the chairperson if she or he will give a
presentation.  Each such summary should give a clear description of the
member's view or approach, summarize results if appropriate, and demonstrate
the connections to the panel topic.  Where appropriate, each summary should
support the arguments given in the overall summary.  These summaries,
including the overall summary, should be coordinated such that the panel
proposal is a sensible whole and not a loosely coupled collection of parts.
Each member's summary should be approximately 500 words.

Please submit six (6) copies of the proposal (cover page, overall summary and
member summaries) no later than December 12, 1988 to:

        IJCAI 89
        c/o AAAI
        445 Burgess Drive
        Menlo Park, CA 94025-3496  USA

Chairpersons for proposals will be notified of the final decisions by March
27, 1989.  The proposals selected for presentation will be published in the
proceedings.  Chairpersons and members of these panels will be allowed to
submit extended versions of their summaries.  Revised versions will be due by
April 27, 1989.

------------------------------

Date: 18 Aug 88 17:20:05 GMT
From: leah!albanycs!mira@csd1.milw.wisc.edu  (Prof. Mira Balaban)
Subject: AAAI-88 workshop on AI and Music.


                     FIRST WORKSHOP ON AI AND MUSIC
                                AAAI-88
                            August 24, 1988
                        Radisson St. Paul Hotel
                              Senate Suite
                          St. Paul, Minnesota


                              PROGRAM

 8:30 -  9:00    :  O.E. Laske
                    Observations on Formalizing Musical Knowledge
                    (invited talk).
 9:10 - 10:30   :  Expert Systems.  Chair: K. Ebcioglu.
        9:10: An Expert System for Music Perception
              J.A. Jones, D.L. Scarborough, B.O. Miller
        9:30: An Expert System for Harmonic Analysis of Tonal Music
              H.J. Maxwell
        9:50: Learning Machines & Tonal Composition
              S. Schwanauer
       10:10: A Cybernetic Composer Overview
              C. Ames, M. Domino

10:40 - 12:00  :  Tutoring, Languages.  Chair:  B. Vercoe
       10:40: An Architecture of an Intelligent Tutoring System for
                 Musical Structure and Interpretation
              M. Baker
       11:00: A Model for Developing a Tutoring System in Music
              B.J. Fugere, R. Tremblay, L. Geleyn
       11:20: Music: The Universal Language
              D. Cope
       11:40: Motivations, Sources, and Initial Design Ideas for CALM:
                 A Composition (Analysis/Synthesis) Language for Music
              E.B. Blevis, M.A. Jenkins

12:00 -  1:00  :  LUNCH

 1:00 -  2:00  :  Cognitive Models, Knowledge Representation.
                  Chair: B. Mont-Reynaud
        1:00: Modelling and Generating Music Using Multiple Viewpoints
              D. Conklin, J. Cleary
        1:20: Issues of Representation in the Analysis of Atonal Music
              J. Roeder
        1:40: A Problem Reduction Approach to Automated Composition
              S.C. Marsella, C.F. Schmidt, J.L. Bresina

 2:10 -  3:10  :  Networks, Parallelism.  Chair: M. Leman
        2:10: Sequential (Musical) Information Processing with PDP-
                 Networks
              M. Leman
        2:30: Neural Net Modeling of Music
              J.J. Bharucha
        2:50: Hearing Polyphonic Music with the Connection Machine
              B. Vercoe

 3:20 -  4:40  :  Perception, Philosophy, Music & AI.  Chair: M. Balaban
        3:20: The Cross Fertilization Relationship Between Music and AI
                 (Based On Experience with the CSM Project).
              M. Balaban
        3:40: Computer Realization of Cognitive Models of Human
                 Perception of Music
              L. Albright
        4:00: On Hearing Music Visually
              B. Mont-Reynaud
        4:20: Myhill's Thesis: There's More to Musical Cognition
                 than Computing
              P. Kugel

 4:50 -  5:10  :  Farewell.

 Coffee will be available in between sessions.

 Accepted abstracts whose authors were unable to
 present the papers at the workshop:

 Jay Tobias: Knowledge Representation in the
 Harmony Intelligent Tutoring System


 A. Camurri and R. Zaccaria: An Experimental Approach to a
 Hybrid Representation of Musical Knowledge

------------------------------

End of AIList Digest
********************

∂19-Aug-88  0315	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #57  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Aug 88  03:14:56 PDT
Date: Fri 19 Aug 1988 00:54-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #57
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 19 Aug 1988       Volume 8 : Issue 57

 Queries:

  Maths Newsgroups; Lenat's AM
  Merrion Inc.'s TOP-ONE
  Hidden Markov models
  expert systems
  LISP compiler for the CRAY?
  AI in Engineering

----------------------------------------------------------------------

Date: 15-AUG-1988 22:48:06 GMT
From: POPX%VAX.OXFORD.AC.UK@MITVMA.MIT.EDU


                      MATHS NEWSGROUPS; LENAT'S AM.


This request  is partly  stimulated by  Paul Benjamin's  category theory
broadcast:


I'd like to know of any newsgroups whose discussions include:

(1) Applications of topology, category  theory, chaos theory etc. to AI.
With the exception of logic, and  some neural net, vision, and low-level
speech work,  I've seen  very little written  on applying  such methods:
a great shame, since there are some powerful tools out there.

(2) Discussions about mathematics generally.

(3)   Theory  and   implementation   of   programming  languages   (e.g.
denotational semantics; compilation of object-oriented languages; models
of parallelism).


I   suspect   there  are   some   Usenet   newsgroups  covering   these.
Unfortunately, I  can't access them, because  our University's computing
service won't  pay the entry-to-Britain  gateway charges imposed  by the
University  of Kent.  Perhaps some  are  available in  digested form  by
another route? (If not, I suppose I could offer my services as digester,
provided that  contributions can be switched  out of Usenet  and through
the UCL or Earn gateways...)


Also,  does anyone  know  of  any work  on  rational reconstructions  of
Lenat's concept discovery  program AM, or on a formal  theory of what it
does?  (i.e.  a  formalisation  of notions  like  the  distance  between
concepts, the "worth" of a concept, etc).


Thanks in advance for any  information. I'll re-send anything of general
interest back to this newsgroup.

------------------------------

Date: 16 Aug 88 20:21:04 GMT
From: dsacg1!ntm1169@tut.cis.ohio-state.edu (Mott Given)
Subject: Merrion Inc.'s TOP-ONE


 I would like to find the telephone number and address of a company called
 Merrion Inc. that sells an expert system building tool called TOP-ONE.
 Also, I would like comments about TOP-ONE on the pros & cons of using it to
 build an expert system application.
--
Mott Given @ Defense Logistics Agency ,DSAC-TMP, P.O. Box 1605,
            Systems Automation Center, Columbus, OH 43216-5002
UUCP:        {cbosgd,gould,cbatt!osu-cis}!dsacg1!mgiven
Phone:       614-238-9431

------------------------------

Date: 17 Aug 88 03:31:16 GMT
From: att!alberta!calgary!radford@bloom-beacon.mit.edu  (Radford Neal)
Subject: Hidden Markov models

I've been playing around with applying hidden Markov models to
data compression.  I've read some work by S. E. Levinson on
applications of hidden Markov models to speech recognition, and
tried out the apparently famous Baum-Welch algorithm.  I'd be
interested in hearing anything about:

    - Applications outside speech recognition.
    - Algorithms for finding models with many states (say >200).
    - Generalizations to two or more dimensions.
    - Ways of reducing models to equivalent models with fewer states.
    - Anything else you think is amusing.
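For readers unfamiliar with the model class, the forward pass of a discrete
hidden Markov model can be sketched as follows; the transition matrix A,
emission matrix B and initial distribution pi below are toy values, not taken
from any real application.  Baum-Welch re-estimates exactly these parameters
from such forward (and backward) probabilities.

    # Illustrative sketch of the forward pass for a discrete HMM (toy values).
    import numpy as np

    A = np.array([[0.7, 0.3],      # transition probabilities A[i, j] = P(state j | state i)
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],      # emission probabilities B[i, k] = P(symbol k | state i)
                  [0.2, 0.8]])
    pi = np.array([0.5, 0.5])      # initial state distribution

    def forward(obs):
        """Return P(obs | model), summing over all hidden state paths."""
        alpha = pi * B[:, obs[0]]               # alpha_1(i) = pi_i * b_i(o_1)
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]       # alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(o_t)
        return alpha.sum()

    print(forward([0, 1, 1, 0]))                # likelihood of a short observation sequence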

Thanks,

    Radford Neal

------------------------------

Date: 17 Aug 88 16:49:00 GMT
From: uxe.cso.uiuc.edu!gupta@uxc.cso.uiuc.edu
Subject: expert systems


I'm getting interested in Expert Systems.  Could someone recommend any
good books or articles (recent ones) to read?

Thanks


---
Rohit Gupta               Internet:   gupta%uxe.cso.uiuc.edu@uxc.cso.uiuc.edu
P. O. Box 2828 - Sta A    UUCP: uunet!uiucuxc!uxe!gupta
Champaign, IL 61820       Bitnet: gupta@vmd.cso.uiuc.edu

"The University of Illinois is in Champaign-Urbana?!? No wonder I couldn't
find it, I thought you said Shampoo-Banana..."

------------------------------

Date: Wed, 17 Aug 88 16:03:33 CDT
From: Phelps%csvax.cs.ukans.edu@RELAY.CS.NET
Subject: LISP compiler for the CRAY?

Does anyone know of a LISP compiler for the CRAY machine?

Send responses to me at the above e-mail address, or
by mail to:

       Jim Phelps
       US Sprint
       9350 Metcalf Ave.
       Overland Park, KS 66212

or call at 913-967-2542

Thank You

------------------------------

Date: Thu, 18 Aug 88 15:54:12 EDT
From: <sriram@ATHENA.MIT.EDU>
Subject: AI in Engineering

I am just trying to get a feel from the AIList readers for whether there
is a need for an International Society for AI in Engineering.  This
society would cater to application-oriented AI researchers in
engineering.  If there is enough interest out there, suggestions for
possible organizational strategies would be welcome, e.g., should it
be affiliated with IEEE or AAAI, be independent, etc.

sriram@athena.mit.edu

------------------------------

End of AIList Digest
********************

∂19-Aug-88  0550	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #58  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Aug 88  05:49:53 PDT
Date: Fri 19 Aug 1988 01:11-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #58
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 19 Aug 1988       Volume 8 : Issue 58

 Query Responses:

  A public-domain computer chess program
  Camera Stabilization
  Looking for a Cognitive Science Society
  Garden Design and Plant diagnosis
  Sigmoid transfer function
  Feigenbaum's citation

----------------------------------------------------------------------

Date: Sun, 14 Aug 88 11:45:47 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: a public-domain computer chess program


In AIList Digest V8 #31, Rohit Gupta
<uxe.cso.uiuc.edu!gupta@uxc.cso.uiuc.edu> writes:

>I will be starting my Master's this fall and am fascinated by Artificial
>Intelligence - especially in computer chess.

>Does anyone know of any good info (books, papers, authors, professors,
>articles, research projects) on this subject?

The magazine Creative Computing published a large (several thousand
lines long) chess program written in Pascal and running on a large Cyber
computer, I think at the end of the 70's or the beginning of the 80's.
I recall the article and the program were written by one of the famous
computer chess people, possibly Hans Berliner.

--- Andy

Disclaimer: the writer of this entry likes to give the impression of
being more intelligent than he is, is known to have written an AIList
entry after having several beers, and is even a member of Mensa.



------------------------------

Date: 15 Aug 88 14:27:00 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu  (Marty Hall)
Subject: Camera Stabilization

In article <49@cybaswan.UUCP> eederavi@cybaswan.UUCP (f.deravi) writes:
>I am looking for information on camera stabilization and sensors for
>this purpose suitable for moving vehicles.

  The Robotics group at AAI Corp here in Baltimore builds gyro-stabilized
gimballed mounts for various cameras and sensors.  My understanding
is that there are aluminium and carbon composite versions, and that they are
suitable for either ground or air vehicle use.
  You can contact Steve Moody for more info:
        Mr. Steve Moody
        Robotic Systems Operations
        AAI Corporation
        PO Box 126
        Hunt Valley, MD 21030  USA
        (301) 628-3189

------------------------------

Date: 15 Aug 88 15:42:58 GMT
From: trwrb!ries@bloom-beacon.mit.edu  (Marc Ries)
Subject: Camera Stabilization

[...]
>
>     Panasonic showed a gyroscopically stablized consumer-grade camcorder
>at the Consumer Electronics Show this summer.  It should be available at
>Japanese retailers by the end of the year.  This may be a promising approach.
>

  I believe that either VIDEO or V.  Review has a usage report on
  the new Panasonic/Mitsubishi stabilized camcorder this month.

  Negatives: Price ($2400), weight (9+ pounds) and format (VHS only)

  Pluses: It works.

--
                Marc A. ries@trwrb.TRW.COM

                sdcrdcf!---\
                ihnp4!------\----- trwrb! --- ries

------------------------------

Date: Mon, 15 Aug 88 09:33:19 PDT
From: norman%ics@ucsd.edu (Donald A Norman-UCSD Cog Sci Dept)
Reply-to: danorman@ucsd.edu
Subject: Looking for a Cognitive Science Society


Of course there is a Cognitive Science Society -- ten years old this
year.  Just go to the library and look for Cognitive Science, the
journal -- published 4 times a year.  This will also have the name and
address of the society.  And, yes, it has an annual convention, in
Montreal this year -- right about now, in fact: Aug 17, 18, and 19.

The current secretary/treasurer is Kurt VanLehn
Kurt VanLehn
Department of Psychology
Carnegie-Mellon University
Pittsburgh, PA 15213
vanlehn@a.psy.cmu.edu

But his term is now over and a new person will be selected at the
Montreal meeting.

don norman

Donald A. Norman
Department of Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
INTERNET: danorman@ucsd.edu     INTERNET: norman@ics.ucsd.edu

------------------------------

Date: Mon 15 Aug 88 11:51:28-PDT
From: Leslie DeGroff <DEGROFF@INTELLICORP.COM>
Subject: Garden Design and Plant diagnosis

Agricultural and plant diagnostic systems reply

For those working on or exploring garden design systems (a while ago)
or plant diagnostic systems, I would suggest two things.
1.  Just to start, expand your search for information to include DATABASES.
A number of things have been done with database tools that can in effect
provide a ready-made knowledge base.  One potential contact (a starting point
on a completely different intellectual network) would be
Dr. Bashem,
College of Agriculture, Colorado State University,
Ft. Collins, CO.
He and others at CSU have built an ornamentals (flowers and trees, but not
food plants) database of several thousand plants and about 40 fields,
including botanic and common names, growth habits and cultivation
requirements.  They had fields for common disease and insect problems, but
these are sparsely filled.  This database was originally built in RIM (a
mainframe database tool from Boeing) and was being ported to RBASE (tm) in
1986.  RBASE is available on a variety of micro platforms and has a (sold
separately) program interface library.
Also going back to 1986, I saw a very limited (almost toy) commercial
equivalent with 700+ plants and <20 fields, with access software, being
sold by Ortho (I think).  It was cheap, on the order of $40.  No programmatic
interface.
2.  Having reoriented toward databases, you may want to pursue the
search by contacting (in the US) the Cooperative Extension Service,
the Agricultural Research Service and the state agricultural colleges.  If
you have a particular crop in mind, it is likely that someplace in the US
there is a researcher who has spent his life on it.  The ARS and Cooperative
Extension are government agencies and are chartered to conduct research,
collect information, and provide this information to farmers and others
needing it (a major goal is dissemination of working solutions to producers
and consumers).  I would suggest treating this as an information network: if
the individuals you talk to don't have the information you want, ask if they
can point you to someone else.  For the actual knowledge to go into a plant
diagnostics system, you almost certainly will be back to the research and
researchers involved, but if you search for databases you may find a massive
amount of work already done.

It is also true that some of those databases are being used like
limited expert systems; with the ornamentals database at CSU, one or two
hours of training could teach Horticultural Design students how to set up a
query to create a list of plants for a desired situation.

I suspect that there are several ES projects underway within the
ARS, Extension and agricultural college systems.  Without having
any specific pointers, I would suggest contacting departments at
Purdue and Texas A&M; they have been active in related areas, and Texas was
working on an impressive demonstration farm sensor and automation project
in 1984.
  Relevant departments would include:
Horticulture (landscape, flowers and vegetables)
Agricultural Engineering (often building sensors and monitoring systems
for other research groups in college)
Agronomy (field crops like corn, wheat, cotton)
Plant Pathology
Entomology
Leslie DeGroff  (DeGroff@Intellicorp.Arpa)


(Agriculture is the root of civilization)

------------------------------

Date: 16 Aug 88 06:05:00 GMT
From: a.cs.uiuc.edu!uicslsv!bharat@uxc.cso.uiuc.edu
Subject: Sigmoid transfer function


One candidate is erf(x), defined here as the Gaussian cumulative
distribution function.

Eqn. A:

    erf(X) = integral from -inf to X of  exp(-t*t/2) / sqrt(2 pi)  dt

In some references the erf is defined as the integral from 0 to X
rather than from -inf to X.

However, if you do not wish to store a table of values of erf,
numerical methods can be employed to compute the erf to a desired
precision.  The following series may be used.

Eqn. B:

    integral from 0 to Y of  exp(-t*t/2) / sqrt(2 pi)  dt  =  erf(Y) - 1/2

        =  (1 / sqrt(2 pi)) * sum over i >= 0 of

               ((-1)**i) * Y**(2i+1) / ((2**i) * i! * (2i+1))

The degree of precision that is required determines the number of
terms of the series that are needed to ensure convergence.  If less
than 2.5% error is sufficiently precise, it can be assumed that
erf(x) is approximately 1.0 for deviations from the mean (mu) greater
than twice the standard deviation (sigma); in that case the first 4
terms of Eqn. B are sufficient to give an accurate result.  If better
than 1/2% accuracy is required, then at least the first 7 terms of the
series must be considered, and you can assume that erf(x) is
approximately 1.0 for x - mu >= 3 sigma, and approximately 0.0 for
x - mu <= -3 sigma.  These approximations can then be incorporated
into the computation to facilitate speedy calculation.

(For the above formulas, Y = (x - mu)/sigma; equivalently, assume a
distribution with mu = 0 and sigma = 1.)
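A small sketch of this computation, using the truncated series and the
saturation cutoffs described above (written in Python purely for
illustration):

    # Sigmoid transfer function based on the Gaussian CDF, computed from the
    # truncated series above, with saturation beyond 3 sigma.
    import math

    def gaussian_cdf(x, mu=0.0, sigma=1.0, terms=7):
        y = (x - mu) / sigma
        if y >= 3.0:                 # saturate in the tails, as suggested above
            return 1.0
        if y <= -3.0:
            return 0.0
        s = 0.0
        for i in range(terms):       # truncated alternating series (Eqn. B)
            s += (-1)**i * y**(2*i + 1) / (2**i * math.factorial(i) * (2*i + 1))
        return 0.5 + s / math.sqrt(2.0 * math.pi)

    print(gaussian_cdf(0.0), gaussian_cdf(1.0), gaussian_cdf(-2.0))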

-Bharat
R.Bharat Rao
bharat%uicsl@uxc.cso.uiuc.edu
bharat@uicsl.csl.uiuc.edu

------------------------------

Date: 16 Aug 88 23:48:19 GMT
From: beowulf!pluto@sdcsvax.ucsd.edu  (Mark E. P. Plutowski)
Subject: Feigenbaum's citation

It is interesting that since Japan has been quiet about
their progress (upon the Fifth Generation Project) it is
assumed that they have therefore progressed very little.
Now, I might assume this about American-based companies,
especially publicly owned ones.  But is this true in Japan?

Does anyone know the facts here?


[Aside:  when i read Feigenbaum's book when it came out,
just a few years earlier Japanese products were the butt of
jokes.  Now, American products are.

(as reported in one of the business trade journals about
the increasing number of Americans working for Japanese
managers. according to the article, Japanese managers consider
Americans "lazy and untrustworthy.")]

Don't flame me, I bought an American car.  But, isn't their
track record good enough of late to take their even
most ambitious plans seriously?

----------------------------------------------------------------------
Mark Plutowski
Department of Computer Science, C-024
University of California, San Diego
La Jolla, California 92093
INTERNET: pluto%cs@ucsd.edu     pluto@beowulf.ucsd.edu
BITNET:   pluto@ucsd.bitnet
UNIX:     {...}!sdcsvax!pluto

------------------------------

Date: 17 Aug 88 02:58:34 GMT
From: cck@deneb.ucdavis.edu  (Earl H. Kinmonth)
Subject: Re: Feigenbaum's citation

In article <5226@sdcsvax.UCSD.EDU>
pluto@beowulf.UUCP (Mark E. P. Plutowski) writes:

>It is interesting that since Japan has been quiet about
>their progress (upon the Fifth Generation Project) it is
>assumed that they have therefore progressed very little.

They haven't been silent.  They publish annual and other reports.
I've gone through several with Japanese engineering friends
looking for content.  There wasn't much.  Playing it close to the
vest is NOT Japanese style for show-piece projects like this.  If
they had something, they'd be crowing.

>Now, I might assume this about American-based companies,
>especially publicly owned ones.  But is this true in Japan?

See above.  Note that the fifth generation Project is not a
company in the conventional sense.

>Does anyone know the facts here?

>[Aside:  when i read Feigenbaum's book when it came out,
>just a few years earlier Japanese products were the butt of
>jokes.  Now, American products are.

You must have been living in a very rural area.  Feigenbaum's
book was published in 1983.  The Japanese reputation for quality
was well-established by the mid-1960s in general, and earlier for
products such as watches and cameras.  I would say that the
Japanese reputation for quality was generally established two
decades before Feigenbaum published, except possibly for real
redneck areas of this country....

>(as reported in one of the business trade journals about
>the increasing number of Americans working for Japanese
>managers. according to the article, Japanese managers consider
>Americans "lazy and untrustworthy.")]

Public opinion polls in Japan show the Japanese think rather
highly of themselves.  A more accurate generalization would be
that a good percentage of the Japanese consider all non-Japanese
lazy and untrustworthy....

>Don't flame me, I bought an American car.  But, isn't their

Sympathy, yes.  Flames, no.

>track record good enough of late to take their even
>most ambitious plans seriously?

No.  Japan has its share of hucksters, con artists, research
projects to which Proxmire would give his Golden Fleece Award,
and failures.  Just because certain aspects of the economy are
doing exceptionally well should not lead to a "halo effect" that
blinds observers and causes them to abandon all serious
analysis.  To do so would be to apply to Japan the same
uncritical approach Americans have tended to take with respect to
this country, especially in the 1950s and early 1960s.

"Ambitious plans" in Japan should be examined just as critically
as "ambitious plans" in the US.  More bucks, more bull is a rule
that has equal applicability in both cultures.  The history of
American writing on Japan (something I've taught as a course) has
shown one constant: wild exaggeration, whether the stereotype was
negative or positive.

------------------------------

End of AIList Digest
********************

∂19-Aug-88  2121	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #59  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Aug 88  21:21:30 PDT
Date: Sat 20 Aug 1988 00:00-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #59
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 20 Aug 1988      Volume 8 : Issue 59

 Free Will

  How to dispose of the free will issue
  Evolution
  How to dispose of naive science types (fact vs. theory)

----------------------------------------------------------------------

Date: 15 Aug 88 21:30:38 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net  (Jeff Dalton)
Subject: Re: How to dispose of the free will issue

In article <421@afit-ab.arpa> dswinney@icc.UUCP (David V. Swinney) writes:
>The "free-will" theorists hold that are choices are only partially
>deterministic and partially random.

No they don't, or at least not all of them.  Having choices randomly
determined isn't better or more free than having them deterministically
determined.  In fact, it's probably worse, since the result will be
chaotic.

------------------------------

Date: Tue, 16 Aug 88 11:17 MST
From: "James J. Lippard" <Lippard@BCO-MULTICS.ARPA>
Reply-to: Lippard@BCO-MULTICS.ARPA
Subject: Evolution (was Re: How to dispose of naive science types)

>Date: 12 Aug 88 12:37:26 GMT
>From: ulysses!gamma!pyuxp!u1100s!castle@bloom-beacon.mit.edu
>      (Deborah Smit)

>Another big mistake is when scientists present hypothetical OR theoretical
>work under the title "FACT".  E.G. Evolution.  The 'theories' of evolution
>(of which there are many, many, and conflicting), do not even fit under
>the title theory, since they are not demonstrable, and do not fit with
>the facts shown by the fossil record (no intermediate forms -- before
>you flame, examine current facts, fossils previously believed to be
>intermediate have been debunked).  It certainly cannot be called FACT,
>though in college courses, some professors insist on speaking of
>'the fact of evolution'.  When evolutionists cannot support their
>hypothesis by showing aggreement with known facts, they resort to
>emotional mind-bashing (only foolish, gullible people don't believe
>in evolution).  Just my two cents.  I enjoy reasonable theories,
>they truly unify what we observe, but I don't appreciate emotional
>outbursts on the part of those who can't give up their inaccurate
>hypotheses to go on to something better.

There are many erroneous statements in the above (such as the claim
that the fossil record shows that there are "no intermediate forms").
This is not the list for it, so I suggest the discussion on this
subject be moved to the Creation/Evolution list (mail to
rpjday@VIOLET.WATERLOO.EDU).  I will just say here that "evolution"
is an ambiguous term which refers to a fact (descent with modification),
a number of theories (e.g., gradualism, punctuated equilibria), and
a biological paradigm.  Those who talk about the "fact of evolution"
are not necessarily speaking falsely.

 Jim Lippard
 Lippard at BCO-MULTICS.ARPA

------------------------------

Date: 16 Aug 88 23:49:56 GMT
From: att!alberta!calgary!radford@bloom-beacon.mit.edu  (Radford Neal)
Subject: Re: How to dispose of naive science types (fact vs. theory)

In article <388@u1100s.UUCP>, castle@u1100s.UUCP (Deborah Smit) writes:

> Another big mistake is when scientists present hypothetical OR theoretical
> work under the title "FACT".  E.G. Evolution.

I won't get into a discussion of the specifics of evolution, which would
probably be endless, but I would like to point out why biologists
sometimes refer to the "fact" of evolution and the "theory" of natural
selection.

The distinction is between physical reality - the change of form in species
over time, and explanations of that phenomenon, such as natural selection.

Never mind whether you accept that evolution is indeed a fact.  There is
a real distinction here from the biologist's point of view.  The "fact"
of evolution could well be established by a non-biologist - say a physicist
who invents a time-viewing machine.  The explanation of the phenomenon
requires a real biological theory.

     Radford Neal

------------------------------

Date: Wed, 17 Aug 88  12:56:03 PDT
From: Dennis de Champeaux <ddc%hplddc@hplabs.HP.COM>
Subject: Re: AIList Digest   V8 #52

A reply to the contribution of:

        Date: 12 Aug 88 12:37:26 GMT
        From: ulysses!gamma!pyuxp!u1100s!castle@bloom-beacon.mit.edu
              (Deborah Smit)
        Subject: Re: How to dispose of naive science types (short)


Evolution remains a vulnerable notion.  Deborah Smit reminds us that the
principle labeled evolution is not a FACT (capitalization hers).  It is
not even a theory, she adds, because the evolution 'theories' "... are
not demonstrable, and do not fit with the facts shown by the fossil record
..."

This quotation is contradictory, because after first denying that evolution
can be demonstrated - I take it she means falsifiable here - she
subsequently gives evolution the honorary status of being a false theory.

For me, evolution is a principle, a suggestion of how to do research.  It
is indeed not falsifiable in the ordinary sense.  If there is a missing
link, the principle urges us to look harder.  If this does not yield success,
the principle asks for patience, or an "explanation" is given that the
evidence got lost in the turbulence of the past.

Evolution shares this not-ordinarily-falsifiable feature with the causality
principle.

Are people aware of other (former) principles that belong to the same
family, and which may shed light on the "life cycle" of these principles?

Dennis de Champeaux
champeaux@hplabs.hp.com
[disclaimer on file]

------------------------------

End of AIList Digest
********************

∂19-Aug-88  2341	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #60  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Aug 88  23:41:02 PDT
Date: Sat 20 Aug 1988 00:04-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #60
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 20 Aug 1988      Volume 8 : Issue 60

 Queries and Responses:

  Where should she go? (Universities for Machine Learning)
  public-domain computer chess program
  Sorbothane (AIList v8 #48)
  Computer Chess program request/reference request
  AI in Engineering
  Lucid Lisp users mailing list

----------------------------------------------------------------------

Date: 17 Aug 88 17:29:00 GMT
From: cca!mirror!rayssd!raybed2!applicon!bambi!webb!webb@husc6.harvard
      .edu
Subject: Where should she go?


        A friend of mine wants to get her PhD in Computer Science,
specializing in the Machine Learning aspect of Artificial Intelligence.
She has been to the library and collected a list of likely schools, but
the list is too long for her to apply to all the schools on it.
Accordingly, she asked me if I would ask the net for suggestions.  If
you wanted to study machine learning, where would you go and why?  Some
of the schools she is currently considering are:

        Stanford
        MIT
        U. California @ Berkeley
        U. California @ San Diego
        Chapel Hill, North Carolina
        U. Illinois, Champaign/Urbana
        U. Massachusetts @ Amherst
        U. Pennsylvania
        Carnegie-Mellon University

Do you have any comments on the PhD programs at any of these institutions?
The Masters Degree program?  Are there any other colleges you would recommend?
She would appreciate hearing from anyone who has finished, or is currently
working on a similar degree.  Any information at all will be appreciated.
        Please reply to me, as she does not have access to Usenet.  Thanks
very much.

                                Peter Webb.

{allegra|decvax|harvard|yale|mirror}!ima!applicon!webb,
{mit-eddie|raybed2|spar|ulowell|sun}!applicon!webb, webb@applicon.com

------------------------------

Date: Fri, 19 Aug 88 11:14:19 EDT
From: kanderse@sam (Kurt Andersen)
Subject: Re: public-domain computer chess program

        The program sounds like the one published in BYTE many years ago.
I found it in one of BYTE's first books, called something like The Best of
BYTE (the name may be wrong; I saw it 4-5 years ago).  The book had the full
original CDC Cyber 6600 Pascal source listings along with four articles
describing how it works.  I hope that helps.

Kurt:-)

------------------------------

Date: 19 Aug 88 15:49:00 EDT
From: Nahum (N.) Goldmann <ACOUST%BNR.CA@MITVMA.MIT.EDU>
Subject: Re: Sorbothane (AIList v8 #48)

This is in addition to information from John B. Nagle.

Sorbothane actually is a British product (also marketed in the US).
Contact BTR Development Services Ltd., Horninglow Rd., Burton-on-Trent,
Staffs.  DE13 0SN  United Kingdom.  The contact there is Richard Burton,
Tel. 0283-31155.  Telex 34419.  Send him best regards from me.

To the best of my knowledge, somebody in Japan already uses Sorbothane
for car-mounted CD players and the like.  It has excellent shock/vibration
absorption properties, but has some temperature and other
environmental problems.

Like any vibration/shock isolator, it is tricky to design in, and its
liquidity makes it an additional challenge.  I'm certain Richard will
provide you with further information.

Accelerometers are used to measure vibration.  Their characteristics are
selected based on the problem being explored.  Look under Vibration in your
library, or contact the specialists at Southampton University in the UK
(this is a world-class school).


Greetings and love.

Nahum Goldmann
(613)763-2329

e-mail: <ACOUST@BNR.CA>

------------------------------

Date: 19 Aug 88 15:05 EST
From: STERRITT%SDEVAX.decnet@ge-crd.arpa
Subject: Computer Chess program request/reference request


In AI-List vol. 8, number 58,
Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU> writes:

> The magazine Creative Computing published a large (several thousand
> lines long) chess program written in Pascal and running in a large Cyber
> computer I think in the end of the 70's or in the beginning of the 80's.
> I recall the article and the program were written by one of the famous
> computer chess people, possibly by Hans Berliner.

        Does anyone have the exact reference?  Infinitely (well, almost)
better, does anyone have this code online so they could mail it to me?
Or any other chess implementation, in any high-level (i.e. not Assembly,
Forth or Basic) language?
        thanks a million (nodes),
        chris sterritt
        sterritt%sdevax.decnet@ge-crd.arpa      (on arpanet)

------------------------------

Date: 19 Aug 88 15:02 PDT
From: Sanjay Mittal <mittal.pa@Xerox.COM>
Subject: AI in Engineering

>>Sriram asked about the need for an International Society for  AI  in
Engineering. Here's a response<<
I think we already have too many societies (ACM, IEEE, AAAI, ASME, SME, Cog Sci,
socialist, capitalist, communist, just-plain-wedged, etc) and an even larger
number of journals and conferences. Societies are good for providing a forum via
journals and conferences for a group of researchers and practitioners to share
ideas, problems, etc. However, as with all societies, they last only as long as
there are some common shared problems, goals and visions. Note that there is NO
society of all engineering branches, largely, I suspect, because there would be
less to unify the members than to divide them. And it is not at all clear that
there is more in common between AI in Electrical and AI in Mech than there is
between AI in Medicine and AI in Mech. One could make a strong argument that
most of what is common is AI (theories, tools, techniques). But we already have
far too many AI conferences and journals, not to mention AAAI and a host of
national AI societies. [There already are at least two journals that have AI,
International, and Eng in their titles, and I counted at least four conferences
in the US alone this year with the same combination.] Do we want more? One
strong no, for what it's worth!

   ---- Sanjay

------------------------------

Date: Sat, 20 Aug 88 10:03:33 EST
From: munnari!trlamct.oz.au!andrew@uunet.UU.NET (Andrew Jennings)
Subject: Lucid Lisp users mailing list


Some time ago I was on a Lucid Lisp users' mailing list. Now I seem to have lost
contact with it. Does anybody know how to get in touch again?



(Postmaster:- This mail has been acknowledged.)

------------------------------

End of AIList Digest
********************

∂21-Aug-88  1912	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #61  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 21 Aug 88  19:12:33 PDT
Date: Sun 21 Aug 1988 21:54-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #61
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 22 Aug 1988       Volume 8 : Issue 61

 Philosophy:

  Sensory/Abstract Reasoning and Parallel Thinking
  Re: Can we human beings think two different things in parallel?

 Religion:

  science, lawfulness, a (the?) god
  The Godless assumption

----------------------------------------------------------------------

Date: Thu 18 Aug 88 10:15:08-PDT
From: George Cole <C.COLE@MACBETH.STANFORD.EDU>
Subject: Sensory/Abstract Reasoning and Parallel Thinking

Two points I'd like to respond on:  Sensory/Abstract Reasoning and Parallel
Thinking.
        Driving a car (or martial arts) at an advanced level clearly involves
"compiled" kinesthetic behavior coupled with sensory processing (in plain terms,
eye-muscle coordination), plus advanced symbolic reasoning coupled with the
same sensory processing (predicted paths of other vehicles from both physics
and "rules of the road", and estimated intent).  In commuter driving the abstract
reasoning can reach quite high levels where the entire traffic pattern is
perceived -- how many experienced commuters can spot the tourist who is
"interfering" with the norm? The point I want to emphasize is that the symbolic
reasoning is at a very high level, going beyond "stop at red light" rules to
"the intent of the red car is to reach the exit requiring the blue truck to slow
requiring that lane to slow with a high probability of some jackass swerving one
lane over requiring that lane to react -- so I'm moving over to the fast lane to
ease the potential congestion".
        This is one type of parallel cogitation -- but it can be argued that it
actually is a learned and advanced compilation of processing into multi-level
coordinated behavior, i.e. there really is only one reasoning process that
simply has manifold layers capable of distinct, need-specific interpretation.
True parallel cogitation is either unconscious or reflective: when you are aware
of your reasoning as it progresses, isn't that parallel cogitation? And how many
times have people solved problems in the background as they coped with their
daily rush of events?
        My two-cents suggestion is that people engage in a great deal of
parallel processing, using their innate "multiprocessor" capacity. Since it is
harder to devise integrated algorithms than serial ones, most of the parallel
processing will be of different "types" of reasoning (logical and emotional,
physical and logical, visual and auditory, imitative vocal and creative
mathematical, etc.). When integrated parallel processing has had a positive
survival value (i.e. sensory-muscular coordination) we should find that behavior
demonstrated.
                                George S. Cole, Esq.
                                C.Cole@macbeth.stanford.edu

------------------------------

Date: 21 Aug 88 03:05:58 GMT
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Reply-to: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: Re: Can we human beings think two different things in
         parallel?


In a previous article, Youngwhan Lee writes:
>Date: Sun, 14 Aug 88 16:54 EDT
>From: Youngwhan Lee <ywlee@p.cs.uiuc.edu>
>To: ailist-request@stripe.sri.com
>Subject: Can we human beings think two different things in parallel?
>
>Can we human being think two different things in parallel? Does anyone know
>this? One of my friends said that there should be no problem in doing that. He
>said we trained to think linear, but considering the structure of brains only
>we must be able to think things in parallel if we can train ourselves to do
>that. Is he correct?

I don't think much of the argument, and suspect that the answer depends on
what you mean by "thinking".  For example, while reading your message and
planning my reply, I was extemporising on a soprano recorder (not at all
well, I hasten to add).  Were both of those activities "thinking"?  We can
attend to 2..4 musical parts at once without switching between them, keeping
straight which sounds belong to what part and what patterns are being formed.
But does that count as "thinking"?  And does "different" mean "unrelated"
(perhaps processing information from different senses) or "conflicting"
(perhaps trying to generate speech for two different topics)?

------------------------------

Date: Thu, 18 Aug 88 12:57 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: science, lawfulness, a (the?) god

Distribution-File:
        AILIST@AI.AI.MIT.EDU

In AIList Digest   V8 #54, T. Michael O'Leary <HI.OLeary@MCC.COM>
presents the following quotation (without mentioning who originally
wrote it):

>     >Science, though not scientists (unfortunately), rejects the
>     >validity of religion: it requires that reality is in some sense
>     >utterly lawful, and that the unlawful, i.e. god, has no place.

I would say that a God need not be unlawful.  A counterexample of
some kind could be a line by Einstein: I think he said that the
regularity of the structure of the universe reflects an intellect.  (I
cannot remember the exact form of the quotation, but I think the idea
was this.)

--- Andy

------------------------------

Date: Thu Aug 18 10:32:01 EDT 1988
From: sas@BBN.COM
Subject: The Godless assumption

FYI, for those who had trouble with Mike Dante's comment:

        I don't see that as any excuse for being a Luddite in either field.

a Luddite is one who smashes machines.

                                        An occasional Luddite,
                                                Seth

------------------------------

End of AIList Digest
********************

∂22-Aug-88  1940	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #62  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 22 Aug 88  19:40:19 PDT
Date: Mon 22 Aug 1988 22:21-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #62
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 23 Aug 1988      Volume 8 : Issue 62

 Religion:

  Re: science, lawfulness, a (the?) god (V8 #61)
  The Godless asumption

----------------------------------------------------------------------

Date: 22 Aug 88 09:10:20 GMT
From: cwp@otter.hple.hp.com (Chris Preist)
Subject: Re: science, lawfulness, a (the?) god


Are you by any chance thinking of -

        " God does not play dice. " - A.Einstein

In which case, he did not use it in the context you suggest. He actually
is using the existence of God to 'disprove' the validity of quantum
mechanics.

i.e.    God exists & God is omnipotent
                   -> God isn't into probabilistic structures over which
                      it/she/he has no control
                   -> Quantum mechanics is wrong

Chris

------------------------------

Date: Mon, 22 Aug 88 11:39:04 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Re: science, lawfulness, a (the?) god (V8 #61)


> From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
> I would say that a God needs not be unlawful.  A counterexample of
> some kind could be a line by Einstein: I think he said that the
> regularity of the structure of the universe reflects an intellect.  (I
> cannot remember the exact form of the quotation, but I think the idea
> was this.)
> --- Andy

If I may be permitted to attempt a second approximation, Einstein said:
"What really interests me, is the question of whether God had a *choice*
in the design of the universe". I guess this encompasses all "things",
including the human mind, no doubt.

Gordon Joly.

------------------------------

Date: Mon, 22 Aug 88 10:38:29
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: The Godless asumption


I was surprised at Professor Minsky's use of so naive an argument against
Religion. If Religion is discredited because Giordano Bruno was burnt at
the stake in 1600, then Science is discredited because 120,000 people were
burned in Hiroshima in 1945. In actual fact, neither Religion nor Science
is discredited by that; only the people who do things can be discredited
by them. Theories are discredited by negative evidence or by reason.

And this takes me to another append (which unfortunately I have lost, and
do not recall the signer) where it was stated that Religion and Reason
are contradictory. I challenge this assertion. For it to be true, there would
have to exist an argument that, starting from a set of axioms accepted by
everybody and proceeding through a set of reasonable steps, arrives at the
conclusion "God does not exist". I do not know of such an argument. God's
existence or non-existence is an axiom for most of us, and axioms are not
"Reason".

M. Alfonseca

(Usual disclaimer)

------------------------------

Date: Mon, 22 Aug 88 11:00 EDT
From: "William E. Hamilton, Jr."
      <"RCSMPB::HAMILTON%gmr.com"@RELAY.CS.NET>
Subject: the Godless Assumption

The recent debate on the "Godless assumption," in which Andrew Basden,
Marvin Minsky and William Wells have participated, touches on the vitally
important questions of

        What is science?
        What is religion?, and
        Where (if anywhere) is there any common ground between the two?

Wells is correct in saying that

        "the religious entails something
        which ultimately is outside of reason,"

in the sense that human reason alone cannot find God. I would add that
science deals with phenomena which can be observed and subjected
to analysis. If you accept that constraint,
then as a scientist you should be cautious about making judgments on
subjects you don't have observations for. However, Wells goes too far
when he says



        ...religion and reason entail diametrically opposed views of
        reality: religion requires the unconstrained and unknowable as
        its base...

        ...religion rejects the ultimate validity
        of reason; ... years of attempting to reconcile the
        differing metaphysics and epistemology of the two has utterly
        failed to accomplish anything other than the gradual destruction
        of religion.

        Science ... rejects the
        validity of religion: it requires that reality is in some sense
        utterly lawful, and that the unlawful, i.e. god, has no place.



The first two paragraphs above make assertions which are certainly not true
of all religions. The third makes statements I would have to
regard as religious, since it makes assertions (reality is lawful, God is
not) about phenomena outside the scope of science.

Granted, religion is outside the scope of science, but that does not make it
wrong. Art and music are outside the scope of science, too, and yet
they teach us important aspects of being human.

        Bill Hamilton
        GM Research Labs

------------------------------

Date: 22 Aug 88 18:32:08 GMT
From: uwslh!lishka@spool.cs.wisc.edu (Fish-Guts)
Reply-to: uwslh!lishka@spool.cs.wisc.edu (Fish-Guts)
Subject: Re: The Godless assumption


In a previous article, Marvin Minsky writes:
>Date: Sat, 13 Aug 88 01:47 EDT
>From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
>Subject:  The Godless assumption
>To: AILIST@AI.AI.MIT.EDU, MINSKY@AI.AI.MIT.EDU
>
>
>Andrew Basden warns us
>
>> Why should 'religious' not also be 'practical'?  Many people -
>> especially ordinary people, not AI researchers - would claim their
>> 'religion' is immensely 'practical'.  I suggest the two things are not
>> opposed.  It may be that many correspondents *assume* that religion is
>> a total falsity or irrelevance, but this assumption has not been
>> proved correct, and many people find strong empirical evidence
>> otherwise.
>
>Yes, enough to justify what those who "knew" that they were right did
>to Bruno, Galileo, Joan, and countless other such victims.  There is
>no question that people's beliefs have practical consequences; or did
>you mean to assert that, in your philosophical opinion, they simply
>may have been perfectly correct?

     I find the above statement by Mr. Minsky to be out of line.  It
is true that religious beliefs have been used *as*excuses* to commit
horrible atrocities (witch burnings, the Crusades, Mr. Minsky's
examples above, etc.), but I believe that "science" has been used
*as*an*excuse* in the same way (the Nazis' horrible experiments on
Jewish people, for instance).  Furthermore, both science and religion
can be used as excuses for killing and atrocities in the future.

     Personally, I think that "science" is but a set of beliefs also.
One can reject science as readily as one can reject religion.  I also
propose that for some people a given religion (Christianity, Judaism,
Buddhism, Hinduism, African religions, personal religions, Pagan
religions, or whatever else) describes their world better than
Science; for them religion is a more appropriate (and *practical*) set
of beliefs than science is.  For many people (myself included),
religion and science both provide "appropriate" ways of describing the
universe around them.

>I hope this won't lead to an endless discussion but, since we have an
>expert here on religious belief, I wonder, Andrew, if you could
>briefly explain something I never grasped: namely, even if you were
>convinced that God wanted you to burn Bruno, why that would lead you
>to think that that makes it OK?

     I propose an alternative question: if you were convinced that, in
order to "better mankind" (in the name of science and scientific
curiosity), one would need to experiment on and kill countless numbers
of animals, would that reason make it OK?  How much farther does the
same argument need to be taken in order to justify maiming and killing
of human beings for experiments?  Be really careful when you begin to
generalize.

     Many religions advocate killing and sacrifices, and many do not.
There exists a religion where the final goal is to *stop* killing as
many creatures as possible (according to an Eastern religion class I
took, taught by David Knipe, himself a student of Eliade).  Science and
religion can both be used as excuses for killing, and they can both
provide reasons to prevent it.

-----

     A final note: I see no reason why religion and science cannot
coexist together in one's personal beliefs (they do in mine).  I see
no reason why science should deny the "practicality" of religions, or
vice versa.  Although some religious sects (esp. Christianity, Judaism,
and Catholicism) sometimes clash with science on issues such as
evolution vs. creationism, other religions (such as some sects of
Buddhism) accept outside beliefs (e.g. science), which has aided in the
spread of those religions into various cultures.

                                        -Chris

[p.s. if anyone feels that this does not belong in comp.ai.digest, I
am perfectly willing to discuss this via email.]--
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp
                                     ----
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."
                         - Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

------------------------------

End of AIList Digest
********************

∂22-Aug-88  2204	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #63  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 22 Aug 88  22:04:11 PDT
Date: Mon 22 Aug 1988 22:27-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #63
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 23 Aug 1988      Volume 8 : Issue 63

 Queries and Responses:

  Category Theory in AI
  Camera Stabilization
  Speech rec. using neural nets
  Sigmoid function
  MACSYMA Availability
  ELIZA

----------------------------------------------------------------------

Date: 18 Aug 88 10:23:55 GMT
From: mcvax!ukc!strath-cs!glasgow!jack@uunet.uu.net  (Jack Campin)
Subject: Re: Category Theory in AI


geddis@atr-la.atr.junet (Donald F. Geddis) wrote:
>>dpb@philabs.philips.com (Paul Benjamin) writes:
>> Some of us here at Philips Laboratories are using  universal
>> algebra, and more particularly category theory, to formalize
>> concepts in  the  areas  of  representation,  inference  and
>> learning.

>I'm familiar with those areas of AI, but not with category theory (or
>universal algebra, for that matter). Can anyone give a short summary for the
>layman of those two mathematical topics?  And perhaps a pointer as to how
>they might be useful in formalizing certain AI concepts.  Thanks!

A short summary is tricky without knowing your mathematical background and
maybe impossible for a real honest-to-goodness layman. A good book to start
with is Herrlich and Strecker's, but if you don't know what a group is, forget
it. Arbib and Manes' "Arrows, Structures and Functors" is also OK, but mainly
applies it to automata theory (not a booming enterprise these days).

Category theory generalizes the notions of "set" and "function", or more
generally "mathematical structure" and "mapping that preserves that structure"
(where the structures might be, say, n-dimensional Euclidean spaces, and the
mappings projections, embeddings and other distance-preserving functions).

Its aim is to describe classes of mathematical object (groups, topological
spaces, partially ordered sets, ...) by looking at the maps between them, and
then to describe relationships between these classes. It captures a lot of
otherwise indescribable mathematical notions of "unique" or "natural" objects
or maps in a class (the empty set, Descartes' construction of the Euclidean
plane as the "product" of two lines, the class of all possible strings in an
alphabet, ...).
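
[An illustrative aside, not part of Campin's posting: the "structure and
structure-preserving maps" idea in a few lines of Python.  The choice of
monotone maps on a finite ordered set is arbitrary, picked only because the
closure-under-composition property is easy to check mechanically.]

# Objects: finite ordered sets.  Morphisms: monotone (order-preserving) maps.
# The property a category packages up: structure-preserving maps compose,
# and the composite preserves the structure again.

def is_monotone(f, xs):
    """Does f preserve <= on the finite domain xs?"""
    return all(f(a) <= f(b) for a in xs for b in xs if a <= b)

def compose(g, f):
    """Categorical composition: (g o f)(x) = g(f(x))."""
    return lambda x: g(f(x))

xs = range(10)
double = lambda x: 2 * x        # a monotone map
clip = lambda x: min(x, 5)      # another monotone map

assert is_monotone(double, xs)
assert is_monotone(clip, xs)
assert is_monotone(compose(clip, double), xs)   # closed under composition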

The major application of it to computer science so far is in the semantics
of higher-order polymorphic type systems (which can't be described in set
theory). David Rydeheard and Rod Burstall have just published a book
"Computational Category Theory" that describes categorical constructions
algorithmically (in Standard ML) and has a useful bibliography.

But a lot of computer science literature that uses category theory does not
do so in an essential way; the commutative diagrams are just there to give
the authors some mathematical street cred.

I can't imagine what category theory has to contribute to knowledge
representation (though I can just about imagine it helping to describe
neural nets in a more abstract way). Can the philabs people say more
about what they're up to?


--
ARPA: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk       USENET: jack@cs.glasgow.uucp
JANET:jack@uk.ac.glasgow.cs      useBANGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack
Mail: Jack Campin, Computing Science Dept., Glasgow Univ., 17 Lilybank Gardens,
      Glasgow G12 8QQ, SCOTLAND     work 041 339 8855 x 6045; home 041 556 1878

------------------------------

Date: 18 Aug 88 16:48:27 GMT
From: pacbell!hoptoad!dasys1!step!perl@ames.arpa  (Robert Perlberg)
Subject: Re: Camera Stabilization

In a previous article, John B. Nagle writes:
>      Panasonic showed a gyroscopically stablized consumer-grade camcorder
> at the Consumer Electronics Show this summer.  It should be available at
> Japanese retailers by the end of the year.  This may be a promising approach.

The Panasonic Steadi-Cam has no gyros.  The lens and image sensor are
mounted on a gimbaled platform along with pitch and yaw sensors which
drive motors which move the platform to compensate for camera body
movement.
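
[An illustrative aside, not part of Perlberg's posting: the sensors-drive-motors
compensation scheme as a minimal proportional control loop in Python.  The gain
value and the sensor/actuator interface are invented for the sketch.]

# Each cycle: the pitch (or yaw) sensor reports how far the camera body has
# rotated; the motor then turns the platform against that motion so the lens
# keeps pointing the same way.

GAIN = 0.8   # arbitrary proportional gain; a real unit would be tuned

def compensate(body_angles, platform_angle=0.0):
    """Track a sequence of body rotations; return the platform angle history."""
    history = []
    for body in body_angles:
        error = body + platform_angle      # net pointing error the sensor sees
        platform_angle -= GAIN * error     # command the motor against the error
        history.append(round(platform_angle, 3))
    return history

# The body pitches 2 degrees and holds; the platform settles near -2 degrees.
print(compensate([2.0] * 6))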

Robert Perlberg
Dean Witter Reynolds Inc., New York
phri!{dasys1 | philabs | manhat}!step!perl
        -- "I am not a language ... I am a free man!"

------------------------------

Date: 19 Aug 88 18:05:25 GMT
From: att!chinet!mcdchg!clyde!watmath!watvlsi!watale!dixit@bloom-beacon.mit.edu
      (Nibha Dixit)
Subject: Speech rec. using neural nets

Is anybody out there looking at speech recognition using neural
networks? There has been some amount of work done in pattern
recognition for images, but is there anything specific being done
about speech?
--
Nibha Dixit  (U of Waterloo, Waterloo, Ont.)
...!watmath!watale!dixit or dixit@watale.waterloo.cdn
dixit@watale.waterloo.edu or dixit@watale.waterloo.bitnet

------------------------------

Date: Sat, 20 Aug 88 15:06:38 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: Sigmoid function


One way to build a circuit that produces the true sigmoid function
would be to store the argument-value pairs in a ROM and use the
following circuit:

           |-----------------|  |------|  |-----------------|
input ---> | A / D converter |->|  ROM |->| D / A converter | ---> output
           |-----------------|  |------|  |-----------------|
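
[An illustrative aside, not part of the posting: the A/D -> ROM -> D/A scheme
as a quantized lookup table in Python.  The table size and input range are
arbitrary.]

import math

# "Burn the ROM": one precomputed sigmoid value per quantized input code.
BITS = 8                      # A/D resolution (arbitrary for the sketch)
LEVELS = 2 ** BITS
LO, HI = -8.0, 8.0            # analogue input range (arbitrary)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

ROM = [sigmoid(LO + (HI - LO) * code / (LEVELS - 1)) for code in range(LEVELS)]

def sigmoid_lut(x):
    """A/D: quantize x to a code; ROM: look it up; D/A: return the stored value."""
    x = max(LO, min(HI, x))
    code = round((x - LO) / (HI - LO) * (LEVELS - 1))
    return ROM[code]

print(sigmoid(1.0), sigmoid_lut(1.0))   # agree to within quantization error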

Antti (Andy) Ylikoski
Helsinki University of Technology (I think a better translation would be
Helsinki Institute of Technology)
Digital Systems Laboratory
YLIKOSKI@FINFUN.BITNET
OPMVAX::YLIKOSKI        (DECnet)
mcvax!hutds!ayl         (UUCP)

------------------------------

Date: Sat, 20 Aug 88 23:41 EDT
From: Nick Papadakis <AILIST-REQUEST@AI.AI.MIT.EDU>
Subject: MACSYMA Availability


        If the messages I have received since my posting in AIList V8
#35 are any gauge, there seems to be a fair amount of misinformation on
the subject of MACSYMA and where to get it.

        The following is a summary of the best information I have been
able to garner via numerous telephone conversations (I had requested
hardcopy, but am tired of waiting for it to arrive).


                        *       *       *


        The original MACSYMA code is owned by MIT, which has granted
licenses to distribute it to three other organizations.


SYMBOLICS: Runs on Apollo, Sun, Symbolics, and all VAXes.
        Licenses range from about $5K to $15K (US prices, for commercial
        customers).  Probably the most sophisticated version, with many
        enhancements.
        Source code is NOT provided.
        Call 1-800-622-7962, (in Mass. (617) 621-7770)
        or email petti@ALLEGHENY.SCRC.Symbolics.COM


NESC (National Energy Software Center): Referred to as 'DOE MACSYMA'.
        Runs on Alliant, Celerity, Data-General, Encore, LMI Lambda,
        Sun, Symbolics, TI Explorer, VAX.
        About $2K to $3K for non-subscribers (subscribers get 2 programs
        free, subscriptions are $2.5K to $3.5K)
        Source code IS provided.
        Call Margaret Butler (312) 972-7250

        [As of March 87, an improved version of DOE MACSYMA for the
        TI Explorer (including the SHARE libraries), was available to any
        NESC licensee from Hyde%NGSTL1@TI-CSL.CSNET@RELAY.CS.NET]


INTERMATH - This startup company's product is still in the works, but they
        are interested in talking to people who "might want to embed some
        portion of MACSYMA's functionality in another system".
        Call (617) 868-4510


                        *       *       *


        Now for the distressing part.

        The MIT patent office states that DOE (via NESC) is only
permitted to distribute MACSYMA to government agencies, contractors, and
grantees.  NESC says that is completely untrue, and that they will
continue to distribute to commercial customers.  Symbolics has
trademarked the name 'MACSYMA'.  DOE claims that "it wasn't theirs to
trademark."

        I'm not sure I want to know precisely what causes such a massive
failure of communication.

        I am _quite_ sure that I do *not* wish to receive (and will not
post) any more messages pointing out that the version being distributed
by a certain company was 'the only licensed version' and that all others
were 'bootleg'.  This list exists to inform the AI community, and not to
serve any commercial interest.

        It is clear that the various versions available have varying
degrees of enhancement and support.  The informed customer will take
this into account when making a decision.  I think it is unfortunate
when the good efforts of those who have worked to enhance a product are
compromised by the unsavory tactics of others seeking to promote it.


                - nick

------------------------------

Date: Sun, 21 Aug 88 17:56:44
From: ZZZO%DHVRRZN1.BITNET@CUNYVM.CUNY.EDU
Subject: ELIZA

Date: 21 August 1988, 17:53:57 MEZ
From: Wolfgang Zocher           (0511) 762-3684      ZZZO     at DHVRRZN1
To:   AILIST at AI.AI.MIT

Subject: Need for ELIZA
For the purpose of demonstration in a Lisp course I need a Common Lisp
version of the ELIZA (Doctor) program (if possible with scripts)...
Can anyone help me?
WZ (ZZZO at DHVRRZN1)

------------------------------

End of AIList Digest
********************

∂24-Aug-88  1436	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #65  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 Aug 88  14:35:59 PDT
Date: Wed 24 Aug 1988 15:31-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #65
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 25 Aug 1988      Volume 8 : Issue 65

 Philosophy:

  AI and the Vincennes incident
  Animal Behavior and AI
  Navigation and symbol manipulation
  Dual encoding, propostional memory and...
  Can we human being think two different things in parallel?

----------------------------------------------------------------------

Date: 19 Aug 88  1449 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: AI and the Vincennes incident

I agree with those who have said that AI was not involved in the
incident.  The question I want to discuss is the opposite of those
previously raised.  Namely, what would have been required so that
AI could have prevented the tragedy?

We begin with the apparent fact that no-one thought about the Aegis
missile control system being used in a situation in which discrimination
between civilian traffic and attacking airplanes would be required.
"No-one" includes both the Navy and the critics.  There was a lot
of criticism of Aegis over a period of years before 1988.  All the
criticism that I know about concerned whether it could stop multiple
missile attacks as it was designed to do.  None of it concerned the
possibility of its being used in the situation that arose.  Not even
after it was known that the Vincennes was deployed in the Persian
Gulf was the issue of shooting down airliners (or news helicopters) raised.

It would have been better if the issue had been raised, but it appears
that we Earthmen, regardless of political position, aren't smart
enough to have done so.  Now that a tragedy has occurred, changes will
be made in operating procedures and probably also in equipment and
software.  However, it seems reasonably likely that in the future
additional unanticipated requirements will lead to tragedy.

Maybe an institutional change would bring about improvement, e.g.
more brainstorming sessions about scenarios that might occur.  The
very intensity of the debate about whether the Aegis could stop
missiles might have ensured that any brainstorming that occurred
would have concerned that issue.

Well, if we Earthmen aren't smart enough to anticipate trouble,
let's ask if we Earthmen are smart enough and have the AI or other
computer technology to design AI systems
that might help with unanticipated requirements.
 My conclusion is that we probably don't have the technology yet.

Remember that I'm not talking about explicitly dealing with the
problem of not shooting down civilian airliners.  Now that the
problem is identified, plenty can be done about that.

Here's the scenario.

Optimum level of AI.

Captain Rogers:  Aegis, we're being sent to the Persian Gulf
to protect our ships from potential attack.

Aegis (which has been reading the A.P. wire, Aviation Week, and
the Official Airline Guide on-line edition):  Captain, there may
arise a problem of distinguishing attackers from civilian planes.
It would be very embarrassing to shoot down a civilian plane.  Maybe
we need some new programs and procedures.

I think everyone knowledgeable will agree that this dialog is beyond
the present state of AI technology.  We'd better back off and
ask what is the minimum level of AI technology that might have
been helpful.

Consider an expert system on naval deployment, perhaps not part
of Aegis itself.

Admiral: We're deploying an Aegis cruiser to the Persian Gulf.

System: What kinds of airplanes are likely to be present within
radar range?

Admiral: Iranian military planes, Iraqi military planes, Kuwaiti
planes, American military planes, planes and helicopters hired
by oil companies, civilian airliners.

System: What is the relative importance of these kinds of airplanes
as threats?

It seems conceivable that such an expert system could have been
built and that interaction with it might have made someone think
about the problem.
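
[An illustrative aside, not part of McCarthy's message: the minimal version of
such a deployment adviser is little more than a checklist that asks its
questions and records the answers; the value is in forcing the planner to
think, not in any processing.  A sketch in Python, with the questions
paraphrased from the dialogue above and everything else invented:]

QUESTIONS = (
    "What kinds of airplanes are likely to be present within radar range?",
    "What is the relative importance of these kinds of airplanes as threats?",
)

def interview(ask=input):
    """Ask each checklist question and record the planner's free-text answer."""
    return {question: ask(question + "\n> ") for question in QUESTIONS}

if __name__ == "__main__":
    for question, answer in interview().items():
        print(question, "->", answer)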

------------------------------

Date: 22 Aug 88 18:42:53 GMT
From: zodiac!ads.com!dan@ames.arc.nasa.gov (Dan Shapiro)
Reply-to: zodiac!ads.com!dan@ames.arc.nasa.gov (Dan Shapiro)
Subject: Animal Behavior and AI


Motion control isn't the only area where studying animals has merit.
I have been toying with the idea of studying planning behavior in
various creatures; a reality check would add to the current debate
about "logical forethought" vs. "reactive execution" in the absence of
plan structures.

A wrinkle is that it would be very hard to get a positive fix on an
animal's planning capabilities since all we can observe is their
behavior (which could be motivated by a range of mechanisms).
My thought is to study what we would call "errors" in animal behavior
- behaviors that a more cognizant or capable planning engine would  avoid.

It seems to me that there must be a powerful difference between animal
planning/action strategies and (almost all) current robotic
approaches; creatures manage to do something reasonable (they survive)
in a wide variety of situations while robots require very elaborate
knowledge in order to act in  narrow domains.

------------------------------

Date: 23 Aug 88 06:05:43 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: navigation and symbol manipulation


In a previous article, Stephen Smoliar writes:
>It is also worth noting that Chapter 8 of Gerald Edelman's NEURAL DARWINISM
>includes a fascinating discussion of the possible role of interaction between
>sensory and motor systems.  I think it is fair to say that Edelman shares
>Nagle's somewhat jaundiced view of mathematical logic, and his alternative
>analysis of the problem makes for very interesting, and probably profitable,
>reading.

       I do not take a "jaundiced view" of mathematical logic, but I
think its applicability limited.  I spent some years on automated program
verification (see my paper in ACM POPL '83) and have a fairly good idea of
what can be accomplished by automated theorem proving.  I consider mathematical
logic to be a very powerful technique when applied to rigidly formalizable
systems.  But outside of such systems, it is far less useful.  Proof is so
terribly brittle.  There have been many attempts to somehow deal with
the brittleness problem, but none seem to be really satisfying.  So,
it seems appropriate to accept the idea that the world is messy and go
from there; to seek solutions that can begin to cope with the messiness of
the real world.

       The trouble with this bottom-up approach, of course, is that you
can spend your entire career working on problems that seem so utterly
trivial to people who haven't struggled with them.  Look at Marc
Raibert's papers.  He's doing very significant work on legged locomotion.
Progress is slow; first bouncing, then constrained running, last year a forward
flip, maybe soon a free-running quadruped.  A reliable off-road runner is
still far away.  But there is real progress every year.  Along the way
are endless struggles with hydraulics, pneumatics, gyros, real-time control
systems, and mechanical linkages.  (I spent the summer of '87 overhauling
an electrohydraulic robot, and I'm now designing a robot vehicle.  I can
sympathise.)

       How much more pleasant to think deep philosophical thoughts.
Perhaps, if only the right formalization could be found, the problems
of common-sense reasoning would become tractable.  One can hope.
The search is perhaps comparable to the search for the Philosopher's Stone.
One succeeds, or one fails, but one can always hope for success just ahead.
Bottom-up AI is by comparison so unrewarding.  "The people want epistemology",
as Drew McDermott once wrote.  It's depressing to think that it might take
a century to work up to a human-level AI from the bottom.  Ants by 2000,
mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
and it gives an idea of what might be a realistic rate of progress.

       I think it's going to be a long haul.  But then, so was physics.
So was chemistry.  For that matter, so was electrical engineering.  We
can but push onward.  Maybe someone will find the Philosopher's Stone.
If not, we will get there the hard way.  Eventually.


                                        John Nagle

------------------------------

Date: Tue, 23 Aug 88 10:54:36 BST
From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Re: Dual encoding, propostional memory and...

In reply to Pat Hayes last posting

>Yes, but much of this debate has been between psychologists, and so has little
>relevance to the issues we are discussing here.
[psychologist's definition of different defined]
>That's not what the AI modeller means by `different', though.
>it isn't at all obvious that different behavior means different
>representations (though it certainly suggests different implementations).

How can we talk about representation and implementation being different
in the human mind?  Are the two different in Physics, Physiology,
Neurobiology ...?  And why should AI and psychology differ here?
Aren't they addressing the same nature?

I'm sorry, but I for one cannot see how these categories from software
design apply to human information processing.  Somewhere or other, some
neurotransmitters change, but I can't see how we can talk convincingly
about this physiological implementation having any corresponding
representation except itself.

Representation and implementation concern the design of artefacts, not
the structure of nature.  AI systems, as artefacts, must make these
distinctions.  But in the debate over forms of human memory, we are
debating nature, not artefact. Category mistake.

>It seems reasonable to conclude that these facts that they
>know are somehow encoded in their heads, ie a change of knowledge-state is a
>change of physical state.  Thats all the trickery involved in talking about
>`representation', or being concerned with how knowledge is encoded.

I would call this implementation again (my use of the word 'encoding'
was deliberately 'tongue in cheek' :-u).  I do not accept the need for
talk of representation.  Surely what we are interested in are good
models for physical neurophysiological processes?  Computation may be
such a model, but it must await the data.  Again, I am talking about
encoding.  Mental representations or models are a cognitive
engineering tool which give us a handle on learning and understanding
problems.  They are a conative convenience, relevant to action in the
world.  They are not a scientific tool, relevant to a convincing modelling
of the mental world.

>what alternative account would you suggest for describing, for example,
>whatever it is that we are doing sending these messages to one another?

I wouldn't attempt anything beyond the literary accounts of
psychologists.  There is a reasonable body of experimental evidence,
but none of it allows us to postulate anything definite about
computational structures.  I can't see how anyone could throw up a
computational structure, given our present knowledge, and hope to be
convincing.  Anderson's work is interesting, but he is forced to ignore
arguments for episodic or iconic memory because they suggest nothing
sensible in computational terms which would be consistent with the
evidence for long term memory of a non-semantic, non-propositional form.

Computer modelling is far more totalitarian than literary accounts.
Unreasonable restrictions on intellectual freedom result.  Worse still,
far too many cognitive scientists confuse the inner loop detail of
computation with increased accuracy.  Detailed inaccuracy is actually
worse than vague inaccuracy.

Sure computation forces you to answer questions which would otherwise
be left to the future.  However, having the barrel of a LISP
interpreter pointing at your head is no greater guarantee of accuracy
than having the barrel of a revolver pointing at your head.  Whilst
computationalists boast about their bravado in facing the compiler, I
for one think it a waste of time to be forced to answer unanswerable
questions by an inanimate LISP interpreter.  At least human colleagues
have the decency to change the subject :-)

>If people who attack AI or the Computational Paradigm, simultaneously tell me
>that PDP networks are the answer

I don't.  I don't believe either the symbolic or the PDP approach.  I
have seen successes for both, but am not well enough read on PDP to
know its failings.  All the talk of PDP was a little tease, recalling
the symbolic camp's criticism that a PDP network is not a
representation.  We certainly cannot imagine what is going on in a
massively parallel network, well not with any accuracy.  Despite our
inability to say EXACTLY what is going on inside, we can see that
systems such as WISARD have 'worked' according to their design
criteria.  PDP does not accurately model human action, but it gets
some low-level learning done quite well, even on tasks requiring what
AI people call intelligence (e.g. spotting the apple under teddy's bottom).

>Go back and (re)read that old 1969 paper CAREFULLY,
Ah, so that's the secret of hermeneutics ;-]

------------------------------

Date: Tue, 23 Aug 88 11:42:49 bst
From: Ken Johnson <ken%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Reply-to: "Ken Johnson,E32 SB x212E"
          <ken%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Re: Can we human being think two different things in
         parallel?

In a previous article, Youngwhan Lee writes:
>Date: Sun, 14 Aug 88 16:54 EDT
>From: Youngwhan Lee <ywlee@p.cs.uiuc.edu>
>To: ailist-request@stripe.sri.com
>Subject: Can we human being think two different things in parallel?
>
>Can we human being think two different things in parallel?

I think most people have had the experience of suddenly gaining insight
into the solution of a problem they last deliberately chewed over a few
hours or days previously.  I'd say this was evidence for the brain's
ability to work at two or more (?) high-order tasks at the same time.
But I look forward to reading what Real Psychologists say.

--
------------------------------------------------------------------------------
From:    Ken Johnson (Half Man Half Bicycle)
Address: AI Applications Institute, The University, EDINBURGH
Phone:   031-225 4464 ext 212
Email:   k.johnson@ed.ac.uk

------------------------------

Date: 23 Aug 88 17:41:12 GMT
From: robinson@pravda.gatech.edu (Steve Robinson)
Reply-to: robinson@pravda.gatech.edu (Steve Robinson)
Subject: Re: Can we human being think two different things in
         parallel?


For those of you following Lee's, Hayes' and Norman's postings on
"parallel thinking" there is a short paper in this year's Cognitive
Science Society's Conference proceedings by Peter Norvig at UC-Berkeley
entitled "Multiple Simultaneous Interpretations of Ambiguous Sentences"
which you may find pertinent.  The proceedings are published by LEA.
Since the conference was last week, it may be a while until they are
available elsewhere.  I heard Norvig's presentation and found it interesting.

Regards,
Stephen

------------------------------

End of AIList Digest
********************

∂24-Aug-88  1745	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #64  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 Aug 88  17:44:48 PDT
Date: Wed 24 Aug 1988 13:50-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #64
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 25 Aug 1988      Volume 8 : Issue 64

 Religion:

  The Godless assumption
  Burning Bruno
  Re: science, lawfulness, a (the?) god
  Religion & Cognitive Science

----------------------------------------------------------------------

Date: 23 Aug 88 01:01:30 GMT
From: greg@csoft.co.nz
Reply-to: greg@cstowe.UUCP (Greg)
Subject: Re: The Godless assumption

I have edited out a large number of comments from both sides which could be
debated, but do not belong here. In fact, none of this does, but I will
correct that in my posting!

In a previous article, T. William Wells writes:
>In a previous article, IT21@SYSB.SALFORD.AC.UK writes:
>:                                                                    It may
>: be that many correspondents *assume* that religion is a total falsity or
>: irrelevance,
>
>                                             proposing not only
>that religion is practical, but that it might be `true'.
>However, the religious `true' is antithetical to any rational
>`true': religion and reason entail diametrically opposed views of
>reality: religion requires the unconstrained and unknowable as
>its base, reason requires the contrained and knowable as its
>base.

    The reason basis described here is HUMAN, based on a human perception of
the universe, which is limited at best. If I successfully managed to build
an AI by any method other than running it through a complete human simulation
(A Mind Forever Voyaging, Infocom Games), I would be surprised if its
reasoning could be compared to a human's. Much human reasoning is based on
emotions and values that would probably be of no discernible value to the
computer. Different human cultures differ in their perception of reason.
The computer could probably only be described as inscrutable.

        It would even be rather disconcerting to have the first AI proclaim its
belief in a religion. Come to think of it, anything the first AI 'thought'
would probably have a profound effect on the human model of the universe.

For further reading about AIs in a universe of their own, read
Gibson, William - Neuromancer, Count Zero and Burning Chrome.
They may change your perception of AI.

Disclaimer - tricked you - this is just an AI in the net anyway.

--

Greg Calkin                                   Commercial Software N.Z. Limited,
...!uunet!vuwcomp!dsiramd!pnamd!cstowe!greg   PO Box 4030 Palmerston North,
or greg@csoft.co.nz                           New Zealand.    Phone (063)-65955

------------------------------

Date: 22 Aug 88  2212 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: Burning Bruno

[In reply to message sent Mon 22 Aug 1988 22:21-EDT.]

Burning  Giordano Bruno presents problems for many religions that Hiroshima
doesn't present for science.  Science doesn't claim that scientific
discoveries can't be used in war.  There would be problems for anyone
who claimed that 1930s science would avert World War II.  As far as
I know, not one person in the world made that claim.  There are also
problems for people who claimed that Marxism was a science, that
countries ruled by Marxism would not commit crimes and that
the Soviet Union was ruled by Marxism.  Plenty of people believed
that and denied that, for example, the millions murdered as kulaks
were murdered.

A religion that claimed that the Catholic Church was protected
from doing evil by God, that the Catholic Church was responsible
for the killing of Bruno and that killing Bruno was a crime
has problems.  Many other religious people who believe that
God will prevent their leaders from certain crimes and errors
have problems every time one of them is caught.

To have problems of this kind requires a certain complex of
beliefs, but such complexes are relatively common.  If certain
people were found to have committed certain crimes, it would
disconcert me a lot.

------------------------------

Date: 23 Aug 88 08:04:56 GMT
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Reply-to: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: Re: The Godless assumption


In a previous article, lishka@uwslh.UUCP writes:
>It is true that religious beliefs have been used *as*excuses* to commit
>horrible atrocities (witch burnings,  ...

This concedes too much.  It is widely believed, but that doesn't make it
true.  The belief in the existence and malevolence of witches was an
*empirical* belief.  If you read "Malleus Maleficarum" (there is at least
one translation available in Paperback) or if you read the court transcripts
and pamphlets from the New England witchcraft trials (there's an historical
society which issued reprints in the first half of this century) you will
find few if any appeals to faith, but many appeals to evidence.  Where we
disagree with the past is about what constitutes evidence (we do not, I
trust, regard torture as necessary on the grounds that evidence so produced
is the most reliable kind, but if we _did_ think that, what do _you_ think
law enforcement agencies would do?).  There are any number of people today
who believe in ghosts, poltergeists, ESP, and the like, on far worse
evidence than our forbears had for believing in witches.

Either there were no few people who wished to be witches, and even believed
that they _were_ witches, or all court testimony is worthless (as Ambrose
Bierce once said, somewhat more forcefully).

Some of the other messages have reflected a similar credulous acceptance
of "pop history".  The past is stranger than we imagine.

This topic really hasn't much to do with AI.
Perhaps it could be moved somewhere else?

------------------------------

Date: Tue Aug 23 10:06:36 EDT 1988
From: sas@BBN.COM
Subject: Re: science, lawfulness, a (the?) god

I think people are getting a bit confused on this one.

Religion is centered around the human soul which in many religions
can be characterized as damned, saved, pure, untested, tainted and so
on.  In Western religions, which are largely guilt based, it is used
to assign human thoughts and actions a place on a good/evil or
moral/immoral scale.

Science is centered around the testable world.  Various statements
about phenomena are assigned values on the true/false scale, in which
truth is determined by testing the statement's predictive value, the
predictions being tested by active experiment or passive observation.

To my knowledge there is no scientific litmus test which can determine
the good or evil of a particular thought or action.  Beeckman does not
make a scale to weigh one's soul against a feather.  (Actually, the
popular American view of the afterlife is surprisingly
NON-judgemental!)  The story of Job can even be viewed as a tract
denouncing the attempt to apply human reason to matters religious.

One might expect, given the powers ascribed to the almighty(ies), that
religious law would be more or less self-enforcing.  Notice the
difference between the following two sets of taboos:

- Don't eat amanitus bolitus.   - Don't hit yourself with a stick.
- Don't eat human flesh.        - Don't hit other people with a stick.

To keep people from eating human flesh and hitting other people with
sticks, people need some form of government, which is ruled not by
science, not by religion, but by politics.

Will a big enough fire kill a man?  Will the atom bomb explode?
That's science.

Did Bruno reach Nirvana?  Is Truman rotting in hell?  That's religion.

Should we burn people at the stake for heresy?  Should we drop the
bomb on Japan?  That's politics.

                                        Seth

P.S. I can't help adding for you movie buffs, "When a ghost and a king
meet and everyone ends up mincemeat. That's entertainment."

------------------------------

Date: Tue, 23 Aug 88 11:27:59 MDT
From: mantha@cs.utah.edu (Surya M Mantha)
Subject: Re: The Godless asumption

In a previous article, ALFONSEC@EMDCCI11.BITNET writes:
>

>burned in Hiroshima in 1945. In actual fact, neither Religion nor Science
>are discredited because of that, only people who do things can be discredited
>by them. Theories are discredited by negative evidence or by reason.
>
    Not surprising!! This line of reasoning, I mean. It is one that is
most commonly used to defend institutions that are inherently unjust,
undemocratic and intolerant. The blame always lies with "people". The
institution itself (be it "organized religion", "socialism", "state
capitalism") is beyond reproach. After all, it does not owe its existence
to man, does it?

>M. Alfonseca
>
>(Usual disclaimer)

Surya Mantha
Department of Computer Science
University of Utah
Salt Lake City

------------------------------

Date: 24 Aug 88 10:00:44 GMT
From: mcvax!csinn!grossi@uunet.UU.NET (Thomas Grossi)
Subject: Re: The Godless asumption


In a previous article, ALFONSEC@EMDCCI11.BITNET writes:
> .... If Religion is discredited because Giordano Bruno was burnt at
> the stake in 1600, then Science is discredited because 120,000 people were
> burned in Hiroshima in 1945.

No, World Politics is discredited:  the bomb was dropped for political reasons,
not scientific ones.  Science provided the means, as it did (in a certain
sense) for Religion as well.

Thomas Grossi
grossi@capsogeti.fr

------------------------------

Date: 24 Aug 88 11:01:45 GMT
From: Jason Trenouth <mcvax!cs.exeter.ac.uk!jtr@uunet.UU.NET>
Subject: Re: The Godless assumption


Surely the "godless assumption" is the natural assumption of all scientific
endevour? If we begin allowing for the existance of a supernatural god, who
could interfere with our experiments, then any major difficulty might halt
progress. The scientists could reason that their god just doesn't want them to
know any more. Its extreme form is "Cartesian doubt":

        I think therefore I am,
        and I definitely can't try to do any research!

Some theists get around this aspect of an interfering god by positing that it
created the universe, which now runs all by itself according to some laws. In
this case we don't need to take the god into account anyway.

There is another alternative, which is to argue that there are a number of
people whose minds are affected by belief in a god, even though we assume its
nonexistence. In this case it is merely another facet of human cognition
available for study.

Ciaou - JT.
--
______________________________________________________________________________
| Jason Trenouth,                        | JANET:  jtr@uk.ac.exeter.cs       |
| Computer Science Dept,                 | UUCP:   jtr@expya.uucp            |
| Exeter University, Devon, EX4 4PT, UK. | BITNET: jtr%uk.ac.exeter.cs@ukacrl|

------------------------------

Date: Wed, 24 Aug 88 10:34 EST
From: steven horst 219-289-9067           
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Religion & Cognitive Science


Here are two questions that came to mind while browsing through the
recent spate of submissions on "the godless question".  The first
is food for thought.   The second is a request for information.

FOOD FOR THOUGHT:  There is a certain similarity between cognitive
    science and religious cosmology in that both employ intentional
    explanation to account for their respective data.  (Though of
    course neither intentional realism nor theism should be regarded
    primarily or solely as scientific theories - both predate
    scientific inquiry and have ramifications outside the sphere
    of scientific investigation.)  A question for those who are
    disposed to accept at least Dennett's views on the need for and
    utility of the "intentional stance" in psychology: If you are
    prepared to ascribe intentional states and processes to explain
    some events (i.e., in psychology), is there any reason to not
    proceed in the same way in other areas?  I'm not asking this
    evangelistically -- I'm just interested in hearing some ideas
    on why the kinds of considerations which may warrant intentional
    realism do or do not also warrant theism.  (Or, for that matter,
    animism.)

REQUEST FOR INFORMATION:  Is anyone aware of any projects that
     apply computer modeling to religious practice in studying the
     phenomenology of religion?

     BITNET Address.........gkmarh@irishmvs
     SURFACE MAIL...........Steven Horst
                            Department of Philosophy
                            Notre Dame, IN  46556

------------------------------

End of AIList Digest
********************

∂25-Aug-88  2008	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #66  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 25 Aug 88  20:08:28 PDT
Date: Thu 25 Aug 1988 22:53-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #66
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 26 Aug 1988       Volume 8 : Issue 66

 Religion:

  The Godless Assumption
  The Ignorant assumption
  backward path and religions
  Why Bruno was burned
  Science vs. 'Religion' -- not all religions have a problem
  Linking Cogsci and Religion

----------------------------------------------------------------------

Date: Thu, 25 Aug 88 09:23:43 HOE
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: The Godless Assumption

In a previous article, John McCarthy says:
> Burning  Giordano Bruno presents problems for many religions that Hiroshima
> doesn't present for science.  Science doesn't claim that scientific
> discoveries can't be used in war.

Isaac Asimov (in "The sin of the scientist") contends that Science knew
sin when the first product was developed that could be used ONLY in war.
If I recall correctly, this product was mustard-gas (used in WWI).

> A religion that claimed that the Catholic Church was protected
> from doing evil by God, that the Catholic Church was responsible
> for the killing of Bruno and that killing Bruno was a crime
> have problems.

The Catholic Church never claimed that its members (whatever their
hierarchy level) were protected from doing evil. The "infallibility
of the pope" has nothing to do with that. It affects not deeds, but
sayings, and only very special ones (only twice in the last 150 years).

In a previous article, sas@BBN.COM says:

> To my knowledge there is no scientific litmus test which can determine
> the good or evil of a particular thought or action.

True. From premises in the indicative mode ("this is so") you can never
deduce a conclusion in the imperative ("you shall do so"). You need at
least a premise in the imperative (i.e. a moral axiom).

In a previous article, Surya M Mantha says:

>In a previous article, ALFONSEC@EMDCCI11.BITNET writes:
>>

>>burned in Hiroshima in 1945. In actual fact, neither Religion nor Science
>>are discredited because of that, only people who do things can be discredited
>>by them. Theories are discredited by negative evidence or by reason.
>>
>    Not surprising!! This line of reasoning I mean. It is one that is
>most commonly used to defend institutions that are inherently unjust
>undemocratic and intolerant. The blame always lies with "people". The
>institution itself ( be it "organized religion", "socialism", "state
>capitalism") is beyond reproach. Afterall, it does not owe its existence
>to man does it?

I was not defending institutions. Religion and Science are not
institutions. A Church or a University is. Institutions are made out of
people. If people can be blamed, obviously the institutions can, too.

I was not even attacking people. Who am I to pass judgment on people
who lived at a place, a time, an environment, and who had a background
very different from mine?

Finally, in a previous article, Thomas Grossi says:
>In a previous article, ALFONSEC@EMDCCI11.BITNET writes:
>> .... If Religion is discredited because Giordano Bruno was burnt at
>> the stake in 1600, then Science is discredited because 120,000 people were
>> burned in Hiroshima in 1945.

>No, World Politics is discredited:  the bomb was dropped for political reasons,
>not scientific ones.  Science provided the means, as it did (in a certain
>sense) for Religion as well.

Agreed. But it was also World Politics that was discredited when
Bruno was burnt. There was a lot of politics involved in that.

M. Alfonseca

(Usual disclaimer)

------------------------------

Date: 23 Aug 88 09:51:04 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: The Ignorant assumption

In reply to two separate comments from Marvin Minsky in comp.ai.digest

>Yes, enough to justify what those who "knew" that they were right did
>to Bruno, Galileo, Joan, and countless other such victims.

>More generally, let's see more learning from the past.

Take care when there are trained historians on the net :-)
It is not beliefs that kill, but the power to act on them.  Where
"scientists" have had power, notably in Nazi Germany and Stalinist
Russia, they have killed to suppress heresy, just as the religious
leaders of pre-modern Europe killed the early scientists to put down
particularly annoying heresies.  Of course, you will say, these people
in Germany and Russia were not scientists.  As a trained historian, it
is enough for me that they called themselves scientists, just as the
Inquisition were undoubtedly Christian.  But as a historian, I would
exercise great caution in extending the facts of a previous time into
the present.  One thing one can learn from the past is that this went
out of fashion years ago :-)

The way to analyse what a scientist or Christian would do now, given
the absolute power enjoyed by the Inquisition, is to examine their
beliefs.  Neither group are democrats, nor would they respect many
existing freedoms.  Note that I am talking of roles of science and
religion.  As these people live in democracies, the chances are that
the values of the wider society will repress the totalitarian
instincts of their role-specific formal belief systems.  Do not take
this analysis personally.  The way to attack my argument is to
demonstrate that scientific or Christian AUTHORITY is compatible with a
liberal democracy.

Any scientist who believes in a society regulated by scientific reason
(which would rule out the need for consultative subjective democracy)
would, given the power, introduce gulags, mental hospitals and other
devices for the control of the irrational and the heretical.

If anyone finds this unreasonable, consider how scientists wield power
when they do have it in academic organisations and funding bodies.
Admittedly they only murder rival research rather than rival
researchers.  Stakes don't have to be made from wood :-<

P.S.  Sure, move this discussion somewhere else :-)
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: Tue, 23 Aug 88 14:23 N
From: LEO%BGERUG51.BITNET@MITVMA.MIT.EDU
Subject: backward path and religions


In pattern recognition, an intelligent system with a backward path in its
reasoning can be used to try to find the appearance of a certain known
pattern in an input signal. The system will probably always see the required
pattern if it tries hard enough, even if it is not there. On the other hand,
the backward path is a very useful tool for recognizing patterns in the
presence of noise and defects: after forward-backward resonance, which
eliminates the noise and corrects the defects, the system can recall the
complete pattern. When using such a system in a real-world environment, how
and/or when can we know that the recognition is false? How do human or
animal brains deal with this problem? (This is almost the old discussion of
subjective versus objective.)
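
[Editorial sketch: one crude stand-in for the "forward-backward resonance"
described above is an autoassociative (Hopfield-style) memory.  The toy
Python below is illustrative only -- not the poster's system -- and the
patterns are invented.  It settles a noisy input onto the nearest stored
pattern, and it will just as happily settle pure noise onto *some* stored
state, which is exactly the failure mode raised in the first paragraph.]

import random

# Two 8-unit patterns stored with Hebbian weights (no self-connections).
PATTERNS = [
    [ 1,  1,  1,  1, -1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1,  1, -1],
]
N = len(PATTERNS[0])
W = [[0 if i == j else sum(p[i] * p[j] for p in PATTERNS)
      for j in range(N)] for i in range(N)]

def settle(state, sweeps=5):
    """Asynchronous unit updates until nothing changes ("resonance")."""
    state = list(state)
    for _ in range(sweeps):
        changed = False
        for i in range(N):
            s = 1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
            if s != state[i]:
                state[i], changed = s, True
        if not changed:
            break
    return state

noisy = [1, 1, -1, 1, -1, -1, 1, -1]    # pattern 0 with two bits flipped
print(settle(noisy))                    # recovers the stored pattern
print(settle([random.choice([1, -1]) for _ in range(N)]))   # noise gets "recognized" too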

Secondly, consider a self-learning, self-organizing neural network.
Furthermore, suppose this system is searching for answers to questions in a
field of which it has almost no knowledge. In this case, the system might
ask for things that it can never find. But, because of its self-learning,
self-organizing character, it will build answers, imaginary ones, if it
keeps asking long enough. In my opinion, this is the essence of religions
and superstitions. I presume that the number of layers, or the 'distance'
between sense perception and the abstract thinking level, is too big.
Hence, an extensive neural network like the human brain, working far
beneath its capabilities, will be able to create imaginary 'objects' and
speculations.

I think we can also put this feature in another perspective. Animals
with small brains are able to distinguish between good and bad
circumstances. Many animals with larger brains are able to make a
distinction within the good circumstances and choose a leader: the best.
Humans can go further: they are able to create a leader or leaders
existing only in their thoughts.

If we were able to build large neural networks with these self-
learning and self-organizing features, what influence would the
structure of such a system have on these problems? How could we avoid or
exploit them? Building models and making suppositions is a very important
part of intelligence, but how can we control an AI system in this respect
when we are only able to control the dimensions of the system and the
features of its basic parts, the neurons?

I don't want to insult religious people or to start a discussion about
religion or belief. I would simply appreciate it if somebody with a
clearer view or some good ideas about these subjects would reply...

L. Vercauteren
AI-section Automatic Control Laboratory
State University of Ghent, Belgium
e-mail LEO@BGERUG51.BITNET

------------------------------

Date: Wed, 24 Aug 88 17:17 PST
From: HEARNE%wwu.edu@RELAY.CS.NET
Subject: Why Bruno was burned


For heaven's sake, Bruno was burned for butting up against
established authority.

Jim Hearne,
Computer Science Department,
Western Washington University,
Bellingham Washington

------------------------------

Date: 25 Aug 88 03:33:56 GMT
From: voder!pyramid!cbmvax!snark!eric@bloom-beacon.mit.edu  (Eric S.
      Raymond)
Subject: Science vs. 'Religion' -- not all religions have a problem

Perhaps Dr. Minsky's remarks were intemperate. But the responses of his
opponents make the error of identifying 'religion' with one particular
*style* of religion, the monotheist-dualist-antimaterialist kind that
happens to dominate Western culture.

Within the context of Judaism and the two most important Zoroastrian-
influenced religions (Christianity and Islam) it is essentially correct
to describe 'religion' as either a) opposed to science, or b) self-consciously
about things held to be metaphysically 'beyond' scientific inquiry.

These religions depend for critical parts of their belief systems on the
historicity of various 'miraculous' occurrences, and so must respond in one
of the above two ways to science's claim to even the *potential* of
universal explanatory power through the notion of unbreachable 'natural law'.

However, there are other kinds of 'religion' (underrepresented in this culture
at present) for which none of this is an issue. Some non-theistic varieties
of Buddhism, for example, are nearly pure psychological schemata with little
or nothing to say about cosmology (Zen is perhaps the best-known of these).

There are many other forms (collectively called 'mystery religions') in which
the religion is not at all concerned with what is 'true' in a physical-
confirmation sense, only what is mythopoetically effective for inducing certain
useful states of consciousness.

To people involved in the shared *experience* of a mystery religion or Zen-like
transformative mysticism, the whole science-vs.-'religion' controversy can seem
just plain irrelevant to what they're doing.

Someone operating from this stance might say: "The gods (or the Vedanta, or the
Logos, or whatever) are powerful in human minds -- who cares if they 'exist'
in a material sense or not?" At least one great Western thinker -- Carl Jung --
would have agreed. Religions come and go, but the archetypes are with us
always.

I bring all this up to point out that the 'religion-vs.-science' debate is a
good deal more parochial and culture-bound than either of the traditional sides
in it recognizes -- that scientists who get drawn into it often implicitly
accept the (usually Christian-inculcated) premise that the validity of a
religion hangs on its cosmological, historical and eschatological claims.

It doesn't have to be that way. I, for example, can testify from ten years
of experience that it is sanely possible to be both a hard-headed materialist
and an ecstatic mystic; both a philosophical atheist and an experiential
polytheist.

Further discussion (if any), however, should take place in talk.religion.misc,
and I have directed followups there.


--
      Eric S. Raymond                     (the mad mastermind of TMN-Netnews)
      UUCP: ..!{uunet,att,rutgers!vu-vlsi}!snark!eric  @nets: eric@snark.UUCP
      Post: 22 South Warren Avenue, Malvern, PA 19355  Phone:  (215)-296-5718

------------------------------

Date: Thu, 25 Aug 88 07:33 EST
From: Thomson Kuhn <KUHN@wharton.upenn.edu>
Subject: Linking Cogsci and Religion

For an incredibly tight linking of cognitive science and religion, see the
book by Julian Jaynes called The Origin of Consciousness in the Breakdown of
the Bicameral Mind.

Thomson Kuhn
The Wharton School

------------------------------

End of AIList Digest
********************

∂25-Aug-88  2259	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #67  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 25 Aug 88  22:58:55 PDT
Date: Thu 25 Aug 1988 23:20-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #67
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 26 Aug 1988       Volume 8 : Issue 67

 Today's Topics:

  English grammar: Open versus closed classes of words
  Logic: Are all reasoning systems inconsistent?
  Free Will: How to dispose of naive science types

----------------------------------------------------------------------

Date: Sat, 13 Aug 88 22:34:55 PDT
From: crocker@tis-w.arpa (Stephen D. Crocker)
Subject: open versus closed classes of words in English grammar

McGuire replied to Nagle's query about open versus closed classes of
words in English grammar, viz nouns, verbs, adjectives and adverbs are
open and conjunctions, articles, prepositions, etc. are closed.  He then
comments:

> While I'm familiar with this distinction, and think that it may have
> been around in linguistics for quite some while (Bernard Bloch maybe?),
> I don't remember it being used much. The only references that spring to
> mind are some studies in speech production and slips of the tongue done
> in the 70s by Anne Cunningham (she's a Brit though I'm not sure of her
> last name) and maybe Victoria Fromkin claiming that less errors are
> associated with closed class words and that they play some privileged role
> in speech_production/syntax/lexical_access/the_archetecture_of_the_mind.

I recall in the mid or late 60's reading about a parser built in the UK that
relied heavily on the closed classes -- I think the term was "function words".
I believe the parser determined which class the other words were in, noun,
verb, etc., solely by the slots created from the function words.  To that
parser, McGuire's four example sentences would be equivalent to

"Foo frobbed fie"
"Foo has frobbed fie"
"Foo might frob fie"
"Foo fums to frob fie"

The parser was exceedingly fast, but I don't remember any follow up from
this work.  If pressed, I can probably find a reference, but I suspect
many readers of this digest are more familiar with the work than I.
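
[Editorial sketch: the slot-driven classification described above can be made
concrete with a few lines of Python.  The closed-class list and the slot rules
here are invented for illustration -- they are not the UK parser's grammar --
but they show how content words get their class purely from the slots the
function words create, so nonsense stems like "frob" are handled as easily as
real verbs.]

FUNCTION_WORDS = {
    "the": "det", "a": "det", "has": "aux", "might": "aux",
    "to": "to", "of": "prep", "in": "prep",
}

def tag(sentence):
    words = sentence.lower().split()
    tags = []
    for w in words:
        if w in FUNCTION_WORDS:
            tags.append(FUNCTION_WORDS[w])      # closed class: look it up
        elif not tags:
            tags.append("noun")                 # sentence-initial (subject) slot
        elif tags[-1] in ("aux", "to"):
            tags.append("verb")                 # slot after an auxiliary or "to"
        elif tags[-1] in ("det", "prep", "verb"):
            tags.append("noun")                 # determiner, preposition, object slots
        else:
            tags.append("verb")                 # slot after a bare noun
    return list(zip(words, tags))

for s in ["Foo frobbed fie", "Foo has frobbed fie",
          "Foo might frob fie", "Foo fums to frob fie"]:
    print(tag(s))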

In the speech understanding work of the early 70's, I found it interesting
that the function words played a lesser role than might have been expected
because they tended to be unstressed when spoken and hence reduced in duration
and clarity.  I don't recall whether they played a major role in any of the
later systems.  It's evident that humans depend on these words and learn
new open class words from context created by a combination of the closed
class words and known meanings for the open class words elsewhere in the
sentence.  This suggests that one attribute to look for in truly mature
speech understanding systems is reliable "hearing" of function words.  I'd
be interested if anyone knows the current status of speech understanding
in this area.

Along somewhat separate lines, Balzer at ISI built a rudimentary parser for
English in the early 70's.  It was aimed at extracting formal program specs
from an English specification.  His key example was based heavily on
interpreting the closed classes and treating the open classes as variables.

------------------------------

Date: Mon, 15 Aug 88 17:42
From: HILLS%reston.unisys.com@RELAY.CS.NET
Subject: Re: English Grammar

In AI List V8 #35 John Nagle described a grammar which divided words into
four categories and requested a reference for the list of 'special' words.

This may be related to the work of Miller, Newman, and Friedman of Harvard.
In 1958 they proposed that words should be divided into two classes which they
defined as follows:

     We will call these two classes the "function words" and the "content
     words".  Function words include those which are traditionally called
     articles, prepositions, pronouns, conjunctions, and auxiliary verbs,
     plus certain irregular forms.  The function words have rather specific
     syntactic functions which must, by and large, be known individually
     to the speaker of English.  The content words include those which are
     traditionally called nouns, verbs, and adjectives, plus most of the
     adverbs.  It is relatively easy to add new content words to a language,
     but the set of function words is much more resistant to innovations.


The list of function words is included in the book: 'Elements of Software
Science' by Maurice H. Halstead, Elsevier, 1977.  This list contains about
330 words.  I suspect that the list of 'special words' sought by Nagle is
contained within this list of function words.

                           -- Fred Hills

------------------------------

Date: 16 Aug 1988 08:06:55 EDT (Tue)
From: Ralph Hartley <hartley@nrl-aic.arpa>
Subject: Re: Are all reasoning systems inconsistent?

Your problem lies in T2

>T2. Aa[P(s("~P(*)",a)) -> ~P(a)] ; If I can prove that I can't prove X,
>                                   then I can't prove X

This implies

Ea(~P(a))

(for any a, either P(s("~P(*)",a)) holds, in which case T2 yields ~P(a), or it
fails, in which case s("~P(*)",a) is itself such a witness), i.e. that the
system is consistent. Godel's 2nd (less well known) theorem
states that if it is possible to prove a system consistent within the system
then the system is NOT consistent. Therefore T2 cannot be a theorem in any
consistent system.

BTW - This is also a flaw in Hofstadter's reasoning about the prisoner's dilemma.
His argument goes as follows:
1. The other player uses the same reasoning as I do.
2. This reasoning produces a unique result (cooperate or defect but not both)
3. Therefore whatever I do he will do too.
4. So I should cooperate.

The problem, again, is that (1) and (2) imply that my logic is consistent -
therefore it is not.

                        Ralph Hartley
                        hartley@nrl-aic.ARPA

------------------------------

Date: 25 Aug 88 12:46:25 GMT
From: unido!sbsvax!yxoc@uunet.UU.NET (Ralf Treinen)
Subject: Re: Are all Reasoning Systems Inconsistent?


In a previous article, Jonathan Leivent writes:
> Here is a full version of the contradiction that I am claiming exists.
...
[ Q is the equality predicate, s is a substitution operation, "X" is the Godel ]
[ number of X                                                                  ]
> P(a) : the predicate of provability within this reasoning system
...
> Theorems:
>
> T1. AaAb[Q(a,b)P(a) = P(b)] ; just says that P behaves normally
>
> T2. Aa[P(s("~P(*)",a)) -> ~P(a)] ; If I can prove that I can't prove X, then I
>                                  can't prove X
>
> T3. If X can be proven within this reasoning system, then P("X") is true
[ "this reasoning system" is the original one together with (at least) T1,T2 ]
...
[ derives a contradiction by constructing a Godel number G, such that ~P(G)  ]
[ can be proven in the above system and then applying Theorem T3 ("step 5"). ]
...
> Perhaps the weak link in the contradiction is step 5, which is somewhat of a
> "meta" step.  What bothers me most is that there seems to be no formal way of
> writing T3, even though it seems to be obviously true
...

Theorem T3 is not correct. Just take the empty reasoning system, which does
not allow one to derive any theorem at all. The provability predicate for this
reasoning system is the constant predicate *false*. The formula ~P(G)
constructed above is provable in this system, but P("~P(G)") is false.

BTW: The empty reasoning system IS consistent.

--
------------------------------------------------------------------------------
EAN  :treinen%fb10vax.informatik.uni-saarland.dbp.de [ @relay.cs.net from US]
UUCP : ...!uunet!unido!sbsvax!treinen   | Ralf Treinen
        or treinen@sbsvax.UUCP          | Universitaet des Saarlandes
CSNET: treinen%sbsvax.uucp@Germany.CSnet| FB 10 - Informatik (Dept. of CS)
ARPA : treinen%sbsvax.uucp@uunet.UU.NET | Bau 36, Im Stadtwald 15
Phone: +49 681 302 2065                 | D-6600 Saarbruecken 11, West Germany

------------------------------

Date: 19 Aug 88 15:26:35 GMT
From: mcvax!ukc!cs.tcd.ie!tcdmath!dbell@uunet.uu.net  (Derek Bell)
Subject: Re: How to dispose of naive science types (short)

In article <388@u1100s.UUCP> castle@u1100s.UUCP (Deborah Smit) writes:
>Another big mistake is when scientists present hypothetical OR theoretical
>work under the title "FACT".  E.G. Evolution.

        All theories can be regarded in that light, since it takes an
infinite amount of evidence for one to be proved 100%. So, it all boils
down to:         1: What will someone accept as evidence?
                 2: How much/what kind will they take to be convinced,
                        if at all?

>the title theory, since they are not demonstrable, and do not fit with
>the facts shown by the fossil record (no intermediate forms -- before
>you flame, examine current facts, fossils previously believed to be
>intermediate have been debunked).  It certainly cannot be called FACT,
        I was at a talk here where a paleontologist showed examples of how
fossil trilobites of various subspecies changed within a subspecies,
thus presenting evidence for 'microevolution', i.e. evolution within a species.

>        When evolutionists cannot support their
>hypothesis by showing agreement with known facts, they resort to
>emotional mind-bashing (only foolish, gullible people don't believe
>in evolution).

        Whoa!!! Not all evolutionists use childish mind-bashing, and it is
not just evolutionists who do. Some creationists do too.

>  Just my two cents.  I enjoy reasonable theories,
>they truly unify what we observe, but I don't appreciate emotional
>outbursts on the part of those who can't give up their inaccurate
>hypotheses to go on to something better.
>               - Deborah Smit

        This I agree with totally. Rational debate is far far better
than hysterical slanging matches.

------------------------------

End of AIList Digest
********************

∂26-Aug-88  2133	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #68  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Aug 88  21:33:15 PDT
Date: Sat 27 Aug 1988 00:05-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #68
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 27 Aug 1988      Volume 8 : Issue 68

 Philosophy:

  Connectionist model for past tense formation in English verbs
  Two Points (ref AI Digests passim)
  Can we human being think two different things in parallel?
  Rates of change

----------------------------------------------------------------------

Date: 24 Aug 88 18:17:58 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Connectionist model for past tense formation in English verbs


[Editor's note - Steve Pinker (MIT), and Alan Prince (Brandeis)
co-authored an article in the journal 'Cognition', critiquing Rumelhart
& McClelland's model for past tense formation in English verbs.  This is
Stevan Harnad's critique of that critique.   - nick]


On Pinker & Prince on Rules & Learning

Steve: Having read your Cognition paper [28(1-2) 1988] and twice seen
your talk (latest at cogsci-88), I thought I'd point out what look like
some problems with the argument (as I understand it). In reading my
comments, please bear in mind that I am NOT a connectionist; I am on
record as a sceptic about connectionism's current accomplishments (and
how they are being interpreted and extrapolated) and as an agnostic
about its future possibilities.  (Because I think this issue is of
interest to the connectionist/AI community as a whole, I am branching a
copy of this challenge to connectionists and comp.ai.)

(1) An argument that pattern-associaters (henceforth "nets") cannot do
something in principle cannot be based on the fact that a particular net
(Rumelhart & McClelland [PDP Volume 2 1986 and MacWhinney 1987,
Erlbaum]) has not done it in practice.

(2) If the argument is that nets cannot learn past tense forms (from
ecologically valid samples) in principle, then it's the "in principle"
part that seems to be missing. For it certainly seems incorrect that past
tense formation is not learnable in principle. I know of no
poverty-of-the-stimulus argument for past tense formation. On the
contrary, the regularities you describe -- both in the irregulars and
the regulars -- are PRECISELY the kinds of invariances you would
expect a statistical pattern learner that was sensitive to higher
order correlations to be able to learn successfully. In particular, the
form-independent default option for the regulars should be readily
inducible from a representative sample. (This is without even
mentioning that surely no one imagines that past-tense formation is an
independent cognitive module; it is probably learned jointly with
other morphological regularities and irregularities, and there may
well be degrees-of-freedom-reducing cross-talk.)
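
[Editorial sketch: the claim that the regular "add -ed" default is
statistically inducible can be illustrated with a toy frequency-based
associator.  This is NOT the Rumelhart & McClelland net -- just invented
Python that tallies which stem-to-past transformation each word ending votes
for, and falls back on the overall majority (the regular suffix) for unseen
endings.  Note that it also over-regularizes "bring" to "brang".]

from collections import Counter, defaultdict

TRAIN = [
    ("walk", "walked"), ("jump", "jumped"), ("play", "played"),
    ("call", "called"), ("want", "wanted"), ("look", "looked"),
    ("sing", "sang"), ("ring", "rang"), ("go", "went"), ("eat", "ate"),
]

def rule(stem, past):
    """Describe the mapping as a (strip, add) suffix rule, e.g. ("", "ed")."""
    i = 0
    while i < min(len(stem), len(past)) and stem[i] == past[i]:
        i += 1
    return (stem[i:], past[i:])

votes = defaultdict(Counter)    # word ending -> rules seen with that ending
default = Counter()             # corpus-wide rule frequencies
for stem, past in TRAIN:
    votes[stem[-3:]][rule(stem, past)] += 1
    default[rule(stem, past)] += 1

def past_tense(stem):
    strip, add = (votes.get(stem[-3:]) or default).most_common(1)[0][0]
    return (stem[:len(stem) - len(strip)] if strip else stem) + add

for verb in ["walk", "frob", "sing", "bring"]:
    print(verb, "->", past_tense(verb))     # walked, frobbed, sang, brang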

(3) If the argument is only that nets cannot learn past tense forms without
rules, then the matter is somewhat vaguer and more equivocal, for
there are still ambiguities about what it is to be or represent a "rule."
At the least, there is the issue of "explicit" vs. "implicit"
representation of a rule, and the related Wittgensteinian distinction
between "knowing" a rule and merely being describable as behaving in
accordance with a rule. These are not crisp issues, and hence not a
solid basis for a principled critique. For example, it may well be
that what nets learn in order to form past tenses correctly is
describable as a rule, but not explicitly represented as one (as it
would be in a symbolic program); the rule may simply operate as a causal
I/O constraint. Ultimately, even conditional branching in a symbolic
program is implemented as a causal constraint; "if/then" is really
just an interpretation we can make of the software. The possibility of
making such systematic, decomposable semantic interpretations is, of course,
precisely what distinguishes the symbolic approach from the
connectionistic one (as Fodor/Pylyshyn argue). But at the level of a few
individual "rules," it is not clear that the higher-order interpretation AS
a formal rule, and all of its connotations, is justified. In any case, the
important distinction is that the net's "rules" are LEARNED from statistical
regularities in the data, rather than BUILT IN (as they are,
coincidentally, in both symbolic AI and poverty-of-the-stimulus-governed
linguistics). [The intermediate case of formally INFERRED rules does
not seem to be at issue here.]

So here are some questions:

(a) Do you believe that English past tense formation is NOT learnable
(except as "parameter settings" on an innate structure, from
impoverished data)? If so, what are the supporting arguments for that?

(b) If past tense formation IS learnable in the usual sense (i.e.,
by trial-and-error induction of regularities from the data sample), then do
you believe that it is specifically unlearnable by nets? If so, what
are the supporting arguments for that?

(c) If past tense formation IS learnable by nets, but only if the
invariance that the net learns and that comes to causally constrain its
successful performance is describable as a "rule," what's wrong with that?

Looking forward to your commentary on Lightfoot (in Behavioral and Brain
Sciences), where poverty-of-the-stimulus IS the explicit issue, -- best
wishes, Stevan Harnad
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: Thu, 25 Aug 88 10:51:01 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Two Points (ref AI Digests passim).

[a] More people died in the fire bombing of Dresden than in Hiroshima;
    the atom bomb is a more powerful image than napalm and hence we forget.
[b] With regard to what Einstein said, Heisenberg's uncertainty principle
    is also pertinent to "AI". The principle leads to the notion that the
    observer influences that which is observed. So how does this affect the
    observer who performs a self-analysis?

Gordon Joly.

------------------------------

Date: 25 Aug 88 14:39:01 GMT
From: hartung@nprdc.arpa (Jeff Hartung)
Reply-to: hartung@nprdc.arpa (Jeff Hartung)
Subject: Re: Can we human being think two different things in
         parallel?


In a previous article, Ken Johnson writes:
>>Can we human being think two different things in parallel?
>
>I think most people have had the experience of suddenly gaining insight
>into the solution of a problem they last deliberately chewed over a few
>hours or days previously.  I'd say this was evidence for the brain's
>ability to work at two or more (?) high-order tasks at the same time.
>But I look forward to reading what Real Psychologists say.

The above may demonstrate that the brain can "process" two jobs
simultaneously, but is this what we mean by "think"?  If so, this still
doesn't demonstrate adequately that parallel processing is what is
going on.  It may be equally true that serial processing on several
jobs is happening, only some processing is below the threshold of
awareness.  Or, there may be parallel processing, but with a limited
number of processes at the level of awareness of the "thinker".

On the other hand, if we take "thinking" to mean an activity which the
"thinker" is aware of, at least in that it is going on, then there is
strong evidence that there is only limited capacity to attend to
multiple tasks simultaneously, but there is no final conclusion on this
ability as far as I know.  Many studies of the ability to attend to
multiple tasks or perceptual stimuli simultaneously are still being
done.

--Jeff Hartung--
 ARPA - hartung@nprdc.arpa   hartung@sdics.ucsd.edu
 UUCP - !ucsd!nprdc!hartung   !ucsd!sdics!hartung

------------------------------

Date: Fri, 26 Aug 88 16:25:24 EDT
From: <mcharity@ATHENA.MIT.EDU>
Subject: Rates of change

In a previous article, John Nagle writes:

>... Look at Marc
>Raibert's papers.  He's doing very significant work on legged locomotion.
>Progress is slow; ...
>Along the way
>are endless struggles with hydraulics, pneumatics, gyros, real-time control
>systems, and mechanical linkages.  (I spent the summer of '87 overhauling
>an electrohydraulic robot, and I'm now designing a robot vehicle.  I can
>sympathise.)

>... It's depressing to think that it might take
>a century to work up to a human-level AI from the bottom.  Ants by 2000,
>mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
>and it gives an idea of what might be a realistic rate of progress.

>       I think it's going to be a long haul.  But then, so was physics.
>So was chemistry.  For that matter, so was electrical engineering.  We
>can but push onward.  Maybe someone will find the Philosopher's Stone.
>If not, we will get there the hard way.  Eventually.

Continued use of a bottom-up experimental approach to AI need not
demand continued use of the current experimental MEDIUM which so
constrains the rate of change.

While today one may be better off working directly with mechanical
systems, rather than with computational simulations of mechanical
systems, it is unclear that this will be the case in 5 or 10 years.

If a summer's overhaul could be a week's hacking, you have an order of
magnitude acceleration.  If your tools develop similarly, the _rate_
of change is sharply exponential.

Science, like engineering, is limited by the feedback lags of its
development cycles.  Many (most?) of these lags are in information
handling.  Considering our increasing competence, are current
challenges so much vaster than past ones as to require similar periods of
calendar time?

Mitchell Charity

------------------------------

End of AIList Digest
********************

∂26-Aug-88  2346	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #69  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Aug 88  23:46:44 PDT
Date: Sat 27 Aug 1988 00:30-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #69
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 27 Aug 1988      Volume 8 : Issue 69

 Seminars:

  Acquiring a Model of the User's Beliefs
  Software Reusability: An Intelligent Approach
  Localized Event-based Planning For Multiagent Domains - Amy Lansky
  Describing Program Transformers with Higher-order Unification

----------------------------------------------------------------------

Date: Fri, 12 Aug 88 11:16:43 EDT
From: finin@PRC.Unisys.COM
Subject: Acquiring a Model of the User's Beliefs ...


                      Ph.D. Dissertation Defense

            Acquiring a Model of the User's Beliefs from
                    a Cooperative Advisory Dialogue

                             Robert Kass

The ability of expert systems to explain their own reasoning is often
cited as their most important feature.  Unfortunately, the quality of
these explanations is frequently poor.  In this talk, I will argue
that for expert systems to produce good explanations, they must have
available a model of the user's beliefs about the system domain.

Obtaining such a model is not easy, however.  Traditional approaches
have depended on the explicit hand-coding of a large number of
assumptions about the beliefs of anticipated system users -- a tedious
and error-prone process.  In contrast, I will present an implicit
method for acquiring a user model, embodied in a set of implicit user
model acquisition rules.  These rules, developed from the study of a
large number of transcripts of people seeking advice from a human
expert, represent likely inferences that can be made about a user's
beliefs -- based on the system-user dialogue and the dialogue
participants' previous beliefs.  This implicit acquisition method is
capable of quickly building a substantial model of the user's beliefs;
a model sufficient to support the generation of expert system
explanations tailored to individual users.  Furthermore, the
acquisition rules are domain independent, providing a foundation for a
general user modelling facility for a variety of interactive systems.

Committee:      Tim Finin (Advisor)
                Aravind Joshi (Chairman)
                Elaine Rich (MCC)
                Bonnie Webber

Date:           Monday, August 15, 1988
Time:           3:00 - 5:00 p.m.
Location:       554 Moore, University of Pennsylvania

------------------------------

Date: Mon, 15 Aug 88 10:59:12 EDT
From: finin@PRC.Unisys.COM
Subject: Software Reusability: An Intelligent Approach (UNISYS)


            Software Reusability: An Intelligent Approach

                 Mark A. Simos and James Solderitsch
                    Software Technology Department
                     UNISYS Paoli Research Center

                           GVL-2 Auditorium
                       Unisys Great Valley Labs
                     12:00-1:00, 15 August 1988,

The topic of software reusability has been at the forefront of
software engineering research for quite some time, but as yet has
failed to live up to initial expectations.  Part of the reason for
this failure was early and lingering confusion about the subjects of
software engineering and software reusability, and the belief that the
proper software engineering methodology, and perhaps even the right
programming language, would naturally and effortlessly lead to the
creation of reusable software.

Recent research has begun to pinpoint the unique issues relating
explicitly to software reusability.  This talk describes a practical
approach to software reuse based on the incremental development of
intelligent libraries of reusable components.  Such libraries, or
repositories, are structured around explicit domain models which are
knowledge-based frameworks providing taxonomic representations of
specific application domains.  These frameworks provide a uniform view
of both static software components and generative capabilities, and
contain tools to actively guide users in browsing among and selecting
existing components, or classifying and qualifying new candidate
components for the repository.

After an introduction to some of the essential issues of software
reusability, we present some background motivation for a
domain-specific focus to reusability.  We next discuss the use of
program generation and knowledge-based techniques that support
domain-specificity and sketch the evolution of the development of a
reuse library based on these techniques.  We close with a description
of our current project that is directed at developing the basic
Reusability Library Framework (RLF) technology necessary for the
development of such domain-specific libraries.

The RLF project is sponsored by the STARS Ada Foundations Technology
program (contract number N00014-88-C-2052).  Specific objectives of
the RLF project include providing a set of knowledge-based components
in Ada that support the creation and maintenance of domain models, and
the development, using this platform, of general library tools for
component testing, qualification and retrieval.

------------------------------

Date: Tue, 23 Aug 88 09:33:11 PDT
From: CHIN%PLU@ames-io.ARPA
Subject: Localized Event-based Planning For Multiagent Domains - Amy
         Lansky


***************************************************************************
              National Aeronautics and Space Administration
                         Ames Research Center

                        SEMINAR ANNOUNCEMENT


SPEAKER:   Amy L. Lansky
           SRI International

TOPIC:    LOCALIZED EVENT-BASED PLANNING FOR MULTIAGENT DOMAINS

ABSTRACT:

This talk will present the GEM concurrency model and GEMPLAN, a multiagent
planner based on this model.  Unlike standard state-based AI representations,
GEM is unique in its explicit emphasis on events and domain structure --
world activity is modeled in terms of events occurring within a set of regions.
Event-based temporal logic constraints are then associated with each region
to delimit legal domain behavior.  GEM's emphasis on constraints is directly
reflected in the architecture of the GEMPLAN planner -- it can be viewed
as a general purposed constraint satisfaction facility.  Its task is to
construct a network of interrelated events that satisfies all applicable
regional constraints and also achieves some stated goal.

A key focus of our work has been on the use of --localized-- techniques for
domain representation and reasoning.  Such techniques partition domain
descriptions and reasoning tasks according to the regions of activity within
a domain.  For example, GEM localizes the applicability of domain constraints
and also imposes additional "locality constraints" based on domain structure.
This use of locality helps alleviate several aspects of the frame problem
for multiagent domains.  The GEMPLAN planner also reflects the use of locality;
its constraint satisfaction search space is subdivided into regional planning
search spaces.  GEMPLAN can pinpoint and rectify interactions among these
regional search spaces, thereby reducing the burden of "interaction analysis"
ubiquitous to most planning systems.
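
[Editorial sketch: the "locality" idea -- each region owning its constraints
and re-checking only the events tagged with that region -- can be conveyed
with a small Python toy that is in no way GEMPLAN itself; the plan, regions
and constraint below are invented.]

plan = [
    {"id": 1, "region": "armA", "action": "pick",  "t": 0},
    {"id": 2, "region": "armA", "action": "place", "t": 1},
    {"id": 3, "region": "armB", "action": "pick",  "t": 0},
]

def pick_before_place(events):
    """Regional constraint: every pick must be followed by a later place."""
    return all(any(e2["action"] == "place" and e2["t"] > e1["t"] for e2 in events)
               for e1 in events if e1["action"] == "pick")

CONSTRAINTS = {"armA": [pick_before_place], "armB": [pick_before_place]}

def check(plan):
    result = {}
    for region, constraints in CONSTRAINTS.items():
        local = [e for e in plan if e["region"] == region]   # locality: only this region's events
        result[region] = all(c(local) for c in constraints)
    return result

print(check(plan))   # {'armA': True, 'armB': False} -- armB still needs a place event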


DATE: Wednesday       TIME: 2:00 pm - 3:30 pm     BLDG. 244   Room 103
      August 31, 1988       --------------


POINT OF CONTACT: Marlene Chin   PHONE NUMBER: (415) 694-6527
     NET ADDRESS: chin@pluto.arc.nasa.gov

***************************************************************************

VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18.  Do not
use the Navy Main Gate.

Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance.  Submit requests to the point of
contact indicated above.  Non-citizens must register at the Visitor
Reception Building.  Permanent Residents are required to show Alien
Registration Card at the time of registration.
***************************************************************************

------------------------------

Date: Fri, 26 Aug 88 09:47:31 EDT
From: finin@PRC.Unisys.COM
Subject: Describing Program Transformers with Higher-order Unification


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER

    Describing Program Transformers with Higher-order Unification

                            John J. Hannan
                   Computer and Information Science
                      University of Pennsylvania


Source-to-source program transformers belong to the class of
meta-programs that manipulate programs as objects. It has previously
been argued that a higher-order extension of Prolog, such as
Lambda-Prolog, makes a suitable implementation language for such
meta-programs. In this paper, we consider this claim in more detail.
In Lambda-Prolog, object-level programs and program schemata can be
represented using simply typed lambda-terms and higher-order
(functional) variables. Unification of these lambda-terms, called
higher-order unification, can elegantly describe several important
meta-level operations on programs. We detail some properties of
higher-order unification that make it suitable for analyzing program
structures. We then present (in Lambda-Prolog) the specification of
several simple program transformers and demonstrate how these can be
combined to yield more general transformers. With the depth-first
control strategy of Lambda-Prolog for both clause selection and
unifier selection, all the above-mentioned specifications can be and
have been executed and tested.



                     2:00 pm Wednesday, August 3
                     Unisys Paoli Research Center
                         BIC Conference Room
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

End of AIList Digest
********************

∂27-Aug-88  1740	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #70  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 27 Aug 88  17:40:12 PDT
Date: Sat 27 Aug 1988 20:15-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #70
To: AIList@AI.AI.MIT.EDU


AIList Digest            Sunday, 28 Aug 1988       Volume 8 : Issue 70

 Queries:

  Philosophy of mathematics references
  Expert-Systems in Power Station
  Expert Systems for Statistical Analysis

 Responses:

  How do I learn about AI, Prolog, and/or Lisp
  Speech recognition using neural nets
  Categories & combinators

----------------------------------------------------------------------

Date: 24 Aug 88 12:22:58 GMT
From: steve@hubcap.clemson.edu
Subject: Philosophy of mathematics references

I am trying to prepare an article which relates computer science's use of
logic with the ground rules set down for mathematics by both the
philosophers and logicians.  I would like to know your favorite references
to this topic.

Other topics of interest would be ``nonstandard'' systems and their rules
(e.g., stochastic, quantum) or viewpoints (e.g., connectionist).

Please send direct as I do not monitor many of the groups this request seems
appropriate for.


I will summarize and post.  Thanks.
--
Steve (really "D. E.") Stevenson           steve@hubcap.clemson.edu
Department of Computer Science,            (803)656-5880.mabell
Clemson University, Clemson, SC 29634-1906

------------------------------

Date: Thu, 25 Aug 88 16:51:16 +0200 (Central European Sommer Time)
From: XBR4DC5V%DDATHD21.BITNET@MITVMA.MIT.EDU (Karl_josef Junglas)
Subject: Expert-Systems in Power Station

Please send me information about expert systems in power stations.

------------------------------

Date: Thu, 25 Aug 88 21:21 CDT
From: <KDM2520%TAMSIGMA.BITNET@MITVMA.MIT.EDU>
Subject: Expert Systems for Statistical Analysis


If anyone on the list is aware of any commercial expert systems which do
statistical analysis, I sure would appreciate the information.  I am looking
for expert statistical analysis packages which process data, analyze
correlation etc., forecast trends, detect regeneration cycles, and so on.
I have looked through AI Magazine, IEEE Expert and several Computer journals
and I couldn't see any such product advertisements.  Could someone who is
aware of such expert statistical analysis packages send me the info please?

Thank you.                                              MURALI@TAMLSR (bitnet)

------------------------------

Date: 23 Aug 88 19:29:47 GMT
From: uhccux!todd@humu.nosc.mil  (Todd Ogasawara)
Subject: Re: How do I learn about AI, Prolog, and/or Lisp

In article <952@scovert.sco.COM> johnwe (John Weber, Celtic sysmom) writes:
>In article <398@mfgfoc.UUCP> mike@mfgfoc.UUCP (Mike Thompson) writes:
>>1.  I have an IBM/XT at home with the newest version of TURBO PROLOG.
>>Can I use this system to gain an understanding of AI applications
>>such as expert systems?  If so, what books can help me?  I have not

>       for UN*X. Arity Prolog is a good commercial prolog for
>       the IBM PCish boxes.

I use, and like, Arity/Prolog a lot.  I have both the interpreter and
compiler.  However, I would advise against trying to use it on a
4.77MHz IBM PC type box.  For yucks, I loaded API 5.x on my aged PC
when I received the most recent update.  The latest version of Arity is
very big and is very slow on a 4.77MHz PC.  I found the speed to be
almost acceptable on a 9.54MHz V30 based NEC Multispeed though.  And,
it is a viable development tool on a 10MHz 80286 based AT-clone.

>       Lisp and Prolog address different language issues, and are
>       both good and useful languages. ==> Prolog is quite different
>       from most "normal" languages, and may pose certain learning
>       difficulties. <== My personal favorite Lisps are Kyoto Common

I think the same is said of LISP.  I use both LISP and Prolog depending
on what I am working on.  My recollection is that Prolog was easier to
learn and allowed me to do the things it does best very quickly
(manipulate data with database-like functions, pattern matching,
etc.).  I also found that when I needed to manipulate MIDI devices
(Musical Instrument Digital Interface), LISP felt very "natural"
in that list-of-notes environment.

I think that people who are surveying what is out there should at least
investigate both LISP and Prolog and decide which language fits their
needs best.  In my case, it was both, depending on what I was doing.

--
Todd Ogasawara, U. of Hawaii Faculty Development Program
UUCP:           {uunet,ucbvax,dcdwest}!ucsd!nosc!uhccux!todd
ARPA:           uhccux!todd@nosc.MIL            BITNET: todd@uhccux
INTERNET:       todd@uhccux.UHCC.HAWAII.EDU <==I'm told this rarely works

------------------------------

Date: 26 Aug 88 21:43:10 GMT
From: att!chinet!mcdchg!ditka!nfsun!kgeisel@bloom-beacon.mit.edu 
      (kurt geisel)
Subject: Re: Speech rec. using neural nets

Teuvo Kohonen describes success at Helsinki University with a
speaker-independent neural system which recognizes phonemes (the box spits
out phonemes, not words - you would still need a sophisticated parsing stage)
in the article "The 'Neural' Phonetic Typewriter" in the March 1988 issue of
the IEEE's _Computer_.

+--------------------------------------------------------------------------+
| Kurt Geisel, Intelligent Technology Group, Inc.                          |
| Bix: kgeisel                                                             |
| ARPA: kgeisel%nfsun@uunet.uu.net            US Snail:                    |
| UUCP: uunet!nfsun!kgeisel                    65 Lambeth Dr.              |
|                                              Pittsburgh, PA 15241        |
| If a rule fires and no one sees it, did it really fire?                  |
+--------------------------------------------------------------------------+

------------------------------

Date: 27 Aug 88 17:46:41 GMT
From: markh@csd4.milw.wisc.edu  (Mark William Hopkins)
Subject: Categories & combinators

If you are familiar with combinators, I can give a very brief summary of
what category theory is about:

     A category is a typed combinator system with the combinators B (for
composition) and I (for identity).

In general, there is a very close relation between typed combinators
(the typed lambda calculus) and categories.
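
[Editorial gloss, spelling out the standard typings behind this summary:
objects are types, a morphism f : X -> Y is a typed term, and the two
combinators must satisfy the usual category laws.]

\[
  \mathbf{I} : X \to X, \qquad
  \mathbf{B} : (Y \to Z) \to (X \to Y) \to (X \to Z), \qquad
  \mathbf{B}\,f\,g = f \circ g .
\]
\[
  \mathbf{B}\,f\,\mathbf{I} = f, \qquad
  \mathbf{B}\,\mathbf{I}\,g = g, \qquad
  \mathbf{B}\,f\,(\mathbf{B}\,g\,h) = \mathbf{B}\,(\mathbf{B}\,f\,g)\,h .
\]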

------------------------------

End of AIList Digest
********************

∂29-Aug-88  2028	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #71  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 29 Aug 88  20:28:08 PDT
Date: Mon 29 Aug 1988 23:09-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #71
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 30 Aug 1988      Volume 8 : Issue 71

 Query:
  Modelling of spatial knowledge, Dr. Benjamin Kuipers

 Responses:
  Category Theory in AI
  How do I learn about AI, Prolog, and/or Lisp?
  Speech recognition with neural nets
  Expert Systems for Statistical Analysis

----------------------------------------------------------------------

Date: Sun, 28 Aug 88 12:56 EDT
From: WHANG@HULAW1.HARVARD.EDU
Subject: Request on the modelling of spatial knowledge!

Hi, there!
        Does anybody know the computer mail address of
Dr. Benjamin Kuipers? He was a profesor in
the Department of Mathematics at Tufts University
But I am not sure he is still there. I'd like to get
some information about his research on the "modelling of
spatial knowledge". Does anybody know about the "TOUR
model" of the cognitive spatial description? Of if you have
some information in this line of research;
the "reapresentation and modelling of spatial knowledge",
please, let me know!

                whang at husc3 ( whang@husc3 )  BITNET
                whang@husc3.harvard.edu (ARPANET)

        Sang-Min Whang

        Department of Psychology
        33 Kirkland St,
        Harvard University
        Cambridge, MA 02138

Thank you for your attention!

Sang-Min.

------------------------------

Date: 22 Aug 88 15:35:13 GMT
From: linus!philabs!hen3ry!dpb@husc6.harvard.edu  (Paul Benjamin)
Subject: Re: Category Theory in AI

In a previous article, Jack Campin writes:
>I can't imagine what category theory has to contribute to knowledge
>representation (though I can just about imagine it helping to describe
>neural nets in a more abstract way). Can the philabs people say more
>about what they're up to?

Well, not really, in a public forum.  But Mike Lowry of the Kestrel
Institute has pointed out that a representation can be viewed as
a category, and a shift of representation as a morphism.  The
question of whether this insight is very productive is open, but at
least it gives us a formal notion of representation, and we've
built on this some formal notions of abstraction and learning.
We'll let you know if this turns out to be fruitful.

Paul Benjamin

------------------------------

Date: 22 Aug 88 21:03:24 GMT
From: sco!johnwe@uunet.uu.net  (John Weber, Celtic sysmom)
Subject: Re: How do I learn about AI, Prolog, and/or Lisp

In article <398@mfgfoc.UUCP> mike@mfgfoc.UUCP (Mike Thompson) writes:
<...>
>I have three question which I hope one of you can answer:
>
>1.  I have an IBM/XT at home with the newest version of TURBO PROLOG.
>Can I use this system to gain an understanding of AI applications
>such as expert systems?  If so, what books can help me?  I have not
>seen Turbo Prolog mentioned in this newsgroup and I fear that
>it is considered by experts to be a toy Prolog or an implementation
>so neutered as to be worthless.

        (Creak...  Damn, this asbestos suit is getting stiff...
        ZIP!  Humm...  Enough nitrogen.  Hisssss... POP! Foosh...)

        My experience with Turbo "Prolog" was extremely negative.  It
        may be a useful language, but I kinda doubt it.  It doesn't
        support such things as asserting predicates into the data
        base, the syntax isn't C&M, and it is strongly typed.  It
        is also extremely slow.

        (Click.)

        If you can get a hold of C-Prolog or SB-prolog, they are
        quite acceptable and useful implementations.  These are
        for UN*X. Arity Prolog is a good commercial prolog for
        the IBM PCish boxes.

>2.  Does anyone know of classes offered in my area (I live in Los Altos,
>California) at local colleges which would teach me Prolog?  I have already
>checked local community colleges, but their classes are only on
>languages such as Fortran, Cobol, Pascal or 'C'.  Would I be better taking
>a more general class on AI instead of a specific language?  Should I
>consider Lisp over Prolog?  (It came with GNU Emacs and is available on
>my Unix system at work.)

        Lisp and Prolog address different language issues, and are
        both good and useful languages.  Prolog is quite different
        from most "normal" languages, and may pose certain learning
        difficulties.  My personal favorite Lisps are Kyoto Common
        Lisp and MIT C-Scheme.  They are for UN*X, again.  There
        is a Scheme dialect for Macs, but I've never played with it.
        Microsoft has a Lisp for MS-DOS (supposedly it is Common
        Lisp, but again, I haven't played with it).  Emacs Lisp
        is useful in the context of Emacs, but I don't think it
        would make a good way to learn lisp.

        I personally like Lisp more than I like Prolog, but that
        is a taste thing.  Lisp can also be much faster.

        Oh, are you on a 4.* BSD box? If so, there may be Franz Lisp
        floating around your bin directories.  Sun also has a really
        good Lisp package (or so I'm told).

        I thought De Anza Jr. College offered an AI class which
        taught Lisp, but it's been a while since I took a class
        there.

>3.  What is the best way to get introduced to the AI field?  I'm I
>taking the right approach?  Any comments would be appreciated.
>
>Thanks in advance.
>

        No sweat.

>Mike Thompson
>
>---------------------------------------------------------------------------
>Michael P. Thompson                      FOCUS Semiconductor Systems, Inc.
>net: (sun!daver!mfgfoc!engfoc!mike)      570 Maude Court
>att: (408) 738-0600                      Sunnyvale, CA  94086 USA

        Please note:  these are my own opinions, and in no way reflect
        the opinions of my employers.
--

#############################################################################
#                                      #                                    #
# "In the fields of Hell,              # John Weber, ...!uunet!sco!johnwe   #
#  where the grass grows high,         #     @ucscc.ucsc.EDU:johnwe@sco.COM #
#  are the graves of dreams,           #                                    #
#  allowed to die."  -- Author unknown #  Celtic sysmom  with an ATTITUDE!  #
#                                      #  Any opinions expressed are my own #
#############################################################################

------------------------------

Date: Tue, 23 Aug 88 16:32:36 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: Speech recognition with neural nets

In AIList Digest V8 #63,
att!chinet!mcdchg!clyde!watmath!watvlsi!watale!dixit@bloom-beacon.mit.edu
(Nibha Dixit) writes:

>Subject: Speech rec. using neural nets

>Is anybody out there looking at speech recognition using neural
>networks? There has been some amount of work done in pattern
>recognition for images, but is there anything specific being done
>about speech?

At the Helsinki University of Technology, in the Department of Technical
Physics, the group of Professor Teuvo Kohonen has been studying the
use of neural nets for speech recognition for several years.

Professor Kohonen gave a talk on their results at the Finnish AI
symposium this year.  They have an experimental system which uses a
neural net board in a PC.  I cannot remember whether the paper is
written in English or in Finnish, but should you wish to get the
symposium proceedings, contact

        Finnish Artificial Intelligence Society (FAIS)
        c/o Dr Antti Hautamaeki
        HM & V Research
        Helsinki, Finland

I understand Kohonen's results are comparable to those of other approaches
to speech recognition.

--- Andy

------------------------------

Date: Sun, 28 Aug 1988 21:05-EDT
From: Kai-Fu.Lee@SPEECH2.CS.CMU.EDU
Subject: Speech rec. using neural nets

In response to Nibha Dixit's question about speech recognition using
neural networks, I would recommend the following two articles by
Richard Lippmann:

An Introduction to Computing with Neural Nets, IEEE ASSP Magazine,
        Vol. 4, No. 2, April 1987.
Neural Nets for Computing, IEEE International Conference on Acoustics,
        Speech, and Signal Processing (ICASSP), April, 1988.

The ICASSP conference proceedings contain quite a few interesting
papers on speech recognition with neural networks.

Kai-Fu Lee
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

------------------------------

Date: Mon, 29 Aug 88 11:19:19 EST
From: cik@l.cc.purdue.edu (Herman Rubin)
Subject: Re: Expert Systems for Statistical Analysis


In a previous article, KDM2520@TAMSIGMA.BITNET writes:

> If anyone on the list is aware of any commercial expert systems which do
> statistical analysis, I sure would appreciate the information.  I am looking
> for expert statistical analysis packages which process data, analyze
> correlation etc., forecast trends, detect regeneration cycles, and so on.
> I have looked through AI Magazine, IEEE Expert and several Computer journals
> and I couldn't see any such product advertisements.  Could someone who is
> aware of such expert statistical analysis packages send me the info please?
>
> Thank you.                                              MURALI@TAMLSR (bitnet)

There are things that a computer is capable of doing, but this is not one of
them.  Statistics is not a black box into which one can put data and come out
with the state of the universe.

To analyze a problem, it is necessary for the user to input a model, or better,
a collection of models.  The user must realize that many assumptions must be
made.  It is advisable to have a good mathematical statistician available to
point out the consequences of the model which the user does not realize have
been inserted.  Analyze correlation indeed!  It is extremely rare that
correlation has anything to do with the real problem.

In addition, the user must have an evaluation of the consequences of an
incorrect action.  Massive statistical uncertainty may be irrelevant if the
resulting action is unaffected, and small uncertainty may be very important
if the effects of a wrong action are sufficiently great.  I personally have
worked on this problem, and the difficulties are major.
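
[To make the point about losses concrete, here is a minimal sketch of the
standard decision-theoretic bookkeeping; the symbols are purely
illustrative.  Suppose there are two actions, $a_1$ and $a_2$, the
probability that the true state of affairs favours $a_1$ is $p$, and the
losses for choosing wrongly are $C_1$ and $C_2$ respectively.  Then the
expected losses are

\[
    E[L(a_1)] = (1 - p)\,C_1, \qquad  E[L(a_2)] = p\,C_2 ,
\]

so $a_1$ is preferred exactly when $(1 - p)\,C_1 < p\,C_2$.  Even with
$p = 0.99$ (very little uncertainty), $a_1$ is the wrong choice if
$C_1 > 99\,C_2$; with $p = 0.6$ (much more uncertainty), $a_1$ remains the
right choice as long as $C_1 < 1.5\,C_2$.  The uncertainty by itself
decides nothing without the losses.]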

Statistical packages of the kind that can use this input, when well
formulated, are still in the development stage.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

------------------------------

End of AIList Digest
********************

∂29-Aug-88  2305	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #72  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 29 Aug 88  23:05:26 PDT
Date: Mon 29 Aug 1988 23:15-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #72
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 30 Aug 1988      Volume 8 : Issue 72

 Religion:

  Science, lawfulness, a (the?) god
  Backward path and religions
  Not Quite Re: The Ignorant Assumption
  The Ignorant assumption
  Giordano Bruno
  God and the Universe
  Pseudo-science strikes again!

----------------------------------------------------------------------

Date: 26 Aug 88 02:22:02 GMT
From: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Reply-to: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Subject: Re: science, lawfulness, a (the?) god


In a previous article, YLIKOSKI@FINFUN.BITNET writes:
: In AIList Digest   V8 #54, T. Michael O'Leary <HI.OLeary@MCC.COM>
: presents the following quotation (without mentioning who originally
: wrote it):

: >     >Science, though not scientists (unfortunately), rejects the
: >     >validity of religion: it requires that reality is in some sense
: >     >utterly lawful, and that the unlawful, i.e. god, has no place.

I did.

: I would say that a God need not be unlawful.  A counterexample of
: some kind could be a line by Einstein: I think he said that the
: regularity of the structure of the universe reflects an intellect.  (I
: cannot remember the exact form of the quotation, but I think the idea
: was this.)

"Lawful" does not mean "following, by choice, law", rather, it
means: "constrained by law".  However, religion posits "god" or
"the absolute" or what have you as that which is beyond, above,
determines, flouts, or whatever adjective you like, natural law.
This is essential to religion.

And the "quotation" from Einstein does not serve as a
counterexample; it is just a restatement of the argument from
design.  This argument goes: "the universe appears to have been
designed, therefore there was a designer.  I shall call it god."
How silly!  In its refined form, this argument posits god as a
"primary cause": this makes god "beyond" natural law, as an
explanation for natural law.  It is trivially refuted by pointing
out that it begs the question.  (If the universe requires a
cause, why shouldn't god require a cause?  And if not, why
presume god anyway?)

---

While I am wasting bandwidth religion-trashing, I'll share some
E-mail I received the other day.  I will include the text of it
here, but I am stripping out the identifying marks so as to not
further embarrass the author.

: You are offbase in your premise. Religion (for lack of a much better term)
: is *not* based on that which is unknowable.  It is simply that it is based
: on revealed knowledge/information from God.

Note the confusion in this individual: he talks about "revealed
knowledge" as if it had some relationship to knowledge; however,
there is *no* relationship.  By what means do I distinguish this
"revealed knowledge" from an LSD overdose?  If I am to depend
wholly on divine revelation, then I know *nothing*.  If not, then
I must reject "revealed knowledge" in favor of evidence.  This is
all elementary philosophy, to which religion seems to have
blinded that author.

:                                            This knowledge transcends human
: intellect and is not deducible via human intellect.

This translates to: "this knowledge is unknowable".

:                                                      This should not present a
: problem for you as Quantum Mechanics has demonstrated that the Universe does
: not operate via a human understandable system of logic.

And this is simple ignorance. Not to mention self-contradictory.

---

This individual has managed to illustrate in one very short note
*exactly* why religion has *no* place in scientific discussion:
the use of religion perverts reasoning by substituting "revealed
knowledge" for evidence, requires the unknowable as part of
reasoning, and uses ignorance as its justification.


---
Bill
novavax!proxftl!bill

------------------------------

Date: 26 Aug 88 10:20:30 GMT
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Reply-to: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: Re: backward path and religions


In article <19880826025229.6.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
LEO@BGERUG51.BITNET writes:
>Secondly, consider a self-learning, self-organizing neural network.
>Furthermore, suppose this system is searching for answers to questions in a
>field from which it has almost no knowledge. In this case, the system might
>ask for things that it can never find. But, because of the self-learning,
>self-organizing character, it will build answers, imaginary ones, if it
>keeps asking long enough. In my opinion, this is the essence of religions
>and superstitions. I presume that the number of layers or the 'distance'
>between the sense perception and the abstract thinking level is too big.

I'm canny enough not to ask what a "self-learning" system is ...
"Building imaginary answers" sounds like hypothesis formation in general.
This is the essence of science!  Or rather, science = making up stories
+ trying to knock down other people's stories.

Does anyone seriously suppose that the number of layers between sense
perceptions and SuperString theory is small?  A range of diseases was
attributed to "filterable viruses" -- "virus" just being a word meaning
"poison, venom" -- on what really amounted to a stubborn faith that the
germ theory of disease could be extended beyond the range of sense data
years before viruses were "observed".  Popular beliefs about the origins
of life are based on a very long series of inferences (and what is more,
as Cairns-Smith points out, are quite incompatible with the known
behaviour of the chemicals in question).

There is a serious illusion in talking about modern science: we read
instruments at least as much through theories as through our eyes, and
mistake remote inferences "5 volts across these terminals" for sense
data.

To be iconoclastic, I'd like to suggest that the main difference between
societies in which science dominates and ones in which superstition
dominates is that the former have a sufficient surplus that they can
AFFORD to check their hypotheses.  In society X, there are such large
surpluses that the society can afford to force thousands of farmers out
of business in the interests of fighting inflation.  Society X can afford
a lot of agricultural experiments.  In society Y, there are no surpluses,
so farmer Z continues to put offerings in the spirit-house, because if he
tested his belief (by not making offerings) and he was wrong, it would
mean disaster.  Society Y is not going to do much science.

To put it bluntly, if the risk from examining a practice is greater than
the risk from continuing it, it is _RATIONAL_ not to examine it.  This is
the kind of thing that ethological and anthropological studies should be
able to illuminate:  when will an animal explore new territory as opposed
to staying in its home range (how does the animal's "knowledge" of the
availability of food in the home range affect this), is there a detectable
relationship between the "rigidity" of a society and its surpluses?

I don't think that neural nets as such have anything to do with the case.

------------------------------

Date: Fri Aug 26 09:18:59 EDT 1988
From: sas@BBN.COM
Subject: Not Quite Re: The Ignorant Assumption

Gilbert Cockton's comment:

        Admittedly they only murder rival research rather than rival
        researchers.  Stakes don't have to be made from wood :-<

reminded me of a story I read in the letters column of Sky and
Telescope last year.

Apparently, one powerful researcher was dead set against funding a
particular objective lens design and issued a statement that, not only
would he fight funding for the lens, but that he would fight funding
to any individual who so much as put in a good word for it.
Interestingly, Charles Babbage felt this was a bit unfair and that a
good design shouldn't be put down so arbitrarily, and he made his
sentiments known.  Sure enough, retribution was swift and funding for
the Analytical Engine was cut off.

Then again, this sort of thing goes on all the time ....

                                        Seth

------------------------------

Date: 27 Aug 88 01:30:13 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: The Ignorant assumption

>The way to analyse what a scientist or Christian would do now, given
>the absolute power enjoyed by the Inquisition, is to examine their
>beliefs.  Neither group are democrats, nor would they respect many
>existing freedoms.  Note that I am talking of roles of science and
>religion.  As these people live in democracies, the chances are that
>the values of the wider society will repress the totalitarian
>instincts of their role-specific formal belief systems.  Do not take
>this analysis personally.  The way to attack my argument is to
>demonstrate that scientific or christian AUTHORITY are compatible with a
>liberal democracy.

I feel you have made the distinction between Christians and Christianity
implicitly, and I wish to make it explicit.

The ideals of Christianity, tolerance, mercy, and love, would make an
excellent system. Western Christians, on the other hand, still tend toward
our German (cultural) ancestors. (I don't know about Eastern Christians.)

I do take issue with the claim that Christians are held in check by the wider
society. In this country Christians are the majority: it is the eternal
internal conflicts between the sects that hold things in check.

------------------------------

Date: 28 Aug 88 01:11:18 GMT
From: pluto%beowulf@ucsd.edu (Mark E. P. Plutowski)
Reply-to: pluto%beowulf@ucsd.edu (Mark E. P. Plutowski)
Subject: Re: backward path and religions


In a previous article, LEO@BGERUG51.BITNET writes:
>
>In Pattern Recognition, an intelligent system with a backward path...
>...can be used to try to find the appearance of a certain known
>pattern in an input-signal...
>
>Secondly, consider a self-learning, self-organizing neural network.
>Furthermore, suppose this system is searching for answers to questions
>...[of] which it has almost no knowledge.
>...because of the self-learning, self-organizing character,
>it will build answers, imaginary ones, if it
>keeps asking long enough. In my opinion, this is the essence of
>religions and superstitions.

A nice argument, i concur in spirit ;-}.

However, it begged a comment regarding what it means to be an
_imaginary answer_.  Not to kick off
a long discussion about what it means to be imaginary, let me present
my point up front.   Loosely stated:

Our answers come out of conscious thought,
otherwise they would be impossible to record or communicate.
But this conscious thought is driven by unconscious motivations,
and wholistic formulations, which may or may not fit into the
serial symbolic interface required to communicate with the rest
of the world.

{Given a neural network coupled to a symbolic interface,
 which is used to explain the actions of the network:
 the neural net perceives the optimum, and behaves in a way
 that exploits this perception.  The symbolic interface
 tries to explain this behavior as it is able.  Sometimes
 its capabilities are sufficient; sometimes, however, the
 network's behavior falls into no neat semantic category, other
 than that it "got the desired results," i.e., it perceived the optimum.}

From our unconscious thought, feelings, hunches, and intuition are
expressed consciously as "common sense," "mathematically interesting,"
"symmetrical," "elegant," and "beautiful."   These concepts may be
"felt" in a way uncommunicable to others in a rational fashion.
(Although this individual may indeed be perceiving a profound truth,
since it is unscientific in nature, it is given a low certainty factor
by the rest of the population.)  This individual uses
this perception to motivate the discovery of provable truths which can
be written in a form communicable to the general population.
Then, it becomes science.  Until then, it remains only personal belief,
an imagination of what is possible.



Aside:  Einstein believed that imagination was the key to _his_
brand of science, as opposed to the 99% perspiration, 1% inspiration
mix which was apparently the motivation of Edison's brand of science.


P.S. thanks to the author of the posting i quoted above, for adeptly
bringing this argument back to AI.


----------------------------------------------------------------------
Mark Plutowski                          INTERNET: pluto%cs@ucsd.edu
Department of Computer Science, C-024             pluto@beowulf.ucsd.edu
University of California, San Diego     BITNET:   pluto@ucsd.bitnet
La Jolla, California 92093              UNIX:     {...}!sdcsvax!pluto
----------------------------------------------------------------------
  "it was as small as the hope in a dead man's eyes."   (radio ad)

------------------------------

Date: Mon, 29 Aug 88 13:03 O
From: Antti Ylikoski tel +358 0 457 2704
      <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: Giordano Bruno

The case of Giordano Bruno has come up several times in AIList.  I
hope that the readers of AIList forgive me for giving some
information about Bruno and his philosophy even though this is outside
the real scope of AIList.


Giordano Bruno lived from 1548 to 1600.

According to him, space is infinite and contains innumerable solar
systems where there can be various kinds of beings, possibly even more
developed than humans.  The boundless, eternal and immutable universe
is the only thing that exists; its soul, the force at work in
everything that exists, is god.  Its elementary parts, which
can be combined and separated but cannot come into existence or vanish,
are monads, which are simultaneously spiritual and material.  Even the
human soul is an indestructible monad.  Studying the laws of the
universe is the most valuable kind of service to god that there is.


It is easy to understand that the contemporaries of Bruno formed the
opinion that from the point of view of Christianity, Bruno was a
heretic.  They believed, and they believed that they had very good
reasons to believe, that the soul of a heretic is condemned to hell,
which means eternal torture; what is even worse, a heretic
tends to make others commit heresy.  (Bruno taught in universities
in France, Germany and Great Britain.)

With the abovementioned background in mind, the very strong reaction
of those who condemned Bruno might be more understandable.  Moreover,
I would estimate that very few readers of the AIList would accept
Bruno's theories - pantheism and the monad theory are probably not
very popular nowadays.


------------------------------------------------------------------------------
Antti Ylikoski
Helsinki University of Technology
Digital Systems Laboratory
Otakaari 5 A
SF-02150 Espoo, Finland
tel  : +358 0 451 2176

YLIKOSKI@FINFUN         (BITNET)
OPMVAX::YLIKOSKI        (DECnet)
mcvax!hutds!ayl         (UUCP)

This sentence is false with probability 0.5.

------------------------------

Date: Mon, 29 Aug 88 16:35:30 PST
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: God and the Universe

Andy Ylikoski made reference to a remark which he attributed to Einstein to
the effect that "the regularity of the structure of the universe reflects an
intellect."  I believe that about a year ago a book was published entitled
THE BLIND WATCHMAKER which presents a rather powerful counter-argument to
this assertion.

------------------------------

Date: Mon, 29 Aug 88 16:44:28 PST
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: Pseudo-science strikes again!

Thomson Kuhn cited Julian Jaynes' THE ORIGIN OF CONSCIOUSNESS IN THE BREAKDOWN
OF THE BICAMERAL MIND for "an incredibly tight linking of cognitive science
and religion."  I don't want to sound harsh;  but I take a dim view of any
use of the word "science" when the only empirical evidence an author can offer
comes from introspection while under the influence of hallucinatory drugs.
Jaynes certainly provided some imaginative literary criticism with regard to
Homer (although he remains vastly inferior to Albert B. Lord);  but to assume
that anything he has done can be related to cognitive science without first
seeking out more substantive evidence is a sign of the sort of naivete which
science has always tried to transcend.

------------------------------

End of AIList Digest
********************

∂31-Aug-88  1510	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #73  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 31 Aug 88  15:09:49 PDT
Date: Wed 31 Aug 1988 11:17-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #73
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 31 Aug 1988     Volume 8 : Issue 73

 Announcements:

  1st International Symposium On Artificial Intelligence
  SGAICO Connectionism Conference: revised program
  Project MAC 25th Anniversary Symposium
  Knowledge Representation and Reasoning 89 - call for papers

----------------------------------------------------------------------

Date: Wed, 24 Aug 88 11:15:24 EDT
From: simposium internacional de inteligencia
      <SIIACII%TECMTYVM.BITNET@MITVMA.MIT.EDU>
Subject: 1st International Symposium On Artificial Intelligence


Here is the latest information about our symposium.  If you know persons at
your node who are involved in artificial intelligence
projects, computer science graduate programs, expert systems,
or electrical engineering programs and who could be interested in it, please
send it to them.  Thanks, I appreciate your help.

                       Teresa Lucio Nieto
                       Monterrey Institute of Technology, Mexico



***********************************************************************

                 1ST INTERNATIONAL SYMPOSIUM ON
                    ARTIFICIAL INTELLIGENCE
                     MONTERREY, N.L. MEXICO

***********************************************************************

             THE INFORMATION RESEARCH CENTER OF
           THE INSTITUTO TECNOLOGICO Y DE ESTUDIOS
                  SUPERIORES DE MONTERREY


Is organizing the First International Symposium on Artificial
Intelligence to promote Artificial Intelligence technology among
professionals as an approach to problem-solving, to promote the use
of the knowledge-based systems paradigm in solving problems in industry
and business, to make professionals aware of the Artificial
Intelligence techniques that exist and to demonstrate their use in
solving real problems, and to show current Artificial Intelligence
applications in Mexico and other countries (mainly the USA).



Tentative Program:
------------------
The symposium consists of a Tutorial, twelve lectures and selected
papers.

Tutorials:  October 24 and 25.
Introduction to Knowledge-Based Systems:
RICHARD MAYER (Texas A & M University).

Patricia Friel (Texas A & M University).

Randy Goebel (University of Alberta, Canada).

Randy Goebel (University of Alberta, Canada).

Conference: October 26, 27 and 28.
Contents:
     *  Knowledge-Based Systems.
     *  Knowledge Acquisition.
     *  Knowledge Representation.
     *  Inference Engine.
     *  Certainty Factors.
     *  Vision.
     *  Robotics.
     *  Expert Systems Applications in Industry.
     *  Natural Language Processing.
     *  Learning.
     *  Speech recognition.
     *  Artificial Intelligence in Mexico.
     *  Fifth Generation Computers.

Conference Participants:
------------------------
The speakers that have already confirmed their participation are:
     *  Romas Aleliunas (Simon Fraser University, Burnaby Canada)
     *  Woodrow Bledsoe (U. of Texas at Austin).
     *  Francisco Cervantes (Instituto de Fisiologia Celular, UNAM,
        Mexico)
     *  Robert Cartwright (Rice University, Tx).
     *  Gerhard Fischer (U. Boulder, Colorado).
     *  Randy Goebel (Alberta University, Canada).
     *  Adolfo Guzman (MCC, Austin Tx).
     *  Richard Mayer (Texas A&M).
     *  Pablo Noriega (Centro Cientifico de IBM, Mexico).
     *  Patricia Friel (Texas A&M).
     *  Rene Banares (UNAM, Mexico).
     *  Robert F. Port (Indiana University at Bloomington, USA).
     *  Anthony Gorry (Baylor College of Medicine, Houston, Tx).
     *  David Poole (U. of British Columbia, Canada).


Software and Hardware Exposition
--------------------------------
During the symposium there will be an exposition of computer hardware
and software including products and systems from companies and
institutions in Mexico and abroad.
We invite software and hardware businesses to participate in this
exposition.


"Call for Papers"
-----------------
We would like to invite all professors and researchers to submit papers
related to the previously mentioned topic areas of the 1st International
Symposium on Artificial Intelligence.
Please submit four copies of summary (4 to 5 pages) and resume to ITESM,
Centro de Investigacion en Informatica, Atn. David Garza.
Deadline: August 31, 1988.
The selected papers will be published in the symposium's proceedings and
will have the opportunity to be presented during the symposium.


Spanish-English and English-Spanish translation will be available for $7.
Most of the lectures will be given in English.


******************************************************************

            1ST INTERNATIONAL SYMPOSIUM ON
               ARTIFICIAL INTELLIGENCE
                MONTERREY, N.L. MEXICO

Registration Procedure:
-----------------------
Send personal check payable to I.T.E.S.M. to: "ITESM - Centro
de Investigacion en Informatica, Registration Comittee,
Sucursal de Correos 'J', 64849 Monterrey, N.L. Mexico".

*
 15% tax included (prices have changed due to Mexico's economic problems).

**
  Hotel reservations are made by sending one night deposit no later than
  forty days prior to arrival date (prices are per person, per night).

Advance registration is encouraged since the attendance is limited.


Place and Date:
---------------
ITESM Monterrey N.L.
October 24-28, 1988

TUTORIALS:
     -  DATE.......... October 24-25.
     -  PLACE......... Auditorio Aulas V  (ITESM).

SYMPOSIUM:
     - DATE..........  October 26, 27, 28.
     - PLACE.........  Auditorio Luis Elizondo (ITESM).
                       Four lectures and a selected paper
                       will take place each day. Lectures
                       will be one hour long. After each
                       one there will be a thirty-minute
                       question-and-answer session.




Information and Registration
----------------------------

                     *******************************************
                     *               I T E S M                 *
                     *  Centro de Investigacion en Informatica *
                     *                                         *
                     *  Registration Committee.                *
                     *                                         *
                     *  Sucursal de Correos "J"                *
                     *                                         *
                     *  Monterrey, N.L. Mexico 64849           *
                     *                                         *
                     *  Phone:     (83) 59-57-47               *
                     *             (83) 59-57-50               *
                     *                                         *
                     *  AppleLink:  IT0023                     *
                     *                                         *
                     *  BitNet:   SIIACII@TECMTYVM             *
                     *                                         *
                     *  Internet:                              *
                     *  SIIACII%TECMTYVM.BITNET@MITVMA.MIT.EDU *
                     *                                         *
                     *  Telex:   0382975 ITESME                *
                     *                                         *
                     *  Telefax:   (83) 58 89 31               *
                     *                                         *
                     *******************************************

P.S. FOR ANY INFORMATION, FEEL FREE TO CONTACT US; WE WOULD BE GLAD TO SEND
YOU MORE INFORMATION ABOUT OUR SYMPOSIUM.



------------------------------

Date: 29 Aug 88 15:00 +0200
From: Rolf Pfeifer <pfeifer%ifi.unizh.ch@RELAY.CS.NET>
Subject: SGAICO Connectionism Conference: revised program

*****************************************************************************

SGAICO Conference (REVISED PROGRAM)

*******************************************************************************

Program and Call for Presentation of Ongoing Work

       C O N N E C T I O N I S M   I N   P E R S P E C T I V E

                University of Zurich, Switzerland
                     10-13 October 1988

Tutorials:              10 October 1988
Technical Program:      11 - 12 October 1988
Workshops and
  Poster/demonstration
  session               13 October 1988

******************************************************************************
Organization:           - University of Zurich, Dept. of Computer Science
                        - SGAICO (Swiss Group for Artificial Intelligence and
                                Cognitive Science)
                        - Gottlieb Duttweiler Institute (GDI)

About the conference
____________________

Introduction:
Connectionism has gained much attention in recent years as a paradigm for
building models of intelligent systems in which interesting behavioral
properties emerge from complex interactions of a large number of simple
"neuron-like" elements. Such work is highly relevant to fields such as
cognitive science, artificial intelligence, neurobiology, and computer
science and to all disciplines where complex dynamical processes and
principles of self-organization are studied. Connectionist models seem to be
suited for solving many problems which have proved difficult in the past
using traditional AI techniques. But to what extent do they really provide
solutions? One major theme of the conference is to evaluate the import of
connectionist models for the various disciplines. Another one is to see
in what ways connectionism, being a young discipline in its present form,
can benefit from the influx of concepts and research results from other
disciplines. The conference includes tutorials, workshops, a technical program
and panel discussions with some of the leading researchers in the field.

Tutorials:
The goal of the tutorials is to introduce connectionism to people who are
relatively new to the field. They will enable participants to follow the
technical program and the panel discussions.

Technical Program:
There are many points of view to the study of intelligent systems. The
conference will focus on the views from connectionism, artificial
intelligence and cognitive science, neuroscience, and complex dynamics.
Along another dimension there are several significant issues in the study
of intelligent systems, some of which are "Knowledge representation and
memory", "Perception, sequential processing, and action", "Learning", and
"Problem solving and reasoning". Researchers from connectionism, cognitive
science, artificial intelligence, etc. will take issue with the ways
connectionism is approaching these various problem areas. This idea is
reflected in the structure of the program.

Panel Discussions:
There will be panel discussion with experts in the field on specialized
topics which are of particular interest to the application of connectionism.

Workshops and Presentations of Ongoing Work:
The last day of the conference is devoted to workshops with the purpose of
identifying the major problems that currently exist within connectionism,
to define future research agendas and collaborations, to provide a
platform for the interdisciplinary exchange of information and experience,
and to find a framework for practical applications. The workshop day will
also feature presentation of ongoing work (see "Call for presentation of
ongoing work").

*******************************************************************************
*                                                                             *
* CALL FOR PRESENTATION OF ONGOING WORK                                       *
*                                                                             *
* Presentations are invited on all areas of connectionist research. The focus *
* is on current research issues, i.e. "work in progress" is of highest        *
* interest even if major problems remain to be resolved. Work of RESEARCH     *
* GROUPS OR LABORATORIES is particularly welcome. Presentations can be in the *
* form of poster, or demonstration of prototypes. The goal is to encourage    *
* cooperation and the exchange of ideas between different research groups.    *
* Please submit an extended abstract (1-2 pages).                             *
*                                                                             *
* Deadline for submissions:     September 2, 1988                             *
* Notification of acceptance:   September 20, 1988                            *
*                                                                             *
* Contact: Zoltan Schreter, Computer Science Department, University of        *
* Zurich, Switzerland, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland   *
* Phone: (41) 1 257 43 07/11                                                  *
* Fax: (41) 1 257 40 04                                                       *
* or send mail to                                                             *
* pfeifer@ifi.unizh.ch                                                        *
*                                                                             *
*******************************************************************************



Tutorials


MONDAY, October 10, 1988
___________________________________________________________________________

08.30   Tutorial 1: Introduction to neural nets.
        F. Fogelman
                - Adaptive systems: Perceptrons (Rosenblatt) and Adalines
                  (Widrow & Hoff)
                - Associative memories: linear model (Kohonen),
                  Hopfield networks, Brain state in a
                  box model  (BSB; Anderson)
                - Link to other disciplines

09.30   Coffee

10.00   Tutorial 2: Self-organizing Topological maps.
        T. Kohonen
                - Theory
                - Application: Speech-recognizing systems
                - Tuning of maps for optimal recognition accuracy
                  (learning vector quantization)

11:30   Tutorial 3: Multi-layer neural networks.
        Y. Le Cun
                - Elementary learning mechanisms (LMS and Perceptron) and
                  their limitations
                - Easy and hard learning
                - Learning in multi-layer networks: The back-propagation
                  algorithm (and its variations)
                - Multi-layer networks:
                        - as associative memories
                        - for pattern recognition (a case study)
                - Network design techniques; simulators and software tools

13.00   Lunch

14.00   Tutorial 4: Parallel Distributed Processing of symbolic structure.
        P. Smolensky
                Can Connectionism deal with the kind of complex highly
                structured information characteristic of most AI domains?
                This tutorial presents recent research suggesting that
                the answer is yes.

15.30   Coffee

16.00   Tutorial 5: Connectionist modeling and simulation in neuroscience and
                psychology.
        R. Granger
                Biological networks are composed of neurons with a range of
                biophysical and physiological properties that give rise to
                complex learning and performance rules embedded in
                anatomical architectures with complex connectivity.
                Given this complexity it is of interest to identify which
                of the characteristics of brain networks are central and
                which are less salient with respect to behavioral function.
                "Bottom-up" biological modeling attempts to identify the
                crucial learning and performance rules and their
                appropriate level of abstraction.

17.30   End of tutorial sessions
_______________________________________________________________________________

Technical Program


TUESDAY, October 11, 1988
___________________________________________________________________________

Introduction

09:00   Connectionism: Is it a new paradigm?            M. Boden

09:45   Discussion

10:00   Coffee


1. Knowledge Representation & Memory.   Chair: F. Fogelman

        The perspective of:

10:30   -       Connectionism   P. Smolensky    Dealing with structure in
                                                Connectionism

11:15   -       AI/             J. Feldman      A critical review of approaches
                Connectionism                   to knowledge representation and
                                                memory in Connectionism

12:00   -       Neuroscience/   C. v. der Malsburg
                Connectionism                   A neural architecture  for
                                                the  representation of
                                                structured objects


12:45   Lunch


2. Perception, Sequential Processing & Action.  Chair:  T. Kohonen

        The perspective of:

14:30   -       Connectionism   M. Kuperstein   Adaptive sensory-motor
                                                coordination using neural
                                                networks

15:15   -       Connectionism/  M. Imbert       Neuroscience and Connectionism:
                Neuroscience                    The case of orientation
                                                coding.

16:00   Coffee

16:30   -       AI/             J. Bridle       Connectionist approaches to
                Connectionism                   artificial perception:
                                                A speech pattern  processing
                                                approach

17:15   -       Neuroscience    G. Reeke        Synthetic neural modeling:
                                                A new approach to Brain Theory

18:00   Intermission/snack


18.30 - 20.00  panel discussion/workshop on

Expert Systems and Connectionism. Chair: S. Ahuja

                D. Bounds       D. Reilly
                Y. Le Cun       R. Serra

___________________________________________________________________________


WEDNESDAY, October 12, 1988
___________________________________________________________________________

3. Learning. Chair: R. Serra

        The perspective of:

9:00    -       Connectionism   Y. Le Cun       Generalization  and network
                                                design strategies

9:45    -       AI              Y. Kodratoff    Science of explanations versus
                                                science of numbers

10:30   Coffee

11:00   -       Complex Dynamics/
                Genetic Algorithms
                                H. Muehlenbein  Genetic algorithms and
                                                parallel computers

11:45   -       Neuroscience    G. Lynch        Behavioral effects of learning
                                                rules for long-term
                                                potentiation

12:30   Lunch


4. Problem Solving & Reasoning. Chair:  R. Pfeifer

        The perspective of:

14:00   -       AI/             B. Huberman     Dynamical perspectives on
                Complex Dynamics                problem solving and reasoning

14:45   -       Complex Dynamics
                                L. Steels       The Complex Dynamics of common
                                                sense

15:30   Coffee

16:00   -       Connectionism   J. Hendler      Problem solving and reasoning:
                                                A Connectionist perspective

16:45   -       AI              P. Rosenbloom   A cognitive-levels perspective
                                                on  the role of Connectionism
                                                in symbolic goal-oriented
                                                behavior

17:30   Intermission/snack


18:00 - 19:30 panel discussion/workshop on

Implementation Issues & Industrial Applications. Chair:  P. Treleaven

        B. Angeniol     G. Lynch
        G. Dreyfus      C. Wellekens

__________________________________________________________________________


Workshops and presentation of ongoing work



THURSDAY, October 13, 1988
___________________________________________________________________________



9:00-16:00  Workshops in partially parallel sessions. There will be a separate
poster/demonstration session  for the presentation of ongoing work. The
detailed program will be based on the submitted work and will be available at
the beginning of the conference.


The workshops:

1. Knowledge Representation & Memory
        Chair: F. Fogelman

2. Perception, Sequential Processing & Action
        Chair: F. Gardin

3. Learning
        Chair: R. Serra

4. Problem Solving & Reasoning
        Chair: R. Pfeifer

5. Evolutionary Modelling
        Chair: L. Steels

6. Neuro-Informatics in Switzerland: Theoretical and technical neurosciences
        Chair: K. Hepp

7. European Initiatives
        Chair: N.N.

8. Other


16:10   Summing up:  R. Pfeifer

16:30   End of the conference


___________________________________________________________________________

Program as of June 29, 1988, subject to minor changes

___________________________________________________________________________



THE SMALL PRINT

Organizers
Computer Science Department, University of Zurich
Swiss Group for Artificial Intelligence and Cognitive Science  (SGAICO)
Gottlieb Duttweiler Institute (GDI)

Location
University of Zurich-Irchel
Winterthurerstrasse 190
CH-8057 Zurich, Switzerland

Administration
Gabi Vogl
Phone: (41) 1 257 43 21
Fax: (41) 1 257 40 04

Information
Rolf Pfeifer
Zoltan Schreter
Computer Science Department, University of Zurich
Winterthurerstrasse 190, CH-8057 Zurich
Phone: (41) 1 257 43 23 / 43 07
Fax: (41) 1 257 40 04

Sanjeev B. Ahuja, Rentenanstalt (Swiss Life)
General Guisan-Quai 40, CH-8022 Zurich
Phone: (41) 1 206 40 61 / 33 11

Thomas Bernold, Gottlieb Duttweiler Institute, CH-8803 Ruschlikon
Phone: (41) 1 461 37 16
Fax: (41) 1 461 37 39


Participation fees
Conference 11-13 October 1988:
Regular                         SFr.    350.--
ECCAI/SGAICO/
        SI/SVI-members          SFr.    250.--
Full time students              SFr.    100.--

Tutorials 10 October 1988:
Regular                         SFr.    200.--
ECCAI/SGAICO/
        SI/SVI-members          SFr.    120.--
Full time students              SFr.     50.--

For graduate students / assistants a limited  number of reduced
fees are available.

Documentation and refreshments are included.
Please remit the fee only upon receipt of invoice by the
Computer Science Department.

Language
The language of the conference is English.

Cancellations
If a registration is cancelled, there will be a  cancellation charge of
SFr. 50.-- after 1st October 1988, unless you name a replacement.

Hotel booking
Hotel booking will be handled separately.
Please indicate on your registration form
whether you would like information on hotel
reservations.

Proceedings
Proceedings of the conference will be published in book form.
They will become available in early 1989.


------------------------------

Date: Tue, 30 Aug 88 11:54 EDT
From: MAC-25-REQUEST@XX.LCS.MIT.EDU
Subject: Project MAC 25th Anniversary Symposium


      *****************************************************************
                   MIT COMPUTER SCIENCE RESEARCH SYMPOSIUM
                            IN CELEBRATION OF THE
               25th ANNIVERSARY OF THE FOUNDING OF PROJECT MAC
                             OCTOBER 26-27, 1988
                             MIT, CAMBRIDGE, MA
      *****************************************************************
                            Sponsored by the MIT
                       Laboratory for Computer Science
                                     and
                         Industrial Liaison Program

CONFERENCE DESCRIPTION: The symposium will cover a full range of Computer
Science research ongoing at MIT LCS and AI Lab--the two labs which
grew from the original ``Project MAC'' founded in 1963.  Leading researchers
from the faculty and staff of the laboratories will highlight current
research and future activities in multiprocessors; distributed systems;
intelligent systems (AI), linguistics and robotics; cryptology, complexity
and random computation theory; parallel algorithms and programming languages;
and computers and economic productivity.  The symposium will be of interest
to those seeking an overview of research as well as to specialists.

LECTURES OPEN TO THE PUBLIC: without charge, after seating by invited
                             and ILP affiliated guests.

PLACE: Kresge Auditorium, MIT.

      *****************************************************************
                            SCHEDULE AND PROGRAM

                              TUESDAY, October 25
REGISTRATION (5PM-8PM) at Kresge Auditorium
RECEPTION (6PM-9PM) at the MIT Museum (Invited and ILP affiliated guests only)

                             WEDNESDAY, October 26
REGISTRATION (7:45AM-continuing) at Kresge Auditorium
WELCOMING REMARKS (8:45AM-9AM)
  The MIT Administration
  Michael L. Dertouzos, LCS Director
  Albert R. Meyer, Symposium Chair

SESSION 1 (9AM-Noon) Chair: Fernando J. Corbato
  John V. Guttag, Why Programming is Too Hard and What to Do About It
  Nicholas P. Negroponte, Beyond the Desktop Metaphor
  Barbara H. Liskov, Issues in Distributed Computing
  Robert W. Scheifler, Windows in Time: The X Window System
  David D. Clark, The Changing Nature of Computer Networks

LUNCH (Noon-1:30PM)

SESSION 2 (1:30PM-2:20PM) Chair: Robert M. Fano
  Michael L. Dertouzos, Computers for Productivity

SESSION 3 (2:25PM-5:00PM) Chair: Randall Davis
  Peter Szolovits,  Knowledge-Based Systems
  Ramesh S. Patil, An Expert System for Arrhythmia Detection in Noise
  Berthold K.P. Horn, Parallel Networks for Vision
  Rodney A. Brooks, Artificial Creatures
  Marc H. Raibert, Robots that Run

TESTIMONIAL BANQUET (6:30PM-11:00PM) (By Invitation)

                              THURSDAY, Oct. 27
REGISTRATION (8:45AM-continuing) at Kresge Auditorium
SESSION 4  (9AM-Noon) Chair: Frederick C. Hennie, III
  Harold Abelson, Computation as a Framework for Engineering Education
  Albert R. Meyer, Observing Concurrent Processes
  Michael F. Sipser, We Still Don't Know if P=NP
  Shafi Goldwasser, The Quest for Provably Unbreakable Codes
  Silvio Micali, Nothing but the Truth: Zero-Knowledge Protocols
  Ronald L. Rivest, Learning Theory: What's Hard and What's Easy

LUNCH (Noon-1:30PM)

SESSION 5  (1:30PM-2:20PM) Chair: Marvin L. Minsky
  Joel Moses,  Cultural Biases in CS and AI

SESSION 6  (2:25PM-5:00PM) Chair: Jack B. Dennis
  Arvind, A Dataflow Approach to General Purpose Parallel Computing
  William J. Dally, Fine-Grain Concurrent Computing
  Charles E. Leiserson, New Machine Models for Synchronous Parallel Algorithms
  Gerald J. Sussman, Dynamicist's Workshop: Automatic Preparation, Execution,
                     and Analysis of Numerical Experiments
      *****************************************************************

ABSTRACTS:  Detailed abstracts of the above talks are available upon request.

INVITATIONS: The symposium lectures are open to the public without charge.
Lunch will be provided for invited and ILP affiliated guests, while
the banquet is for invited guests and their companions.  Invitations
are being sent to alumni and scientific collaborators of Project MAC/LCS/AI,
contract monitors and similar liaison officers from other organizations,
and other laboratory affiliates.

Completing the registration form below will also serve as a request for an
invitation if you have not received one.
      *****************************************************************

                        REGISTRATION FORM

TITLE (Mr. Ms. Dr. ...):
FIRSTNAME:
MIDDLE INITIAL:
LASTNAME:
POSITION (Vice President,...):
COMPANY:
DEPARTMENT/DIVISION:
ADDRESS:

CITY:
STATE:
COUNTRY:
ZIP:
TEL:
EMAIL-ADDRESS:
I WOULD LIKE TO ATTEND (mark with `x'):
  October 25,  RECEPTION:
  October 26,  SYMPOSIUM:
                   LUNCH:
                 BANQUET:
  October 27,  SYMPOSIUM:
                   LUNCH:

BANQUET COMPANION'S NAME:

                          REGISTRANT'S AFFILIATION
  Former MAC/LCS/AI Lab member or student.  Group:
                                             Year:

  Other MAC/LCS/AI affiliation
     (funding officer, research collaborator,...):
                                             Year:
                             Lab-member reference:

  ILP affiliated                  (mark with `x'):
  No affiliation, just want to register for
                    the symposium (mark with `x'):
      *****************************************************************

SEND Registration and further inquiries by EMAIL to
                 Internet:  MAC25-registration@XX.LCS.MIT.EDU
or by REGULAR MAIL to
                Professor Albert R. Meyer, Chairman
                Project MAC 25th Anniversary Symposium
                MIT Laboratory for Computer Science
                545 Technology Square
                Cambridge, MA 02139

                tel: (617) 258-8215

------------------------------

Date: Wed, 31 Aug 88 01:43:10 EDT
From: Hector Levesque <hector%ai.toronto.edu@RELAY.CS.NET>
Subject: Knowledge Representation and Reasoning 89 - call for papers

                              _     _   _
                          |/ |_|   |_| |_|
                          |\ | \   |_|  _|

The First International Conference on Principles of Knowledge Representation
and Reasoning will be held in Toronto, Canada on May 15-18 1989.  KR'89 will
bring together researchers interested in the principles governing systems that
use general-purpose reasoning algorithms over explicit representations of
knowledge.  Authors are requested to submit extended abstracts (not complete
papers) of at most 8 double-spaced pages (12 point), although substantially
longer full papers will appear in the conference proceedings to be published by
Morgan Kaufmann Publishers Inc.  The important dates for KR'89 are:

Submission receipt deadline:            November 1, 1988
Author notification date:               December 15, 1988
Camera-ready copy due to publisher:     February 15, 1989
Conference:                             May 15-18, 1989

A call for papers for KR'89 with full details on topics, submissions, and
review criteria can be found in the journal Artificial Intelligence (vol. 35,2,
June 1988, p. 281), the AI Magazine (vol. 9,1, Spring 1988, p. 137), the AISB
Newsletter (no. 64, p.27), the SIGART Newsletter (no. 104, April 1988, p. 47),
and the Canadian AI Newsletter (April 1988, p.36).  Inquiries of a general
nature can be addressed to the Conference Chair, Ray Reiter, whose csnet
address is reiter@ai.toronto.edu.

Ron Brachman and Hector Levesque
KR'89 Program Chairs

[ See also news.announce.conferences on Usenet for a detailed CFP ]

------------------------------

End of AIList Digest
********************

∂31-Aug-88  1917	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #74  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 31 Aug 88  19:17:25 PDT
Date: Wed 31 Aug 1988 21:53-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #74
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 1 Sep 1988      Volume 8 : Issue 74

 Queries:
  MicroExplorer Vs. MacIvory Poll
  WANTED: speech data
  Prolog, etc.

 Responses:
  Prolog, etc.
  Where should she go?

----------------------------------------------------------------------

Date: 30 Aug 88 23:04:28 GMT
From: sm.unisys.com!csun!polyslo!mshapiro@oberon.usc.edu  (Mitch
      Shapiro)
Subject: MicroExplorer Vs. MacIvory Poll


Hi, folks....

While I was at AAAI last week, the big showdown finally occurred.
That is to say, that Symbolics finally brought out the MacIvory
to compete against the Texas Instruments MicroExplorer.

I'm looking to get a good opinion survey of these two machines.
I'll be glad to post a summary should there be sufficient interest.


Thanks for any/all words/opinions.

[Mitch]

------------------------------

Date: 30 Aug 88 15:27:30 GMT
From: sunybcs!bandu@rutgers.edu  (Jagath SamaraBandu)
Subject: WANTED: speech data

Could somebody please mail me some speech data which I need for testing
purposes?  It will be really helpful if the text (spoken) is also included.

Thanks in advance

Jagath samarabandu

email - bandu@cs.buffalo.edu v092r8c2@ubvms.bitnet

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Jagath K. Samarabandu (716)-835-4639    |       bandu@cs.buffalo.edu
518, Lasalle Ave.,Buffalo,NY14215       |       v092r8c2@ubvms.bitnet
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

------------------------------

Date: 31 Aug 88 13:41:33 GMT
From: uflorida!fish.cis.ufl.edu!fishwick@gatech.edu  (Paul Fishwick)
Subject: Prolog, etc.

Does anyone know of a PD version of Prolog that will run under UNIX?
It must come with source since we would like to be able to use it on
any UNIX machine (including Gould, SUN, VAX, etc.).  We currently have
XLISP and I would very much like to augment this with a PROLOG for
my AI students. If it can be FTP'd, so much the better! Thanks for
responding...

Also, we would be interested in any functional languages (like ML) that
are easily available on the net.

-paul fishwick
fishwick@bikini.cis.ufl.edu


--
+------------------------------------------------------------------------+
| Paul A. Fishwick.......... INTERNET: fishwick@uflorida.cis.ufl.edu     |
| Dept. of Computer Science. UUCP: {gatech|ihnp4}!codas!uflorida!fishwick|
| Univ. of Florida.......... PHONE: (904)-335-8036                       |

------------------------------

Date: 31 Aug 88 16:34:22 GMT
From: att!mtune!mtund!newton@bloom-beacon.mit.edu  (Newton Lee)
Subject: Re: Prolog, etc.

In a previous article, Paul Fishwick writes:
> Does anyone know of a PD version of Prolog that will run under UNIX?
> It must come with source since we would like to be able to use it on
> any UNIX machine (including Gould, SUN, VAX, etc.).  We currently have

We use C-Prolog on our UNIX machines (VAX, MIPS, 3B20, UNIX PC, etc.).
It is based on the Prolog system written in IMP by Luis Damas (and
Lawrence Byrd) for the ICL 2900 computers.  For more info, contact
Fernando Pereira, EdCAAD, Dept. of Architecture, University of Edinburgh.

Newton Lee
AT&T Bell Laboratories

------------------------------

Date: 30 Aug 88 10:25:00 EDT
From: Nahum (N.) Goldmann <ACOUST%BNR.CA@MITVMA.MIT.EDU>
Subject: Where should she go?

Peter Webb writes (AIList, v8, #60):

>         A friend of mine wants to get her PhD in Computer Science,
> specializing in the Machine Learning aspect of Artificial Intelligence.
> She has been to the library and collected a list of likely schools, but
> the list is too long for her to apply to all the schools on it.
> Accordingly, she asked me if I would ask the net for suggestions.  If
> you wanted to study machine learning, where would you go and why?

I believe that the procedure of evaluating a SCHOOL is only suitable
when the undergraduate education is being considered. For a graduate
student, especially a PhD candidate, the first step to prove his or her
scientific maturity is to identify the area of his/her OWN interest.
Myself, I am expecting a PhD student to come with a reasonable
degree of aggression and violence to prove that:

a) I don't understand anything in my own area of expertise;

b) It really does not matter, since my area is doomed in any case;

c) The applicant has a marvellous idea which will save both me and
   mankind (humankind?) from the obsolete approach (not that it
   will happen at the end of the exercise, but it is a different story).

Would anything less do?

On a practical note, I'd advise her to do the following:

1) Find a couple of good reviews in the library which deal with the
   subject (machine learning?).

2) Loosely identify 2-3 sub-areas of interest.

3) Find fresh publications in these areas.  Based on them, define
   which circle of problems/methods really appeal to her.  Just intuition
   will do.

4) Based on the same publications identify major players in these
   areas whose works sound exciting.

5) Contact these INDIVIDUALS and ask their advice (who is the best
   PERSON to do YOUR research with).  You'll be surprised how much more
   informative their responses will be than what you will get
   "at random".  Yes, they may not speak about
   dormitories and the "perceived importance" of the college, but does it
   really matter where a good researcher is located?  What if it is in
   Australia, Japan, or the UK?  For a PhD student it should not be a
   major obstacle.

6) Come to the person selected and convince him/her that without you
   (see a-c above...).  Propose a couple of research subjects.  At the
   end, settle for the subject HE gives to you.  It is less likely that
   she'll have a major disappointment at the end of this long exercise.

Sorry for the basic stuff.  It's just that I've seen so many PhD's who
would be far happier if they were insurance agents...  If only
someone would explain the basic facts of scientific life to them
beforehand...

Good luck to your friend (at least she asked the question)!

Greetings and love

Nahum Goldmann
(613)763-2329
e-mail <acoust@bnr>

------------------------------

End of AIList Digest
********************

∂02-Sep-88  2317	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #76  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 2 Sep 88  23:16:54 PDT
Date: Fri  2 Sep 1988 23:48-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #76
To: AIList@AI.AI.MIT.EDU


AIList Digest            Saturday, 3 Sep 1988      Volume 8 : Issue 76

 Religion:

  Science, Religion, Rationality
  The Ignorant assumption
  Theistic Arguments
  Science, Lawfulness, a (the?) god
  Giordano Bruno

----------------------------------------------------------------------

Date: Thu, 01 Sep 88 11:24 EST
From: steven horst                        
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: science, religion, rationality (long)


 Much of the ongoing discussion of rationality, science and religion
 has been of admirably high quality.  Some things, however, just
 cry out for response.  William Wells has taken the lead role as
 religion-bashing gadfly, so it is to his comments that I shall
 address myself. (Professor Minsky's surprisingly shrill _ad hominem_
 has received sufficient attention, I think.)

 Back in V8 #50 of the Digest, Mr. Wells writes:

 >Whether you like it or not, the religious entails something which
 >is outside of reason.

 It isn't clear what he means by reason.  Religion certainly requires
 more than the rules for deduction in the predicate calculus, and
 probably more than any empirical method one might think adequate for
 the physical sciences.  I.e., it requires more than reasonING.
 Of course it is quite a different thing to say that religious belief
 cannot be reasonABLE.  Practically none of our beliefs are truths of
 logic, and we believe all kinds of things that we do not subject to
 double blind tests or measure with instruments. (Am I unreasonable
 in believing that my parents care about me?  A paranoiac could find
 alternative interpretations for all of their behaviors.  But ONLY
 someone with grave problems would require rigorous empirical tests.)
 The rationality of ANY kind of belief is a tricky thing to analyze.
 (Epistemologists have an almost perverse love for bizarre scenarios
 in which some belief we would normally consider to be aberrant would
 turn out to be perfectly reasonable.)  Yet Mr. Wells seems to treat
 "reason" as some well-established and commonly-agreed-to set of
 principles which have some special connection with another monolithic
 entity of great prestige called "science".  Having spent some deal
 of time around people who spend most of their time thinking about
 epistemology and the history and philosophy of science, I find
 it hard to think of more than one or two of them who share his view.
 (Though this sort of Enlightenment mythos is admittedly very
 widespread in contemporary Western cultures.)  A number of fine
 philosophers of science are, however, practicing Jews (e.g. Shimony)
 and Christians (e.g., van Fraassen, McMullin, Quinn), and no doubt
 other religious traditions are represented among them as well.

 But perhaps Mr. Wells merely means to argue against proponents of
 "natural theology" (the attempt to deduce God's existence and
 attributes from observations of the world), and against those who
 believe that religion can proceed wholly upon the deliverances of
 reason.  (Kant argued to this effect.)  Sufis, Thomists, fundamentalists
 and many others would agree that more than reason and observation
 of the physical world are needed.  Some candidates for the "more" are
 (a) personal religious experience, or (b) trust in some authority
 (usually grounded in someone else's especially intense religious
 experience), or (c) some special "sense" which is attuned to
 apprehending matters divine. (All of these have parallels in other
 areas of human knowledge.)  But Mr. Wells goes further.  In
 V8 #72 he decries "revealed knowledge".  Responding to a post, he says

 > Note the confusion in this individual: he talks about "revealed
 > knowledge" as if it had some relationship to knowledge; however,
 > there is *no* relationship.  By what means do I distinguish this
 > "revealed knowledge" from an LSD overdose?  If I am to depend
 > wholly on divine revelation, then I know *nothing*.  If not, then
 > I must reject "revealed knowledge" in favor of evidence.

 I'm not sure I see a cut-and-dried distinction here.  It seems to
 me that people who have vivid religious experiences have good reason
 to believe things on the basis of them. (They may also have reason
 to reassess their mental health.) Beliefs based on hallucinations
 are not necessarily unreasonable, even if they are false.  It isn't
 the reasoning that has gone awry but the input system.  Since I don't
 see any reason to rule by fiat that religious experiences MUST be
 hallucinatory, it doesn't seem absurd to suppose that
 beliefs based on religious experience COULD be true AND reasonably
 arrived at AND arrived at through a dependable process.
 There are, of course, special difficulties with evidence that is
 not repeatable or public, and this would be a very real difficulty
 if we had to do science by trusting someone's mystical insights.
 But of course we routinely trust other people's reports of what
 they have seen and heard - individual events are by nature not
 repeatable - and the kinds of doubt we might have about another
 person's religious experiences also arise for any experiences he
 reports that are very different from our own.  There can be no
 question of rigorous tests of most religious claims along the lines
 of the tests performed to confirm or disconfirm a scientific theory
 because religious claims, unlike the most familiar paradigms of
 scientific claims but like the great bulk of our beliefs, are not
 about universal laws ranging over classes of physical phenomena.
 This isn't evidence against the truth of religious claims.  Indeed,
 it is quite consonant with the way most religious traditions view
 their own claims.

 Now I agree that science is probably best off when not bound by
 the fetters of some particular religiously inspired cosmology.
 (Arguably religion is better off when it abstains from too much
 cosmology as well, and arguably science is better off once one
 becomes aware of dangerous basic assumptions, such as the
 assumption that space MUST be Euclidean, or that theories in the
 special sciences must be reducible to theories in the proprietary
 vocabulary of physics, or that all theories must be expressible
 as universally quantified sentences in Principia logic.)  But
 surely it is a bit rash for Mr. Wells to say:

 >Science, though not scientists (unfortunately), rejects the
 >validity of religion: it requires that reality is in some sense
 >utterly lawful, and that the unlawful, i.e. God, has no place.

 First, I take it that science is a practice, and hence cannot
 literally accept or reject anything. (Though if one were to
 reject something, one would do well to reject the attribution of
 validity to anything other than a proof or argument.)  But
 the assumption of the lawfulness of nature is more the WORKING
 ASSUMPTION of the sciences than some PRINCIPLE upon which science
 is predicated.  Lots of physicists DON'T believe that all physical
 events can be subsumed under universal laws.  (At least if I've
 been listening carefully enough at conferences on cosmology and on
 quantum theory.) But suppose that there is some measure of anomic
 behavior in the universe - that wouldn't vitiate the success of
 most scientific achievements.  It would at most impose a limit upon
 the scope of scientific inquiry.  Similarly, a God who is not
 a part of a deterministic universe would fall outside of the scope
 of science.  Who claimed otherwise?  Certainly not orthodox Jews,
 Christians or Moslems.  (I suspect the whole issue is different
 with Eastern religions.)  The claim that an ideally completed
 physics could tell us EVERYTHING about EVERYTHING is at best a
 cosmological speculation.  (Certainly not verifiable - lots of
 events we can't test!)  It is no refutation of any religious
 cosmology, just an old-fashioned disagreement.

 And why is it unfortunate that many scientists believe in a god
 or practice some form of religion?  They didn't seem to think it
 was hurting them.  Did it hurt their ability to perform as
 scientists?  Well, Aristotle, Descartes, Darwin, Leibniz, and
 Einstein (to name a handful) seem to have done pretty well. (And
 since this is an AI newsletter, perhaps we should add Alonzo Church
 to the list as well.)

 Finally, Mr. Wells describes his view as

 > all elementary philosophy, to which religion seems to have
 > blinded that author.

 Elementary philosophy?  Perhaps.  But only in the sense that the
 Greeks who believed that solid objects fall because the Earth in
 them is seeking its own level were doing "elementary physics."

            Steven Horst      gkmarh@irishmvs.bitnet
            Department of Philosophy
            Notre Dame, IN  46556
            219-239-7458

------------------------------

Date: 29 Aug 88 17:05:34 GMT
From: okstate!romed!cseg!lag@rutgers.edu  (L. Adrian Griffis)
Subject: Re: The Ignorant assumption

In article <1311@garth.UUCP>, smryan@garth.UUCP (Steven Ryan) writes:
> I feel you have made the distinction between Christians and Christianity
> implicitly, and I wish to make it explicit.
>
> The ideals of Christianity, tolerance, mercy, and love, would make an
> excellent system. Western Christians, on the other hand, still tend toward
> our German (cultural) ancestors. (I don't know about Eastern Christians.)

Another "ideal" of Christianity is the notion that part of what make one
a good person is believing the right things.  In other words, A great deal
of unpleasentness awaits one who does not believe in the right things.
It's not clear to me who tolerance, mercy, and love (Compassion) can ever
be meaningful when they are something that one must do to please others.
It strikes me that this is likely to lead to profound confusion over what
as individuals beliefs really are.

This is not to say that Science never indulges in this sort of intolerance
of beliefs.  But at least Science as a whole does not state as part of its
fundamental platform that you must accept such and such a belief as fact,
without evidence and without question (regardless of what individual scientists
may do).

It's not clear to me at all that any system based on the notion of belief-as-
a-performance can be at the root of an "excellent" system of government.

>
> I do take issue that Christians are held in check by the wider society. In
> this country Christians are the majority: it is the eternal internal conflicts
> between the sects that hold things in check.
>

And am I ever grateful for that.

                                     ---L. Adrian Griffis

--
  UseNet:  lag@cseg                      L. Adrian Griffis
  BITNET:  AG27107@UAFSYSB

------------------------------

Date: 2 Sep 88 10:59:02 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: Re: The Ignorant assumption

In article <545@cseg.uucp> lag@cseg.uucp (L. Adrian Griffis) writes:
>This is not to say that Science never indulges in this sort of intolerance
>of beliefs.  But at least Science as a whole does not state as part of its
>fundamental platform that you must accept such and such a belief as fact,
>without evidence and without question (regardless of what individual scientists
>may do).

Straw man!  Straw man!  Neither does Christianity state any such thing.
A major theme of the Bible is "here is the evidence".  Biblical
Archaeology (which tests the historical claims to the extent that they
*can* be tested by present archaeological methods) is regarded as a
PRO-religious activity.  Thomas *is* one of the Apostles, after all...

>Another "ideal" of Christianity is the notion that part of what make one
>a good person is believing the right things.

Again, not so.  To quote the Bible (paraphrased, because my memory's not
that reliable): "You believe in God?  So do the devils!"  An analogy:
you cannot enter into an effective marriage with a particular woman as
long as you continue to believe that she is a fossilized whale.

Criticisms of any religion are more effective when they are well-informed.

I'm a little bothered by this reification of "Science" as if it were an
agent capable of "indulging in" behaviours and "stating" things.  Perhaps
Gilbert Cockton could clarify the ontological status of "Science" for us
(:-).

What's the relevance of all of this to AI, anyway?
Are AI people unusually sensitive to "Science" issues because
we want to be part of it, or what?
The study of English literature is not normally regarded as part
of "Science", but it's a decent intellectual field for all that.

------------------------------

Date: Tue, 30 Aug 88 11:54 CDT
From: <CMENZEL%TAMLSR.BITNET@MITVMA.MIT.EDU>
Subject: theistic arguments

In a recent AIList number, T. William Wells writes of the argument from
design:

> This argument goes: "the universe appears to have been
> designed, therefore there was a designer.  I shall call it god."
> How silly!  In its refined form, this argument posits god as a
> "primary cause": this makes god "beyond" natural law, as an
> explanation for natural law.  It is trivially refuted by pointing
> out that it begs the question.  (If the universe requires a
> cause, why shouldn't god require a cause?  And if not, why
> presume god anyway?)

Wells is confusing two traditional theistic arguments here.  The first is the
argument from design, or teleological argument, which traces its origins
primarily to Paley in (if I recall) the early 18th century.  The second is
the cosmological argument, which goes back in its best known forms to Aquinas.
The teleological argument is more or less as Wells reports, though he doesn't
sufficiently emphasize the role of *explanation* in the argument; the idea is
that the amazing precision, detail, and apparent *purpose* (hence the name
"teleological argument") exhibited in the natural order can only reasonably be
explained by a rational designer, just as (Paley argues) it would be
unreasonable to suppose an intricate watch found in the desert had no designer.

Wells does less justice to the cosmological argument, which in its strongest
form argues not from the idea that anything that exists requires a cause, which
would then be open to Wells' trivial refutation, but from the *contingency* of
the universe.  The idea is that since everything in the physical universe is
contingent, i.e., might not have existed, the universe itself is contingent
(possible fallacy of composition here, but never mind).  A contingently
existing thing requires some sort of explanation for its existence, some reason
for why it exists rather than not.  The only possible explanation (so the
argument goes) is that its existence must be rooted in a *necessary* being, a
being whose nature it is to exist and hence which doesn't require a cause. It
is a further step of course to say that this being has to be God as usually
understood.

I'm not saying it's a *good* argument, just a lot better than Wells would
have it.

--Chris

------------------------------

Date: 30 Aug 88 22:37:36 GMT
From: pluto%beowulf@ucsd.edu (Mark E. P. Plutowski)
Reply-to: pluto%beowulf@ucsd.edu (Mark E. P. Plutowski)
Subject: Re: science, lawfulness, a (the?) god


Regarding this quote from a previous posting:

>  ...the "quotation" from Einstein ...is just a restatement of...
>  the argument... [that goes something like this:]
>  ..."the universe appears to have been
>  designed, therefore there was a designer.  I shall call it god."
>  How silly!  In its refined form, this argument posits god as a
>  "primary cause": this makes god "beyond" natural law, as an
>  explanation for natural law.  It is trivially refuted by pointing
>  out that it begs the question.  (If the universe requires a
>  cause, why shouldn't god require a cause?  And if not, why
>  presume god anyway?)

God didn't design the universe, God is the universe.
Therefore, God is everywhere (just like Elvis ;-} )  and everything
is God, including you and me.  Very simple.

If you accept this philosophy, then it is easier to accept the belief
that AI is plausible, since by the same token, intelligence doesn't
"cause" a being (or mechanism) to behave intelligently,
intelligence is the behavior, and hence, the being (and/or mechanism)
itself.  Therefore, hope springs eternal that this "intelligence"
is not some elusive spirit or ether; and can be studied
rationally.

At the same time, since intelligence is the whole
behavior, decomposing the behavior into its parts is only a part
of the solution to understanding it, just as separating God from
the universe (creating a separate entity called God)
can inhibit you from passing thru the proverbial "eye of the needle,"
and understanding your particular universe.


----------------------------------------------------------------------
Mark Plutowski                          INTERNET: pluto%cs@ucsd.edu
Department of Computer Science, C-014             pluto@beowulf.ucsd.edu
University of California, San Diego     BITNET:   pluto@ucsd.bitnet
La Jolla, California 92093              UNIX:{...}!sdcsvax!beowulf!pluto
----------------------------------------------------------------------
Listen to your surroundings and your self, instead of Jimmy Swaggert.

------------------------------

Date: 31 Aug 88 18:41:55 GMT
From: sri-unix!orawest.SRI.COM!ejs@decwrl.dec.com (e john sebes)
Subject: Re: science, lawfulness, a (the?) god


I'd like to respond to a few things that T. William Wells has been
writing lately about that good ole hobby-horse "science and religion".

In a previous article, T. William Wells writes:
>: >     >Science, though not scientists (unfortunately), rejects the
>: >     >validity of religion: it requires that reality is in some sense
>: >     >utterly lawful, and that the unlawful, i.e. god, has no place.
>"Lawful" does not mean "following, by choice, law", rather, it
>means: "constrained by law".  However, religion posits "god" or
>"the absolute" or what have you as that which is beyond, above,
>determines, flouts, or whatever adjective you like, natural law.
>This is essential to religion.

It is erroneous to say that "religion" requires belief in any particular idea.
*Some* religions require belief in *some* particular ideas.
Specifically, the western, theistic religions Mr. Wells is familiar with
(to the exclusion of all others, apparently) do include belief in God,
with those attributes Mr. Wells mentioned. But this is *absolutely not*
essential to religion in general, or to all particular religions.

A further point is that there are several theistic religious attitudes
which in no way entail any notion about a God acting in the physical
universe which scientists take as their purview. The "watchmaker" God of late
18th century European thought is probably the best known example to
netters; God made the universe, set it going, and enjoys the show.
Ah, Mr. Wells will say, but God *could* then act in the universe, but
just doesn't. A response then might be that perhaps He created the
universe so that He couldn't interfere after creation. Does this make
sense? Can God constrain Himself? Can He create an immovable stone and
an irresistible force?

Honestly, there is no point to such freshman philosophy hairsplitting.
Even R.C. Church theologians got over that stuff centuries ago.

Get this: it doesn't have anything to do with science!!!
After all, what is it that is so repugnant to Mr. Wells and his ilk
about a theistic scientist who also believes that God (or whatever)
doesn't act in the physical universe?

Perhaps I am missing something here, but we went over a lot of the same
ground in my 6th grade science class.

I will also try to clarify the notion of "revealed knowledge".
You call something knowledge because you believe it is true.
Current usage of terms like "knowledge" and "fact" tends to be in the
context of "physically or objectively verifiable", but of course that is
because we believe in such verification. Revealed knowledge is simply
what people call fact, but do not claim to be verifiable. In common
usage, it is therefore a misnomer. But saying that
>This translates to: "this knowledge is unknowable".
just plays on this fact, and doesn't refute the fact that some
people have beliefs to which they choose to apply this term.

I think Mr. Wells' strong concern over the fact that even today many
"rational" people are not logical positivists, really stems from a kind
of hysteria over "creationism" and similar things. He says that
>
>This individual has managed to illustrate in one very short note
>*exactly* why religion has *no* place in scientific discussion:
>the use of religion perverts reasoning by substituting "revealed
>knowledge" for evidence, requires the unknowable as part of
>reasoning, and uses ignorance as its justification.
>---
>Bill
>novavax!proxftl!bill
>
Well, I hate to let the cat out of the bag, but "religion" does no such
thing. This perversion (if you want to call it that) is done by
individuals who try to compel people who want evidence to believe in
things that will not admit of evidence. And some of these people even
try to hoke up some evidence as well!

Despicable, I admit, but also more pitiable than anything else.
And equally so is Mr. Wells' religion-bashing. Try to make this
connection: yes, religion has no place in scientific discussion, but
that is because it is *irrelevant*, not evil (the only "evil" in this
context is masking religion as science); therefore it is of little
concern in scientific discussions, and in little need of being bashed.
Rest from your intellectual imperialism, and concentrate on whether
someone's science is good work, regardless of whatever other thoughts
there might be lurking in his or her mind, thoughts which you say are
"wrong" or "silly", but are in fact merely irrelevant.

After all, isn't that what we are supposed to be about, in these
scientific discussion groups?

--John Sebes

As a postscript, I believe that all this came about not because someone opined
"I know God exists, and you AI types leave God out of your theories"
but because someone had the temerity to ask if others might be missing
ideas for interesting models of mind because of wholehearted indulgence in
total reductionism. Unfortunately, this question was stated in a way
that mentioned that fateful word "God". Oh well.

------------------------------

Date: 31 Aug 88 17:32:22 GMT
From: modcomp!joe@uunet.UU.NET (Joe Korty)
Subject: Re: Giordano Bruno


--
It has been interesting to read (although not particularly relevant to this
newsgroup) the differing views readers have on the role that Giordano Bruno
has played in history.  Perhaps some quotes from L. Lerner and E. Gosselin
("Galileo and the Specter of Bruno", Scientific American, November 1986) can
shed some light on the issue.  E. Gosselin, it should be noted, is a professor
of history whose major research interest focuses on the intellectual and
cultural history of the Renaissance and Reformation.

All quotes are w/o permission.  Editorial changes on my part are indicated
by brackets.

  "The two men are often honored as martyrs to science, but for Bruno
  astronomy was a vehicle for politics and theology.  Galileo was tried
  partly because his aims were mistakenly identified with those of Bruno.


  "[...] Bruno has the Copernican model of the solar system wrong.  He
  demonstrates total ignorance of the most elementary ideas of geometry
  [...].  He throws in scraps of pseudoscientific argument, mostly garbled,
  and proceeds to high flying speculations [...].

  "[...] If Bruno had merely been a fool, he might have met with laughter
  and derision [instead of being burned at the stake].  Bruno repeatedly
  makes it clear that the "Supper" [his most important work on the Copernican
  system] is really not about the Copernican system at all: it is only
  peripherally a work on natural science and it is emphatically not to be
  taken literally.  In accordance with the title, its central subject is
  [instead] the nature of the Eucharist.

  "For Bruno, the value of the Copernican system lies not in its astronomical
  details but instead in its scope as a poetic and metaphoric vehicle for
  much wider philosophical speculation.  The Copernican replacement of the
  earth by the sun [...] is for Bruno a symbolic restoration of what he
  calls "the ancient true philosophy"; according to him, it is this philosophy
  one must turn in order to understand the true meaning of the Eucharist.

  "It is important to understand that Bruno's adoption of natural science
  to foster broader theological, ethical, social and political purposes
  was entirely characteristic of the Renaissance world view.  For the people
  of the Renaissance, science was literally a branch of philosophy, often
  called upon to illuminate or illustrate a nonscientific issue.  Intelligent
  and well-educated people often saw explicit and highly anthropocentric
  parallels between scientific knowledge and other aspects of life.  Bruno is
  typical of [his contemporaries] in leaping to conclusions about the relation
  of human beings to God based on theories about the workings [of nature].

In short, Bruno was condemned as a heretic because he really WAS a heretic.
He was not interested in whether or not the Copernican system was correct,
nor whether his own Copernican speculations were correct.  He
was interested only in how to use it to further his own religious and
political agenda.

For these reasons, I feel that the net discussions over Bruno have missed
the target by focusing excessively on his views of the physical world.  These
views were not important to Bruno, so I don't think they should be important
to us.
--
Joe Korty              "flames, flames, go away
uunet!modcomp!joe      come back again, some other day"

------------------------------

End of AIList Digest
********************

∂03-Sep-88  0130	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #75  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 3 Sep 88  01:30:31 PDT
Date: Fri  2 Sep 1988 23:42-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #75
To: AIList@AI.AI.MIT.EDU


AIList Digest            Saturday, 3 Sep 1988      Volume 8 : Issue 75

 Queries:
  Newell's Knowledge Level
  Machine Translation

 Responses:
  Prolog, etc. (2)
  How do I learn about AI, Prolog, and/or Lisp (2)
  The "A/D->ROM->D/A" sigmoid idea by Antti

----------------------------------------------------------------------

Date: Thu, 01 Sep 88 16:36:12 GMT
From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
Subject: Newell's Knowledge Level

From: Andrew Basden, I.T. Institute, University of Salford, Salford.

Please can anyone help clarify a topic?

In 1982 Allen Newell published a paper, 'The Knowledge Level' (Artificial
Intelligence, v.18, p.87-127), in which he proposed that there is a level
of description above and separate from the Symbol Level.  He called this
the Knowledge Level.  I have found it a very important and useful concept
in both Knowledge Representation and Knowledge Acquisition, largely
because it separates knowledge from how it is expressed.

But in my view Newell's paper contains a number of ambiguities and
apparent minor inconsistencies as well as an unnecessary adherence to
logic and goal-directed activity which I would like to sort out.  As
Newell says, "to claim that the knowledge level exists is to make a
scientific claim, which can range from dead wrong to slightly askew, in
the manner of all scientific claims."  I want to find a refinement of it
that is a bit less askew.

Surprisingly, in the 6 years since the idea was introduced there has
been very little discussion about it in AI circles.  In psychology
circles likewise there has been little detailed discussion, and here the
concepts are only similar, not identical, and bear different names.  SCI
and SSCI together give only 26 citations of the paper, of which only four
in any way discuss the concepts, most merely using various concepts in
Newell's paper to support their own statements.  Even in these four there
is little clarification or development of the idea of the Knowledge
Level.

So I am turning to the AILIST bulletin board.  Has anyone out there any
understanding of the Knowledge Level that can help in this process?
Indeed, is Allen Newell himself listening to the board?

Some of the questions I have are as follows:

1.  Some (eg. Dennett) mention 3 levels, while Newell mentions 5.  Who is
'right' - or rather, what is the relation between them?

2.  Newell says that logic is at the Knowledge Level.  Why?  I would have
put it, like mathematics, very firmly in the Symbol Level.

3.  Why the emphasis on logic?  Is it necessary to the concept, or just
one form of it?  What about extra-logical knowledge, and how does his
'logic' include non-monotonic logics?

4.  The definition of the details of the Knowledge Level is in terms of
the goals of a system.  Is this necessary to the concept, or is it just
one possible form of it?  There is much knowledge that is not goal
directed.

Alexander et al. and Clancey both question Newell's adherence to logic
and goals, but do not discuss the case.  Can anyone shed any light?  I
have further questions, which I will put directly to some of those who
reply.  Or (please tell me) should I put them on the board?  And would
anyone like a summary from me of my findings?

Thank you, in advance.

Andrew Basden

Information Technology Institute, University of Salford, Salford, UK.
JANET: abasden@uk.ac.salf.b
Phone: (44) 61 736 5843 x510;  Telex: 668680 (Sulib);
Fax: (44) 61 745 7808

------------------------------

Date: Fri, 2 Sep 88 15:59:50 PDT
From: Lynn Gazis <SAPPHO@SRI-NIC.ARPA>
Subject: machine translation

Could someone send me some good references on machine translation?
Please send mail directly to me, as I often have trouble keeping up
with the list.

Lynn Gazis
sappho@sri-nic.arpa

------------------------------

Date: 1 Sep 88 05:29:19 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Prolog, etc.

In article <1034@mtund.ATT.COM> newton@mtund.ATT.COM (Newton Lee) writes:
>In a previous article, Paul Fishwick writes:
>> Does anyone know of a PD version of Prolog that will run under UNIX?
>> It must come with source, since we would like to be able to use it on
>> any UNIX machine (including Gould, SUN, VAX, etc.).  We currently have
>
>We use C-Prolog on our UNIX machines (VAX, MIPS, 3B20, UNIX PC, etc.)
>It is based on the Prolog system written in IMP by Luis Damas (and
>Lawrence Byrd) for the ICL 2900 computers.  For more info, contact
>Fernando Pereira, EdCAAD, Dept. of Architecture, University of Edinburgh.
>
>Newton Lee
>AT&T Bell Laboratories

C Prolog is not public domain and never has been.
Fernando hasn't been at EdCAAD for about five years; he is currently
at SRI Cambridge.  EdCAAD is still the place to ask about C Prolog.

You might find Stony Brook Prolog more what you're looking for.
It's covered by a GNU-style "copyleft", but that shouldn't bother
a .edu site.  The contact is Saumya Debray: debray@arizona.edu.

I'd be tempted to mention that Q------ Prolog is really great, more
than worth the price, but it doesn't run on Goulds, so I shan't (:-).

By "any UNIX machine", I hope Fishwick means "any 32-bit byte-addressed
virtual-memory machine running V.2 or later or 4.1BSD or later".  A 286
running Xenix is a UNIX machine, but don't expect porting C Prolog or
SB Prolog to it to be trivial.

------------------------------

Date: 2 Sep 88 17:49:32 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu  (Marty Hall)
Subject: Re: Prolog, etc.

In a previous article, fishwick@fish.cis.ufl.edu writes:
>Does anyone know of a PD version of Prolog that will run under UNIX?
>It must come with source .....

SB Prolog is a PD, compilable C&M Prolog with source included.  They say
that it runs on "Berkeley UNIX or related operating systems"; I know that
it compiles and runs fine on a Sun under 3.x.  The University of Arizona
will ship you 1600 bpi tar tapes for "distribution costs" of $20 in
N. America, $40 overseas.  I am unaware of anonymous ftp sites.
        SB-Prolog Distribution
        Department of Computer Science
        University of Arizona
        Tucson, AZ  85721

                                Regards-
                                        - Marty Hall
-------
--
apl_aimh@jhunix.hcf.jhu.edu   Artificial Intelligence Laboratory, MS 100/601
...uunet!jhunix!apl_aimh      AAI Corporation
apl_aimh@jhunix.bitnet        PO Box 126
(301) 683-6455                Hunt Valley, MD  21030

------------------------------

Date: 1 Sep 88 11:37:31 GMT
From: pur-phy!sawmill!mdbs!kbc@ee.ecn.purdue.edu  (Kevin Castleberry)
Subject: Re: How do I learn about AI, Prolog, and/or Lisp

>       Microsoft has a Lisp for MS-DOS (supposedly it is Common
>       Lisp, but again, I haven't played with it).
Is this true?  Microsoft has a lisp?

Technical Support for mdbs products:
KMAN (a relational db environment),
GURU (an expert system development environment),
MDBS III (a post-relational high performance dbs)
(Our products run in VMS, UNIX, OS/2 and MSDOS.)

is available by emailing to:    support@mdbs.uucp
                or
        {rutgers,ihnp4,decvax,ucbvax}!pur-ee!mdbs!support

        The mdbs BBS can be reached at: (317) 447-6685
        300/1200/2400 baud, 8 bits, 1 stop bit, no parity

Kevin Castleberry (kbc)
Director of Customer Services

Micro Data Base Systems Inc.
P.O. Box 248
Lafayette, IN  47902
(317) 448-6187

For sales call: (800) 344-5832

------------------------------

Date: 2 Sep 88 19:08:26 GMT
From: uhccux!todd@humu.nosc.mil  (Todd Ogasawara)
Subject: Re: How do I learn about AI, Prolog, and/or Lisp

In article <984@mdbs.UUCP> kbc@mdbs.UUCP (Kevin Castleberry) writes:
>>      Microsoft has a Lisp for MS-DOS (supposedly it is Common
>>      Lisp, but again, I haven't played with it).
>Is this true?  Microsoft has a lisp?

Yes, Microsoft has a Lisp they license from a firm in Honolulu called
Soft WareHouse.  Soft WareHouse sells the same product under the name
muLISP-87.  muLISP itself is NOT a Common Lisp.  However, it comes with a
support library (source code in Lisp included) that adds the Common Lisp
functions to muLISP.

They also have an optional incremental compiler (I think this option is
$100 or so, I haven't bought it myself).

muLISP is no replacement for a big expensive Lisp workstation.  But, if you
want a small, inexpensive, relatively speedy full Lisp development environment, I
recommend you look at this package.

It is small and fast enough to use on my 4.77MHz 8088-based Toshiba T-1000
when I feel like doing some Lisp programming away from my office in the
shade of a tree.

Soft WareHouse also has an interesting license.  It reads "the software
shall be run on at most five (5) computers residing in a single building or
facility, under the control of END USER."  Pretty reasonable, I think.

--
Todd Ogasawara, U. of Hawaii Faculty Development Program
UUCP:           {uunet,ucbvax,dcdwest}!ucsd!nosc!uhccux!todd
ARPA:           uhccux!todd@nosc.MIL            BITNET: todd@uhccux
INTERNET:       todd@uhccux.UHCC.HAWAII.EDU <==I'm told this rarely works

------------------------------

Date: Thu, 1 Sep 88 16:17:04 CDT
From: lugowski@ngstl1.csc.ti.com
Subject: response to the "A/D->ROM->D/A" sigmoid idea by Antti

Concerning the  "analog/digital --> ROM --> digital/analog" neural sigmoids:

Over here in Texas, Gary Frazier (central research labs, Texas Instruments)
and I (ai laboratory, same) have played with a very similar idea for over
a year now.  We would have loved to have kept it to ourselves a bit
longer in order to quietly work out its implications, writing a nice
understated little paper about what it buys and what it doesn't, but
-- sigh -- Antti's note from the prettier end of Europe forces our hand:

1.  Consider not using ROM in favor of RAM.  This allows you to learn the
    sigmoid, if you're so inclined, or otherwise mess with it in real-time.

2.  Leave off the A/D and D/A conversions (for speed's sake) if there's
    a way to compute the thing in analog (often there is).

3.  Consider other functions, rather different from sigmoids, and consider
    uses other than neural summation for network node activities.

4.  Expect interesting system properties to emerge from this rather innocent
    looking hardware move.  More on this in our forthcoming paper.
    Some clues for those who want to think this through in the interim:
    (1) implementations for neural darwinism?, (2) more bang for the
    hyper"plane" buck?, (3) faster convergence than pure gradient descent
    in weight space?

Well, we could always turn out to be totally off base on this, but here's
the goods just in case we're not.  Comments?  Anyone else tinkering thusly?
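
As a purely illustrative reading of points 1 and 2 (not the TI design),
the idea amounts to sampling the activation function into a small
writable table indexed by the quantized input.  The table size, input
range, and names in the sketch below are assumptions made for the
example only.

import numpy as np

# Hypothetical sketch of an "A/D -> memory -> D/A" activation function:
# sample a sigmoid into a writable table, then index it by the quantized input.
class ActivationLUT:
    def __init__(self, bits=8, lo=-8.0, hi=8.0):
        self.bits, self.lo, self.hi = bits, lo, hi
        xs = np.linspace(lo, hi, 2 ** bits)       # one entry per A/D code
        self.table = 1.0 / (1.0 + np.exp(-xs))    # "ROM" contents: sampled sigmoid

    def __call__(self, x):
        # "A/D" step: clip and quantize the input to a table index;
        # the stored value is then emitted (the "D/A" step).
        frac = np.clip((x - self.lo) / (self.hi - self.lo), 0.0, 1.0)
        idx = np.round(frac * (2 ** self.bits - 1)).astype(int)
        return self.table[idx]

    def adjust(self, x, delta):
        # Because the table is writable ("RAM" rather than ROM, point 1),
        # the activation function itself can be modified on line.
        frac = min(max((x - self.lo) / (self.hi - self.lo), 0.0), 1.0)
        self.table[int(round(frac * (2 ** self.bits - 1)))] += delta

lut = ActivationLUT()
print(lut(np.array([-2.0, 0.0, 2.0])))   # approximately sigmoid at -2, 0, 2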

                                -- Marek Lugowski
                                   AI Lab, DSEG, Texas Instruments
                                   P.O. Box 655936, M/S 154
                                   Dallas, Texas 75265

                                   lugowski@resbld.csc.ti.com

------------------------------

End of AIList Digest
********************

∂04-Sep-88  2210	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #77  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 4 Sep 88  22:10:15 PDT
Date: Mon  5 Sep 1988 00:46-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #77
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 5 Sep 1988       Volume 8 : Issue 77


 Harnad vs. Pinker & Prince on language acquisition/connectionism
   (5 messages)

----------------------------------------------------------------------

Date: 1 Sep 88 19:05:36 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Pinker & Prince Reply (short version)

Posted for Pinker & Prince by S. Harnad
-----------------------------------------------------------
From: Steve Pinker <steve@cogito.mit.edu>
Site: MIT Center for Cognitive Science
Subject: answers to S. Harnad's questions, short version

Alluding to our paper "On Language and Connectionism: Analysis of a
PDP model of language acquisition", Stevan Harnad has posted a list of
questions and observations as a 'challenge' to us.  His remarks owe
more to the general ambience of the connectionism / symbol-processing
debate than to the actual text of our paper, in which the questions
are already answered. We urge those interested in these issues to read
the paper or the nutshell version published in Trends in
Neurosciences, either of which may be obtained from Prince (address
below). In this note we briefly answer Harnad's three questions. In
another longer message to follow, we direct an open letter to Harnad
which justifies the answers and goes over the issues he raises in more
detail.

Question # 1: Do we believe that English past tense formation is not
learnable?  Of course we don't! So imperturbable is our faith in the
learnability of this system that we ourselves propose a way in which
it might be done (OLC, 130-136).

Question #2: If it is learnable, is it specifically unlearnable by
nets? No, there may be some nets that can learn it; certainly any net
that is intentionally wired up to behave exactly like a rule-learning
algorithm can learn it. Our concern is not with (the mathematical
question of) what nets can or cannot do in principle, but with which
theories are true, and our conclusions were about pattern associators
using distributed phonological representations.  We showed that it is
unlikely that human children learn the regular rule the way such a
pattern associator learns the regular rule, because it is simply the
wrong tool for the job.  Therefore it's not surprising that the
developmental data confirm that children do not behave in the way that
such a pattern associator behaves.

Question # 3: If past tense formation is learnable by nets, but only
if the invariance that the net learns and that causally constrains its
successful performance is describable as a "rule", what's wrong with
that?  Absolutely nothing! --just like there's nothing wrong with
saying that past tense formation is learnable by a bunch of
precisely-arranged molecules (viz., the brain) but only if the
invariance that the molecules learn, etc. etc. etc.  The question is,
what explains the facts of human cognition?  Pattern associator
networks have some interesting properties that can shed light on
certain kinds of phenomena, such as *irregular* past tense forms. But
it is simply a fact about the *regular* past tense alternation in
English that it is not that kind of phenomenon.  You can focus on the
interesting empirical predictions of pattern associators, and use them
to explain certain things (but not others), or you can generalize them
to a class of universal devices that can explain nothing without an
appeal to the rules that they happen to implement. But you can't have
it both ways.

Alan Prince
Program in Cognitive Science
Department of Psychology
Brown 125
Brandeis University
Waltham, MA 02254-9110
prince@brandeis.bitnet

Steven Pinker
Department of Brain and Cognitive Sciences
E10-018
MIT
Cambridge, MA 02139
steve@cogito.mit.edu

References:

Pinker, S. & Prince, A. (1988) On language and connectionism: Analysis
of a parallel distributed processing model of language acquisition.
Cognition, 28, 73-193. Reprinted in S. Pinker & J.  Mehler (Eds.),
Connections and symbols. Cambridge, MA: Bradford Books/MIT Press.

Prince, A. & Pinker, S. (1988) Rules and connections in human
language. Trends in Neurosciences, 11, 195-202.

Rumelhart, D. E. & McClelland, J. L. (1986) On learning the past
tenses of English verbs. In J. L. McClelland, D. E. Rumelhart, & The
PDP Research Group, Parallel distributed processing: Explorations in
the microstructure of cognition. Volume 2: Psychological and
biological models. Cambridge, MA: Bradford Books/MIT Press.
----------------------------------------------------------------

Posted for Pinker & Prince by:
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: 1 Sep 88 19:09:35 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Pinker & Prince Reply (long version)


Posted for Pinker & Prince by S. Harnad
------------------------------------------------------------------
From: Steve Pinker <steve@cogito.mit.edu>
To: Stevan Harnad (harnad@mind.princeton.edu)
Site: MIT Center for Cognitive Science
Subject: answers to S. Harnad's questions, longer version

This letter is a reply to your posted list of questions and
observations alluding to our paper "On language and connectionism:
Analysis of a PDP model of language acquisition" (Pinker & Prince,
1988; see also Prince and Pinker, 1988).  The questions are based on
misunderstandings of our papers, in which they are already answered.

(1) Contrary to your suggestion, we never claimed that pattern
associators cannot learn the past tense rule, or anything else, in
principle. Our concern is with which theories of the psychology of
language are true.  This question cannot be answered from an armchair
but only by examining what people learn and how they learn it.  Our
main conclusion is that the claim that the English past tense rule is
learned and represented as a pattern-associator with distributed
representations over phonological features for input and output forms
(e.g., the Rumelhart-McClelland 1986 model) is false.  That's because
what pattern-associators are good at is precisely what the regular
rule doesn't need. Pattern associators are designed to pick up
patterns of correlation among input and output features. The regular
past tense alternation, as acquired by English speakers, is not
systematically sensitive to phonological features.  Therefore some of
the failures of the R-M model we found are traceable to its trying to
handle the regular rule with an architecture inappropriate to the
regular rule.
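
(For concreteness: the class of models at issue here -- a two-layer
pattern associator that maps distributed input features to output
features and is trained by an error-correcting procedure -- can be
sketched in a few lines.  The sketch below is only a schematic
illustration of that architecture, not the Rumelhart-McClelland model;
random binary vectors stand in for phonological feature codings.)

import numpy as np

# Schematic two-layer pattern associator: one weight per input/output
# feature pair, thresholded output units, perceptron-style error correction.
rng = np.random.default_rng(0)
n_in, n_out = 16, 16
W = np.zeros((n_out, n_in))
b = np.zeros(n_out)

def predict(x):
    return (W @ x + b > 0).astype(int)

# Toy training pairs (input features, output features); a real model would
# use feature vectors encoding verb stems and their past-tense forms.
pairs = [(rng.integers(0, 2, n_in), rng.integers(0, 2, n_out)) for _ in range(20)]

for epoch in range(50):
    for x, t in pairs:
        err = t - predict(x)        # unit-by-unit error-correcting update
        W += np.outer(err, x)
        b += err

learned = sum(int(np.array_equal(predict(x), t)) for x, t in pairs)
print(learned, "of", len(pairs), "pairs learned")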

We therefore predict that these failures should be seen in other
network models that compute the regular past tense alternation using
pattern associators with distributed phonological representations
(*not* all conceivable network models, in general, in principle,
forever, etc.).  This prediction has been confirmed.  Egedi and Sproat
(1988) devised a network model that retained the assumption of
associations between distributed phonological representations but
otherwise differed radically from the R-M model: it had three layers,
not two; it used a back-propagation learning rule, not just the simple
perceptron convergence procedure; it used position-specific
phonological features, not context-dependent ones; and it had a
completely different output decoder. Nonetheless its successes and
failures were virtually identical to those of the R-M model.

(2) You claim that

     "the regularities you describe -- both in the
     irregulars and the regulars -- are PRECISELY the kinds of
     invariances you would expect a statistical pattern
     learner that was sensitive to higher order correlations to
     be able to learn successfully. In particular, the
     form-independent default option for the regulars should be
     readily inducible from a representative sample."

This is an interesting claim and we strongly encourage you to back it
up with argument and analysis; a real demonstration of its truth would
be a significant advance.  It's certainly false of the R-M and
Egedi-Sproat models.  There's a real danger in this kind of glib
commentary of trivializing the issues by assuming that net models are
a kind of miraculous wonder tissue that can do anything.  The
brilliance of the Rumelhart and McClelland (1986) paper is that they
studiously avoided this trap. In the section of their paper called
"Learning regular and exceptional patterns in a pattern associator"
they took great pains to point out that pattern associators are good
at specific things, especially exploiting statistical regularities in
the mapping from one set of featural patterns to another. They then
made the interesting empirical claim that these basic properties of the
pattern associator model lie at the heart of the acquisition of the
past tense. Indeed, the properties of the model afforded it some
interesting successes with the *irregular* alternations, which fall
into family resemblance clusters of the sort that pattern associators
handle in interesting ways.  But it is exactly these properties of the
model that made it fail at the *regular* alternation, which does not
form family resemblance clusters.

We like to think that these kinds of comparisons make for productive
empirical science. The successes of the pattern associator
architecture for irregulars teaches us something about the psychology
of the irregulars (basically a memory phenomenon, we argue), and its
failures for the regulars teach us something about the psychology of
the regulars (use of a default rule, we argue).  Rumelhart and
McClelland disagree with us over the facts but not over the key
emprical tests. They hold that pattern associators have particular
aptitudes that are suited to modeling certain kinds of processes,
which they claim are those of cognition.  One can argue for or against
this and learn something about psychology while so doing.  Your claim
about a 'statistical pattern learner...sensitive to higher order
correlations' is essentially impossible to evaluate.

(3) We're mystified that you attribute to us the claim that "past
tense formation is not learnable in principle." The implication is
that our critique of the R-M model was based on the assertion that the
rule is unlearned and that this is the key issue separating us from
R&M.  Therefore -- you seem to reason -- if the rule is learned, it is
learned by a network. But both parts are wrong. No one in his right
mind would claim that the English past tense rule is "built in".  We
spent a full seven pages (130-136) of 'OLC' presenting a simple model
of how the past tense rule might be learned by a symbol manipulation
device.  So obviously we don't believe it can't be learned. The
question is how children in fact do it.

The only way we can make sense of this misattribution is to suppose
that you equate "learnable" with "learnable by some (nth-order)
statistical algorithm". The underlying presupposition is that
statistical modeling (of an undefined character) has some kind of
philosophical priority over other forms of analysis; so that if
statistical modeling seems somehow possible-in-principle, then
rule-based models (and the problems they solve) can be safely ignored.
As a kind of corollary, you seem to assume that unless the input is so
impoverished as to rule out all statistical modeling, rule theories
are irrelevant; that rules are impossible without major
stimulus-poverty. In our view, the question is not CAN some (ungiven)
algorithm 'learn' it, but DO learners approach the data in that
fashion. Poverty-of-the-stimulus considerations are one out of many
sources of evidence in this issue. (In the case of the past tense
rule, there is a clear P-of-S argument for at least one aspect of the
organization of the inflectional system: across languages, speakers
automatically regularize verbs derived from nouns and adjectives
(e.g., 'he high-sticked/*high-stuck the goalie'; 'she braked/*broke the
car'), despite virtually no exposure to crucial informative data in
childhood. This is evidence that the system is built around
representations corresponding to the constructs 'word', 'root', and
'irregular'; see OLC 110-114.)
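
(Schematically, and only as a toy illustration of the organization just
described -- stored irregular roots plus a default operation that is
insensitive to stem phonology, with derived verbs bypassing the
irregular memory -- one might write something like the following.  This
is not the OLC model; the irregular list and the spelling adjustment
are invented for the example.)

# Toy "stored irregulars + default rule" organization; purely illustrative.
IRREGULARS = {"go": "went", "stick": "stuck", "break": "broke", "sing": "sang"}

def past_tense(verb, derived_from_noun=False):
    # Verbs derived from nouns or adjectives are treated as new roots and
    # bypass the irregular memory ("he high-sticked the goalie").
    if not derived_from_noun and verb in IRREGULARS:
        return IRREGULARS[verb]
    # Default operation: applies to any stem, regardless of its phonology.
    return verb + ("d" if verb.endswith("e") else "ed")

print(past_tense("stick"))                               # stuck (stored form)
print(past_tense("high-stick", derived_from_noun=True))  # high-sticked (regularized)
print(past_tense("walk"))                                # walked (default)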

(4) You bring up the old distinction between rules that describe
overall behavior and rules that are explicitly represented in a
computational device and play a causal role in its behavior.  Perhaps,
as you say, "these are not crisp issues, and hence not a solid basis
for a principled critique". But it was Rumelhart and McClelland who
first brought them up, and it was the main thrust of their paper. We
tend to agree with them that the issues are crisp enough to motivate
interesting research, and don't just degenerate into discussions of
logical possibilities. We just disagree about which conclusions are
warranted. We noted that (a) the R-M model is empirically incorrect,
therefore you can't use it to defend any claims for whether or not
rules are explicitly represented; (b) if you simply wire up a network
to do exactly what a rule does, by making every decision about how to
build the net (which features to use, what its topology should be,
etc.) by consulting the rule-based theory, then that's a clear sense
in which the network "implements" the rule.  The reason is that the
hand-wiring and tweaking of such a network would not be motivated by
principles of connectionist theory; at the level at which the
manipulations are carried out, the units and connections are
indistinguishable from one another and could be wired together any way
one pleased. The answer to the question "Why is the network wired up
that way?" would come from the rule-theory; for example, "Because the
regular rule is a default operation that is insensitive to stem
phonology". Therefore in the most interesting sense such a network
*is* a rule. The point carries over to more complex cases, where one
would have different subnetworks corresponding to different parts of
rules.  Since it is the fact that the network implements such-and-such
a rule that is doing the work of explaining the phenomenon, the
question now becomes, is there any reason to believe that the rule is
implemented in that way rather than some other way?

Please note that we are *not* asserting that no PDP model of any sort
could ever acquire linguistic knowledge without directly implementing
linguistic rules. Our hope, of course, is that as the discussion
proceeds, models of all kinds will become more sophisticated and
ambitious. As we said in our Conclusion, "These problems are exactly
that, problems.  They do not demonstrate that interesting PDP models
of language are impossible in principle. At the same time, they show
that there is no basis for the belief that connectionism will dissolve
the difficult puzzles of language, or even provide radically new
solutions to them."

So to answer the catechism:

(a) Do we believe that English past tense formation is not learnable?
Of course we don't!

(b) If it is learnable, is it specifically unlearnable by nets?  No,
there may be some nets that can learn it; certainly any net that is
intentionally wired up to behave exactly like a rule-learning
algorithm can learn it. Our concern is not with (the mathematical
question of) what nets can or cannot do in principle, but about which
theories are true, and our analysis was of pattern associators using
distributed phonological representations. We showed that it is
unlikely that human children learn the regular rule the way such a
pattern associator learns the regular rule, because it is simply the
wrong tool for the job. Therefore it's not surprising that the
developmental data confirm that children do not behave the way such a
pattern associator behaves.

(c) If past tense formation is learnable by nets, but only if the
invariance that the net learns and that causally constrains its
successful performance is describable as a "rule", what's wrong with
that? Absolutely nothing! -- just like there's nothing wrong with
saying that past tense formation is learnable by a bunch of
precisely-arranged molecules (viz., the brain) such that the
invariance that the molecules learn, etc. etc.  The question is, what
explains the facts of human cognition? Pattern associator networks
have some interesting properties that can shed light on certain kinds
of phenomena, such as irregular past tense forms.  But it is simply a
fact about the regular past tense alternation in English that it is
not that kind of phenomenon.  You can focus on the interesting
empirical properties of pattern associators, and use them to explain
certain things (but not others), or you can generalize them to a class
of universal devices that can explain nothing without appeals to the
rules that they happen to implement. But you can't have it both ways.

Steven Pinker
Department of Brain and Cognitive Sciences
E10-018
MIT
Cambridge, MA 02139
steve@cogito.mit.edu

Alan Prince
Program in Cognitive Science
Department of Psychology
Brown 125
Brandeis University
Waltham, MA 02254-9110
prince@brandeis.bitnet

References:

Egedi, D.M. and R.W. Sproat (1988) Neural Nets and Natural Language
Morphology, AT&T Bell Laboratories, Murray Hill, NJ 07974.

Pinker, S. & Prince, A. (1988) On language and connectionism: Analysis
of a parallel distributed processing model of language acquisition.
Cognition, 28, 73-193. Reprinted in S. Pinker & J.  Mehler (Eds.),
Connections and symbols. Cambridge, MA: Bradford Books/MIT Press.

Prince, A. & Pinker, S. (1988) Rules and connections in human
language. Trends in Neurosciences, 11, 195-202.

Rumelhart, D. E. & McClelland, J. L. (1986) On learning the past
tenses of English verbs. In J. L. McClelland, D. E. Rumelhart, & The
PDP Research Group, Parallel distributed processing: Explorations in
the microstructure of cognition. Volume 2: Psychological and
biological models. Cambridge, MA: Bradford Books/MIT Press.
-------------------------------------------------------------
Posted for Pinker & Prince by:
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: 1 Sep 88 19:13:37 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: Pinker & Prince Reply (long version)


               ON THEFT VS HONEST TOIL

Pinker & Prince (prince@mit.cogito.edu) write in reply:

>>  Contrary to your suggestion, we never claimed that pattern associators
>>  cannot learn the past tense rule, or anything else, in principle.

I've reread the paper, and unfortunately I still find it ambiguous:
For example, one place (p. 183) you write:
   "These problems are exactly that, problems. They do not demonstrate
   that interesting PDP models of language are impossible in principle."
But elsewhere (p. 179) you write:
   "the representations used in decomposed, modular systems are
   abstract, and many aspects of their organization cannot be learned
   in any obvious way." [Does past tense learning depend on any of
   this unlearnable organization?]
On p. 181 you write:
   "Perhaps it is the limitations of these simplest PDP devices --
   two-layer association networks -- that causes problems for the
   R & M model, and these problems would diminish if more
   sophisticated kinds of PDP networks were used."
But earlier on the same page you write:
   "a model that can learn all possible degrees of correlation among a
   set of features is not a model of a human being" [Sounds like a
   Catch-22...]

It's because of this ambiguity that my comments were made in the form of
conditionals and questions rather than assertions. But we now stand
answered: You do NOT claim "that pattern associators cannot learn the
past tense rule, or anything else, in principle."

[Oddly enough, I do: if by "pattern associators" you mean (as you mostly
seem to mean) 2-layer perceptron-style nets like the R & M model, then I
would claim that they cannot learn the kinds of things Minsky showed they
couldn't learn, in principle. Whether or not more general nets (e.g., PDP
models with hidden layers, back-prop, etc.) will turn out to have corresponding
higher-order limitations seems to be an open question at this point.]
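[A minimal sketch of the in-principle limit alluded to here, under my own
assumptions and not drawn from any of the models under discussion: no
single-layer threshold unit -- the two-layer input/output perceptron of the
Minsky & Papert analysis -- can compute XOR, because XOR is not linearly
separable. The grid search below merely illustrates the point; the general
claim is Minsky & Papert's theorem, not this Python program.]

    import itertools

    # Truth table for XOR, the classic non-linearly-separable function.
    XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    def unit(w1, w2, theta, x1, x2):
        # Linear threshold unit: fires iff the weighted sum exceeds theta.
        return 1 if w1 * x1 + w2 * x2 > theta else 0

    grid = [i / 4 for i in range(-8, 9)]      # weights/thresholds in [-2, 2]
    solutions = [
        (w1, w2, t)
        for w1, w2, t in itertools.product(grid, repeat=3)
        if all(unit(w1, w2, t, x1, x2) == y for (x1, x2), y in XOR.items())
    ]
    print("weight settings computing XOR:", len(solutions))    # prints 0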

You go on to quote my claim that:

     "the regularities you describe -- both in the
     irregulars and the regulars -- are PRECISELY the kinds of
     invariances you would expect a statistical pattern
     learner that was sensitive to higher order correlations to
     be able to learn successfully. In particular, the
     form-independent default option for the regulars should be
     readily inducible from a representative sample."

and then you comment:

>>  This is an interesting claim and we strongly encourage you to back it
>>  up with argument and analysis; a real demonstration of its truth would
>>  be a significant advance. It's certainly false of the R-M and
>>  Egedi-Sproat models. There's a real danger in this kind of glib
>>  commentary of trivializing the issues by assuming that net models are
>>  a kind of miraculous wonder tissue that can do anything.

I don't understand the logic of your challenge. You've disavowed
having claimed that any of this was unlearnable in principle. Why is it
glibber to conjecture that it's learnable in practice than that it's
unlearnable in practice? From everything you've said, it certainly
LOOKS perfectly learnable: Sample a lot of forms and discover that the
default regularity turns out to work well in most cases (i.e., the
"regulars"; the rest, the "irregulars," have their own local invariances,
likewise inducible from statistical regularities in the data).
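[To make this concrete: a toy Python sketch, with a hypothetical mini-sample
of my own, of the kind of statistical induction being conjectured. It tallies
the transformations observed in (stem, past) pairs, adopts the most frequent
one as the form-independent default, and stores the rest as exceptions. It is
an illustration of the conjecture, not a claim about R & M, E & S, or any
existing model.]

    from collections import Counter

    # Hypothetical mini-sample of (stem, past) pairs.
    sample = [("walk", "walked"), ("play", "played"), ("jump", "jumped"),
              ("kiss", "kissed"), ("sing", "sang"), ("go", "went"),
              ("call", "called"), ("ring", "rang")]

    def transformation(stem, past):
        # Crudely describe how the past form differs from the stem.
        if past.startswith(stem):
            return "add '" + past[len(stem):] + "'"
        return "irregular"

    counts = Counter(transformation(s, p) for s, p in sample)
    default = counts.most_common(1)[0][0]
    exceptions = {s: p for s, p in sample if transformation(s, p) != default}

    print("induced default:", default)        # add 'ed'
    print("stored exceptions:", exceptions)   # sing, go, ring keep local forms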

This has nothing to do with a belief in wonder tissue. It was precisely
in order to avoid irrelevant stereotypes like that that the first
posting was prominently preceded by the disclaimer that I happen to be
a sceptic about connectionism's actual accomplishments and an agnostic
about its future potential. My critique was based solely on the logic of
your argument against connectionism (in favor of symbolism). Based
only on what you've written about its underlying regularities, past
tense rule learning simply doesn't seem to pose a serious challenge for a
statistical learner -- not in principle, at any rate. It seems to have
stumped R & M 86 and E & S 88 in practice, but how many tries is
that? It is possible, for example, as suggested by your valid analysis of
the limitations of the Wickelfeature representation, that some of the
requisite regularities are simply not reflected in this phonological
representation, or that other learning (e.g. plurals) must complement
past-tense data. This looks more like an entry-point problem
(see (1) below), however, than a problem of principle for
connectionist learning of past tense formation. After all, there's no
serious underdetermination here; it's not like looking for a needle in
a haystack, or NP-complete, or like that.

I agree that R & M made rather inflated general claims on the basis of
the limited success of R & M 86. But (to me, at any rate) the only
potentially substantive issue here seems to be the one of principle (about
the relative scope and limits of the symbolic vs. the connectionistic
approach). Otherwise we're all just arguing about the scope and limits
of R & M 86 (and perhaps now also E & S 88).

Two sources of ambiguity seem to be keeping this disagreement
unnecessarily vague:

(1) There is an "entry-point" problem in comparing a toy model (e.g.,
R & M 86) with a lifesize cognitive capacity (e.g., the human ability
to form past tenses): The capacity may not be modular; it may depend on
other capacities. For example, as you point out in your article, other
phonological and morphological data and regularities (e.g.,
pluralization) may contribute to successful past tense formation. Here
again, the challenge is to come up with a PRINCIPLED limitation, for
otherwise the connectionist can reasonably claim that there's no reason
to doubt that those further regularities could have been netted exactly
the same way (if they had been the target of the toy model); the entry
point just happened to be arbitrarily downstream. I don't say this
isn't hand-waving; but it can't be interestingly blocked by hand-waving
in the opposite direction.

(2) The second factor is the most critical one: learning. You
put a lot of weight on the idea that if nets turn out to behave
rulefully then this is a vindication of the symbolic approach.
However, you make no distinction between rules that are built in (as
"constraints," say) and rules that are learned. The endstate may be
the same, but there's a world of difference in how it's reached -- and
that may turn out to be one of the most important differences between
the symbolic approach and connectionism: Not whether they use
rules, but how they come by them -- by theft or honest toil. Typically,
the symbolic approach builds them in, whereas the connectionistic one
learns them from statistical regularities in its input data. This is
why the learnability issue is so critical. (It is also what makes it
legitimate for a connectionist to conjecture, as in (1) above, that if
a task is nonmodular, and depends on other knowledge, then that other
knowledge too could be acquired the same way: by learning.)

>>  Your claim about a 'statistical pattern learner...sensitive to higher
>>  order correlations' is essentially impossible to evaluate.

There are in principle two ways to evaluate it, one empirical and
open-ended, the other analytical and definitive. You can demonstrate
that specific regularities can be learned from specific data by getting
a specific learning model to do it (but its failure would only be evidence
that that model fails for those data). The other way is to prove analytically
that certain kinds of regularities are (or are not) learnable from
certain kinds of data (by certain means, I might add, because
connectionism may be only one candidate class of statistical learning
algorithms). Poverty-of-the-stimulus arguments attempt to demonstrate
the latter (i.e., unlearnability in principle).

>>  We're mystified that you attribute to us the claim that "past
>>  tense formation is not learnable in principle."... No one in his right
>>  mind would claim that the English past tense rule is "built in".  We
>>  spent a full seven pages (130-136) of 'OLC' presenting a simple model
>>  of how the past tense rule might be learned by a symbol manipulation
>>  device. So obviously we don't believe it can't be learned.

Here are some extracts from OLC 130ff:

   "When a child hears an inflected verb in a single context, it is
   utterly ambiguous what morphological category the inflection is
   signalling... Pinker (1984) suggested that the child solves this
   problem by "sampling" from the space of possible hypotheses defined
   by combinations of an innate finite set of elements, maintaining
   these hypotheses in the provisional grammar, and testing them
   against future uses of that inflection, expunging a hypothesis if
   it is counterexemplified by a future word. Eventually... only
   correct ones will survive." [The text goes on to describe a
   mechanism in which hypothesis strength grows with success frequency
   and diminishes with failure frequency through trial and error.]
   "Any adequate rule-based theory will have to have a module that
   extracts multiple regularities at several levels of generality,
   assigns them strengths related to their frequency of exemplification
   by input verbs, and lets them compete in generating a past tense
   for a given verb."

It's not entirely clear from the description on pp. 130-136 (probably
partly because of the finessed entry-point problem) whether (i) this is an
innate parameter-setting or fine-tuning model, as it sounds, with the
"learning" really just choosing among or tuning the built-in parameter
settings, or whether (ii) there's genuine bottom-up learning going on here.
If it's the former, then that's not what's usually meant by "learning."
If it's the latter, then the strength-adjusting mechanism sounds equivalent
to a net, one that could just as well have been implemented nonsymbolically.
(You do state that your hypothetical module would be equivalent to R & M's in
many respects, but it is not clear how this supports the symbolic approach.)
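[The strength-adjusting mechanism described in the quoted passage can be
sketched directly; the candidate regularities, learning rate, and data below
are my own assumptions, not Pinker & Prince's actual module. Each candidate
gains strength when it matches an input (stem, past) pair and loses strength
when counterexemplified, and the strongest candidate wins the competition to
generate a past tense. This is also why the mechanism reads as net-like: it is
weighted competition driven by exemplification frequency.]

    # Candidate regularities with adjustable strengths (values hypothetical).
    candidates = {
        "suffix -ed":    {"apply": lambda s: s + "ed",               "strength": 0.5},
        "i -> a ablaut": {"apply": lambda s: s.replace("i", "a", 1), "strength": 0.5},
        "no change":     {"apply": lambda s: s,                      "strength": 0.5},
    }

    def train(pairs, rate=0.1):
        # Strength grows with success frequency, diminishes with failure.
        for stem, past in pairs:
            for rule in candidates.values():
                hit = rule["apply"](stem) == past
                rule["strength"] = max(0.0, rule["strength"] + (rate if hit else -rate))

    def past_tense(stem):
        # The candidates compete; the strongest one generates the output.
        best = max(candidates.values(), key=lambda r: r["strength"])
        return best["apply"](stem)

    train([("walk", "walked"), ("sing", "sang"), ("play", "played"),
           ("jump", "jumped"), ("call", "called")])
    print(past_tense("wug"))   # "wuged": the -ed default now dominates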

[It's also unclear what to make of the point you add in your reply (again
partly because of the entry-point problem):
>>"(In the case of the past tense rule, there is a clear P-of-S argument
for at least one aspect of the organization of the inflectional system...)">>
Is this or is this not a claim that all or part of English past tense
formation is not learnable (from the data available to the child) in
principle? There seems to be some ambiguity (or perhaps ambivalence) here.]

>>  The only way we can make sense of this misattribution is to suppose
>>  that you equate "learnable" with "learnable by some (nth-order)
>>  statistical algorithm". The underlying presupposition is that
>>  statistical modeling (of an undefined character) has some kind of
>>  philosophical priority over other forms of analysis; so that if
>>  statistical modeling seems somehow possible-in-principle, then
>>  rule-based models (and the problems they solve) can be safely ignored.

Yes, I equate learnability with an algorithm that can extract
statistical regularities (possibly nth order) from input data.
Connectionism seems to be (an interpretation of) a candidate class of
such algorithms; so does multiple nonlinear regression. The question of
"philosophical priority" is a deep one (on which I've written:
"Induction, Evolution and Accountability," Ann. NY Acad. Sci. 280,
1976). Suffice it to say that induction has epistemological priority
over innatism (or such a case can be made) and that a lot of induction
(including hypothesis-strengthening by sampling instances) has a
statistical character. It is not true that where statistical induction
is possible, rule-based models must be ignored (especially if the
rule-based models learn by what is equivalent to statistics anyway),
only that the learning NEED not be implemented symbolically. But it is
true that where a rule can be learned from regularities in the data,
it need not be built in. [Ceterum sentio: there is an entry-point
problem for symbols that I've also written about: "Categorical
Perception," Cambr. U. Pr. 1987. I describe there a hybrid approach in
in which symbolic and nonsymbolic representations, including a
connectionistic component, are put together bottom-up in a principled
way that avoids spuriously pitting connectionism against symbolism.]

>>  As a kind of corollary, you seem to assume that unless the input is so
>>  impoverished as to rule out all statistical modeling, rule theories
>>  are irrelevant; that rules are impossible without major stimulus-poverty.

No, but I do think there's an entry-point problem. Symbolic rules can
indeed be used to implement statistical learning, or even to preempt it, but
they must first be grounded in nonsymbolic learning or in innate
structures. Where there is learnability in principle, learning does
have "philosophical (actually methodological) priority" over innateness.

>>  In our view, the question is not CAN some (ungiven) algorithm
>>  'learn' it, but DO learners approach the data in that fashion.
>>  Poverty-of-the-stimulus considerations are one out of many
>>  sources of evidence in this issue...
>>  developmental data confirm that children do not behave the way such a
>>  pattern associator behaves.

Poverty-of-the-stimulus arguments are the cornerstone of modern
linguistics because, if they are valid, they entail that certain
rules (or constraints) are unlearnable in principle (from the data
available to the child) and hence that a learning model must fail for
such cases. The rule system itself must accordingly be attributed to
the brain, rather than just the general-purpose inductive wherewithal
to learn the rules from experience.

Where something IS learnable in principle, there is of course still a
question as to whether it is indeed learned in practice rather than
being innate; but neither (a) the absence of data on whether it is learned
nor (b) the existence of a rule-based model that confers it on the child
for free provides very strong empirical guidance in such a case. In any
event, developmental performance data themselves seem far too
impoverished to decide between rival theories at this stage. It seems
advisable to devise theories that account for more lifesize chunks of our
asymptotic (adult) performance capacity before trying to fine-tune them
with developmental (or neural, or reaction-time, or brain-damage) tests
or constraints. (Standard linguistic theory has in any case found it
difficult to find either confirmation or refutation in developmental
data to date.)

By way of a concrete example, suppose we had two pairs of rival toy
models, symbolic vs. connectionistic, one pair doing chess-playing and
the other doing factorials. (By a "toy" model I mean one that models
some arbitrary subset of our total cognitive capacity; all models to
date, symbolic and connectionistic, are toy models in this sense.) The
symbolic chess player and the connectionistic chess player both
perform at the same level; so do the symbolic and connectionistic
factorializers. It seems evident that so little is known about how people
actually learn chess and factorials that "developmental" support would
hardly be a sound basis for choosing between the respective pairs of models
(particularly because of the entry-point problem, since these skills
are unlikely to be acquired in isolation). A much more principled way
would be to see how they scaled up from this toy skill to more and
more lifesize chunks of cognitive capacity. (It has to be conceded,
however, that the connectionist models would have a marginal lead in
this race, because they would already be using the same basic
[statistical learning] algorithm for both tasks, and for all future tasks,
presumably, whereas the symbolic approach would have to be making its
rules on the fly, an increasingly heavy load.)

I am agnostic about who would win this race; connectionism may well turn
out to be side-lined early because of a higher-order Perceptron-like limit
on its rule-learning ability, or because of principled unlearnability
handicaps. Who knows? But the race is on. And it seems obvious that
it's far too early to use developmental (or neural) evidence to decide
which way to bet. It's not even clear that it will remain a 2-man race
for long -- or that a finish might not be more likely as a
collaborative relay. (Nor is the one who finishes first or gets
farthest guaranteed to be the "real" winner -- even WITH developmental
and neural support. But that's just normal underdetermination.)

>>  if you simply wire up a network to do exactly what a rule does, by
>>  making every decision about how to build the net (which features to
>>  use, what its topology should be, etc.) by consulting the rule-based
>>  theory, then that's a clear sense in which the network "implements"
>>  the rule

What if you don't WIRE it up but TRAIN it up? That's the case at
issue here, not the one you describe. (I would of course agree that if
nets wire in a rule as a built-in constraint, that's theft, not
honest toil, but that's not the issue!)
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: 2 Sep 88 19:06:20 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: Pinker & Prince Reply (long version)


Posted for Pinker & Prince [pinker@cogito.mit.edu] by S. Harnad
--------------------------------------------------------------
In his reply to our answers to his questions, Harnad writes:

        -Looking at the actual behavior and empirical fidelity of
         connectionist models is not the right way to test
         connectionist hypotheses.

        -Developmental, neural, reaction time, and brain-damage data
         should be put aside in evaluating psychological theories.

        -The meaning of the word "learning" should be stipulated to
         apply only to extracting statistical regularities
         from input data.

        -Induction has philosophical priority over innatism.

We don't have much to say here (thank God, you are probably all
thinking). We disagree sharply with the first two claims, and have no
interest whatsoever in discussing the last two.

Alan Prince
Steven Pinker
----------------------------------------------------------------------
Posted for Pinker & Prince by:
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

Date: 3 Sep 88 20:06:45 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: Pinker & Prince Reply (On Modeling and Its Constraints)


Pinker & Prince attribute the following 4 points (not quotes) to me,
indicating that they sharply disagree with (1) and (2) and have no
interest whatsoever in discussing (3) and (4):

   (1) Looking at the actual behavior and empirical fidelity of connectionist
   models is not the right way to test connectionist hypotheses.

This was not the issue, as any attentive follower of the discussion
can confirm. The question was whether Pinker & Prince's article was to
be taken as a critique of the connectionist approach in principle, or
just of the Rumelhart & McClelland 1986 model in particular.

   (2) Developmental, neural, reaction time, and brain-damage data should be
   put aside in evaluating psychological theories.

This was a conditional methodological point; it is not correctly stated
in (2): IF one has a model for a small fragment of human cognitive
performance capacity (a "toy" model), a fragment that one has no reason
to suppose to be functionally self-contained and independent of the
rest of cognition, THEN it is premature to try to bolster confidence in
the model by fitting it to developmental (neural, reaction time, etc.)
data. It is a better strategy to try to reduce the model's vast degrees of
freedom by scaling up to a larger and larger fragment of cognitive
performance capacity. This certainly applies to past-tense learning
(although my example was chess-playing and doing factorials). It also
seems to apply to all cognitive models proposed to date. "Psychological
theories" will begin when these toy models begin to approach lifesize;
then fine-tuning and implementational details may help decide between
asymptotic rivals.

[Here's something for connectionists to disagree with me about: I don't
think there is a solid enough fact known about the nervous system
to warrant "constraining" cognitive models with it. Constraints are
handicaps; what's needed in the toy world that contemporary modeling
lives in is more power and generality in generating our performance
capacities. If "constraints" help us to get that, then they're useful
(just as any source of insight, including analogy and pure fantasy can
be useful). Otherwise they are just arbitrary burdens. The only
face-valid "constraint" is our cognitive capacity itself, and we all
know enough about that already to provide us with competence data
till doomsday. Fine-tuning details are premature; we haven't even come
near the station yet.]

   (3) The meaning of the word "learning" should be stipulated to apply
   only to extracting statistical regularities from input data.

   (4) Induction has philosophical priority over innatism.

These are substantive issues, very relevant to the issues under discussion
(and not decidable by stipulation). However, obviously, they can only be
discussed seriously with interested parties.
--
Stevan Harnad   ARPANET:  harnad@mind.princeton.edu         harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
BITNET:   harnad%mind.princeton.edu@pucc.bitnet    UUCP:   princeton!mind!harnad
CSNET:    harnad%mind.princeton.edu@relay.cs.net

------------------------------

End of AIList Digest
********************

∂05-Sep-88  0123	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #78  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 5 Sep 88  01:23:41 PDT
Date: Mon  5 Sep 1988 00:53-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #78
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 5 Sep 1988       Volume 8 : Issue 78

 Philosophy:

  Two Points (ref AI Digests passim).
  Navigation and symbol manipulation
  New books and reviews thereof, from Nature
  Can we human being think two different things in parallel?
  Newell's Knowledge Level (2)

----------------------------------------------------------------------

Date: 28 Aug 88 19:33:18 GMT
From: bph@buengc.bu.edu (Blair P. Houghton)
Reply-to: bph@buengc.bu.edu (Blair P. Houghton)
Subject: Re: Two Points (ref AI Digests passim).


In a previous article, "Gordon Joly, Statistics, UCL" writes:
>[b] With regard to what Einstein said, Heisenberg's uncertainty princinple
>    is also pertinent to "AI". The principle leads to the notion that the
>    observer influences that which is observed. So how does this affect the
>    observer who preforms a self analysis?

C'mon; Heisenberg said nothing of the kind.  He was talking about tiny
little particles with minuscule kinetic energies.  (...or maybe not |↑D )

"Know thyself" is more like Shakespeare than Heisenberg, and likely
as old as Egypt.

Physical self-analysis on the scale for Heisenberg is moot.  Electrons
"know" where they are and where they are going.  They don't have, nor
do they need, self-analysis.

I do wish people would keep *recursion* and *perturbation* straight
and different from the Uncertainty Principle.  It's a very
poor metaphor (kind of like what Freud did to Oedipus' reputation...)

                                --Blair

------------------------------

Date: 30 Aug 88 02:02:27 GMT
From: josh@klaatu.rutgers.edu (J Storrs Hall)
Subject: Re: navigation and symbol manipulation


>       How much more pleasant to think deep philosophical thoughts.
>Perhaps, if only the right formalization could be found, the problems
>of common-sense reasoning would become tractable.  One can hope.
>The search is perhaps comparable to the search for the Philosopher's Stone.
>One succeeds, or one fails, but one can always hope for success just ahead.
>Bottom-up AI is by comparison so unrewarding.  "The people want epistemology",
>as Drew McDermott once wrote.  It's depressing to think that it might take
>a century to work up to a human-level AI from the bottom.  Ants by 2000,
>mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
>and it gives an idea of what might be a realistic rate of progress.
>
>       I think it's going to be a long haul.  But then, so was physics.
>So was chemistry.  For that matter, so was electrical engineering.  We
>can but push onward.  Maybe someone will find the Philosopher's Stone.
>If not, we will get there the hard way.  Eventually.

There is much to speak for this point of view.  However, halfway
through the historical life of the steam engine, thermodynamics was
put on a sound basis.  In Physics, we had Newton, in Chemistry,
Mendeleev.  In EE there were Maxwell's equations.  There is a two-way
feedback here:  sufficient practical experience allows one to
formulate general principles, which then inform and amplify practical
efforts.

I think the robot and the expert system are the Newcomen engines of
AI.  Our "science" may be all epicycles and alchemy but what we are
after is not a Philosopher's Stone but a periodic table and a
calculus.

There was a feeling among some of the people I polled at AAAI this
year that there is a bit of a malaise in "theoretical AI".  My guess
is that we have our phlogiston and caloric theories that can be turned
into the real thing with some more work and insight.

--JoSH

------------------------------

Date: Wed, 31 Aug 88 16:00:27 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: New books and reviews thereof, from Nature


     Two books relevant to the AI field are reviewed in Nature this week
(25 August).  Philip Kichner reviews "Patterns, Thinking, and Cognition:
A Theory of Judgement", by Howard Margolis.  Margolis proposes the idea
that thinking and judgement are species of pattern recognition.  Whether or
not one agrees with this, the reviewer claims that the idea is presented
in a sufficiently thorough manner to justify a careful study of the work.

     Drew McDermott (whose name appears in this newsgroup now and then)
reviews "Logical Foundations of Artificial Intelligence", by Genesereth
and Nilsson.  The book is a presentation of the Stanford version of
the "logicism" approach to AI.  McDermott is not impressed.

                                        John Nagle

------------------------------

Date: 1 Sep 88 02:31:08 GMT
From: temvax!pacsbb!tlohrbe@bpa.bell-atl.com (trevor  lohrbeer)
Subject: Re: Can we human being think two different things in
         parallel?


In another article, Ken Johnson says:
>> Can we human being think two different things in parallel?
>
>I think most people have had the experience of suddenly gaining insight
>into the solution of a problem they last deliberately chewed over a few
>hours or days previously.  I'd say this was evidence for the brain's
>ability to work at two or more (?) high-order tasks at the same time.
>But I look forward to reading what Real Psychologists say.

In response to this, Jeff Hartung writes:

>The above may demonstrate that the brain can "process" two jobs
>simultaneously, but is this what we mean by "think"?  If so, this still
>doesn't demonstrate adequately that parallel processing is what is
>going on.  It may be equally true that serial processing on several
>jobs is happening, only some processing is below the threshold of
>awareness.  Or, there may be parallel processing , but with a limited
>number of processes at the level of awareness of the "thinker".

I think the problem does indeed lie in what we mean by "thinking". But
if we define thinking in terms of working out a definite, solvable problem,
such as working out a math problem (a large one consisting of, say,
multiplying two three-digit numbers, not something that can be recalled from
memory), and also append the notion that one must be consciously thinking it

To solve it, try to do the problem.  Try, for example, multiplying 356 x 674
and 965 x 3124 at the same time.  A way to be pretty sure that you are
figuring out the problems serially is to see if you come out with the
answers to both problems at the same time.  Try to do it and you'll find
that even for a mathematical wizard, it is impossible to work out the two
problems simultaneously at the conscious level.

At the unconscious level, though, it is possible to think in parallel.  Take
an instance of walking and talking at the same time.  The brain must send
messages to the legs, mouth, heart, and many other muscles, all at the same
time.  It must also take in the senses of touch (for balance), of vision (to
see where you're going), and sometimes smell.  It then has to analyze it all
while still keeping all the muscles moving and taking in more data.  So at
the unconscious level, the number of things that can be done in parallel
becomes innumerable.

Trevor Lohrbeer

------------------------------

Date: 3 Sep 88 15:06:22 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Newell's Knowledge Level


      Much the same idea has been referred to as "deep understanding" by
the rule-based knowledge representation people.  The term "deep structure"
is sometimes used by those working on natural language understanding.  In
both cases, the limitations of the superficial representations in use today are
being recognized.   The remark "Mycin doesn't know about bacteria" dates
from the previous decade, but is still applicable.  Many critics of AI,
from Weizenbaum to Dreyfus, have noted this problem, which some refer to
as the "knowledge representation problem".  This is a key unsolved problem
in AI.  A recent posting here by McCarthy indicates that he considers it
the key unsolved problem, and that effort should be directed toward the
development of a formal language suitable for the representation of
"deep understanding" of the real world.

      I have not heard of any system where "deep understanding" or "deep
structure" or a "knowledge level" were implemented in any general way.
In a very few systems, always ones where the underlying domain is formalizable,
there is some notion of deep understanding.  Eurisko (Lenat) comes close.
When people use these terms, they are usually talking about the parts of the
problem for which no useful approaches are known.

                                        John Nagle

------------------------------

Date: 4 Sep 88 15:53:41 GMT
From: mohan@boc.rutgers.edu (Sunil Mohan)
Subject: Re: Newell's Knowledge Level


The Knowledge Based Software Development Environment (KBSDE) group at
Rutgers University are strong believers in the separation of the
specification of knowledge from the specification of its use.  I
believe that that is the underlying theme of Newell's "Knowledge
Level".  Marr has also talked about the specification of a system in
different levels, separating knowledge from algorithm from
implementation.  This allows a partitioning of the concerns involved in
developing a system.  As a simple example, it allows one to decide
whether inability to solve a particular problem is due to lack of
knowledge or an inherently `incomplete' algorithm that uses that
knowledge.  Describing your research along these levels will also help
you and the reader decide where the contribution lies.  See for example
the paper "Learning At The Knowledge Level" by Dietterich (I think).

How many levels you choose to have depends entirely on how finely you
wish to partition your concerns.  There is no "right" partitioning.  The
eventual aim is clarity.

As far as logic belonging at the Knowledge Level is concerned, in so
far as logic is used as a declarative specification of knowledge, and
its implications, that is the purpose of the knowledge level.  I would
tend to think that logic may also be used to specify the algorithm at
the symbol level, thus allowing the capability of reasoning about the
algorithm.

I don't know what you mean by "extra-logical".  Could you perhaps be
taking the terms too literally?  Remember that the Knowledge Level in
itself is not interesting.  It is interesting because of what it
achieves (viz. clarity, focussing attention).  Logic is just a
specification and reasoning device.  Any form of logic should do, so
long as you are aware of its capabilities and limitations.

_
Sunil

------------------------------

End of AIList Digest
********************

∂05-Sep-88  1150	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #79  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 5 Sep 88  11:50:31 PDT
Date: Mon  5 Sep 1988 14:31-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #79
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 6 Sep 1988       Volume 8 : Issue 79

 Religion:
  The Ignorant assumption
  The Godless Assumption
  Science, lawfulness, a (the?) god
  Religious experience and cognitive science

 From the Moderator:
  Teleology

----------------------------------------------------------------------

Date: 2 Sep 88 10:59:02 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: Re: The Ignorant assumption

In article <545@cseg.uucp> lag@cseg.uucp (L. Adrian Griffis) writes:
>This is not to say that Science never indulges in this sort of intolerance
>of beliefs.  But at least Science as a whole does not state as part of its
>fundamental platform that you must accept such and such a belief as fact,
>without evidence and without question (regardless of what individual scientist
>may do).

Straw man!  Straw man!  Neither does Christianity state any such thing.
A major theme of the Bible is "here is the evidence".  Biblical
Archaeology (which tests the historical claims to the extent that they
*can* be tested by present archaeological methods) is regarded as a
PRO-religious activity.  Thomas *is* one of the Apostles, after all...

>Another "ideal" of Christianity is the notion that part of what make one
>a good person is believing the right things.

Again, not so.  To quote the Bible (paraphrased, because my memory's not
that reliable): "You believe in God?  So do the devils!"  An analogy:
you cannot enter into an effective marriage with a particular woman as
long as you continue to believe that she is a fossilized whale.

Criticisms of any religion are more effective when they are well-informed.

I'm a little bothered by this reification of "Science" as if it were an
agent capable of "indulging in" behaviours and "stating" things.  Perhaps
Gilbert Cockton could clarify the ontological status of "Science" for us
(:-).

What's the relevance of all of this to AI, anyway?
Are AI people unusually sensitive to "Science" issues because
we want to be part of it, or what?
The study of English literature is not normally regarded as part
of "Science", but it's a decent intellectual field for all that.

------------------------------

Date: Sat, 03 Sep 88 14:16:20 -0800
From: Liz Allen-Mitchell <elroy!grian!liz@ames.arc.nasa.gov>
Subject: Re: The Godless Assumption

Where are we starting with this?  Too many folks seem to be starting
with science and saying that God can't exist because of whatever or
that God does exist because the world is orderly or whatever.  But I
don't think too many people actually start there.  Science can't prove
or disprove the existence of God (as at least one person *did* point
out).

Where do *I* start?  Where many people start -- I believe there is a
God.  A lot of people start with the opposite assumption -- that there
is not a God.

So, how does either assumption affect how we do science?  For me, my
belief in God goes a little further than just an assumption that there
is a supernatural being out there somewhere.  I believe some very
particular things about God.  I don't, for example, believe that God is
whimsical and changes the results of my experiments just to confuse or
mislead me.  I believe that God created the universe and that He did so
in an orderly way -- in a way that allows us to reason about everything
from whether or not the sun will rise tomorrow to whether or not my
connectionist network is going to settle down with reasonable results.
I do believe that God does "interfere" in the natural world from time
to time, but that is more the exception than the rule (I could go into
more detail here, but I think it is getting rather far off the digest's
subject).  But, because I believe that God made the world in an orderly
way and that He does want us to learn about His creation, I can do
science believing that I will learn not only about the world but also
about God.  For me believing in God enhances doing science.

Others may believe in a God, but not believe in one who is orderly.
They may well have problems doing science, as some have pointed out.

Many do not believe in God at all.  They may well believe the world is
orderly but others, who are probably not scientists, may *not* believe
in an orderly world.  I don't think that a belief that there is no God
can lead one to assume that the world is a place that we can understand
in a scientific way.  I can see how one who *is* doing science can
expect it to be fruitful because it has been in the past, but if you
have never been exposed to science, you may be difficult to convince
that science is not a rather hopeless pursuit.  You may believe that
while some things (like the sun rising) are predictable, other things
(like the wind blowing) are totally random events.

I assume that since we are all scientists (or at least trying to be!),
that we do believe in an orderly universe.  Believing that there is a
God no more precludes that than believing that there is no God.


Re Bill Wells' article about revealed knowledge:  He seems to be
assuming that anyone believing in revealed knowledge must hold all
revealed knowledge absolutely.  This is not necessarily so.  I think
all of us hold some knowledge absolutely (eg that the world is
orderly).  For those of us who believe that some knowledge is revealed
by God, the knowledge we hold absolutely includes some revealed
knowledge.  But does that mean we hold all revealed knowledge that
way?  No.  One can believe in God, believe that He is perfect and
believe that He speaks to you and yet believe that not everything you
think He has told you is absolutely true.  Some do come to this
conclusion, but they are basing this on the (false) assumption that
they always hear God perfectly.

Let me give you an example and then explain how I handle this.  If you
run an experiment twice and get results that contradict previous
results, how do you handle it?  Do you decide that the world must not
be orderly after all?  Maybe you do on a cynical day, but most likely,
you decide you made a mistake somewhere.  That's how I handle revealed
knowledge.  If it contradicts some other beliefs I have or if some
later evidence contradicts the revealed knowledge, I don't stop
believing in God or in an orderly world.  I try to figure out where I
made a mistake -- and I do allow for the possibility that I simply made
up my "revealed knowledge".


From a scientist *and* a Christian...
--
                - Liz Allen-Mitchell    liz@grian.cps.altadena.ca.us
                                        ames!elroy!grian!liz
"God is light; in him there is no darkness at all." -- 1 John 1:5b

------------------------------

Date: 3 Sep 88 22:43:25 GMT
From: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Reply-to: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Subject: Re: science, lawfulness, a (the?) god


As expected, my messages generated something of a heated
response.  Also as expected, some of the response completely
missed the point.  It is flatly not arguable that religions (and
quasi-religions like Marxism) have caused some of the greatest
evils in the world.  And, in spite of what those who would
condemn science in order to defend religion say, science has
never been the *cause* (only the means) of evil.  Nor could it,
since it does not propose an ethical system, and thus does not
provide a cause for action.  (And to forestall an almost certain
response: yes, there have been great evils done *in the name of*
science, but closer examination shows one thing: the purposes
were unrelated to the scientific.)

But those observations, however true, are irrelevant to the
point.  No matter how evil religion might be (and I hold that
faith of all kinds, including the religious, is one of the
greatest evils), this fails to invalidate it as a means of
knowing.  What *does* invalidate it is its assumptions of an
unlawful reality and of an unknowable universe.

There were also several responses which oozed various kinds of
epistemological relativism to attempt to defend the notion that
science and religion are compatible.  I had originally written
contemptuous and sarcastic replies to these idiocies, but I have
had second thoughts: such fuzzy-mindedness does not deserve the
attention that specific responses would create.  That this kind
of relativism invalidates science is not a matter for debate; I
shall not waste time on it.

Following are a number of messages about which I have some
specific comments.

---

T. Michael O'Leary <HI.OLeary@MCC.COM> writes:

:      >Science, though not scientists (unfortunately), rejects the
:      >validity of religion: it requires that reality is in some sense
:      >utterly lawful, and that the unlawful, i.e. god, has no place.
:
: To me this requirement is unnecessarily strict.  Science does not
: require that reality be utterly lawful, but merely that it be possible
: for scientists to observe patterns in nature.

The mere assertion that there are patterns, without reason to
believe that they might be projected into the future, does not
constitute science.  However, the existence of the unlawful
invalidates prediction.  Of any kind.

Consider what it means to say that something is unlawful: it
means that there are *no* constraints on its actions.  The proper
answer to "Can the unlawful do X?" is *yes*.  Given such a thing,
there is no reason to believe that the patterns that we perceive,
the predictions that come true, or even our mere existence, are
not entirely accident, devoid of meaning.  And the "partly
lawful" does not provide an escape either: where would you draw
the line?

No, if we wish to accept that science be valid, we must accept
that there is *nothing* unlawful.  And since religion accepts
that there is that which is unlawful, it undercuts the necessary
ground for science.

So, to reiterate: science and religion are incompatible.  There
is no reconciliation.

---

"William E. Hamilton, Jr.", on Mon, 22 Aug 88 11:00 EDT writes:

:         ...religion and reason entail diametrically opposed views of
:         reality: religion requires the unconstrained and unknowable as
:         its base...
:
:         ...religion rejects the ultimate validity
:         of reason; ... years of attempting to reconcile the
:         differing metaphysics and epistemology of the two has utterly
:         failed to accomplish anything other than the gradual destruction
:         of religion.
:
:         Science ... rejects the
:         validity of religion: it requires that reality is in some sense
:         utterly lawful, and that the unlawful, i.e. god, has no place.
:
: The first two above paragraphs make assertions which are certainly not true
: of all religions.

I disagree; all religions I have heard of, and certainly all
major religions, are based on a metaphysics that makes science
invalid.

:                   The third makes statements I would have to
: regard as religious, since it makes assertions (reality is lawful, God is
: not) about phenomena outside the scope of science.

You missed the point: I did not say that reality was lawful
(though that is *in fact* correct), what I said was that reality
must be lawful in order that science be valid.

: Granted, religion is outside the scope of science, but that does not make it
: wrong. Art and music are outside the scope of science, too, and yet
: they teach us important aspects of being human.

I disagree.  Art and music are *not* outside the scope of
science.  And, to mention AI at least once in this message, it is
necessary, in order that AI be more than programming tricks and
mental masturbation, that the presumption behind that statement
(that that which pertains to consciousness is necessarily outside
the knowable) be false.

---

Richard A. O'Keefe <quintus!ok@Sun.COM>, writes:

: This topic really hasn't much to do with AI.
: Perhaps it could be moved somewhere else?

Actually it does.  Of the sciences, AI is easily the most
philosophical; debates on the nature of reality (which AI
researchers will have to figure out how to represent somehow) and
on the validity of knowledge are both inevitable and necessary.
And has anyone considered that ethics, too, also is relevant?

---

ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU writes:

: In a previous article, sas@BBN.COM says:
:
: > To my knowledge there is no scientific litmus test which can determine
: > the good or evil of a particular thought of action.
:
: True. From premises in the indicative mode ("this is so") you can never
: deduce a conclusion in the imperative ("you shall do so"). You need at
: least a premise in the imperative (i.e. a moral axiom).

I disagree.  Moreover, I hold that ethics is a central, if
perhaps unrecognized, problem for AI.  I would suggest that the
answers to the questions of ethics are intimately related to the
problem of goal-directed activity in AI systems.

---
Bill
novavax!proxftl!bill

------------------------------

Date: Sat, 3 Sep 88 21:32:24 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: religious experience and cognitive science

It seems that one of the main arguments against "religious knowledge" is
the subjectivity of religious experience.  But when a scientist carries
out an experiment then he gets out of it subjective experiences such as
the act of perceiving the position of a pointer on the scale of a
milliamperemeter.

Knowledge which has been derived from experience is usually considered
reliable if:

a) the experience has taken place under circumstances which are known,
are described by the experimenter, and are known to produce reliable
results

b) the experience is repeatable; it is described by the experimenter and
can be carried out by others, and when they do this they get the same
results.

Religious experience is repeatable, I would claim.  I have read
descriptions written by evangelist Christians involving their
experiences, and they are very similar.

Whether religious experiences can be considered to take place "under
circumstances which produce reliable results" is less evident.  I have
played with the idea that one could collect a large number of people
representing various religions and study them and their religious behaviour
and experiences with the methods of experimental psychology, trying to
exclude the possibilities of hallucination, bad mental health, cheating
and so forth.  This would produce scientific data either confirming or
not confirming the "reality" of religious experience.

I would guess that the experiment proposed above would indicate that
religious experience is real.  More than a decade ago, I read a book on
popularized science and found the statement that the
electroencephalograms of people who have regularly practiced Zen
meditation for a long time are different from those of ordinary people;
they have more theta waves.  Thus, Zen experience is scientifically
observable even at the neurological level.  (It is not certain whether I
can find the reference any more.)

I'm dreaming of the day when Cognitive Science can say facts about
religious experience with the same level of detail and reliability as
cognitive scientists nowadays understand human vision.

--- andy

------------------------------

Date: 4 Sep 88 22:57:57 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: The Ignorant assumption

>This is not to say that Science never indulges in this sort of intolerance
>of beliefs.  But at least Science as a whole does not state as part of its
>fundamental platform that you must accept such and such a belief as fact,
>without evidence and without question (regardless of what individual scientist
>may do).

Frequent mistake--to do science you have to accept the scientific method on
faith. Essentially science states that the universe is rational and
objective. Ultimately, any way of viewing the universe is based on assumptions
taken on faith.

A similar subject is the Church-Turing hypothesis (after all, this is comp.ai).
Minsky-style people assert it is true and that it justifies their most ambitious schemes.

>> I do take issue that Christians are held in checked by the wider society. In
>> this country Christians are the majority: it is eternal internal conflicts
>> between the sects that holds things in checks.
>
>And am I ever grateful for that.

I once heard that in the English Civil War, Protestant Anglicans and Puritans
fought each other with the hatred normally reserved for Catholics.

I suppose the battle between pro-AI and anti-AI programmers is similar.

------------------------------

Date: Mon, 05 Sep 88 11:47:24 HOE
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: Religion

In
>AIList Digest            Saturday, 3 Sep 1988      Volume 8 : Issue 76

L. Adrian Griffis writes:
>This is not to say that Science never indulges in this sort of intolerance
>of beliefs.  But at least Science as a whole does not state as part of its
>fundamental platform that you must accept such and such a belief as fact,
>without evidence and without question (regardless of what individual scientist
>may do).

This is a misunderstanding of what religious beliefs mean.

First, it is false that they must be accepted without evidence.
There is evidence. The clearest case is, of course, that of
adults who become converted into a religion (there are scientists,
too, in this class). They found the evidence sufficient for them.
Mostly, however, it is not "scientific evidence", but rather
"historical evidence" or "reasonable evidence".

Second, it is true that, after the evidence has been accepted, a
certain "obstinacy on belief" (in the words of C.S. Lewis) is required.
But this is also true of other human beliefs (which amount to more than
99 % of all our knowledge, even to scientific knowledge).

What would you say about a scientist who refuses to believe all those
scientific facts which must be learnt on authority grounds
and decides to test everything in practice? Science would not
advance much if every scientist did that.

Let me put a more convenient example. When we meet a person of the
opposite sex, we first take some time to "get the evidence". At some
point, we may be convinced that this person is appropriate as a
spouse. We may marry this person.

But later on, a certain "obstinacy on belief" is required. Or am I
going to believe every slander that may come to my ears about
my wife? Or, to be a good scientist, should I put her to the test?
Devise an experiment to find out whether she is faithful to me
under different conditions, for example? Perhaps, if I did that,
I would be considered a good scientist, but certainly, too,
a very devious person, or even a fool. Remember the story about
Cephalus and Procris.

M. Alfonseca

------------------------------

Date: Mon, 5 Sep 88 14:18 EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
Subject: Teleology


        The wise, it is said, never discuss religion or politics in
public.  This is not to say that they don't think about the issues, or
consider them unimportant, but merely that they recognize the frailty of
public discussion as a means of obtaining useful results.

        One should always be suspicious of a discussion where the most
knowledgeable parties also have the largest axes to grind, where no one
invests the time and effort to master the issues unless they have a
vested interest in the result.  The final product seems to consist of
little other than elaborate rationalizations for pre-existing notions.

        Both 'Science' and 'Religion' are, in my view, guilty of this.

        So what?

        The problem with arguing politics or religion is the small
likelihood of anyone convincing anyone else of anything.

        AIList already has enough traffic for any *four* normal lists.
The most common reason given by people who unsubscribe is 'just couldn't
keep up' or 'low signal-to-noise ratio'.

        As moderator, I find it difficult to squelch a discussion that
so many people obviously find interesting (interesting enough to post
their two-cents worth) but it simply is not germane.  Perhaps a new list
for discussing the 'Philosophy of Science' would find a large readership
(Interestingly, the physics list is currently undergoing the same sort
of turmoil over the appropriateness of meta-discussion), but it certainly
would not find me as moderator.

        Accordingly, all future postings on this topic that do not take
extreme pains to highlight their specific relevance to AI or CogSci will
be bit-bucketed without further apology.

        Sorry ...


                - nick

------------------------------

End of AIList Digest
********************

∂08-Sep-88  1526	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #80  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Sep 88  15:26:20 PDT
Date: Wed  7 Sep 1988 23:04-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #80
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 8 Sep 1988      Volume 8 : Issue 80

 Queries:

  Expert-ease Software (By Human Edge)
  References on language simulation
  Linguistic multivariable controller for a nuclear power plant
  Darwinism applied to Machine Learning.
  Should we use an inductive tool for this problem?
  FRL questions

----------------------------------------------------------------------

Date: Sat, 03 Sep 88 22:38:52 EDT
From: 6200265@pucc.princeton.edu
Subject: Expert-ease Software (By Human Edge)

I would like to purchase a copy of the knowledge-acquisition tool by Human
Edge Software called "EXPERT-EASE".

The company went out of business two years ago, and no longer supports the
software.

If you are looking to sell your copy of EXPERT-EASE, or know someone who is,
please let me know.

My copy crashed and I need the software to complete my research.

Please respond to 6200265@PUCC on BITNET, or call (609) 452-5340 during the
day.

     **********************      Thank You     ********************

     Brenda Belkin.

------------------------------

Date: Sun, 04 Sep 88 16:21 EST
From: steven horst                        
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: references on language simulation


 I have an undergraduate student who is interested in researching
 AI work in language simulation.  I am familiar with the main
 PROFESSIONAL publications, but don't know of anything that would
 give a beginning student a good and reasonably up-to-date description
 of major research areas and point him in the right direction for
 whatever specific topics he may wish to pursue further.  He has
 expressed a mild interest in parsers and a stronger interest in
 work that involves knowledge representation and context sensitivity.

 I should appreciate suggestions for:
   (a) Survey texts
   (b) Literature review articles
   (c) Bibliographies (especially annotated ones).

 Please send any suggestions to my bitnet address:
         Steven Horst          gkmarh@irishmvs.bitnet
         Department of Philosophy
         Notre Dame, IN  46556

------------------------------

Date: MON 5 SEP 88
From: Levent Akin<AKIN02%TRBOUN%BITNET@CUNYVM.CUNY.EDU>
Subject: Linguistic multivariable controller for a nuclear power plant

I am a doctoral student (in nuclear engineering) and I plan to build a
linguistic multivariable controller for a nuclear power plant (represented by
a set of first-order linear differential equations). I think there are two
possible ways of building the meta-level knowledge base:
1) A reflection of the plant model (actually a subset considering the
   observable variables), such as:
   "If fuel temperature increases then time rate of power change decreases."
or
2) Start with a minimum set of rules and build the knowledge base using
   repeated simulation and a scoring mechanism.

Since in Turkey I have very limited access to AI literature such as proceedings
and reports, any suggestions, flames, etc. to put me on the right track are
welcome.
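
As a very rough sketch of what a single rule of type 1) could look like in
code (the membership-function shapes, numeric ranges, and names below are
invented for illustration only, and are not taken from any actual plant
model):

def triangular(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def rule_fuel_temp_rising(d_fuel_temp):
    """IF the fuel-temperature increase is 'high' THEN the time rate of
    power change should decrease; returns the rule's firing strength."""
    return triangular(d_fuel_temp, 0.0, 5.0, 10.0)   # assumed deg/s scale

def control_action(d_fuel_temp):
    # Weight an assumed crisp action (-1.0 = push the rate of power change
    # down) by the firing strength; a real controller would aggregate many
    # such rules over all observable variables and then defuzzify.
    return rule_fuel_temp_rising(d_fuel_temp) * -1.0

print(control_action(7.0))   # prints -0.6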

*******************************************************************************
*The recipe for perpetual ignorance is: be satisfied with your opinions and   *
*content with your knowledge.                                                 *
*                                     Elbert Hubbard, The Philistine Vol. V   *
*******************************************************************************
-------------------------------------------------------------------------------
Levent Akin                           e-mail:EARN(BITNET): <AKIN02@TRBOUN>
Bogazici Universitesi
Muhendislik Fakultesi
P.K.2
80815 Bebek-Istanbul
TURKEY

------------------------------

Date: 5-SEP-1988 09:58:48
From: <75008378%VAX2.NIHED.IE@MITVMA.MIT.EDU>
Subject: Darwinism applied to Machine Learning.


Hello,

I'm a lecturer in the School of Electronic Engineering here in
NIHED in Dublin, and I'll shortly be embarking on research for a PhD.
I am interested in Machine Learning, particularly with a
minimal knowledge base and/or using "darwinian"/genetic type mechanisms.

I'd appreciate hearing from people working in this area
(especially, but not exclusively, in Ireland or the UK). As a
quid pro quo, I have just completed an internal research report,
reviewing some work on adaptive classifier systems (Holland et al)
and neuronal group selection (Reeke & Edelman); I won't post it because
it's rather long, but I'll happily e-mail it to anyone who's
interested.

Thanks,

Barry McMullin
EARN/BITNET/EUNET: <MCMULLINB@VAX2.NIHED.IE>

------------------------------

Date: 5 Sep 88 14:48:39 GMT
From: mcvax!dnlunx!lippolt@uunet.UU.NET (Ben Lippolt)
Subject: Should we use an inductive tool for this problem?


Hello,

We have the following problem:
         We have about 200 items. Each item belongs to one of 13 classes and
         is described by 12 attributes. The values an item has for these 12
         attributes are not absolute, however, but are expressed relative to
         the other items. Like this:

         attr1     attr2       attr3     attr4

          3         2            4         3
          2         3            3         4
          1         4            2         2
          4         1            1         1

         The numbers refer to items 1 to 4. Let's say that item 1 belongs to
         class 1, item 2 to class 3, item 3 to class 6 and item 4 to class 3.
         There is a correlation between the class an item belongs to and the
         positions it has for each attribute. We can see, for instance, that
         item 3 is ranked above item 1 for all attributes and that the class
         of item 3 is higher than the class of item 1. If we look at items 2
         and 4, we see that of these two for some attributes item 2 is ranked
         higher and for some attributes item 4. Both items belong to the same
         class.

         What we want to do now, is to check the consistency of the
         correlation between the class an item belongs to and the relative
         positions it occupies for the twelve attributes. For instance, an
         item that is ranked very high for each attribute should not belong to
         the same class as an item that is ranked very low for each attribute.
         We want to start with e.g. 50 items and check for each new item that
         we add whether its class and positions are consistent with the other
         items.
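
A rough sketch of one way the consistency check described above could be
mechanized -- a pairwise dominance test over the four example items (the
positions per attribute are read off the table above, 1 = ranked highest;
the function names and the exact criterion are only an illustration, not a
claim about what an inductive tool would actually do):

from itertools import combinations

# Positions per attribute (1 = ranked highest) for items 1-4, plus the
# classes given in the example.
positions = {1: (3, 4, 4, 4),
             2: (2, 1, 3, 3),
             3: (1, 2, 2, 1),
             4: (4, 3, 1, 2)}
klass = {1: 1, 2: 3, 3: 6, 4: 3}

def dominates(a, b):
    """Item a is ranked at least as high as item b on every attribute and
    strictly higher on at least one (a lower position number is higher)."""
    pa, pb = positions[a], positions[b]
    return all(x <= y for x, y in zip(pa, pb)) and \
           any(x < y for x, y in zip(pa, pb))

def inconsistencies():
    """Pairs where one item dominates another yet has a lower class."""
    bad = []
    for a, b in combinations(positions, 2):
        for hi, lo in ((a, b), (b, a)):
            if dominates(hi, lo) and klass[hi] < klass[lo]:
                bad.append((hi, lo))
    return bad

print(inconsistencies())   # [] -- the four example items are consistent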

Our questions are:
         Can we use an inductive tool for this problem? Are 50 cases, with
         12 attributes each, enough to start working with? Can we find
         inconsistencies, which might be rather vague, with such a tool? Is
         it possible to incorporate fuzzy logic in an inductive tool? Which
         tool should we use?

Any comments are highly appreciated.

Ben Lippolt              (..!mcvax!dnlunx!lippolt, or lippolt@hlsdnl5)
Marlies van Steenbergen  (..!mcvax!dnlunx!marlies)
PTT Research, Neher Laboratories.

------------------------------

Date: 6 Sep 88 07:04:43 GMT
From: mcvax!prlb2!crin!napoli@uunet.uu.net  (Amedeo NAPOLI)
Subject: FRL questions


My previous questions about FRL have not been answered yet, but I keep
going on in my quest.
Can ANYBODY give me some information about FRL in general, and about the
semantics and use of the % and @ special data forms in particular?

Thanx!

--
--- Amedeo Napoli @ CRIN / Centre de Recherche en Informatique de Nancy
EMAIL : napoli@crin.crin.fr - POST : BP 239, 54506 VANDOEUVRE CEDEX, France

------------------------------

End of AIList Digest
********************

∂11-Sep-88  2046	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #81  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 11 Sep 88  20:45:47 PDT
Date: Sun 11 Sep 1988 23:27-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #81
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 12 Sep 1988       Volume 8 : Issue 81

 Announcements:

  Call for papers: SCAI'89
  IEEE CVPR 1989 Call for Papers
  Interacting with Computers - Call for papers
  Intelligent CAD: Call for Papers
  Congress on Cybernetics and Systems

----------------------------------------------------------------------

Date: 30 Aug 88 10:07:04 GMT
From: mcvax!enea!tut!ks@uunet.uu.net  (Kari Systä)
Subject: Call for papers: SCAI'89


                           SCAI'89
    THE SECOND SCANDINAVIAN CONFERENCE ON ARTIFICIAL INTELLIGENCE 1989
                        June 13-15, 1989
                        Tampere, Finland

                1st Announcement and Call for Papers


The  Conference is organized by the Finnish, Danish, Norwegian and Swedish
Artificial  Intelligence Societies  and  Tampere University of Technology.
On behalf of these, the Organization Committee has the pleasure to cordially
invite everyone interested in AI and related topics to participate in SCAI'89.


TECHNICAL PROGRAM

The program of the Conference will contain invited and contributed papers in
plenary and parallel sessions, workshops and an exhibition.


CONTRIBUTIONS INVITED

Contributed papers in the following fields of AI are welcome:

  1. Logic and AI theory
  2. Knowledge representation and inference methods
  3. Knowledge based systems
  4. Natural Language and speech
  5. AI-tools and environments

Prospective authors of papers are invited to return the attached form together
with an extended abstract of their proposed paper.  Abstracts of the papers
should be submitted to the SCAI'89 Secretariat, and are due by October 31,
1988.


ABSTRACTS

The abstracts should be informative rather than descriptive and the text
should not exceed three pages.


MANUSCRIPTS

Acceptance of papers will be notified to the authors by December 31, 1988,
together with full requirements and typing instructions for the manuscripts.
Final manuscripts will have to be submitted for publication not later than
March 31, 1989.


PROCEEDINGS

The written papers will be published in the SCAI'89 Proceedings which will
be distributed to the participants at registration.


WORKSHOPS

It has been planned that a workshop titled "Medical Expert Systems" will be
arranged in connection with the Conference.  Proposals for other workshop
topics are kindly requested to be sent to the SCAI'89 Secretariat before October
31, 1988.


EXHIBITION

An exhibition of AI-tools and literature will be held during the Conference.
All enquiries should be directed to the SCAI'89 Secretariat.


SCAI'89 SECRETARIAT

All correspondence should be directed to:

   scai89@tut.fi

or:

   SCAI'89
   Tampere University of Technology
   Ms Raili Siekkinen
   P.O.BOX 527
   SF-33101 Tampere
   Finland

   Phone Int  +358 31 162441
   Telex 22313
   Telefax +358 31 162907


DEADLINES

Abstracts of papers submitted:    October 31, 1988
Acceptance of papers notified:    December 31, 1988
2nd Announcement available:       January 31, 1989
Manuscripts of papers submitted:  March 31, 1989
Registration with reduced fee:   March 31, 1989
Hotel reservation:                March 31, 1989


Please fill in the following questionnaire and send it to the
SCAI'89 Secretariat before October 31, 1988.  You may send it
either by electronic or ordinary mail.

==================================================================

PRELIMINARY PARTICIPATION QUESTIONNAIRE
(please fill in)

I wish to receive the 2nd announcement
(If more than one copy, quantity indicated)             (  )

I plan to participate in SCAI'89                        (  )

I plan to present a paper at the Conference             (  )
(title and abstract enclosed)

My paper concerns the topic:

  1. Logic and AI theory                                (  )
  2. Knowledge representation and inference methods     (  )
  3. Knowledge based systems                            (  )
  4. Natural Language and speech                        (  )
  5. AI-tools and environments                          (  )


I will attend the workshop on Medical Expert Systems    (  )

I would like to propose a workshop
"                                                      "
to be arranged as a part of the Conference (Detailed
proposal enclosed)

My organization is interested in taking part in the
exhibition; please contact me                           (  )


name
    -----------------------------------------------------


company/institution
                   --------------------------------------



---------------------------------------------------------



address
       --------------------------------------------------



---------------------------------------------------------



phone
     ----------------------------------------------------


telefax
       --------------------------------------------------


electronic mail
               ------------------------------------------
--
Kari Systa      ks@tut.fi (ks@tut.UUCP, ..!mcvax!tut!ks,  ks@fintut.bitnet)
Tampere Univ. Technology/Computer Systems Laboratory           Phone
Po. Box. 527, SF-33101 Tampere                           work: +358 31 162585
Finland                                                  home: +358 31 177412

------------------------------

Date: 31 Aug 88 23:02:18 GMT
From: mailrus!uflorida!haven!uvaarpa!virginia!uvacs!wnm@rutgers.edu 
      (Worthy N. Martin)
Subject: IEEE CVPR 1989 Call for Papers


                      CALL FOR PAPERS

              IEEE Computer Society Conference
                            on
          COMPUTER VISION AND PATTERN RECOGNITION

                    Sheraton Grand Hotel
                   San Diego, California
                      June 4-8, 1989.



                       General Chair


               Professor Rama Chellappa
               Department of EE-Systems
               University of Southern California
               Los Angeles, California  90089-0272


                     Program Co-Chairs

Professor Worthy Martin          Professor John Kender
Dept. of Computer Science        Dept. of Computer Science
Thornton Hall                    Columbia University
University of Virginia           New York, New York  10027
Charlottesville, Virginia 22901


                     Program Committee

Charles Brown         John Jarvis            Gerard Medioni
Larry Davis           Avi Kak                Theo Pavlidis
Arthur Hansen         Rangaswamy Kashyap     Alex Pentland
Robert Haralick       Joseph Kearney         Roger Tsai
Ellen Hildreth        Daryl Lawton           John Tsotsos
Anil Jain             Martin Levine          John Webb
Ramesh Jain           David Lowe



                    Submission of Papers

Four copies of complete drafts, not exceeding 25 double-spaced typed
pages, should be sent to Worthy Martin at the address given above by
November 16, 1988 (THIS IS A HARD DEADLINE).  All reviewers and authors
will be anonymous for the review process.  The cover page will be
removed before review.  The cover page must contain the title, authors'
names, primary author's address and telephone number, and index terms
containing at least one of the topics below.  The second page of the
draft should contain the title and an abstract of about 250 words.
Authors will be notified of acceptance by February 1, 1989, and final
camera-ready papers, typed on special forms, will be required by
March 8, 1989.


                  Submission of Video Tapes

As a new feature there will be one or two sessions where the authors
can present their work using video tapes only.  For information
regarding the submission of video tapes for review purposes, please
contact John Kender at the address above.



                 Conference Topics Include:

          -- Image Processing
          -- Pattern Recognition
          -- 3-D Representation and Recognition
          -- Motion
          -- Stereo
          -- Visual Navigation
          -- Shape from _____ (Shading, Contour, ...)
          -- Vision Systems and Architectures
          -- Applications of Computer Vision
          -- AI in Computer Vision
          -- Robust Statistical Methods in Computer Vision



                           Dates

      November 16, 1988 -- Papers submitted
      February 1, 1989  -- Authors informed
      March 8, 1989     -- Camera-ready manuscripts to IEEE
      June 4-8, 1989    -- Conference

------------------------------

Date: Thu, 1 Sep 88 17:32:37 BST
From: mdw%informatics.rutherford.ac.uk@NSS.Cs.Ucl.AC.UK
Subject: Interacting with Computers - Call for papers


INTERACTING WITH COMPUTERS - CALL FOR PAPERS
The Interdisciplinary Journal of Human-Computer Interaction

INTERACTING WITH COMPUTERS will provide a new international forum for
communication about HCI issues between academia and industry.
It will allow information to be disseminated in a form
accessible to all HCI practitioners, not just to academic researchers.
This new journal is produced in conjunction with the BCS
Human-Computer Interaction Specialist Group.  Its aim is to stimulate ideas
and provoke widespread discussion with a forward-looking perspective.
A dialogue will be built up between theorists, researchers and human
factors engineers in academia, industry and commerce, thus fostering
interdisciplinary dependencies.

The journal will initially appear three times a year.  The first issue
of INTERACTING WITH COMPUTERS will be published in March 1989.

Each issue will contain a large number of fully refereed papers
presented in a form and style suitable for the widest possible
audience.  All long papers will carry an executive summary for those who would
not read the paper in full.  Papers may be of any length but content will be
substantial.  Short applications-directed papers from industrial contributors
are actively encouraged.

Every paper will be refereed not only by appropriate peers but also by experts
outside the area of specialisation.  It is intended to support a continuing
commentary on published papers by referees and journal readers.

COVERAGE

o  Systems and dialogue design
o  Evaluation techniques
o  User interface design, tools and methods
o  Empirical evaluations
o  User features and user modelling
o  New research paradigms
o  Design theory, process and methodology
o  State-of-the-art reviews
o  Organisational and social issues
o  Intelligent systems
o  Training and education applications
o  Emerging technologies

PAPER TYPES

The editorial board of INTERACTING WITH COMPUTERS wish to publish papers in all
areas of HCI.  To ensure the desired breadth of coverage it has created three
Special Editorial Boards (SEBs) in:

                             o  Computer science
                             o  Human sciences
                             o  Applications

A wide range of paper types are encouraged.  The overriding concern is that
papers should contribute to advancing the field of HCI.  All papers should be of
a high standard and should be concerned with wider applications in addition to
intellectual rigour.  Papers should take an interdisciplinary approach to HCI
and should address the journal's anticipated diverse readership.

Some initial paper types are:

o  State of the art reviews on any aspect of HCI
o  Reports on theoretical research, highlighting practice and context
o  Brief papers covering technical work-in-progress
o  Experience with industrially-based applications
o  Discussion of evaluation issues and findings



READERSHIP

o  HCI professionals
o  Computer and social scientists
o  Industry practitioners
o  Systems, software and interface designers
o  Human factors engineers

CALL FOR PAPERS

We invite papers addressing problems and issues in HCI in such a way that the
contents are accessible to readers from all contributing disciplines.
Theoretically-orientated papers should attempt to demonstrate the relevance of
theory to practice and applications-orientated papers are strongly encouraged.

The language of the journal is English but the publishers are
experienced in assisting authors whose first language is not English.
Authors are encouraged to adopt a clear and accessible style.  All
papers will be fully and appropriately refereed with respect to content and
paper type.

Authors should submit six copies of their manuscript, typed on one side
only and in double-line spacing to the address below. An explanatory
document, "Guidance for Authors", is also available from this address:

General Editorial and Management Board,
INTERACTING WITH COMPUTERS,
Butterworth Scientific,
P O Box 63,
Westbury House,
Bury Street,
Guildford,
Surrey GU2 5BH,
United Kingdom.

------------------------------

Date: 8 Sep 88 15:11:44 GMT
From: mcvax!pauljan@uunet.uu.net  (Paul Veerkamp)
Subject: Intelligent CAD: Call for Papers


                            Call for Papers

                    Third Eurographics Workshop on
                        INTELLIGENT CAD SYSTEMS
                 "Practical Experience and Evaluation"

                           April 3-7, 1989
                  Hotel Opduin, Texel, The Netherlands


AIM AND SCOPE:
    This is the third workshop in a series of three Eurographics Workshops
on Intelligent CAD Systems which have the following topics

1.      1987:   Theoretical and Methodological Aspects
2.      1988:   Implementational Issues
3.      1989:   PRACTICAL EXPERIENCE AND EVALUATION

Applying knowledge engineering to CAD has during the last decade become a
major area of research, known as Intelligent CAD. The scope of this workshop
includes (but is not limited to):

-       Experiments with intelligent CAD systems.
-       The role of intelligent CAD systems in industrial design.
-       Acquisition and maintenance of design expertise.
-       Software engineering for implementations of intelligent CAD systems.
-       User interfaces for intelligent CAD systems.
-       Knowledge representation languages for design.
-       Integration of application software (finite elements, qualitative
        physics, etc.) into intelligent CAD systems.

The proceedings of the workshop will be published by Springer-Verlag in the
EurographicSeminar Books series. The record of the first workshop has already
been published by Springer-Verlag and the volume covering the second workshop
should be out before December, 1988.

DEADLINES:
-       Nov. 1, 1988      Deadline for extended abstracts.
-       Jan. 15, 1989     Notification of acceptance.
-       March 15, 1989    Submission of full papers.
-       April 3-7, 1989   Workshop.
-       May 15, 1989      Deadline for final manuscripts.
-       Dec. 1, 1989      Springer-Verlag third volume in book shops.

ABSTRACTS:
    We are planning to accept 20 papers and we shall limit the total
number of participants to 50. Please submit 3 copies of a single-spaced
extended abstract of at LEAST 1000 and at MOST 3000 words (not counting
figures and references) on A4 sheets before November 1, 1988 to the
workshop secretary:

    Ms. Marja Hegt, ICAD WS #3
    Centre for Mathematics and Computer Science (CWI)
    Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
    Tel. +31-20-592-4058, Fax +31-20-592-4199, E-mail marja@cwi.nl.uucp

Submission by electronic mail is encouraged as long as the author sends
the additional material (e.g. figures) in time. The abstract should
include: title of the contribution, author's name, address (phone, fax,
e-mail information is very useful), main text and figures, and
references. If you are interested only in participating (but not in
presenting a paper), then you are still required to submit an abstract
which should take the form of a position paper explaining how you regard
the issues of intelligent CAD. We are not planning to admit applicants
who have not submitted an extended abstract (or as explained, a position
paper).

WORKSHOP FEE:
    1100 Dutch Guilders. This price includes accommodation, food, and
a special excursion. The workshop starts on April 3rd (Monday) with dinner
and ends on April 7th (Friday) after lunch. The fee for an accompanying
non-participating person is 750.

ORGANISATION:
    This conference is organised by the Centre for Mathematics and Computer
Science (CWI). The co-chairmen are:
-       P.J.W. ten Hagen (CWI, The Netherlands), and
-       P.J. Veerkamp (CWI, The Netherlands).

PROGRAMME COMMITTEE:
        A. Agogino (Univ. of California - Berkeley, USA),
        V. Akman (Bilkent Univ., Turkey),
        F. Arbab (Univ. of Southern California, USA),
        P. Bernus (Hungarian Academy of Sciences, Hungary),
        A. Bijl (Univ. of Edinburgh, Scotland),
        J. Encarnacao (TH Darmstadt, West Germany),
        T. Kjellberg (Royal Institute of Technology, Sweden),
        G. Kramer (Schlumberger Palo Alto Research Center, USA),
        M. Mac an Airchinnigh (Univ. of Dublin, Ireland),
        K. MacCallum (Univ. of Strathclyde, UK),
        S. Murthy (IBM Thomas J. Watson Research Center, USA),
        G. Joubert (Philips Eindhoven Research Center, The Netherlands),
        D. Sriram (Massachusetts Institute of Technology, USA),
        W. Strasser (Univ. of Tuebingen, West Germany),
        T. Takala (Technical Univ. of Helsinki, Finland),
        F. Tolman (TNO, The Netherlands),
        T. Tomiyama (Univ. of Tokyo, Japan), and
        J. Treur (Univ. of Amsterdam, The Netherlands).

------------------------------

Date: 9 Sep 88 20:12:00 GMT
From: spnhc@cunyvm.bitnet
Subject: Congress on Cybernetics and Systems


             WORLD ORGANIZATION OF SYSTEMS AND CYBERNETICS

         8 T H    I N T E R N A T I O N A L    C O N G R E S S

         O F    C Y B E R N E T I C S    A N D   S Y S T E M S

                            to be held
                         June 11-15, 1990
                                at
                          Hunter College
                    City University of New York
                         New York, U.S.A.

     This triennial conference is supported by many international
groups  concerned with  management, the  sciences, computers, and
technology systems.

      The 1990  Congress  is the eighth in a series, previous events
having been held in  London (1969),  Oxford (1972), Bucharest (1975),
Amsterdam (1978), Mexico City (1981), Paris (1984) and London (1987).

      The  Congress  will  provide  a forum  for the  presentation
and discussion  of current research. Several specialized  sections
will focus on computer science, artificial intelligence, cognitive
science, psychocybernetics  and sociocybernetics.  Suggestions for
other relevant topics are welcome.

      Participants who wish to organize a symposium or a section
are requested to submit a proposal (sponsor, subject, potential
participants, very short abstracts) as soon as possible, but not
later  than  September 1989.  All submissions  and correspondence
regarding this conference should be addressed to:

                    Prof. Constantin V. Negoita
                         Congress Chairman
                   Department of Computer Science
                           Hunter College
                    City University of New York
             695 Park Avenue, New York, N.Y. 10021 U.S.A.

------------------------------

End of AIList Digest
********************

∂13-Sep-88  1842	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #82  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Sep 88  18:42:47 PDT
Date: Tue 13 Sep 1988 21:14-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #82
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 14 Sep 1988     Volume 8 : Issue 82

 Queries:

  Info on Automatic Reasoning
  Validation of Expert Shell Applications
  paper review time
  intelligent tutoring query
  model curriculum
  Network Design problems
  Request Raj Reddy's AAAI talk details
  artificial intelligence programming code

----------------------------------------------------------------------

Date: 8 Sep 88 07:23:14 GMT
From: munnari!charlie.oz.au!spock@uunet.UU.NET (Simon Tong)
Reply-to: spock@charlie.OZ (Simon Tong)
Subject: Info on Automatic Reasoning


  G'day.  I am after a list of Australian institutions that are actively
  conducting research on Automatic Reasoning and/or Automated Theorem Proving.

  I am also looking for good references to books, articles or journals
  that are devoted to the above areas.

  I would be grateful for any information ( esp. the current status of
  research, new paradigms ).

  Please respond by mail and if anyone is interested, I shall summarize to the
  network.

  Thanks in advance.

==============================================================================
  Simon Tong
  Deakin University, Geelong, Victoria.
==============================================================================

------------------------------

Date: 8 Sep 88 19:12:17 GMT
From: mailrus!ncar!dinl!noren@ohio-state.arpa  (Charles Noren)
Subject: Validation of Expert Shell Applications

I need a pointer to information on software validation techniques
in general and specifically the validation of software applications
written in an expert shell.  I am using G2 by Gensym (which I like
very much) and need to get a handle on formal verification techniques.

Thanks,
--
Chuck Noren
Martin Marietta I&CS, Denver, CO
(303) 971-7930

------------------------------

Date: 8 Sep 88 21:26:22 GMT
From: psuhcx!sbj@psuvax1.psu.edu  (Sanjay B. Joshi)
Subject: paper review time

Does anybody know the turnaround time for papers submitted to

IEEE Robotics and Automation
IEEE Systems, Man, and Cybernetics?

How long is the average review process?  And how long does it take to
appear in print once the paper is accepted?



sanjay joshi

------------------------------

Date: 9 Sep 1988 14:12:37 CDT
From: Susan.Mengel@LSR.TAMU.EDU
Subject: intelligent tutoring query


Does anybody know where I might obtain a copy of:

     Friend, J.E. and R.R. Burton.
     Teacher's Guide for Diagnostic Testing in Arithmetic:  Subtraction.
     Cognitive and Instructional Sciences.
     Xerox PARC, Palo Alto Research Center, Palo Alto, CA.

This was a manual used in the BUGGY project conducted by John Seely Brown
and Richard R. Burton.  I have written to Dr. Brown for a copy of it, but
have received no answer.  I am going to call him as well, but I still want
to see if anyone else might have it.

I am a Ph.D. student and would like to use the results of this research in my
dissertation on intelligent tutoring systems.

I would also like to know if anyone is doing research on combining neural
networks and intelligent tutoring systems.

Thanks in advance,

Susan Mengel
Research Associate

Reply to:  Dept. of Computer Science
           Texas A&M University
           College Station, TX  77843-3112
           (409) 845-5534

ARPANET:  Susan.Mengel@LSR.TAMU.EDU
BITNET:   MENGEL@TAMLSR

------------------------------

Date: Mon, 12 Sep 88 13:39 N
From: <INDUGERD%CNEDCU51.BITNET@MITVMA.MIT.EDU>
Subject: model curriculum

Hello,

As AI teaching is developing in Swiss universities, I would like to
establish a kind of model curriculum for AI, both at undergraduate and graduate
level.

So I would greatly appreciate suggestions and advice from people involved
in AI teaching, and would like to know what courses are actually taught
in universities having programs in AI.

If there is enough interest, I will summarize for the net.

Thank you.

Philippe Dugerdil
Institute of Informatics
Univ.of Neuchatel
Switzerland

Bitnet: indugerd@cnedcu51.

------------------------------

Date: 12 Sep 88 17:41:27 GMT
From: pc@bellcore.bellcore.com  (Peter Clitherow)
Subject: Network Design problems

It seems that problems of designing communications networks (which I'm
interested in) are related to other sorts of networks in different domains,
such as Urban Planning and the Oil/Gas industry.  Has anyone studied the
methods those fields use for design - if so, could they send me any
references?

Peter Clitherow, Bellcore,
  444 Hoes Lane, Room 1H-213,
  Piscataway, NJ 08854

------------------------------

Date: 12 Sep 88 21:58:25 GMT
From: att!ihlpa!tracy@bloom-beacon.mit.edu  (Tracy)
Subject: Request Raj Reddy's AAAI talk details

Sadly, I did not take notes during Raj Reddy's address at the AAAI
conference in St. Paul. Could someone please remind me what the five
tenets of AI were? I can vaguely remember some of them like the
50,000 +/- 20,000 rule.

Thanks to anyone who can help.

        --Kim Tracy, 312-979-4164

------------------------------

Date: 13 Sep 88 00:47:01 GMT
From: orange.cis.ohio-state.edu!amra@ohio-state.arpa  (Nasir K Amra)
Subject: artificial intelligence programming code

I am going through Eugene Charniak's "Artificial Intelligence Programming"
(2nd edition) in an attempt to learn Common Lisp as well as AI programming
techniques. Does anyone know if the code in the book is available via
the net (preferably ftp)? It would save me quite a lot of typing and
allow me to experiment with the code.

------------------------------

End of AIList Digest
********************

∂14-Sep-88  1829	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #83  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Sep 88  18:28:42 PDT
Date: Wed 14 Sep 1988 21:08-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #83
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 15 Sep 1988      Volume 8 : Issue 83

 Philosophy:

  The Uncertainty Principle.
  Newell's response to KL questions
  Pinker & Prince: The final remark ...
  Navigation and symbol manipulation
  I got rhythm
  Robotics and Free Will

----------------------------------------------------------------------

Date: Mon, 05 Sep 88 14:38:35 +0100
From: "Gordon Joly, Statistics, UCL"
      <gordon%stats.ucl.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: The Uncertainty Principle.

In Vol 8 # 78 Blair Houghton cries out:-
> I do wish people would keep *recursion* and *perturbation* straight
> and different from the Uncertainty Principle.

Perhaps... But what is the *perturbation* in question? "Observation"?

Blair also observes
> Electrons "know" where they are and where they are going.

And I know where I'm coming from too, Man!

On page 55 (of the American edition) of "A Brief History of Time",
Professor Stephen Hawking says

``The uncertainty principle had profound implications for the way in
which we view the world... The uncertainty principle signaled an
end to Laplace's dream of a theory of science, a model of the
universe that could be completely deterministic: one certainly
cannot predict future events exactly if one cannot even measure
the present state of the universe precisely!''

And what of "chaos"?

Gordon Joly.

------------------------------

Date: Mon, 05 Sep 88 12:14:10 EDT
From: Allen.Newell@CENTRO.SOAR.CS.CMU.EDU
Subject: Newell's response to KL questions


> From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
> Subject: Newell's Knowledge Level
> From: Andrew Basden, I.T. Institute, University of Salford, Salford.

> Please can anyone help clarify a topic?

> In 1982 Allen Newell published a paper, 'The Knowledge Level' (Artificial
> Intelligence, v.18, p.87-127), in which he proposed that there is a level
> of description above and separate from the Symbol Level.  He called this
> the Knowledge Level.  I have found it a very important and useful concept
> in both Knowledge Representation and Knowledge Acquisition, largely
> because it separates knowledge from how it is expressed.
>
> But to my view Newell's paper contains a number of ambiguities and
> apparent minor inconsistencies as well as an unnecessary adherence to
> logic and goal-directed activity which I would like to sort out.  As
> Newell says, "to claim that the knowledge level exists is to make a
> scientific claim, which can range from dead wrong to slightly askew, in
> the manner of all scientific claims."  I want to find a refinement of it
> that is a bit less askew.
>
> Surprisingly, in the 6 years since the idea was introduced there has
> been very little discussion about it in AI circles.  In psychology
> circles likewise there has been little detailed discussion, and here the
> concepts are only similar, not identical, and bear different names.  SCI
> and SSCI together give only 26 citations of the paper, of which only four
> in any way discuss the concepts, most merely using various concepts in
> Newell's paper to support their own statements.  Even in these four there
> is little clarification or development of the idea of the Knowledge
> Level.

[[AN: I agree there has been very little active use or development of the
  concept in AI, although it seems to be increasing somewhat.  The two most
  important technical uses are Tom Dietterich's notion of KL vs SL learning and
  Hector Levesque's work on knowledge bases.  Zenon Pylyshyn uses the notion
  as an appropriate way to discuss foundation issues in an upcoming book
  on the foundations of cognitive science (while also using the term
  semantic level for it).  And David Kirsh (now at MIT AIL) did a thesis in
  philosophy at Oxford on the KL some time ago, which has not been published,
  as far as I know.  We have continued to use the notion in our own research
  and it played a strong role in my William James Lectures (at Harvard).  But,
  importantly, the logicists have not found it very interesting (with the
  exception of Levesque and Brachman).  I would say the concept is doing
  about as well as the notion of weak methods did, which was introduced in
  1969 and didn't begin to play a useful role in AI until a decade later.

  I might say that the evolution of the KL in our own thinking has been
  (as I had hoped) in the direction of seeing the KL as just another systems
  level, with no special philosophical character distinguishing it from the
  other levels.
  In particular, there seems to me no more reason to talk about an observer
  taking an intentional stance when using the knowledge level to describe
  a system than there is to talk about an engineer taking the electronic-
  circuits stance when he says "consider the circuit used for ...".  It is
  ok, but the emphasis is on the wrong syllABLE.  One other point might be
  worth making.  The KL is above the SL in the systems hierarchy.  However,
  in use, one often considers a system whose internal structure is described
  at the SL as a collection of components communicating via languages and
  codes.  But the components may themselves be described at the KL, rather
  than at any lower level.  Indeed, design is almost always an approximation
  to this situation.  Such usage doesn't stretch the concept of KL and SL in
  any way or put the KL below the SL.  It is just that the scheme to be used
  to describe a system and its behavior is always pragmatic, depending on
  what is known about it and what purposes the description is to serve.
]]

> So I am turning to the AILIST bulletin board.  Has anyone out there any
> understanding of the Knowledge Level that can help in this process?
> Indeed, is Allen Newell himself listening to the board?

[[AN: No, but one of my friends (Anurag Acharya) is and forwarded it to me,
  so I return it via him.]]

> Some of the questions I have are as follows:
>
> 1.  Some (eg. Dennett) mention 3 levels, while Newell mentions 5.  Who is
> 'right' - or rather, what is the relation between them?

[[AN: The computer systems hierarchy (sans the KL), which is what I infer
  the "5" refers to, is familiar, established, and technical (i.e., welded
  into current digital technology).  There may also exist other such
  systems hierarchies.  Dennett (and Pylyshyn, loc cit) talk about 3, simply
  because the details of the lower implementations are not of interest to
  them, so they simply talk about some sort of physical systems.  There is
  no doubt that the top two levels correspond: the program or symbol level,
  and above that the knowledge, semantic or intentional systems level.  That
  does not say the formulations or interpretations of the intentional systems
  level and the KL are identical, but they are aimed at the same phenomena
  and the same systems possibilities.  There is an upcoming Brain and
  Behavioral Science treatment of Dennett's new book on the Intentional
  Stance, in which my own (short) commentary raises the question of the
  relation of these two notions, but I do not know what Dennett says about
  it, if anything.]]

> 2.  Newell says that logic is at the Knowledge Level.  Why?  I would have
> put it, like mathematics, very firmly in the Symbol Level.

[[AN: Here Basden mystifies me.  However obscure I may have been in the KL
  paper, I did not say that logic was at the KL.  On the contrary, as
  the paper says in section 4.4, "A logic is just a representation of
  knowledge.  It is not the knowledge itself, but a structure at the symbol
  level."]]

> 3.  Why the emphasis on logic?  Is it necessary to the concept, or just
> one form of it?  What about extra-logical knowledge, and how does his
> 'logic' include non-monotonic logics?

[[AN: Again, the paper seems to me rather clear about this.  Logics are
  simply languages that are designed to be clear about what knowledge
  they represent.  They have lots of family resemblances, because certain
  notions (negation, conjunction, disjunction, functions and parameters)
  are central to saying things about domains.  Monotonic logics are so called,
  because they are members of this family.  I don't have any special 'logic'
  that I am talking about, just what the culture calls logic.  The emphasis
  on logic is real, just like the emphasis on analysis (the mathematics of
  the continuum) is real for physics.  But there are lots of other ways of
  representing knowledge, for example, modeling the situations being known.
  And there is plenty of evidence that logics are not necessarily efficient
  for extracting new useful expressions.  This evidence is not just from AI,
  but from all of mathematics and science, which primarily use formalisms
  that are not logics.  As to "extra-logical" knowledge, I understand that
  term ok as a way of indicating that some knowledge is difficult to express
  in logics, but I do not understand it in any more technical way.  Certainly,
  the endeavor of people like McCarthy has been to seek ways to broaden
  the useful expressiveness of logic -- to bring within logic kinds of
  knowledge that heretofore seemed "extra-logical".  Certainly, there is lots
  of knowledge we use that we have not yet developed ways of expressing in
  external languages (data structures outside the head); and, not having done
  so, we cannot be quite sure that it can be done.

  I should say that in other people's (admittedly rare) writings about the
  KL there sometimes seems to be a presumption that logic is necessary and
  that, in particular, some notion of implicational closure is necessary.
  Neither are the case.  Often (read: usually) agents have an indefinitely
  large body of knowledge if expressed in terms of ground expressions of
  the form "in situation S with goal G take action A".  Thus, such knowledge
  needs to be represented (by us or by the agent itself) by a finite physical
  object plus some processes for extracting the applicable ground expressions
  when appropriate.  With logics this is done by taking the knowledge to be
  the implicational closure over a logic expression (usually a big
  conjunction).  But, it is  perfectly possible to have other productive ways
  (models with legal transformations), and it is perfectly possible to
  restrict logics so that modus ponens does not apply (as Levesque and others
  have recently emphasized).  I'm not quite sure why all this is difficult to
  be clear about.  It may indeed be because of the special framing role of
  logics, where to be clear in our analyses of what knowledge is there we
  always return to the fact that other representations can be transduced
  to logic in a way that preserves knowledge (though it does not preserve
  the effort profile of what it takes to bring the knowledge to bear).]]

> 4.  The definition of the details of the Knowledge Level is in terms of
> the goals of a system.  Is this necessary to the concept, or is it just
> one possible form of it?  There is much knowledge that is not goal
> directed.

[[AN: In the KL formulation, the goals of the system are indeed a necessary
  concept.  The KL is a systems level, which is to say, it is a way of
  describing the behavior of a system.  To get from knowledge to behavior
  requires some linking concept.  This is all packaged in the principle
  of rationality, which simply says that an agent uses its knowledge to
  take actions to attain its goals.  You can't get rid of goals in that
  formulation.  Whether there are other formulations of knowledge that might
  dispense with this I don't rightly know.  Basden appears to be focussing
  simply on the issue of a level that abstracts from representation and
  process.  With only that said, it would seem so.  And certainly, generally
  speaking, the development of logic and epistemology has not taken goals as
  critical.  But if one attempts to formulate a system level and not just
  a level of abstraction, then some laws of behavior are required.  And
  knowledge in action by agents seems to presuppose something in the agents
  that impels them to action.

  Dennett, D. The Intentional Stance, Cambridge, MA: Bradford Books MIT
  Press, 1988 (in press).

  Dietterich, T. G. Learning at the knowledge level.  Machine Learning,
  1986, v1, 287-316.

  Levesque, H. J. Foundations of a functional approach to knowledge
  representation, Artificial Intelligence, 1984, v23, 155-212.

  Levesque, H. J. Making believers out of computers, Artificial Intelligence,
  1987, v30, 81-108.

  Newell, A. The intentional stance and the knowledge level: Comments on
  D. Dennett, The Intentional Stance. Behavioral and  Brain Sciences (in
  press).

  Newell, A., Unified Theories of Cognition, The William James Lectures.
  Harvard University, Spring 1987  (to be published).  (See especially
  Lecture 2 on Foundations of Cognitive Science.)

  Pylyshyn, Z., Computing in cognitive science, in Posner, M. (ed) Foundations
  of Cognitive Science, MIT Bradford Press  (forthcoming).

  Rosenbloom, P. S., Laird, J. E., & Newell, A. Knowledge-level learning
  in Soar, AAAI87.

  Rosenbloom, P. S., Newell, A., &  Laird, J. Towards the knowledge level
  in Soar: The role of architecture in the use of knowledge, in VanLehn, K.,
  (ed), Architectures for Intelligence, Erlbaum (in press).

]]

------------------------------

Date: 7 Sep 88 06:10:53 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: Pinker & Prince: The final remark...


Posted for Pinker & Prince by S. Harnad:
__________________________________________________________________
From: Alan Prince <prince@cogito.mit.edu>
Cc: steve@ATHENA.MIT.EDU

Here is a final remark from us. I've posted it to connectionists and will
leave it to your good offices to handle the rest of the branching factor.
Thanks, Alan Prince.

``The Eye's Plain Version is a Thing Apart''

Whatever the intricacies of the other substantive issues that
Harnad deals with in such detail, for him the central question
must always be: "whether Pinker & Prince's article was to be taken
as a critique of the connectionist approach in principle, or just of
the Rumelhart & McClelland 1986 model in particular" (Harnad 1988c, cf.
1988a,b).

At this we are mildly abashed:   we don't understand the continuing
insistence on exclusive "or".  It is no mystery that our paper
is a detailed analysis of one empirical model of a corner (of a
corner) of linguistic capacity; nor is it obscure that from time
to time, when warranted, we draw broader conclusions (as in section 8).
Aside from the 'ambiguities' arising from Harnad's humpty-dumpty-ish
appropriation of words like 'learning', we find that the two modes
of reasoning coexist in comfort and symbiosis.  Harnad apparently
wants us to pledge allegiance to one side (or the other) of a phony
disjunction.  May we politely refuse?

S. Pinker
A. Prince
______________________________________________________________
Posted for Pinker & Prince by:
--
Stevan Harnad   ARPA/INTERNET:  harnad@mind.princeton.edu   harnad@princeton.edu
harnad@confidence.princeton.edu     srh@flash.bellcore.com      harnad@mind.uucp
CSNET:    harnad%mind.princeton.edu@relay.cs.net    UUCP:  princeton!mind!harnad
BITNET:   harnad@pucc.bitnet    harnad@pucc.princeton.edu         (609)-921-7771

------------------------------

Date: Wed, 7 Sep 88 23:59 EDT
From: Michael Travers <mt@media-lab.media.mit.edu>
Subject: Re: navigation and symbol manipulation

    Date: 23 Aug 88 06:05:43 GMT
    From: jbn@glacier.stanford.edu (John B. Nagle)

                                   It's depressing to think that it might take
    a century to work up to a human-level AI from the bottom.  Ants by 2000,
    mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
    and it gives an idea of what might be a realistic rate of progress.

Well, we're a little ahead of schedule.  I'm working on agent-based
systems for programming animal behavior, and ants are my main test case.
They're pretty convincing, and have been blessed by real ant
ethologists.  But I won't make any predictions as to how long it will
take to extend this methodology to mice or humans.

------------------------------

Date: Fri, 9 Sep 88 10:12 EDT
From: PGOETZ%LOYVAX.BITNET@MITVMA.MIT.EDU
Subject: I got rhythm

Here's a question for anybody:  Why do we have rhythm?

Picture yourself tapping your foot to the tune of the latest Top 40 trash hit.
While you do this, your brain is busy processing sensory inputs, controlling
the muscles in your foot, and thinking about whatever you think about when
you listen to Top 40 music.

If you could write a conventional program to do all those things, each task
would take a different amount of time.  It would "consciously" perceive
time at varying rates, since a lot of time spent processing one task would
give it less time of "consciousness" (whatever that is).  So if this program
were solving a system of 100 equations in 100 unknowns while tapping its
simulated foot, the foot taps would be at a slower rate than if it were
doing nothing else at all.
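
A toy sketch of that claim about a conventional program (the workload
numbers are arbitrary; the only point is that, without an external clock
to correct it, the inter-tap interval tracks the processing overhead):

import time

def busy_work(units):
    """Stand-in for 'thinking': an arbitrary amount of computation."""
    s = 0
    for i in range(units * 100_000):
        s += i * i
    return s

last = time.perf_counter()
for workload in [1, 1, 1, 10, 10, 1, 1]:      # arbitrary per-beat workloads
    busy_work(workload)                       # varying overhead between taps
    now = time.perf_counter()
    print(f"tap   (interval since last tap: {now - last:.3f} s)")
    last = now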

I suspect that the same would hold true of parallel programs and expert-
system paradigms.  For neural networks, an individual task would take the
same amount of time regardless of data, but some things require more subtasks.

It comes down to this:  Different actions require different processing
overhead.  So why, no matter what we do, do we perceive time as a constant?
Why do we, in fact, have rhythm?  Do we have an internal clock, or a
"main loop" which takes a constant time to run?  Or do we have an inadequate
view of consciousness when we see it as a program?

Phil Goetz
PGOETZ@LOYVAX.bitnet

------------------------------

Date: Sun, 11 Sep 88 17:32:38 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi@MITVMA.MIT.EDU>
Subject: Robotics and Free Will

In a recent AIList issue, John McCarthy presented the problem of how a robot
could utilize information dealing with its previous actions to improve its
behaviour in the future.  Here is an idea.

Years ago, an acquaintance of mine came across a very simple computer game
which was annoyingly overwhelming to its human opponent.

The human chose either 0 or 1.  The computer tried to guess in advance
which alternative he had chosen.  He then told the computer the
alternative he had chosen, and the computer told him whether it had
guessed right or wrong.

The human got a point if the guess of the machine was incorrect; otherwise
the machine got a point.

After a number of rounds, the computer started to play very well, guessing
the alternative that the human had chosen correctly in some 60-70 per cent of
the rounds.

Neither of us ever got to know how the game worked.  I would guess it had a
model of the behaviour of the human opponent.  Perhaps the model was a Markov
process with states "human chooses 0" and "human chooses 1"; maybe the
program performed a Fourier analysis of the time series.

This suggests an answer to McCarthy's problem.  Make the robot have a
model of the behaviour of the environment.  Calculate the parameters of
the model with a best fit approach from the history data.  The robot
also might have several possible models and choose the one which
produces the best fit to the history data.
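
As a toy sketch of that suggestion (a first-order Markov model of the
opponent, fitted incrementally from the history of choices; this is only a
guess at a mechanism, not the original program):

import random
from collections import defaultdict

class MarkovGuesser:
    """Guess the human's next 0/1 choice from what has tended to follow
    the previous choice so far (Laplace-smoothed transition counts)."""

    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])   # prev -> [count0, count1]
        self.prev = None

    def guess(self):
        if self.prev is None:
            return random.randint(0, 1)
        zeros, ones = self.counts[self.prev]
        if zeros == ones:
            return random.randint(0, 1)
        return 0 if zeros > ones else 1

    def observe(self, choice):
        """Update the model once the human reveals the actual choice."""
        if self.prev is not None:
            self.counts[self.prev][choice] += 1
        self.prev = choice

g = MarkovGuesser()
score = 0
for human in [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]:   # an (easy) alternating opponent
    score += (g.guess() == human)
    g.observe(human)
print("machine guessed right", score, "times out of 10")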

If the environment is active (other robots, humans) one also could
apply game theory.


------------------------------------------------------------------------------
Antti Ylikoski                !YLIKOSKI@FINFUN    (BITNET)
Helsinki University of Technology    !OPMVAX::YLIKOSKI    (DECnet)
Digital Systems Laboratory        !mcvax!hutds!ayl    (UUCP)
Otakaari 5 A                !
SF-02150 Espoo, Finland            !

------------------------------

End of AIList Digest
********************

∂14-Sep-88  2154	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #84  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Sep 88  21:53:52 PDT
Date: Wed 14 Sep 1988 21:18-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #84
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 15 Sep 1988      Volume 8 : Issue 84

 Mathematics and Logic:

  The Ignorant assumption (leftover Religion) (5 messages)
  Rules .vs. axioms

----------------------------------------------------------------------

Date: 9 Sep 88 03:40:56 GMT
From: s.cc.purdue.edu!afo@h.cc.purdue.edu  (Neil Rhodes)
Subject: Re: The Ignorant assumption

In a previous article, Greg Lee writes:
>From article <1383@garth.UUCP>, by smryan@garth.UUCP (Steven Ryan):
>" That's it. Any formal system requires such assumptions.
>
>Well, I would say that some natural deduction systems of logic have
>no assumptions -- only rules of derivation.  But you can probably
>find a definition of 'assumption' that makes what you say true.
>

I have a problem with Mr. Lee's reasoning in the above statement, and it
seems to be the foundation of most of his recent arguments.

If a formal system were to contain "only rules of derivation," what would
these rules act upon to form statements (theorems) about the system?
Rules alone in a formal system give you nothing.  For this reason, you
need a given set of statements (axioms) from which these rules can derive
other statements (theorems).  Since these axioms are not derived and are
necessary to the formal system, you must "believe" them to be true
while working within the system.

Since many scientific statements are derived within formal systems, to
believe these statements you must also believe other statements which
cannot be proved.

If Mr. Lee still believes that science asks us to take nothing on
"faith," then I am curious to know what flaws he finds in *my*
reasoning.

--
Neil Rhodes
afo@s.cc.purdue.edu

------------------------------

Date: 9 Sep 88 14:27:25 GMT
From: uhccux!lee@humu.nosc.mil  (Greg Lee)
Subject: Re: The Ignorant assumption

From article <3546@s.cc.purdue.edu>, by afo@s.cc.purdue.edu (Neil Rhodes):
" ...
" I have a problem with Mr. Lee's reasoning in the above statement, and it
" seems to be the foundation of most of his recent arguments.
"
" If a formal system were to contain "only rules of derivation," what would
" these rules act upon to form statements (theorems) about the system?
" Rules alone in a formal system give you nothing.  For this reason, you

They give you nothing but tautologies, at least.

" need a given set of statements (axioms) from which these rules can derive
" other statements (theorems).  Since these axioms are not derived and are
" necessary to the formal system, then you must "believe" them to be true
" while working within the system.

There are formalizations of logic that require axioms, but not all
do.  Gerhard Gentzen created systems that have no axioms.  For
instance:
        Suppose p (one can introduce provisional assumptions freely)
        Conclude p (one can repeat an assumption as a conclusion)
  So, p implies p (since p was concluded on the basis of the provisional
          assumption p, one can derive the implication)
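
For concreteness, the derivation above can also be rendered as a short
machine-checked proof.  The sketch below is only an illustration (in Lean,
with the theorem name chosen here); it is not part of Gentzen's systems:

    -- Axiom-free proof of "p implies p", mirroring the three steps above.
    theorem p_implies_p (p : Prop) : p → p := by
      intro hp    -- "Suppose p": introduce the provisional assumption
      exact hp    -- "Conclude p": repeat it; the implication is discharged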

" Since many scientific statements are derived within formal systems, to
" believe these statements you must also believe other statements which
" cannot be proved.

Perhaps that's so.  My example does not concern "scientific statements".
I was reacting to a statement that "formal systems" require assumptions.
They don't -- maybe formalized scientific systems do, in a sense,
but even there assumptions can be treated as provisional rather than
as axioms.  This is not to disagree with what Neil Rhodes said just
above.

As you will observe, a Gentzen system does involve assumptions, but
no specific assumption is given as part of the system.  That is, there
are no axioms.

" If Mr. Lee still believes that science asks us to take nothing on
""faith," then I am curious to know what flaws he finds in *my*
" reasoning.

I find no flaws.  If you are to have faith in scientific conclusions,
you must have faith in scientific assumptions.  But why have faith in
anything?  Why does "science ask us" to do this?  If you have a need to
believe in things, other than tautologies, I think you ought not to lay
this at the door of science.  It's a personal problem, which I think you
should try to get over.

                Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: 10 Sep 88 10:32:39 GMT
From: l.cc.purdue.edu!cik@k.cc.purdue.edu  (Herman Rubin)
Subject: Re: The Ignorant assumption

In a previous article, Greg Lee writes:
> From article <3546@s.cc.purdue.edu>, by afo@s.cc.purdue.edu (Neil Rhodes):
                        ....................
< " Rules alone in a formal system give you nothing.  For this reason, you

> They give you nothing but tautologies, at least.

< " need a given set of statements (axioms) from which these rules can derive
< " other statements (theorems).  Since these axioms are not derived and are
< " necessary to the formal system, then you must "believe" them to be true
< " while working within the system.

> There are formalizations of logic that require axioms, but not all
> do.  Gerhard Gentzen created systems that have no axioms.  For
> instance:
>       Suppose p (one can introduce provisional assumptions freely)
>       Conclude p (one can repeat an assumption as a conclusion)
>   So, p implies p (since p was concluded on the basis of the provisional
>         assumption p, one can derive the implication)

In a treatment of natural deduction mentioned above, one shows that

        The customary axioms and axiom schemes are derivable.

        The customary rules of derivation are valid.

        Any theorem provable by natural deduction can be proved by using
the customary axioms, axiom schemes, and rules of derivation.

However, starting with a set of axioms and no rules, nothing more can be
derived.  Thus we see that rules are stronger than axioms.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

------------------------------

Date: 10 Sep 88 15:17:45 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Rules vs axioms


     In theorem proving, previously proved theorems are often used, correctly,
as rewrite rules.  Free addition of new "axioms" to theorem proving systems
generally results in unsoundness.  As Boyer and Moore once wrote, "It is
one thing to use axioms about a concept known to mathematics for
a century.  It is quite another to write down axioms about an idea
invented yesterday."   See Boyer and Moore's "A Computational Logic"
for the constructivist's way of avoiding this problem.

     Attempts to use the theorem proving paradigm in less formal domains
were made in the late 70s and early 80s, but without notable success.

                                        John Nagle

------------------------------

Date: 11 Sep 88 00:07:50 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: The Ignorant assumption

>There are formalizations of logic that require axioms, but not all
>do.  Gerhard Gentzen created systems that have no axioms.  For
>instance:
>       Suppose p (one can introduce provisional assumptions freely)
>       Conclude p (one can repeat an assumption as a conclusion)
>  So, p implies p (since p was concluded on the basis of the provisional
>         assumption p, one can derive the implication)

Well, I see an assumption--it assumes the existence of a formal system.

------------------------------

Date: 11 Sep 88 00:26:44 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: The Ignorant assumption

>" Isn't adopting provisional assumptions an act of faith?
>
>Not really.  Consider the provisional assumption of a reductio ad
>absurdum argument.
>
>" ... I define faith as adopting assumptions without proof.
>
>It's an odd definition -- if we adopt it, we are led to the conclusion
>that all of us have faith and are therefore religious.
>
>" That's it. Any formal system requires such assumptions.
>
>Well, I would say that some natural deduction systems of logic have
>no assumptions -- only rules of derivation.  But you can probably
>find a definition of 'assumption' that makes what you say true.

I really was hoping people would be content with an intentionally imprecise
and informal discussion. If we want to be rigorous, I think it is important
to define a process. I will propose:

A process P is an ordered triple (S,M,Q).
S is an undefined set (of states).
M is a set of pdfs m:S->[0,1].
Q is a relation on MxM called transitions, denoted m->n.

P is probabilistic if for any m,s, 0<m(s)<1.
P is not probabilistic if for all m,s, m(s)=0 or m(s)=1.

P is deterministic if Q is a function.
P is nondeterministic if Q is not a function.

P is a formal system if S is denumerable and Q is effectively computable.

I think science and religion and CT could be explained as different constraints
on S, M, and Q. If the consensus is to move the discussion into a cryptoformal
notation, that's fine by me, since my education was in math anyway rather than
philosophy.
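
A minimal Python rendering of the triple just defined may make it concrete.
Everything below is illustrative: the class and member names are invented
here, and "probabilistic" is read as an existential condition (the
complement of the "not probabilistic" clause above):

    from typing import Callable, List, Set, Tuple

    State = int                            # stand-in for the undefined set S
    Measure = Callable[[State], float]     # m : S -> [0,1]

    class Process:
        def __init__(self, states, measures, transitions):
            self.states: Set[State] = states                 # S
            self.measures: List[Measure] = measures          # M
            self.transitions: Set[Tuple[int, int]] = transitions  # Q: m_i -> m_j

        def is_probabilistic(self) -> bool:
            # Some m and s take a value strictly between 0 and 1.
            return any(0.0 < m(s) < 1.0
                       for m in self.measures for s in self.states)

        def is_deterministic(self) -> bool:
            # Q is a function: each measure has at most one successor.
            sources = [i for i, _ in self.transitions]
            return len(sources) == len(set(sources))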

------------------------------

End of AIList Digest
********************

∂15-Sep-88  2246	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #85  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 15 Sep 88  22:46:17 PDT
Date: Fri 16 Sep 1988 01:22-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #85
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 16 Sep 1988       Volume 8 : Issue 85

 Seminars:

  Expert Systems for Agriculture Workshop
  Parallel Symbolic Computing Using Multilisp
  The Representation of Pronouns and Definite Noun Phrases

----------------------------------------------------------------------

Date: Tue, 6 Sep 88 10:43:34 CDT
From: dale@topaz.tamu.edu (A. Dale Whittaker)
Subject: Expert Systems for Agriculture Workshop


A first-of-its-kind workshop on the integration of expert systems with
conventional problem solving techniques for agricultural problems was
held in San Antonio, Texas on August 10 through 12, 1988.  This workshop was
supported by the American Association for Artificial Intelligence (AAAI)
and by the Knowledge Systems Area of the American Society of
Agricultural Engineering (ASAE).  The meeting was part of the AAAI
workshop series on applied topics and was focused toward agriculture.

                 Agriculture is an area of enormous potential for appli-
            cations of integrated knowledge-based/conventional technolo-
            gies. For example, excellent  databases  are  available  for
            information  ranging from historical weather data to indivi-
            dual dairy  cow  records.   Complex  simulations  have  been
            developed to describe phenomena ranging from plant growth to
            economic systems.  These investments are a valuable asset as
            knowledge sources for knowledge-based decision making.

                 The primary goals of this meeting were to:

            -    assess the state-of-the-art of integrated  systems  for
                 agriculture.

            -    determine what factors are  necessary  to  advance  the
                 state-of-the-art.

            -    expose research needs and opportunities for the future.

            -    form  an  interdisciplinary  core  of  researchers  for
                 future communication and collaboration.


          A  wide  variety   of   research   organizations   were
          represented at the meeting including:

                 Department of Entomology, Texas A&M University

                 Department of Entomology, University of Massachusetts

                 School of  Computer  Science,  Rochester  Institute  of
                 Technology

                 Department of Statistics, North Carolina State  Univer-
                 sity

                 Agricultural Engineering Department, Texas A&M  Univer-
                 sity

                 Honeywell-Bull, Knowledge Engineering Services

                 Texas Agricultural Experiment Station

                 United States Dept.  of  Agri.,  Agricultural  Research
                 Service (Texas, Arizona, Nebraska)

                 International Maize and Wheat Improvement Center, Cali,
                 Colombia

                 Animal Science Department, Oklahoma State University

                 Department of Agricultural and Applied Economics, Univ.
                 of Minn.

                 Department of Agricultural Economics, Univ. of Arkansas

                 Agricultural Engineering Department, Purdue University

                 Institute of Food and Agricultural Sciences,  Univ.  of
                 Fla.

         Topics presented included:


                 The State of the Art and Future of Symbolic and Numeric
                 Computation: Hardware Industry Viewpoint

                 EASY-MACS:  A  Knowledge-based  System  Supporting  IPM
                 Decision Making in Apples

                 Integrating a Knowledge-based Meat Grading System  with
                 a Voice-input Device

                 Expert System and Conventional Programming Methods  for
                 Small Farm Planning

                 A Blackboard Approach for  Integrating  Expert  Systems
                 with Conventional Problem Solving Techniques

                 The State of the Art and Future of Symbolic and Numeric
                 Computation: Software Industry Viewpoint

                 The Use of Expert System Techniques and Database  Files
                 to Produce Customized Decision Aid Software

                 COTFLEX:  An Integrated Expert and Database System  for
                 Decision Support in Texas Cotton Production

                 Use of an Expert System to Derive Pesticide Groundwater
                 Contamination Recommendations

                 An Expert  System  to  Elicit  Risk  Preferences:   The
                 Futility of Utility Revisited

                 Developing Integrated Decision  Support  Systems  Using
                 Prolog

                 Decision Analysis as a Tool for Integrating  Simulation
                 with  Expert  Systems  When  Risk  and  Uncertainty are
                 Important

                 Farm Application of GOSSYM/COMAX

                 Integrated Expert System for Culling Management of Beef
                 Cows

****************************************************************************

For more information concerning the workshop, contact:

A. Dale Whittaker
Agricultural Engineering Dept.
Texas A&M University
College Station, TX  77843-2117

dale@topaz.tamu.edu

(409)845-8379

------------------------------

Date: Tue, 06 Sep 88  16:35:08 EDT
From: "Peter Mager" <met9i7n%BUACCA.BITNET@MITVMA.MIT.EDU>
Subject: Parallel Symbolic Computing Using Multilisp

The following seminar may be of interest to AI list subscribers:
                         ACM GREATER BOSTON CHAPTER SIGPLAN

                             Thursday, September 8, 1988
                                       8 P.M.

                     Bolt Beranek and Newman, Newman auditorium
                              70 Fawcett St., Cambridge

                     Parallel Symbolic Computing Using Multilisp
                               Robert H. Halstead, Jr.
                           Laboratory for Computer Science
                                         MIT

          Multilisp  is  an  extension  of  the  Lisp  dialect  Scheme  with
        additional  operators  and   additional   semantics   for   parallel
        execution.  The  principal parallelism construct in Multilisp is the
        "future," which exhibits  some  features  of  both  eager  and  lazy
        evaluation.   Multilisp  has  been  implemented,  and  runs  on  the
        shared-memory   Concert   multiprocessor,  using  as  many   as   34
        processors.  The implementation uses interesting techniques for task
        scheduling and garbage collection.  The task scheduler helps control
        excessive  resource  utilization by means of  an  unfair  scheduling
        policy;  the  garbage  collector  uses  a  multiprocessor  algorithm
        modeled after the incremental garbage collector of Baker.

          Current work focuses on making Multilisp a more humane programming
        environment, on expanding the power of  Multilisp  to  express  task
        scheduling policies, and on  measuring  the  properties of Multilisp
        programs with the goal of designing  a  parallel  architecture  well
        tailored for efficient  Multilisp  execution.  The talk will briefly
        describe  Multilisp,  discuss  the  areas of current  activity,  and
        outline  the   direction  of  the  Multilisp  project  with  special
        attention to the areas of task scheduling and architecture design.
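
For readers who have not seen the construct, Multilisp's "future" is loosely
analogous to the futures found in several later languages.  The Python
fragment below (standard concurrent.futures, not Multilisp) is only a rough
analogy for the eager side of its behavior; the function name is invented:

    from concurrent.futures import ThreadPoolExecutor

    def slow_square(x):
        # stands in for an arbitrary expression wrapped in (future ...)
        return x * x

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(slow_square, 7)   # start the computation concurrently
        # ... other work can proceed here ...
        print(fut.result())                 # "touching" the future blocks until ready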

------------------------------

Date: Tue 13 Sep 88 15:58:37-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: The Representation of Pronouns and Definite Noun Phrases


                    BBN Science Development Program
                       AI Seminar Series Lecture

      THE REPRESENTATION OF PRONOUNS AND DEFINITE NOUN PHRASES IN
                             LOGICAL FORM

                             Mary P. Harper
                            Brown University
                         Computer Science Dept.
                    (MPH%cs.brown.edu@RELAY.CS.NET)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                    10:30 am, Thursday September 15


     Initially, I will discuss the representation of pronouns in logical
form.  Two factors influence the representation of pronouns.  The first 
factor is computational.  This factor imposes certain requirements on
the logical form representation of a pronoun.  For example, the initial 
representation of a pronoun in logical form should be derivable
before its antecedent is known.  The antecedent, when  determined,
should be specified in a way consistent with the initial representation of 
the pronoun.  The second factor is linguistic.  This factor requires
that the representation for a pronoun should be capable of expressing
the range of behaviors of a pronoun in English, especially in the domain
of verb phrase ellipsis.

     I will review past models of verb phrase ellipsis.  These models do 
not provide a representation of pronouns for computational purposes, and 
accordingly fail to meet our computational requirements.  Additionally, I will
show that these models fail to represent pronouns in a way which captures the
full range of behaviors of pronouns.  

    I will then propose a new representation for pronouns and show how this 
representation meets our computational requirements while providing a better 
model of pronouns in verb phrase ellipsis. 

    The representation of definite noun phrases will also be discussed.  As in 
the case of pronouns, there are two factors which influence this representation
(i.e. modeling definite behavior and obeying our computational guidelines).  
I will discuss several examples which argue for representing definites as 
functions in logical form before pronoun resolution is carried out.  I will
discuss the actual representation I chose, and illustrate its use with an
example.

------------------------------

End of AIList Digest
********************

∂18-Sep-88  1236	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #86  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Sep 88  12:35:58 PDT
Date: Sun 18 Sep 1988 15:19-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #86
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 19 Sep 1988       Volume 8 : Issue 86

 Why we got rhythm (5 messages)

----------------------------------------------------------------------

Date: Thu, 15 Sep 88 09:01:25 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: I got rhythm

In AIList Digest for Thursday, 15 Sep 1988 (Volume 8 : Issue 83),
we read the following from Phil Goetz (PGOETZ%LOYVAX.BITNET@MITVMA.MIT.EDU):

PG> Here's a question for anybody:  Why do we have rhythm?
  |
  | Picture yourself tapping your foot to the tune of the latest Top 40
  | trash hit. . . . Different actions require different processing
  | overhead.  So why, no matter what we do, do we perceive time as a
  | constant?  Why do we, in fact, have rhythm?  Do we have an internal
  | clock, or a "main loop" which takes a constant time to run?  Or do we
  | have an inadequate view of consciousness when we see it as a program?

The music has rhythm.  The foot tapper has synchrony.

There are lots of physiological processes that are rhythmical in nature,
and with which one can synchronize other behavior.  Some are ongoing,
notably heartbeat, breathing, and brain waves.  Others are easier to
start and stop, like walking or running.

However it's done, it seems straightforward for organisms to set up an
ad hoc oscillation, as in shivering, rubbing hands/paws together,
pacing.  For such activities it seems plausible that the governing
mechanisms are encapsulated and require little attention.  Minsky's
_Society of Mind_ is a good place to look.  (Open question how ad hoc
they are, perhaps they are in synchrony with preexisting rhythms.)

The musicians (and not just the toe tappers and other dancers) are also
synchronizing their actions with respect to existing rhythms, even if
only to a beat counted out by the leader of the band at the outset
(a-one, and a-two . . . ).  Where does the initiating musician get the
rhythm?  Heartbeat?  Imagining/ remembering oneself walking?  (That is
the meaning of `andante'.)  Imagining/remembering people dancing?
Certainly, once they have started, members of the band must synchronize
their playing with one another (ensemble).

What happens when the foot tapper is preoccupied with other thoughts?
The tapping doesn't slow down, it can't because synchrony is essential
to it.  Instead, it becomes sporadic.  The process itself gets dropped
and picked up again.  Just so, new musicians have to practice keeping up
a steady rhythm despite being distracted by other things (coordinating
fingers on the instrument, remembering the words in a song).  Their
novice performance is typically marked by interrupting and resuming the
given rhythm.  If a practicing pianist slows down in a passage where the
notes are small and close together, it is mostly to coordinate the
fingers physically, not to free up processing time.  (Preferred way is
to slow the whole piece down and play at a constant tempo.)

It seems to require a certain amount of attention to maintain a rhythmic
behavior, presumably above the threshold required to maintain synchrony.

But that's not much, as anyone can attest who has discovered her or his
body swaying or falling in step or tapping unawares during a
conversation.

Rhythm (cyclicity) is an environmental given.  Resonance (entrainment)
is also a given in physics, ecology, psychology.  Music and dance play
with these givens.

Seems to me that cyclicity and synchrony have survival value in that they
help make organisms predictable to one another.  Creatures that become
prey are typically those unable to maintain synchrony with their social
group because of sickness, etc.  Striking examples of synchrony include
flocks of birds, schools of fish.  We have recently heard of LIFE
emulations of flock behavior involving little processing overhead.

Perhaps the problem is not how do individuals synchronize in a flock,
but rather how does individuation happen out of the flock, and to what
extent.  It seems plausible that the experience of being an independent
ego that we humans cherish is an illusion.  To maintain such an
illusion, we ignore counterevidence.  A pretty good definition of
unconscious behavior.  (Say, did you know your foot was tapping?)

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Thu, 15 Sep 88 10:32:21 EDT
From: hayden@prism.TMC.COM (Hayden Ridenour)
Subject: rhythm

> So why, no matter what we do, do we perceive time as a constant?

Who does this?  Try keeping track of time when you're in a hurry to get
seven different things done at once and compare it to how slow time passes
when you're waiting for something.  The time passes at the same rate, but
we don't perceive it at the same constant rate.

As for why you can be tapping your foot to the rhythm of a song you're
listening to while you're doing other things:  you have the music as a
timing source.  You could think of it as an interrupt process keyed to
the rhythm of the music.

------------------------------

Date: Thu 15 Sep 1988 13:44 CDT
From: <UUCJEFF%ECNCDC.BITNET@MITVMA.MIT.EDU>
Subject: RE: I got rhythm

>
>It comes down to this:  Different actions require different processing
>overhead.  So why, no matter what we do, do we perceive time as a constant?
>Why do we, in fact, have rhythm?  Do we have an internal clock, or a
>"main loop" which takes a constant time to run?  Or do we have an inadequate
>view of consciousness when we see it as a program?

>Phil Goetz
>PGOETZ@LOYVAX.bitnet

Being both an international jazz recording artist and computer programmer,
with a growing background in AI (but not the AI religion),
I will join this jam session on this tune called "I GOT RHYTHM".

I really don't know how much I can answer Phil's questions,
but I can give some perspective on how a musician or drummer or someone
with good time views rhythm.

Start out by looking at some of the terminology surrounding rhythm: we say
something is "in a groove" or "in the pocket" or "it swings".
The first two imply precision, ease, and continuity, while "swing" implies
motion.  All three terms imply "autonomy", and that is so true.
When something "swings", the music goes by itself.
"Time" is another important word.  For musical genres which are known to
have advanced forms of rhythm, "TIME" is a very mutable characteristic.
By laying slightly back of the beat, you make the sound float in the air,
and by pushing ahead slightly, you can give music drive and fire.
To be able to master it and use it, I would tend to say it involves all
parts of the human psychophysical structure... You certainly excite your
nervous system, you need your reflexes to control the muscles that are tapping
the foot or playing the instrument, the emotions are involved, and on the
mental level, you need to concentrate and use your ability to image things.
Then of course there is the musical idea itself behind the whole thing.
Certainly if you are lazily tapping your foot and not paying too much
other attention, these other characteristics will take a lesser role.
But to the extent you are tapping good time, you must have the automatism
there.  Where this comes from, I don't know, but that is how you feel it.

Therefore, I would suggest that we do not perceive time as a constant.
We don't in ordinary life, and we don't in music.  If we are really
getting into a piece, we could listen for hours and it will not seem
like a long time.  I heard a live performance of Stravinsky's Le Sacre du
Printemps; even though it is 40 minutes long, it went by like 10 minutes.
There is the famous quote attributed to Einstein who, when asked to explain
relativity, said "If you are sitting next to a beautiful girl, hours go by
like minutes, but if you are sitting next to ..., minutes seem like hours".

Phil asked about programming this.  Since I have become disillusioned by
how generic most jazz today sounds (yes, Wynton, that's you), my musical
direction has been to make a One Man Digital Band with an Atari ST MIDIed
to my MIDI-capable trumpet and a variety of synthesizers and drum machines.

I have been programming a walking bass line. It reads my trumpet and
figures a bass line in real time.  If I change keys, it changes keys.
If I hold a note, it holds a note.  Etc...
The only way to program rhythm and make it sound "human" is to study humans,
identify the slight delays or anticipations, and try to come up with a
scheme so it is related to the appropriate information in the other music.
This itself is a creative act on the part of the human being.
There are no fixed schemes for determining what is appropriate; one has to
do the research and evaluate the results.  Whether you can get it down to
an adaptive filter is anyone's guess.
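
A minimal sketch of the kind of delay/anticipation scheme described above,
written here in Python purely for illustration (the offsets, names, and
numbers are invented and are not from the walking-bass program itself):

    import random

    def humanize(onsets_ms, lay_back_ms=12.0, jitter_ms=4.0):
        """Nudge quantized note-on times off the grid: lay every second
        beat slightly behind, and add a little random jitter to each note."""
        out = []
        for i, t in enumerate(onsets_ms):
            behind = lay_back_ms if i % 2 == 1 else 0.0
            out.append(t + behind + random.uniform(-jitter_ms, jitter_ms))
        return out

    # A bar of straight eighth notes at 120 bpm (250 ms apart):
    grid = [i * 250.0 for i in range(8)]
    print(humanize(grid))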

At present there are some people working in this area, and there are even
some commercial products out that are based on some of the concepts I have
presented.  There are devices which are designed to take a perfectly timed,
computer-generated drum sync track and massage the pulses so it will give
a human feel.  On the unit are switches to make it sound like a
60s Motown feel, a 70s L.A. sound, Brazilian, and on and on and on.
I think I have said enough; I hope that answers Phil's question.  If not,
I hope that this was interesting otherwise.  If not, solid..........

Jeff Beer, UUCJEFF@ECNCDC.BITNET... Chicago Ill....
"I'll play it and tell you what it is later"... Miles Davis

------------------------------

Date: Fri, 16 Sep 88 03:07:35 EDT
From: Joseph.Tebelskis@F.GP.CS.CMU.EDU
Subject: Re: I got rhythm

In V8 #83, Phil Goetz asks:

> It comes down to this:  Different actions require different processing
> overhead.  So why, no matter what we do, do we perceive time as a constant?
> Why do we, in fact, have rhythm?  Do we have an internal clock, or a
> "main loop" which takes a constant time to run?  Or do we have an inadequate
> view of consciousness when we see it as a program?

First you need to realize that the computer is a poor metaphor for the brain.
Modern computers are organized around a single CPU through which all the
computations must flow, while memory plays a passive and underutilized
role -- hence the CPU is called the "bottleneck" of modern computers.
As you noted, multitasking slows down individual tasks on such machines.
In contrast, the brain has a hundred billion processors (neurons), and its
vast memory is active rather than passive.  Its various modules operate in
parallel, so they don't slow each other down; this is why we can perceive
time as a constant no matter what we're doing.  Also, the brain does not
execute a high-level "program" of instructions: its operation is guided by
autonomous physical processes at the neural level.  From this neural level
emerge all the diverse cognitive phenomena, including rational thought,
emotions, and consciousness.  However, the only emergent phenomenon which
maps well onto our computer programming paradigm is rational thought -- so
that's what symbolic AI has always concentrated on.  The emergent phenomenon
of consciousness is "made of the same stuff" at a low level, but it just
cannot be approximated satisfactorily at the symbolic (programming) level.

With regard to rhythm and parallelism, I currently visualize the brain
as an extremely complex "resonance chamber".  At various scales and
physical locations within the brain, different subnetworks can be resonating
in different ways.  The simplest kind of resonance would be a cyclical
reverberation of activity at a characteristic frequency; such a pulsing
signal could control your foot as you tap out a rhythm.  More complex types
of resonance may simultaneously be in operation elsewhere in the brain,
controlling unrelated cognitive tasks such as doing a math problem.  I
suspect that subnetworks of the brain use complex resonance patterns to
symbolically represent brief progressions of events, such as perceptual
sequences, fast motor procedures, and internal state transitions.  Such
temporally encoded symbols, recursively telescoped together in the
"resonance chamber" of the brain, may account for the natural emergence of
a hierarchy of symbolic representations for event progressions spanning
arbitrary time scales.  It is also conceivable that resonant representations
avoid interfering with each other in the brain just as physical waves do,
by superposition.

  Joe Tebelskis, connectionist
  (jmt@f.gp.cs.cmu.edu)

------------------------------

Date: Fri, 16 Sep 88 14:59:08 bst
From: Bert Hutchings <bert%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Re: I got rhythm

In article <19880915011053.7.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU> Phil Goetz
asked "Why do we have rhythm?  . . .  Why do we, in fact, have rhythm?"

Most of us have, but...  My wife taught music to young schoolchildren and
found an occasional exception.  We know one rhythm-deaf adult too, unable
to keep a beat, or to distinguish a regular one from a slightly irregular
one.  I estimate that between 1 in 50 and 1 in 200 people have this condition.

------------------------------

End of AIList Digest
********************

∂18-Sep-88  1452	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #87  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 18 Sep 88  14:51:55 PDT
Date: Sun 18 Sep 1988 15:25-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #87
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 19 Sep 1988       Volume 8 : Issue 87

 Philosophy:

  The Uncertainty Principle
  State and change/continuous actions (2 messages)
  Why?

----------------------------------------------------------------------

Date: Thu, 15 Sep 88 14:59:38 edt
From: bph%buengc.bu.edu@bu-it.BU.EDU (Blair P. Houghton)
Subject: Re: The Uncertainty Principle.

>In Vol 8 # 78 Blair Houghton cries out:-
>> I do wish people would keep *recursion* and *perturbation* straight
>> and different from the Uncertainty Principle.

And Gordon Joly Whines Back:
>Perhaps... But what is the *perturbation* in question? "Observation"?

By "recursion," I actually meant feedback, which was the process
to which Heisenberg-o-morphic uncertainty was being applied in
order to invoke chaos in artificially intelligent systems.

Lessee if I can verbosify the intuitions:

Uncertainty exists because one can not determine the state of
a particle system unless one has:
        a. infinite time to make the measurement with zero energy; or,
        b. infinite energy to make the measurement in zero time.

(it's usually equivalently described as:
"determining the momentum requires a long distance over which to
observe, hence the particle's position, which can be anywhere along
that distance, is not known; and, determining the position requires
a very short distance for observation, which causes the error of
the momentum measurement to increase.")
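
For reference, the standard quantitative statement of the tradeoff being
paraphrased here (position/momentum, plus the time/energy form corresponding
to points a and b above) is:

    \[
      \Delta x \,\Delta p \;\ge\; \tfrac{\hbar}{2},
      \qquad
      \Delta E \,\Delta t \;\ge\; \tfrac{\hbar}{2}
    \]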

This is manifest in the fact that adding energy to the
system in order to make an understandable observation will necessarily
change the state of the system.

This DOES NOT mean that observing the system creates uncertainty.

Such a thing is equivalent to saying that observing the perfectly flat
surface of the ocean causes waves to form, when in fact it is the
observer's boat's bobbing in the water that causes those waves.

THIS is the "perturbation in question."

>Blair also observes
>> Electrons "know" where they are and where they are going.
>
>And I know where I'm coming from too, Man!
>
>On page 55 (of the American edition) of "A Brief History of Time",
>Professor Stephen Hawking says

And I'm s'posed to argue?  No Way.

>``The uncertainty principle had profound implications for the way in
>which we view the world... The uncertainty principle signaled an
>end to Laplace's dream of a theory of science, a model of the
>universe that could be completely deterministic: one certainly
>cannot predict future events exactly if one cannot even measure
>the present state of the universe precisely!''
>
>And what of "chaos"?

Actually, it means we have to keep our error-bars polished and
ready.  I wasn't ready for infinite-precision laboratory
equipment, anyway.

Theoretically, it means our theory has to be treated the same
way we treat experimental data; we could even begin to consider
current theory to be the data of logical deduction experiments,
which is, I believe, a view consistent with Einstein's view of
mathematics as an imprecise method for describing nature
at the incept.

                                --Blair
                                  "It's always a nice feeling
                                   to be consistent with Einstein."

------------------------------

Date: 16 Sep 88 21:25:29 GMT
From: uflorida!fishwick@gatech.edu  (Paul Fishwick)
Subject: state and change/continuous actions


An inquiry into concepts of "state" and "change":

In browsing through Genesereth's and Nilsson's recent book "Logical
Foundations of Artificial Intelligence," I find it interesting to
compare and contrast the concepts described in Chapter 11 - "State
and Change" with state/change concepts defined within systems
theory and simulation modeling. The authors make the following statement:
"Insufficient attention has been paid to the problem of continuous
actions." Now, a question that immediately comes to mind is "What problem?"
Perhaps, they are referring to the problem of defining semantics for
"how humans think about continuous actions." This leads to some
interesting questions:

 1) Clearly, the vast literature on math modeling is indicative of
    "how humans think about continuous actions." This knowledge is
    in a compiled form, and use of this knowledge has served
    science in an untold number of circumstances.

 2) If commonsense knowledge representation is the issue then we
    might want to ask a fundamental question "Why do we care about
    representing commonsense knowledge about continuous actions?"
    I can see 2 possible goals: One goal is to validate some given
    theory of commonsense "continuous action" knowledge against
    actual psychological data. Then we could say, for instance, that
    Theory XYZ reflects human thought and is therefore useful.
    I don't think it would be useful to increase our knowledge of
    mechanics or fluidics, for instance, but perhaps a psycho-therapist
    might find this knowledge useful. A second goal is to obtain
    a better model of the continuous action (this reflects the
    "AI is an approach to problem solving" method where one can
    study "how Johnny reasons when balls are bounced" and obtain
    a scientifically superior model regardless of its actual
    psychological validity). Has anyone seen a commonsense model
    of continuous action that is an improvement over systems of
    differential equations (of the kind sketched just after this
    list), graph based queueing models (and other assorted formal
    languages for systems and simulation)?
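
For contrast with the commonsense-model question above, here is the kind of
"conventional" continuous-action model being referred to: a bounced ball as
a small system of differential equations stepped with Euler integration.
The Python sketch and its constants are purely illustrative:

    # dy/dt = v, dv/dt = -g, with a simple restitution rule at the floor.
    def simulate_bounce(y=1.0, v=0.0, g=9.81, e=0.8, dt=0.001, t_end=3.0):
        t, trace = 0.0, []
        while t < t_end:
            v -= g * dt              # Euler step for velocity
            y += v * dt              # Euler step for position
            if y < 0.0:              # floor contact: reflect and damp
                y, v = 0.0, -e * v
            trace.append((t, y))
            t += dt
        return trace

    # e.g., the (time, height) sample after one second of simulated time:
    print(simulate_bounce()[1000])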

Obviously, I'm trying to spark some inter-group discussion and so I hope
that any responses will post to both the AI group (comp.ai) AND
the SIMULATION group (comp.simulation). In addition (sci.math) and
(comp.theory.dynamic-sys) may be appropriate.

I believe that Genesereth and Nilsson are quite correct that "reasoning
about time and continuous actions" is an important issue. However, an
even more important issue revolves around people discussing
concepts about "state," "time," and "change" by crossing disciplines.
Any thoughts?

-paul

+------------------------------------------------------------------------+
| Prof. Paul A. Fishwick.... INTERNET: fishwick@bikini.cis.ufl.edu       |
| Dept. of Computer Science. UUCP: gatech!uflorida!fishwick              |
| Univ. of Florida.......... PHONE: (904)-335-8036                       |
| Bldg. CSE, Room 301....... FAX is available                            |
| Gainesville, FL 32611.....                                             |
+------------------------------------------------------------------------+

------------------------------

Date: 17 Sep 88 16:14:13 GMT
From: uhccux!lee@humu.nosc.mil  (Greg Lee)
Subject: Re: state and change/continuous actions

From a previous article by fishwick@uflorida.cis.ufl.EDU (Paul Fishwick):
"
"  2) If commonsense knowledge representation is the issue then we
"     might want to ask a fundamental question "Why do we care about
"     representing commonsense knowledge about continuous actions?"
"     I can see 2 possible goals: One goal is to validate some given
" ...

To reason about continuous actions where the physics hasn't been
worked out or is computationally infeasible.  How about that as a
third goal?

" Obviously, I'm trying to spark some inter-group discussion and so I hope
" that any responses will post to both the AI group (comp.ai) AND
" the SIMULATION group (comp.simulation). In addition (sci.math) and
" (comp.theory.dynamic-sys) may be appropriate.

Tsk, tsk.  Left out sci.lang.  The way people think about these
things is reflected in the tense/aspect systems of natural languages.

" I believe that Genesereth and Nilsson are quite correct that "reasoning
" about time and continous actions" is an important issue. However, an
" even more important issue revolves around people discussing
" concepts about "state," "time," and "change" by crossing disciplines.
" Any thoughts?

In English, predicates which can occur with Agent subjects, those
capable of deliberate action, can also occur in the progressive
aspect, expressing continuous action.  This suggests some
connection between intent and continuity whose nature is not
obvious, to me anyway.

                Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: 17 Sep 88 23:40:47 GMT
From: markh@csd4.milw.wisc.edu  (Mark William Hopkins)
Subject: Why?


     Any time that one sets out to deal with a major problem, there is usually
some kind of end-state that is desired, an IDEAL if you will.  It's a necessary
component of the problem solving task; so much so that if you were to lack the
goals and direction you would just end up floundering and meandering -- and
that's what is often (wrongly) perceived as doing philosophy.

     So this brings up the question on my mind:

          Why does anyone want artificial intelligence?

What is it that you're seeking to gain by it?  What is it that you would have
an intelligent machine do?  And when you answer these questions then answer how
and why considering AI seems more urgent today than ever before.

     Link what I've just said in the first two paragraphs.  You'll see that it
is a recursive problem.  It applies both to AI and to you in the quest of
seeking AI.  If you want to successfully deal with the problem of AI, then you
are going to have to know just what it is that you are trying to do.  Human
curiosity (about the nature of our mind) is one thing, but even that has to be
directed toward a pressing need -- so the question remains just what the
pressing need is.  To say that we merely desire to understand the mind is just
a way of rephrasing the question -- it is not an answer.

     I asked the question and raised the issue, so probably I should try to
answer it too.   The first thing that comes to mind is our current situation
as regards science -- its increasing specialization.  Most people will agree
that this is a trend that has gone way too far ... to the extent that we may
have sacrificed global perspective and competence in our specialists; and
further that it is a trend that needs to be reversed.  Yet fewer would dare
to suggest that we can overcome the problem.  I dare.  One of the most
important functions of AI will be to amplify our own intelligence.  In fact,
I believe that time is upon us that this symbiotic relation between human and
potentially intelligent machine is triggering an evolutionary change in our
species as far as its cognitive abilities are concerned.
      Seen this way, we'll realise that the axiom still holds: THE COMPUTER
IS A TOOL.  It's an Intelligent Tool -- but a tool nevertheless.  Nowadays, for
instance, we credit ourselves with the ability to go at high speeds (60 mph in a
car) even though it is really the machine that is doing it for us.  Likewise it
is going to be with intelligent tools.
     So in this way, the problem with the information explosion is going to be
solved.  Slowly, it is dawning on us that the very need for specialization is
becoming obsolete.

     A major determinant of how fragmented science is, is how much communication
takes place.  I submit here that the information explosion is for the most part
an explosion in redundancy brought about by a communication bottleneck.  Our
goal is then to find a way to open up this bottleneck.  It is here, again, that
AI (especially in relation to intelligent data bases) may come to the rescue.

     Those are the Why's as I see them.

------------------------------

End of AIList Digest
********************

∂19-Sep-88  2123	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #88  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Sep 88  21:23:20 PDT
Date: Mon 19 Sep 1988 23:40-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #88
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 20 Sep 1988      Volume 8 : Issue 88

 Queries:

  Sierra OPS5
  Systems Engineering Level in KBS
  Genetic Learning Algorithms
  Model-based Reasoning
  Spatial Reasoning
  Intelligent Tutoring Systems
  Hybrid Knowledge Representation (MRS, KLONE, KRYPTON)

----------------------------------------------------------------------

Date: 16 Sep 88 18:40:16 GMT
From: ai!neves@speedy.wisc.edu  (David M. Neves)
Subject: Sierra OPS5


I know a student who wants to use an OPS5 for the IBM PC.  Sierra OPS5
is one possibility.  From its ad it looks great.  It is complete and
accepts external C functions.  Does anyone have actual experience with
it?  Any limitations that are not advertised?  Is it appropriate for
heavy debugging (i.e. making frequent changes to literalizes, rules,
memory on a large production system)?
-thanks, david

;David Neves, Computer Sciences Department, University of Wisconsin-Madison
;Usenet:  {rutgers,ucbvax}!uwvax!neves
;Arpanet: neves@cs.wisc.edu

------------------------------

Date: 16 Sep 88 19:44:50 GMT
From: ece-csc!ncrcae!gollum!jdavis@ncsuvx.ncsu.edu  (James P. Davis)
Subject: Systems Engineering Level in KBS

I am looking for any pointers to references regarding the "systems
engineering level" of knowledge as defined for knowledge base
management systems (KBMS). The only reference I have is in
Brodie et al. *On Knowledge Based Management Systems*, where
Brachman and Levesque discuss the various levels associated with
knowledge representation and knowledge systems (knowledge level,
symbol level, organization level). They mention this Systems
Engineering level in passing, but do not fully define it.

Does anyone have any references, or is anyone doing work in this
area of further defining these "levels" (Ron and Hector, are you
out there)?

The nature of my research in this area involves the definition of
a "level" which allows structure and organization to be imposed
on the Universe of Discourse (which doesn't conform to Newell's
Knowledge Level, which deals specifically with what can be stated
or implied about the world on a functional basis independent of
organization or implementation). However, I am looking at this
imposition of organization independent of how the knowledge
schema is implemented or manipulated to carry out rational behavior
(which doesn't conform to Newell's Symbol Level either, which
deals with issues of how rational behavior is realized on a
machine, addressing such issues as how to exploit the syntactic
properties of a representation technique to effectively produce
rational actions, e.g., inheritance in frame systems).

The perspective that I am approaching this from is based on the
ideas from Database and data modeling involving the construction
of an "enterprise model" of a domain, which is primarily a
structural description (in some formalism such as any number of
deviations of the E-R model which have been researched) that
captures domain objects, relationships, and constraints according
to some set of model-dependent wff's. This description is a declarative
representation of the UoD. What I am looking at is the correlation
between this process in database/data modeling and constructing
knowledge schemas for a domain in AI. The goal is to define an
architecture for the tight coupling of database and knowledge based
systems as KBMS'.

It seems that some of the work that I am doing at this level
between the Knowledge and Symbol Levels (which I call the "Enterprise
Level") may be what has been termed the "Systems Engineering" Level.
Is this Systems Engineering Level defined sufficiently? Is anyone
working on it? Are there references? Anyone want to correspond
regarding these levels?

Any and all responses are appreciated.


jdavis@Gollum.Columbia.NCR.COM

Jim Davis
Advanced Systems Development
NCR Corporation

------------------------------

Date: 18 Sep 88 13:49:01 GMT
From: thefool@athena.mit.edu  (Michael A. de la Maza)
Subject: Genetic Learning Algorithms


   I am currently working on a genetic learning algorithm (gla) engine that
draws inferences from a horse racing database (the results could be
enRICHening).  Has anyone compiled a bibliography of gla articles/books?
   If I'm inundated with responses I'll post a summary here.


Michael A. de la Maza                 thefool@athena.mit.edu
Query: What is the answer to this question?



[There is a separate list covering genetic algorithms called GA-LIST.
Send subscription requests to gref@NRL-AIC.ARPA.  However, AIList will
continue to carry occasional information ...

        In addition, offutt@caen.engin.umich.edu (Daniel M. Offutt) is
offering a GA function optimization package.  Contact him for details.

                        - nick]

------------------------------

Date: 19 Sep 88 00:55:24 GMT
From: ucsdhub!hp-sdd!ncr-sd!ncrcae!gollum!jdavis@ucsd.edu  (James P.
      Davis)
Subject: Model-based Reasoning

I am looking for some good references on the subject of Model-based
reasoning (MBR). I am also interested in finding out who is doing
work/research in this area, and what domains are being investigated.
Nobody (Morgan Kaufmann, for example) seems to have put out any special
compendiums in this area yet.  Any of you out there?

Specifically, I am looking at the area of using a modeling framework,
which allows the structure and behavior for certain classes of
domains to be expressed in some declarative form, to drive the
reasoning process. My understanding of MBR is that it is an approach
to exploiting the inherent structure and constraints of a system
or enterprise to guide the process of reasoning about problems in
the given domain. I am developing an "analogical" representation
which allows the expression of domain semantics in terms of
structure and constraint declaration constructs based on the syntactic
construction of wff's in the modeling technique. The domain is
information systems design. In theory, by developing a self-describing
modeling formalism, in which the information systems design activity
can take place, the nature of the solution space can be constrained
such that only those solutions which adhere to the semantics of the
formalism itself (in which are expressed the semantics of the domain
application) are relevant.

What's happening in MBR? How does it relate to "reasoning from first
principles"?

Any and all responses are appreciated. I can summarize to the net if
requested.

Jim Davis
Advanced Systems Development
NCR Corporation
jdavis@Gollum.Columbia.NCR.COM

------------------------------

Date: 19 Sep 88  7:59 -0100
From: unido!lan.informatik.tu-muenchen.dbp.de!prassler@uunet.UU.NET
Reply-to: unido!lan!prassler@uunet.UU.NET
Subject: Spatial Reasoning


To people working or interested in the field of representation of large-scale
space and spatial reasoning !!

I'm a member of an AI and Cognitive Science group at the Technical University
of Munich, West-Germany, working on connectionist models for spatial reasoning
processes. I'm currently planning a research visit to the United States to get
to know, and maybe to work a few months with, people working on similar topics.
Is anybody out there interested in such a collaboration?  I expect to be
financially independent through a six-month scholarship from the German
Academic Exchange Service.

Some personal data:

Name:
  Erwin Prassler
Education:
  Technical University of Munich
  Diploma in Computer Science, 1985
Address:
  Department of Computer Science
  Technical University of Munich
  Arcisstr.21
  D-8000 Munich 2
  West-Germany
e-mail:
  unido!tumult!prassler@uunet.UU.NET
interests:
  spatial reasoning, connectionist models, sailing

------------------------------

Date: Mon, 19 Sep 88 09:46:53 -0800
From: Rika Yoshii <ryoshii@nrtc.northrop.com>
Subject: Intelligent Tutoring Systems

    Could anyone send me a list of books and articles on
    Intelligent Tutoring Systems used to teach
    languages such as English, Spanish, Japanese, etc.?

    Also, is anyone aware of a system (besides TEIRESIAS,
    KLAUS) which allows an expert to use English in adding
    RULES to expert systems?

    Please send your reply to
           ryoshii@nrtc.northrop.com

    Thank you.
     Rika Yoshii

------------------------------

Date: Mon, 19 Sep 88 15:25
From: Fabrizio Sebastiani <FABRIZIO%ICNUCEVM.BITNET@MITVMA.MIT.EDU>
Subject: Hybrid Knowledge Representation (MRS, KLONE, KRYPTON)

I am looking for papers on hybrid knowledge representation (MRS, KLONE,
KRYPTON and the like); I am pretty familiar with the "KLONE world"
literature (at least, with what has gone on up to 1985), but don't know
much about: 1) what has been written past that date; 2) what
has been written AGAINST this approach. Can anyone provide references to
relevant papers on the subject?  Is anyone interested in discussing
the issue?  Thanks, Fabrizio Sebastiani

------------------------------

End of AIList Digest
********************

∂26-Sep-88  1023	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #89  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Sep 88  10:23:04 PDT
Date: Mon 26 Sep 1988 00:01-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #89
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 26 Sep 1988       Volume 8 : Issue 89

 Announcements:

  6th International Workshop on Machine Learning
  Symposium on Computational Approaches to Scientific Discovery
  Workshop on Evaluation of Natural Language Processing Systems
  Canadian AI Table of Contents, October 1988

----------------------------------------------------------------------

Date: 12 Sep 88 20:20:44 GMT
From: segre@cu-arpa.cs.cornell.edu  (Alberto M. Segre)
Subject: 6th International Workshop on Machine Learning


                                  Call for Topics:

                  Sixth International Workshop on Machine Learning

                                 Cornell University
                              Ithaca, New York; U.S.A.

                               June 29 - July 1, 1989



               The Sixth International Workshop on Machine Learning will be
          held  at  Cornell  University, from June 29 through July 1, 1989.
          The workshop will be divided into four to six disjoint  sessions,
          each  focusing on a different theme. Each session will be chaired
          by a different member of the machine learning community, and will
          consist  of  30  to  50  participants  invited  on  the  basis of
          abstracts submitted to the session chair. Plenary  sessions  will
          be held for invited talks.

               People interested in chairing one  of  the  sessions  should
          submit  a  one-page  proposal,  stating the topic of the session,
          sites  at  which  research  is  currently  done  on  this  topic,
          estimated  attendance,  format  of  the  session,  and  their own
          qualifications as session chair.  Proposals should  be  submitted
          by November 1, 1988 to the program chair:

              Alberto Segre
              Department of Computer Science
              Cornell University, Upson Hall
              Ithaca, NY 14853-7501  USA

              Telephone: (607) 255-9196


          Electronic mail should be addressed to  "ml89@cs.cornell.edu"  or
          "segre@gvax.cs.cornell.edu".    The   organizing  committee  will
          evaluate proposals on the basis of  perceived  demand  and  their
          potential  impact on the field. Topics will be announced by early
          1989, at which time a call for papers  will  be  issued.  Partial
          travel support may be available for some participants.

------------------------------

Date: Tue, 13 Sep 88 01:11 PDT
From: Shrager.pa@Xerox.COM
Subject: Symposium on Computational Approaches to Scientific Discovery


Computational Approaches to Scientific Discovery

Stanford University; January 7-8, 1989

Scientific discovery stands as a major open issue in Cognitive Science.
What are the conditions for discovery and what knowledge is brought to
bear?  What roles are played by experimentation, observation,
instrumentation, and culture in the discovery process?  How are
important discoveries noticed and how are they transmitted?

Recently, significant progress has been made in the computational
understanding of scientific discovery. In order to bring together the
principal researchers in this field, and so move closer to a unified
theory of scientific reasoning and discovery, a symposium on this topic
will be held at Stanford University on January 7 (Saturday) and January
8 (Sunday), 1989. The symposium will cross several methodological
boundaries, including Cognitive Psychology, Artificial Intelligence, and
Philosophy of Science, and will cover a variety of scientific domains.

Presentations will be through invitation, but to ensure participation by
researchers without `contacts' and from a broad range of related fields,
a small number of additional attendees will be invited. The ideal
participant will have developed and tested (by implementation,
experiment, etc.) a computational theory of scientific reasoning,
preferably emphasizing some aspect of discovery.  These might include:

* Mechanisms of theory formation
* Prediction and causal reasoning
* Experimentation and instrument construction
* The organization of scientific information
* Sociological and cultural issues
* Unified models of discovery

Applicants should send a short research summary (**maximum** of two
pages) describing their research efforts and interests in scientific
reasoning or discovery to the program co-chair (see notes below) by:

                              >> OCTOBER 15, 1988 <<

Program co-Chairs:

   Jeff Shrager
        Xerox PARC
        3333 Coyote Hill Rd.
        Palo Alto, CA
        94304

        Shrager@Xerox.com
        Phone: 415/494-4338

    Pat Langley
        University of California at Irvine

        langley@CIP.UCI.EDU

[Please direct queries and applications to Jeff Shrager.  Applications
*must* be submitted in HARDCOPY via U.S.Mail (or in person).  Other
queries may be made by netmail, telephone, in writing, or in person.]

------------------------------

Date: 14 Sep 88 15:56:32 GMT
From: rutgers!prc.unisys.com!finin@ucsd.edu (Tim Finin)
Reply-to: rutgers!prc.unisys.com!finin@ucsd.edu (Tim Finin)
Subject: Workshop on Evaluation of Natural Language Processing Systems


                        CALL FOR PARTICIPATION

                             Workshop on
          Evaluation of Natural Language Processing Systems

                          December 8-9, 1988
                        Wayne Hotel, Wayne, PA
                       (Suburban Philadelphia)

There has been much recent interest in the difficult problem of
evaluating natural language systems.  With the exception of natural
language interfaces there are few working systems in existence, and
they tend to be concerned with very different tasks and use equally
different techniques.  There has been little agreement in the field
about training sets and test sets, or about clearly defined subsets of
problems that constitute standards for different levels of
performance.  Even those groups that have attempted a measure of
self-evaluation have often been reduced to discussing a system's
performance in isolation - comparing its current performance to its
previous performance rather than to another system.  As this
technology begins to move slowly into the marketplace, the need for
useful evaluation techniques is becoming more and more obvious.  The
speech community has made some recent progress toward developing new
methods of evaluation, and it is time that the natural language
community followed suit.  This is much more easily said than done and
will require a concentrated effort on the part of the field.

There are certain premises that should underlie any discussion
of evaluation of natural language processing systems:

   o It should be possible to discuss system evaluation in general without
     having to state whether the purpose of the system is
     "question-answering" or "text processing."  Evaluating a system
     requires the definition of an application task in terms of I/O pairs
     which are equally applicable to question-answering, text processing,
     or generation.

   o There are two basic types of evaluation: a) "black box evaluation"
     which measures system performance on a given task in terms of
     well-defined I/O pairs; and b) "glass box evaluation" which examines
     the internal workings of the system.  For example, glass box
     performance evaluation for a system that is supposed to perform
     semantic and pragmatic analysis should include the examination of
     predicate-argument relations, referents, and temporal and causal
     relations.
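
As a minimal illustration of the black-box notion, the sketch below (modern
Python; the toy system, the test pairs, and the exact-match scorer are all
invented for the example) scores a system purely on well-defined I/O pairs,
without looking inside it; a glass-box evaluation would instead inspect
intermediate structures such as predicate-argument relations.

    # Minimal black-box evaluation sketch: score a hypothetical NL system
    # against a set of well-defined input/output pairs.  The system, the
    # test pairs, and the exact-match scorer are illustrative assumptions.

    def toy_nl_system(question: str) -> str:
        """Stand-in for the system under test (black box: only I/O is visible)."""
        canned = {"What is the capital of France?": "Paris"}
        return canned.get(question, "I don't know")

    test_pairs = [
        ("What is the capital of France?", "Paris"),
        ("Who wrote Hamlet?", "Shakespeare"),
    ]

    def black_box_score(system, pairs):
        """Fraction of I/O pairs whose system output matches the reference output."""
        correct = sum(1 for q, expected in pairs if system(q).strip() == expected)
        return correct / len(pairs)

    print("black-box accuracy:", black_box_score(toy_nl_system, test_pairs))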

Given these premises, the workshop will be structured around the
following three sessions: (1) Defining "glass box evaluation" and
"black box evaluation"; (2) Defining criteria for "black box
evaluation" (A Proposal for Establishing Task-Oriented Benchmarks for
NLP Systems; Session Chair: Beth Sundheim); (3) Defining criteria for
"glass box evaluation" (Session Chair: Jerry Hobbs).  Several
different types of systems will be discussed, including
question-answering systems, text processing systems and generation
systems.

Researchers interested in participating should submit a short (250-500
word) description of their experience and interests, and expected
contributions to the workshop.  In particular, if they have been
involved in any evaluation efforts that they would like to report on,
they should include a short abstract (500-1000 words) as well.  The
number of participants at the workshop must be restricted due to
limited room size.  The descriptions and abstracts will be reviewed by
the following committee: Martha Palmer (Unisys), Beth Sundheim (NOSC),
Ed Hovy (ISI), Tim Finin (Unisys), and Lynn Bates (BBN).

This material should arrive at the address given below no later than
October 1st.  Responses to all who submit abstracts or descriptions
will be sent by November 1st.

  Martha Palmer
  Unisys Paoli Research Center
  PO Box 517
  Paoli, PA 19301
  palmer@prc.unisys.com
  215-648-7228
--
  Tim Finin                     finin@prc.unisys.com
  Paoli Research Center         ..!{psuvax1,sdcrdcf,cbmvax}!burdvax!finin
  Unisys                        215-648-7446 (office)  215-386-1749 (home)
  PO Box 517, Paoli PA 19301    215-648-7412 (fax)

------------------------------

Date: Mon, 19 Sep 88 12:05:02 EDT
From: Christopher Prince <mcgill-vision!arcsun!chris@EDDIE.MIT.EDU>
Subject: Canadian AI Table of Contents, October 1988


Table of contents from
Canadian Artificial Intelligence, No. 17, October 1988
a publication of the CSCSI (Canadian Society for Computational Studies
of Intelligence).

(Deadline for Next Issue: November 15, 1988)


        Communications
3       Executive Notes
8       Notes from Members

        AI News
9       Short Takes
11      New Products

        Feature Articles
15      AI and Canada's Participation in Space Station
        Connie Bryson
19      Neural Networks:  An Engineer's Perspective
        Casimir Klimasauskas

        Research Reports
25      Research in the Knowledge Sciences at the University of Calgary
        Ian Witten and Brian Gaines
30      AI Research and Development at CompEngServ, of the CEMTECH Group Ltd.
        Archie Bowen
32      AI Research at Bell-Northern Research
        Dick Peacocke

        Conference Reports
39      CIAR Graduate Student Workshop on Knowledge Representation
        Howard Hamilton and Sharon Hamilton
46      CSCSI '88 Conference
        Howard Hamilton
49      26th Annual Meeting of the Association for Computational Linguistics
        Dan Lyons and Mark Ryan
54      Intelligent Tutoring Systems International Conference - 88
        members of ARIES lab at U. of Saskatchewan

        Publications
61      Book Reviews
67      Books Received
67      Technical Reports

69      Conference Announcements

-----------------------------------------------------------------------------

Please send any mail to the following addresses, and not to me:

Content and Submissions:
-----------------------

Canadian AI Magazine
C/O Alberta Research Council,
6815 8th Street NE, 3rd Floor
Calgary, Alberta, CANADA T2E 7H7
(403) 297-2600

UUCP: cscsi%arcsun.uucp%ubc.csnet@relay.cs.net
   or cscsi%noah.arc.cdn@alberta.uucp
CDNnet: cscsi@noah.arc.cdn


Subscription Requests:
---------------------

CIPS
243 College Street (5th floor),
Toronto, Ontario, CANADA
M5T 2Y1

------------------------------

Date: Thu, 22 Sep 88 13:42:54 +0300
From: scia@stek5.oulu.fi (SCIA confrence in OULU)


      The 6th Scandinavian Conference on Image Analysis
      =================================================

      June 19 - 22, 1989
      Oulu, Finland

      Second Call for Papers



      INVITATION TO 6TH SCIA

      The 6th Scandinavian Conference on Image  Analysis   (6SCIA)
      will  be arranged by the Pattern Recognition Society of Fin-
      land from June 19 to June 22, 1989. The conference is  spon-
      sored  by the International Association for Pattern Recogni-
      tion. The conference will be held at the University of Oulu.
      Oulu is the major industrial city in North Finland, situated
      not far from the Arctic Circle. The conference  site  is  at
      the Linnanmaa campus of the University, near downtown Oulu.

      CONFERENCE COMMITTEE

      Erkki Oja, Conference Chairman
      Matti Pietikäinen, Program Chairman
      Juha Röning, Local Organization Chairman
      Hannu Hakalahti, Exhibition Chairman

      Jan-Olof Eklundh, Sweden
      Stein Grinaker, Norway
      Teuvo Kohonen, Finland
      L. F. Pau, Denmark

      SCIENTIFIC PROGRAM

      The program will  consist  of  contributed  papers,  invited
      talks  and special panels.  The contributed papers will cov-
      er:

              * computer vision
              * image processing
              * pattern recognition
              * perception
              * parallel algorithms and architectures

      as well as application areas including

              * industry
              * medicine and biology
              * office automation
              * remote sensing

      There will be invited speakers on the following topics:

      Industrial Machine Vision
      (Dr. J. Sanz, IBM Almaden Research Center)

      Vision and Robotics
      (Prof. Y. Shirai, Osaka University)

      Knowledge-Based Vision
      (Prof. L. Davis, University of Maryland)

      Parallel Architectures
      (Prof. P. E. Danielsson, Linköping University)

      Neural Networks in Vision
      (to be announced)

      Image Processing for HDTV
      (Dr. G. Tonge, Independent Broadcasting Authority).

      Panels will be organized on the following topics:

      Visual Inspection in the  Electronics  Industry  (moderator:
      prof. L. F. Pau);
      Medical Imaging (moderator: prof. N. Saranummi);
      Neural Networks and Conventional  Architectures  (moderator:
      prof. E. Oja);
      Image Processing Workstations (moderator: Dr.  A.  Kortekan-
      gas).

      SUBMISSION OF PAPERS

      Authors are invited to submit four copies of an extended
      summary (at least 1000 words) of each paper to:

              Professor Matti Pietikäinen
              6SCIA Program Chairman
              Dept. of Electrical Engineering
              University of Oulu
              SF-90570 OULU, Finland

              tel +358-81-352765
              fax +358-81-561278
              telex 32 375 oylin sf
              net scia@steks.oulu.fi

      The summary should contain sufficient  detail,  including  a
      clear description of the salient concepts and novel features
      of the work.  The deadline for submission  of  summaries  is
      December  1, 1988. Authors will be notified of acceptance by
      January 31st, 1989 and final camera-ready papers will be re-
      quired by March 31st, 1989.

      The length of the final paper must not exceed 8  pages.  In-
      structions  for  writing the final paper will be sent to the
      authors.

      EXHIBITION

      An exhibition is planned.  Companies  and  institutions  in-
      volved  in  image analysis and related fields are invited to
      exhibit their products at demonstration stands,  on  posters
      or video. Please indicate your interest in taking part by
      contacting the Exhibition Committee:

              Matti Oikarinen
              P.O. Box 181
              SF-90101 OULU
              Finland

              tel. +358-81-346488
              telex 32354 vttou sf
              fax. +358-81-346211

      SOCIAL PROGRAM

      A social program will be arranged, including  opportunities
      to enjoy the conference location, the sea, and the  midnight
      sun. There are excellent opportunities  for  post-conference
      tours, e.g. to Lapland or to the lake district of Finland.

      The social program will consist of a get-together  party  on
      Monday June 19th, a city reception on Tuesday June 20th, and
      the conference Banquet on Wednesday June 21st. These are all
      included  in the registration fee. There is an extra fee for
      accompanying persons.

      REGISTRATION INFORMATION

      The registration fee will be 1300  FIM  before  April  15th,
      1989  and 1500 FIM afterwards. The fee for participants cov-
      ers:  entrance  to  all  sessions,  panels  and  exhibition;
      proceedings; get-together party, city reception, banquet and
      coffee breaks.

      The fee is payable by
              - check made out to 6th SCIA and mailed to the
                Conference Secretariat;
              - bank transfer or draft to the conference account; or
              - all major credit cards.

      Registration forms, hotel information and  practical  travel
      information  are  available from the Conference Secretariat.
      An information package will be sent to authors  of  accepted
      papers by January 31st, 1989.

      Secretariat:
              Congress Team
              P.O. Box 227
              SF-00131 HELSINKI
              Finland
              tel. +358-0-176866
              telex 122783 arcon sf
              fax +358-0-1855245

      There will be hotel rooms available for  participants,  with
      prices  ranging  from  135 FIM (90 FIM) to 430 FIM (270 FIM)
      per night for a single room (double room/person).

------------------------------

End of AIList Digest
********************

shrager.pa@xerox.com
Computational Approaches to Scientific Discovery
I would like to be invited to your symposium.  I have an old Stanford
report that discusses the notion of creativity and argues that a creative
solution to a problem is one that involves entities that are not formed,
by functional composition, from the entities mentioned in the statement
of the problem.  In particular, there can be easy creativity.
∂26-Sep-88  1025	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #91  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Sep 88  10:24:19 PDT
Date: Mon 26 Sep 1988 01:56-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #91
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 26 Sep 1988       Volume 8 : Issue 91

 Philosophy -- Why do AI?  (4 messages)

----------------------------------------------------------------------

Date: 18 Sep 88 20:56:11 GMT
From: ucsdhub!hp-sdd!ncr-sd!serene!pnet12!bstev@ucsd.edu  (Barry
      Stevens)
Subject: Re: Why?

markh@csd4.milw.wisc.edu (Mark William Hopkins) writes:
>         Why does anyone want artificial intelligence?

>     A major determinant of how fragmented science is is how much communication
>takes place.  I submit here that the information explosion is for the most part
>an explosion in redundancy brought about by a communication bottleneck.  Our
>goal is then to find a way to open up this bottle neck.  It is here, again that
>AI (especially in relation to intelligent data bases) may come to the rescue.

Along with the need to handle increasing amounts of information, comes an
increased need for performance:

   Timeliness -- the speed at which information must be processed has
                 increased dramatically. (e.g. computer console messages
                 in a commercial datacenter with multiple CPUs need to be
                 analyzed at the rates of 5 to 50 per SECOND. )

   Accuracy   -- decisions must be made at accuracies that are beyond the
                 sustained ability of human experts (e.g. process control
                 systems needing 0.1% accuracy in set point values for
                 hundreds of variables set every minute for 24 hrs/day)

   Cost       -- expert knowledge must be employed in situations where
                 the presence of experts can't be afforded (e.g. stock
                 or commodity trading systems based on expert systems
                 and/or neural nets)

   Availability- most experts are fond of their weekends and evenings, and
                 make a very big deal over their vacations. AI methods can
                 make their skills available 24 hrs, 365 days/year.

I have surveyed many companies in their use of AI techniques. My personal
feeling, supported by no one else at this point, is that the "why" of AI
will be answered when the following application is implemented and becomes
widespread:

   A mid-level manager must analyze a budget report once a week.  When he
   uses the rules he follows as the basis for an expert system ("If the
   variance is greater than $1000 in Acct 101, OR the TOTAL in Line 5
   is greater than 10% of plan, OR ... ") and then delegates the expert
   system and his rule base of 10, 15, or 20 rules to HIS SECRETARY, AI
   and expert systems will have come of age in industry.
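
As a rough sketch of what such a rule base might look like in code (the
account names, thresholds, and report values below are invented for the
example, not taken from any real system), a handful of budget-variance rules
could be written and handed off roughly like this:

    # Illustrative sketch of a tiny budget-review "expert system" of the kind
    # described above.  Account names, thresholds, and rules are hypothetical.

    budget_report = {
        "Acct 101 variance": 1250.0,   # dollars over/under plan
        "Line 5 total": 52000.0,
        "Line 5 plan": 45000.0,
    }

    def rule_acct_101(report):
        if report["Acct 101 variance"] > 1000:
            return "Flag: variance in Acct 101 exceeds $1000"

    def rule_line_5(report):
        if report["Line 5 total"] > 1.10 * report["Line 5 plan"]:
            return "Flag: Line 5 total exceeds plan by more than 10%"

    RULES = [rule_acct_101, rule_line_5]   # the manager's 10-20 rules go here

    def review(report):
        """Apply every rule and collect the flags a secretary could act on."""
        return [msg for rule in RULES if (msg := rule(report))]

    for flag in review(budget_report):
        print(flag)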

The big question will be answered not by robotics applications, or speaker-
independent speech recognition, or writer-independent character
recognition, or even smart data bases (most professionals don't use data
bases), but by simple tasks, done by almost everyone in the work
environment, taken over or delegated to someone else as a result of AI.  The
AI applications that do that will propagate across the workplace like LOTUS
or other truly horizontal applications.

UUCP: {crash ncr-sd}!pnet12!bstev
ARPA: crash!pnet12!bstev@nosc.mil
INET: bstev@pnet12.cts.com

------------------------------

Date: Mon, 19 Sep 88 10:17:41 -0400 (EDT)
From: David Greene <dg1v+@andrew.cmu.edu>
Subject: Re: Why?

In <digest.sXB3k7y00Ukc40RUZb@andrew.cmu.edu> markh@csd4.milw.wisc.edu  (Mark
William Hopkins) writes:

>The first thing that comes to mind is our current situation
>as regards science -- its increasing specialization.  Most people will
>agree that this is a trend that has gone way too far ... to the extent that
>we may have sacrificed global perspective and competence in our
>specialists; and further that it is a trend that needs to be reversed.
>Yet fewer would dare to suggest that we can overcome the problem.

I agree that this is serious and that AI, as an inherently interdisciplinary
field, has the potential to pull areas together.   However, there is tremendous
pressure within the academic community to encourage and reward *focused* efforts
in a narrow area, at least until you become a tenured old-sage :-)

It's very time consuming to keep up with multiple fields to any real depth, but
even as you look for synergy you hear your advisor saying, "It won't get
published if the editors don't have a department for it..."  Even when
there is a department, it is suggested that you remove the excess (other
disciplines) to make it more relevant or accessible to the regular readership.
I think it's worth the effort, but it would certainly help if it weren't such
an uphill struggle.

-David

-----------------
David Perry Greene                           GSIA
dg1v@andrew.cmu.edu                      Carnegie Mellon University

"You're welcome to use my oppinions, just don't get them all wrinkled."

------------------------------

Date: 19 Sep 88 06:59:52 GMT
From: TAURUS.BITNET!shani@ucbvax.berkeley.edu
Subject: Re: Why?

In article <6823@uwmcsd1.UUCP>, markh@csd4.milw.wisc.edu.BITNET writes:
>         Why does anyone want artificial intelligence?
>
> What is it that you're seeking to gain by it?  What is it that you would have
> an intelligent machine do?

Well, well waddaya know! :-)

Not long ago, an endless argument was held in this newsgroup regarding AI
and value-systems. It seems that the reason this argument did not (as far as
I know) reach any constructive conclusions is that the question above was
never raised... So really, what do we expect an intelligent machine to be like?

Or let me sharpen the question a bit:

                  How will we know that a machine is intelligent, if we lack
                  the means to measure (or even to define) intelligence?

This may sound a bit cynical, but it is my opinion that setting up such
misty goals, and using terms like 'intelligence' or 'value-systems' to
describe them, is mainly meant to fund something which MAY BE beneficial
(since research is almost always beneficial in some way), but will never
reach those goals... for who would want to fund research which will only
end up with easier-to-use programming languages or faster computers?


O.S.

BTW: I wish it weren't like that. It would be wonderful if R&D financing were
     not goal-dependent... all in all, the important thing is the research
     itself.

------------------------------

Date: 21 Sep 88 20:32:10 GMT
From: quintus!certes!jeff2@unix.sri.com  ( jeff)
Subject: Re: Why?

in article <867@taurus.BITNET>, shani@TAURUS.BITNET says:
>
> In article <6823@uwmcsd1.UUCP>, markh@csd4.milw.wisc.edu.BITNET writes:
>>         Why does anyone want artificial intelligence?
>>
>> What is it that you're seeking to gain by it?  What is it that you would have
>> an intelligent machine do?
>
> Or let me sharpen the question a bit:
>
>                   How will we know that a machine is intelligent, if we lack
>                   the means to measure (or even to define) intelligence?
>
> This may sound a bit cynical, but it is my opinion that setting up such
> misty goals, and using terms like 'intelligence' or 'value-systems' to
> describe them, is mainly meant to fund something which MAY BE beneficial
> (since research is almost always beneficial in some way), but will never
> reach those goals... for who would want to fund research which will only
> end up with easier-to-use programming languages or faster computers?
>

Consider the following:
        1): it takes nearly 30 years (from conception to expert level)
                to train a new programmer/software engineer

        2): the average "expert expectancy" of this person is (I'm guessing)
                probably 10 - 15 years

        3): there are nearly 100,000,000 working people with ideas to improve
                the way their jobs are done.

        4): that (perhaps) 1 person in 10 of these has the skills to
                automate the job.

At least two people are required to automate some portion of a task; one to
describe the process and one to automate it; this increases the cost of the
automation process (two salaries are being paid to do one job), and limits
the number of tasks that can be automated at any one time to the number of
automaters available.

As a result, the number of tasks to be automated is expanding much more
rapidly than the number of people to automate it. Given that few automaters
remain experts in their field long enough to be fully replaced, we have no
choice but to reduce the skill level required to automate a task if we want
to improve our abilities to automate tasks. This alone is justification for
research into "easy to use" languages.

Additionally, it would be nice if AI could create development tools for the
other automation tools, with languages sufficiently close to those in current
use (e.g. English) that little training is required to use them.

--
/*---------------------------------------------------------------------------*/
 Jeff Griffith       Teradyne/Attain, Inc., San Jose, CA 95131 (408)434-0822
 Disclaimer:         The views expressed here are strictly my own.
 Paths:              jeff@certes!quintus or jeff@certes!aeras!sun

------------------------------

End of AIList Digest
********************

∂26-Sep-88  1023	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #90  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Sep 88  10:23:41 PDT
Date: Mon 26 Sep 1988 01:48-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #90
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 26 Sep 1988       Volume 8 : Issue 90

 Queries:

  Awareness in Epistemic Logics
  Best AI Universities??
  NL interfaces to Rule Based Expert Systems
  Source for the RETE algorithm (Forgy, CMU)

 Responses:

  Genetic Learning Algorithms
  Model-based Reasoning

----------------------------------------------------------------------

Date: Fri, 23 Sep 88 18:34 SET
From: Fabrizio Sebastiani <FABRIZIO%ICNUCEVM.BITNET@MITVMA.MIT.EDU>
Subject: Awareness in Epistemic Logics

Does anybody know whether further studies have been carried out on Fagin
and Halpern's notion of "awareness" in epistemic logics, as from their
1985 IJCAI paper?  whether the notion had been previously discussed in
the philosophy of language or the philosophy of mind?  Anyone wishing to
discuss the topic, provide references, send papers, etc., is invited to
contact me.    Fabrizio Sebastiani

------------------------------

Date: 5 Sep 88 15:37:29 GMT
From: hpl-opus!hpccc!hp-sde!hpfcdc!hpgrla!danj@hplabs.hp.com  (Dan
      Johnson)
Subject: Best AI Universities??

***
I am conducting an informal survey on U.S. universities with graduate
C.S. programs in the following areas:

        Image Processing
        Pattern Recognition
        AI
        Neural Nets
        User Interface Design

Which universities have the best instructional and/or research programs
in these areas and why?  All opinions gratefully accepted.  (Opinions
based on factual data such as graduate surveys, etc. are even more gratefully
accepted.  :-).

I will summarize and repost if there is sufficient interest.

----------------------------------------------------
Dan Johnson          UUCP: hplabs!hpfcla!hpgrla!danj
Hewlett-Packard
Greeley Division

------------------------------

Date: Tue, 20 Sep 88 10:24:14 +1000
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: NL interfaces to Rule Based Expert Systems

I recently broadcast a request for information on NL interfaces to rule-based
expert systems. There were no replies, but I came across the following article:

DATSKOVSKY-MOERDLER, G., McKEOWN, K.R. and ENSOR, J.R. (1987);
Building Natural Language Interfaces for Rule-based Systems, IJCAI-87,
p682-687.

The first two authors are from Columbia University (NY) and the third author
is from AT&T Bell Lab. (Holmdel, N.J.).

Would anyone have their e-mail addresses?  (I am still interested in
pointers to other work.)

Eric Tsui                               eric@aragorn.oz
Division of Computing and Mathematics
Deakin University
Geelong, Victoria 3217
Australia

------------------------------

Date: Fri, 23 Sep 88 10:51:03 +1000
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Source for the RETE algorithm (Forgy, CMU)

Does anyone know where I can obtain a version of the source/object code for
the RETE algorithm (Forgy's, CMU)?  I would like to run a few experiments
with it, and if I do decide to incorporate it into our system, I would
re-write it in Prolog anyway. (Needless to say, all for non-commercial
purposes.) Versions in C, Lisp, Prolog, Smalltalk and Pascal are all welcome.

Am I correct that the latest publication on RETE is:

Forgy, C.L. and Shepherd, S.J. (1987); Rete: A Fast Match Algorithm,
AI Expert 2(4), p35-40.

Eric Tsui                               eric@aragorn.oz
Division of Computing and Mathematics
Deakin University
Geelong, Victoria 3217
AUSTRALIA

------------------------------

Date: Tue, 20 Sep 88 13:36 PDT
From: jan cornish <cornish@RUSSIAN.SPA.Symbolics.COM>
Subject: Genetic Learning Algorithms

    Date: 18 Sep 88 13:49:01 GMT
    From: thefool@athena.mit.edu  (Michael A. de la Maza)


       I am currently working on a genetic learning algorithm(gla) engine that
    draws inferences from a horse racing database (the results could be
    enRICHening).  Has anyone compiled a bibliography of gla articles/books?
       If I'm inundated with responses I'll post a summary here.

What makes you think a GA will work?

You probably would want to use a GA-based "classifier system" (see a
book by John Holland et al. called Induction) in which a random
population of inductive rules is evolved. You might want to take a look at
an article, "Pinpointing Good Hypotheses with Heuristics" by Steven
Salzberg, in the book "Artificial Intelligence & Statistics". He
developed a weighted feature vector approach where the weights were
updated by heuristics. It worked well.  The feature vector was about 70
dimensional.
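
For flavor only, here is a minimal sketch of the general idea of a weighted
feature vector whose weights are nudged by a simple heuristic after each
outcome; it is not Salzberg's actual method, and the features, data, and
update rule are invented:

    # Minimal sketch of a weighted-feature-vector predictor whose weights are
    # adjusted by a simple heuristic after each race.  Features, values, and
    # the update rule are invented for illustration, not Salzberg's method.

    features = ["recent_win_rate", "jockey_rating", "post_position_bonus"]
    weights = {f: 1.0 for f in features}

    def score(horse):
        """Weighted sum of the horse's feature values."""
        return sum(weights[f] * horse[f] for f in features)

    def heuristic_update(winner, loser, step=0.1):
        """Nudge weights toward features on which the winner beat the loser."""
        for f in features:
            if winner[f] > loser[f]:
                weights[f] += step
            elif winner[f] < loser[f]:
                weights[f] = max(0.0, weights[f] - step)

    # Toy "race result": horse_a beat horse_b.
    horse_a = {"recent_win_rate": 0.4, "jockey_rating": 0.9, "post_position_bonus": 0.2}
    horse_b = {"recent_win_rate": 0.6, "jockey_rating": 0.5, "post_position_bonus": 0.3}

    heuristic_update(horse_a, horse_b)
    print("updated weights:", weights)
    print("scores:", score(horse_a), score(horse_b))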

Where are you getting your data?

    Michael A. de la Maza                 thefool@athena.mit.edu
    Query: What is the answer to this question?

Answer: what question is this the answer to?

    [There is a separate list covering genetic algorithms called GA-LIST.
    Send subscription requests to gref@NRL-AIC.ARPA.  However, AIList will
    continue to carry occasional information ...

A good list ...

            In addition, offutt@caen.engin.umich.edu (Daniel M. Offutt) is
    offering a GA function optimization package.  Contact him for details.

                            - nick]

------------------------------

Date: Tue, 20 Sep 88 19:31:46 EDT
From: davis@wheaties.ai.mit.edu (Randall Davis)
Subject: Model-based Reasoning


Concerning:
    From: jdavis@ucsd.edu  (James P. Davis)
    Subject: Model-based Reasoning

    I am looking for some good references on the subject of Model-based
    reasoning (MBR). I am also interested in finding out who is doing
    work/research in this area, and what domains are being investigated.
    Nobody seems to have put any special compendiums (like Morgan Kaufmann)
    in this area yet. Any of you out there?

See the article by Davis and Hamscher in "Exploring AI", a compendium of
recent AAAI survey talks, just published by M/K.  The article is a survey of
the state of the art of model-based troubleshooting as of August 1987.

In addition, I'm working on an edited collection of articles summarizing the
MIT group's work in this area, including troubleshooting, test generation,
design, design for testability, combining causal and associational reasoning,
etc.  Available in spring/summer 1989.

    How does MBR relate to "reasoning from first principles"?

They're used essentially synonymously.  "First principles" was used earlier on
to emphasize that the systems reasoned from fundamental engineering principles
rather than empirical associations; "model-based" has been used more recently
to acknowledge the central role of the device model in comparing behavior
predicted by the model with behavior actually emitted by the physical device.

------------------------------

End of AIList Digest
********************

∂26-Sep-88  1025	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #92  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Sep 88  10:25:02 PDT
Date: Mon 26 Sep 1988 01:59-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #92
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 26 Sep 1988       Volume 8 : Issue 92

 Philosophy:

  Theft or Honest Toil, Pinker & Prince, learning rules
  Common sense knowledge of continuous action (2 messages)
  I got rhythm
  Commonsense reasoning

----------------------------------------------------------------------

Date: 2 Sep 88 22:35:01 GMT
From: mnetor!utzoo!dciem!dretor!client2!mmt@uunet.uu.net  (Martin
      Taylor)
Subject: Theft or Honest Toil, (was Re: Pinker & Prince Reply (long
         version))


Harnad characterizes learning rules from a rule-provider as "theft",
whereas obtaining them by evaluation of the statistics of input data
is "honest toil".  But the analogy is perhaps better in a different
domain: learning by evaluating the statistics of the environment
is like building up amino acids and other nutritious things from
inorganic molecules through photosynthesis, whereas obtaining rules
from rule-providers is like eating already built nutritious things.
One of the great advantages of language is that we CAN take advantage
of the regularities discovered in the data by other people.  The rules
they tell us may be wrong, but to use them is easier than to discover
our own rules.  It is hardly to be taken as an analogy to "theft".

If we look at early child learning, the "theft" question becomes:
Has evolution provided us with a set of rules that we do not have to
obtain from the data, so that we can later obtain more rules from
people who did themselves learn from data?  Obviously in some sense
the answer is "yes" there are SOME innate rules regarding how we
interpret sensory input, even if those rules are as low-level as
to indicate how to put together a learning net.  Obviously, also,
there are MANY rules that we have to get from the data and/or from
people who learned them from the data.  The question then becomes
whether the "rules" regarding past-tense formation are of the innate
kind, of the data-induced kind, or of the passed-on kind.

My understanding of the developmental literature is that children
pass through three phases: (i) correct past-tense formation for those
verbs for which the child uses the past tense frequently; (ii) false
regularization, in which non-regular past tenses (went) are replaced
by regularized ones (goed); (iii) more-or-less correct past tense
formation, in which exceptions are properly used, AND novel or
neologized verbs are given regular past tenses (in some sense of
regular).  This sequence suggests to me that the pattern does not
have any innate rule component.  Initially, all words are separate,
in the sense that "went" is a different word from "go".  Later,
relations among words are made (I will not say "noticed"), and
the notion of "go" becomes part of the notion of "went".  Furthermore,
the notion of a root meaning with tense modification becomes part
of verbs in general.  Again, I will not say that this is connected
with any kind of symbolic rule.  It may be the development of net
nodes that are activated for root parts and for modifier parts of
words.  It would be overly rash to claim either that rules are involved
or that they are not.  In the final stage, the rule-like way of
obtaining past tenses is well established enough that the exceptions
can be clearly distinguished (whether statistically or otherwise is
again disputable).

One thing that seems perfectly clear is that humans are in general
capable of inducing rules in the sense that some people can verbalize
those rules.  When such a person "teaches" a rule to a "student",
the student must, initially at least, apply it AS a rule.  But even
in this case, it is not clear that skilled use of what has been learned
involves continuing to use the rule AS a rule.  It may have served
to induce new node structures in a net.

In "The Psychology of Reading" (Academic Press, 1983), my wife and I
discussed such a sequence under the heading of "Three-phased Learning",
which we took to be a fairly general pattern in the learning of skilled
behaviour (such as reading).  Phase 1 is the learning of large-scale
unique patterns.  Phase 2 is the discovery of consistent sub-patterns
and consistent ways in which the sub-patterns relate to each other
(induction or acquisition of rules).  Phase 3 is the incorporation
of these sub-elements and relational patterns into newly structured
global patterns--the acquisition of true skill.

"Theft," in Harnad's terms, can occur only as part of Phase 2. Both
Phase 1 and Phase 3 involve "honest toil."  My feeling is that
current connectionist models are mainly appropriate to Phase 1,
and that symbolic approaches are mainly appropriate to Phase 2,
though there is necessarily overlap.  If this is so, there need be no
contention between models using one or the other approach.  They are both
correct, but under different circumstances.
--
Martin Taylor  DCIEM, Box 2000, Downsview, Ontario, Canada M3M 3B9
uunet!mnetor!dciem!client1!mmt  or mmt@zorac.arpa   (416) 635-2048

------------------------------

Date: 18 Sep 88  1543 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: common sense knowledge of continuous action

If Genesereth and Nilsson didn't give an example to illustrate
why differential equations aren't enough, they should have.
The example I like to give when I lecture is that of spilling
the water glass on the lectern.  If the front row is very
close, it might get wet, but usually not even that.  The
Navier-Stokes equations govern the flow of the spilled water
but are entirely useless in this common sense situation.
No-one can acquire the initial conditions or integrate the
equations sufficiently rapidly.  Moreover, absorption of water
by the materials it flows over is probably a strong enough
effect, so that more than the Navier-Stokes equations would
be necessary.

Thus there is no "scientific theory" involving differential
equations, queuing theory, etc.  that can be used by a robot
to determine what can be expected when a glass of water
is spilled, given what information is actually available
to an observer.  To use the terminology of my 1969 paper
with Pat Hayes, the differential equations don't form
an epistemologically adequate model of the phenomenon, i.e.
a model that uses the information actually available.

While some people are interested in modelling human performance
as an aspect of psychology, my interest is artificial intelligence.
There is no conflict with science.  What we need is a scientific
theory that can use the information available to a robot
with human opportunities to observe and do as well as a
human in predicting what will happen.  Thus our goal is a scientific
common sense.

The Navier-Stokes equations are important in (1) the design
of airplane wings, (2) in the derivation of general inequalities,
some of which might even be translatable into terms common sense
can use.  For example, the Bernoulli effect, once a person has
(usually with difficulty) integrated it into his common sense
knowledge can be useful for qualitatively predicting the effects of
winds flowing over a house.

Finally, the Navier-Stokes equations are imbedded in a framework
of common sense knowledge and reasoning that determine the
conditions under which they are applied to the design of airplane
wings, etc.

------------------------------

Date: 19 Sep 88 01:18:29 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: state and change/continuous actions

>Foundations of Artificial Intelligence," I find it interesting to
>compare and contrast the concepts described in Chapter 11 - "State
>and Change" with state/change concepts defined within systems
>theory and simulation modeling. The authors make the following statement:
>"Insufficient attention has been paid to the problem of continuous
>actions." Now, a question that immediately comes to mind is "What problem?"

Presumably, they are referring to the fact that formal systems are strictly
discrete and finite. This has to do with `effective computation.' Discrete
systems can be explained in such simple terms that it is always clear exactly
what is being done.

Continuous systems are computable using calculus, but is this `effective
computation'? Calculus uses a number of existence theorems which prove that
some point or set exists, but provide no method to effectively compute the
value. Or is knowing the value exists sufficient because, after all, we can
map the real line into a bounded interval which can be traversed in finite
time?

It is not clear that all natural phenomena can be modelled on a discrete
and finite digital computer. If not, what computer could we use?

>Any thoughts?

------------------------------

Date: 19 Sep 88 01:18:45 GMT
From: dscatl!mgresham@gatech.edu (Mark Gresham)
Subject: I got rhythm


In a recent article <on comp.ai.digest> PGOETZ@LOYVAX.BITNET writes:

>Here's a question for anybody:  Why do we have rhythm?
>
>Picture yourself tapping your foot to the tune of the latest Top 40 trash hit.
>While you do this, your brain is busy processing sensory inputs, controlling
>the muscles in your foot, and thinking about whatever you think about when
>you listen to Top 40 music.
>[...text deleted...]
>It comes down to this:  Different actions require different processing
>overhead.  So why, no matter what we do, do we perceive time as a constant?

The fact is, we *don't*. (Take it from a musician!)  Generally
people have a quite erratic perception of time.
The perception (in the top 40 example) is one of constancy in
relation to some other perceived event we believe to
be constant (or assume is so).  Hence, the "beats" in the
music (which we deem to be regular) are giving us fresh input
which we use to "correct" our foot tapping.

>Why do we, in fact, have rhythm?  Do we have an internal clock, or a
>"main loop" which takes a constant time to run?  Or do we have an inadequate
>view of consciousness when we see it as a program?
>
>Phil Goetz
>PGOETZ@LOYVAX.bitnet

Try this experiment.  Or several of you try it.
Take a stopwatch (digital is preferable because silent).
Don't look at it or any other clock, and don't count;
press the start button.
Then, when you think five minutes are up, stop it.
Look at the watch and see how you did.
I know of one percussionist who is said to be quite accurate.
If you are really concentrating on "the passage of time"
--genuinely trying to be aware of it--my guess is that
you'll start to sweat (or otherwise become uncomfortable)
after about 40 seconds or so.  It takes quite a bit of
discipline to empty your mind enough to successfully do
that.  Try it.  Invent other similar experiments.
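
For anyone who would rather try this at a keyboard than with a stopwatch,
here is a trivial sketch (invented for this note, with a 30-second default
target simply to keep the trial short) that reports how far your felt
interval drifts from clock time:

    # Trivial timing experiment: press Enter to start, wait until you *feel*
    # the target interval has passed, then press Enter again.  The 30-second
    # default is just to keep the trial short; try longer targets too.

    import time

    TARGET_SECONDS = 30.0

    input("Press Enter to start the clock...")
    start = time.monotonic()
    input(f"Press Enter again when you think {TARGET_SECONDS:.0f} seconds have passed...")
    elapsed = time.monotonic() - start

    print(f"actual elapsed time: {elapsed:.1f} s")
    print(f"your error:          {elapsed - TARGET_SECONDS:+.1f} s")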

Let me know what you discover.

--Mark Gresham

(please e-mail or post to rec.music.classical)

++++++++++++++++++++++++++++++++++++++++++
Mark Gresham              Atlanta, GA, USA
UUCP:  ...!gatech!dscatl!mgresham
INTERNET: mgresham@dscatl.UUCP
++++++++++++++++++++++++++++++++++++++++++

------------------------------

Date: 19 Sep 88 15:20:13 GMT
From: fishwick@bikini.cis.ufl.edu (Paul Fishwick)
Subject: commonsense reasoning


I very much appreciate Prof. McCarthy's response and would like to comment.
The "water glass on the lectern" example is a good one for commonsense
reasoning; however, let's further examine this scenario. First, if we
wanted a highly accurate model of water flow then we would probably
use flow equations (such as the NS equations) possibly combined with
projectile modeling. Note also that a lumped model of the detailed math
model may reduce complexity and provide an answer for us. We have not
seen specific work in this area since spilt water in a room is
of little scientific value to most researchers. Please note that I am
not trying to be facetious -- I am just trying to point out that *if* the
goal is "to solve the problem of predicting the result of continuous actions"
then math models (and not commonsense models) are the method of choice.
Note that the math model need not be limited to a single set of PDE's.
Also, the math model can be an abstract "lumped model" with less complexity.
The general method of simulation incorporates combined continuous and
discrete methods to solve all kinds of physical problems. For instance,
one needs to use notions of probability (that the water will make it
to the front row), simplified flow equations, and projectile motion.
Also, solving of the "problem of what happens to the water" need not
involve flow equations. Witness, for instance, the work of Toffoli and
Wolfram where cellular automata may be used "as an alternative to"
differential equations. Also, the problem may be solved using visual
pattern matching - it is quite likely that humans "reason" about
"what will happen" to spilt liquids using associative database methods
(the neural netlanders might like this approach) based on a huge
library of partial images from previous experience (note Kosslyn's work).
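
As a toy illustration of the cellular-automaton idea just mentioned (a
generic discrete averaging rule invented for this note, not any specific
construction of Toffoli's or Wolfram's), a one-dimensional lattice rule can
mimic a liquid spreading without writing down any differential equations:

    # Toy 1-D cellular automaton: each cell holds an amount of "water" and, at
    # each step, keeps half and passes a quarter to each neighbour (reflecting
    # at the walls).  A generic discrete stand-in for a flow equation.

    def step(cells):
        n = len(cells)
        new = [0.0] * n
        for i, w in enumerate(cells):
            new[i] += w * 0.5                           # half stays put
            new[i - 1 if i > 0 else i] += w * 0.25      # a quarter flows left
            new[i + 1 if i < n - 1 else i] += w * 0.25  # a quarter flows right
        return new

    cells = [0.0] * 10
    cells[5] = 1.0                                      # the spilled glass of water
    for _ in range(20):
        cells = step(cells)
    print(["%.2f" % w for w in cells])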

I still haven't mentioned anything about artificial intelligence yet - just
methods of problem solving. I agree that differential equations by
themselves do not comprise an epistemologically adequate model. But note
that no complex problem is solved using only one model language (such as
DE's). The use of simulation is a nice example since, in simulating
a complex system, one might use many "languages" to solve the problem.
Therefore, I'm not sure that epistemological adequacy is the issue.
The issue is, instead, to solve the problem by whatever methods
available.

Now, back to AI. I agree that "there is no theory involving DE's (etc.)
that can be used by a robot to determine what can be expected when a
glass of water is spilled." I would like to take the stronger position
that searching for such a singular theory seems futile. Certainly, robots of
the future will need to reason about the world and about moving liquids;
however, we can program robots to use pattern matching and whatever else
is necessary to "solve the problem."  I suppose that I am predisposed
to an engineering philosophy that would suggest research into a method
to allow robots to perform pattern recognition and equation solving
to answer questions about the real world. I see no evidence of a specific
theory that will represent the "intelligence" of the robot. I see only
a plethora of problem solving tools that can be used to make future
robots more and more adaptive to their environments.

If commonsense theories are to be useful then they must be validated.
Against what? Well, these theories could be used to build programs
that can be placed inside working robots. Those robots that performed
better (according to some statistical criterion) would validate
respective theories used to program them. One must either 1) validate
against real world data [the cornerstone to the method of computer
simulation] , or 2) improved performance. Do commonsense theories
have anything to say about these two "yardsticks?" Note that there
are many AI research efforts that have addressed validation - expert
systems such as MYCIN correctly answered "more and more" diagnoses
as the program was improved. The yardstick for MYCIN is therefore
a statistical measure of validity. My hat is off to the MYCIN team for
proving the efficacy of their methods. Expert systems are indeed a
success. Chess programs have a simple yardstick - their USCF or FIDE
rating. This concentration on yardsticks and methods of validation
is not only helpful, it is essential to demonstrate that an AI method
is useful.

-paul

+------------------------------------------------------------------------+
| Prof. Paul A. Fishwick.... INTERNET: fishwick@bikini.cis.ufl.edu       |
| Dept. of Computer Science. UUCP: gatech!uflorida!fishwick              |
| Univ. of Florida.......... PHONE: (904)-335-8036                       |
| Bldg. CSE, Room 301....... FAX is available                            |
| Gainesville, FL 32611.....                                             |
+------------------------------------------------------------------------+

------------------------------

End of AIList Digest
********************

∂26-Sep-88  2040	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #93  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 26 Sep 88  20:40:40 PDT
Date: Mon 26 Sep 1988 23:22-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #93
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 27 Sep 1988      Volume 8 : Issue 93

 Philosophy -- The Grand Challenge (4 messages)

----------------------------------------------------------------------

Date: 22 Sep 88 08:20:17 GMT
From: peregrine!zardoz!dhw68k!feedme!doug@jpl-elroy.arpa  (Doug Salot)
Subject: Grand Challenges

In the 16 Sept. issue of Science, there's a blurb about the
recently released report of the National Academy of Sciences'
Computer Science and Technology Board ("The National Challenge
in Computer Science and Technology," National Academy Press,
Washington, DC, 1988).  Just when you thought you had the
blocks world figured out, something like this comes along.

Their idea is to start a U.S. Big Science (computer science,
that is) effort ala Japan.  In addition to the usual clamoring
for software IC's, fault tolerance, parallel processing and
a million mips (ya, 10↑12 ips), here's YOUR assignment:

1) A speaker-independent, continuous-speech, multilingual real-time
translation system.  Make sure you don't mess up when the
speech is ambiguous, ungrammatical, or a phrase is incomplete.
Be sure to maintain speaker characteristics (what's Chinese sound
like with a Texas accent?).  As you may know, Japan is funding
a 7 year effort at $120 million to put a neural-net in a telephone
which accomplishes this feat for Japanese <-> English (it's a
picture phone too, so part of the problem is to make lips
sync with the speech, I guess).

2) Build a machine which can read a chapter of a physics text and
then answer the questions at the end.  At least this one can be
done by some humans!

While I'm sure some interesting results would come from attempting
such projects, these sorts of things could probably be done sooner
by tossing out ethical considerations and cloning humanoids.

If we were to accept the premise that Big Science is a Good Thing,
what should our one big goal be?  I personally think an effort to
develop a true man-machine interface (i.e., neural i/o) would be
the most beneficial in terms of both applications and as a driving
force for several disciplines.
--
Doug Salot || doug@feedme.UUCP || ...{zardoz,dhw68k,conexch}!feedme!doug
           Raisin Deters - Breakfast never tasted so good.

------------------------------

Date: 23 Sep 88 13:39:57 GMT
From: ndcheg!uceng!dmocsny@iuvax.cs.indiana.edu  (daniel mocsny)
Subject: Re: Grand Challenges

In article <123@feedme.UUCP>, doug@feedme.UUCP (Doug Salot) writes:
[ goals for computer science ]

> 2) Build a machine which can read a chapter of a physics text and
> then answer the questions at the end.  At least this one can be
> done by some humans!
>
> While I'm sure some interesting results would come from attempting
> such projects, these sorts of things could probably be done sooner
> by tossing out ethical considerations and cloning humanoids.

A machine that could digest a physics text and then answer questions
about the material would be of astronomical value. Sure, humanoids can
do this after a fashion, but they have at least three drawbacks:

(1) Some are much better than others, and the really good ones are
rare and thus expensive,
(2) None are immortal or particularly speedy (which limits the amount of
useful knowledge you can pack into one individual),
(3) No matter how much the previous humanoids learn, the next one
still has to start from scratch.

We spend billions of dollars piling up research results. The result,
which we call ``human knowledge,'' we inscribe on paper sheets and
stack in libraries. ``Human knowledge'' is hardly monolithic. Instead
we partition it arbitrarily and assign high-priced specialists to each
piece. As a result, ``human knowledge'' is hardly available in any
sort of general, meaningful sense. To find all the previous work
relevant to a new problem is often quite an arduous task, especially
when it spans several disciplines (as it does with increasing
frequency). I submit that our failure to provide ourselves with
transparent, simple access to human knowledge stands as one of the
leading impediments to human progress. We can't provide such access
with a system that dates back to the days of square-rigged ships.

In my own field (chemical process design) we had a problem (synthesizing
heat recovery networks in process plants) that occupied scores of
researchers from 1970-1985. Lots of people tried all sorts of approaches
and eventually (after who knows how many grants, etc.) someone spotted
some important analogies with some problems from Operations Research work
of the '50's. We did have to develop some additional theory, but we could
have saved a decade or so with a machine that ``knew'' the literature.

Another example of an industrially significant problem in my field is
this: given a target molecule and a list of available precursors,
along with whatever data you can scrape together on possible chemical
reactions, find the best sequence of reactions to yield the target
from the precursors. Chemists call this the design of chemical syntheses,
and chemical engineers call it the reaction path synthesis problem. Since
no general method exists to accurately predict the success of a chemical
reaction, one must use experimental data. And the chemical literature
contains references to literally millions of compounds and reactions, with
more appearing every day. Researchers have constructed successful programs
to solve these types of problems, but they suffer from a big drawback: no
such program embodies enough knowledge of chemistry to be really useful.
The programs have some elaborate methods to represent reaction
data, but these knowledge bases had to be hand-coded. Due to the chaos
in the literature, no general method of compiling reaction data automatically
has worked yet. Here we have an example of the literature containing
information of enormous potential value, but it is effectively useless.
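
Stripped of all the chemistry, the search problem described here can be
sketched as a breadth-first search over a hand-coded table of reactions; the
molecule names and reactions below are placeholders, and a real system would
need exactly the experimental reaction data that, as noted above, is so hard
to compile automatically:

    # Skeletal reaction-path search: breadth-first search from available
    # precursors toward a target over a hand-coded reaction table.
    # Molecule names and reactions are placeholders, not real chemistry.

    from collections import deque

    # Each reaction: (frozenset of reactant names, product name)
    reactions = [
        (frozenset({"A", "B"}), "C"),
        (frozenset({"C", "D"}), "E"),
        (frozenset({"A", "D"}), "F"),
        (frozenset({"E", "F"}), "TARGET"),
    ]

    def find_path(precursors, target):
        """Return an ordered list of reactions reaching the target, or None."""
        start = frozenset(precursors)
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            stock, path = queue.popleft()
            if target in stock:
                return path
            for reactants, product in reactions:
                if reactants <= stock and product not in stock:
                    new_stock = frozenset(stock | {product})
                    if new_stock not in seen:
                        seen.add(new_stock)
                        queue.append((new_stock, path + [(sorted(reactants), product)]))
        return None

    print(find_path({"A", "B", "D"}, "TARGET"))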

If someone handed me a machine that could digest all (or at least
large subsets) of the technical literature and then answer any
question that was answerable from the literature, I could become a
wealthy man in short order. I doubt that many of us can imagine how
valuable such a device would be. I hope to live to see such a thing.

Dan Mocsny

------------------------------

Date: 24 Sep 88 17:53:11 GMT
From: ncar!tank!arthur!daryl@gatech.edu  (Daryl McLaurine)
Subject: Re: Grand Challenges

On "Human Knowledge"...

        I am one of many people who make a living by generating solutions to
complex problems or tasks in a specific field by understanding the
relationships between my field and many 'unrelated' fields of study.  As the
complexity of today's world increases, the realm of "Human Knowledge" cannot
remain 'monolithic'; to solve many problems, _especially_ in AI, one must
acquire a feel for the dynamic 'flow' of human experience and sense the
connectives within.  Few people are adept at this, and the ones who are either
become _the_ leading edge of their field, or are called upon to consult for
others by acting as that mythical construct that will 'understand' human
experience on demand.

        In my field, both academic and professional, I strive to make systems
that will acquire knowledge and make, _AT BEST_, moderately simple
correlations in data that may point to solutions to a specified task.  It is
still the realm of the Human Investigator to take these suggestions and make
a complete analysis of them by drawing on his/her(?) own heuristic capability
to arrive at a solution.  To date, the most advanced construct I have seen
only does a type of informational investigative 'leg work', and rarely can it
correlate facts that seem to be unrelated but may actually be ontologically
related.  (But, I am working on it ;-} )  It is true that the computer model
of what we do would be more effective for a research investigator, but the
point at which we can program 'intuitive knowledge' beyond simple
relationships in pattern recognition is far off.  The human in this equation
is still an unknown factor to itself (can YOU tell me how you think?  If you
can, there are MANY cognitive science people [psychologists, AI researchers,
etc.] who want to talk to you...), and until we can solve the grand challenge
of knowing ourselves, our creations are little more than idiot savants (and
bloody expensive ones at that!)

-kill me, not my clients (Translated from the legalese...)
   ↑
<{[-]}>-----------------------------------------------------------------------
   V   Daryl McLaurine, Programmer/Analyst (Consultant)
   |   Contact:
   |       Home:   1-312-955-2803 (Voice M-F 7pm/1am)
   |       Office: Computer Innovations 1-312-663-5930 (Voice M-F 9am/5pm)
   |         daryl@arthur (or zaphod,daisy,neuro,zem,beeblebrox) .UChicago.edu
==\*/=========================================================================

------------------------------

Date: 26 Sep 88 05:33:07 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Grand Challenges


      The lesson of the last five years seems to be that throwing money at
AI is not enormously productive.  The promise of expert systems has not
been fulfilled (I will refrain from quoting some of the promises today),
the Japanese Fifth Generation effort has not resulted in any visible
breakthroughs (although there are some who say that its real purpose was
to divert American attention from the efforts of Hitachi and Fujitsu to
move into the mainframe computer business), the DARPA/Army Tank Command
autonomous land vehicle effort has resulted in vehicles that are bigger,
but just barely able to stay on a well-defined road on good days.

      What real progress there is doesn't seem to be coming from the big-bucks
projects.  People like Rod Brooks, Doug Lenat, and a few others seem to be
making progress.  But they're not part of the big-science system.

      I will not comment on why this is so, but it does, indeed, seem to be
so.  There are areas in which throwing money at the problem does work,
but AI may not be one of them at this stage of our ignorance.

                                        John Nagle

------------------------------

End of AIList Digest
********************

∂08-Oct-88  1129	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #95  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Oct 88  11:29:29 PDT
Date: Sat  8 Oct 1988 14:12-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #95
To: AIList@AI.AI.MIT.EDU


AIList Digest             Sunday, 9 Oct 1988       Volume 8 : Issue 95


 Here we go again (please read)


 Announcements:

  Computer Othello Tournament
  2nd Generation Expert Systems
  1989 Summer Computer Simulation Conference
  Washington Neural Network Society Meeting
  3rd Intl. Conference on Genetic Algorithms
  AAAIC '88
  Annual Survey of AI Applications to Law and Taxation
  CLP mailing list
  conceptual structures news group

----------------------------------------------------------------------

Date: Fri, 7 Oct 88 22:05 EDT
From: AILIST-REQUEST@AI.AI.MIT.EDU
Subject: Here we go again (please read)


        Hardware difficulties with the distribution machine have
interrupted AIList service for the past week; apologies to all.
Unfortunately, the problem has not been completely resolved and may
recur.

        Just a reminder about addresses:

Postings:       AILIST@AI.AI.MIT.EDU
Admin:          AILIST-REQUEST@AI.AI.MIT.EDU

        The list is moderated so there is no harm done if you send to
the 'wrong' address; it simply makes it easier for me to keep organized.
Keep those Subject: lines clear and to the point ...

        Don't expect instant turnaround - I try not to send out a digest
unless I have at least four messages on the same topic.  It can take
a while for this many to accumulate.  Please let me know how you feel
about my choice of topics, and the assignment of messages thereto.

        Getting the amount of traffic *down* and the signal-to-noise
ratio *up* is currently a high priority.  The next few days will be
extra busy as I start moving out accumulated material, so please bear
with me.

        Cheers,


                - nick

------------------------------

Date: 26 Sep 88 16:11:25 GMT
From: tank!ncar!noao!asuvax!nud!mcdchg!clyde!watmath!csc@oddjob.uchica
      go.edu  (Wade Richards)
Subject: Computer Othello Tournament


                                The Fifth Annual

                University of Waterloo Computer Science Club

                          Computer Othello Tournament


When:   Saturday, November 12, 1988, 9:00 am EST.

Where:  University of Waterloo, Waterloo, Ontario. Math and Computer
        Building.  (Room to be announced.)

Who:    Anyone.

Why:    To encourage programming for purposes other than completing
        CS assignments.


The competition is open to anyone and everyone.  Each entrant is
required to have written a non-commercial computer program which plays
the game of Othello.  The programs may run on any computer that:

        a) the competitor can transport to or dial up from the
           competition site;
     or b) is available for use by the Computer Science Club.
           These include a VAX running 4.3BSD, an IBM 4341 running
           VM/CMS, a Hewlett-Packard 9000 series 200 running HP-UX
           (similar to System III Unix), and an IBM PC running MS-DOS.

The program may be written in any computer language, or implemented in
hardware if so desired.

Players who are unable to play in person can transmit their moves via
bitnet or telephone, or send their program to us.  We will appoint a
proxy if needed to run your program for you.

If you choose to send your program in, please ensure that it will run
on a different system without problems.  We will accept either
executable or source, but source has a much better chance of working.
It must be bug free, with complete implementation details.  The clearer
your documentation is, the better chance we have of successfully compiling
your program.  Although we will make every reasonable effort, the CSC
cannot guarantee your program's operation.  In the event that we cannot
run your program, we will refund your entry fee.

The games will be run under international Othello rules, with each
player allotted 30 minutes of playing time per match.  The competition
will be organized as a Swiss system tournament.

A trophy will be provided by the Computer Science Club to the top
finisher in the competition, and the winner's name will be inscribed
on the permanent tournament trophy.  There will also be an award for
the top undergraduate finisher from the University of Waterloo.

Entry fee for the competition will be $5.00 (Canadian) for Club members
and $7.00 (Canadian) for others.  An entry form follows in the next
article; if you are interested in competing, please fill it out and
return it.

Those who wish to submit source must have their entry in by Oct. 29, 1988
accompanied by the code.  The deadline for other entries is November 5,
1988.  If we have to run your executable, it must be in by this date as
well.

For complete rules or more information, please contact:

                Computer Science Club
                MC 3037
                University of Waterloo
                200 University Avenue West
                Waterloo, Ontario
                N2L 3G1

                (519) 885-1211 ext. 3870

                {uunet,clyde,utai}!watmath!csc

------------------------------

Date: 26 Sep 88 16:59:30 GMT
From: mcvax!inria!crcge1!david@uunet.uu.net  (Marc David)
Subject: 2nd Generation Expert Systems


AVIGNON 89
----------
Ninth International Workshop: Expert Systems & their Applications
Avignon - France, May 29 - June 2, 1989.

Specialized Conference on:


             SECOND  GENERATION  EXPERT  SYSTEMS
             ===================================


                     Call  for  Papers


Following the first session on  Second Generation  Expert Systems
organized during the 12th IMACS Congress (Paris, July 18-22, 88),
a second specialized conference is organized during Avignon'89.

Second Generation Expert Systems are able  to  combine  heuristic
reasoning  with deeper reasoning, based on a model of the problem
domain.  The conference will emphasize practical and  theoretical
issues  relating to the cooperation of these two kinds of reason-
ing.

TOPICS INCLUDE:
---------------

 - integration of different reasoning techniques;
 - architecture (preferably implemented) for combining  heuristic
reasoning and model-based reasoning;
 - cooperation of multiple expertise;
 - application of cooperative reasoning  to  real-world  problems
(e.g. diagnosis, control, planning, design);
 - the use of qualitative, causal  or  temporal  reasoning  tech-
niques to augment heuristic reasoning;
 - integration of qualitative and quantitative reasoning.

In addition to technical quality, papers  will  be  evaluated  by
their  potential  to  contribute to achieving the goals of Second
Generation Expert Systems.

SUBMISSION:
-----------

Submit 6 copies of full-length papers (no longer than 5000 words;
about 20 double-spaced pages) before December 12, 1988 to:
                                     -----------------
          Jean-Claude Rault - Avignon'89;
          EC2; 269-287 rue de la Garenne
          92000  Nanterre; France

          tel: 33 - 1 - 47.80.70.00
          fax: 33 - 1 - 47.80.66.29


PROGRAM COMMITTEE:
------------------

chairman: Jean-Marc  David
          Laboratoires de Marcoussis
          route de Nozay
          91460  Marcoussis; France
          tel: 33 - 1 - 64.49.14.89
          fax: 33 - 1 - 64.49.06.94

 Alice  Agogino   (University of California at Berkeley; USA);
 Bert  Bredeweg   (University of Amsterdam; The Netherlands);
 B.  Chandrasekaran   (Ohio State University; USA);
 Marie-Odile  Cordier   (Universite de Rennes; France);
 Jean-Luc  Dormoy   (Etudes et Recherches EDF; France);
 Jean-Paul  Krivine   (Sedco Forex Schlumberger; France);
 Benjamin  Kuipers   (University of Texas at Austin; USA);
 Robert  Milne   (Intelligent Applications; UK);
 Richard  Pelavin   (Philips Laboratories; USA);
 Olivier  Raiman   (Centre Scientifique IBM; France);
 Reid  Simmons   (Carnegie-Mellon University; USA);
 Luc  Steels   (Vrije Universiteit Brussel; Belgium);
 Jon  Sticklen   (Michigan State University; USA);
 Pietro  Torasso   (Universita di Torino; Italy);
 Louise  Trave   (LAAS-CNRS; France).

------------------------------

Date: 27 Sep 88 23:15:34 GMT
From: killer!pollux!ti-csl!home!sullivan@eddie.mit.edu  (Mike
      Sullivan)
Subject: 1989 Summer Computer Simulation Conference

------------------------------------------------------
                Call For Papers

       Summer Computer Simulation Conference
                Austin, Texas
              July 24-27, 1989


The 1989 Summer Computer Simulation Conference to be held in
Austin, Texas, July 24-27 is looking for abstracts in the area of
Knowledge Based Systems and Simulation.  Topics we are looking
for include the areas of:
        o Knowledge Based Simulation Theory
        o Intelligent Simulation Systems
        o Knowledge Based Simulation Tools
        o Knowledge Based Systems using Simulation
        o Knowledge Representation for Simulation
        o Intelligent Simulation Control Architectures
        o Applications of Simulation Techniques to Knowledge
          Based Systems
        o Interactions Between Conventional Simulations and
          Knowledge Based Systems

Please send your one page abstract to:
     Society for Computer Simulation
     P.O. Box 17900
     4838 Ronson Court, Suite 'L'
     San Diego, CA 92117-7900
     ATTN: Group XIII.

Please include your name, organization, address and netaddress
(if available).  Deadline for abstracts is November 1, 1988.

------------------------------

Date: Thu, 29 Sep 88 23:18:58 EDT
From: weidlich@ludwig.scc.com (Bob Weidlich)
Subject: Washington Neural Network Society Meeting

            The Washington Neural Network Society

                    First General Meeting
                  October 12, 1988  7:00 PM

                   Speaker:  Fred Weingard
                 Booz, Allen & Hamilton, Inc.
                     Arlington, Virginia.


          Neural Networks: Overview and Applications


Neural networks and neurocomputing provide a novel and promis-
ing  alternative  to conventional computing and artificial in-
telligence.  Conventional computing is  characterized  by  the
use  of algorithms to solve well-understood problems.  Artifi-
cial intelligence approaches are  generally  characterized  by
the  use  of  heuristics  to  obtain good, but not necessarily
best, solutions to problems whose solution steps  are  not  so
well-understood.   In both approaches, knowledge representations
or data structures to solve the problem must be worked out in
advance  and  a problem domain expert is essential.  These ap-
proaches result in systems that are brittle to unexpected  in-
puts, cannot adapt to a changing environment, and cannot easi-
ly take advantage of parallel hardware architectures.   Neural
network  systems, in contrast, can learn to solve a problem by
exposure  to  examples,  are  naturally  parallel,   and   are
``robust"  to novelty.  In this talk Fred Weingard will give a
general overview of neural networks that covers  many  of  the
most promising neural network models, and discuss the applica-
tion of such models to three difficult real-world problems  --
radar  signal  processing,  optimal decisionmaking, and speech
recognition.
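
A minimal sketch, in Python, of the "learning by exposure to examples" idea
(this is an illustration only, not taken from the talk or from any particular
neural network product; the function and data names are invented):

    # Single perceptron learning a decision rule purely from examples.
    # Illustrative sketch; real neural network tools are far more elaborate.

    def train_perceptron(examples, epochs=20, lr=0.1):
        """examples: list of (inputs, target) pairs with target in {0, 1}."""
        n = len(examples[0][0])
        w = [0.0] * n                      # weights
        b = 0.0                            # bias
        for _ in range(epochs):
            for x, target in examples:
                s = sum(wi * xi for wi, xi in zip(w, x)) + b
                out = 1 if s > 0 else 0
                err = target - out
                # Nudge the weights toward the examples it got wrong.
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # Learn logical AND from its four labelled examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train_perceptron(data))

No knowledge representation is worked out in advance; the weights are shaped
entirely by the examples.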

Fred Weingard heads the Neural Network Design and Applications
Group  at  Booz, Allen & Hamilton.  Prior to joining Booz, Al-
len, Mr. Weingard was a senior intelligence analyst at the De-
fense Intelligence Agency.  He has degrees in engineering from
Cornell University and is completing his doctorate in computer
science / artificial intelligence at George Washington Univer-
sity.

The meeting will be held in the Contel Plaza Building  Audito-
rium  at  Contel  Federal Systems in Fairfax, Virginia, at the
southwest edge of the Fair Oaks  mall.   Directions  from  495
Beltway:  Take Route 66 Westbound (toward Front Royal) and get
off at route 50 heading west (Exit 15 Dulles/Winchester).   Go
1/4  mile on route 50, follow sign to "shopping center".  Stay
in right lane and merge into service road that  circles  shop-
ping center.  Take driveway from service road to Contel build-
ing.  Address is 12015 Lee Jackson Highway.   Contel  building
is  across  shopping  parking  lot  from Lord and Taylor, near
Sears.  For further information call Billie Stelzner at  (703)
359-7685.   Host  for  the meeting is the recently-established
Contel Technology Center.  Dr. Alan Salisbury, Director of the
Technology  Center,  will  present a brief introduction to the
plans for research and application of technology at the Contel
laboratory,  including  work  in  artificial  intelligence and
man-machine interface design.

               Schedule:
               7:00 - 7:15 Welcoming (Alan Salisbury)
               7:15 - 8:15 Speaker (Fred Weingard)
               8:15 - 8:30 Report on Neural Network Society (Craig Will)
               8:30 - 9:30 Reception, informal discussion

------------------------------

Date: Wed, 5 Oct 88 11:31:53 EDT
From: John Grefenstette <gref@aic.nrl.navy.mil>
Subject: 3rd Intl. Conference on Genetic Algorithms


                              Call for Papers

         The Third International Conference on Genetic Algorithms
                                 (ICGA-89)


     The Third International Conference on Genetic Algorithms (ICGA-
     89), will be held on June 4-7, 1989 at George Mason University
     near Washington, D.C.  Authors are invited to submit papers on
     all aspects of Genetic Algorithms, including: foundations of
     genetic algorithms, search, optimization, machine learning using
     genetic algorithms, classifier systems, apportionment of credit
     algorithms, relationships to other search and learning paradigms.
     Papers discussing specific applications (e.g., OR, engineering,
     science, etc.) are encouraged.


     Important Dates:

             10 Feb 89:      Submissions must be received by program chair
             10 Mar 89:      Notification of acceptance or rejection
             10 Apr 89:      Camera ready revised versions due
             4-7 Jun 89:     Conference Dates


     Authors are requested to send four copies (hard copy only) of a
     full paper by February 10, 1989 to the program chair:


                            Dr. J. David Schaffer
                            Philips Laboratories
                            345 Scarborough Road
                            Briarcliff Manor, NY 10510
                            ds1@philabs.philips.com
                            (914) 945-6168


     Conference Committee:

     Conference Chair:       Kenneth A. De Jong, George Mason University
     Local Arrangements:     Lashon B. Booker, Naval Research Lab
     Program Chair:          J. David Schaffer, Philips Laboratories
     Program Committee:      Lashon B. Booker
                             Lawrence Davis, Bolt, Beranek and Newman, Inc.
                             Kenneth A. De Jong
                             David E. Goldberg, University of Alabama
                             John J. Grefenstette, Naval Research Lab
                             John H. Holland, University of Michigan
                             George G. Robertson, Xerox PARC
                             J. David Schaffer
                             Stephen F. Smith, Carnegie-Mellon University
                             Stewart W. Wilson, Rowland Institute for Science

------------------------------

Date: Thu, 6 Oct 88 14:34:20 edt
From: wilsonjb%avlab.dnet@wpafb-avlab.arpa (Jim Wilson, AFWAL/AAI,
      55800)
Subject: AAAIC '88

Aerospace Applications of Artificial Intelligence (AAAIC) '88

                    Special Emphasis
                           On
              Neural Network Applications



LOCATION:       Stouffer Dayton Plaza Hotel
                Dayton, OH

DATES:          Monday, 24 Oct - Friday, 28 Oct 88


PLENARY SESSION         Tuesday Morning

        Lt General John M. Loh,
          Commander, USAF Aeronautical Systems Division

        Dr. Stephen Grossberg,
          President, Association of Neural Networks


TECHNICAL SESSIONS      Tuesday - Thursday  (in parallel)

        I.  Neural Network Aerospace Applications
                Integrating Neural Networks and Expert Systems
                Neural Networks and Signal Processing
                Neural Networks and Man-Machine Interface Issues
                Parallel Processing and Neural Networks
                Optical Neural Networks
                Back Propagation with Momentum, Shared Weights and Recurrence
                Cybernetics

        II. AI Aerospace Applications
                Developmental Tools and Operational and Maintenance Issues
                  Using Expert Systems
                Real Time Expert Systems
                Automatic Target Recognition
                Data Fusion/Sensor Fusion
                Combinatorial Optimization for Scheduling and Resource Control
                Machine Learning, Cognition, and Avionics Applications
                Advanced Problem Solving Techniques
                Cooperative and Competitive Network Dynamics in Aerospace

Tutorials

        I.   Introduction to Neural Nets                Mon 8:30 - 11:30
        II.  Natural Language Processing                    8:30 - 11:30
        III. Conditioned Response in Neural Nets            1:30 -  4:30
        IV.  Verification and Validation of Knowledge       1:30 -  4:30
                Based Systems

Workshops

        I.   Robotics, Vision, and Speech               Fri 8:30 - 11:30
        II.  AI and Human Engineering Issues                8:30 - 11:30
        III. Synthesis of Intelligence                      1:30 -  4:30
        IV.  A Futurist's View of AI                        1:30 -  4:30


REGISTRATION INFORMATION
                                        (after 30 Sept)

        Conference                      $225
        Individual Tech Session (ea)    $ 50
        Tutorials (ea)                  $ 50
        Workshops (ea)                  $ 50


Conference Registration includes:       Plenary Session
                                        Tuesday Luncheon
                                        Wednesday Banquet
                                        All Technical Sessions
                                        Proceedings

Tutorials and Workshops are extra.

For more information, contact:

        AAAIC '88
        Dayton SIGART
        P.O. Box 31434
        Dayton, OH 45431

        Darrel Vidrine
        (513) 255-2446

Hotel information:

        Stouffer Dayton Plaza Hotel
        (513) 224-0800

        Rates:          Govt            Non-Govt

                Single  $55             $75

                Double  $60             $80

------------------------------

Date: Thu, 6 Oct 88 14:58:46 EDT
From: donald berman <berman@corwin.ccs.northeastern.edu>
Subject: Annual Survey of AI Applications to Law and Taxation


       I am editing THE ANNUAL SURVEY OF ARTIFICIAL INTELLIGENCE AND LAW which
     will cover automated practice systems, expert systems, conceptual
     retrieval from legal data bases, computer assisted education, hypertext,
     and decision analysis.

       I invite researchers and developers to submit short articles; users to
     submit product reviews; and developers to submit information about their
     AI products for listing in a comprehensive directory. For more
     information you may either reply to this electronic message or contact

       Professor Donald H. Berman
       Center for Law & Computer Science
       Northeastern University
       400 Huntington Ave.
       Boston, MA 02115           tel. (617) 437-3346

       Berman@corwin.ccs.northeastern.edu

------------------------------

Date: 6 Oct 88 19:41:28 GMT
From: THUNDER.BOLTZ.CS.CMU.EDU!spiro@pt.cs.cmu.edu  (Spiro Michaylov)
Subject: CLP mailing list


The CLP mailing list has been created and is going strong. If you have asked
to be put on it and have not received any messages it is probably because I
haven't been able to get your e-mail address working. If this is the case,
please mail clp-request@cs.cmu.edu with lots of alternative e-mail addresses
for me to try.

In particular, the following addresses are causing problems

...!utacs.uta.fi!ph (user doesn't exist)
...!aida!em (host doesn't exist)
clp%inf21@ztivax.siemens.com (host has gone away?)
----------------------------

Spiro Michaylov
Carnegie Mellon Computer Science.

------------------------------

Date: 7 Oct 88 19:42:39 GMT
From: busalacc@umn-cs.arpa  (Perry J. Busalacchi)
Subject: conceptual structures news group


               Conceptual Structures News Group
               --------------------------------

As was discussed at the annual conceptual graphs workshop,
a new news group is being formed which will focus on discussions
pertaining to John Sowa's Conceptual Structure theory. This
group will be monitored by Perry Busalacchi (University
of Minnesota). If you are interested in subscribing send mail
to busalacc@umn-cs.cs.umn.edu. Received mail will be compiled
into a weekly newsletter and sent to all subscribing parties.

-perry

------------------------------

End of AIList Digest
********************

∂08-Oct-88  1446	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #96  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Oct 88  14:45:50 PDT
Date: Sat  8 Oct 1988 14:59-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #96
To: AIList@AI.AI.MIT.EDU


AIList Digest             Sunday, 9 Oct 1988       Volume 8 : Issue 96


 Spang Robinson Report

 Philosophy:

  Re: common sense "reasoning"
  Followup on JMC/Fishwick Diffeq
  Re: Newell's response to KL questions

----------------------------------------------------------------------

Date: Sun, 18 Sep 88 08:07:46 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: bm965

The Spang Robinson Report on Artificial Intelligence, August, 1988, Volume 4,
No.  8

Lead Article is on the "New AI Industry"

Revenue List (millions)

                                  1987    1989
Expert System Development Tools    139     278
Natural Language                    49      95
Symbolic Processing Languages       51     145
AI Services                        150     336
Symbolic Processors                170     161
General Workstations                81     277

Number of companies selling AI technology or applications:
  1986:   80
  1988: ~160

Discussions of Carnegie Group, Inference, IntelliCorp, Teknowledge, and Lucid
(Lisp)

Lucid revenues in 1988 were 1.4 million.
________________________________________
Neural Networks:

Discussion of various neural network products, which now have a 10,000-unit
installed base.  It took expert systems 30 months to reach 10,000 units,
compared to 13 months for neural networks.

MIT sent out 7,000 copies of the software in Explorations in
Parallel Distributed Processing.
NeuralWorks has published 1,000 copies of its NeuralWare tool, priced
from $195 to $2,995.

Neuronics has sold 500 units of MacBrain.

TRW has sold 40 units but is third in dollar volume.

________________________________________
Hypertext and AI.

CogentTEXT is a hypertext system embedded in Prolog.  Each hypertext
button causes execution of an appropriate segment of Prolog code.  The
system is a "shareware" product; it can be obtained from Cogent Software
for $35.00 (508 875 6553).
________________________________________
Third Millenium is a venture capital fund still interested in AI start-ups
(as well as neural networks).
________________________________________
Shorts:

IntelliCorp reports profit of $416,000 for its fourth quarter.

Lucid has a product called Distill!, which removes the development
environment from the runtime executable.  SUN renewed its on-going OEM
agreement.  Lucid has sold a total of 3,000 products, of which 2,000 went
to SUN.  CSK will be selling LUCID in Japan.

Neuron Data has integrated Neuron OBJECT with ORACLE, SYBASE and Ingres.
The interfaces cost $1000 each.

KDS has released a version of an expert system shell with Blackboard.

Logicware ported MPROLOG and TWAICE (expert system shell) to IRIS
systems.

Flavors Technology has introduced a system that can perform real-time
inference over 10,000 rules in 10 milliseconds.  A Japanese company has
ordered the product.

Inference has ported ART to IBM mainframes and PC (under MS-DOS).

The Spang Robinson Report has a two-page list of AI companies broken down
into each of the following fields: Expert System Tools, Expert System
Applications, Languages (e.g. PROLOG), natural language systems, and
hardware.

------------------------------

Date: 26 Sep 88 14:59:56 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: common sense "reasoning"


      Use of the term "common-sense reasoning" presupposes that common sense
has something to do with reasoning.  This may not be the case.  Many animals
exhibit what appears from the outside to be "common sense".  Even insects
seem to have rudiments of common sense.  Yet at this level reasoning seems
unlikely.

      The models of behavior expressed by Rod Brooks and his artificial
insects (there's a writeup on this in the current issue of Omni), and by
Hans Moravec in his new book "Mind Children", offer an alternative.  I
won't attempt to summarize that work here, but it bears looking at.

      I would encourage workers in the field to consider models of common
sense that don't depend heavily on logic.  There are alternative ways to
look at this class of problem.  Both Brooks and Moravec use approaches
that are spatial in nature, rather than propositional.  This seems to be
a good beginning for dealing with the real world.

      The energetic methods Witkin and Kass use in vision processing are
another kind of model which offers a spatial orientation, an internal
drive toward consistency, and the ability to deal with noisy data.  These
are promising beginnings for common-sense processing.
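
A minimal sketch, in Python, of what an "energetic" method looks like in the
small.  This is my own toy one-dimensional analogue, not Witkin and Kass's
actual formulation: smooth a noisy signal by descending an energy that trades
fidelity to the data against internal smoothness.

    # Toy "energetic" smoothing of noisy 1-D data (illustrative sketch only).
    # Energy = fit to the noisy observations + smoothness of the estimate.

    def smooth(data, alpha=1.0, beta=4.0, steps=200, lr=0.02):
        x = list(data)                      # initial estimate = raw data
        n = len(x)
        for _ in range(steps):
            grad = [0.0] * n
            for i in range(n):
                # d/dx_i of alpha * (x_i - data_i)^2  (fidelity term)
                grad[i] += 2 * alpha * (x[i] - data[i])
                # d/dx_i of beta * (x_i - x_j)^2 over neighbors (smoothness)
                if i > 0:
                    grad[i] += 2 * beta * (x[i] - x[i - 1])
                if i < n - 1:
                    grad[i] += 2 * beta * (x[i] - x[i + 1])
            x = [xi - lr * g for xi, g in zip(x, grad)]
        return x

    noisy = [0.0, 0.9, 0.1, 1.2, 0.2, 1.1, 0.0]
    print(smooth(noisy))

The internal drive toward consistency is the smoothness term; the noisy data
pull against it, and the minimum is a compromise between the two.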



                                        John Nagle

------------------------------

Date: 28 Sep 88  2:21 +0100
From: ceb%ethz.uucp@RELAY.CS.NET
Subject: Followup on JMC/Fishwick Diffeq

>From ceb Wed Sep 28 02:21:09 MET 1988 remote from ethz
>for Robots Interchange


Apropos using diffeqs or other mathematical models to imbue a robot
with the ability to reason about observation of continuous phenomena:
in John McCarthy's message <cdydW@SAIL.Stanford.EDU>, JMC states that
(essentially) diffeqs are not enough and must be embedded in
"something" larger, which he calls "common sense knowledge".  He also
states that diffeqs are inappropriate because "noone could acquire the
initial [boundary?] conditions and integrate them fast enough".

I would like to pursue this briefly, by asking the question:

  Just how much of this something-larger (JMC's framework of common
  sense knowledge) could be characterized as descriptions
  of domains in which such equations are in force, and in describing
  the interactions between neighboring domains?

I ask because I observe in my colleagues (and sometimes in myself)
that an undying fascination with the diffeq "as an art form" can lead
one to think about them `in vitro', i.e. isolated on paper, with all
those partial-signs standing so proud. You have to admit, the idea as
such gets great mileage: you have a symbolic representation of
something continuous, and we really don't have another good way of
doing this.  Notwithstanding, in order to use them, you've got to
describe a domain, the bc's, etc.

This bias towards setting diffeqs up on a stage may also stem from
practical grounds as well: in numerical-analysis work, even having
described the domain and bc's you're not home free yet - the equations
have to be discretized, which leads to huge, impossible-to-solve
matrices, etc.  There are many who spend the bulk of their working
lives trying to find discretizations which behave well for certain
ill-behaved but industrially important equations.  Such research is
done by trial-and-error, with verification through computer
simulation.  In such simulations, to try out new discretizations, the
same simple sample domains are used over and over again, in order to
try to get results which *numerically* agree with some previously
known answer or somebody elses method.  In short, you spend a lot of
time tinkering with the equation, and the domain gets  pushed to the
back of your mind.

In the case of the robot, two things are different:
1. No one really cares about the numerical accuracy of the results:
   something qualitative should be sufficient.
2. The modelled domains are *not* simple, and do not stay the same.
   There can also be quite a lot of them.

I would wager that, if the relative importance of modelling the domain
and modelling the intrinsic behavior that takes place within it were
turned around, and given that you could do a good enough job of
modelling such domains, then:
a. only a very small subset of not scientifically accurate but very
easy to integrate diffeqs would be needed to give good performance
(a minimal sketch of what I mean follows below),
b. in this case, integration in real time would be a possibility,
and,
c. something like this will be necessary.  I believe this supports the
position taken by Fishwick, as near as I understood it.
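
To make wager (a) concrete, here is a minimal sketch in Python (mine, with
made-up coefficients) of the sort of crude but easy-to-integrate model I have
in mind: a few forward-Euler steps of a point mass with a rough drag term,
good enough for a qualitative answer between sensor readings.

    # Crude real-time prediction by forward-Euler integration (sketch only).
    # A point mass under gravity with a rough drag term -- not physically
    # accurate, but cheap enough to integrate in real time.

    def predict_landing(x, y, vx, vy, g=9.8, drag=0.05, dt=0.01):
        while y > 0.0:
            ax = -drag * vx
            ay = -g - drag * vy
            x, y = x + vx * dt, y + vy * dt
            vx, vy = vx + ax * dt, vy + ay * dt
        return x        # approximate landing position

    # e.g. a ball leaving the hand at 5 m/s horizontally, 3 m/s upward,
    # from a height of 1 m:
    print(predict_landing(0.0, 1.0, 5.0, 3.0))

The hard part, on this view, is not the integration loop but describing the
domain in which it applies and noticing when the domain changes.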

One might wonder idly if the Navier-Stokes equation (even in laminar
form) would be among the small set of wager (a).  Somehow I doubt it,
but this is not really so important, and certainly need not be decided
in advance.  It may even be that you can get around using anything at
all close to differential equations.

What does seem important, though, is the need to be able to
geometrically describe domains at least qualitatively accurately, and
this `on the fly'.  I am not claiming this would cover all "common
sense knowledge", just a big part of it.

ceb

P. S. I would also be interested to know of anyone working on such
modelling --- this latter preferably by mail.

------------------------------

Date: 30 Sep 88 04:06:59 GMT
From: goel-a@tut.cis.ohio-state.edu (Ashok Goel)
Subject: Re: Newell's response to KL questions


I appreciate Professor Allen Newell's explanation of his scheme of
knowledge, symbolic, and device levels for describing the architecture
of intelligence. More recently, Prof. Newell has proposed a scheme
consisting of bands, specifically, the neural, cognitive, rational,
and social bands, for describing the architecture of the mind-brain.
Each band in this scheme can have several levels; for instance, the
cognitive band contains (among others) the deliberation and the
operation levels.  What is not clear (at least not to me) is the
relationship between the two schemes.  One possible relationship is
colinearity in that the device level corresponds to the neural band,
the symbolic level to the cognitive band, and the knowledge level to
the rational band. Another possibility is containment in the sense
that each band consists of (the equivalents of) knowledge,
symbolic, and device levels. Yet another possibility is
orthogonality of one kind or another. Which relationship (if any)
between the two schemes does Prof. Newell imply?

A commonality between Newell's two schemes is their emphasis on
structure.  A different scheme, David Marr's, focuses on the
processing and functional aspects of cognition. Again, what (if any)
is the relationship between Newell's levels/bands and Marr's levels?
Colinearity, containment, or some kind of orthogonality?

--ashok--

------------------------------

End of AIList Digest
********************

∂08-Oct-88  1700	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #97  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Oct 88  16:59:51 PDT
Date: Sat  8 Oct 1988 15:03-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #97
To: AIList@AI.AI.MIT.EDU


AIList Digest             Sunday, 9 Oct 1988       Volume 8 : Issue 97

 More on ... The Grand Challenge (5 messages)

----------------------------------------------------------------------

Date: 26 Sep 88  2110 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: re: The Grand Challenge is Foolish

[In reply to message sent Mon 26 Sep 1988 23:22-EDT.]

I shall have to read the article in Science to see if the Computer
Science and Technology Board has behaved as foolishly as it seems.
Computer science is science and AI is the part of computer science
concerned with achieving goals in certain kinds of complex
environments.  However, defining the goals of AI in terms of reading a
physics book is like defining the goal of plasma physics in terms of
making SDI work.  It confuses science with engineering.

If the Computer Science and Technology Board takes science seriously
then they have to get technical - or rather scientific.  They might
attempt to evaluate the progress in learning algorithms, higher
order unification or nonmonotonic reasoning.

If John Nagle thinks that "The lesson of the last five years seems to
be that throwing money at AI is not enormously productive.", he is
also confusing science with engineering.  It's like saying that the
lesson of the last five years of astronomy has been unproductive.
Progress in science is measured in longer periods than that.

------------------------------

Date: 27 Sep 88 15:29:56 GMT
From: leverich@rand-unix.arpa  (Brian Leverich)
Subject: Re: Grand Challenges

In article <17736@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>      The lesson of the last five years seems to be that throwing money at
>AI is not enormously productive.

Recent "big science" failures notwithstanding, the infusion of money into
AI may turn out to have been a more productive investment than we realize.

As a case in point, consider expert system technology.  It seems doubtful
that the technology is currently or soon will be capable of capturing
human "expertise" in more than a relative handful of freakishly
well-defined domains.

That doesn't mean the technology is useless, though.  Antiquated COBOL
programming replaced or substantially increased the productivity of
millions of clerks who used to do the arithmetic necessary to maintain
ledgers.  There still are millions of clerks out there who perform
evaluation activities that can be very well defined but are too complex to
cost-effectively program, debug, maintain, and document in COBOL.  A safe
bet is that over the next decade what shells _really_ do is allow the
business data processing community to automate a whole class of clerical
activities they haven't been able to handle in the past.  Unglamorous as
it seems, that single class of applications will really (no hype) save
industry billions of dollars.

Rather than looking at how well research is satisfying its own goals, when
talking about the productivity of research it may make more sense to take
a hard-headed "engineering" perspective and ask what can be built after
the research that couldn't be built before.
--
  "Simulate it in ROSS"
  Brian Leverich                       | U.S. Snail: 1700 Main St.
  ARPAnet:     leverich@rand-unix      |             Santa Monica, CA 90406
  UUCP/usenet: decvax!randvax!leverich | Ma Bell:    (213) 393-0411 X7769

------------------------------

Date: 30 Sep 88 07:53:30 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: Grand Challenges:  Expert System Shells replace COBOL

In article <1717@randvax.UUCP> leverich@rand-unix.UUCP (Brian Leverich) writes:
>A bet is that over the next decade what shells _really_ do is allow the
>business data processing community to automate a whole class of clerical
>activities they haven't been able to handle in the past.  Unglamorous as
>it seems, that single class of applications will really (no hype) save
>industry billions of dollars.

At last, someone in comp.ai lets it slip what ES shells are really
being used for (not a revelation to anyone who follows IKBS usage though).

Surveys in the UK (d'Agapeyeff, Ince) show that shells are being used
to write small (200 rule) systems that do traditional DP processing
which probably is beyond realistic COBOL programming.  Furthermore
(Ince) they are being programmed by casual computer users with no
programming background.

Someone asked for the 3 achievements of AI and no one answered.  I
intended to post my 3 to the net, but got diverted by some metaphysics.

I vote ES shells the achievement of the decade for:

           avoiding CS snobbery and turning out restricted natural
           language end-user programming languages which the untrained
           user will pick up and write applications in.  Shells may be
           the first step in bringing some form of programming to the
           masses (but remember that adventure games got there first
           with restricted natural language).

Note that the big shells (Art, KEE etc) fail the test as they replace
CS snobbery with IKBS snobbery.  The shells in real use tend to be the
PC based ones.

Note that the human-computer system here is quite powerful, far more
powerful than the no-human system aimed at by the AI zealots.  If
more people in AI understood the classic human factors task allocation
problem, they would be more likely to turn out technologies which do
help people to use computers, rather than abortive technologies
which try to help computers to abuse people.  Thank god this fails.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: 2 Oct 88 16:30:57 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Grand Challenges:  Expert System Shells replace COBOL


      This is true.  What expert systems have, in practice, turned out to
be is simply another form of special-purpose programming system for the
development of a specific class of applications.  Spreadsheets were the
first such form to achieve truly widespread use.  One could argue about old
report-writer systems such as RPG, but such systems were and are generally used
by data processing staff.  Spreadsheets are more often set up by people
who themselves want to analyze the numbers.  Programmable database programs
for end users, such as Dbase and its successors, followed.  Now we have
Apple's Hypercard, again, a simplified programming system for end users.

      Expert systems shells are tools of the same class.  They provide
a system in which programs for a limited class of problems can be neatly
expressed.  As such, they are useful, but not a profound breakthrough.

      About five years ago, I made the statement that when all is said and done,
expert systems will be more important than syntax-directed parsing but
less important than relational databases.  In retrospect, this seems a
valid assessment.

      If this whole technology had simply been called "rule-based programming",
the same results probably would have been obtained, with much less controversy.
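
For readers who have not used a shell, a minimal sketch in Python (the rules,
facts, and function name are invented for illustration; no particular shell is
implied) of what "rule-based programming" amounts to, stripped to its core:

    # Toy forward-chaining rule engine (illustrative sketch of the idea only).
    # A "rule" is (premises, conclusion); facts are plain strings.

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in facts and all(p in facts for p in premises):
                    facts.add(conclusion)       # fire the rule
                    changed = True
        return facts

    rules = [
        (["invoice unpaid", "over 90 days"], "send final notice"),
        (["send final notice", "no response"], "refer to collections"),
    ]
    print(forward_chain(["invoice unpaid", "over 90 days", "no response"], rules))

The value, as with spreadsheets, lies in letting the person who understands
the rules state them directly, rather than in any deep new capability.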


                                        John Nagle

------------------------------

Date: 2 Oct 88 16:58:31 GMT
From: leverich@rand-unix.arpa  (Brian Leverich)
Subject: Re: Grand Challenges:  Expert System Shells replace COBOL

In article <1680@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>
>I vote ES shells the achievement of the decade for:
>
>          avoiding CS snobbery and turning out restricted natural
>          language end-user programming languages which the untrained
>          user will pick up and write applications in.  Shells may be
>          the first step in bringing some form of programming to the
>          masses (but remember that adventure games got there first
>          with restricted natural language).
>

Yup.  Now I have a nomination for the nth (probably not second or third,
but up there...) most significant _real_ contribution of AI, again in the
vein of providing new programming tools: knowledge-based simulation
languages.

Large simulations have traditionally been exceedingly costly to design,
debug, and extend, largely because the Fortran or even Simscript code
of the models isn't the least bit isomorphic with the physical system being
modeled.  Modeling trucks moving brainlessly around on a road network was
hard; modeling a multi-mode transportation system where management was
using heuristics to pursue cost-minimization and other goals was essentially
impossible.

Enter the object-oriented message-passing paradigm.  All of a sudden
individual trucks become "trucks" in the model (rather than rows in a
matrix), managers become "managers", and "managers" and "trucks" interact
by exchanging English-like messages rather than by changing entries in
some arbitrary set of matrices.  Design, debugging, and extension are much
easier.  No hype - I've used ROSS (RAND's KBSim tool) to build some 4000+
lines of code simulations.

A good bet is that this object-oriented message-passing stuff is going to
have a considerable impact upon the simulation community.
--
  "Simulate it in ROSS"
  Brian Leverich                       | U.S. Snail: 1700 Main St.
  ARPAnet:     leverich@rand-unix      |             Santa Monica, CA 90406
  UUCP/usenet: decvax!randvax!leverich | Ma Bell:    (213) 393-0411 X7769

------------------------------

End of AIList Digest
********************

∂08-Oct-88  1915	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #98  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Oct 88  19:15:35 PDT
Date: Sat  8 Oct 1988 15:06-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #98
To: AIList@AI.AI.MIT.EDU


AIList Digest             Sunday, 9 Oct 1988       Volume 8 : Issue 98

 Machine Consciousness (6 messages)

----------------------------------------------------------------------

Date: 5 Oct 88 17:47:57 GMT
From: mailrus!uflorida!usfvax2!mician@ames.arpa  (Rudy Mician)
Subject: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???


I have a question that I know has been addressed in the past (and undoubtedly
continues to be addressed):

When can a machine be considered a conscious entity?

For instance, if a massive neural-net were to start from a stochastic state
and learn to interact with its environment in the same way that people do
(interact, not think), how could one tell that such a machine thinks or exists
(in the sense of Descartes's "COGITO ERGO SUM"/"DUBITO ERGO SUM"
argument)?  That is, how could one tell whether or not an "I" exists for the
machine?

Furthermore, would such a machine have to be "creative"?  And if so, how would
we measure the machine's creativity?

I suspect that the Turing Test is no longer an adequate means of judging
whether or not a machine is intelligent.


If anyone has any ideas, comments, or insights into the above questions or any
questions that might be raised by them, please don't hesitate to reply.

Thanks for any help,

     Rudy


--

Rudy Mician     mician@usfvax2.usf.edu
Usenet:         ...!{ihnp4, cbatt}!codas!usfvax2!mician

------------------------------

Date: 5 Oct 88 20:24:56 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

In article <1141@usfvax2.EDU> mician@usfvax2.usf.edu.UUCP,
(Rudy Mician) asks:

>When can a machine be considered a conscious entity?

Consciousness is not a binary phenomenon.  There are degrees of
consciousness.  So the transition from non-conscious to conscious
is a fuzzy, gradual transition.

A normal person who is asleep is usually regarded as unconscious,
as is a person in a coma.  An alert Dalmatian may be considered
conscious.

It might be more instructive to catalog the stages that lead to
higher levels of consciousness.  I like to start with sentience,
which I define as the ability of a system to sense its environment
and to construct an internal map, model, or representation of that
environment.  Awareness may then be defined as the ability of a
sentient system to monitor an evolving state of affairs.

Self-awareness may, in turn, be defined as the capacity of a sentient
system to monitor itself.

As an aware being expands its powers of observation, it achieves
progressively higher degrees of consciousness.

Julian Jaynes has suggested that the bicameral mind gives rise to
human consciousness.  By linking two semi-autonomous hemispheres
through the corpus callosum, it is possible for one hemisphere
to act as observer and coach for the other.  In other words,
consciousness requires a feedback loop.

Group consciousness arises when independent individuals engage in
mutual mirroring and monitoring.  From Narcissus to Lewis Carroll,
the looking glass has served as the metaphor for consciousness raising.

--Barry Kort

------------------------------

Date: 6 Oct 88 12:06:26 GMT
From: uhccux!lee@humu.nosc.mil  (Greg Lee)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

From article <1141@usfvax2.EDU>, by mician@usfvax2.EDU (Rudy Mician):
" ...
" When can a machine be considered a conscious entity?

Always.  It's a matter of respect and empathy on your part.  All
the machines I use are conscious.

Or never, maybe, if you take 'conscious' seriously enough to
entertain the possibility that you yourself are not conscious
except sporadically.  Whatever one may think of his overall thesis,
Julian Jaynes (The Origin of Consciousness in the Breakdown of the Bicameral Mind)
is very persuasive when he argues that consciousness is not
required for use of human language or every-day human activities.

                Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: 6 Oct 88 15:06:55 GMT
From: tank!uxc!ksuvax1!cseg!cdc@oddjob.uchicago.edu  (C. David
      Covington)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

In article <1141@usfvax2.EDU>, mician@usfvax2.EDU (Rudy Mician) writes:
>
> I have a question that I know has been addressed in the past (and undoubtedly
> continues to be addressed):
>
> When can a machine be considered a conscious entity?
>
 . . .
>
> I suspect that the Turing Test is no longer an adequate means of judging
> whether or not a machine is intelligent.
>

     Regarding intelligent machines, to the naive it's totally magic, to the
wizard it's clever programming and a masterful illusion at best.  To ascribe
consciousness to a machine is a personal matter.  If I cannot tell the
difference between a 'conscious' human and a skillful emulation of the same,
then I am perfectly justified in *modeling* the machine as human.  It's not
so much a question of what *is* as a question of what *appears* to be.

     The same machine might be rightfully deemed conscious by one but not
by another.  I must expose my world view as predominantly Christian at this
point.  My belief in a Supreme Being places my view of man above all other
animals and therefore above any emulation of man by machine.  I say this not
so much to convert the masses to my point of view but to clarify that there
are people that think this way and this allows no place for conscious
machines.

     So to readdress the original question, the Turing test is certainly
still valid from my understanding that it is a matter of how accurately
you can mimic human behaviors.  Between the lines you are making the
assumption that man and machine are the same in essence.  To this I object
by faith.  The question cannot be properly addressed without first dealing
with world views on man.

                                                David Covington
                                                Assistant Professor
                                                Electrical Engineering
                                                University of Arkansas
                                                (501)575-6583

------------------------------

Date: 6 Oct 88 23:58:51 GMT
From: esosun!jackson@seismo.css.gov  (Jerry Jackson)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???


In article <40680@linus.UUCP> bwk@mitre-bedford.ARPA (Barry W. Kort) writes:

   In article <1141@usfvax2.EDU> mician@usfvax2.usf.edu.UUCP,
   (Rudy Mician) asks:

> >When can a machine be considered a conscious entity?

 >  A normal person who is asleep is usually regarded as unconscious,
 > as is a person in a coma.  An alert Dalmation may be considered
 >  conscious.


A person who is in a coma is unconscious because he is incapable of
experiencing the outside world.  Consciousness is a *subjective*
phenomenon.  It is truly not even possible to determine if your
neighbor is conscious.  If a person felt no pain and experienced no
colors, sounds, thoughts, emotions, or tactile sensations he could be
considered unconscious.  Note that we would be unable to determine
this.  He could behave in exactly the same way while being completely
inert/dead inside.  Machines that are obviously unconscious, such as
feedback-controlled battleship guns and thermostats, respond to their
environments, but I would hardly call them conscious.  It is hard to
imagine what one would have to do to make a computer conscious, but it
does seem that it would involve more than adding a few rules.


--Jerry Jackson

------------------------------

Date: 7 Oct 88 20:09:54 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (mail)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???


Adding to Barry Kort's......

>Consciousness is not a binary phenomenon.  There are degrees of
>consciousness.  So the transition from non-conscious to conscious
>is a fuzzy, gradual transition.

When a person is awake and responds in a predictable manner, he is said
to be conscious.

>Awareness may then be defined as the ability of a
>sentient system to monitor an evolving state of affairs.

When a person is known to know some given thing, he is said to be aware.

>Self-awareness may, in turn, be defined as the capacity of a sentient
>system to monitor itself.

When a person can label his own behavior in ways that are consistent with
the labels of those who observe him, he is said to be self-aware.


Walter Rolandi
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

------------------------------

End of AIList Digest
********************

∂10-Oct-88  1255	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #99  
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 10 Oct 88  12:55:27 PDT
Date: Mon 10 Oct 1988 15:38-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #99
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 11 Oct 1988      Volume 8 : Issue 99

 Mathematics and Logic - The Ignorant assumption (6 messages)

----------------------------------------------------------------------

Date: 16 Sep 88 01:58:53 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: The Ignorant assumption

>" ... We all know (I hope) formal systems are either
>" incomplete or inconsistent.
>
>I don't know that.  Can you show this for predicate logic?

Either a system is too simple (like propositional calculus) to
do number theory (which is equivalent to everything else) or it's
powerful enough that Godel's theorem comes into play: any
system powerful enough for number theory is either incomplete or
omega-inconsistent.

Simple systems like propositional calculus are complete within their domain,
but their domain is incomplete with respect to number theory and all
other formal systems.

(Predicate calculus includes quantifiers; propositional calculus does not.)
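
For reference, the result being appealed to, in its standard textbook form
(this statement is not part of the original posting):

    % Godel's first incompleteness theorem, standard statement (reference only).
    \textbf{Theorem (G\"odel I).}  Let $T$ be a consistent, recursively
    axiomatizable formal theory that interprets elementary arithmetic.
    Then there is a sentence $G_T$ of the language of $T$ such that
    \[
        T \nvdash G_T
        \qquad\text{and}\qquad
        T \nvdash \neg G_T ,
    \]
    so $T$ is incomplete.  (G\"odel's original proof assumed
    $\omega$-consistency; Rosser's refinement weakens this to plain
    consistency.)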

------------------------------

Date: 26 Sep 88 23:09:07 GMT
From: grad3!nlt@cs.duke.edu  (Nancy L. Tinkham)
Subject: Re: The Ignorant assumption


     Robert Firth offers the following proposed refutation of the Church-Turing
thesis:

> The conjecture is almost instantly disprovable: no Turing
> machine can output a true random number, but a physical system can.  Since
> a function is surely "computable" if a physical system can be constructed
> that computes it, the existence of true random-number generators directly
> disproves the Church-Turing conjecture.


     The claim of the Church-Turing thesis is that the class of functions
computable by a Turing machine corresponds exactly to the class of functions
which can be computed by some algorithm.  The notion of an algorithm is a
somewhat informal one, but it includes the requirement that the computation be
"carried forward deterministically, without resort to random methods or
devices, e.g., dice" (Rogers, _Theory of Recursive Functions and Effective
Computability_, p.2).  If it is demonstrated that a physical system, by using
randomness, can generate the input-output pairs of a function which cannot be
computed by a Turing machine, we have merely shown that there exists a
non-Turing-computable function whose output can be generated by non-algorithmic
means -- hardly surprising, and not relevant to the Church-Turing thesis.

                                            Nancy Tinkham
                                            {decvax,rutgers}!mcnc!duke!nlt
                                            nlt@cs.duke.edu

------------------------------

Date: 27 Sep 88 15:07:41 GMT
From: firth@sei.cmu.edu  (Robert Firth)
Subject: Re: The Ignorant assumption

In a previous article, Nancy L. Tinkham writes:

>     The claim of the Church-Turing thesis is that the class of functions
>computable by a Turing machine corresponds exactly to the class of functions
>which can be computed by some algorithm.

No it isn't.  The claim is that every function "which would naturally
be regarded as computable" can be computed by a Turing machine.  At
least, that's what Turing claimed, and he should know.

[A. M. Turing, Proc. London Math. Soc., ser. 2, vol. 42, p. 230]

------------------------------

Date: 28 Sep 88 05:45:56 GMT
From: bbn.com!aboulang@bbn.com  (Albert Boulanger)
Subject: Re: The Ignorant assumption


In <13763@mimsy.UUCP> Darren F. Provine writes

  You see,
          ``every function "which would naturally be regarded as computable"''
  and
          ``the class of functions which can be computed by some algorithm''

  are pretty much the same thing.  Do you have some way of computing a
  function without an algorithm that nobody else in the entire world knows
  about?

Yup, Quantum Computers! (Half Serious :-))

Let me quote the abstract of the following paper: "Quantum Theory, the
Church-Turing Principle and the Universal Quantum Computer", D.
Deutsch, Proc. R. Soc. Lond. A400 97-117 (1985)

"It is argued that underlying the Church-Turing hypothesis there is an
implicit physical assertion. Here, this assertion is presented explicitly as a
physical principle: 'every finitely realizable physical system can be
perfectly simulated by a universal model computing machine operating by finite
means'. Classical physics and the universal Turing machine, because the former
is continuous and the latter discrete, do not obey the principle, at least in
the strong form above. A class of model computing machines that is the quantum
generalization of the class of Turing machines is described, and it is shown
that quantum theory and the 'universal quantum computer' are compatible with the
principle. Computing machines resembling the universal quantum computer could,
in principle, be built and would have remarkable properties not reproducible
by any Turing machine. These do not include the computation of non-recursive
Functions, but they do include 'quantum parallelism', a method by which
certain probabilistic tasks can be performed faster by a universal quantum
computer than any classical restriction of it. The intuitive explanation of
these properties places an intolerable strain on all interpretations of
quantum theory other than Everett's. (Multiple-Worlds interpretation - ed)
Some of the numerous connections between quantum theory of computation and the
rest of physics are explored. Quantum complexity theory allows a physically
more reasonable definition of the 'complexity' or 'knowledge' in a physical
system than does classical complexity theory."

For a one-page description of this paper, see John Maddox's News and Views
"Towards the Quantum Computer?", Nature Vol 316, 15 August 1985, 573.
For a perspective and a readable account of why Deutsch reasons that universal
quantum computers support the Many-Worlds interpretation of Quantum Mechanics,
see the chapter on Deutsch in the book, "The Ghost in the Atom", P.C.W.
Davies, & J.R. Brown eds, Cambridge University Press, 1986 (Chapter 6).

I should also mention that thinking along these lines has led others to
investigate the ultimate randomness in quantum mechanics. See "Randomness in
Quantum Mechanics - Nature's Ultimate Cryptogram?", T. Erber & S. Putterman,
Nature, Vol 317, 7 Nov. 1985, 41-43. Since this report, they have actually
analyzed NBS ion-trap data, and so far QM checks out to be really
random.

Now to some, this stuff about quantum computers may sound MAX flaky, but
consider the fact that intuitive people like Feynman wrote papers on the topic.

A key new theory that helps put the question of randomness and the question of
determinism (to some extent) into perspective is algorithmic complexity
theory. In this theory, one can assign a measure of randomness to a number
string, by using as a metric the shortest algorithm that could produce that
string, by using as a metric the shortest algorithm that could produce that
string. If one considers the decimal expansions of the reals, then "most" of
the number line is dominated by numbers with infinite algorithmic
complexity.  Furthermore, these numbers are inaccessible in any way to
"classical" Turing machines in finite time or space.

By the way, Erber & Putterman point out in their paper that "the axiomatic
development (of QM) is deliberately silent concerning any requirements that the
measurable functions be non-determinate or that the elements of probability
space correspond to inherently unpredictable or erratic events."

The way I think of nondeterminism is operational. For example, if I were
given an infinite-complexity number like Chaitin's omega and an
infinite-resource universal computer, I could use it as the seed of a random
number generator (i.e. a chaotic system) and generate truly non-repeating
random numbers. But since the initial seed requires infinite resources, I
could never realize it on a 'classical' computer. The important question is
whether nature has access to such numbers.
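
The point can be illustrated with a hedged sketch: a chaotic map (here the
standard logistic map, an arbitrary choice) driven by a finite-precision seed
is still a perfectly deterministic, Turing-computable process; only an
infinite-precision, infinite-complexity seed would escape that.

    # Sketch: a chaotic system seeded with a *finite* number is still
    # deterministic.  The logistic map x -> r*x*(1-x) with r = 4.0 is chaotic,
    # yet the same 64-bit floating-point seed reproduces the same bit sequence.
    def logistic_bits(seed, n, r=4.0):
        x, bits = seed, []
        for _ in range(n):
            x = r * x * (1.0 - x)
            bits.append(1 if x > 0.5 else 0)
        return bits

    a = logistic_bits(0.123456789, 20)
    b = logistic_bits(0.123456789, 20)
    print(a == b)   # True: finite seed, fully reproducible, not "truly" random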




Albert Boulanger
aboulanger@bbn.com
BBN Systems & Technologies, Inc.

------------------------------

Date: 29 Sep 88 14:51:41 GMT
From: firth@sei.cmu.edu  (Robert Firth)
Subject: Re: The Ignorant assumption

Somehow, I get the feeling that our machines are better at forward
chaining than we are.  Please let me run this Turing machine stuff
by you once again.  (Translation: this post says nothing new, merely
recapitulates.)
----

The question that originally prompted me to speak was this one

[ <388@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe)]
>But is there any reason to suppose that the universe _is_ a Turing machine?

As I understood it, the question referred to the physical world, as
imperfectly revealed to us by science, and so I replied

[ <7059@aw.sei.cmu.edu> firth@bd.sei.cmu.edu (Robert Firth) ]
>None whatever.  The conjecture is almost instantly disprovable: no Turing
>machine can output a true random number, but a physical system can.

To elaborate:  I can build a box, whose main constituents are a supply
of photons and a half-silvered mirror, that, when triggered, will emit
at random either the value "0" or the value "1".  This can be thought
of as a mapping

        {0,1} => 0|1

where I introduce "|" to designate the operator that arbitrarily selects
one of its operands.  The obvious generalisation of this - the function
that selects an arbitrary member of an input set - is surely not unfamiliar.
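
A classical program can, of course, only simulate such a box with a
pseudo-random choice -- which is precisely what is at issue.  A rough sketch
in Python, with an arbitrary seed:

    import random

    _rng = random.Random(42)      # arbitrary seed

    def choose(operands):
        # Pseudo-random stand-in for the "|" operator: picks one operand.
        # Unlike the photon box, this is a deterministic function of the
        # seed and the call history, and hence Turing-computable.
        return _rng.choice(sorted(operands))

    print([choose({0, 1}) for _ in range(10)])   # looks random, is not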

Nobody has denied that a Turing machine can't do this.  The assertion that
a physical system can do it rests on the quantum theory; in particular on
the proposition that the indeterminacy this theory ascribes to the
physical world is irreducible.  Since every attempt to build an alternative
deterministic theory has foundered, and no prediction of the quantum theory
has yet been falsified, this rests on pretty strong ground.

Now, it is not my job to supply an "algorithm" for this function: as the
physicist I have given you a specification and a model implementation; as
the computer scientist it is your job to give me an equivalent program.
However, being a kind-hearted soul, I shall point you to an algorithm;
it is given as equation (3.1) in the paper

[Deutsch: Proc Roy Soc A vol 400 pp 97-117]

Naturally, it uses primitive operations that you won't find in a
classical computing engine, which is why the title reads "Quantum theory,
the Church-Turing principle, and the universal quantum computer".

Turning now to that "principle":  The formulation I learned was, briefly,
that any function that would naturally be regarded as computable can be
computed by a universal Turing machine.  Once again, I made my opinion
on this absolutely clear [art. cit.]:

   Since a function is surely "computable" if a physical
   system can be constructed that computes it, ...

from which, I submit, the conclusion follows:

   ... the existence of true random-number generators directly
   disproves the Church-Turing conjecture.

Granted, one can readily evade this conclusion.  It is necessary merely
to redefine "natural", "computable", "function", or some other key
term.  For example, one could stipulate

   A function is to be regarded as computable only if it can be
   described by an algorithm written in a programming language
   implementable on a universal Turing machine.

In which case, the conjecture becomes vacuously true, and the discipline
of AI becomes vacuously futile.  For the point of "artificial intelligence",
surely, is accurately to reproduce, in some computing engine, the
behaviour of certain physical systems, especially those that show goal-
directed behaviour, judgement, creativity, or whatever else one means
by "intelligence".

If this is to be remotely feasible, then the model of the computation
process must be at least general enough to embrace the known basic
operational features of physical systems.  After all, if your programming
tools cannot reproduce so simple a physical system as my random Boolean
generator, the chance of their being able to reproduce a complicated
physical system - the brain of a flatworm, for instance - must be very
close to zero.

Robert Firth

------------------------------

Date: 30 Sep 88 01:31:58 GMT
From: nau@mimsy.umd.edu  (Dana S. Nau)
Subject: Re: The Ignorant assumption

In article <7202@aw.sei.cmu.edu> firth@bd.sei.cmu.edu (Robert Firth) writes:
<  ... I can build a box, whose main constituents are a supply
< of photons and a half-silvered mirror, that, when triggered, will emit
< at random either the value "0" or the value "1".  This can be thought
< of as a mapping
<
<       {0,1} => 0|1
<
< where I introduce "|" to designate the operator that arbitrarily selects
< one of its operands.  The obvious generalisation of this - the function
< that selects an arbitrary member of an input set - is surely not unfamiliar.

As far as I can see, what you have defined is not a function.  A function is
normally defined to be a set F of ordered pairs (x,y) such that for each x,
there is at most one y such that (x,y) is in F (and this y we normally call
F(x)).  Until all of the ordered pairs that comprise F have been
unambiguously determined, you have not defined a function.

Note that this does NOT mean that you have to tell us what all of the
ordered pairs are or how to compute them, or that you know what they are, or
that it is even possible to compute them (for some interesting examples, see
page 9 of Hartley Rogers' book, "Theory of Recursive Functions and Effective
Computability").  It just means that it must be unambiguous what they are.

If your mapping "|" is a function, then it must be one of the following:

        | = {(0,0), (1,0)}
        | = {(0,0), (1,1)}
        | = {(0,1), (1,0)}
        | = {(0,1), (1,1)}

If it were unambiguous WHICH function "|" was, then "|" WOULD be
Turing-computable.  In fact, it would even be primitive recursive.  But if
we assume that the output of your box is truly random, then your definition
leaves it indeterminate which of the above functions "|" actually is.  Thus,
as a function, "|" is ill-defined.
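
To underline the point: each of the four candidates above is a finite lookup
table, hence trivially computable; what fails to be a function is the box,
which refuses to commit to any one of them.  A small illustrative sketch
(Python):

    # Each fixed candidate for "|" is a total, deterministic function on {0,1},
    # given by a two-entry lookup table; the random box picks no single table.
    candidates = [
        {0: 0, 1: 0},
        {0: 0, 1: 1},
        {0: 1, 1: 0},
        {0: 1, 1: 1},
    ]

    for table in candidates:
        print([table[0], table[1]])   # trivially computable input-output pairs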

<  ... The formulation I learned was, briefly,
< that any function that would naturally be regarded as computable can be
< computed by a universal Turing machine.  Once again, I made my opinion
< on this absolutely clear [art. cit.]:
<
<    Since a function is surely "computable" if a physical
<    system can be constructed that computes it, ...
<
< from which, I submit, the conclusion follows:
<
<    ... the existence of true random-number generators directly
<    disproves the Church-Turing conjecture.

I disagree.  The point of my above argument is that true random-number
generators do not satisfy the definition of a function, so the theory of
Turing computability does not apply to them.

Just one other point, to avoid possible confusion:  A random variable IS
normally defined as a function.  However, it is not a function such as "|",
but is instead the function which maps the sample space of a random
experiment into the set of real numbers.  In your example, the sample space
is the set {0,1}, so to map this into the set of real numbers you can simply
use the identity function.
--

Dana S. Nau                             ARPA & CSNet:  nau@mimsy.umd.edu
Computer Sci. Dept., U. of Maryland     UUCP:  ...!{allegra,uunet}!mimsy!nau
College Park, MD 20742                  Telephone:  (301) 454-7932

------------------------------

End of AIList Digest
********************

∂10-Oct-88  1552	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #100 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 10 Oct 88  15:52:26 PDT
Date: Mon 10 Oct 1988 15:41-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #100
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 11 Oct 1988     Volume 8 : Issue 100

 More on ... The Ignorant assumption (6 messages)

----------------------------------------------------------------------

Date: 30 Sep 88 07:51:15 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: The Ignorant assumption

In article <7202@aw.sei.cmu.edu> firth@bd.sei.cmu.edu (Robert Firth) writes:
>   Since a function is surely "computable" if a physical
            ********
>   system can be constructed that computes it, ...
>from which, I submit, the conclusion follows:
>   ... the existence of true random-number generators directly
>   disproves the Church-Turing conjecture.

>Granted, one can readily evade this conclusion.  It is necessary merely
>to redefine "natural", "computable", "function", or some other key term.

It is not necessary to REdefine "function", only to use the usual meaning.
Given the same inputs, a function must always yield the same output(s).
The kind of physical system Firth has described is a realisation of
a(n indexed) random variable, and it has been held for many years that
"true random numbers" are not computable.  (See section 3.5 ("What is a
random sequence") of Knuth's "The Art of Computer Programming, Vol 2",
this statement is implicit in definition R6.

The original question was a purely rhetorical one (I _don't_ believe that
the universe is a Turing machine), but it's worth pointing out that we
only have a finite set of imprecise observations, so that a sufficiently
good simulation of a quantum-mechanical system (with top-notch pseudo-
random number generation!) *might* be fooling us.  You can only appeal to
physical random number generators to disprove the Church-Turing hypothesis
if you assume that the quantum-mechanical laws are really true, which is to
say if you already assume that the universe is not running on a Turing machine.
I believe it, but a circular "proof" like that is no proof!

------------------------------

Date: 30 Sep 88 14:00:44 GMT
From: uhccux!lee@humu.nosc.mil  (Greg Lee)
Subject: Re: The Ignorant assumption

From article <13791@mimsy.UUCP>, by nau@mimsy.UUCP (Dana S. Nau):
" In article <7202@aw.sei.cmu.edu> firth@bd.sei.cmu.edu (Robert Firth) writes:
" ...
" < of as a mapping
" <
" <     {0,1} => 0|1
" ...
" As far as I can see, what you have defined is not a function.  A function is

There are a couple (>=2) of things I don't understand about this discussion:

Why does it matter whether Turing machines compute functions?  If one
wants to compute non-functional relations, why not just define the
machines accordingly?  If there's a terminological problem, then call
the machines something else.

What does it matter to Church's thesis whether what is computed is
a function?  Sometimes the thesis is phrased using the word function,
but is that essential to the thesis?

And anyhow, why can't `0|1' be considered a single value?

                Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: 30 Sep 88 20:34:32 GMT
From: garth!smryan@unix.sri.com  (Steven Ryan)
Subject: Re: The Ignorant assumption

>random number generation!) *might* be fooling us.  You can only appeal to
>physical random number generators to disprove the Church-Turing hypothesis
>if you assume that the quantum-mechanical laws are really true, which is to
>say if you already assume that the universe is not running on a Turing machine.
>I believe it, but a circular "proof" like that is no proof!

Well, just to keep things straight, I'm the one who mentioned TMs and CT. I
used them as conditionals: `If the universe were a TM, then such and such
would follow.'  It wasn't intended to assert, prove, or disprove CT, but just
to engage in withywandering philosophical speculation.

To me, the Ignorant Assumption is not any particular theory or religion, but
the meta-assumption that assumptions are unnecessary.

------------------------------

Date: 1 Oct 88 04:36:59 GMT
From: romeo!nlt@cs.duke.edu  (N. L. Tinkham)
Subject: Re: The Ignorant assumption


     I have no objection to the formulation "any function that would naturally
be regarded as computable can be computed by a universal Turing machine", as
long as it is clear that being "naturally...regarded as computable" includes
the list of conditions associated with algorithms.  Setting aside those
conditions would introduce a broader definition of "computable" than is in
common use; such a definition may well be interesting to consider, but it might
reduce confusion to use a different term (say, "q-computable").

     The claim that "a function is surely 'computable' if a physical system
can be constructed that computes it" is the disputed point.  In order to
believe that a function f is computable, I will require that I be shown that
there is an algorithm by which f may be computed.  This algorithm need not be
a Turing-machine program (if that were the case, the thesis would indeed be
trivial), but it should conform to the general requirements of an algorithm:
ability to be specified in a description of finite length, computation in
discrete steps, and so forth.  And one of these requirements is that the
computation should not use random methods.  (The reference, again, is to
Chapter 1 of Rogers' text.)

     Falsifying the Church-Turing thesis would require presenting a function f
for which such an algorithm exists, and then showing that f cannot be computed
on a Turing machine.


[We have drifted quite far from religion here.  Followups are directed to
 comp.ai.]

                                            Nancy Tinkham
                                            {decvax,rutgers}!mcnc!duke!nlt
                                            nlt@cs.duke.edu

------------------------------

Date: 3 Oct 88 05:18:16 GMT
From: vax5!w25y@cu-arpa.cs.cornell.edu
Subject: Re: The Ignorant assumption


    The Church-Turing thesis deals with relations that always give the same
output value for a given input value.  Any quantum-generated random function
would not have this property.

                   -- Paul Ciszek
                      W25Y@CRNLVAX5               Bitnet
                      W25Y@VAX5.CCS.CORNELL.EDU   Internet

------------------------------

Date: 5 Oct 88 15:31:43 GMT
From: shire!ian@psuvax1.psu.edu  (Ian Parberry)
Subject: Ignorance about the ignorant assumption

In article <7202@aw.sei.cmu.edu> firth@bd.sei.cmu.edu (Robert Firth) writes:

>To elaborate:  I can build a box, whose main constituents are a supply
>of photons and a half-silvered mirror, that, when triggered, will emit
>at random either the value "0" or the value "1".  This can be thought
>of as a mapping
>
>       {0,1} => 0|1
>
>where I introduce "|" to designate the operator that arbitrarily selects
>one of its operands.  The obvious generalisation of this - the function
>that selects an arbitrary member of an input set - is surely not unfamiliar.
>
>Nobody has denied that a Turing machine can't do this.  The assertion that
>a physical system can do it rests on the quantum theory; in particular on
>the proposition that the indeterminacy this theory ascribes to the
>physical world is irreducible.  Since every attempt to build an alternative
>deterministic theory has foundered, and no prediction of the quantum theory
>has yet been falsified, this rests on pretty strong ground.

Have I missed something here?  Theoretical Computer Scientists have
been stepping beyond the bounds of the Church-Turing thesis for years.
The obvious question which was first asked a long time ago (I believe that
Michael Rabin was amongst the first to do so) is whether the kind of
randomness described above helps computation.

An obvious answer is that it sometimes helps speed things up.
For example, there are several polynomial-time probabilistic primality-testing
algorithms, but no deterministic one is known (although it must be admitted
that one exists if the extended Riemann Hypothesis is true).
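
For instance, the Miller-Rabin test -- sketched roughly below in Python,
unoptimized -- runs in polynomial time and declares a composite number prime
only with probability at most 4**-rounds:

    import random

    def miller_rabin(n, rounds=20):
        # Probabilistic primality test: False means certainly composite,
        # True means prime with error probability at most 4**-rounds.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False              # found a witness: n is composite
        return True                       # probably prime

    print(miller_rabin(2**89 - 1))        # True  (a Mersenne prime)
    print(miller_rabin(2**89 + 1))        # False (divisible by 3)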

The Church-Turing thesis is not and never has been a sacred cow amongst
Theoretical Computer Scientists.  Most view it as a handy rule of thumb,
and nothing else.  It's not hard to invent machines which violate the
Church-Turing thesis.  The hard part is developing a non-trivial,
entertaining, elegant and useful theory of computation around them.
My favourite work of this kind is on non-uniform circuit complexity.
Curiously, many lower bounds proved to date hold for non-uniform
(read: non-Church-Turing-thesis) circuits, and have matching uniform
(read: Church-Turing-thesis) upper bounds.

Most of the postings I've seen to date have been from non-TCS people.
However, since the Church-Turing thesis is a part of Theoretical
Computer Science, it is worth finding out what the TCS'ers have had
to say about it.

For a ton of reading, look for articles that mention the key words
probabilistic algorithm, RP, BPP, RNC in the proceedings from the IEEE
Symposium on Foundations of Computer Science and the ACM Symposium on the
Theory of Computing for the last decade.  For more polished but less
up-to-date material, consult theory journals such as SIAM J. Computing,
Journal of the ACM, Theoretical Computer Science, Journal of Computer
and System Sciences, Journal of Algorithms, Information Processing Letters.

Of course, I'm not saying that Theoretical Computer Scientists have
all of the answers.  But they do seem to have made a good try
at addressing the obvious questions.
-------------------------------------------------------------------------------
                        Ian Parberry
"The bureaucracy is expanding to meet the needs of an expanding bureaucracy"
  ian@psuvax1.cs.psu.edu  ian@psuvax1.BITNET  ian@psuvax1.UUCP  (814) 863-3600
 Dept of Comp Sci, 333 Whitmore Lab, Penn State Univ, University Park, Pa 16802

------------------------------

End of AIList Digest
********************

∂10-Oct-88  1815	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #101 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 10 Oct 88  18:15:29 PDT
Date: Mon 10 Oct 1988 15:52-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #101
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 11 Oct 1988     Volume 8 : Issue 101

 Queries and Responses:

  MicroExplorer vs. MacIvory  (still/again)
  address Xerox PARC
  TICOM
  Info on Automatic Reasoning
  Optical Character Recognition
  knowledge acquisition info
  Q1? A New AI Algebra?
  ES for Statistical Analysis - Summary of Responses
  Qualitative Reasoning mail-list
  Philip E. Slatter Address Request

----------------------------------------------------------------------

Date: 20 Sep 88 16:57:35 GMT
From: sm.unisys.com!csun!polyslo!mshapiro@oberon.usc.edu  (Joe AI)
Subject: MicroExplorer vs. MacIvory  (still/again)


Well?  Well?

    About three weeks ago I posted a request polling any
users who have had experience with the Texas Instruments
MicroExplorer and/or the Symbolics MacIvory machines.  I received
a number of replies -- all of them requests to post the
results.  Nobody has sent any experience-based opinions.

I understand there are some MacIvory beta units out there,
and I know that TI has been shipping for some time.  If you have any
light that you can shed on this, I'd appreciate your following up
to this article.  I'd really like to see a good discussion ensue.

------------------------------

Date: Wed, 21 Sep 88 12:53:18 MET
From: Hans Borgman <HBORGMAN%HROEUR1.BITNET@MITVMA.MIT.EDU>
Subject: query: address Xerox PARC


In a recent issue of AI-List a review of the latest Spang Robinson
Report was included. In it there was a discussion on Xerox PARC
and their research on how people "actually do design-work". Since
this is very close to my research, I would like to get in touch
with the people at Xerox PARC. Who knows the name of anyone involved
at Xerox PARC (or elsewhere!) and his/her (Email-)address?

Thanks in advance

Hans Borgman
Asst. Prof. Information Sciences
Erasmus University Rotterdam
P.O. Box 1738
3000 DR Rotterdam
The Netherlands
BITNET: hborgman@hroeur1

------------------------------

Date: 26 September 88, 21:05:10
From: Ramu Kannan                                    KANNAN   at SUVM
Subject: TICOM

I am interested in expert systems that can be used for evaluating
internal controls. Does anyone know the status of TICOM, a computer
assisted method of modeling and evaluating internal control systems?
Has it been fully developed, and if so, what are its capabilities?
Are there any other packages?
                                   Ramu Kannan
                             Bitnet address: KANNAN@SUVM

------------------------------

Date: Tue, 27 Sep 88 12:47:57 EST
From: munnari!nswitgould.oz.au!osborn@uunet.UU.NET (Tom Osborn)
Subject: Re: Info on Automatic Reasoning

Contact A/Prof Graham Wrightson (now) at Newcastle Uni. (probably
graham@nucs.oz).  The network to Newcastle is very very very poor
so you may have to call him by phone.

Alternatively, get a copy of 'AI in Australia' from A/Prof John Debenham
at UTS. AIIA lists all AI activities in Oz by category and researcher.

(A/Prof J K Debenham,
 School of Computing Sciences,
 University of Technology, Sydney,
 PO Box 123 BROADWAY 2007.)

Cheers, Tomasso.

------------------------------

Date: 28 Sep 88 02:55:49 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: Optical Character Recognition


     Kurzweil and Palantir both build special-purpose machines to do
general multi-font character recognition.  Their algorithms are proprietary.
The Palantir unit is said to have about 300 MIPS of computational power
inside.

     Doing this badly is easy.  Doing it well is very, very hard.

                                        John Nagle

------------------------------

Date: 28 Sep 88 20:36:09 GMT
From: gao@bu-cs.bu.edu  (Yong Gao)
Subject: knowledge acquisition info

More requests!

Could somebody tell me where to order proceedings for:

1. the 5th Machine Intelligence  workshop

2. the Knowledge Acquisition for Knowledge Based Systems Workshop, Banff,
   Canada, 1986 and 1987.

Thanks.

Yong Gao (gao@bu-cs.bu.edu)
Dept. of Computer Science
Boston University

------------------------------

Date: 29 Sep 88 06:12:14 GMT
From: portal!cup.portal.com!spl@uunet.uu.net
Subject: Q1? A New AI Algebra?

I read in Science News (September 3, 1988) that Brian Williams of MIT has
developed a new mathematics for use in AI called "Q1". Has anyone received
any definitions of this "new algebra"?
I would be very interested in any information about this Q1 that can be passed
along to me at the following UseNet address.

                                            Thanks in advance,
                                               Shawn P. Legrand

                                            +-----------------------------+
                                            |  spl@cup.portal.com         |
                                            |          or                 |
                                            |  ...sun!cup.portal.com!spl  |
                                            +-----------------------------+

------------------------------

Date: 4 Oct 88 16:49:00 GMT
From: goldfain@osiris.cso.uiuc.edu
Subject: Re: Q1? A New AI Algebra?


This was presented in a paper at AAAI-88, so you can get the details from the
proceedings.  Brian won one of the "best paper" awards presented by the
editors of the conference proceedings.

------------------------------

Date: Thu, 29 Sep 88 09:12 CDT
From: <KDM2520%TAMSIGMA.BITNET@MITVMA.MIT.EDU>
Subject: ES for Statistical Analysis - Summary of Responses

To all those who sent me mail requesting a summary of responses I received
for my query on Commercial Expert Systems for Statistical Analysis, here is
a summary of all the responses I have received so far:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
From: Todd.Kaufmann@NL.CS.CMU.EDU
To: KDM2520@TAMSIGMA.BITNET
Subject: re:  Expert Systems for Statistical Analysis

There is a program called REX, which is an expert system for regression
analysis.

See the books "Artificial Intelligence & Statistics", edited by Wm. A
Gale, and also
"Discovering Causal Structure:  artificial intelligence, philosophy of
science, and statistical modeling"  by Clark Glymour, Peter Spirtes,
Kevin Kelly, & ?.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
From:     Dick (R.D.) Peacocke <RICHARD@BNR.CA>
Subject:  Expert Systems for Statistical Analysis
Sender:   Dick (R.D.) Peacocke <RICHARD@BNR.CA>

A company called KnowledgeWorks has developed " an exploratory data
analysis/knowledge acquisition package called TEASE -- jointly funded
by the National Research Council (Canada). TEASE has been used to conduct
a variety of data analysis tasks leading to rule induction" ...."the
preliminary step in expert systems construction."

The quotes are from their advertising blurb. We've started to use the
system a little bit on some software quality and hardware manufacturing
data.

TEASE isn't really an expert system for statistical analysis, more the
reverse, statistical analysis for an expert system, but I thought you
might be interested anyway.

For more details contact

KnowledgeWorks Research Systems Ltd.
57 Stevenson Ave.
Ottawa
CANADA K1Z 6M9  phone (613) 725-0633

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

From:     Dick (R.D.) Peacocke <RICHARD@BNR.CA>
Subject:  Another ES for Statistical Analysis
Sender:   Dick (R.D.) Peacocke <RICHARD@BNR.CA>

At AAAI '88 a "software user consultant for statistics" was
mentioned by Prof. Raj Reddy in his Presidential Address.
He was listing some AI achievements, and included this system
in the list. It's under development at AT&T apparently - you might
like to contact them or Reddy to get more details.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Many of the other responses I received concerned William Gale's book.  I
thank all the readers who responded to my request, and I would appreciate
any additional information available on this. Thank you again.

Murali Krishnamurthi                               MURALI@TAMLSR (BITNET)
Knowledge Based Systems Lab                        MURALI@LSR.TAMU.EDU (ARPANET)
Texas A&M University                               KDM2520@TAMSIGMA (BITNET)

------------------------------

Date: 30 Sep 88 09:49:38 GMT
From: mcvax!hp4nl!swivax!bert@uunet.uu.net  (Bert Bredeweg)
Subject: Qualitative Reasoning mail-list

I have been told that there is a mailing list for Qualitative
Reasoning (or that it is being set up). Can anyone give me more
information about this (like: the correct e-mail address; how do
I become a member; etc) ?


Thanks in advance,

    Bert Bredeweg

    University of Amsterdam
    Department of Social Science Informatics (S.W.I.)
    Herengracht 196
    1016 BS Amsterdam (The Netherlands)
    Phone: +31-20-245365 ext. 13

    E-mail: bert@swivax.UUCP

------------------------------

Date: 3 Oct 88 04:01:04 GMT
From: fordjm@byuvax.bitnet
Subject: Philip E. Slatter Address Request


Can anyone supply me with an address (e-mail or regular mail) for
Philip E. Slatter of Telecomputing plc, Oxford?

[Dr. Slatter is the author of Building Expert Systems: Cognitive
Emulation (1987).]

Thanks,

John M. Ford              Brigham Young University
131 Starcrest Drive       fordjm@byuvax.bitnet
Orem, UT 84058
USA                       (801) 224-3974



------------------------------

End of AIList Digest
********************

∂10-Oct-88  2046	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #102 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 10 Oct 88  20:46:41 PDT
Date: Mon 10 Oct 1988 15:59-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #102
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 11 Oct 1988     Volume 8 : Issue 102

 Even More Queries and Responses:

  Implementation of Evidential Reasoning
  Expert systems and weather forecasting
  Concept Learning & ID3 (Quinlan)
  Genetic Learning Algorithms
  AI applications to building design and construction
  Language Translator (lisp)
  Fuzzy Relational Data Bases
  Looking for common-sense engineering situations
  Dictionary or Thesaurus
  Info on PROTEGE/RIME

----------------------------------------------------------------------

Date: 4 Oct 88 13:51:48 GMT
From: efrethei@afit-ab.arpa  (Erik J. Fretheim)
Subject: Implementation of Evidential Reasoning

Does anyone have an implementation which evaluates evidential
reasoning propositions (Dempster-Shafer theory)?  I would
prefer an Ada package but am not extremely particular.
Thank you.
ejf
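
To make clear what I mean by "evaluates evidential reasoning propositions",
here is a rough sketch of Dempster's rule of combination for two mass
functions over a small frame of discernment (the sets and numbers are made
up, and it is Python, not the Ada package I am after):

    def combine(m1, m2):
        # Dempster's rule of combination for two mass functions whose focal
        # elements are frozensets.  Illustrative sketch with made-up data.
        unnormalized, conflict = {}, 0.0
        for a, w1 in m1.items():
            for b, w2 in m2.items():
                inter = a & b
                if inter:
                    unnormalized[inter] = unnormalized.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
        return {s: w / (1.0 - conflict) for s, w in unnormalized.items()}

    # Frame of discernment {flu, cold}; two hypothetical pieces of evidence.
    m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
    m2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.5}
    print(combine(m1, m2))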

------------------------------

Date: 4 Oct 88 15:25:53 GMT
From: mcvax!ukc!reading!cf-cm!cybaswan!cslaurie@uunet.uu.net  (Laurie
      Moseley )
Subject: Expert systems and weather forecasting


##########################################################################

I'm trying to find out if anyone is working on expert systems in weather
forecasting. Names, addresses, references ... would all be welcome. With
thanks in advance

                                Laurence Moseley

##########################################################################

Laurie Moseley

Computer Science, University College, Swansea SA2 8PP, UK

Tel: +44 792 295399

JANET:  cslaurie@uk.ac.swan.pyr

###########################################################################

------------------------------

Date: 5 Oct 88 10:33:52 GMT
From: mcvax!prlb2!kulcs!uiag!gerrit@uunet.uu.net  (Cap Gerrit)
Subject: Concept Learning & ID3 (Quinlan)

Does anybody out there in the world have an implementation of
the ID3 algorithm of Quinlan?  Or (not exclusive) of the CLS algorithm
of Hunt, which is used in ID3?
A good description of these algorithms would also be appreciated!

We have developed a theoretical framework for a concept-learning algorithm,
and as an example I have a model in our framework which should be
a simulation of the ID3 algorithm.
Unfortunately I have not found much more than a vague description of ID3
in various books and papers.
What I want to do is implement our framework and
test the simulation against the original ID3.

It would be best if the implementation were in C or in Prolog, but
other languages would do as well.
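
In case it helps while I look for the real thing: my reading of ID3's core is
simply to choose the attribute with the largest information gain, split on it,
and recurse.  A rough sketch of that selection step (Python, with a tiny
made-up data set; this is only my understanding of the published
descriptions, not Quinlan's code):

    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def information_gain(examples, attr, label="class"):
        # Entropy reduction obtained by splitting the examples on `attr`.
        base = entropy([e[label] for e in examples])
        remainder = 0.0
        for value in set(e[attr] for e in examples):
            subset = [e[label] for e in examples if e[attr] == value]
            remainder += len(subset) / len(examples) * entropy(subset)
        return base - remainder

    # Tiny made-up training set; ID3 splits on the best attribute and recurses.
    data = [
        {"outlook": "sunny", "windy": "no",  "class": "play"},
        {"outlook": "sunny", "windy": "yes", "class": "stay"},
        {"outlook": "rain",  "windy": "no",  "class": "play"},
        {"outlook": "rain",  "windy": "yes", "class": "stay"},
    ]
    best = max(["outlook", "windy"], key=lambda a: information_gain(data, a))
    print(best)    # "windy": it separates the two classes perfectly here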

Gerrit Cap
Dep. of Mathematics and Computer Science
University of Antwerp
Belgium (Europe)


mail address : ...!prlb2!uiag!gerrit
I'm sorry if this posting seems a bit unusual to other news readers, but
this is my first posting. I also want to apologise for my English
and the limited mail address.

------------------------------

Date: 5 Oct 88 17:13 GMT
From: umix!caen.engin.umich.edu!offutt@uunet.UU.NET (daniel m offutt)
Subject: Genetic Learning Algorithms


>           In addition, offutt@caen.engin.umich.edu (Daniel M. Offutt) is
>     offering a GA function optimization package.  Contact him for details.


Rather, a package is available from John Grefenstette (gref@nrl-aic.arpa).
Ask him for his package called "GENESIS" and ask him to put you on the
"GA-LIST" mailing list while you are at it.

I have used John's package extensively and can recommend it highly.  But
I did not write it and it is not available from me.   For more details,
see my post on genetic algorithms and John's package on sci.physics or
sci.math.stat, a few weeks back.

-----------------------------------------------------------------------------
Dan Offutt                                        offutt@caen.engin.umich.edu

------------------------------

Date: 5 Oct 88 21:26:58 GMT
From: sean@cadre.dsl.pittsburgh.edu  (Sean McLinden)
Subject: AI applications to building design and construction

I am well aware of a number of AI applications to CAD that are used
in building design, but I am curious to know if anyone has looked at
the various processes that occur during the engineering phase of a
project.

It seems to me that the process is quite interesting. I don't propose
to know a lot about it, but once the architect has finished the basic
design it is left to the engineer to determine what materials and
methods to use for the support systems (this is a GROSS oversimpli-
fication, I realize). (S)he, in turn, must approach subsystem engineers
for things like mechanical, electrical, plumbing designs. These, to
carry it further, will deal with the contractors, and their suppliers,
to get estimates of the cost and strength of materials, the availability
of components, and the like. The whole process goes through a number of
iterations. What seems interesting about it is how much information is
available during this process, yet most of the decision-makers need only
know a selected amount in order to answer specific questions posed
to them. In that sense information transfer between agents involves
a type of structured query (no reflection on the relational database),
that is to say. There is a common expertise between participants which
allows them to make decisions quickly by sifting through a lot of
information while retrieving only that which pertains to the problem
at hand.

Considering the number of dollars involved in U.S. Government-funded
construction, it seems that GSA or OMB might be interested in developing
such a system. It would, of course, require a set of standards for
expressing certain concepts, a set which almost assuredly does not already
exist in the industry.

Any pointers would be appreciated.

Sean McLinden
Decision Systems Laboratory
University of Pittsburgh

------------------------------

Date: 6 Oct 88 12:41:34 GMT
From: mcvax!enea!kth!draken!chalmers!tekn01.chalmers.se!m85_miche@uune
      t.uu.net  (When lispers hack ... the fun begun)
Subject: Language Translator (lisp)

Hello Out there !

Is there by any chance anyone sitting on source code that translates one
language into another?

I've heard that there have been some attempts to translate English into
Chinese... is there any truth in that?

In which literature can I find what I am looking for?

Thanks for any reply!

/Michel

------------------------------

Date: 10 Oct 88 17:26:11 GMT
From: mailrus!ncar!tank!tartarus.uchicago.edu!mitchell@ohio-state.arpa
        (Mitchell Marks)
Subject: Re: Language Translator (lisp)

In article <227@tekn01.chalmers.se> m85_miche@tekn01.chalmers.se writes:
:Is there by any chance anyone sitting on source code that translates one
:language into another?
:
:I've heard that there have been some attempts to translate English into
:Chinese... is there any truth in that?
:
:In which literature can I find what I am looking for?

_Computational_Linguistics_ had a couple special issues on machine translation
not too long ago: Vol 11 No. 1 (January-March 1985) and
Vol 11 Nos. 2-3 (April-Sept 1985)
with a review article by Jonathan Slocum in the first issue and reports
of particular projects filling out the rest of these issues.

A recent book on this topic is _Machine_Translation:_Theoretical_and_
Methodological_Issues_, ed. Sergei Nirenburg, Cambridge U.P., 1987.  The
volume starts with overview articles by Nirenburg and by Allan Tucker,
and contains articles addressing a variety of issues.

            -- Mitch Marks
               mitchell@tartarus.UChicago.EDU

------------------------------

Date: 6 Oct 88 21:09:11 GMT
From: mailrus!eecae!cps3xx!usenet@ohio-state.arpa  (Usenet file owner)
Subject: Fuzzy Relational Data Bases

Hi,
   I am looking for a paper by Zemankova-Leech, M., and A. Kandel
    "Fuzzy Relational Data Bases - A Key to Expert Systems."
    Verlag TUV Rheinland, Koln, Germany.

   Is there anybody who can tell me where I can find it ?
   Thanks a lot.

Kuan, Yih-pyng

Dept. of Computer Science
Michigan State University

kuan@cpsvax.msu.edu

------------------------------

Date: 7 Oct 88 05:17:18 GMT
From: csli!rustcat@labrea.stanford.edu  (Vallury Prabhakar)
Subject: Looking for common-sense engineering situations

Hello,

   I am looking for books/literature which contain a (hopefully) large
   collection of engineering problems based on day-to-day phenomena.  I
   include in this definition ways and means of roughly approximating
   the required characteristics such as, say, Young's modulus, frictional
   coefficients, etc.  As might be apparent, I am mainly interested in
   problems related to Mechanical Engineering subjects such as S of M,
   dynamics, statics, controls, fluids, etc.  The problems should be
   mainly solvable by basic engineering theory, and indeed should test
   the complete understanding of these basic concepts by the person
   trying to solve them.

   Some example problems that I can think of are:

   1) Given a flexible rod of certain dimensions, how would you find
      the Young's modulus?

   2) You're given the wheel of a car, and a torsional shaft.  How
      would you approximate the Ip of the wheel?

   3) Why is it apparently easier to turn a screw with a long screw-
      driver than a shorter one?


Hopefully, this gives you an idea of what I wish to find.  What I *don't*
want is "Oh, you need a basic book on Dynamics/Statics/whatever".  I
do not need any more theoretical background and derivations.  I have those,
thank you very much.  What I wish to find is a collection of seemingly
trivial and unimportant problems whose solution lies in these truly
basic principles.

I have a sneaky suspicion that I might get flamed for putting this in
comp.ai.  If this is in violation of USENET practice, I apologize.  I was
unable to think of/find any other relevant newsgroup, and besides where
would AI be without principles of common sense engineering?

Enjoy.

                                                -- Vallury Prabhakar
                                                -- rustcat@cnc-sun.stanford.edu

------------------------------

Date: 7 Oct 88 13:15:57 GMT
From: uflorida!mailrus!eecae!cps3xx!usenet@gatech.edu  (Usenet file
      owner)
Subject: Dictionary or Thesaurus


 We are working on a natural language processing program and need an
 English dictionary or thesaurus which, for each word, lists the part of
 speech of that word. Does anyone know where we can get such a file?

                                        -brian

------------------------------

Date: 8 Oct 88 21:00:26 GMT
From: suroy@steppenwolf.rutgers.edu (Subrata Roy)
Subject: Info on PROTEGE/RIME


In the recent AAAI-88 invited talk on "Knowledge acquisition..."
John McDermott mentioned two systems,

PROTEGE & RIME, which provide a methodology for defining
problem-solving methods.

I would appreciate it if anyone could help me find out about these
systems by pointing out suitable references.

thanks,
-Subrata
--
___
Subrata Roy
ARPA: suroy@paul.rutgers.edu
      suroy@aramis.rutgers.edu
UUCP: {ames, cbosgd, harvard, moss}!rutgers!paul.rutgers.edu!suroy

------------------------------

End of AIList Digest
********************

∂11-Oct-88  0141	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #103 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 11 Oct 88  01:40:49 PDT
Date: Mon 10 Oct 1988 16:03-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #103
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 11 Oct 1988     Volume 8 : Issue 103

 Philosophy:

  State and change/continuous actions
  Continuity and computability
  Belief and awareness
  Intelligence / Consciousness Test for Machines (Neural-Nets)???
  The Grand Challenge is Foolish

----------------------------------------------------------------------

Date: 26 Sep 88 12:18:24 GMT
From: steve@hubcap.UUCP ("Steve" Stevenson)
Subject: Re: state and change/continuous actions

From a previous article, by smryan@garth.UUCP (Steven Ryan):
>
> Continuous systems are computable using calculus, but is this `effective
> computation?' Calculus uses a number of existence theorems which prove some
> point or set exists, but provide no method to effectively compute the value.


Clearly numerical analysis emulates continuous systems.  In the phil of
math, this is, of course, an issue.  For those denying reals but
allowing the actual infinity of integers, NA is as good as the Tm.

Not only are there existence theorems for point sets, but such theorems
as the Peano Kernel Theorem are effective computations.  At the point
set level, one uses things called ``simple functions''.

BTW, you're being too restrictive.  There are many ``continuous'' systems which
have a denumerable number of points of nondifferentiability: there
are several ways to handle this (e.g., measure theory).  These are
not ``calculus'' in the usual sense.  Important applications are in diffusion
and probability.  So, is Riemann-Stieltjes the only true calculus? Nah.
There's one per view.


--
Steve (really "D. E.") Stevenson           steve@hubcap.clemson.edu
Department of Computer Science,            (803)656-5880.mabell
Clemson University, Clemson, SC 29634-1906

------------------------------

Date: Mon, 3 Oct 88 13:49:33 GMT
From: Mr Jack Campin <jack%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: continuity and computability


smryan@garth.UUCP (Steven Ryan) wrote:

>> "Insufficient attention has been paid to the problem of continuous
>> actions." Now, a question that immediately comes to mind is "What problem?"

> Continuous systems are computable using calculus, but is this `effective
> computation?' Calculus uses a number of existence theorems which prove some
> point or set exists, but provide no method to effectively compute the value.

> It is not clear that all natural phenomena can be modelled on the discrete
> and finite digital computer. If not, what computer could we use?

I brought up this same point in the Usenet sci.logic newsgroup a short while
ago. There is a precise sense in which analogue computers are more powerful
than digital ones - i.e. there are continuous phenomena unsimulatable on a
Turing machine.

Most of the work on this has been done by Marian Pour-El and her coworkers.
An early paper is "A computable ordinary differential equation which possesses
no computable solution", Annals of Mathematical Logic, volume 17, 1979, pages
61-90. This result is a bit of a cheat (the way the equation is set up has
little relation to anything in the physical world) but I believe later papers
tighten it up somewhat (one uses the wave equation, which you'd expect to be a
powerful computing device given that interferometers can calculate Fourier
transforms in constant time). I haven't seen these later articles, though.

--
Jack Campin,  Computing Science Department, Glasgow University, 17 Lilybank
Gardens, Glasgow G12 8QQ, SCOTLAND.   041 339 8855 x6045 wk 041 556 1878 ho
ARPA: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk      USENET: jack@glasgow.uucp
JANET: jack@uk.ac.glasgow.cs   PLINGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack

------------------------------

Date: 5 Oct 88 17:59:14 PDT
From: "Joseph Y. Halpern"  <HALPERN@ibm.com>
Subject: Belief and awareness

In response to Fabrizio Sebastiani's question of Sept. 23 regarding
further work on Fagin and my notion of "awareness", here is what I am
aware of: (a) Kurt Konolige wrote a critique of the paper which appeared
in the proceedings of the 1986 Conference on Theoretical Aspects of
Reasoning About Knowledge; (b) Robert Hadley wrote a critique (and
discussed other ways of dealing with the problem) which appeared as a
Tech Report at Simon Fraser University (the exact reference can be found
in the journal version of our paper, which appears in Artificial Intelligence,
vol. 34, pp. 39-76); (c) Yoram Moses provided a model for polynomial time
knowledge, which can be viewed as a notion of awareness; Yoram's paper
appears in the proceedings of the 1988 Conference on Theoretical Aspects
of Reasoning About Knowledge; (d) Mark Tuttle, Yoram Moses, and I have
a paper in the 1988 Symposium on Theory of Computing which focuses
on zero-knowledge protocols, but also extends Yoram's definitions
to deal with learning. --  Joe Halpern

------------------------------

Date: 9 Oct 88 12:54:39 GMT
From: TAURUS.BITNET!shani@ucbvax.berkeley.edu
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
>
> When can a machine be considered a conscious entity?
>
Oh no! not that again! ;-)

Okay, I'll make it short and somewhat cynical this time: the answer is NEVER!

You see, Andy, the only reason for you to assume that there is such a thing as
a conscious entity at all is that otherwise YOU are not a conscious entity,
and that probably sounds like nonsense to you.  (Actually, in saying that, I
am already taking a dangerous step forward by assuming that YOU ARE... the
only thing I can know is that I AM a conscious entity...)

I hope that helps...

O.S.

------------------------------

Date: 9 Oct 88 18:21:51 GMT
From: uwslh!lishka@spool.cs.wisc.edu (Fish-Guts)
Reply-to: uwslh!lishka@spool.cs.wisc.edu (Fish-Guts)
Subject: Re: The Grand Challenge is Foolish


In article <ohbWO@SAIL.Stanford.EDU> JMC@SAIL.STANFORD.EDU writes:
>[In reply to message sent Mon 26 Sep 1988 23:22-EDT.]
>If John Nagle thinks that "The lesson of the last five years seems to
>be that throwing money at AI is not enormously productive.", he is
>also confusing science with engineering.  It's like saying that the
>lesson of the last five years of astronomy has been unproductive.
>Progress in science is measured in longer periods than that.

     I don't think anyone could have said it better.  If AI is going
to progress at all, I think it will need quite a bit of time, for its
goals seem to be fairly "grand."  I think this definitely applies to
research in Neural Nets and Connectionism: many people criticize this
area, even though it has only really gotten going (again) in the past
few years.  There *have* been some really interesting discoveries due
to AI; however, they have not been as amazing and earth-shattering as
some would like.

     In my opinion, the great amount of hype in AI is what leads many
people to say stuff such as "throwing money at AI is not enormously
productive."  If many scientists and companies would stop making their
research or products out to be much more than they actually are, I
feel that others reviewing the AI field would not be so critical.
Many AI researchers and companies need to be much more "modest" in
assessing their work; they should not make promises they cannot keep.
After all, the goal of achieving true "artificial intelligence" (in
the literal sense of the phrase) is not one that will occur in the
next two, ten, fifty, one-hundred, or maybe even one-thousand years.

                                        .oO Chris Oo.
--
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp
                                     ----
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."
                         - Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

------------------------------

End of AIList Digest
********************

∂12-Oct-88  0813	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #104 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Oct 88  08:12:52 PDT
Date: Wed 12 Oct 1988 10:49-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #104
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 12 Oct 1988    Volume 8 : Issue 104

 Queries:

  PDP and Neural Networks in Music Research
  Classifier system software packages
  AI and 'Conventional' programming
  PFL
  CLOS & CommonLOOPS

 Responses:

  TICOM
  AI applications to building design and construction
  Language Translator (lisp)
  knowledge acquisition info
  AAAI-88 Proceedings

----------------------------------------------------------------------

Date: Tue 11 Oct 1988 14:19 CDT
From: <UUCJEFF%ECNCDC.BITNET@MITVMA.MIT.EDU>
Subject: PDP and Neural Networks in Music Research

Does anyone have any references to papers regarding the use of Parallel
Distributed Processing or Neural Networks in music research?  I saw that
the Computer Music Journal is calling for papers on the topic; if anyone
has heard of any research, please let me know via email.  I am interested
in a neural-network topic for my M.S. thesis, so I cannot wait for the
CMJ issue to come out.  Any help will be appreciated.  Thanks.

Jeff Beer, Academic Computing
Northeastern Ill University, UUCJEFF@ECNCDC.BITNET

Disclaimer:  My reference to the Computer Music Journal's call for papers is
on my own behalf; I am not speaking for the journal.  Please do not interpret
it as being on their behalf.

------------------------------

Date: 11 Oct 88 14:05:20 GMT
From: steinmetz!boston!powell@itsgw.rpi.edu  (Powell)
Subject: Classifier system software packages

Recently, I have read some interesting articles on induction and classifier
systems. To better understand their capabilities and functionalities,
I am looking for a free classifier-system software package to experiment with.

I have recently used John Grefenstette's very impressive GENESIS package
for optimization and became very excited and convinced about the capabilities
of genetic algorithms. I would now like to experiment with a classifier
system as described by Holland with a bucket brigade or similar
algorithm for credit apportionment and a genetic algorithm for rule
combination. If someone can send me such a package then I can quickly evaluate
the power and appropriateness of classifiers to my problem.
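
To make concrete what I mean by the credit-apportionment part, here is a
much-simplified sketch of the bucket-brigade idea (Python; rule strengths
only, no message list and no genetic step, with made-up numbers):

    # Much-simplified bucket brigade: each rule that fires pays a fraction of
    # its strength (its "bid") to the rule that fired just before it, and the
    # last rule in the chain collects the external payoff.
    def bucket_brigade(rule_chain, strengths, payoff, bid_fraction=0.1):
        previous = None
        for rule in rule_chain:
            bid = bid_fraction * strengths[rule]
            strengths[rule] -= bid
            if previous is not None:
                strengths[previous] += bid     # pass the bid back up the chain
            previous = rule
        strengths[previous] += payoff          # environment rewards the last rule
        return strengths

    strengths = {"r1": 10.0, "r2": 10.0, "r3": 10.0}    # made-up classifiers
    for _ in range(50):                                 # repeated episodes
        strengths = bucket_brigade(["r1", "r2", "r3"], strengths, payoff=5.0)
    print(strengths)    # credit propagates back from r3 toward r1 over time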

Thanks in advance
Dave Powell

------------------------------

Date: Tue, 11 Oct 88 09:37:27 -0400
From: davis%community-chest.mitre.org@gateway.mitre.org
Subject: AI and 'Conventional' programming


I have an interest in merging AI technology with software development
for large, conventional projects, particularly in Ada.
I observe a trend in private industry and government toward attempting
to use Expert system tools as a part of delivered products, to be
used by non-engineers/scientists.
An aspect of this trend is the potential to assist members of
development teams in software engineering in managing complexity.
There have been some articles published in this general area
during the last 4-5 years, dealing with high-level/specification
languages, modelling, and rule-based formal proofs of software
requirements, which suggests that some useful work is being done.


I would like to see if there is an interest in these areas on the part
of others who read the AI bulletin board, and to offer published
information related to these interests.

Dave Davis

------------------------------

Date: 11 Oct 88 10:07 +0100
From: fred moerman
      <f_moerman%avh.unit.uninett%NORUNIX.BITNET@MITVMA.MIT.EDU>
Subject: PFL


A Frame-based representation language : PFL.
============================================

In the November and December issues of "AI Expert", Mr. Tim Finn
describes a pedagogical frame language: PFL.
He explains some of the principles of representing knowledge
with an FBRL, using this LISP-based frame language. He also mentions
that a version has been ported to VAXLISP, and that is why I posted this
request.

Is there anyone who can help me get hold of such a public-domain
frame-based language?  It should preferably be able to run on our VAX,
but a version for either the Macintosh or the IBM PC would be welcome as well.
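
To make the request concrete, my understanding of the basic frame idea --
named slots, default values, and inheritance from parent frames -- is roughly
the following toy sketch (Python, purely illustrative, not PFL itself):

    # Toy illustration of the frame idea (not PFL): frames have named slots,
    # default values, and inherit unfilled slots from a parent frame.
    class Frame:
        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, slots

        def get(self, slot):
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:
                return self.parent.get(slot)   # inherit from the parent frame
            return None

    bird = Frame("bird", legs=2, flies=True)
    penguin = Frame("penguin", parent=bird, flies=False)   # override a default
    print(penguin.get("legs"), penguin.get("flies"))       # 2 False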


Thanks,

        Fred Moerman.




Fred Moerman                    <f_moerman%avh.unit.uninett@NORUNIX>
Inst. for Informatikk
UNIT-AVH
N-7055 Dragvoll
NORWAY.

------------------------------

Date: Mon, 10 Oct 88 19:03 O
From: Antti Ylikoski tel +358 0 457 2704
      <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: CLOS & CommonLOOPS

I would be very grateful if someone could let me know if an academic
license for the CLOS, the Common Lisp Object System, is available.

I would also like to know whom to contact to obtain it and the price.


Also, I would like to know if an academic license for the CommonLOOPS
is available.


Thanks in advance, Andy

------------------------------------------------------------------------------
Antti Ylikoski                          |YLIKOSKI@FINFUN        (BITNET)
Helsinki University of Technology       |
Laboratory of Information Processing    |ay@hutcs.hut.fi        (UUCP)
Science                                 |
Otakaari 5 A                            |
SF-02150 Espoo, Finland                 |

------------------------------

Date: 11 Oct 88 22:58:28 GMT
From: eric@aragorn.cm.deakin.OZ (Eric Y.H. Tsui)
Reply-to: eric@aragorn.UUCP (Eric Y.H. Tsui)
Subject: Re: TICOM

Information about TICOM (The Internal COntrol Model) can be obtained
from Professor Andrew Bailey, Arthur Young Professor of Accounting,
School of Accounting, University of Minnesota. I have a few papers about
this system and can send you the references if you want.

Eric Tsui                               eric@aragorn.oz
Division of Computing and Mathematics
D e a k i n   U n i v e r s i t y
Geelong, Victoria 3217
AUSTRALIA

------------------------------

Date: Tue, 11 Oct 88 16:40:16 EDT
From: info@scarecrow.csee.lehigh.edu (Info Directory-x4508)
Subject: RE: AI applications to building design and construction

In response to the query from sean@cadre.dsl.pittsburgh.edu (Sean McLinden)
for pointers to AI applications in building design and construction:

Sean states:

>I am well aware of a number of AI applications to CAD that are used
>in building design, but I am curious to know if anyone has looked at
>the various processes that occur during the engineering phase of a
>project....
>Considering the number of dollars involved in U.S. Government funded
>construction, it seems that GSA or OMB might be interested in developing
>such a system.

As part of a government directive to assist the U.S. construction
industry in becoming more competitive in the world marketplace, Lehigh
University was awarded an NSF block grant of $10 million to develop
technical innovations for the industry.  Lehigh University's
NSF-sponsored Advanced Technology for Large Structural Systems (ATLSS)
Engineering Research Center has several projects, some of which are
looking into developing intelligent interfaces between various
phases/processes of the multi-million-dollar, fragmented construction
industry.

>There is a common expertise between participants which
>allows them to make decisions quickly by sifting through a lot of
>information while retrieving only that which pertains to the problem
>at hand.

True.  It is this fact that allows us to start developing intelligent
interfaces between these groups.  Only the information needed to make
a decision is requested; the rest is shared amongst the players
involved in the construction process.  The key is to determine and
classify the common information and the specific information required
to assist in making construction decisions.

The project that I am involved in is the Designer/Fabricator Interface
(DFI), which will assist design engineers in understanding downstream
fabrication and erection problems associated with their upstream design
decisions.  The initial limited domain of DFI deals with design fitup
of beam-to-column connections in buildings.  The DFI system critiques
the designer's initial connection details and reports gross and subtle
fitup errors before the design document is sent out for bid to
fabricators.  This requires the system to use general fabrication and
erection knowledge in one mode of operation, and fabricator-specific
knowledge in another, once a particular fabricator has won the bid and
is working closely with the designer.
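
Purely as an illustration of this two-mode critique structure (the rule
names, connection fields, and numeric limits below are invented; this is
not ATLSS/DFI code), such a check might be organized as follows:

# Illustrative sketch of a critique-style check of connection details,
# in the spirit of the DFI description above.  All names and limits are
# hypothetical.

def bolt_clearance_rule(conn):
    # Hypothetical general fabrication rule: flag tight bolt spacing.
    if conn["bolt_spacing_in"] < 3.0:
        yield ("gross: bolt spacing %.1f in leaves no wrench clearance"
               % conn["bolt_spacing_in"])

def coped_flange_rule(conn):
    # Hypothetical subtler fitup rule.
    if conn["beam_depth_in"] > conn["column_depth_in"] and not conn["coped"]:
        yield "subtle: beam deeper than column but flange not coped"

GENERAL_RULES = [bolt_clearance_rule, coped_flange_rule]

def critique(connection, fabricator_rules=()):
    """Run the general rules always, plus fabricator-specific rules once a
    particular fabricator has won the bid."""
    findings = []
    for rule in list(GENERAL_RULES) + list(fabricator_rules):
        findings.extend(rule(connection))
    return findings

if __name__ == "__main__":
    detail = {"bolt_spacing_in": 2.5, "beam_depth_in": 24,
              "column_depth_in": 14, "coped": False}
    for finding in critique(detail):
        print(finding)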

The system will later be expanded to include fitup of beam-to-beam
connections as well as to provide a functional critique (real civil
engineering strength issues).

Also, an architect/designer interface is under development, as is the
completion of several specific KB systems, including simple connection
design and a bridge fatigue investigator (BFI) that determines what to
do in repairing cracks in bridges.


For more information contact:

        General ATLSS Information       KBS Systems Information

        Dr. John Fisher                 Dr. John Wilson
        Director, ATLSS Center          KBS Thrust Leader
        Room A206                       Room 220
        Building H                      Fritz Engineering Lab, #13
        Lehigh University               Lehigh University
        Bethlehem, PA 18015             Bethlehem, PA 18015
        (215)758-3535                   (215)758-4828

        jwf2@lehigh.BITNET              jlw2@lehigh.BITNET
or

jwf2%lehigh.bitnet@ibm1.cc.lehigh.edu   jlw2%lehigh.bitnet@ibm1.cc.lehigh.edu

------------------------------

Date: 10 Oct 88 22:20:15 GMT
From: sunybcs!rapaport@rutgers.edu  (William J. Rapaport)
Subject: Response to: Language Translator (lisp)

In article <227@tekn01.chalmers.se>, m85_miche@tekn01.chalmers.se writes:
>
>Does anyone by any chance have source code for translating one
>language to another?
>
>In which literature can I find what I am looking for?

There are several sources of info on machine translation.  Begin with
"Machine Translation" in S. C. Shapiro (ed.), Encyclopedia of AI (Wiley,
1987).

There are two recent books:

Sergei Nirenburg (ed.), Machine Translation:  Theoretical and
Methodological Issues (Cambridge UP, 1987).

and another book by, I think, a fellow named Hutchings, published by
Ellis Horwood, in England; it's a good survey.

There are two major journals:

Computational Linguistics, published by MIT Press for the Association
for Computational Linguistics,

and

Computers and Translation, published by Kluwer Academic Publishers.


                                        William J. Rapaport
                                        Associate Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {decvax,watmath,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||fax:  (716) 636-3464

------------------------------

Date: 10 Oct 88 20:32:12 GMT
From: att!alberta!calgary!!gaines@bloom-beacon.mit.edu  (Brian Gaines)
Subject: Re: knowledge acquisition info

In article <25126@bu-cs.BU.EDU>, gao@bu-cs.BU.EDU (Yong Gao) writes:
> More requests!
>
> Could somebody tell me where to order proceedings for:
>
> 1. the 5th Machine Intelligence  workshop
>
> 2. the Knowledge Acquisition for Knowledge Based Systems Workshop, Banff,
>    Canada, 1986 and 1987.
>
> Thanks.
>
> Yong Gao (gao@bu-cs.bu.edu)
> Dept. of Computer Science
> Boston University


The following notes are a complete guide to getting the KAW papers:

Knowledge Acquisition Workshop Publications

We are attempting to make the Knowledge Acquisition Workshop materials
as widely available as possible.  The following sections detail the
availability of publications from each workshop.

KAW86, Banff, November 1986

Preprints distributed to attendees only.

Revised and updated papers published in the
International Journal of Man-Machine Studies,
January, February, April, August and September 1987 special issues.

Papers plus editorial material and index collected in two books:

Gaines, B.R. & Boose, J.H. (Eds) Knowledge Acquisition for Knowledge-Based
Systems. London: Academic Press, 1988 (released October 1988).

Boose, J.H. & Gaines, B.R. (Eds) Knowledge Acquisition Tools for Expert
Systems. London: Academic Press, 1988 (released October 1988).

EKAW87, Reading, UK, September 1987

Proceedings available as:
Proceedings of the First European Workshop on Knowledge Acquisition for
Knowledge-Based Systems.

Send sterling money order or draft for 39.00 payable to
University of Reading to:
Prof. T.R.Addis, Department of Computer Science,
University of Reading, Whiteknights, PO
Box 220, Reading RG6 2AX, UK.

KAW87, Banff, October 1987

Preprints distributed to attendees only.

Revised and updated papers being published in the
International Journal of Man-Machine Studies, 1988 regular issues.
(just beginning to appear)

Papers plus editorial material and index will be collected in book form,
together with other KA papers from IJMMS in 1989.

EKAW88, Bonn, West Germany, June 1988

Proceedings available as:
Proceedings of the European Workshop on Knowledge Acquisition for
Knowledge-Based Systems (EKAW88).
Send order to (the GMD will invoice you for DM68.00 plus postage):
Marc Linster, Institut für Angewandte Informationstechnik der
Gesellschaft für Mathematik und
Datenverarbeitung mbH, Schloss Birlinghoven, Postfach 1240,
D-5205 Sankt Augustin 1, West Germany.

KAW88, Banff, November 1988

Preprints available (400-500 pages, early November 1988) as:
Proceedings of the 3rd Knowledge Acquisition for Knowledge-Based
Systems Workshop.
Send money order, draft, or check drawn on US or Canadian bank for
US$65.00 or CDN$85.00 to:
SRDG Publications, Department of Computer Science, University of Calgary,
Calgary, Alberta, Canada T2N 1N4.

Revised and updated papers being published in the International Journal
of Man-Machine Studies, 1989 regular issues.

Papers plus editorial material and index will be collected in book form,
together with other KA papers from IJMMS in 1990.

----

Brian Gaines, gaines@calgary.cdn

------------------------------

Date: 28 Sep 88 14:05:48 GMT
From: woodl@byuvax.bitnet
Subject: re: AAAI-88 Proceedings


   Proceedings for AAAI-88 and past years can be ordered from
Morgan-Kaufmann publishers, 95 First St., Los Altos, CA 94022.

  Larry Wood

------------------------------

End of AIList Digest
********************

∂12-Oct-88  1156	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #105 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 12 Oct 88  11:56:37 PDT
Date: Wed 12 Oct 1988 10:59-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #105
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 12 Oct 1988    Volume 8 : Issue 105

 Philosophy -- Consciousness (4 messages)

----------------------------------------------------------------------

Date: Mon, 10 Oct 88 18:48:27 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Philosophy: Consciousness

>Self-awareness may, in turn, be defined as the capacity of a sentient
>system to monitor itself.

Over the years I've heard many people object to self-referential systems
for a variety of reasons (in AIList, for instance, in the recent discussion
of linguistic paradoxes).  Some of this seems to be based on emotional
grounds, others on the fact that we have no analytical theory to handle
self-reference.  Yet self-reference seems to be at the core of much human
thought, certainly of consciousness, so we must develop such a theory.
                              ------------
>Julian Jaynes (The Origin of Consciousness and the Bicameral Mind)
>is very persuasive when he argues that consciousness is not
>required for use of human language or every-day human activities.

Human thought seems to be a hierarchy of cooperating (and sometimes
competing) processes.  Consciousness seems to have a (the?) major role of
integrating these processes.  So even though the components of
human-language use might not require consciousness, the fullest use of
human language would.
                              ------------
> [W]ould such a machine have to be "creative"?  And if so, how would
>we measure the machine's creativity?

(Apologies to any who've heard this before.)  As an artist in several art
forms (though expert in only a couple), I use creativity as routinely and
reflexively as I walk.  It's no more (and no less) mysterious than walking.

Basically, creativity is the combining of memes (which I define as MEMory
Elements) to form more complex memes.  This combining has a random element
but is guided to some extent.  One form of guidance involves a goal-seeking
mechanism that provides a mask against which new memes are compared.  Parts
of the mask have don't-care attributes that can be turned on or off to
make the search for a solution more or less open.  Those that filter
through the mask then become part of the meme-pool and can be used as
components of other memes, or mutated to form yet-newer memes.
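
A small sketch of that filtering mechanism, under an invented
representation (memes as fixed-length feature tuples), might look like the
following Python; it illustrates only the mask-with-don't-cares idea, and
omits mutation:

# Candidate memes are random combinations of existing ones; a goal mask
# with don't-care positions decides which survive into the meme pool.
# The representation and parameters are invented for illustration.
import random

DONT_CARE = None   # a mask position that has been "turned off"

def passes(mask, meme):
    return all(m is DONT_CARE or m == f for m, f in zip(mask, meme))

def combine(a, b):
    """Guided but random combination: take each feature from one parent."""
    return tuple(random.choice(pair) for pair in zip(a, b))

def creative_search(pool, mask, trials=1000):
    for _ in range(trials):
        candidate = combine(random.choice(pool), random.choice(pool))
        if passes(mask, candidate) and candidate not in pool:
            pool.append(candidate)          # survives into the meme pool
    return pool

if __name__ == "__main__":
    random.seed(1)
    pool = [(0, 1, 0, 1), (1, 1, 0, 0), (0, 0, 1, 1)]
    # A more "open" search turns more mask positions into don't-cares.
    mask = (DONT_CARE, 1, DONT_CARE, 1)
    print(creative_search(pool, mask))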

A fair amount of skill is involved in selecting the right amount of
meme-filtering.  Too little and one is overwhelmed by wild ideas; too much
and one may filter out the odd but elegant solution--or the ridiculous
solution that forms the root of a search that does find the solution.

Skill is also involved in setting up the creative search.  Most of the
search is done subconsciously, but it is launched by a conscious decision.
Before this is done, you must stock up on memes relevant to the problem,
which includes ingesting them (via reading, talking with co-workers,
watching videos or experiments, etc.) and learning them (by playing with
them and through repetition making them part of long-term memory).

And skill is involved in ensuring that conscious activity does not
interfere with the subconscious search.  Part of this involves staying
away from the particular problem or similar problems, and refraining from
launching a second creative episode before receiving the results of the
first.

I see no reason, however, why the mechanisms that affect creativity
should themselves have to be conscious.  This is not to conclude that
creativity doesn't enrich or support the mechanisms of consciousness.

            Larry @ vlsi.jpl.nasa.gov

------------------------------

Date: 11 Oct 88 13:20:01 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

In article <263@balder.esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
>Consciousness is a *subjective* phenomenon.
>It is truly not even possible to determine if your neighbor is conscious.

I think the best way to determine if someone is conscious is to carry
on a conversation with them.  (The interaction need not be verbal.
One can use visual or tactile channels, or non-verbal auditory channels.)
There are interesting anecdotes about autistic children who were coaxed
into normal modes of communication by starting with primitive stimulus-
response modes.  The Helen Keller story also dramatizes such a breakthrough.

One of the frontiers is the creation of a common language between
humans and other intelligent mammals such as chimps and dolphins.

--Barry Kort

------------------------------

Date: 12 Oct 88 00:17:53 GMT
From: clyde!watmath!watdcsu!smann@bellcore.bellcore.com  (Shannon
      Mann - I.S.er)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

In article <1141@usfvax2.EDU> mician@usfvax2.usf.edu.UUCP (Rudy Mician) writes:

>When can a machine be considered a conscious entity?
>
>For instance, if a massive neural-net were to start from a stochastic state
>and learn to interact with its environment in the same way that people do
>(interact not think), how could one tell that such a machine thinks or exists
>(in the same context as Descarte's "COGITO ERGO SUM"/"DUBITO ERGO SUM"
>argument- that is, how could one tell whether or not an "I" exists for the
>machine?

Only the _machine_ can adequately answer the question.  If the _machine_ asks
'What/Who am I?', then by the definition of self-awareness (any reasonable one
I can think of) the machine is self-aware.  If the _machine_ can sense and
react to the environment, it is (on some primitive level) aware.  Science has
already provided us with machines that are far more _aware_ than the common
amoeba.  Until the scientific community refines its ideas of what awareness
and self-awareness entail, the above question cannot be answered with any
accuracy.

Is it possible?  Certainly!  Consciousness occurs within biological systems,
so why not in mechanical systems of sufficient complexity?  If we consider the
vastness of space and time, and that any event that has occurred once can
occur again, it is reasonable to conclude that _self-awareness_ will arise out
there again and that, more than likely, it will be in a different form than
ours.  Knowing this, is it so difficult to accept the possibility of creating
the same?

>Furthermore, would such a machine have to be "creative"?  And if so, how would
>we measure the machine's creativity?

This question could/should be asked about humans.  When is a human creative?
When we invent something, is it not the re-application of some known idea?
Or an accidental discovery?  In my mind, creativity is the ability to
synthesize _something_ from a group of _something_different_.  My definition
does not include the concept of self-direction, and so should be modified.
Regardless, it does touch upon the basic idea that _to_create_ means to take
_what_is_ and make _something_new_.  By this definition, _life_ is creative :-)

>I suspect that the Turing Test is no longer an adequate means of judging
>whether or not a machine is intelligent.

Here we go upon a different tack.  Intelligence is quite different from
self-awareness.  I do not want to define intelligence, as it is a term used
and misused in so many ways that coherent dialogue about the subject is of
doubtful worth.  My definition certainly would not clear up any ambiguity,
but would probably create a flame war of criticism.  Self-awareness is exactly
that: to be aware of oneself, separate from the environment you exist in.
Intelligence...  well, you go figure.  However, there is a difference.

>If anyone has any ideas, comments, or insights into the above questions or any
>questions that might be raised by them, please don't hesitate to reply.

Well, you asked....  I know about much of the research that has been done on
the topic of self-learning systems.  The idea is that, if a machine can learn
like humans, then it must be like humans.  However, humans do not learn in the
simplified manner that these systems employ.  Humans use a system where they
learn how a particular system or process works, and can then re-apply that
heuristic (am I using this term correctly?) under different circumstances.
Has the heuristic approach been attempted in machine learning systems?  I
don't believe so, and would appreciate any response.

>Rudy Mician     mician@usfvax2.usf.edu
>Usenet:                ...!{ihnp4, cbatt}!codas!usfvax2!mician

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

P.S.  Please do not respond with any egocentric views about what it is to be
human, etc.  I see humanity as different from the rest of the animal kingdom,
but in no way superior.  Having the power to damage our planet the way we
do does not mean we are superior.  Possessing and using that power only shows
our foolishness.

------------------------------

Date: 12 Oct 88 00:33:52 GMT
From: clyde!watmath!watdcsu!smann@bellcore.bellcore.com  (Shannon
      Mann - I.S.er)
Subject: Re: Here's one ...

In article <409@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>
>Have you ever thought about what the brain is doing between thoughts?

Thinking, what else?  We are aware of one thought/idea/concept, etc., at a
time.  Evidently, the mind does not cease functioning when we choose not to
focus upon its internal workings.  There is a continuous cosmic soup of
thought circulating through your brain at any one time, operating at many
different levels.  Our awareness is of only a small segment of the whole.
For example, the mind is constantly deleting stimuli that don't change
(stimulus adaptation).  We are not conscious of the process, yet it is
continuous.

Next question...   Is this thought?

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

------------------------------

End of AIList Digest
********************

∂19-Oct-88  1749	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #106 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Oct 88  17:49:37 PDT
Date: Wed 19 Oct 1988 20:22-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #106
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 20 Oct 1988     Volume 8 : Issue 106

 Announcements:

  Congress on Cybernetics and Systems
  MIRRORS/II Connectionist Simulator Now Available
  6th Scandinavian Conference on Image Analysis
  Neural Network Symposium Announcement

----------------------------------------------------------------------

Date: 8 Oct 88 03:28:19 GMT
From: spnhc@cunyvm.bitnet  (Spyros Antoniou)
Subject: Congress on Cybernetics and Systems


             WORLD ORGANIZATION OF SYSTEMS AND CYBERNETICS

         8 T H    I N T E R N A T I O N A L    C O N G R E S S

         O F    C Y B E R N E T I C S    A N D   S Y S T E M S

 JUNE 11-15, 1990 at Hunter College, City University of New York, USA

     This triennial conference is supported by many international
groups  concerned with  management, the  sciences, computers, and
technology systems.

      The 1990  Congress  is the eighth in a series, previous events
having been held in  London (1969),  Oxford (1972), Bucharest (1975),
Amsterdam (1978), Mexico City (1981), Paris (1984) and London (1987).

      The  Congress  will  provide  a forum  for the  presentation
and discussion  of current research. Several specialized  sections
will focus on computer science, artificial intelligence, cognitive
science, biocybernetics, psychocybernetics  and sociocybernetics.
Suggestions for other relevant topics are welcome.

      Participants who wish to organize a symposium or a section
are requested to submit a proposal (sponsor, subject, potential
participants, very short abstracts) as soon as possible, but not
later than September 1989.  All submissions and correspondence
regarding this conference should be addressed to:

                    Prof. Constantin V. Negoita
                         Congress Chairman
                   Department of Computer Science
                           Hunter College
                    City University of New York
             695 Park Avenue, New York, N.Y. 10021 U.S.A.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|   Spyros D. Antoniou  SPNHC@CUNYVM.BITNET  SDAHC@HUNTER.BITNET    |
|                                                                   |
|      Hunter College of the City University of New York U.S.A.     |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

------------------------------

Date: Thu, 13 Oct 88 15:05:29 EDT
From: C Lynne D'Autrechy <lynne@brillig.umd.edu>
Subject: MIRRORS/II Connectionist Simulator Now Available


                  MIRRORS/II Connectionist Simulator Available


               MIRRORS/II is a general-purpose connectionist simulator
          which  can  be used to implement a broad spectrum of connec-
          tionist  (neural  network)  models.   MIRRORS/II   is   dis-
          tinguished  by  its support of an extensible high-level non-
          procedural language, an indexed library of networks, spread-
          ing  activation methods, learning methods, event parsers and
          handlers, and a generalized event-handling mechanism.

               The MIRRORS/II language allows relatively inexperienced
          computer  users  to  express the structure of a network that
          they would like to study and the parameters which will  con-
          trol their particular connectionist model simulation.  Users
          can select an existing spreading activation/learning  method
          and  other  system  components  from the library to complete
          their connectionist model; no programming  is  required.  On
          the  other hand, more advanced users with programming skills
          who are interested in research  involving  new  methods  for
          spreading  activation  or  learning  can  still derive major
          benefits from using MIRRORS/II.  The advanced user need only
          write functions for the desired procedural components (e.g.,
          spreading activation method, control strategy, etc.).  Based
          on language primitives specified by the user MIRRORS/II will
          incorporate the user-written components into the connection-
          ist  model;  no  changes to the MIRRORS/II system itself are
          required.

               Connectionist models developed using MIRRORS/II are not
          limited  to  a  particular  processing  paradigm.  Spreading
          activation methods, Hebbian learning, competitive learning,
          and error back-propagation are among the resources
          found in the MIRRORS/II library.  MIRRORS/II  provides  both
          synchronous  and asynchronous control strategies that deter-
          mine which nodes should have their activation values updated
          during  an iteration.  Users can also provide their own con-
          trol strategies and have control over a  simulation  through
          the generalized event handling mechanism.
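
What follows is not the MIRRORS/II language (its syntax is not shown in
this announcement); it is only a generic Python sketch, over an invented
three-node network, of the difference between the synchronous and
asynchronous control strategies mentioned above:

# Generic illustration of synchronous vs. asynchronous spreading-activation
# control strategies.  The network, weights, and update rule are invented;
# this is not MIRRORS/II code.
import random

def spread(act, weights, node):
    """One node's new activation: a clipped weighted sum of its inputs."""
    total = sum(w * act[src] for (src, dst), w in weights.items() if dst == node)
    return max(0.0, min(1.0, total))

def synchronous_step(act, weights):
    # Every node is updated from the *old* activation vector.
    return {n: spread(act, weights, n) for n in act}

def asynchronous_step(act, weights):
    # Nodes are updated one at a time, in random order, seeing earlier updates.
    for n in random.sample(list(act), len(act)):
        act[n] = spread(act, weights, n)
    return act

if __name__ == "__main__":
    act = {"a": 1.0, "b": 0.0, "c": 0.0}
    weights = {("a", "b"): 0.8, ("b", "c"): 0.5, ("a", "c"): 0.2}
    for _ in range(3):
        act = synchronous_step(act, weights)
    print(act)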

               Simulations  produced  by  MIRRORS/II  have  an  event-
          handling  mechanism  which  provides a general framework for
          scheduling certain actions to  occur  during  a  simulation.
          MIRRORS/II  supports  system-defined events (constant/cyclic
          input, constant/cyclic output,  clamp,  learn,  display  and
          show)  and user-defined events.  An event command (e.g., the
          input-command) indicates which event is to occur, when it is
          to  occur,  and  which  part of the network it is to affect.
          Simultaneously occurring events are prioritized according to
          user  specification.   At  run  time,  the appropriate event
          handler performs  the  desired  action  for  the  currently-
          occurring event.  User-defined events can redefine the work-
          ings of system-defined  events  or  can  create  new  events
          needed for a particular application.

               MIRRORS/II is implemented in Franz Lisp  and  will  run
          under  Opuses  38, 42, and 43 of Franz Lisp on UNIX systems.
          It is currently running on a MicroVAX, VAX and  SUN  3.   If
          you  are  interested  in obtaining more detailed information
          about the MIRRORS/II system see D'Autrechy, C.  L.  et  al.,
          1988, "A General-Purpose Simulation Environment for Develop-
          ing  Connectionist  Models,"   Simulation,  51,  5-19.   The
          MIRRORS/II  software  and reference manual are available for
          no charge via tape or ftp.  If you are interested in obtain-
          ing  a  copy of the software send your U.S. Mail address via
          e-mail to

                          mirrors@mimsy.umd.edu
                                      or
                          ...!uunet!mimsy!mirrors


          or send your U.S. Mail address to

                       Lynne D'Autrechy
                       University of Maryland
                       Department of Computer Science
                       College Park, MD  20742


          and we will send you back a license which you must sign  and
          return  to  us and further instructions on how to obtain the
          MIRRORS/II software and manual.

------------------------------

Date: Fri, 14 Oct 88 16:18:11 +0200
From: scia@steks.oulu.fi (SCIA conference in OULU)


      The 6th Scandinavian Conference on Image Analysis
      =================================================

      June 19 - 22, 1989
      Oulu, Finland

      Second Call for Papers



      INVITATION TO 6TH SCIA

      The 6th Scandinavian Conference on Image  Analysis   (6SCIA)
      will  be arranged by the Pattern Recognition Society of Fin-
      land from June 19 to June 22, 1989. The conference is  spon-
      sored  by the International Association for Pattern Recogni-
      tion. The conference will be held at the University of Oulu.
      Oulu is the major industrial city in North Finland, situated
      not far from the Arctic Circle. The conference  site  is  at
      the Linnanmaa campus of the University, near downtown Oulu.

      CONFERENCE COMMITTEE

      Erkki Oja, Conference Chairman
      Matti Pietikäinen, Program Chairman
      Juha Röning, Local Organization Chairman
      Hannu Hakalahti, Exhibition Chairman

      Jan-Olof Eklundh, Sweden
      Stein Grinaker, Norway
      Teuvo Kohonen, Finland
      L. F. Pau, Denmark

      SCIENTIFIC PROGRAM

      The program will  consist  of  contributed  papers,  invited
      talks  and special panels.  The contributed papers will cov-
      er:

              * computer vision
              * image processing
              * pattern recognition
              * perception
              * parallel algorithms and architectures

      as well as application areas including

              * industry
              * medicine and biology
              * office automation
              * remote sensing

      There will be invited speakers on the following topics:

      Industrial Machine Vision
      (Dr. J. Sanz, IBM Almaden Research Center)

      Vision and Robotics
      (Prof. Y. Shirai, Osaka University)

      Knowledge-Based Vision
      (Prof. L. Davis, University of Maryland)

      Parallel Architectures
      (Prof. P. E. Danielsson, Linköping University)

      Neural Networks in Vision
      (to be announced)

      Image Processing for HDTV
      (Dr. G. Tonge, Independent Broadcasting Authority).

      Panels will be organized on the following topics:

      Visual Inspection in the  Electronics  Industry  (moderator:
      prof. L. F. Pau);
      Medical Imaging (moderator: prof. N. Saranummi);
      Neural Networks and Conventional  Architectures  (moderator:
      prof. E. Oja);
      Image Processing Workstations (moderator: Dr.  A.  Kortekan-
      gas).

      SUBMISSION OF PAPERS

      Authors are invited to submit four  copies  of  an  extended
      summary of at least 1000 words of each of their papers to:

              Professor Matti Pietikäinen
              6SCIA Program Chairman
              Dept. of Electrical Engineering
              University of Oulu
              SF-90570 OULU, Finland

              tel +358-81-352765
              fax +358-81-561278
              telex 32 375 oylin sf
              net scia@steks.oulu.fi

      The summary should contain sufficient  detail,  including  a
      clear description of the salient concepts and novel features
      of the work.  The deadline for submission  of  summaries  is
      December  1, 1988. Authors will be notified of acceptance by
      January 31st, 1989 and final camera-ready papers will be re-
      quired by March 31st, 1989.

      The length of the final paper must not exceed 8  pages.  In-
      structions  for  writing the final paper will be sent to the
      authors.

      EXHIBITION

      An exhibition is planned.  Companies  and  institutions  in-
      volved  in  image analysis and related fields are invited to
      exhibit their products at demonstration stands,  on  posters
      or video. Please indicate your interest to take part by con-
      tacting the Exhibition Committee:

              Matti Oikarinen
              P.O. Box 181
              SF-90101 OULU
              Finland

              tel. +358-81-346488
              telex 32354 vttou sf
              fax. +358-81-346211

      SOCIAL PROGRAM

      A social program will be arranged,  including  possibilities
      to  enjoy  the  location  of the conference, the sea and the
      midnight sun. There  are  excellent  possibilities  to  make
      post-conference  tours  e.g.  to Lapland or to the lake dis-
      trict of Finland.

      The social program will consist of a get-together  party  on
      Monday June 19th, a city reception on Tuesday June 20th, and
      the conference Banquet on Wednesday June 21st. These are all
      included  in the registration fee. There is an extra fee for
      accompanying persons.

      REGISTRATION INFORMATION

      The registration fee will be 1300  FIM  before  April  15th,
      1989  and 1500 FIM afterwards. The fee for participants cov-
      ers:  entrance  to  all  sessions,  panels  and  exhibition;
      proceedings; get-together party, city reception, banquet and
      coffee breaks.

      The fee is payable by
              - check made out to 6th SCIA and mailed to the
                Conference Secretariat; or by
              - bank transfer draft account or
              - all major credit cards

      Registration forms, hotel information and  practical  travel
      information  are  available from the Conference Secretariat.
      An information package will be sent to authors  of  accepted
      papers by January 31st, 1989.

      Secretariat:
              Congress Team
              P.O. Box 227
              SF-00131 HELSINKI
              Finland
              tel. +358-0-176866
              telex 122783 arcon sf
              fax +358-0-1855245

      There will be hotel rooms available for  participants,  with
      prices  ranging  from  135 FIM (90 FIM) to 430 FIM (270 FIM)
      per night for a single room (double room/person).

------------------------------

Date: Sat, 15 Oct 88 13:10:39 EDT
From: RUSS EBERHART <RCE1%APLVM.BITNET@MITVMA.MIT.EDU>
Subject: Neural Network Symposium Announcement


               ANNOUNCEMENT AND CALL FOR ABSTRACTS

    SYMPOSIUM ON THE BIOMEDICAL APPLICATIONS OF NEURAL NETWORKS
    ***********************************************************
                      Saturday, April 22, 1989
                         Parsons Auditorium
      The Johns Hopkins University Applied Physics Laboratory
                          Laurel, Maryland

The study and application of neural networks has increased significantly
in the past few years.  This applications-oriented symposium focuses on
the use of neural networks to solve biomedical tasks such as the
classification of biopotential signals.

Abstracts of not more than 300 words may be submitted prior to January
31, 1989.  Accepted abstracts will be allotted 20 minutes for oral
presentation.

Registration fee is $20.00 (U.S.); $10.00 for full-time students.
Registration fee includes lunch.  For more information and/or to
register, contact Russ Eberhart (RCE1 @ APLVM), JHU Applied Physics
Lab., Johns Hopkins Road, Laurel, MD 20707.

The Symposium is sponsored by the Baltimore Chapter of the IEEE Engineering
in Medicine and Biology Society.  Make check for registration fee payable
to "EMB Baltimore Chapter".

------------------------------

End of AIList Digest
********************

∂19-Oct-88  2057	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #107 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 19 Oct 88  20:57:04 PDT
Date: Wed 19 Oct 1988 20:26-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #107
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 20 Oct 1988     Volume 8 : Issue 107

 Seminars:

  The SB-ONE Knowledge Representation Workbench 	- Alfred Kobsa
  Cooperative Problem Solving Systems 			- Gerhard Fischer
  Machiavelli : A Polymorphic Lang. for oo db 		- Atsushi Ohori
  The Computational Linguistics of DNA 			- David Searls
  OSCAR:  A General Theory of Rationality 		- John Pollock
  What My Robot Should Do Next 				- Ian Horswill
  Expert Systems in Predictive Toxicology 		- Arnott and Snow
  Church's Thesis, Connectionism, and Cognitive Science	- Raymond J. Nelson

----------------------------------------------------------------------

Date: Tue, 27 Sep 88 11:21:08 EDT
From: finin@PRC.Unisys.COM
Subject: The SB-ONE Knowledge Representation Workbench - Alfred Kobsa

                                  AI SEMINAR
                         UNISYS PAOLI RESEARCH CENTER


                The SB-ONE Knowledge Representation Workbench

                                 Alfred Kobsa
              International Computer Science Institute, Berkeley
         (on leave from the University of Saarbruecken, West Germany)

The SB-ONE system is an integrated knowledge representation workbench for
conceptual knowledge which was specifically designed to meet the requirements
of the field of natural-language processing. The representational formalism
underlying the system is comparable to KL-ONE, although different in many
respects. A Tarskian semantics is given for the non-default part of it.

The user interface allows for a fully graphical definition of SB-ONE knowledge
bases. A consistency maintenance system checks for the syntactical
well-formedness of knowledge definitions. It rejects inconsistent entries, but
tolerates and records incomplete definitions. A partition mechanism allows for
the parallel processing of several knowledge bases, and for the inheritance of
(incomplete) knowledge structures between partitions.

The SB-ONE system is being employed in XTRA, a natural-language access system
to expert systems. The use of SB-ONE for meaning representation, user
modeling, and access to the expert system's frame knowledge base will be
briefly described.


                          10:00am Friday, October 14
                             BIC Conference Room
                         Unisys Paoli Research Center
                          Route 252 and Central Ave.
                                Paoli PA 19311

       -- non-Unisys visitors who are interested in attending should --
       --   send email to finin@prc.unisys.com or call 215-648-7446  --


*  COMING ATTRACTION: On October 19, Marilyn Arnott (PhD from Texas in    *
*  Chemistry) will speak on the topic of an expert system for predictive  *
*  toxicology.  The seminar will be held at 2:00 PM in the BIC Conference *
*  Room.  An exact title and an abstract will be distributed when they    *
*  become available.                                                      *

------------------------------

Date: Wed, 12 Oct 88 14:32:30 edt
From: dlm@allegra.att.com
Subject: Cooperative Problem Solving Systems - Gerhard Fischer


                   Cooperative Problem Solving Systems

                             Gerhard Fischer
                          University of Colorado
                             October 13, 1988
              AT&T Bell Labs -- Murray Hill 3D-436 -- 10:00 am



                                 ABSTRACT

       Over the last few years we have constructed a number of
       intelligent support systems (e.g. documentation systems,
       help systems, critics, and a "software oscilloscope") which
       support limited cooperative problem solving processes.
       These systems and their limitations will be discussed and
       future research directions towards the goal of truly
       cooperative problem solving systems will be presented.

       Sponsor: R.J.Brachman

------------------------------

Date: Sun, 16 Oct 88 14:46:30 EDT
From: finin@PRC.Unisys.COM
Subject: Machiavelli : A Polymorphic Lang. for oo db - Atsushi Ohori


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER

                            Atsushi Ohori
                      University of Pennsylvania


                 Machiavelli : A Polymorphic Language
                    for Object-oriented Databases

Machiavelli is a programming language for databases and object-oriented
programming with a strong, statically checked type system. It is an
extension of the programming language ML with generalized relational
algebra, type inheritance and general recursive types. In Machiavelli,
various database operations including join and projection are available
as polymorphic operations, ML's abstract data types are extended with
inheritance declarations, and the type system includes general recursive
types.

In this talk, I will first introduce Machiavelli and show examples
demonstrating its expressive power in the context of both database
programming and object-oriented programming. I will then describe the
theoretical aspects of the language.

For the theoretical aspects of the language, I will show that, by defining
syntactic orderings on subsets of terms and types that correspond to
database objects, a generalized relational algebra can be introduced in a
strongly typed functional programming language. By allowing conditions on
substitutions for type variables, Milner's type inference algorithm can also
be extended to those new constructs. I will then show that by using the
type inference mechanism, ML's abstract data types can be extended to
support inheritance. Finally I will describe how the above mechanisms can
be extended to recursive types.

Joint work with Peter Buneman.
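
The abstract gives no Machiavelli syntax; purely as an illustration of the
underlying idea, the following Python treats projection and natural join as
operations that work over records with arbitrary fields (the example data
are invented):

# Sketch of "polymorphic" relational operations over plain records.
# Not Machiavelli code; just an illustration of the idea described above.

def project(rows, fields):
    """Keep only the named fields of each record."""
    return [{f: r[f] for f in fields} for r in rows]

def natural_join(left, right):
    """Join on whatever field names the two record types happen to share."""
    out = []
    for l in left:
        for r in right:
            shared = set(l) & set(r)
            if all(l[f] == r[f] for f in shared):
                out.append({**l, **r})
    return out

if __name__ == "__main__":
    employees = [{"name": "ohori", "dept": "db"},
                 {"name": "buneman", "dept": "db"}]
    depts = [{"dept": "db", "building": "moore"}]
    print(natural_join(employees, depts))
    print(project(employees, ["name"]))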



                     10:30 am  - November 2, 1988
                         BIC Conference Room
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: Tue, 18 Oct 88 08:40:47 EDT
From: finin@PRC.Unisys.COM
Subject: The Computational Linguistics of DNA - David Searls


                      UNIVERSITY OF PENNSYLVANIA

                        DEPARTMENT OF COMPUTER
                       AND INFORMATION SCIENCE


                 The Computational Linguistics of DNA

                             David Searls
                     Unisys Paoli Research Center

Genetic information, as expressed in the four-letter alphabet of the
DNA of living organisms, represents a complex and richly-expressive
linguistic system that encodes procedural instructions on how to
create and maintain life.  There is a wealth of understanding of the
semantics of this language from the field of molecular biology, but
its syntax has been elaborated primarily at the lowest lexical levels,
without benefit of formal computational approaches that might help to
organize its description and analysis.  In this talk, I will examine
some linguistic properties of DNA, and propose that generative
grammars can and should be used to describe genetic information in a
declarative, hierarchical manner.  Furthermore, I show how a Definite
Clause Grammar implementation can be used to perform various kinds of
analyses of sequence information by parsing DNA.  This approach
promises to be useful in recombinant DNA experiment planning systems,
in simulation of genetic systems, in the interactive investigation of
complex control sequences, and in large-scale search over huge DNA
sequence databases.
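
The Definite Clause Grammars referred to above would be written in Prolog;
purely as an illustration of the idea of parsing DNA against a declarative,
hierarchical rule set, here is a toy recognizer in Python for an invented
"gene" grammar (the motifs and rules are drastic simplifications, not
Searls' grammar):

# Toy grammar-based recognizer over the DNA alphabet.  The "gene" rule and
# the lexical motifs are invented for illustration only.
import re

RULES = {
    # gene -> promoter coding terminator
    "gene": ["promoter", "coding", "terminator"],
}
LEXICAL = {
    "promoter":   re.compile(r"TATA[AT]A"),                          # TATA-box-like motif
    "coding":     re.compile(r"ATG(?:[ACGT]{3})*?(?:TAA|TAG|TGA)"),  # open reading frame
    "terminator": re.compile(r"AATAAA"),                             # poly-A signal motif
}

def parse(symbol, dna, pos=0):
    """Return the end position if `symbol` derives dna[pos:end], else None."""
    if symbol in LEXICAL:
        m = LEXICAL[symbol].match(dna, pos)
        return m.end() if m else None
    for part in RULES[symbol]:
        pos = parse(part, dna, pos)
        if pos is None:
            return None
    return pos

if __name__ == "__main__":
    seq = "TATAAAATGGCATGCTAAAATAAA"
    print(parse("gene", seq) == len(seq))   # True: the whole string is a "gene"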

                      THURSDAY, OCTOBER 20, 1988

                             REFRESHMENTS
                             2:30 - 3:00
                              129 Pender

                              COLLOQUIUM
                             3:00 - 4:30
                              216 MOORE

------------------------------

Date: 18 Oct 88 15:18:18 GMT
From: sunybcs!rapaport@rutgers.edu  (William J. Rapaport)
Subject: OSCAR:  A General Theory of Rationality - John Pollock


===============================================================================
 UPDATE UPDATE UPDATE UPDATE UPDATE UPDATE UPDATE UPDATE UPDATE UPDATE UPDATE
===============================================================================

                         UNIVERSITY AT BUFFALO
                      STATE UNIVERSITY OF NEW YORK

                        DEPARTMENT OF PHILOSOPHY
                  GRADUATE GROUP IN COGNITIVE SCIENCE
                                  and
   GRADUATE RESEARCH INITIATIVE IN COGNITIVE AND LINGUISTIC SCIENCES

                                PRESENT

                              JOHN POLLOCK

                        Department of Philosophy
                         University of Arizona

                OSCAR:  A General Theory of Rationality

The enterprise is the construction of a general  theory  of  rationality
and  its  implementation  in  an automated reasoning system named OSCAR.
The paper describes a general architecture for rational  thought.   This
includes  both theoretical reasoning and practical reasoning, and builds
in important interconnections between them.  It is urged that a  sophis-
ticated  reasoner  must be an _introspective reasoner_, capable of moni-
toring its own reasoning and reasoning about it.  An introspective  rea-
soner  is  built  on top of a non-introspective reasoner that represents
the system's default reasoning strategies.  The  introspective  reasoner
engages in practical reasoning about reasoning in order to override these
default strategies.  The paper  concludes  with  a  discussion  of  some
aspects of the default reasoner, including the manner in which reasoning
is interest-driven and the structure of defeasible reasoning.

                      Wednesday, October 26, 1988
                               4:00 P.M.
                     684 Baldy Hall, Amherst Campus

           There will be an evening discussion at 8:00 P.M.,
           at Mary Galbraith's, 130 Jewett Parkway, Buffalo.

Copies of the paper are available from Bill Rapaport, Dept. of  Computer
Science, 636-3193.  Contact Rapaport or Jim Lawler, Dept. of Philosophy,
636-2444, for further information.

------------------------------

Date: Tue 18 Oct 88 12:24:18-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: What My Robot Should Do Next - Ian Horswill

                    BBN Science Development Program
                       AI Seminar Series Lecture

       WHAT MY ROBOT SHOULD DO NEXT: NAVIGATION WITHOUT PLANNING;
                     VISION WITHOUT INVERSE-OPTICS.

                            Ian D. Horswill
                    MIT Artificial Intelligence Lab
                       (IDH@WHEATIES.AI.MIT.EDU)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Tuesday October 25


In this talk I will discuss a system which performs a variety of
low-level navigation activities without many of the traditional
trappings of robot navigation such as mapping, planning, calibrated
cameras, surface reconstruction, or dead reckoning.  In particular, the
system chases moving objects, investigates static ones, and follows
along corridors using a camera for visual feedback.

Rather than committing to a pre-planned path and attempting to follow
it accurately, the system constantly re-answers the question "what
should I do next?".  By continuously reassessing the situation, the
system is able to operate in dynamic and even unpredictable
environments where mapping and planning are infeasible.  By breaking
the problem up into manageable routine tasks such as corridor
following, the system is able to perform the tasks using dramatically
simpler machinery than conventional systems while guaranteeing bounded
response time (0.2 seconds in our present implementation).
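
As a toy sketch only (not Horswill's system; the percept fields and actions
below are invented), the control style described above amounts to
re-deciding from the current percept on every cycle, with no map and no
stored plan:

# Reactive "what should I do next?" loop: each decision is computed from the
# current percept alone, in bounded, constant work per cycle.
import random

def what_should_i_do_next(percept):
    """Pick an action from the current percept only."""
    if percept["moving_blob"]:
        return "chase"
    if percept["static_blob"]:
        return "investigate"
    # Corridor following: steer toward the side with more free space.
    return ("steer_left" if percept["free_left"] > percept["free_right"]
            else "steer_right")

def sense():
    # Stand-in for the vision system: a random, possibly changing world.
    return {"moving_blob": random.random() < 0.1,
            "static_blob": random.random() < 0.2,
            "free_left": random.random(),
            "free_right": random.random()}

if __name__ == "__main__":
    random.seed(2)
    for cycle in range(10):        # the real system re-decides roughly every 0.2 s
        print(cycle, what_should_i_do_next(sense()))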

------------------------------

Date: Tue, 18 Oct 88 14:32:32 EDT
From: finin@PRC.Unisys.COM
Subject: Expert Systems in Predictive Toxicology - Arnott and Snow


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER


               EXPERT SYSTEMS IN PREDICTIVE TOXICOLOGY

                  Marilyn S. Arnott and  Ina B. Snow
                            LogiChem Inc.
                         Boyertown, PA  19512


A prototype system focusing on the possible teratogenicity of members
of one class of chemicals, aliphatic acids, has been developed and
validated.  The system evaluates any chemical which can be metabolized
to an aliphatic acid, then performs structure-activity relationship
(SAR) analysis on the resulting acid to determine its potential
teratogenicity.  The prototype was validated by comparing results from
the system to laboratory results from three types of teratogenesis
bioassays on 36 aliphatic acids.  The outcome cast doubt on the
usefulness of one of the bioassays, and, additionally, detected an
error in the published structure of one of the compounds tested.

We are presently in the early design phase of an expert system to
predict carcinogenic potential of chemicals.  The system is being
developed in cooperation with senior scientists at the EPA, who use
SAR analysis to evaluate the potential health hazards of new chemicals
under review by the agency.

                     Wednesday, October 19, 2:00
                         BIC Conference Room
                     Unisys Paoli Research Center
                              Paoli Pa

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: Wed, 19 Oct 88 16:53:17 EDT
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Church's Thesis, Connectionism, and Cognitive Science -
         Raymond J. Nelson


                         UNIVERSITY AT BUFFALO
                      STATE UNIVERSITY OF NEW YORK

                        BUFFALO LOGIC COLLOQUIUM
                  GRADUATE GROUP IN COGNITIVE SCIENCE
                                  and
   GRADUATE RESEARCH INITIATIVE IN COGNITIVE AND LINGUISTIC SCIENCES

                                PRESENT

                           RAYMOND J. NELSON

                  Truman Handy Professor of Philosophy
                    Case Western Reserve University

         CHURCH'S THESIS, CONNECTIONISM, AND COGNITIVE SCIENCE

                      Wednesday, November 16, 1988
                               4:00 P.M.
                     684 Baldy Hall, Amherst Campus

The Church-Turing Thesis (CT) is a  central  principle  of  contemporary
logic  and  computability  theory as well as of cognitive science (which
includes philosophy of mind).  As a mathematical  principle,  CT  states
that  any  effectively  computable  function of non-negative integers is
general recursive; in computer and cognitive-science  terms,  it  states
that  any  effectively algorithmic symbolic processing is Turing comput-
able, i.e., can be carried out by an  idealized  stored-program  digital
computer  (one with infinite memory that never fails or makes mistakes).
In this form, CT is essentially an empirical principle.

Many cognitive scientists have adopted the working hypothesis  that  the
mind/brain  (as  a  cognitive organ) is some sort of algorithmic symbol-
processor.  By CT, it follows that the mind/brain  is  (or  realizes)  a
system of recursive rules.  This may be interpreted in two ways, depend-
ing on two types of algorithm, free or embodied.  A  free  algorithm  is
represented  by  any  program; an embodied algorithm is one built into a
network (such as an ALU unit or a neuronal group).

CT is being challenged by connectionism, which asserts that many  cogni-
tive  processes,  including  perception  in  particular,  are not symbol
processes, but rather subsymbol  processes  of  entities  that  have  no
literal semantic interpretation.  These are parallel, distributed, asso-
ciative memory processes totally unlike  serial,  executive-driven,  von
Neumann  computers.   CT is also being challenged by evolutionism, which
is a form of connectionism that  denies  that  phylogenesis  produces  a
mind/brain  adapted  to  fixed  categories or distal stimuli (even fuzzy
ones).  Computers deal only with fixed  categories  (either  in  machine
language,   codes   such  as  ASCII,  or  declarations  in  higher-level
languages).  So, if connectionists are right, CT is  false:   there  are
processes that are provably (I will suggest a proof) effective and algo-
rithmic but are not Turing-computable.

However, if CT in empirical form is true, and if the processes  involved
are  effective, then connectionism or, in general, anti-computationalism
is false.

A direct argument that does not appeal to CT but that tends  to  confirm
it is that embodied algorithm networks as a matter of fact are parallel,
distributed, associative, and subsymbolic even in von Neumann computers,
not  to  say  super-multiprocessors.  Finally, I claim that the embodied
algorithm network models are not only _not_ antithetical to evolutionism
but  dovetail nicely with the theory that the mind/brain evolves through
the life of the individual.

REFERENCES

Edelman, G. (1987), _Neural Darwinism_ (Basic Books).
Nelson R. J. (1988), ``Connections among  Connections,''  _Behavioral  &
Brain Sci._ 11.
Smolensky, P. (1988), ``On  the  Proper  Treatment  of  Connectionism,''
_Behavioral & Brain Sci._ 11.

There will be an evening discussion at a time and place to be announced.

Contact John Corcoran, Department of Philosophy,  636-2444  for  further
information.

------------------------------

End of AIList Digest
********************

∂20-Oct-88  0002	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #108 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 20 Oct 88  00:02:22 PDT
Date: Wed 19 Oct 1988 20:30-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #108
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 20 Oct 1988     Volume 8 : Issue 108

 Intelligence / Consciousness Test for Machines (5 Messages)

----------------------------------------------------------------------

Date: 11 Oct 88 19:19:01 GMT
From: hp-sde!hpcuhb!hpindda!kmont@hplabs.hp.com  (Kevin Montgomery)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???


In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> When can a machine be considered a conscious entity?

May I suggest that this discussion be moved to talk.philosophy?

While it has many implications for AI (as do most of the more
philosophical arguments which take place in comp.ai), it has a
broader scope and should have a broader reader base.  A definition
of consciousness has a number of implications: the rights and
responsibilities of something deemed conscious, and whether mere
consciousness is a sufficient criterion for personhood.  (If
non-biological entities can be deemed conscious, if consciousness is
sufficient for personhood, and if constitutional rights are bestowed
upon persons "born" in a country, then these entities have all the
rights of the constitution in this country.)

The implication of this example would be that if machines (or animals,
or any non-human or non-biological entity) have rights, then one may
be arrested for murder if one should halt the "life process" of
such an entity either by killing an animal or by removing power
from a machine.

Moreover, the question of when humans are conscious (and thus are
arguably persons) has implications in the areas of abortion, euthanasia,
human rights, and other areas.

For these reasons, I suggest we drop over to talk.philosophy (VERY
low traffic over there, anyway), resolve these questions (if possible,
but doubtful), and post a response to the interested newsgroups (comp.ai,
talk.abortion, etc).

Rather than attacking all questions at once and getting quite confused
in the process, I suggest that we start with the question of whether
consciousness is a necessary and sufficient criterion for personhood.
In other words, in order to have rights (such as the right to life),
does something have to have consciousness?  Perhaps we should start
with a definition of consciousness and personhood, and revise these
as we see fit (would someone with a reputable dictionary handy post
one there?).

Note that there are implications for whether things such as anencephalic
babies (born with only the medulla; no higher brain areas exist),
commissurotomy (split-brain) patients, and even people we consider
to be knocked unconscious (or even sleeping!) have personhood
(and therefore rights).

                                kevin

------------------------------

Date: 13 Oct 88 12:20:12 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???


                            The Turtling Test

                                Barry Kort

                  (With apologies to Douglas Hofstadter)


       Achilles: Good morning Mr. T!

       Tortoise: Good day Achilles.  What a wonderful day for
                 touring the computer museum.

       Achilles: Yes, it's quite amazing to realize how far our
                 computer technology has come since the days of Von
                 Neumann and Turing.

       Tortoise: It's interesting that you mention Alan Turing, for
                 I've been doing some biographical research on him.
                 He is a most interesting and enigmatic character.

       Achilles: Biographical research?  That's a switch.  Usually
                 people like to talk about his Turing Test, in
                 which a human judge tries to distinguish which of
                 two individuals is the human and which is the
                 computer, based on their answers to questions
                 posed by the judge over a teletype link.  To tell
                 you the truth, I'm getting a little tired of
                 hearing people talk about it so much.

       Tortoise: You have a fine memory, my friend, but I'm afraid
                 you'll be disappointed when I tell you that the
                 Turing Test does come up in my work.

       Achilles: In that case, don't tell me.

       Tortoise: Fair enough.  Perhaps you would be interested to
                 know what Alan Turing would have done next if he
                 hadn't died so tragically in his prime.

       Achilles: That's an interesting idea, but of course it's
                 impossible to say.

       Tortoise: If you mean we'll never know for sure, I would
                 certainly agree.  But I have just come up with a
                 way to answer the question anyway.

       Achilles: Really?

       Tortoise: Really.  You see, I have just constructed a model
                 of Alan Turing's brain, based on a careful
                 examination of everything he read, saw, did, or
                 wrote about during his tragic career.

       Achilles: Everything?

       Tortoise: Well, not quite everything -- just the things I
                 know about from the archives and from his notes
                 and effects.  That's why it's just a model and not
                 an exact duplicate of his brain.  It would be a
                 perfect model if I could discover everything he
                 ever saw, learned, or discovered.

       Achilles: Amazing!

       Tortoise: Since Turing had a very logical mind, I merely
                 start with his accumulated knowledge and reason
                 logically to what he would have investigated next.
                 Interestingly, this leads to a possible hypothesis
                 explaining why Turing committed suicide.

       Achilles: Fantastic!  Let's hear your theory.

       Tortoise: A logical next step after devising the Turing Test
                 would be to give the formal definition of a Turing
                 Machine to computer `A' (which, since it's a
                 computer, happens to be a Turing Machine itself)
                 and ask it to decide if another system (call it
                 machine `B') is a Turing Machine.

       Achilles: I don't get it.  What is machine `A' supposed to
                 do to decide the question?

       Tortoise: Why, it merely devises a test which only a Turing
                 Machine could pass, such as a computation that a
                 lesser beast would choke on.  Then it administers
                 the Test to machine `B' to see how it handles the
                 challenge.

       Achilles: Are you sure that a Turing Machine knows how to
                 devise such a test in the first place?

       Tortoise: That's a good question.  I suppose it depends on
                 how the definition of a Turing Machine is stated.
                 Clearly, a good definition would be one which
                 states or implies a practical way to decide if an
                 arbitrary hunk of matter possesses the property of
                 being a Turing Machine.  In this case, it's safe
                 to assume that the problem was well-posed, meaning
                 that the definition was sufficiently complete.

       Achilles: So what happened next?

       Tortoise: You mean what does my model of Turing's brain
                 suggest as the next logical step?

       Achilles: Of course, Mr. T.  I quite forgot what level we
                 were operating on.

       Tortoise: Next, Machine `A' would be asked if Machine `A'
                 itself fit the definition of a Turing Machine!

       Achilles: Wow!  You mean you can ask a machine to examine
                 its own makeup?

       Tortoise: Why not?  In fact many modern computers have
                 built-in self diagnostic systems.  Why can't a
                 computer devise a diagnostic program to see what
                 kind of computer it is?  As long as it's given the
                 definition of a Turing Machine, it can administer
                 the test to itself and see if it passes.

       Achilles: Holy Holism!  Computers can become self-aware of
                 what they are?!

       Tortoise: That would seem to be the case.

       Achilles: What happens next?

       Tortoise: You tell me.

       Achilles: The Turing Machine tries the Turing Test on a
                 human.

       Tortoise: Very good.  And what is the outcome?

       Achilles: The human passes?

       Tortoise: Right!

       Achilles: So Alan Turing concludes that he's nothing more
                 than a Turing Machine, which makes him so
                 depressed he eventually commits suicide.

       Tortoise: Maybe.

       Achilles: What else could there be?

       Tortoise: Let's go back to your last conclusion.  You said,
                 "Turing concludes that he's nothing more than a
                 Turing Machine."

       Achilles: I don't follow your point.

       Tortoise: Suppose Turing wants to prove conclusively that he
                 was something more than "just a Turing Machine."

       Achilles: I see.  He had a Turing Machine in him, but he
                 wanted to know what else he was that was more than
                 just a machine.

       Tortoise: Right.  So he searched for some way to discover
                 how he differed from a machine in an important
                 way.

       Achilles: And he couldn't discover any way?

       Tortoise: Not necessarily.  He may have known of several
                 ways.  For example, he could have tried to fall in
                 love.

       Achilles: Why, falling in love is the easiest thing in the
                 world.

       Tortoise: Not if you try to do it.  Then it's impossible!

       Achilles: I see your point.

       Tortoise: In any event, there is no evidence that Turing
                 ever fell in love, even though he must have known
                 it was possible.  Maybe he didn't know that one
                 shouldn't try so hard.

       Achilles: So he committed suicide in despair?

       Tortoise: Maybe.

       Achilles: What else could there be?

       Tortoise: The last possibility that comes to mind is that
                 Turing suspected there was something he was
                 overlooking.

       Achilles: And what is that?

       Tortoise: Could a Turing Machine discover the properties of
                 a Turing Machine without being told?

       Achilles: Gee, I don't know.  But it could discover the
                 properties of another machine that it could do
                 experiments on.

       Tortoise: Would it ever think to do such experiments on
                 itself?

       Achilles: I don't know.  Does it even know what the word
                 "itself" points to?

       Tortoise: Who would have given it the idea of "self"?

       Achilles: I don't know.  It reminds me of Narcissus
                 discovering his reflection in a pool of water and
                 falling in love with himself.

       Tortoise: Well, I haven't finished my research yet, but I
                 suspect that a Turing Machine, without outside
                 assistance, could not discover the complete
                 definition of itself, nor would it think to ask
                 itself the question, "Am I a Turing Machine?" if
                 it were simply given the definition of one as a
                 mathematical abstraction.

       Achilles: In other words, if Alan Turing did ask himself the
                 question, "Am I (Alan Turing) a Turing Machine?"
                 the very act of posing the question proves he
                 isn't one!

       Tortoise: That's my conjecture.

       Achilles: So he committed suicide to prove he wasn't one,
                 because he didn't realize that he already had all
                 the evidence he needed to prove that he was
                 intellectually more complex than a mere Turing
                 Machine.

       Tortoise: Perhaps.

       Achilles: Well, I would be most interested to discover the
                 final answer when you complete your research on
                 this most interesting question.

       Tortoise: My friend, if we live long enough, we're bound to
                 find the answer.

       Achilles: Good day Mr. T!

       Tortoise: Good day Achilles.

------------------------------

Date: 13 Oct 88 18:43:22 GMT
From: dewey.soe.berkeley.edu!mkent@ucbvax.berkeley.edu  (Marty Kent)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> When can a machine be considered a conscious entity?

I think an important point here is exactly the idea that a machine (or
other entity) is generally -considered- to be conscious or not. In other
words, this judgement shouldn't be expected to reflect some deep -truth-
about the entity involved (as if it REALLY is or isn't conscious). It's
more a matter of the -usefulness- of the judgement: what does it buy you
to consider an entity conscious...

So a machine can be considered (by -you-) conscious any time
1) you yourself find it helpful to think this way, and
2) you're not aware of anything that violates this judgement.

If you really want to consider entities conscious, you can come up with a
workable definition of consciousness that'll include most anything (or, at
least,  exclude almost nothing). If you're really resistant to the idea,
you can keep pushing up the requirements until nothing and no one passes
your test.

Chief Dan George said "The human beings [his own tribe, of course :-)]
think -everything- is alive: earth, grass, trees, stones. The white man
thinks everything is dead. If things keep trying to act alive, the white
man will rub them out."


Marty Kent      Sixth Sense Research and Development
                415/642 0288    415/548 9129
                MKent@dewey.soe.berkeley.edu
                {uwvax, decvax, inhp4}!ucbvax!mkent%dewey.soe.berkeley.edu
Kent's heuristic: Look for it first where you'd most like to find it.

------------------------------

Date: 14 Oct 88 13:48:51 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Definition of thought

From the Hypercard Stack, "Semantic Network"...

                        * * *

Thinking is a rational form of information processing which
reduces the entropy or uncertainty of a knowledge base,
generates solutions to outstanding problems, and conceives
goal-oriented courses of action.

Antonym:  See "Worrying"

                        * * *

Worrying is an emotional form of information processing which
fails to reduce the entropy or uncertainty of a knowledge base,
fails to generate solutions to outstanding problems, or fails
to conceive goal-achieving courses of action.

Antonym:  See "Thinking"


--Barry Kort

------------------------------

Date: 18 Oct 88 06:38:37 GMT
From: leah!gbn474@bingvaxu.cc.binghamton.edu  (Gregory Newby)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???


(* sorry about typos/unreadability:  my /terminfo/regent100 file
 is rapidly approaching maximum entropy)
In article <3430002@hpindda.HP.COM>, kmont@hpindda.HP.COM (Kevin Montgomery)
writes:
> In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> > When can a machine be considered a conscious entity?
>
> May I suggest that this discussion be moved to talk.philosophy?
>
> While it has many implications to AI (as do most of the more
> philosophical arguments which take place in comp.ai), it has a
> broader scope and should have a broader reader base.

I would like to see this discussion carried through on comp.ai.
It seems to me that these issues are often not considered by scientists
working in ai, but should be.  And, it may be more useful to
take an "operational" approach in comp.ai, rathar than a philosophical
or metaphysical approach in talk.philosophy.

This topic has centered about the definition of consciousness, or
the testing of consciousness.

Turing (_Mind_, 1950) said:  "The only way to know that a machine is
thinking is to be that machine and feel oneself thinking." (paraphrase)

A better way of thinking about consciousness may be to consider
_self_ consciousness.  That is, is the entity in question capable
of considering its own self?

Traditional approaches to defining "intelligent behaviour" are
PERFORMANCE based.  The Turing test asks a machine to *simulate* a human.
  (as an aside:  how could a machine, which has none of the experience
  of a human, be expected to act as one?  Unless someone were to
  somehow 'hard-code' all of a human's experience in some computer system,
  but who would call that intelligence?)
Hofstadter (_Goedel, Escher, Bach_, p24) gives a list of functions as
criteria for intelligent behaviour which many of today's smart expert
systems can perform, but they certainly aren't intelligent!

If a machine is to be considered as "intelligent," or "conscious,"
no test will suffice.  It will be forced to make an argument on its
own behalf.

This argument must begin, "I am intelligent"

  (or, "I am conscious" --means the same thing, here)

The self concept has not, to my knowledge, been treated in the AI
literature.  (My thesis, "A self-concept based approach to artificial
intelligence, with a case study of the Galileo(tm) computer system,"
SUNY Albany, dealt with it, but I'm a social scientist.)

As Mead (see, for instance, _Social Psychology_) suggests, the
difference between lower animals and man is twofold:

1)  the self concept:  man may consider the self as an object, separate
from other objects and in relation to the environment.

2)  the generalized other:  man is able to consider the self as
seen by other selves.

The first one's relatively easy.  The second must be learned through
social interaction.


So, (if anyone's still reading)
What kind of definition of intelligence are we talking about here?
I would bet that for any performance criteria you can give me, if I
gave you a machine that could do it, the machine would not be considered
intelligent without also exhibiting a self-concept.

'Nuff said.

--newbs
  (
   gbnewby@rodan.acs.syr.edu
   gbn474@leah.albany.edu
  )

------------------------------

End of AIList Digest
********************

∂20-Oct-88  0248	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #109 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 20 Oct 88  02:48:40 PDT
Date: Wed 19 Oct 1988 20:34-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #109
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 20 Oct 1988     Volume 8 : Issue 109

 Philosophy:

  The Grand Challenge (3 Messages)
  What the brain is doing when it isn't thinking (3 Messages)

----------------------------------------------------------------------

Date: Wed, 12 Oct 88 15:29:08 pdt
From: Ray Allis <ray@BOEING.COM>
Subject: Grand Challenges

If AI is to make progress toward machines with common sense, we
should first rectify the preposterous inverted notion that AI is
somehow a subset of computer science, or call the research something
other than "artificial intelligence".  Computer science has nothing
whatever to say about much of what we call intelligent behavior,
particularly common sense.

Ray Allis
Boeing Computer Services-Commercial Airplane Support
CSNET: ray@boeing.com
UUCP:  bcsaic!ray

------------------------------

Date: 13 Oct 88 11:48:58 GMT
From: uwmcsd1!wsccs!dharvey@uunet.UU.NET (David Harvey)
Subject: Re: The Grand Challenge is Foolish


In a previous article, John McCarthy writes:
> [In reply to message sent Mon 26 Sep 1988 23:22-EDT.]
>
        < part of article omitted >

> If John Nagle thinks that "The lesson of the last five years seems to
> be that throwing money at AI is not enormously productive.", he is
> also confusing science with engineering.  It's like saying that the
> lesson of the last five years of astronomy has been unproductive.
> Progress in science is measured in longer periods than that.

Put more succinctly, the payoff of Science is (or should be) increased
understanding.  The payoff of Engineering on the other hand should be
a better widget, a way to accomplish what previously couldn't be done,
or a way to save money.  Too many people in our society have adopted
the narrow perspective that all human endeavors must produce a monetary
(or material) result.  Whatever happened to the Renaissance ideal of
knowledge for knowledge's sake?  I am personally fascinated about what
we have recently learned about the other planets in our solar system.
Does that mean we must reap some sort of material gain out of the
endeavor?  If we use this type of criterion as our final baseline, we
may be missing out on some very interesting discoveries.  If I read
John McCarthy correctly, we are just short-sighted enough not to know
whether they will turn into "Engineering" ideas in the future.  Kudos to
him for pointing this out.

dharvey@wsccs

The only thing you can know for sure,
is that you can't know anything for sure.

------------------------------

Date: Wed, 19 Oct 88 14:03 EDT
From: PGOETZ%LOYVAX.BITNET@MITVMA.MIT.EDU
Subject: Neural I/O

Two important messages which were ignored:

Quote #1:

>From: peregrine!zardoz!dhw68k!feedme!doug@jpl-elroy.arpa  (Doug Salot)
>
>If we were to accept the premise that Big Science is a Good Thing,
>what should our one big goal?  I personally think an effort to
>develop a true man-machine interface (i.e., neural i/o) would be
>the most beneficial in terms of both applications and as a driving
>force for several disciplines.

Quote #2:

>markh@csd4.milw.wisc.edu  (Mark William Hopkins): Why?
>
>       The first thing that comes to mind is our current situation
>as regards science -- its increasing specialization.  Most people will agree
>that this is a trend that has gone way too far ... to the extent that we may
>have sacrificed global perspective and competence in our specialists; and
>further that it is a trend that needs to be reversed.  Yet fewer would dare
>to suggest that we can overcome the problem.  I dare.  One of the most
>important functions of AI will be to amplify our own intelligence.  In fact,
>I believe that time is upon us that this symbiotic relation between human and
>potentially intelligent machine is triggering an evolutionary change in our
>species as far as its cognitive abilities are concerned.

Here are some possibilities for research:

Neural format:  How the brain stores/retrieves/manipulates
                   data/knowledge/etc., with the goal of learning to
                   hook into this system

Neural input:   Camera eyes for the blind
                Artificial ears for the deaf
                Generic data input
                Other

Neural output:  Direct computer interface of some type
                Neural communications/control systems for quadriplegics

        I'm looking forward to the day when we'll have little
calculator/calendar/watches interfaced with our brains which will tell us
the time, notify us of appointments, and do arithmetic.  Beyond that, as
noted in Mark Hopkins' letter, it may be possible for devices to store &
recall information for us (a big data bank which can communicate to your
brain all those things we now spend years memorizing - foreign words,
the effects of medical drugs, mathematical formulae, chemical compositions
of materials, laws & equations of physics, the Gettysburg Address,
the complete works of Pink Floyd, etc.)  Note that such data might be
manually entered at a terminal.  (Also note that it might be nearly as good
to carry around a small computer with intelligent search capabilities -
provided they were allowed in exams....)

        Does anyone know:
                how realistic such hopes are?
                what work is being done towards them?
                from what discipline (computer science, biology, medical
                   engineering,...)
                how soon (in decades) advancements might be made?
                any graduate programs that touch on this (e.g., the
                   MIT cognitive science dept.)?

        I gather that a major problem is that those little neurons
are too darn small & numerous to link up to...

Phil Goetz                      Nord: What's that sticking out of your hat?
PGOETZ@LOYVAX.bitnet            Bert: Oh, that's my optical drive.

FRED'S BRAIN-MATES: Here's our PhD model for $100,000... our MS for $50,000...
   our BS for $25,000... and our MBA for $1.75!

                        Shatner & Nimoy in '92!

------------------------------

Date: 16 Oct 88 21:07:06 GMT
From: buengc!bph@bu-cs.bu.edu  (Blair P. Houghton)
Subject: Re: Here's one ...

In article <409@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>>>
>>>Have you ever thought about what the brain is doing between thoughts?

In article <1116@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
>>
>>Sleeping.

In article <1614@cadre.dsl.PITTSBURGH.EDU> cww@cadre.dsl.pittsburgh.edu.UUCP
(Charles William Webster) writes:
>
>Do you mean that between the rapid succession of conscious "moments"
>is a sleeplike state, or that there is nothing between these "moments",
>except longer periods of sleep?  Much current research on dreaming
>is converging on the generalization that dreaming is a kind of
>"consciousness".  If this were true, then what is between dream
>thoughts?  You may have been joking but I think it would be fascinating
>if the the brain was sleeping between "thoughts".   But would it be
>the sleep of dreams or the sleep of little deaths?

I meant that there is no "between thoughts" except for sleep,
especially the deeper sleep, not the kind associated with partial
consciousness, such as REM sleep.

                                --Blair
                                  "...with a hedge for whatever
                                   non-thinking states
                                   meditationalists are able
                                   to achieve..."

------------------------------

Date: 18 Oct 88 05:27:29 GMT
From: leah!gbn474@bingvaxu.cc.binghamton.edu  (Gregory Newby)
Subject: Re: Here's one ...

In a previous article, Blair P. Houghton writes:
> >>>Have you ever thought about what the brain is doing between thoughts?
> >>Sleeping.
>
> >
> >Do you mean that between the rapid succession of conscious "moments"
> >is a sleeplike state, or that there is nothing between these "moments",
> >except longer periods of sleep?

Research result (unpublished) from a recent conference:

Participants were instructed to watch a string of blinking holiday
lights (the xmas tree kind, which blink more or less randomly).
A Beatles song was played (I forget which one).  When polled
afterwards, most participants reported seeing the lights blink
on and off IN RHYTHM with the music.

Possible conclusion:  consciousness, like most things we can name
in nature, oscillates.

I leave it for your consideration.

--newbs
  (
   gbnewby@rodan.acs.syr.edu
   gbn474@leah.albany.edu
  )

------------------------------

Date: 18 Oct 88 13:52:33 GMT
From: mnetor!utzoo!utgpu!water!watmath!watdcsu!smann@uunet.uu.net 
      (Shannon Mann - I.S.er)
Subject: Re: Here's one ...

In a previous article, Gregory Newby writes:
>Research result (unpublished) from a recent conference:
>
>Participants were instructed to watch a string of blinking holiday
>lights (the xmas tree kind, which blink more or less randomly).
>A beatles' song was played (I forget which one).  When polled
>afterwards, most participants reported seeing the lights blink
>on and off IN RYTHM to the music.
>
>Possible conclusion:  consciousness, like most things we can name
>in nature, oscillates.
>
Other possible conclusion:  we unconsciously attach meaning to apparently
random patterns, i.e. we hear the music, we see the lights, we notice that
some of the lights are lit on the beat, and we disregard the rest as noise.
Hence, we have a pattern where none existed before.  Sounds like pattern-
recognition to me. :-)
Seriously, I believe Neuro-Linguistics uses tapping or rubbing motions to
influence the pace of communication between two people.  Don't know why;
it just seems to work.

>I leave it for your consideration.
>
>--newbs

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

'I have no brain, and I must think...' - An Omynous
'If I don't think, AM I' - Another Omynous

P.S.  I'd like to know what 'oscillating consciousness' is supposed to mean.

------------------------------

End of AIList Digest
********************

∂20-Oct-88  0457	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #110 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 20 Oct 88  04:57:35 PDT
Date: Wed 19 Oct 1988 20:43-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #110
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 20 Oct 1988     Volume 8 : Issue 110

 Queries:

  Model curriculum in AI
  Inductive Tools - RuleMaster or First Class
  Semantic Databases
  PC-based Expert System shells
  WARPLAN
  Daryl Pregibon's address?
  Definition of Codesignation
  Today-ES
  C-Linkable Expertshells (also 3 responses)

----------------------------------------------------------------------

Date: Thu, 6 Oct 88 09:00 EDT
From: INDUGERD%CNEDCU51.BITNET@CUNYVM.CUNY.EDU
Subject: Model curriculum in AI

Hello,

As AI teaching is developing in Swiss universities, I would like to
establish a kind of model curriculum for AI, at both the undergraduate and
graduate levels.

So I would greatly appreciate suggestions and advice from people involved
in AI teaching, and would like to know what courses are actually taught at
universities that have programs in AI.


Thank you.

Philippe Dugerdil
Institute of Informatics
Univ.of Neuchatel
Switzerland

Bitnet: indugerd@cnedcu51.

------------------------------

Date: 7 Oct 88 09:32:49 GMT
From: mcvax!hp4nl!dnlunx!marlies@uunet.uu.net  (Marlies van
      Steenbergen)
Subject: Inductive Tools - RuleMaster or First Class


Hello,

Is there anyone who has been using the inductive tools RuleMaster or First
Class? I have some questions about these tools and I am looking for someone
who has had some experience with them and might be able to answer my
questions.

I am investigating the possibilities of induction and the differences between
the various tools available. I have posted a more general request for
information about inductive tools before and received some very helpful
information, but not from anyone who has used the tools mentioned above.

------------------------------

Date: 12 Oct 88 13:15:50 GMT
From: mcvax!hp4nl!dnlunx!adriana@uunet.uu.net  (Florescu A.)
Subject: Semantic Databases


Hello everyone!

I am investigating the possibilities of semantic and object-oriented
databases. After reading some introductory articles about them, it is still
not clear to me what the difference is between these two types of
databases and what the state of the art in this area is. I have read about
VBASE, but I would like to know more about other commercially available
databases of this kind and their ability to support really large amounts
of data.

I am also interested in statistical methods in inductive reasoning. I have
been trying to find some literature about this subject, but the most recent
article I could find had been published in 1984!!! That's quite old, isn't it?

I would be very grateful to everyone sending me some tips about books,
articles and existing tools or any remarks about these subjects. Any
information that can help me is welcome.

My address is : !mcvax!dnlunx!adriana

Thanks!

                                                               Adriana

------------------------------

Date: Wed, 12 Oct 88 11:33 CDT
From: <A0J5791%TAMSTAR.BITNET@MITVMA.MIT.EDU>
Subject: PC-based Expert System shells

Has anyone compiled a survey of PC-based Expert System shells recently?
                                               ...Arshad Jamil
                                                Dept. of Ind. Eng.
                                              Texas A&M University
                                              College Station,Texas 77843

------------------------------

Date: 13 Oct 88 22:30:59 GMT
From: dalcs!aucs!850153d@uunet.uu.net  (Jules R. d'Entremont)
Subject: WARPLAN


     I am doing some research into D.H.D. Warren's planning program
WARPLAN and would like to know where I can get more information.  If
you have any references on WARPLAN I would appreciate it if you could
e-mail them to me at 850153d@aucs.UUCP.

            Jules d'Entremont         Acadia University, N.S.
            {seismo|watmath|utai|garfield}!dalcs!aucs!850153d

------------------------------

Date: Fri, 14 Oct 88 11:19:57 PDT
From: "Leonard" <XT.A08%STANFORD.BITNET@MITVMA.MIT.EDU>
Subject: Daryl Pregibon's address?

  To : AIList (ailist@ai.ai.mit.edu)
Date : 14:X:88

   I would be grateful for Daryl Pregibon's address at Bell Labs.
Thanks very much.

--------------------------------------------------------------------
Leonard Lutomski          Center for Applied Artificial Intelligence
                          American Institutes for Research
xt.a08@stanford           1791 Arastradero Road
415-493-3550              Palo Alto, CA 94302

------------------------------

Date: Tue, 18 Oct 88 13:56 EDT
From: "Carol Ann, Tower220, 5-0609" <BROVERMAN@cs.umass.EDU>
Subject: Definition of Codesignation


Can someone tell me the original source of the terms "codesignation,"
"possible codesignation," and "necessary codesignation" (equiv. to
unification)?

My source is Chapman's AI Journal paper, "Planning for Conjunctive
Goals," but I am interested in the original introduction of these terms
and the precise difference between "unification" and "possible codesignation."

-Carol Broverman
(broverman@cs.umass.edu)

------------------------------

Date: 18 Oct 88 22:21:03 GMT
From: sundc!potomac!grover@seismo.css.gov  (Mark D. Grover)
Subject: Today-ES

I would like to know who sells the commercial machine learning tool
known as Today-ES.  I understand it is based on Quinlan's C4 program.
An address and phone number of the company would be appreciated.

- MDG -
--
Mark D. Grover (grover@Potomac.ADS.COM)
Advanced Decision Systems          1500 Wilson Blvd #512; Arlington, VA 22209
703-243-1611                       "Back off, man.  I'm a scientist."

------------------------------

Date: 17 Oct 88 10:27:48 GMT
From: mcvax!hp4nl!tnoibbc!sp@uunet.uu.net  (Silvain Piree)
Subject: C-Linkable Expertshells

I'm designing a system that will have modules programmed in C and others
programmed using an expertshell. My aim is to fully integrate all modules and
therefore I need to be able to link the expertshell with the C code.

I know of two shells that offer this capability :

- CxPERT from Software Plus  ( Generates C source )

- KES    from Software Architecture & Engineering, Inc. ( Linked with C )

Does anyone know any other expertshells that can be integrated with C ?

Please e-mail to me and I'll summarize to the net.


--
Silvain Piree: TNO - IBBC                   USENET : sp@tnosel
             : lange kleiweg 5              UUCP   : ..!mcvax!tnosel!sp
             : 2288 GH  Rijswijk
             : the Netherlands              VOICE  : +31 15 606405

------------------------------

Date: 18 Oct 88 19:56:48 GMT
From: msimpson@teknowledge-vaxc.arpa  (Mike Simpson)
Subject: Re: C-Linkable Expertshells


Teknowledge's M.1 and COPERNICUS shells are fully integrable with C
(they're written in C, in fact).
--
Mike Simpson            Teknowledge, Inc.       Los Angeles, CA
Internet/Domain:msimpson@teknowledge-vaxc.arpa
Usenet: ...!{decwrl,harvard,sdcsvax,sri-unix,ucbvax,uw-beaver,uunet}!
                teknowledge-vaxc.arpa!msimpson

------------------------------

Date: 19 Oct 88 12:37:00 GMT
From: osu-cis!dsacg1!ntm1169@ohio-state.arpa  (Mott Given)
Subject: Re: C-Linkable Expertshells

From article <852@tnoibbc.UUCP>, by sp@tnoibbc.UUCP (Silvain Piree):
> I'm designing a system that will have modules programmed in C and others
> programmed using an expertshell. My aim is to fully integrate all modules and
> therefore I need to be able to link the expertshell with the C code.

Another software package that you can use is CLIPS, C Language Integrated
Production System.  It is a LISP-like language in appearance that primarily
uses forward chaining with the Rete algorithm.  It allows you to embed an AI
application in a C program.  It was developed under a contract for NASA and
is available for use through COSMIC, 404-542-3265 (for around $200 I believe).
It was not intended to be a commercial product, so it is not as
user-friendly as one would expect from a commercial product.
--
Mott Given @ Defense Logistics Agency ,DSAC-TMP, P.O. Box 1605,
            Systems Automation Center, Columbus, OH 43216-5002
UUCP:  mgiven%dsacg1.uucp@daitc.arpa              I speak for myself
Phone:       614-238-9431

------------------------------

Date: 19 Oct 88 16:05:54 GMT
From: mailrus!ncar!dinl!noren@ohio-state.arpa  (Charles Noren)
Subject: Re: C-Linkable Expertshells

G2 by Gensym (Cambridge, MA) can link with C code.
Gensym phone: (617) 547-9606.
--
Chuck Noren
NET:     ncar!dinl!noren
US-MAIL: Martin Marietta I&CS, MS XL8058, P.O. Box 1260,
         Denver, CO 80201-1260
Phone:   (303) 971-7930

------------------------------

End of AIList Digest
********************

∂20-Oct-88  0733	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #111 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 20 Oct 88  07:32:49 PDT
Date: Wed 19 Oct 1988 20:47-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #111
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 20 Oct 1988     Volume 8 : Issue 111

 Responses:

  AI applications to building design and construction
  Info on PROTEGE/RIME
  CLOS & CommonLOOPS (2 Messages)
  PFL
  Concept Learning & ID3 (Quinlan) - in prolog   (3 Messages)
  Robotics; Universities offering
  Classifier Systems
  Expert systems and weather forecasting
  AAAI-88 ordering info

----------------------------------------------------------------------

Date: 11 Oct 88 08:40:01 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net  (Gilbert
      Cockton)
Subject: Re: AI applications to building design and construction

Professor Geoffrey Trimble at Loughborough University of Technology has
been associated with a number of construction industry expert systems.
He has a chapter in a forthcoming book on Knowledge Elicitation edited
by Dan Diaper of Liverpool Polytechnic and published by Ellis Horwood.

Geoffrey Trimble is Professor of Construction Management in the
Department of Civil Engineering.

Loughborough University of Technology is in Loughborough,
Leicestershire, LE11 3TU, UK.

Domain experts have been heavily involved in the coding of some of
these systems, as well as the knowledge elicitation.

Commitment from sponsors to use a system has proved to be a major factor
in the successful completion of a system.  Not surprising, but important
to anyone developing something in a research setting.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: Wed, 12 Oct 88 09:01:12 PDT
From: Mark Musen <MUSEN@SUMEX-AIM.Stanford.EDU>
Subject: Re: Info on PROTEGE/RIME

  PROTEGE was the subject of my Ph.D. dissertation, and represents a
metalevel knowledge-acquisition tool that generates other
knowledge-acquisition tools that are custom-tailored for particular
application areas.  There are not yet any journal articles in print
describing the work, although a paper on PROTEGE appears in the
Proceedings of the 1988 Workshop on Knowledge Acquisition for Knowledge
Based Systems (Banff, Canada).  The most thorough description of PROTEGE
is in my dissertation ("Generation of Model-Based Knowledge-Acquisition
Tools for Clinical-Trial Advice Systems," Stanford University, January,
1988).  Although I have no more copies available for distribution, the
dissertation can be ordered from University Microfilms (phone
800-521-0600).

  A good description of the RIME methodology appears as a chapter by Judy
Bachant in a collection just edited by Sandy Marcus entitled "Automating
Knowledge Acquisition for Expert Systems" (Kluwer, 1988).

Mark Musen
Medical Computer Science Group
Knowledge Systems Laboratory
Stanford University

------------------------------

Date: Wed, 12 Oct 88 16:18 EDT
From: Brad Miller <miller@CS.ROCHESTER.EDU>
Reply-to: miller@CS.ROCHESTER.EDU
Subject: Re: CLOS & CommonLOOPS

    Date: Mon, 10 Oct 88 19:03 O
    From: Antti Ylikoski tel +358 0 457 2704
          <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>

    I would be very grateful if someone could let me know if an academic
    license for the CLOS, the Common Lisp Object System, is available.

    I would also like to know whom to contact to obtain it and the price.

Here's what I have in my distribution directory on PCL also known as CLOS:
I hope it helps.

****
Here is the standard information about PCL.

Portable CommonLoops (PCL) started out as an implementation of
CommonLoops written entirely in CommonLisp.  It is in the process of
being converted to an implementation of CLOS.  Currently it implements
only a subset of the CLOS specification.  Unfortunately, there is no
detailed description of the differences between PCL and the CLOS
specification; the source code is often the best documentation.

  Currently, PCL runs in the following implementations of
  Common Lisp:

   Xerox Common Lisp (Lyric Release)
   Symbolics (Release 7.2)
   Lucid (2.0)
   CMU
   VAXLisp (2.0)
   ExCL (Franz)
   Ibuki Common Lisp (01/01)
   HP Common Lisp
   TI
   Golden Common Lisp
   Pyramid Lisp
   Coral Common Lisp (Allegro)

There are several ways of obtaining a copy of PCL.

*** Arpanet Access to PCL ***

The primary way of getting PCL is by Arpanet FTP.

The files are stored on arisia.xerox.com.  You can copy them using
anonymous FTP (username "anonymous", password "anonymous"). There are
several directories which are of interest:

/pcl

This directory contains the PCL sources as well as some rudimentary
documentation (including this file).

In the directory /pcl the files:

readme.text   READ IT

notes.text    contains notes about the current state of PCL, and some
              instructions for installing PCL at your site.  You should
              read this file whenever you get a new version of PCL.

get-pcl.text  contains the latest draft of this message


/pcl/doc

This directory contains TeX source files for the most recent draft of
the CLOS specification.  There are TeX source files for two documents
called concep.tex and functi.tex.  These correspond to chapter 1 and 2
of the CLOS specification.


/pcl/archive

This directory contains the joint archives of two important mailings
lists:

  CommonLoops@Xerox.com

    is the mailing list for all PCL users.  It carries announcements
    of new releases of PCL, bug reports and fixes, and general advice
    about how to use PCL and CLOS.

  Common-Lisp-Object-System@Sail.Stanford.edu

    is a small mailing list used by the designers of CLOS.

The file cloops.text is always the newest of the archive files.

The file cloops1.text is the oldest of the archive files.  Higher
numbered versions are more recent versions of the files.


*** Xerox Internet Access to PCL ***

Xerox XNS users can get PCL from {NB:PARC:XEROX}<PCL>


*** Getting a copy of PCL from ISI ***

ISI distributes PCL with its Common Lisp distribution.  For further
information about this send a message to ACTION@ISI.EDU.



Send any comments, bug-reports or suggestions for improvements to:

   CommonLoops.pa@Xerox.com

Send mailing list requests or other administrative stuff to:

  CommonLoops-Request@Xerox.com


Thanks for your interest in PCL.
----
Brad Miller             U. Rochester Comp Sci Dept.
miller@cs.rochester.edu {...allegra!rochester!miller}

------------------------------

Date: 13 Oct 88 09:47 PDT
From: hdavis.pa@Xerox.COM
Subject: CLOS & CommonLOOPS

You request an academic license for CLOS.

1. CLOS is the (now accepted) standard specification for an object-oriented
extension to CommonLisp.  In fact, the X3J13 committee has made it a
standard part of CommonLisp.  It is not a particular product to be sold or
licensed.

2. The only currently available implementation of CLOS is called PCL, which
was (and is being) developed at Xerox PARC by Gregor Kiczales.  You can ftp
it from arisia.xerox.com via anonymous login.  This implementation is in
the public domain; no licensing agreements or payments are needed.  Since
PCL was developed by Xerox, you are required to keep the copyright notices
on all the files.

3. The CLOS distribution list is commonloops.pa@xerox.com.  Join up!

        -- Harley
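
For readers who have not seen CLOS, here is a minimal sketch of the kind
of code PCL accepts (the class, slot, and method names are made up purely
for illustration):

;; A class with two slots and accessors, plus a method on it.
(defclass rectangle ()
  ((width  :initarg :width  :accessor width)
   (height :initarg :height :accessor height)))

;; DEFMETHOD creates the generic function AREA if it does not exist yet.
(defmethod area ((r rectangle))
  (* (width r) (height r)))

;; (area (make-instance 'rectangle :width 3 :height 4))  => 12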

------------------------------

Date: Thu, 13 Oct 88 07:08 MDT
From: WOODL@BYUVAX.BITNET
Subject: re: PFL

In answer to the query about Finnin's frame rep language: I downloaded it
from CompuServe and have used it on a Mac with ExperCommon Lisp, and I also
have it running on an HP9000-350 in Common Lisp.  If you look in the front
of AI Magazine, it will give you the CompuServe account.
Larry Wood, Brigham Young University

------------------------------

Date: 15 Oct 88 01:27:52 GMT
From: cs.utexas.edu!sm.unisys.com!aero!mcguire@ohio-state.arpa  (Rod
      McGuire)
Subject: Re: Concept Learning & ID3 (Quinlan) - in prolog

In article <395@uiag.UUCP> gerrit@uiag.UUCP (Cap Gerrit) writes:
>Does anybody out there in the world has an implementation of
>the ID3 algorithm of Quinlan

Prolog is one of the worst possible languages in which to write the
ID3 algorithm. I don't know if it can be written to be efficient.
The problem is that one needs to count up statistics in parallel and
then somewhat randomly access these statistics.  While in prolog it
is possible to kludge up arrays with mutable elements, I will bet
that the overhead of this technique makes a prolog implementation
slower by a factor of at least 100 compared to, say, a fortran implementation.
Since the ID3 algorithm is usually applied to a moderate amount of
data (let's say, to construct a tree discriminating 10 classes from
analysis of 5000 attribute vectors each with 20 attributes that can
take on 1 of 5 values), I think that this performance difference can
make a prolog implementation unusable.  Also, I think any prolog version
that strives for efficiency is likely to be ugly and far removed
from the functional specification of the algorithm.

However I would love to see an analysis that proves me wrong.  Below,
I present in lisp the central part of the ID3 algorithm - the
definition of the metric B(a U) which gives a value for the goodness
of attribute "a" as a discriminant for the set of
class-attribute-vectors "U". Following that is an array-based
implementation for the time-consuming parts.

Let U be a set of class-attribute-vectors where each element "u" is
a "cav" data-structure defined as:

 (cav-class u) = c, the class determined by vector u.
                 (range 1 to nc)
 (cav-av u a) = v, the value for attribute a in vector u.
                (range 1 to nv)

; the metric B for splitting U on attribute "a" is defined as
; ((sum (i lo hi) ...) here stands for summation of the body for i = lo to hi):
(defun B (U a)
  (/ (sum (v 1 nv)         ; sum for v=1 to nv
       (-  (sum (c 1 nc)
             (* (N c a v U)
                (log (N c a v U))))
           (* (S a v U)
              (log (S a v U)))))
     (size-of U)))

where
  (S a v U) = number of elements u in U s.t.
      (cav-av u a) = v
and
  (N c a v U) = number of elements u in U s.t.
      (cav-av u a) = v
    & (cav-class u) = c

In order to avoid processing the elements in U over and over again,
it is reasonable to pre-compute N (as below) and define S in terms of N.

; classes, attributes and values are numbered from 1, so leave room for
; index 0 and start all counts at zero
(defvar N (make-array (list (1+ nc) (1+ na) (1+ nv)) :initial-element 0))

(loop for u in U
  do (loop for a from 1 to na
       do (incf (aref N (cav-class u) a (cav-av u a)))))
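
S and B can then be read straight off the pre-computed counts without
another pass over U.  The following rough completion of the sketch assumes
the 1-based counts array above, invents the names xlog-x, s-count and
b-metric, and takes 0*log(0) to be 0:

(defun xlog-x (x)
  ;; convention: 0 * log(0) = 0
  (if (zerop x) 0 (* x (log x))))

(defun s-count (counts a v nc)
  ;; S(a,v,U): how many vectors have value v for attribute a
  (loop for c from 1 to nc sum (aref counts c a v)))

(defun b-metric (counts a nc nv size-of-U)
  ;; B(a,U) as defined above, computed from the tallied counts
  (/ (loop for v from 1 to nv
           sum (- (loop for c from 1 to nc
                        sum (xlog-x (aref counts c a v)))
                  (xlog-x (s-count counts a v nc))))
     size-of-U))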

------------------------------

Date: 16 Oct 88 01:39:59 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Concept Learning & ID3 (Quinlan) - in prolog

In article <39500@aero.ARPA> mcguire@aero.aerospace.org (Rod McGuire) writes:
>In article <395@uiag.UUCP> gerrit@uiag.UUCP (Cap Gerrit) writes:
>>Does anybody out there in the world has an implementation of
>>the ID3 algorithm of Quinlan


>Prolog is one of the worst possible languages in which to write the
>ID3 algorithm. I don't know if it can be written to be efficient.
>The problem is that one needs to count up statistics in parallel and
>then somewhat randomly access these statistics.  While in prolog it
>is possible to kludge up arrays with mutable elements,

There is not the slightest need for mutable arrays in an implementation
of ID3 or similar algorithms, and Prolog is not at all a poor choice.
The Iterative Dichotomiser involves two kinds of steps:

    (1) making a sweep through the training set collecting a random
        sample of *incorrectly predicted* examples to add to the
        "window" (this is the "Iterative" part)
    (2) doing a kind of back-to-front radix sort on the contents of
        the "window" to turn it into a decision tree (this is the
        "Dichotomiser" part)

>Since the ID3 algorithm is usually applied to a moderate amount of
>data (let's say, to construct a tree discriminating 10 classes from
>analysis of 5000 attribute vectors each with 20 attributes that can

If you are working with tiny training sets like that, you don't need
ID3.  Quinlan's innovation was the "windowing" technique -- forming
decision trees is nothing new, you will even find a tiny Fortran
implementation in Algorithm AS 165, JRSS -- which permitted him to
work with training sets having millions of examples.  The "windowing"
idea can be applied to many induction schemes:

    {Initialise}
        set the window to a random sample of N1 examples.
    {Induce}
        induce a "rule" from the current window.
    {Evaluate}
        make a pass through the complete training set,
        adding a random sample of N2 examples which are incorrectly
        classified (if fewer than N2 misclassifications, take all).
        If performance was adequate, stop.
    {Iterate}
        Go back to {Induce}.
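
A rough Lisp rendering of that scheme, just to pin down the control flow
(induce-tree, classify, example-class and random-sample are stand-in names
for whatever induction engine and sampling routine is at hand, and the
stopping test is simplified to "no misclassifications"):

(defun induce-by-windowing (training-set n1 n2)
  (let ((window (random-sample training-set n1)))          ; {Initialise}
    (loop
      (let* ((rule (induce-tree window))                   ; {Induce}
             (misses (remove-if                            ; {Evaluate}
                       (lambda (example)
                         (eql (classify rule example)
                              (example-class example)))
                       training-set)))
        (when (null misses)
          (return rule))
        (setf window                                       ; {Iterate}
              (append window
                      (random-sample misses (min n2 (length misses)))))))))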

Two references:
        "Discovering rules by induction from large collections of examples",
        Quinlan, J.R.
        (in) Expert Systems in the micro-electronic age, (Ed. D. Michie)
        Edinburgh University Press, 1979

        "Learning efficient classification procedures",
        Quinlan, J.R.
        (in) Machine learning: an artificial intelligence approach
        Eds Michalski, Carbonell, & Mitchell
        Tioga press, 1983
The algorithm descriptions in these articles are quite clear.

------------------------------

Date: 18 Oct 88 16:27:45 GMT
From: mist!tgd@cs.orst.edu  (Tom Dietterich)
Subject: Re: Concept Learning & ID3 (Quinlan) - in prolog

There is evidence that the windowing feature of ID3 does not provide
much benefit.  Consult the following paper for details:

Wirth, J. and Catlett, J. (1988).  Experiments on the costs and
benefits of windowing in ID3.  In Proceedings of the Fifth
International Conference on Machine Learning, Ann Arbor, MI.
Available from Morgan-Kaufmann, Inc, Los Altos, CA.  87--99.

Here is the abstract:

"Quinlan's machine learning system ID3 uses a method called windowing
to deal economically with large training sets.  This paper describes a
series of experiments performed to investigate the merits of this
technique.  In nearly every experiment, the use of windowing
considerably increased the CPU requirements of ID3, but produced no
significant benefits.  We conclude that in noisy domains (where ID3 is
now commonly used), windowing should be avoided."

The paper reports several studies involving training sets as large as
20,000 examples.  The authors state that if you have the physical
memory to store the examples, it is best to avoid windowing.
Windowing seems to work best on noise-free training sets where there
are many redundant features.  These turn out to be rather uncommon
although the initial domains in which ID3 was developed had these properties.


--Tom Dietterich
tgd@cs.orst.edu
Department of Computer Science
Oregon State University
Corvallis, OR 97331

------------------------------

Date: 17 Oct 88 15:01:53 GMT
From: att!mtuxo!rsn@bloom-beacon.mit.edu  (XMRH2-S.NAGARAJ)
Subject: ROBOTICS; Universities offering


I am still getting e-mail for the above request.  Since there were quite
a few requests, I decided to post a summary of the responses to my request
seeking names of US universities offering graduate degrees in Robotics.

The list (summary) provided below is a partial list, since there was
considerable duplication in the info that I received.

Raj Nagaraj
>

Add the University of Southern California to the list.  Its Computer Eng
department offers a combined degree in Artificial Intelligence and Robotics
and the Computer Eng dept is ranked #3 in the nation by IEEE. I know I'm
a biased alumnus :-)

<

>
Raj,

I have been working in Robotics for a few years, and from
what I know, there is no university that offers a degree in
Robotics, although the following institutions have very strong
programs and research in Robotics.

MIT, Stanford,  CMU, Purdue,  Univ. Of. Michigan,  RPI, USC,
Univ. of Texas at Austin, Univ. Penn.


P.S.: Above list is by no means the final word on standing of
      various schools in Robotics and there may even be some
<

Columbia University Computer Science Department has a very active and
prominent Robotics group

>
 You might want to check out the University of Texas at Arlington...
 (Dallas Fort Worth, Tx area).  When I was a grad student there several
 years ago, they got a *LARGE* grant to start a robotics facility,
 with specific staffing in robotics.  Don't know what the status is now,
 though.
<
>
try carnegie-mellon.
<
>

One university is,

Carnegie Mellon University through the dept of Civil engineering.

The address is

Dept of Civil engg
Porter Hall
Carnegie Mellon Univ
Pittsburgh
PA 15213

<
>

You'll probably get this suggestion from others: check out Cornell.
The Robotics program here is very strong, the project headed by John
Hopcroft (1986 Turing Award).  The emphasis seems to be on solid
modeling, motion planning and machine vision (judged by faculty
representation -- the project also supports some research associates
each year who expand the scope of subjects).

Disclaimer: I'm not even in Robotics, I'm in machine learning and I
know they're doing good robotics work!

<
>

I saw the list posted on the bulletin board in the Electrical Engineering
department a while back, so I don't remember. However some of the schools with
the top comp eng/electrical eng departments were :

Berkeley
Stanford
USC
MIT
UCLA
Caltech

I don't remember the rest.

<

------------------------------

Date: Mon, 17 Oct 88 11:12:13 PDT
From: rik%cs@ucsd.edu (Rik Belew)
Subject: Classifier Systems

  Date: 11 Oct 88 14:05:20 GMT
  From: steinmetz!boston!powell@itsgw.rpi.edu  (Powell)
  Subject: Classifier system software packages

  Recently, I have read some interesting articles on induction and classifier
  systems. To better understand their capabilities and functionalities,
  I am looking for a free, classifier software package to experiment with.

Check with Rick Riolo at the Univ. Michigan. He has developed a
comprehensive, portable version of Holland's Classifier System
called CFS-C. I think you can reach him as Rick_Riolo@um.cc.umich.edu .

Richard K. Belew

        rik%cs@ucsd.edu

        Assistant Professor
        CSE Department  (C-014)
        UCSD
        San Diego, CA 92093
        619 / 534-2601 or  534-5948 (messages)

------------------------------

Date: 18 Oct 88 17:04:53 GMT
From: hubcap!ncrcae!gollum!rolandi@gatech.edu  (Walter Rolandi)
Subject: RE: Expert systems and weather forecasting

Regarding:
>I'm trying to find out if anyone is working on expert systems in weather
>forecasting. Names, addresses, references ... would all be welcome. With
>thanks in advance
>
>                               Laurence Moseley

You might want to get a copy of the proceedings of 17 JAIIO/PANEL '88
EXPODATA from SADIO, the Argentine Operations Research and Computer
Science Society.  At their recent conference, a paper describing an
expert system weather forecaster was presented.  The paper was entitled,
"Um sistema especialista para previsao de tempo".  Its authors are,
V.H.de Avila Duarte and F.A.de Castro Giorno.  I think the researchers
work at the National University of Brazil at Rio but I am not sure.

Walter Rolandi
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

------------------------------

Date: 19 Oct 88 16:48:45 GMT
From: mailrus!uflorida!haven!h.cs.wvu.wvnet.edu!b.cs.wvu.wvnet.edu!sip
      ing@ohio-state.arpa  (Siping Liu)
Subject: Re: AAAI-88


  The following information is from the proceedings of AAAI-88:

  To order the proceedings, write to:

    Morgan Kaufmann Publishers, Inc.
    P.O.Box 50490
    Palo Alto, CA 94303
    (414) 578-9911 or (415) 965-4081

  For AAAI-88, $75/$56.25 AAAI members.

------------------------------

End of AIList Digest
********************

∂21-Oct-88  1649	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #112 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 21 Oct 88  16:49:42 PDT
Date: Fri 21 Oct 1988 19:28-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #112
To: AIList@AI.AI.MIT.EDU


AIList Digest           Saturday, 22 Oct 1988     Volume 8 : Issue 112

 Queries:

  Statistical methods in inductive reasoning
  Machine Learning Applications in Software Engineering
  References on Writing
  Ajay Gupta's email address
  Common LISP Src for Tomita Algorithm
  Object Oriented database mgmt
  ES in commercial environments
  ES in business, economic, and statistical forecasting

----------------------------------------------------------------------

Date: 18 Oct 88 08:31:51 GMT
From: mcvax!hp4nl!dnlunx!adriana@uunet.uu.net  (Florescu A.)
Subject: Statistical methods in inductive reasoning


Hello everyone!

I am studying methods in inductive reasoning and I am particularly interested
in the statistical ones. I have been trying to find some literature about this
subject, but the most recent article I could find had been published in
1984!!! That's quite old, isn't it?

I would be very grateful to everyone sending me some tips about books,
articles and existing tools or any remarks about this subject. Any information
that can help me is welcome.

My address is : !mcvax!dnlunx!adriana

Thanks!

                                                               Adriana

------------------------------

Date: 18 Oct 88 13:31:16 GMT
From: mcvax!enea!kth!draken!tut!tolsun!jto@uunet.uu.net  (Jarkko
      Oikarinen)
Subject: Machine Learning Applications in Software Engineering

[I'm only forwarding this message, so please send all replies to the]
[address specified at the end of this message... --jto]

            I'm currently working on a project where we are aiming at
            applying machine learning techniques to software
            engineering. In our case the whole software development
            process is open for applying ML systems, which means that
            almost all kinds of learning strategies will do. We have
            made some experiments with SOAR, but we'll try to cover
            other approaches as well.

            Well, to the point: this query tries to find out if (1)
            you have a machine learning architecture (or are
            developing one) that might be applied to SE, (2) you know
            of a project where the same kind of topic has been
            studied, or (3) you know some articles on the subject.
            Please send your replies to:


            EARN:  jkn%vvtko1%finfun.bitnet
            UUCP:  jkn%vttko1@ompvax.kpo.fi

            Many thanks in advance,

                 Jukka Korhonen,       Technical Research Centre of Finland,
                                       Computer Technology Laboratory.
--
Jarkko Oikarinen            OuluBox: WIZARD     UUCP:...!mcvax!tut!oulu!jto
Institute of Information Processing Science    INTERNET: jto@tolsun.oulu.fi
University of Oulu, Finland                    EARN/BITNET: toljto at finou
                     "It ain't logic. It's magic !"

------------------------------

Date: 19 Oct 88 17:40:17 GMT
From: ecsvax!skyler@mcnc.org  (Patricia Roberts)
Subject: References on Writing


Teaching writing used to be something that high school teachers did the
way it had always been done and that college teachers avoided (they made
graduate students do it).  Recently, however, people have figured out
that the way you teach writing, and what you teach as good writing,
involve a lot of assumptions about discourse, cognition, language, and
imagination (among other issues).  Some people are writing about the
implications that research in communication, linguistics, cognitive
science, and psychology has for teaching writing.  For complicated
reasons, I have the feeling that I'm not seeing the best work being done
in that area.  In addition, I seem only to run across things written by
writing teachers who are dabbling in linguistics and so on, rather than
by people trained in those fields who are dabbling in teaching writing.

So, I am looking for references ...



--
====================================================================
-Trish                                  "...a lifetime is too narrow
skyler@ecsvax.uncecs.edu                to understand it all..."
                                                        --A. Rich

------------------------------

Date: Wed, 19 Oct 88 20:22:51 cdt
From: park@m.cs.uiuc.edu (Young-Tack Park)
Subject: Ajay Gupta's email address


I would like to get Mr. Gupta's thesis from the Dept. of AI, U. of Edinburgh.
Does anyone know his email address or how to get it from the U. of
Edinburgh?

  Ajay Gupta, Failure Recovery Using a Domain Model. Dept. of AI,
  U. of Edinburgh 1985

Thanks in advance,

Young-Tack

E-mail: park@m.cs.uiuc.edu

------------------------------

Date: 20 Oct 88 02:28:07 GMT
From: sun.soe.clarkson.edu!tree@tcgould.tn.cornell.edu  (Tom Emerson)
Subject: Common LISP Src for Tomita Algorithm

I am looking for the Common LISP source of Tomita's parsing algorithm and
LR-parse table generator.  Any help in this would be greatly appreciated.

Thanx in advance for any help in this matter.

Tom Emerson
LISP Coordinator
SOE Network, Clarkson University
tree@sun.soe.clarkson.edu

------------------------------

Date: 21 Oct 88 07:07:30 GMT
From: gryphon!crash!rome@jpl-elroy.arpa  (Sean Rome)
Subject: Object Oriented database mgmt


I've recently been introduced to the concept  of object-oriented
database management, but only very superficially.

It seems like an intriguing field, and I'm eager to find out more
about it.

Q: What are the primary advantages/disadvantages when compared to
   traditional relational or network DBMSs ?

Q: What types of data manipulation options do you gain by going the
   object oriented route ?  What do you lose ?

Q: What problem areas lend themselves most readily to object
   oriented dbms solutions ?

Q: How does an object oriented query (or report) differ from its
   relational counterpart ?

Essentially, I'm brand new to the technology.
I would greatly appreciate any answers/opinions about the above
questions and any pointers to introductory literature.

Please e-mail responses to me here or call me collect at

   (619) 455-1398

          Thanks,

               Sean Rome

------------------------------

Date: Fri, 21 Oct 88 08:23:25
From: RZ89%DKAUNI11.BITNET@CUNYVM.CUNY.EDU
Subject: ES in commercial environments

=========================================================================
Date: 21 October 1988, 07:42:55 MEZ
From: RZ89     at DKAUNI11
To:   AILIST   at AI.AI.MI

Hello AILIST-Users,

My name is Harald Eckert and I am an active researcher in the field of
AI, especially from a systems-analysis point of view.  I'm interested in
gathering more information about the use of expert systems in commercial
environments and under commercial conditions.

So I ask all the AILIST users to give me more information about the following
topics:

1> The problem of integrating an expert system (either a prototype or
   a fully developed system) into its given environment in a factory, an
   office, a bank, or wherever it is used.  Are there publications worth
   reading?  Reports from consulting companies or expert system developers?
2> The problem of maintaining an expert system: not only the shell but
   the knowledge of the expert(s) coded in the system.
   What techniques are suitable?  Another cycle with a knowledge engineer
   using a knowledge acquisition tool, or is it the experts' turn to do
   this "refreshing" of the knowledge base?
3> The problem of improving user response to an expert system, i.e. the
   acceptance of such a system.  Are there experiences, reports,
   literature?
4> The problem of knowledge engineering, not only from the view of
   knowledge acquisition but from the view of a systems analyst, dealing
   with the problems of the management of a factory and the management
   of an expert system project, cost-benefit questions, and much more.
   Are there experiences or literature yet available for me?

These are the main questions arising from my work at the University of
Karlsruhe, W-Germany.

Please send mail to  RZ89@DKAUNI11.BITNET
My address is:        Harald Eckert
                      Universitaet Karlsruhe
                      Rechenzentrum
                      Zirkel 2
                      7500 Karlsruhe
                      W-Germany

Thanks for your response!

                      Harald Eckert

------------------------------

Date: Fri, 21 Oct 88 15:56:13 EDT
From: "H.$brahim TEMEL" <IBRAHIM@TRANAVM2>
Subject: ES in business, economic, and statistical forecasting


      Dear Members,

      I'm attending MS courses at Anatolia University and I'm at the
 dissertation stage now.  My subject is "An Expert Systems Approach
 to Forecasting Techniques".  I mean business, economic, and statistical
 forecasting, and I want to develop an Expert System with the ability to
 learn, give advice, and solve some forecasting models.

      If anyone knows of any books or papers about or related to this
 subject, would you please send me information about the sources and
 how I can get them?


       Yours sincerely,
                                    H. İbrahim TEMEL

                                    IBRAHIM at TRANAVM2

                                    Anadolu Universitesi
                                    Bilgi Islem Merkezi
                                      26470 - Eskisehir/TURKEY

------------------------------

End of AIList Digest
********************

∂24-Oct-88  1558	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #113 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 24 Oct 88  15:58:22 PDT
Date: Sun 23 Oct 1988 23:01-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #113
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 24 Oct 1988      Volume 8 : Issue 113

  Artificial Insects (2 messages)
  Consciousness (2 messages)

----------------------------------------------------------------------

Date: 14 Oct 88 20:56:27 GMT
From: lethin@athena.mit.edu  (Richard A Lethin)
Subject: Re: Looking for contacts at MIT AI labs ("artificial
         insects")

In article <321@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>A sizable horde of artificial insects would make a good weapon (see
>Stanislaw Lem, _The Invincible_). We send a few billion of them over
>to the asian steppes, where they hide in the trees and bushes around
>all the missile silos. When the lids swing open and the babies start
>arming, all the bugs buzz on over and climb in, gumming things up...
>

Also see _Engines of Creation_ by Drexler for more on this topic;
Drexler's machines are called "assemblers" and they do wonderful
and horrible things.  He's serious too, claiming it's all possible in
about 20-50 years.   Or read newsgroup sci.nanotech...

>Dan Mocsny
>If at first you don't succeed, use a larger hammer.

Brooks presented some material recently at a well-attended AI seminar
discussing the role of planning in AI.  He gave examples of how a
jumping spider makes its way around in the world, pretty much without
any planning at all.  It just looks around for something that moves,
turns toward it, and either eats it or mates with it.  Very simple,
and the spider gets by.

The model he was using was one of simple subsystems, each doing its own
thing without much "planning", working together to form a behavior.
In the case of the spider, examples of these subsystems were the
side-vision system (used to detect motion), the forward vision tracking
system (which keeps the forward eyes on the target), and the IFF
(identification, friend or foe) system, which decides whether to
eat or mate.
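
To make that idea concrete, here is a toy Common Lisp sketch (purely
illustrative; the subsystem names and percept keys are invented stand-ins,
not Brooks's code): each subsystem maps the current percept directly to a
proposed action, and a fixed priority picks one, so behavior emerges
without any plan being built.

(defun side-vision (percept)            ; detects motion, proposes turning
  (when (getf percept :motion) '(turn-toward target)))

(defun tracker (percept)                ; keeps the forward eyes on target
  (when (getf percept :target-centered) '(approach target)))

(defun iff (percept)                    ; friend or foe: mate or eat
  (when (getf percept :target-close)
    (if (getf percept :conspecific) '(mate) '(eat))))

(defun act (percept)
  "Highest-priority subsystem with an opinion wins; no plan is built."
  (or (iff percept) (tracker percept) (side-vision percept) '(sit-still)))

;; (act '(:motion t))                        => (TURN-TOWARD TARGET)
;; (act '(:target-close t :conspecific nil)) => (EAT)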

He put forth the argument that this also takes place in humans: that
some 97% of what we do intelligently is done without plans, and that
consciousness only serves to rationalize (sometimes incorrectly) why
we do things.  He cited experimental evidence involving people who
had undergone commissurotomies and who were unable to correctly
rationalize their actions.

lethin@wheaties.ai.mit.edu

------------------------------

Date: 19 Oct 88 14:32:14 GMT
From: hacgate!janus!ge1cbx!rick@jpl-elroy.arpa  (Trashotron)
Subject: Re: Looking for contacts at MIT AI labs ("artificial
         insects")

In article <7476@bloom-beacon.MIT.EDU>, lethin@athena.mit.edu writes:
> In article <321@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
> >A sizable horde of artificial insects would make a good weapon (see
> >Stanislaw Lem, _The Invincible_). We send a few billion of them over

See also "One Human Minute", the section titled "The Upside-Down Evolution".
This his "review" of a book called "Weapons Systems of the Twenty First
Century", wherein he discusses the arrival of "artificial instinct".
--
we look real tame, and we act real mild
when we bite your hands, you say your pet's gone wild......

------------------------------

Date: 19 Oct 88 08:35:29 GMT
From: clyde!watmath!watdcsu!smann@bellcore.bellcore.com  (Shannon
      Mann - I.S.er)
Subject: Re: Intelligence / Consciousness Test for Machines
         (Neural-Nets)???

In article <734@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>If you claim that you are a conscious entity, but that I am not (your
>view of the world), and that I am a conscious entity but that you are not
>(my view of the world), then I can only assume that you are talking
>about self awareness.  But is this what determines whether something is
>a conscious entity or not?  If I am not mistaken, your view is that for
>anything to be a conscious entity, it must have self awareness, and only
>it can determine whether it is a conscious entity.  Please correct me if
>I misinterpreted you.  But then why in the world am I writing this
>article in response?  After all, I have no guarantee that you are a
>conscious entity or not.  But for some reason, I have this persistent
>but unverifiable belief that you are a conscious entity.  Otherwise,
>why would I write?

Yes, but consider that the _other_ conscious entity may only be another
part of yourself (in the sense of multiple personalities.)  You may be
unaware that each separate entity is actually the same entity.

>                    In other words, we have a sticky problem that may
>or may not have a solution.  Yes, Bertrand Russell is someone who had
>a neat idea with the proposal of a third indeterminate state.  In other
>words, I prefer to consider this type of question the mystery it should
>be categorized as.
>
>dharvey@wsccs

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

'I have no brain, and I must think...' - An Omynous
'If I don't think, AM I' - Another Omynous

P.S.  I am unaware of Russell's postulate re: a third indeterminate state.
Could you give me some references?  Thanks in advance.

------------------------------

Date: 20 Oct 88 15:40:55 GMT
From: leah!gbn474@bingvaxu.cc.binghamton.edu  (Gregory Newby)
Subject: oscillating consciousness

In article <1119@leah.Albany.Edu> gbn474@leah.Albany.Edu writes:
  (sorry, Shannon:  I lost your reference line)
>>Possible conclusion:  consciousness, like most things we can name
>>in nature, oscillates.
>>
>Other possible conclusion:  we unconsciously attach meaning to apparently
>randon patterns i.e. we hear the music, we see the lights, we notice that there
>are some of the lights lit on the beat, and disregard the rest as noise.
>Hence, we have a pattern where none existed before.  Sounds like pattern-
>recognition to me. :-)
>>--newbs


>-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca

>P.S.  I'd like to know what 'oscillating consciousness' is supposed to mean.

Take it as you want.  We know that people can not attend to the entire
environment at once (or, at least that's what the cog. psychologists
have found).  Possibly, people are attuned more at some times than
others, in a regular pattern.  Personally, I would consider a sort of
continuous sine function before a binary on-off type of model.

An upcoming article, I believe in _Quality and Quantity_, seriously
considers the idea that ALL human phenomena are based on such
oscillations and the interference patterns they produce.  We're
talking everything from individual memory and thought to dyadic
interaction to group or mob behavior.  The article is by John Foldy,
and is an extension of his dissertation.  I would be happy to mail
the reference to interested parties.

--newbs
   (
    gbnewby@rodan.acs.syr.edu
    gbn474@leah.albany.edu
    )

ps:  a book which I do NOT recommend, but which pursues similar
considerations in ways that make anyone knowledgeable about natural
science feel very ill, is _Stalking the Wild Pendulum_ by Itzhak Bentov.

------------------------------

End of AIList Digest
********************

∂27-Oct-88  0305	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #114 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 27 Oct 88  03:05:02 PDT
Date: Thu 27 Oct 1988 05:37-EDT
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #114
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 27 Oct 1988     Volume 8 : Issue 114

 Queries:

  Prolog for 4-D
  Representation of biological systems
  Poetry composing programs (and 2 responses)
  Software for teaching AI techniques
  Functions for heuristics (Genetic Algorithms) (and 1 response)
  Temporal reasoning
  ES for Scheduling Flexible Manufacturing Systems
  Canadian AI Magazine: Request for French Translators
  ES builders for the IBM PC
  Flight Simulation

 Responses:

  C-Linkable Expertshells (4 messages)
  ES in weather forecasting
  Daryl Pregibon's address
  Common LISP Src for Tomita Algorithm
  PFL

----------------------------------------------------------------------

Date: 17 Oct 88 14:56:18 GMT
From: mnetor!utzoo!utgpu!jarvis.csri.toronto.edu!me!ecf!soosaar@uunet.
      uu.net  (Robert Soosaar)
Subject: Prolog for 4-D

I am currently looking for a good/any Prolog that runs on the
SGI 4-D series machines.

Any comments from users would be appreciated, especially Mprolog by
Logicware.

        Rob Soosaar
        soosaar@ecf.toronto.edu

------------------------------

Date: 19 Oct 88 16:17 +0800
From: Ulises Cortes <mcvax!fib.upc.es!ia@uunet.UU.NET>
Subject: Representation of biological systems

Our AI group is dealing with the representation of
byological systems. Is anybody out there working in
this field. Actually, we are working with the representation
of an ecosystem (Medas' Islands).

We're looking for people to interchange information and/or
experiences.

                Ulises Cortes
                Computer Science School
                Barcelona. Spain.

------------------------------

Date: 22 Oct 88 20:40:42 GMT
From: bpa!cbmvax!vu-vlsi!lehi3b15!lafcol!chaudhas@rutgers.edu 
      (Chaudhary Sharad )
Subject: Poetry composing programs

I'm a novice prolog programmer and as a semester project I'm writing
a prolog program that composes simplistic verse. I'm not familiar with
the literature in this field and would appreciate pointers to the
relevant literature. I'm also interested in the more general area
of natural language generators (particularly those written in prolog)
and any references to this field too would be very useful.

                           Thanks in advance
                           sharad

------------------------------

Date: 23 Oct 88 16:53:50 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Subject: Re: poetry composing programs


      I once generated poetry using 1940s-vintage IBM plugboard-wired
accounting machines.  This was back in the 1960s, when computer time was
harder to come by.  I used an IBM 85 collator, a 402 accounting machine,
and an 82 sorter.  The basic technique involved imposing the grammatical
pattern of an existing poem on random words.  Some additional checks
ensured that the word-to-word transitions were similar to ones that had
appeared in other text.

      The result was not particularly profound, but read well in spots.
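
In modern terms the technique might look something like the toy Common
Lisp sketch below (a reconstruction for illustration only, not the
original plugboard setup; the lexicon, template, and bigram table are
invented): fill the part-of-speech pattern of a template line with random
words, and keep only lines whose word-to-word transitions have been seen
in other text.

(defparameter *lexicon*
  '((det "the" "a")
    (adj "pale" "silent" "broken" "cold")
    (noun "moon" "river" "stone" "window")
    (verb "sleeps" "burns" "whispers" "falls")))

(defparameter *template* '(det adj noun verb))   ; pattern of an existing line

(defparameter *seen-bigrams*                     ; transitions seen in other text
  '(("the" . "pale") ("pale" . "moon") ("moon" . "sleeps")
    ("a" . "silent") ("silent" . "river") ("river" . "burns")))

(defun random-word (pos)
  (let ((words (rest (assoc pos *lexicon*))))
    (nth (random (length words)) words)))

(defun plausible-p (line)
  "True when every adjacent word pair appears in *SEEN-BIGRAMS*."
  (every (lambda (a b) (member (cons a b) *seen-bigrams* :test #'equal))
         line (rest line)))

(defun compose-line (&optional (tries 1000))
  "Fill the template with random words until the bigram check passes."
  (loop repeat tries
        for line = (mapcar #'random-word *template*)
        when (plausible-p line) return line))

;; (compose-line) => ("the" "pale" "moon" "sleeps"), most of the time.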

                                        John Nagle

------------------------------

Date: 25 Oct 88 02:18:59 GMT
From: apple!well!jax@rutgers.edu  (Jack J. Woehr)
Subject: Re: poetry composing programs

In article <288@lafcol.UUCP> chaudhas@lafcol.UUCP (Chaudhary Sharad ) writes:
>I'm a novice prolog programmer and as a semester project I'm writing
>a prolog program that composes simplistic verse. I'm not familiar with
>the literature in this field and would appreciate pointers to the
>relevant literature. I'm also interested in the more general area
>of natural language generators (particularly those written in prolog)
>and any references to this field too would be very useful.
>
>                           Thanks in advance
>                           sharad

        How about a Forth program that composes Chinese Limericks?
See _Forth Notebook_ by Dr. C.H.Ting, Offete Enterprises, 1987
pp. 245 - 250.

( Offete Enterprises in 1306 South B Street, San Mateo, CA 94402)

Dr. Ting has also implemented a tiny Prolog in Forth called Forlog.


{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}
{}                                                                        {}
{} jax@well     ." Sysop, Realtime Control and Forth Board"               {}
{} jax@chariot  ." (303) 278-0364 300/1200 8-n-1 24 hrs."                 {}
{} JAX on GEnie         ." Tell them JAX sent you!"                       {}
{}                                                                        {}
{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}

------------------------------

Date: Sat, 22 Oct 88 17:17 EST
From: steven horst                        
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Software for teaching AI techniques

I am teaching a course on AI for Arts & Letters undergrads
this spring semester, and should appreciate suggestions on freeware
or inexpensive software that is useful for teaching basic AI concepts
and techniques.  The course is NOT primarily a PROGRAMMING course,
and some students may actually have no programming background at all.
The time allocated for introducing AI techniques, moreover, is
limited to a maximum of about 8 sessions, because the course covers
philosophical issues and technology and society questions as well.
(In other words, teaching LISP or Prolog is pretty much out of
the question.)

Since concepts like semantic networks, frames and feedback loops and
techniques like backward chaining are language-independent, I have
some hope that either (a) someone has developed some good educational
software for teaching AI concepts and techniques to people who don't
use LISP or Prolog, or (b) something on the order of an expert
systems shell might be easily adaptable to educational ends.

I should appreciate suggestions on good freeware or commercial
software that can be bought or licensed at a low price.  Applications
that run on the Macintosh would be of particular interest, as my
university has become highly Mac-oriented.

I should also be interested in hearing other people's experiences in
trying to put together a broad introduction to AI for undergraduates
who are NOT specializing in an area closely related to AI (computer
science, cognitive psychology, logic, &c.)

Thank you in advance.

*****************************************************
*    Steven Horst         Bitnet: gkmarh@irishmvs   *
*    Department of Philosophy        219-239-7458   *
*    Notre Dame, IN  46556                          *
*****************************************************

------------------------------

Date: 24 Oct 88 05:17:12 GMT
From: apple!bionet!agate!pasteur!cory.Berkeley.EDU!carasso@bloom-beaco
      n.mit.edu  (George of the Jungle)
Subject: functions for heuristics (Genetic Algorithms)


I am interested in designing an experimental program in which the program
tries to solve a problem by using various heuristics.  What I would like
to do is model it after evolution.  If a heuristic is successful,
it creates mutant versions of itself, and those heuristics are then put
to the test, et cetera.  Each time, the ones that solve the problem in
the shortest time are allowed to have children; the others are
"terminated".

In creating mutant children, I do not simply want to change some
constants in the parent's functions, but rather would like the program to
change the function itself and create totally new ones.  In starting this
off, I also do not want to enter the base case for every function I know,
for example \x. x*x, or \x. c, or \x. x↑x, or whatever function I could
dream up.  Is there any way to create "all function groups", within
reason, where two functions are in the same group if they differ by a
constant or by the number of repetitions?

Of course, the eventual goal is to see the program find a good heuristic
on its own.
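
To make the scheme concrete, here is a toy Common Lisp sketch (an
illustration only, not an existing system; the task is a made-up
symbolic-regression problem rather than a real search heuristic):
candidate "heuristics" are small expressions in x, fitness is how closely
they match a hidden target function on a few sample points, and the best
half of each generation survives and spawns mutated children, so whole
new functions can appear rather than just new constants.

(defparameter *ops* '(+ - *))

(defun random-expr (depth)
  "Random expression over X, small integer constants, and *OPS*."
  (if (or (zerop depth) (< (random 1.0) 0.3))
      (if (zerop (random 2)) 'x (random 10))
      (list (nth (random (length *ops*)) *ops*)
            (random-expr (1- depth))
            (random-expr (1- depth)))))

(defun mutate (expr)
  "Replace a randomly chosen subexpression by a freshly generated one."
  (if (or (atom expr) (zerop (random 3)))
      (random-expr 2)
      (let ((i (1+ (random (1- (length expr))))))   ; pick one argument
        (append (subseq expr 0 i)
                (list (mutate (nth i expr)))
                (subseq expr (1+ i))))))

(defun score (expr target points)
  "Sum of squared errors of EXPR against TARGET over POINTS (lower is better)."
  (loop for x in points
        sum (expt (- (eval `(let ((x ,x)) ,expr)) (funcall target x)) 2)))

(defun evolve (target &key (generations 30) (popsize 40))
  "Evolve an expression in X that approximates TARGET on a few sample points."
  (let ((points '(-2 -1 0 1 2 3))
        (pop (loop repeat popsize collect (random-expr 3))))
    (flet ((fitness (e) (score e target points)))
      (dotimes (g generations)
        (let ((best (subseq (sort pop #'< :key #'fitness) 0 (floor popsize 2))))
          (setf pop (append best (mapcar #'mutate best)))))
      (first (sort pop #'< :key #'fitness)))))

;; (evolve (lambda (x) (+ (* x x) 1))) tends to rediscover something like (+ (* X X) 1).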

Roger Carasso,
UCB


"My ignorance is my own, and is no way related to any organization"

------------------------------

Date: 24 Oct 88 21:57:38 GMT
From: nau@mimsy.umd.edu  (Dana S. Nau)
Subject: Re: functions for heuristics

In a previous article, George of the Jungle writes:
>I was interested in designing an experimental program, where the program
>would try to solve a problem by using various heuristics. ... If a heuristic
>is successful, then it creates mutant heuristic versions of itself, and then
>those heuristics are put to the test, et cetera. ...

Ping-Chung Chi has done a nice Ph.D. dissertation at the University of
Maryland, studying game tree searching.  Among other things, he has done
the above on game trees using a genetic algorithms approach.  For more
information, write to chi@mimsy.umd.edu.
--
Dana S. Nau
Computer Science Dept.          ARPA & CSNet:  nau@mimsy.umd.edu
University of Maryland          UUCP:  ...!{allegra,uunet}!mimsy!nau
College Park, MD 20742          Telephone:  (301) 454-7932

------------------------------

Date: Tue, 25 Oct 88 09:31 CDT
From: ANDERSJ%ccm.UManitoba.CA@MITVMA.MIT.EDU
Subject: temporal reasoning

I am currently doing research into temporal reasoning methods.  I have
gathered a great deal of material on classic general methods, but
what I require is any references to temporal reasoning methods used
by various medical AI systems.  Any info anyone could give me would
be greatly appreciated
                           John Anderson
                           <ANDERSJ@UOFMCC.BITNET>

------------------------------

Date: Tue, 25 Oct 88 10:32 CDT
From: <A0J5791%TAMSTAR.BITNET@MITVMA.MIT.EDU>
Subject: ES for Scheduling Flexible Manufacturing Systems

Hi!  I am compiling a survey of expert systems for scheduling CIM in general
and FMS in particular.  I would appreciate views/comments from persons in
academia and industry who are involved in developing or using such systems.
I am particularly interested in the limitations and effectiveness of these
expert systems, and in what role a human scheduler would play in the highly
complex and constantly evolving environment of FMS.  Does anyone know if ISIS
is still being used at Westinghouse?  I would welcome comments from U.S. and
worldwide readers of the AILIST DIGEST.
                                                     Arshad Jamil
                                                   Graduate student
                                           Department of Industrial Engineering
                               Texas A&M University, College Station, TX 77843

------------------------------

Date: 25 Oct 88 12:35 -0600
From: Christopher G Prince <prince%noah.arc.cdn@relay.ubc.ca>
Subject: Canadian AI Magazine: Request for French Translators

Canadian Artificial Intelligence (a publication of the Canadian Society
for Computational Studies of Intelligence) translates abstracts of
articles, and some whole articles, into French.

We are in need of extra translators.  These people should have a computer
science background with at least some knowledge of AI terminology, and
be able to translate into French.

Any help would be appreciated.

[The deadlines for the coming issues:
November 15, 1988
February 15, 1989
May 15, 1989
August 15, 1989]

For responses to this message, please send email to me:


Christopher G. Prince
Alberta Research Council,
6815 8th Street NE, 3rd Floor
Calgary, Alberta, CANADA  T2E 7H7
(403) 297-2600

UUCP: chris%arcsun.uucp%ubc.csnet@relay.cs.net
   or prince%noah.arc.cdn@alberta.uucp
   or ...!ubc-cs!calgary!arcsun!chris
CDNnet: prince@noah.arc.cdn


For Content and Submission requests:


Canadian AI Magazine
C/O Alberta Research Council,
6815 8th Street NE, 3rd Floor
Calgary, Alberta, CANADA  T2E 7H7
(403) 297-2600

UUCP: cscsi%arcsun.uucp%ubc.csnet@relay.cs.net
   or cscsi%noah.arc.cdn@alberta.uucp
   or ...!ubc-cs!calgary!arcsun!cscsi
CDNnet: cscsi@noah.arc.cdn

Subscription Requests:


CIPS
243 College Street (5th floor),
Toronto, Ontario, CANADA
M5T 2Y1

------------------------------

Date: 26 Oct 88 12:41:56 GMT
From: izimbra!dsc@uunet.uu.net  (David S. Comay)
Subject: ES builders for the IBM PC

i'm looking for information and/or recommendations on expert system
builders for the ibm pc and compatibles.  the application will be a
`small' consultation-based expert system (on the order of a hundred
rules) and though i have heard of these three products out there, i
know little more about them or any others: ti's pc personal consultant,
vp-expert & the level5 system.

i would appreciate any information and or opinions on these products or
others out there that might fit the bill.

thanks for the help,

dsc

------------------------------

Date: 26 Oct 88 15:23:48 GMT
From: mailrus!ulowell!masscomp!daved@ohio-state.arpa  (Dave Davis)
Subject: Flight Simulation

I'm looking for articles, proceedings and perhaps textbooks that discuss
the role of ai in flight simulators. I have seen citations for Rolfe's
book (a college text), but I haven't laid hands on a copy yet. I would
be particularly interested in looking at anything that discusses the role
of computer architectures for ai applications in flight simulation.

Thanks in advance.

Dave Davis

------------------------------

Date: 20 Oct 88 20:31:59 GMT
From: pioneer.arc.nasa.gov!raymond@ames.arpa  (Eric A. Raymond)
Subject: Re: C-Linkable Expertshells


CLIPS is NASA's shell:
  - built in C
  - can turn rules into C (if desired)
  - comes with full source
  - a subset of ART (Inference Corp.)
  - nice integrated environment for Mac
  - extremely portable
  - fast
  - low memory requirements (relative to other systems)
  - cheap ($250 for public, free for government)

Available from COSMIC (404) 542-3265

Name: Eric A. Raymond
ARPA: raymond@pioneer.arc.nasa.gov
SLOW: NASA Ames Research Center, MS 244-17, Moffett Field, CA 94035

Nothing left to do but :-) :-) :-)

------------------------------

Date: 21 Oct 88 20:05:51 GMT
From: portal!cup.portal.com!TechServices@uunet.uu.net  (Angelo C
      Micheletti)
Subject: Re: C-Linkable Expertshells

Neuron Data's NEXPERT OBJECT is not only written in C but
is completely embeddable in YOUR application program, and it
runs on IBM AT/PS2, Mac, Vax, HP, Sun, and Apollo with the
same look on each.  I'd be happy to furnish more information
if you'll let me know.

------------------------------

Date: 24 Oct 88 03:33:04 GMT
From: wucs1!slustl!patnaik@uunet.uu.net  (Gagan Patnaik)
Subject: Re: C-Linkable Expertshells

In article <852@tnoibbc.UUCP>, sp@tnoibbc.UUCP (Silvain Piree) writes:
>
> Does anyone know any other expertshells that can be integrated with C ?
>

Yes, RuleMaster2 by Radian Corp. is written in C and produces C and
FORTRAN source code.  Radian Corp. is based in Austin, Texas.

------------------------------

Date: 25 Oct 88 17:45:41 GMT
From: bbking!rmarks@burdvax.prc.unisys.com  (Richard Marks)
Subject: Re: C-Linkable Expertshells

In article <852@tnoibbc.UUCP>, sp@tnoibbc.UUCP (Silvain Piree) writes:
>
> Does anyone know any other expertshells that can be integrated with C ?
>

Yes: KES, marketed and supported by Unisys.  This is a very good product.
It is "industrial strength"; if you have a substantial number of rules, this
is about the best product we have seen.

It runs quite well on PCs.  We have also ported it to several of our Unix
boxes.  Work to port it to our mainframes is proceeding.  On the PC, it
can be integrated with Microsoft C.

Richard Marks
rmarks@KSP.unisys.COM

------------------------------

Date: Fri, 21 Oct 88 17:35 PST
From: HEARNE%wwu.edu@RELAY.CS.NET
Subject: ES in weather forecasting


REGARDING:

Inquiry about expert systems in weather forecasting.  The 1987 NOAA
Conference on Artificial Intelligence Research in Environmental Science in
Boulder, Colorado (Sept 15-17) was almost exclusively devoted to expert
systems in weather forecasting.  It produced neither proceedings nor a
participant list, so the best thing is to write to one of the organizers.
The name I have is Chris Fields, Computing Research Lab, New Mexico State
University, Las Cruces NM  88003-0001.

Jim Hearne
Computer Science
Western Washington University
Bellingham WA 98225

------------------------------

Date: Mon, 24 Oct 88 08:58:50
From: GOLUMBIC%ISRAEARN.BITNET@CUNYVM.CUNY.EDU

Date: 24 October 88, 08:37:13 IDT
From: Martin Charles Golumbic   972 4 296282         GOLUMBIC at ISRAEARN
To:   AILIST at AI.AI.MIT

Daryl Pregibon's address is:
 AT&T Bell Labs 2C-264                      email:    daryl@research.att.com
 600 Mountain Ave.                           UUCP:    ihnp4!alice!daryl
 Murray Hill, NJ 07974 U.S.A.


Advanced Registration Forms should be sent to Daryl by December 1, 1988 for
the Second International Workshop on Artificial Intelligence and Statistics
    Fort Lauderdale, Florida Jan. 4-7, 1989
    Chairman: William Gale  (gale@research.att.com)

The papers will be strictly refereed and revised to produce a volume of the
    Annals of Mathematics and Artificial Intelligence
as a permanent record of the workshop.

------------------------------

Date: Tue, 25 Oct 88 18:38 EDT
From: Brad Miller <miller@CS.ROCHESTER.EDU>
Reply-to: miller@CS.ROCHESTER.EDU
Subject: Re: Common LISP Src for Tomita Algorithm

    Date: 20 Oct 88 02:28:07 GMT
    From: sun.soe.clarkson.edu!tree@tcgould.tn.cornell.edu  (Tom Emerson)

    I am looking for the Common LISP source of Tomita's parsing algorithm and
    LR-parse table generator.  Any help in this would be greatly appreciated.

    Thanx in advance for any help in this matter.

    Tom Emerson
    LISP Coordinator
    SOE Network, Clarkson University
    tree@sun.soe.clarkson.edu

We have this version:

(print "**************************************************************
(print "********
(print "******** LFG Compiler/Parser with the TOMITA parsing algorithm
(print "********
(print "********           Center for Machine Translation
(print "********             Carnegie-Mellon University
(print "********            Version 6.9,  September 1986
(print "********  (c) Carnegie-Mellon University, all rights reserved
(print "********
(print "**************************************************************

so I suggest contacting them for info.

----
Brad Miller             U. Rochester Comp Sci Dept.
miller@cs.rochester.edu {...allegra!rochester!miller}

------------------------------

Date: 27 Oct 88 01:12:30 GMT
From: rutgers!prc.unisys.com!finin@ucsd.edu (Tim Finin)
Reply-to: rutgers!prc.unisys.com!finin@ucsd.edu (Tim Finin)
Subject: Re: PFL


The PFL package is also available via anonymous ftp on
linc.cis.upenn.edu in ~ftp/pub/pfl .  PFL is a relatively simple
frame-based knowledge representation language implemented in Common
Lisp. Tim
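
For readers new to the term, here is a generic Common Lisp illustration of
what a frame-based representation involves (named slots plus inheritance
through a parent frame).  This is NOT PFL's actual interface; see the ftp
directory above for that.

(defvar *frames* (make-hash-table))

(defun defframe (name &key parent slots)
  "Define a frame with an alist of SLOTS and an optional PARENT frame."
  (setf (gethash name *frames*) (list :parent parent :slots slots)))

(defun frame-slot (frame slot)
  "Look SLOT up in FRAME, walking up the :parent chain if necessary."
  (when frame
    (let* ((f (gethash frame *frames*))
           (pair (assoc slot (getf f :slots))))
      (if pair (cdr pair) (frame-slot (getf f :parent) slot)))))

;; (defframe 'bird   :slots '((covering . feathers) (flies . t)))
;; (defframe 'canary :parent 'bird :slots '((color . yellow)))
;; (frame-slot 'canary 'covering) => FEATHERS   (inherited from BIRD)
;; (frame-slot 'canary 'color)    => YELLOW     (local to CANARY)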

  Tim Finin                     finin@prc.unisys.com
  Paoli Research Center         ..!{psuvax1,sdcrdcf,cbmvax}!burdvax!finin
  Unisys                        215-648-7446 (office)  215-386-1749 (home)
  PO Box 517, Paoli PA 19301    215-648-7412 (fax)

------------------------------

End of AIList Digest
********************

∂30-Oct-88  1842	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #115 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 30 Oct 88  18:42:10 PST
Date: Sun 30 Oct 1988 21:24-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #115
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 31 Oct 1988      Volume 8 : Issue 115

 Announcements:

  Call for Papers - ASME Computers in Engineering
  Call For Papers: AI/Expert Systems in the Biosciences
  Rocky Mountain AI Conference
  Congress on Cybernetics and Systems
  New Journal:  Philosophical Psychology
  Robotics PhD Program at CMU

----------------------------------------------------------------------

Date: Thu, 20 Oct 88 17:39:29 EDT
From: decvax!cvbnet!cheetah.LOCAL!rverrill@decwrl.dec.com (Ralph
      Verrilli)
Subject: Call for Papers - ASME Computers in Engineering


                           Call For Papers

                Computers in Engineering Conference (CIE) 1989
                 1989 American Society of Mechanical Engineers
                International Computers in Engineering Conference

                     Anaheim Hilton, Anaheim, CA
                      July 30 - August 2, 1989

  Sponsored by the Computers in Engineering Division of the American
  Society of Mechanical Engineers (ASME), the theme of this annual event -
  the 1989 ASME International Computers in Engineering Conference and
  Exposition - will focus on two aspects of design for the 1990's:
  modern tools and environments for engineering design, and strategic
  issues which must be faced by all companies to stay competitive in
  the global economy of the 1990's.

  Technical papers are invited in all areas relevant to the
  utilization of computers in the engineering profession; from
  research and development to applications, education, business and
  management issues and challenges.

  Contributions in the form of full-length papers should be submitted
  directly to the appropriate program chairperson as indicated below.
  Accepted papers will be published in the bound Conference
  Proceedings by the ASME.  Papers of special note will be further
  reviewed for publication in Computers in Mechanical Engineering
  (CIME) Magazine.


  Deadline Dates for Technical Papers :

   - Contributions, 4 copies, to be submitted no
     later than.................................... November 15, 1988

   - Review process completed, authors notified
     of acceptance of their paper(s)................February 15, 1989

   - Final paper(s) on author-prepared mats
     due at ASME headquarters...........................April 2, 1989



  Technical program Chairpersons :


  Mr. Fatih Kinoglu
  3M Company
  3M Center
  Building 260-6A-08
  St. Paul, MN 55144-1000
  612-733-3537

  for :

    Artificial Intelligence, Expert Systems, Knowledge Based Systems,
    Design Theory



  Dr. Gary Gabriele
  Rensselaer Polytechnic Institute
  Department of Mechanical Engineering
  110 Eighth Street
  Troy, NY 12180-3590
  518-276-2601

  for :

    Interactive Computer Graphics, Computer Aided Design, CAD/CAM
    Integration, Computer Aided Testing, Computer Aided Manufacturing,
    Computer Simulation



  Dr. Gary Kinzel
  The Ohio State University
  Department of Mechanical Engineering
  206 West 18th Ave
  Columbus, OH 43210
  614-292-6884

  for :

    Computers in Education, Robotics in Education, Teaching CAD,
    Computer Aided Learning Systems



  Dr. Kumar Tamma
  Mechanical Engineering Dept.
  University of Minnesota
  111 Church Street
  Minneapolis, MN 55455
  612-625-1821

  for :

    Finite Element Techniques, Computational Mechanics, Software
    Standards, Software Engineering



  Mr. David Bennett
  Battelle Pacific Northwest Lab
  Post Office Box 999
  Richland, WA 99352
  509-375-2159

  for :

    Robotics, Real Time Control, Adaptive Control, Process Control,
    High Performance Computing



  Dr. Ahmed A. Busnaina
  Clarkson University
  Mechanical Engineering Department
  Potsdam, NY 13676
  315-268-6574

  for :

    Computers in Energy Systems, Computational Heat Transfer,
    Computational Fluid Dynamics



  Dr William Rasdorf
  North Carolina State University
  Civil Engineering Dept.
  Post Office Box 7908
  Raleigh, NC 27695
  919-737-2331

  for :

    Engineering Database Systems

------------------------------

Date: Fri, 21 Oct 88 05:54:28 PDT
From: modelevsky%pobox.DEC@decwrl.dec.com (aka Mojo: Biotech & AI,
      Central Area- DTN 474-5491)
Subject: Call For Papers: AI/Expert Systems in the Biosciences...

Subject: CABIOS CALL 4 PAPERS...


               ANNOUNCEMENT: Call For Papers...

"Computer Application in the BIOSciences" (CABIOS) will produce a
special issue devoted to the application of Artificial
Intelligence/Expert Systems technology to problems in the
biosciences.  All manuscripts will be fully reviewed in
accordance with CABIOS editorial policy.  All CABIOS manuscript
categories will be suitable; however, authors who wish to
contribute First Byte or Review articles should first consult one
of the Executive Editors.

The submission deadline is March 31, 1989, for publication in
late 1989.  Please submit manuscripts for this special issue to
one of the Executive Editors.

Eastern Hemisphere:                     Western Hemisphere:
--------------------------              -----------------------
R.J. Beynon                             J. Modelevsky
Department of Biochemistry              Digital Equipment Corp.
University of Liverpool                 100 Northwest Point
P.O. Box 147                            Elk Grove Village, IL
Liverpool L69 3BX                                  60007-1018
UNITED KINGDOM                          USA

------------------------------

Date: 27 Oct 88 17:23:30 GMT
From: uswat!caribou!jima@boulder.colorado.edu  (Jim Alexander)
Subject: Rocky Mountain AI Conference


                      Call for Papers
          Fourth Annual Rocky Mountain Conference
                 on Artificial Intelligence

                      June 8 & 9, 1989
                                           Clarion Hotel
                      Denver, Colorado

            Augmenting Human Intellect by Computer


This conference is designed to explore the means by which Artificial
Intelligence can enhance human cognitive abilities. We are particularly
interested in work that addresses the way computer systems can support
their users' problem-solving needs.  Relevant topics for this conference
are:

        - Intelligent support of human communication
        - Computer supported cooperative work
        - Automated reasoning and problem solving
        - User interfaces and user interface management systems
        - Tutoring, Training & Education
        - Design, Manufacturing & Control
        - Planning
        - Human Problem Solving

Although purely theoretical papers will be considered, all papers should
indicate how the technology will be used to change the way people
think and communicate.

Papers due: December 9, 1988
Authors must submit three copies of their paper, not to exceed 4000
words.  Each submission must include the keywords describing the research
and state whether it will be submitted to another conference.

Send to: James H. Alexander, RMCAI Program Chair
         U S WEST Advanced Technologies
         6200 S. Quebec #320, Englewood, CO 80111

------------------------------

Date: 28 Oct 88 04:14:05 GMT
From: spnhc@cunyvm.bitnet  (Spyros Antoniou)
Subject: Congress on Cybernetics and Systems


             WORLD ORGANIZATION OF SYSTEMS AND CYBERNETICS

         8 T H    I N T E R N A T I O N A L    C O N G R E S S

         O F    C Y B E R N E T I C S    A N D   S Y S T E M S

 JUNE 11-15, 1990 at Hunter College, City University of New York, USA

     This triennial conference is supported by many international
groups  concerned with  management, the  sciences, computers, and
technology systems.

      The 1990  Congress  is the eighth in a series, previous events
having been held in  London (1969),  Oxford (1972), Bucharest (1975),
Amsterdam (1978), Mexico City (1981), Paris (1984) and London (1987).

      The  Congress  will  provide  a forum  for the  presentation
and discussion  of current research. Several specialized  sections
will focus on computer science, artificial intelligence, cognitive
science, biocybernetics, psychocybernetics  and sociocybernetics.
Suggestions for other relevant topics are welcome.

      Participants who wish to organize a symposium or a section,
are requested  to submit a proposal ( sponsor, subject, potential
participants, very short abstracts ) as soon as possible, but not
later  than  September 1989.  All submissions  and correspondence
regarding this conference should be addressed to:

                    Prof. Constantin V. Negoita
                         Congress Chairman
                   Department of Computer Science
                           Hunter College
                    City University of New York
             695 Park Avenue, New York, N.Y. 10021 U.S.A.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|   Spyros D. Antoniou  SPNHC@CUNYVM.BITNET  SDAHC@HUNTER.BITNET    |
|                                                                   |
|      Hunter College of the City University of New York U.S.A.     |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

------------------------------

Date: 26 Oct 88   18:39 EDT
From: PHLPWB%GSUVM1.BITNET@MITVMA.MIT.EDU
Subject: New Journal:  Philosophical Psychology

PHILOSOPHICAL PSYCHOLOGY is a new journal promoting the interaction of
philosophers, psychologists, and other investigators of behavioral and
mental phenomena.  The editorial board is especially keen to encourage
publication of articles which deal with the application of philosophical
psychology to the cognitive and brain sciences, and to areas of applied
psychology.  Thus, we are committed to publishing high-quality
papers on such topics as the philosophical foundations of cognitive
science, the potential of connectionism as an alternative to symbolic
models, the scientific status of psychological explanations, the inter-
disciplinary endeavors of psycholinguistics, and the implications of
developments in clinical psychology for theories of the mind.

A special feature of the journal is periodic publication of symposia on
important recently published books.  The current issue (volume 1, number
2) contains two discussions of Alfred Mele's recent book IRRATIONALITY:
AN ESSAY ON AKRASIA, SELF-DECEPTION, AND SELF CONTROL (Oxford, 1987).
A forthcoming issue (volume 2, number 1) will contain three discussions of
William Lycan's LOGICAL FORM IN NATURAL LANGUAGE (MIT, 1986).

PHILOSOPHICAL PSYCHOLOGY will appear three times a year.  1988 was the
first year of publication and we have sent the final issue for volume 1
to the publisher.  We are now reviewing manuscripts for volume 2.  If you
have a paper that you think is appropriate, I urge you to consider
submitting to PHILOSOPHICAL PSYCHOLOGY.  All submissions that are deemed
appropriate for the journal will be subjected to refereeing, generally
by both a philosopher and a psychologist.  Thus, we hope to be able to offer
substantive evaluations and suggestions on all papers submitted to us.

Submissions may be sent to either editor:
1. Dr. John Rust, Editor, Univ. of London Institute of Education, 25
   Woburn Square, London, WC1H 0AA, ENGLAND
2. Dr. William Bechtel, Associate Editor, Department of Philosophy, Georgia
   State University, Atlanta, GA 30303-3083   e-mail: PHLPWB@GSUVM1.BITNET

At this stage we are still developing our pool of reviewers.  If you are
interested in refereeing papers for PHILOSOPHICAL PSYCHOLOGY, please
send a statement of interest to William Bechtel (PHLPWB@GSUVM1.bitnet).

------------------------------

Date: Fri, 28 Oct 88 14:30:06 EDT
From: Steven.Shafer@IUS1.CS.CMU.EDU
Subject: Robotics PhD Program at CMU


              CARNEGIE MELLON UNIVERSITY PHD PROGRAM IN ROBOTICS

Carnegie  Mellon announces a new interdisciplinary PhD program in Robotics that
brings together the Robotics Institute, the Carnegie  Institute  of  Technology
(engineering  school), the Computer Science Department, and the Graduate School
of Industrial Administration (business school).  Students in this program  take
courses in Robotics and other related areas of study, and carry out research in
the unique laboratories of the Robotics Institute.  Students with  interest  in
robotics and a strong academic background are encouraged to apply for admission
to this program.

TOPICS OF STUDY IN ROBOTICS

Students in the Robotics Program study the basic sciences and  technologies  of
robotics:

   - PERCEPTION

   - COGNITION

   - MANIPULATION

Students in the Robotics Program also study and do research in large integrated
robot systems:

   - MANUFACTURING AUTOMATION

   - MOBILE ROBOTS

THE PhD PROGRAM

Students in this course of study are enrolled in the Ph.D. Program in  Robotics
at  Carnegie  Mellon University.  Each student designs an individualized course
of study from a selection of basic and applied topics in robotics.  The program
is  heavily  research-oriented  in all phases with hands-on experience from the
very beginning of graduate studies.  Students have the  opportunity  to  pursue
their  own  ideas,  and  to publish their results by writing scientific papers.
All applicants admitted to the PhD program will receive full financial support.

FOR INFORMATION

Students interested in the program should apply for admission to  the  Robotics
Program at Carnegie Mellon.  For application information, write to:
                    Graduate Admissions Coordinator, Robotics Program
                    The Robotics Institute
                    Carnegie Mellon University
                    Pittsburgh PA 15213.
Application  information  should  be available in the Winter of 1988-9; the
first admissions to the program will be in Fall 1989.

------------------------------

End of AIList Digest
********************

∂31-Oct-88  1722	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #116 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 31 Oct 88  17:21:43 PST
Date: Mon 31 Oct 1988 18:47-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #116
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 1 Nov 1988      Volume 8 : Issue 116

 Queries:

  ES for building management?
  ES for student advising (1 response)
  ES for Crop Pathology
  ES on the IBM PC
  E.S./A.I. in Net Management
  References in mobile robot research

 Responses:

  Machine Learning School Summary
  Poetry composing programs (2 messages)
  Statistical methods in inductive reasoning

----------------------------------------------------------------------

Date: 25 Oct 88 14:21:30 GMT
From: mcvax!ukc!etive!epistemi!rda@uunet.uu.net  (Robert Dale)
Subject: ES for building management?

Does anyone know of any work that's been done in the area of expert
systems for building management or related areas?  I'm thinking
particularly of systems that monitor resource usage (heating, lighting,
etc.) and perhaps change the environment appropriately, making use of
knowledge such as approximately how long it takes to heat the building
to a certain temperature, and so on.
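
To make the idea concrete, here is a tiny Common Lisp sketch of that kind
of knowledge (invented figures, not taken from any real building-management
system): given an approximate heat-up rate, work out how far ahead of
occupancy to switch the heating on.

(defun heating-lead-time (current-temp target-temp rate-per-hour)
  "Hours of pre-heating needed to raise CURRENT-TEMP to TARGET-TEMP."
  (max 0 (/ (- target-temp current-temp) rate-per-hour)))

(defun switch-on-hour (occupancy-hour current-temp target-temp rate-per-hour)
  "Clock hour at which to start heating so the building is warm on arrival."
  (- occupancy-hour
     (heating-lead-time current-temp target-temp rate-per-hour)))

;; (switch-on-hour 9 14 20 2) => 6, i.e. start heating at 06:00 for 09:00
;; occupancy when the building is at 14 degrees and warms by 2 degrees/hour.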

I'll summarise replies if there is sufficient interest.

BTW, if you reply to this and I don't acknowledge your reply, please
accept my apologies in advance:  sometimes it's hard to get mail to US
addresses from this side of the pond.

R

--
Robert Dale        Phone: +44 31 667 1011 x6470 | University of Edinburgh
UUCP:   ...!uunet!mcvax!ukc!its63b!epistemi!rda | Centre for Cognitive Science
ARPA:   rda%epistemi.ed.ac.uk@nss.cs.ucl.ac.uk  | 2 Buccleuch Place
JANET:  rda@uk.ac.ed.epistemi                   | Edinburgh EH8 9LW Scotland

------------------------------

Date: 27 Oct 88 03:16:34 GMT
From: a5v@psuvm.bitnet
Subject: ES for student advising

I would appreciate receiving any leads to work done on expert systems
applied to student advising (curriculum advising).
                                               Thanks
                                                      Al VAlbuena

------------------------------

Date: 31 Oct 88 02:46:44 GMT
From: mailrus!uflorida!haven!h.cs.wvu.wvnet.edu!b.cs.wvu.wvnet.edu!sip
      ing@rutgers.edu  (Siping Liu)
Subject: Re: expert systems for student advising

I did a project in an advanced AI class last term which seems to be what
you are looking for.  It was done in LASER, a C-based object-oriented
knowledge representation facility (similar to KEE).

Of course you won't want to look through the 2,700 lines of code, and I
don't think you can run it yourself -- you need LASER and RPS (a production
system like OPS5).

Functions:
  . A student can input his interests and get advice on who would be the
    best research advisor for him, and a class plan for his degree based on
    the degree policy and the classes he has already taken.  He can also
    specify classes he wants to take next term, and the program will check
    for time conflicts in the class schedule, whether he has satisfied the
    prerequisites of each class, and whether the class is full (if so, a
    message is sent to the professor and the student is put on the waiting
    list; the professor can add him to the class if he wants to).  It also
    warns if the student has chosen too many classes for one term or faces
    too much programming work, etc.  (A toy version of the prerequisite and
    time-conflict checks is sketched below.)
  . The department secretary can set up or modify a student's record (the
    classes he has taken before and his grades).  She can check every
    student's record.
  . The head of the department has the privilege of seeing every student's
    record, too.  He can also set the policy for each class.
  . A professor can see the class enrollment and student names.  He can only
    see the records of students advised by him.
  . There are many more features that I will skip to save your time.
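
Here is a toy Common Lisp sketch of the prerequisite and time-conflict
checks from the first item (an illustration with invented course data; the
real project used LASER and RPS, not this code):

(defparameter *prereqs*
  '((cs320 cs210) (cs410 cs320 cs330)))     ; a course followed by its prerequisites

(defparameter *meeting-times*
  '((cs320 . (mon 10)) (cs330 . (mon 10)) (cs410 . (tue 14))))

(defun prereqs-met-p (course taken)
  "True when the student has already taken every prerequisite of COURSE."
  (every (lambda (p) (member p taken))
         (rest (assoc course *prereqs*))))

(defun time-conflict-p (courses)
  "True when two requested courses meet at the same day and hour."
  (let ((slots (mapcar (lambda (c) (cdr (assoc c *meeting-times*))) courses)))
    (/= (length slots)
        (length (remove-duplicates slots :test #'equal)))))

;; (prereqs-met-p 'cs410 '(cs320 cs330)) => T
;; (time-conflict-p '(cs320 cs330))      => T   (both meet Monday at 10)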

I planned to bring in some features such as: in case of a conflict between
a student and his teacher, the problem is submitted to the head of the
department.  My professor's motivation for giving this assignment was to
give us a taste of the problem of Concurrent Engineering (where many experts
work together to solve a design problem), which is a research project at
West Virginia University.

I will be glad if I can be of any help.

------------------------------

Date: Fri, 28 Oct 88 14:58:46 EDT
From: <ganguly@ATHENA.MIT.EDU>
Subject: ES for Crop Pathology


I am posting this request on behalf of a friend
of mine.  I would appreciate it if someone could provide
information on expert systems for identifying crop diseases.

Thanks,

Jaideep Ganguly
ganguly@athena.mit.edu

------------------------------

Date: 28 Oct 88 22:20:05 GMT
From: dsc@izimbra.CSS.GOV (David S. Comay)
Subject: ES on the IBM PC


i'm looking for information and/or recommendations on expert system
builders for the ibm pc and compatibles.  the application will be a
`small' consultation-based expert system (on the order of a hundred
rules) and though i have heard of these three products out there, i
know little more about them or any others: ti's pc personal consultant,
vp-expert & the level5 system.

i would appreciate any information and or opinions on these products or
others out there that might fit the bill.

thanks for the help,

dsc

------------------------------

Date: 28 Oct 88 23:11:46 GMT
From: att!mtuxo!rsn@bloom-beacon.mit.edu  (XMRH2-S.NAGARAJ)
Subject: E.S./A.I. in Net Management


I am interested in finding out information regarding efforts to
include ES/AI in network management.  I am interested in large
scale networks.

I would appreciate any information on references, papers,
conferences, books, organizations, contacts, etc.

Please send me e-mail if you can give me any kind of help.

Thanks.

Raj Nagaraj
mtuxo!rsn

------------------------------

Date: Sun, 30 Oct 88 16:18:20 PST
From: tutiya@russell.Stanford.EDU (Syun Tutiya)
Reply-to: russell!tutiya@russell.Stanford.EDU (Syun Tutiya)
Subject: references in mobile robot research

I am a philosopher who happens to be interested in the state of the art
in mobile robot research.  Could anybody out there tell me about the
least biased, most illuminating and insightful, yet readable,
introduction to the field?

Thanks.

Syun Tutiya
(tutiya@csli.stanford.edu)

------------------------------

Date: 20 Oct 88 20:32:00 GMT
From: mirror!rayssd!raybed2!applicon!bambi!webb!webb@bu-cs.bu.edu
Subject: Machine Learning School Summary


  I recently posted a request for information about graduate schools that have
good programs in Artificial Intelligence and Machine Learning.  This is a
summary of the information I received.  To all those who responded,
thank you very much.  I invite further comments on the opinions expressed
below, and further input from those at these or other schools.

******Eastern Schools:
Rutgers:
        - Strong learning program.

University of North Carolina:
        - No AI program.

Yale:
        - Dominated by Roger Schank, who is reputedly very hard on his
          students.  Strong recommendations against going here.
        - Dana Angluin doing excellent theoretical work.

Harvard:
        - Small program (5-6 students/year), correspondingly close contact
          with faculty.
        - Les Valiant is doing theoretical machine learning work.
        - William Woods is willing to support machine learning work, though
          his usual field is natural language.

Carnegie Mellon University:
        - Very difficult to get in.
        - Rated consistently as one of the top AI and Machine Learning
          schools in the world.
        - Diverse program.
        - Allen Newell; SOAR project.
        - Tom Mitchell, Jaime Carbonell, John Anderson in Machine Learning,
          many others in other fields of AI and connectionism.  Berliner,
          Kanade, Reddy, Hinton, etc.
        - Focus on research rather than classwork.

University of Pennsylvania:
        - Well-known for their natural language work, not so much so for
          machine learning.
        - One complaint about terrible student/administration relationships.

MIT:
        - Very difficult to get in.
        - Famous for requiring 8-9 years of work for PhD.
        - Rumored: (from Stanford student)
                - Unfriendly
                - One dimensional Department.
                - Many professors were MIT undergrads.

University of Mass. @ Amherst:
        - Strong AI and learning programs.

Georgia Tech:
        - Dr. Janet Kolodner; Case Based Reasoning, Experiential learning,
          PhD from Yale under Roger Schank.
        - Connection with DARPA through Col. Bob Simpson, who received an MS in
          Machine Learning from Georgia Tech under Kolodner.  He is head of
          DARPA Machine Learning research.

University of Pittsburgh:
        - Bruce Buchanan has come here from Stanford to set up a big-time
          AI lab.  If he stays, excitement will follow.
        - Focus on Expert Systems.

******Central Schools:

University of Illinois @ Champaign-Urbana:
        - 6 AI faculty whose primary interest is learning, 4 have it as a
          secondary interest.  Fields include:
                - EBL  (Jerry DeJong)
                - Theory of Learning (Lenny Pitt)
                - Probabilistic learning, applied and theoretical
                        (Sylvian Ray, Larry Rendell)
                - Conceptual Clustering (Bob Stepp)
                - KBS Learning, automated programming (David Wilkins)
        - Interdisciplinary approach, esp. re. the psychology dept.
                - Doug Medin, Dedre Gentner, William Brewer, William Greenough
                - Work also being done in Linguistics, Statistics, Electrical
                  Engineering and Physics Depts.
        - Beckman Institute on campus
                - Brand new $50M facility for study of intelligence and
                  complex systems.

University of Michigan:
        - Holland; Classifiers and Genetic Algorithms
        - Host of last year's (1987) Machine Learning conference.

******Western Schools:

University of Texas @ Austin:
        - Machine Learning group headed by Bruce Porter.
        - Many well-known and respected scientists working and visiting there.
            (e.g. Silberschatz, Boyer and Moore, Dijkstra)
        - Relationship with MCC and Doug Lenat.

Stanford:
        - Very difficult to get in.
        - Famous for requiring 8-9 years of work for PhD.
        - Bruce Buchanan, their best learning professor, has relocated to
          U. Pittsburgh.
        - AI department is dominated by those who believe that rigorous
          logic is the representation best suited to solving problems.
        - Rich Keller; explanation-based learning.  Their only specialist.
        - David Rumelhart, connectionist, works in psych. dept.
        - Most professors will support machine learning research, however.
        - Terrific connections with industry:
                - Schlumberger
                - NASA Ames
                - Xerox PARC
                - Lockheed AI Center
        - Do not have an active learning group.

University of California @ San Diego:
        - Most, if not all, of their machine learning work is centered
          around connectionism.

University of California @ Berkeley:
        - AI is not the focus of their CS department.
        - Main AI professor is Wilensky, a clone of Roger Schank.
        - Stuart Russell, Stanford graduate.

University of California @ Irvine:
        - Strong psychological orientation.
        - Good funding, good equipment.
        - CS dept. is up and coming.
        - Pat Langley is the main learning professor.
        - 4 faculty doing learning work
                - 2 doing explanation-based learning
                - 1 doing empirical work
        - 45min to 1hr from LA.

University of California @ Los Angeles:
        - Not recommended for machine learning

******Foreign schools:

University of Edinburgh, Scotland:
                                Peter Webb.

{allegra|decvax|harvard|yale|mirror}!ima!applicon!webb,
{mit-eddie|raybed2|spar|ulowell|sun}!applicon!webb, webb@applicon.com

------------------------------

Date: 27 Oct 88 10:04:47 GMT
From: mcvax!ukc!etive!aiva!ken@uunet.uu.net  (Ken Johnson)
Subject: Re: poetry composing programs


Look for `The Policeman's Beard is Half Constructed' by ``Racter''.
--
==============================================================================
From:       Ken Johnson
Address:    AI Applications Institute, The University, EDINBURGH, Scotland
Phone:      031-225 4464 ext 212
Email:      k.johnson@ed.ac.uk
Quotation:  Everyone said it couldn't be done
            But he buckled down and set to it;
            He tackled the Job That Couldn't Be Done,
            And he couldn't do it.

------------------------------

Date: Fri, 28 Oct 88 13:07:37
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: Poetry composing programs

I know of work done in this area by J. Ruiz de Torres. The program
was written in APL/PC and generated blank verse in Spanish.
A book was published describing the system (also in Spanish):
"El Ordenador y la Literatura" (J. Ruiz de Torres), Siglo Cultural,
Madrid, 1987.
The program had a set of definitions of grammar structures (correct
sentences) and long lists of words. The result was quite
impressive, at least the first time you saw it.
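
As a rough illustration of that template-plus-word-list approach, here is a
minimal sketch in Python.  It is not the original APL/PC program; the
templates and the tiny English lexicon below are invented for the example,
where the original used Spanish grammar definitions and much longer word
lists.

import random

# Invented mini-lexicon for illustration only.
LEXICON = {
    "ADJ":  ["silent", "broken", "pale", "endless"],
    "NOUN": ["night", "river", "mirror", "stone"],
    "VERB": ["remembers", "dissolves", "returns", "sleeps"],
}

# Each template is a "correct sentence" skeleton: literal words plus
# part-of-speech slots to be filled from the lexicon.
TEMPLATES = [
    ["the", "ADJ", "NOUN", "VERB"],
    ["a", "NOUN", "of", "ADJ", "NOUN"],
    ["ADJ", "and", "ADJ", ",", "the", "NOUN", "VERB", "again"],
]

def generate_line(rng):
    """Pick a template and fill each slot with a random word."""
    template = rng.choice(TEMPLATES)
    words = [rng.choice(LEXICON[tok]) if tok in LEXICON else tok
             for tok in template]
    return " ".join(words).replace(" ,", ",")

def generate_poem(lines=4, seed=None):
    rng = random.Random(seed)
    return "\n".join(generate_line(rng) for _ in range(lines))

if __name__ == "__main__":
    print(generate_poem())

Running it prints a few lines of mechanical blank verse; the scale of the
word lists and the richness of the grammar definitions are what made the
original system's output impressive.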


Regards,

Manuel Alfonseca, ALFONSEC at EMDCCI11

------------------------------

Date: Mon Oct 31 16:15:44 1988
From: Oren.Etzioni@VIOLET.LEARNING.CS.CMU.EDU
Subject: Statistical methods in inductive reasoning.

reply to query on: statistical methods in inductive reasoning.

Please see my paper "Hypothesis Filtering: A Practical Approach to
Reliable Learning" in the proceedings of the 1988 Machine Learning
Conference.

oren

------------------------------

End of AIList Digest
********************

∂31-Oct-88  2034	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #117 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 31 Oct 88  20:34:25 PST
Date: Mon 31 Oct 1988 20:39-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #117
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 1 Nov 1988      Volume 8 : Issue 117

 Philosophy:

  Oscillating consciousness
  What does the brain do between thoughts?
  When is an entity conscious?
  Bringing AI back home (Gilbert Cockton)
  Huberman's 'the ecology of computation' book
  Limits of AI

----------------------------------------------------------------------

Date: 21 Oct 88 23:10:45 GMT
From: vdx!roberta!dez@uunet.uu.net  (Dez in New York City)
Subject: Re: oscillating consciousness

> Take it as you want.  We know that people can not attend to the entire
> environment at once (or, at least that's what the cog. psychologists
> have found).

No, that is not what cognitive psychologists have found.  What we have found is:
        a) people gain as much information from the environment as their
           sensory systems are able to pick up.  This is a very large amount
           of information; it may well be the entire environment, at least as
           far as the environment appears at the sense receptors.

        b) people have a limited capacity to reflect or introspect upon the
           wide range of information coming in from sensory systems.  Various
           mechanisms, some sense specific, some not, operate to draw people's
           immediate awareness to information that is important.  It is this
           immediate awareness that is limited, not perception of, or
           attention to, the environment.

Dez - Cognitive Psychologist  uunet!vdx!roberta!dez

------------------------------

Date: Mon, 24 Oct 88 11:28:36 PDT
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: What does the brain do between thoughts?

Discussion history (abbreviated; see AIList for detail & sources):
>>> 1) What does the brain do between thoughts?
>>> 2) ... there is no "between thoughts" except for sleep....
>>> 3) [Subjects] reported seeing "randomly" blinking lights blink
         IN RHYTHM to a song. Possible concl:  consciousness
         oscillates.
>>> 4) Other possible concl:  we unconsciously attach meaning to
         apparently random patterns (e.g., notice those lit on the
         beat and disregard others. ... use of tapping, or rubbing
         motions to influence pace of communications....  P.S.  I'd
         like to know what 'oscillating consciousness' is supposed
         to mean.

As I recall, there are some nice psycholinguistic "click"
experiments (I don't know the references--about 1973) which show
that the perceived location of a click which actually occurs at a
random time during a spoken sentence migrates to a semantic (or,
perhaps, syntactic) boundary.  Perhaps the brain is actually
thinking (processing information) all/most/much of the time.  But we
PERCEIVE (or experimentally observe) the brain as thinking
intermittently 1) because we notice only the RESULTS of this
thinking, and 2) do so only when these results become available at
natural (irregularly spaced) breakpoints in the processing.

David R. Lambert
lambert@nosc.mil

------------------------------

Date: 24 October 1988, 20:50:31
From: Stig Hemmer (HEMMER at NORUNIT)
Subject: When is an entity conscious?

First a short quote from David Harvey

>                       But then why in the world am I writing this
>article in response?  After all, I have no guarantee that you are a
>conscious entity or not.

>dharvey@wsccs

I think Mr. Harvey touched an important point here.  As I see it, the question
is a matter of definition.  We don't know other people to be conscious; we
DEFINE them to be.  It is a very useful definition because other people behave
more or less as I do, and I am conscious.

Here it is possible to transfer to programs in two ways:

1) Programs are conscious if they behave as people, i.e. the Turing test.

2) Find the most useful definition. For many people this will be to define
  programs not to be conscious beings, to avoid ethical and legal problems.

This discussion is therefore fruitless because it concerns basic axioms that
people can't argue for or against.
                                   -Tortoise

------------------------------

Date: 25 Oct 88 09:24:07 GMT
From: Gilbert Cockton <mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Reply-to: Gilbert Cockton
          <mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Subject: Bringing AI back home (Gilbert Cockton)


In a previous article, Ray Allis writes:
>If AI is to make progress toward machines with common sense, we
>should first rectify the preposterous inverted notion that AI is
>somehow a subset of computer science,
Nothing preposterous at all about this.  AI is about applications of
computers, and you can't sensibly apply computers without using computer
science.  You can hack together a mess of LISP or PROLOG (and have I
seen some messes), but this contributes as much to our knowledge of
computer applications as a 14-year-old's first 10,000-line BASIC program.

> or call the research something other than "artificial intelligence".
Is this the real thrust of your argument?  Most people would agree,
even Herb Simon doesn't like the term and says so in "Sciences of the
Artificial".  Many people would be happy if AI boy scouts came down
from their technological utopian fantasies and addressed the sensible
problem of optimising human-computer task allocation in a humble,
disciplined and well-focussed manner.

There are tasks in the world.  Computers can assist some of these
tasks, but not others.  Understanding why this is the case lies at the
heart of proper human-machine system design.  The problem with hard AI is
that it doesn't want to know that a real division between automatable
and unautomatable tasks does exist in practice.  Because of this, AI
can make no practical contribution to real world systems design.
Practical applications of AI tools are usually done by people on the
fringes of hard AI.  Indeed, many AI types do not regard Expert Systems
types as AI workers.

> Computer science has nothing  whatever to say about much of what we call
> intelligent behavior, particularly common sense.
Only sociology has anything to do with either of these, so to
place AI within CS is to lose nothing.  To place AI within sociology
would result in a massacre :-)

Intelligence is a value judgement, not a definable entity.  Why are so
many AI workers so damned ignorant of the problems with
operationalising definitions of intelligence, as borne out by nearly a
century of psychometrics here?  Common sense is a labelling activity
for beliefs which are assumed to be common within a (sub)culture.
Hence the distinction between academic knowledge and common sense.
Academic knowledge is institutionalised within highly marginal
sub-cultures, and thus as sense goes, is far less common than the
really common stuff.

Such social constructs cannot have a machine embodiment, nor can any
academic discipline except sociology sensibly address such woolly
epiphenomena.  I do include cognitive psychology within this exclusion,
as no sensible cognitive psychologist would use terms like common sense
or intelligence.  The mental phenomena which are explored
computationally by cognitive psychologists tend to be more basic and
better defined aspects of individual behaviour.  The minute words like
common sense and intelligence are used, the relevant discipline becomes
the sociology of knowledge.
--
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
        gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: 29 Oct 88 01:38:57 GMT
From: mailrus!sharkey!emv@rutgers.edu  (Ed Vielmetti)
Subject: Huberman's 'the ecology of computation' book

(why is there no sci.chaos or sci.ecology ?)

Has anyone else read this book?  I'm looking for discussion of
what might be labelled as 'computational ecology' or 'computational
ecosystems'.  Just looking at the relevant references in the
two papers I have, the seminal works appear to be Davis and Smith
(1983), 'Negotiation as a Metaphor for Distributed Problem Solving'
in 'Artificial Intelligence 20', and Kornfeld and Hewitt's
"The Scientific Community Metaphor" in IEEE Trans Systems Man
& Cybernetics 1981.

Followups go wherever - I really don't know which if any of these
newsgroups have any interest.  My approach to this is based on
a background in economics and in watching congestion appear in
distributed electronic mail systems.

------------------------------

Date: 31 Oct 88 15:17:14 GMT
From: orion.cf.uci.edu!paris.ics.uci.edu!venera.isi.edu!smoliar@ucsd.edu  (Stephen Smoliar)
Subject: Re: Limits of AI

In article <5221@watdcsu.waterloo.edu> smann@watdcsu.waterloo.edu (Shannon
Mann - I.S.er) writes:
>
>Now consider the argument posed by Dr. Carl Sagan in ch. 2, Genes and
>Brains, of the book _The Dragons of Eden_.  He argues that, at about the
>level of a reptile, the amount of information held within the brain
>equals that of the amount of information held within the genes.  After
>reptiles, the amount of information held within the brain exceeds that
>of the genes.
>
>Now, of the second argument, we can draw a parallel to the question asked.
>Lets rephrase the question:
>
>Can a system containing X amount of information, create a system containing
>Y amount of information, where Y exceeds X?
>
>As Dr. Sagan has presented in his book, the answer is a definitive _YES_.
>
Readers interested in a more technical substantiation of Sagan's arguments
should probably refer to the recent work of Gerald Edelman, published most
extensively in his book NEURAL DARWINISM.  The title refers to the idea that
"mind" is essentially a result of a selective process among a vast (I am
tempted to put on a Sagan accent, but it doesn't come across in print)
population of connections between neurons.  However, before even considering
the selective process, one has to worry about how that population came to be
in the first place.  I quote from a review of NEURAL DARWINISM which I
recently submitted to ARTIFICIAL INTELLIGENCE:

        This population is an EPIGENETIC result of prenatal development.
        In other words, the neural structure (and, for that matter, the
        entire morphology) of an organism is not exclusively determined
        by its genetic repertoire.  Instead, events EXTERNAL to strictly
        genetic activity contribute to the development of a diverse
        population of neural structures.  Specific molecular agents,
        known as ADHESION MOLECULES, are responsible for determining
        the course of a morphology and, consequently, the resulting
        pattern of neural cells which are formed in the course of that
        morphology;  and these molecules are responsible for the formation,
        during embryonic development, of the population from which selection
        will take place.

Those who wish to pursue this matter further and are not inclined to wade
through the almost 400 pages of NEURAL DARWINISM will find an excellent
introduction to the approach in the final chapter of Israel Rosenfield's
THE INVENTION OF MEMORY.  (This remark is also directed to Dave Peru, who
requested further information about Edelman.)

------------------------------

End of AIList Digest
********************

∂02-Nov-88  2033	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #118 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 2 Nov 88  20:32:31 PST
Date: Wed  2 Nov 1988 23:07-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #118
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 3 Nov 1988     Volume 8 : Issue 118

 Announcements:

  18th ASIS Mid-Year Meeting
  2nd IFIP/IFAC/IFORS Workshop on AI in Economics and Management - Singapore
  Fifth Israeli Symposium on AI
  AISNE 89
  Symposium on AI Research for Exploitation of Battlefield Environment

----------------------------------------------------------------------

Date: Mon, 31 Oct 88 12:08 PST
From: Christine Borgman                   
      <IIN4CLB%OAC.UCLA.EDU@MITVMA.MIT.EDU>
Subject: 18th ASIS Mid-Year Meeting

MEETING ANNOUNCEMENT:  Please forward to anyone who may be interested.

Submitted by Christine Borgman, Graduate School of Library and
Information Science, UCLA, Los Angeles, CA  90024, 213/825-1379.
IIN4CLB@UCLAMVS

Call for papers and session proposals

THE USER INTERFACE
1989 American Society for Information Science Mid-Year Meeting

May 21-24, 1989, San Diego

   The 18th ASIS Mid-Year Meeting, May 21-24, 1989, in San Diego,
California, will present the state of the art in the design of
interfaces for information retrieval systems, online public
access catalogs and other information technologies.  We will
identify major tendencies, trends, influences, and approaches
in interface design and discuss their significance for the systems
of the future.

  Particular focus will be placed on the major activities involved
in the design of interfaces, such as tools and techniques for
interface design, user models and their application, and the
process of task and interaction analysis.

  Program ideas and contributions are invited in all related areas.
Discussions and explorations of existing interfaces, particularly
those that imply lessons for other designers, are also welcome.

Major themes in *The User Interface*:

Tools
- Rapid prototyping systems
- Design environments
- User interface development and management tools

Natural Language Interfaces
- Interfaces that allow natural language input by users

Artificial Intelligence Approaches
- Expert systems and knowledge-based approaches

Interface styles
-Command, menu, or direct manipulation:  their uses and advantages

Adaptive-Adaptable Systems
- Systems or systems features that are user-modifiable or that adapt
to specific users

Usability
- Evaluation and testing of interfaces and interface ideas

Guidelines and Standards for Interfaces
- International, national, or in-house


  Interested presenters are encouraged to expand on any of these theme
ideas.  Acceptance will be based on the relevance of the topic,
substantive nature of the presentation, and clarity.

  Proposals should take the following form:

Contributed papers:  Submit the title and a 250-word abstract

Demonstration proposals:  Submit a written description of a
demonstration of a particular system related to the meeting topic.
Include a statement of the equipment requirements to support the
proposed demonstration.

Panel discussion proposals:  Submit a one-page description of a
topic for a panel discussion and a list of possible speakers to
address the topic.

  For fullest consideration, all proposals should be submitted by
November 21, 1988.  Notification of acceptance or rejection will
be made by January 3, 1989.

All proposals and inquiries should be submitted to:

Martin Dillon, 1989 ASIS Mid-Year Meeting
Director, Office of Research
OCLC
6565 Frantz Road
BITNET:  MJD@OCLCRSUN

ASIS Special Interest Groups wishing to sponsor SIG programs should
contact Debora Shaw, Indiana University, at SHAW@IUBACS.

ASIS SIG/UOI (User Online Interaction) is interested in co-sponsoring
sessions.  For further information contact Thomas Martin of Syracuse
University at TMARTIN@SUVM.

------------------------------

Date: Tue, 01 Nov 88 11:06:47 SST
From: Joel Loo <ISSLPL%NUSVM.BITNET@MITVMA.MIT.EDU>
Subject: 2nd IFIP/IFAC/IFORS Workshop on AI in Economics and
         Management - Singapore

An advanced program for the conference:


                     2ND INTERNATIONAL
                IFIP/IFAC/IFORS WORKSHOP ON
                ARTIFICIAL INTELLIGENCE IN
                 ECONOMICS AND MANAGEMENT
                         SINGAPORE
                    January 9 - 13, 1989

Sponsors
IFIP (International Federation for Information Processing) (Main Sponsor)
IFAC (International Federation of Automatic Control)
IFORS (International Federation of Operational Research Societies)

In Cooperation with
AAAI (American Association for Artificial Intelligence)
ACM-SIGART (Association for Computing Machinery
-Special Interest Group on Artificial Intelligence)
IEEE-CS (Institute of Electrical & Electronics Engineers Computer Society)
IPSJ (Information Processing Society of Japan)
SEARCC (South East Asia Regional Computer Confederation)

            Supporters
                 IBM Singapore Pte Ltd
                 Digital Equipment Singapore Pte Ltd
                 Texas Instruments Singapore Pte Ltd

            Organizer
            The Institute of Systems Science
            National University of Singapore

            Co-Organizer
            SCS (Singapore Computer Society)


            GENERAL INFORMATION
            For registration and other information, please contact:
            Mrs Vicky Toh
            Institute of Systems Science
            National University of Singapore
            Heng Mui Keng Terrace
            Kent Ridge
            Singapore 0511

            Telephone:               772-2003 / 772-2096
            Telex:                   ISSNUS RS 39988
            Telefax:                 778-2571
            Bitnet:                  ISSVCT @ NUSVM

            FEES
            Tutorials (January 9 -10)     US$200.00
            Workshop (January 11 - 13)    US$200.00

            WORKSHOP PROGRAM (January 11 - 13, 1989)




            Key note speech will be delivered by Professor Herbert Simon,
            Nobel Laureate, Carnegie-Mellon University


ORGANIZATIONAL STRUCTURE & STRATEGY
1.Michael Masuch  University of         Artificial Intelligence
  &               Amsterdam             in Organizations:
                  The NETHERLANDS       Complementing the Garbage
  Perry J LaPotin Dartmouth College     Can Model Organizational
                  USA                   Choice

2.Michael Prietula Carnegie-Mellon      Configuring AI Systems to
  Peng Si Ow &    University            Organizational Structure:
  Wen-Ling Hsu    USA                   Issues Examples from
                                        Multiple Agent Support

3.Borge Obel      Odense University     Evaluating Organizations
  &               DENMARK               Using An Expert Systems
  Richard M Burton Duke University
                  USA

4.Gene Sakamoto   IBM Los Angeles       SPARK: A Facilitative
  Patricia Gongla, Scientific           Systems Identifying
  R Clay Sprowls  Center                Competitive Uses of
  & Patricia                            Information Technology
  Goldweic

BANKING

1.L F Pau         Technical             Applications of
                  University of         Artificial Intelligence
                  Denmark               in Banking, Financial
                  DENMARK               Services and Economics

2.Mariani         Carisma S R L         CR.E.S. (CRedit Granting
  Francesco       ITALY                 Expert Systems) - How
  Roda Claudia                          to grant credit using
  & Valeriani                           frames
  Giuliano

3.R Bhaskar &     IBM Thomas            Qualitative Reasoning in
  & Seshashayee S J Watson              the Commercial Lending
  Murthy          Research Center       Decision: The Role of
                  USA                   Naive Mathematics

4.Jean Roy        Universite Laval      A Clever Screening for
                  CANADA                Commercial Loans
                                        Applications

5.Michael J Shaw  University of         Applying Inductive
  James A Gentry  Illinois -            Learning in a Evaluation
                  Champaign/Urbana      Decision Support System
                  USA

6.Nancy A         Digital Equipment     Towards a Domain specific
  Broderick &     Corporation           Tool for Underwriting
  Peter Politakis USA                   Commercial Insurance


FINANCE

1.Lin Zhangxi     Economic              RASF - Automating Routine
                  Information           Analysis of Financial
                  Centre for            Data
                  Fujian Province
                  CHINA

2.Soumitra Dutta  University of         An Artificial
  & Shashi        California            Intelligence Approach to
  Shekhar         Berkeley              Predicting Bond Ratings
                  USA

3.Irene M Y Woon  Aston University      Qualitative Modelling in
  & Peter         UNITED KINGDOM        Financial Analysis
  Coxhead

4.Y Y Chan,       LaTrobe               Port-Man - An Expert
  T S Dillon      University            System of Portfolio
  & E G Shaw      AUSTRALIA             Management in Banks

5.Noberto A Torres Escola de            Expert Modelling System
                  Administracao de      for Futures Markets
                  Empresas de Sao       Operations
                  Paulo BRAZIL

6.B J Garner &    Deakin University     A Canonical Graph Model
  E Tsui          AUSTRALIA             for Personal Financial
                                        Planning

MANUFACTURING

1.Phaih-Lan Law   Digital Equipment     Managing AI Technology
  Mitchel Tseng   Corp USA              Transfer Manufacturing
  & Peter Ow      Digital Equipment     Applications
                  Intl Ltd
                  SINGAPORE

2.Juha E Hynynen  Helsinki              Knowledge-Based
                  University of         Coordination in
                  Technology            Distributed Production
                  FINLAND               Management

3.Mitchel M Tseng Digital Equipment     Intelligent Integrated
  & Dennis        Corp                  Support Systems for
  O'Connor        SINGAPORE             Manufacturing Enterprise


MARKETING

1.Giorgio M       IFOA, Reggio          Applications for
  Gandellini      Emilia &              Artificial Intelligence
                  Macerata              in Marketing Decision
                  University            Support Systems
                  ITALY

2.Alan Tse        Massey University     A Prolog-Based Expert
                  NEW ZEALAND           System for Price Decision
                                        Making Under Incomplete
                                        Knowledge

3.C Apte,         IBM Thomas J          Utilizing Knowledge
  J Griesmer      Watson Research       Intensive Techniques in
  S J Hong        Center USA            An Automated Consultant
  M Karnaugh                            for Financial Marketing
  J Kastner,
  M Laker &
  E Mays

4.R S Dhananjayan Bharathia             Application of Expert
  VS Janaki Raman University            System Techniques for
  & K Sarukesi    INDIA                 Analyzing Firm's Fall in
                                        Market Square


PLANNING & SCHEDULING

1.Chae Y Lee      Korea Institute       An Intelligent
                  of Technology         Classification and
                  KOREA                 Formulation of Network
                                        Flow Square

2.Elisabetta Tesi Olivetti              Expert Systems for the
  & Roberto Meli  ITALY                 Project Management of
                                        Information Systems

3.Mohammed I      King Fahd             A Multi-Level Mani-
  Bu-Hulaiga      University of         Technique for Structuring
                  Petroleum &           Ill-Structured Decision
                  Minerals              Problems
                  SAUDI ARABIA

4.Leong Yit San   Rubber Research       The Rubber Research
                  Institute of          Institute of Malaysia
                  Malaysia              Environmax Planting
                  MALAYSIA              Recommendation Expert
                                        System


USER INTERFACE

1.Hwee Tou Ng     University of         A Computerized Prototype
                  Texas (Austin)        Natural Language Tour
                  USA                   Guide

2.Toshiro Bando   The Sumitomo          Texpert
  San             Bank, Ltd
                  JAPAN

3.Peter Mertens   Universitat           Derivation of Verbal
                  Erlangen-Nurnberg     Expertises from
                  FEDERAL REP OF        Accounting Data
                  GERMANY

4.David Kendrick  University of Texas   A Production Model
                  (Austin) USA          Construction Program

ECONOMICS

1.Zhongtuo Wang   Dalian University     Application of DSS in the
                  of Technology         Regional Developing
                  CHINA                 Strategy Analysis

2.Edward J Krowitz Econotech            An Expert System to
                  USA                   Analyse Balance of
                                        Payments Problems

3.Odile Palies    University P Et       Knowledge Bases for
  Francois Libeau Curie Hendyplan       Economic Forecasting
  & Jean-Marc     SA
  Philip          BELGIUM

4.Claudio Gianotti Politecnico di       Estimating Unobservable
                  Milano                Decisions Through
                  ITALY                 Conjunctural Enquiries:
                                        Preliminary Results

5.R J Berndsen    University of         Sequential Causality and
  & H A M Daniels Tilburg               Qualitative Reasoning in
                  THE NETHERLANDS       Economics

6.Kuan-Pin Lin    Portland State        Analysis of Data for
  & Stan Perry    University            Economic Rationality: An
                  USA                   Expert Systems Approach

AI METHODOLOGY

1.Jasbir Singh    University of         A Model for the Empirical
  Dhaliwal &      British Columbia      Investigation of
  Izak Benbasat   CANADA                Knowledge Acquisition
                                                    Techniques

2.Philip Ein-Dor  Tel-Aviv              Representing Commonsense
  & Yaakov        University            Business Knowledge: An
  Ginzberg        ISRAEL                Initial Implementation

3.M Schumann      Universitat           Comparison of Rule Based
  & W Geis        Erlangen-Nurnberg     Expert Systems with
                  FEDERAL REP OF        Traditional Technology
                  GERMANY               Selected Examples

4.Jae Kyu Lee     Korea Advanced        A Knowledge-Based
  Soek Chin Chu   Institute of          Formulation of Linear
  Min Yong Kim    Science &             Programming Models Using
  &               Technology            Unikopt
  Sung Hoon Shim

5.Arie-Ben David  Case Western          A Methodology for
  & Yoh-Han Pao   University            Capturing Technology via
                  USA                   Neural Networks

6.Stuart L        Advanced Decision     Data Driven Assessment &
  Crawford        Systems Stanford      Decision-Making
  Robert Fung&    University
  Edison Tse      USA

7.Udo Hahn        Universitate          A Sketch of Group Truth
                  Passau                Maintenance
                  FEDERAL REP OF
                  GERMANY


SERVICES

1.James R Marsden University of         The Use of Expert Systems
  David E Pingry  Kentucky              in Legal Contracting
  & Reza Saidi    University of
                  Arizona
                  Clarkson
                  University
                  USA

2.H A M Daniels   University of         Assessment of Expert
  &               Tilburg               Systems in Tax
  P van der Horst THE NETHERLANDS       Consultancy
                  Coopers & Lybrand
                  THE NETHERLANDS

3.Varghese S Jacob The Ohio State       A Decision Process
  Andrew D Bailey University            Approach to Expert
  & William       USA                   Systems in Auditing
  Gallivan

4.Tan Ah Hwee     National              Connectionist Expert
  &               University of         System for Intelligent
                  Singapore             Advisory Application
  Chee Lai Kin    Institute of
                  Systems Science
                  SINGAPORE

5.Anna Bodi &     Monash University     CATA: A Computer Aided
  John Zeleznikow AUSTRALIA             Assistant

6.Daniel Schwabe  Pontificia            Expert Systems and Social
  & Celso Escobar Universidade          Welfare Benefits
  Pinheiro        Catolica              Regulations: The
                  BRAZIL                Brazilian Case


SOFTWARE & DATA ENGINEERING

1.Tridas          Carnegie-Mellon       Knowledge-Based
  Mukhopadhyay    University            Components of Software
  Michael Prietula USA                  Development Effort
  & Steve                               Estimation: An
  Vincinanza                            Exploratory Study

2.Martyn Richard  UNISYS                Data Modelling for
  Jones           Corporation           Multi-Modal Systems: An
                  SPAIN                 Expert Systems Approach

3.Mikko Hiisalmi  Technical             Expert Support for
                  Research Center       Information Retrieval
                  of Finland            Using Graphical and
                  FINLAND               Object-Oriented
                                        Techniques

4.Chan Huang Seng KE Services           Transformation of a
  &               Pte Ltd               Semantic Network into
  Por Hau Joo     Institute of          Object-Oriented &
                  Systems Science       Relational Database
                  SINGAPORE             Design

     CONFERENCE COMMITTEE

     CHAIRMAN:
                  Juzar MOTIWALLA
                  Institute of Systems Science
                  National University of Singapore
                  Kent Ridge
                  Singapore 0511

     PROGRAM COMMITTEE CHAIRMAN:
                  Yoh-Han PAO
                  Case Western Reserve University, USA

                  L F PAU
                  Technical University of Denmark, DENMARK

                  Hoon Heng TEH
                  Institute of Systems Science, SINGAPORE

      ORGANIZING COMMITTEE CHAIRMAN
                  Desai NARASIMHALU
                  Institute of Systems Science, SINGAPORE


      INTERNATIONAL PROGRAM COMMITTEE

      Jan AIKINS          C H Hu              Suzanne PINSON
      AION, USA           Academy of Science  University Paris II
                          PRC                 LAFORIA, FRANCE

      A R BACINELLO       Andreas HUBER       M G RODD
      Institute of        SKA, CH             University of Wales
      Financial                               UK
      Mathematics
      ITALY

      Miroslav BENDA      Bruce JOHNSON       Shoji SAKAMOTO
      Boeing Computer     Arthur Andersen &   Sanwa Bank, JAPAN
                          Co., USA

      Lars B BENGTSSON    Yoichi KAYA         Piero SCARUFFI
      Skandinovisha       University of Tokyo Olivetti, USA
      Enskilda Bank       JAPAN
      SWEDEN

      Jason CATLETT       David KENDRICK      Hans J SCHNEIDER
      Sydney University   University of Texas Technical University
      AUSTRALIA           USA                 Berlin, FRG

      Claudio GIANOTTI    Jae Kyu LEE         Edison TSE
      Politecnico di      KAIST, KOREA        Stanford University
      Milano, ITALY                           USA

      Volkmar HAASE       J MYLOPOULOS        T VAMOS
      Institut fur        University of       Academy of Sciences
      Maschinelle         Toronto, CANADA     HUNGARY
      Dokumentation
      AUSTRIA

      Peter HART          James NESTOR        Victor V VIDAL
      Syntelligence,      Ernst & Whinney     Technical University
      USA                 USA                 DENMARK

      David B HERTZ       Dennis E O'CONNOR   Andrew WHINSTON
      University of Miami Digital Equipment   Purdue University
      USA                 USA                 USA

      Floyd HOLLISTER     Peng Si OW          G W de WIT
      Texas Instruments   Carnegie-Mellon     Nationale
      Inc, USA            University, USA     Nederlanden,
                                                    NETHERLANDS

      Se June HONG        Judea PEARL         William WOODS
      IBM, USA            University of       Applied Expert
                          California,         Systems, USA
                          Los Angeles, USA

                          David PESSEL
                          BP, USA


      TRAVEL ARRANGEMENTS

      For special air fares please contact: Charles OR Angela Man
      of SINO-AMERICAN TOURS INC. (NEW YORK) at:

            Tel:      (212) 925-3388 or (800) 221 7982

            Telex:    233607 SINO

            Fax:      (212) 925 6483





                      TUTORIALS

                 January 9 - 10, 1989

 Monday
 January 9

 9.00 - 12.30 pm     Applications of Neural Nets
                     Professor Yoh-Han Pao
                      Case Western Reserve University

                     Introduction to AI
                     Professor Teh Hoon Heng
                     Institute of Systems Science
                     National University of Singapore

 12.30 - 1.30 pm     LUNCH

  1.30 - 5.00 pm      Application of AI in Manufacturing
                      Professor Mark Fox
                      Carnegie Mellon University


 Tuesday
 January 10

  9.00 - 10.30 am     AI & Financial Services
                      Professor L F Pau
                      Technical University of Denmark

  12.30 - 1.30 pm     LUNCH

  1.30 - 5.00 pm      AI and Information Retrieval
                      Dr Nick Belkin
                      Rutgers University

------------------------------

Date: Tue, 1 Nov 88 19:15:01 JST
From: Shmuel Peleg <peleg%humus.Huji.AC.IL@MITVMA.MIT.EDU>
Subject: Fifth Israeli Symposium on AI

         Fifth Israeli Symposium on Artificial Intelligence
                   Tel-Aviv, Ganei-Hata`arucha
                      December 27-28, 1988


Preliminary Program

Tuesday,  December 27.

08:00-09:00     Registration

09:00-12:00     Opening Session, Joint with ITIM/CASA
Opening addresses.
Invited Talk:   Three dimensional vision for robot applications
David Nitzan, SRI International


12:00-13:30     Lunch Break

13:30-15:15     Session 2.4     Constraints

Invited Talk:  An Overview of the Constraint Language CLP(R)
Joxan Jaffar, IBM TJW Research Center

Belief maintenance in dynamic constraint networks
Rina Dechter, UCLA, and Avi Dechter, California State University


13:30-15:15     Session 2.5     Vision

Multiresolution shape from shading
Gad Ron and Shmuel Peleg, Hebrew University

Describing geometric objects symbolically
Gerald M. Radack and Leon S. Sterling, Case Western Reserve University

A vision system for localization of textile pieces on a light table
(short talk)
H. Garten and M. Raviv,  Rafael

15:15-15:45     Coffee Break

15:45-17:45     Session 3.4     Reasoning Systems


A classification approach for reasoning systems - a case study in
graph theory
Rong Lin, Old Dominion University


Descriptively powerful terminological representation
Mira Balaban and  Hana Karpas, Ben-Gurion University


Bread, Frappe, and Cake: The Gourmet's Guide to Automated Deduction
Yishai A. Feldman and Charles Rich, MIT


15:45-17:45     Session 3.5     Vision

Invited Talk:  Cells, skeletons and snakes
Martin D. Levine, McGill University

The Radial Mean of the Power Spectrum (RMPS) and adaptive image restoration
Gavriel Feigin and Nissim Ben-Yosef, Hebrew University

Geometric and probabilistic criteria with an admissible cost structure
for 3-d object recognition by search
Hezekiel Ben-Arie, Technion

17:45-18:15     IAAI Business meeting


Wednesday, December 28.

09:00-10:30     Session 4.4     Computer Aided Instruction

The implementation of artificial intelligence in computer based training
Avshalom Aderet and Sachi Gerlitz, ELRON Electronic Industries

A logical programming approach to research and development of a
student modeling component in a computer tutor for characteristics of
functions
Baruch Schwarz and Nurit Zehavi, Weizmann Institute

Meimad --- A database integrated with instructional system for retrieval
(in Hebrew)
Avigail Oren and David Chen, Tel-Aviv University

09:00-10:30     Session 4.5     Robotics/Search

Invited Talk: Principles for Movement Planning and Control
Tamar Flash, Weizmann Institute

Strategies for efficient incremental nearest neighbor search
Alan J. Broder, The MITRE Corporation

10:30-11:00     Coffee Break

11:00-13:00     Session 5.4     Legal Applications/Language

Towards a computational model of concept acquisition and
modification using cases and precedents from contract law
Seth R. Goldman, UCLA

Expert Systems in the Legal Domain
Uri J. Schild, Bar-Ilan University

Machinery for Hebrew Word Formation
Uzzi Ornan, Technion

What's in a joke?
Michal Ephratt, Haifa University

11:00-13:00     Session 5.5     Expert Systems

Explanatory Meta-rules to provide explanations in expert systems
C. Millet, EUROSOFT, and M. Gilloux, CNET

Declarative vs. procedural representation in an expert system: A perspective
Lev Zeidenberg, IET,  and Ami Shapiro IDF

Automatic models generation for troubleshooting
Arie Ben-David, Hebrew University

A general expert system for resource allocation (in Hebrew)
Zvi Kupelik, Ehud Gudes, Amnon Mizels, and Perets Shoval, Ben-Gurion University


13:00-14:30     Lunch Break

14:30-16:00     Session 6.4     Logic Programming

Invited Talk: The CHIP constraint programming system
Mehmet Dincbas, ECRC

Time constrained logic programming
Andreas Zell, Stuttgart University

Automatic generation of control information in five steps
Kristof Verschaetse, Danny De Schreye and Maurice Bruynooghe,
Katholieke Universiteit Leuven


14:30-16:00     Session 6.5     Data Structures for Vision

Invited Talk:   An Overview of Hierarchical Spatial Data Structures
Hanan Samet, University of Maryland

Optimal Parallel Algorithms for Quadtree Problems
Simon Kasif, Johns Hopkins University


16:00-16:30     Coffee Break

16:30-18:00     Session 7.4     Reasoning and Nonmonotonic Logic

Preferential Models and Cumulative Logics
Daniel Lehmann, Hebrew University

Invited Talk: Bayesian and belief-function formalisms for evidential
reasoning: A conceptual analysis
Judea Pearl, UCLA

16:30-18:00     Session 7.5     Pattern Matching

Scaled pattern matching
Amihood Amir, University of Maryland

Term Matching on a Mesh-Connected Parallel Computer
Arthur L. Delcher and Simon Kasif, The Johns Hopkins University


18:00-18:15     Closing remarks



-------------------------------------

For registration information please contact:

5th ISAI Secretariat
IPA, Kfar Maccabiah,
Ramat Gan 52109
Israel
(972) 3-715772

Or by e-mail:
udi@wisdom.bitnet
hezy@taurus.bitnet

------------------------------

Date: Tue, 1 Nov 88 09:23:30 EST
From: kgk@CS.BROWN.EDU
Subject: AISNE 89


           Artificial Intelligence Society of New England
                           Annual Meeting
                        November 11-12, 1988
     T. J. Watson Center for Information Technology, 4th Floor
                          Brown University
                      Providence, Rhode Island


The Annual Meeting of the Artificial Intelligence Society of New
England will be held on the evening of Friday, November 11th, and on
November 12th, 1988 at Brown University in Providence, Rhode Island.
This year, we will have a different format from previous years.
Instead of a single series of presentations, there will be parallel
workshops where researchers with similar interests can explore topics
in-depth.  There will be four workshops, two each in the morning and
afternoon, on the Saturday of the meeting.

Within each workshop, there will be short presentations by students,
followed by a discussion led by a faculty member.  The topics of the
workshops will be selected from the following:

Automated Reasoning
Connectionism
Formal Theories
Knowledge Representation
Learning
Natural Language
Planning
Robotics and Vision
Reasoning about Uncertainty

Our guest speaker this year will be Ramesh Patil from MIT, who will
speak on ``Artificial Intelligence and Medical Diagnosis''.  We have
just moved into a new building at Brown, and we are excited to have
everybody come and join us in celebrating Friday after the talk.

The tentative schedule of events is as follows:

Friday, November 11

 7:30PM --  8:30PM    Invited Talk        Ramesh Patil, MIT
 8:30PM --            General Merriment

Saturday, November 12

 9:30AM -- 12:30PM    Workshops
12:30PM --  2:00PM    Lunch
 2:00PM --  5:00PM    Workshops

As usual, sleeping accommodations will be provided by the host
institution's students and faculty -- bring a sleeping bag.

The current list of invitees includes BBN, BU, Brandeis, Dartmouth,
GE, Harvard, ITT, MIT, MITRE, NYU, On Technology, Rochester,
Schlumberger, Thinking Machines, Tufts, UConn, UMass (Amherst), UNH,
Vassar, and Yale.  If you can think of somebody else to invite,
please pass on this note or let us know.

So that we have an idea of how many people to expect, we would like
to ask that you contact us at the address below if you would like to
attend.  Please be sure to include your name, telephone number,
electronic mail address, and whether or not you need sleeping
accommodations.  If you have any questions, please don't hesitate to
contact us.

Professor Eugene Charniak
Department of Computer Science
Brown University
Box 1910
Providence, RI 02912
(401) 863-7636
ec@cs.brown.edu

------------------------------

Date: 2 Nov 88 19:22:15 GMT
From: ai.etl.army.mil!john@ames.arpa  (John Benton)
Subject: Symposium on AI Research for Exploitation of Battlefield Env.


                        Tutorials in Support of the

                           U. S. ARMY SYMPOSIUM ON

               ARTIFICIAL INTELLIGENCE RESEARCH FOR EXPLOITATION

                        ON THE BATTLEFIELD ENVIRONMENT


                                Tutorial Agenda




Monday, 14 November 1988


     0830      STI Tutorial #620
               Artificial Intelligence:  An Overview of Current Research
                    Dr. Dan Patterson, University of Texas, El Paso

     1200      Lunch

     1330      STI Tutorial #645
               Environmental Effects on the Battlefield:  Potential for AI
               Applications
                    Dr. Donald W. Hoock, Dr. Robert A. Sutherland,
                    Dr. Richard C. Shirkey, U. S. Army Atmospheric Sciences
                    Laboratory

               STI Tutorial #668
               Geographical Information Systems (GIS)
                    Dr. H. Dennis Parker, Colorado State University

     1700      Tutorials End

     1830      STI Tutorial #634
               Artificial Intelligence in Geographic Information Systems (GIS)
               Applications
                    Dr. Andrew U. Frank, University of Maine in Orono

     2200      Tutorial Ends










!
                           U. S. ARMY SYMPOSIUM ON

               ARTIFICIAL INTELLIGENCE RESEARCH FOR EXPLOITATION

                        ON THE BATTLEFIELD ENVIRONMENT


Technical Agenda



Monday, 14 November 1988

1800      Early Registration - Grand Ball Room


Tuesday, 15 November 1988


0700      Registration

OPENING SESSION
Chairman, Dr. Richard B. Gomez, U. S. Army Engineer Topographic Laboratories

0800      Administrative Remarks
Dr. Paul D. Try, Science and Technology Corporation

0805      Introductory Comments
Dr. Richard B. Gomez, U. S. Army Engineer Topographic
Laboratories

0815      Welcome by Co-Host
Dr. Diana S. Natalicio, University of Texas at El Paso

0825
Dr. Darrell Collier, TRADOC Analysis Command,White Sands Missile Range

0835      Welcome by Co-Host
BG Jay M. Garner, U. S. Army Air Defense Artillery Center
and School

0845      Introduction of Keynote Speaker
BG Jay M. Garner, U. S. Army Air Defense Artillery Center
and School

0850      Keynote Address
Gen. Glenn K. Otis, USA, Ret. Past Commander in Chief,
U. S. Army Europe
0930      Break





SESSION 2:Military Overview (Doctrine)
Chairman, TBA

1000      Introduction
TBA

1000      Air Land Battle Doctrine
TBA

1020      Terrain Support Doctrine
TBA

1040      Space Support Realistic Battlefield
Col. Ronan I. Ellis, Army Space Institute

1100      Functional Area Overview (Relationships to Artificial
Intelligence)
TBA
Maneuver
Fire Support
Combat Service Support
Intelligence and Electronic Warfare
               Air Defense

     1200      Lunch

     1330      Opening Address
                    Dr. Hamid M. El-Bisi, Office of the Assistant Secretary of
                     the Army for Research, Development and Acquisition


                     SESSION 3:  The Realistic Battlefield
          Chairman:  Dr. Howard Holt, Atmospheric Sciences Laboratory

     1400      Design of a Software Environment for Tactical Situation
               Development
                    M. J. Coombs, and R. T. Hartley, Computing Research
                    Laboratory and J. R. Thompson, Science Applications
                    International
                    Presented by:  M. J. Coombs

     1430      Issues Surrounding Development of Meteorological Environmental
               Expert Systems
                    Timothy Sletten and Mark Stunder, GEOMET Technology, Inc.
                    Presented by:  Timothy Sletten

     1450      Shootout-89:  A Comparative Evaluation of AI Systems for
               Convective Storm Forecasting
                    W. R. Moninger, NOAA/Forecast Systems Laboratory
                    Presented by:  Chris Fields, CRL

     1505      A Heuristic Low Level Turbulence Forecast Decision Aid
                    Martin E. Lee, U. S. Army Atmospheric Sciences Laboratory

     1515           An Expert Systems Approach to Advisory Weather Forecasting
                    Young P. Yee and David J. Novlan, U. S. Army Atmospheric
                    Sciences Laboratory
                    Presented by:  Young P. Yee

     1530      Break

     1600      Synopsis of Talk on Parallel Neural Models for Knowledge
               Presentation
                    Larry Lesser, ParaSoft

     1610           The Realistic Battlefield:  AI and Battlefield Management
                    of the Biological Warfare Threat
                    Gilcin F. Meadors, III, LTC, MC; Dallas C. Hack, MAJ, MC,
                    and Glen A. Higbee, USAMRIID
                    Presented by:  LTC Gilcin Meadors

     1620           Symbolic Image and Terrain Processing to Automate the
                    Intelligence Preparation of the Battlefield
                    Paul D. Lampru, Consultant's Choice, Inc.

     1635      Army Requirements for an Intelligent Interface to a Real Time
               Meteorological Data Base
                    G. McWilliams, S. Kirby, U. S. Army Atmospheric Sciences
                    Lab and C. A. Fields, M. J. Coombs, T. C. Eskridge, R. T.
                    Hartley, H. D. Pfeiffer, C. A. Soderlund, New Mexico State
                    University
                    Presented by:  G. McWilliams

     1645      Concept for Weather Related Decision Aids for the Tactical
               Commander
                    Bernard F. Engebos, Robert R. Lee and Robert Scheinhartz,
                    U. S. Army Atmospheric Sciences Laboratory
                    Presented by:  Bernard F. Engebos

     1655      Panel Responses
                    M. J. Coombs
                    R. Dyer
                    T. Sletten
                    C. Fields
                    L. Lessor

     1710      General Discussion

     1740      Session Ends

     1900      Symposium Dinner and Special Entertainment







Wednesday, 16 November

     0700      Registration

     0800      Opening Address
                    Mr. Bob Benn, Headquarters, U. S. Army Corps of Engineers

                     SESSION 4:Automated Terrain Reasoning
     Chairman:  John Benton, U. S. Army Engineer Topographic Laboratories

     0830      Representation Issues in the Design of a Spatial Database
               Management System
                    Richard Antony, U.S. Army Center for Signals Warfare

     0850      Scale-Space Representations for Flexible Automated Terrain
               Reasoning
                    David M. Keirsey, Jimmy Krozel and David W. Payton, Hughes
                    Artificial Intelligence Center
                    Presented by:  David M. Keirsey

     0910      Spatial Analysis for Automated Terrain Reasoning
                    David L. Milgram, Richard F. Shu, and Michael J. Black,
                    Advanced Decision Systems
                    Presented by:  David L. Milgram

     0930      Break

     1000      Panel Discussion, Dr. Leopoldo Gemoets, Moderator
               Panel Members:  R. Antony, D. Keirsey, D. McDermott, D. Milgram,
               and H. Samet

     1030           Allocating Sensor Envelope Patterns to a MAP Partitioned by
                    Territorial Contours
                    Terence M. Cronin, U. S. Army Center for Signals Warfare

     1050      Terrain Reasoning in Support of Air Land Battle Management
                    Thomas Garvey and Charles Ortiz, SRI International
                    Presented by:  Charles Ortiz

     1110      A Multi-Level Knowledge Representation for Terrain Reasoning
                     Iris Cox Hayslip, Grumman Corporate Research Center

     1130      Future Minefield Terrain Analysis Requirements
                    Robert Sickler, US Army Engineering School

     1145      A Knowledge Representation Schema and Surface Feature and
               Terrain Elevation Data:  With Special Application to Meteorology
                    Stephen Kirby, Gary McWilliams, US Army Atmospheric
                    Sciences Lab and Cathy Cavendish, Computing Research
                    Laboratory
                    Presented by:  Cathy Cavendish

     1200      Lunch


     Please note:  SESSION 5  and  SESSION 6 run concurrently


                    SESSION 5:State of the Art Applications
  Chairman:  Dr. Morton A. Hirschberg, US Army Ballistic Research Laboratory

     1330      Architecture of MERCURY Mesoscale Met Data Fusion System
                    C. A. Fields, M. J. Coombs, T. C. Eskridge, R. T. Hartley,
                    H. D. Pfeiffer and S. A. Soderlund, New Mexico State
                    University; S. Kirby and G. McWilliams, U. S. Army
                    Atmospheric Sciences Lab
                    Presented by: Chris Fields

     1350      Applying Artificial Intelligence Techniques to the GIS Data
               Acquisition Problem
                    MAJ Robert F. Richbourg, United States Military Academy

     1410      An Expert System for Minefield Site Prediction
                     Johnathan Doughty and Ann Downs, PAR Government Systems
                    Corporation

     1430      Neural Networks, Complexity, and the Realistic Battlefield
                    Edward M. Measure and Jeff M. Balding, US Army Atmospheric
                    Sciences Laboratory

     1450      Break

     1520      Decision Support Systems Software for the Battlefield
               Environment
                    Kerry Gates and Scott Barrett, PAR Government Systems
                    Corporation
                    Presented by:  Scott Barrett

     1540      Avenue of Approach Generation
                    Dennis R. Powell and Greg Storm, Los Alamos National
                    Laboratory
                    Presented by:  Dennis Powell

     1600      Mobility Corridor Generation: A Different Approach
                    James C. Wright, Los Alamos National Laboratory

     1620      Between Prototype and Deployment:  Lessons Learned Field Testing
                an Expert System
                    Rosemary M. Dyer, Air Force Geophysics Laboratory

     1640      Session Ends








              SESSION 6: Basic Research in Artificial Intelligence
                          Chairman: Dr. Gordon Novak
                      Co-Chairman:  Dr. Joseph H. Pierluissi

     1330      Computer Detection and Tracking of Multiple Objects in Television
               Images
                    Andrew Bernat, Stephen Riter, and Darrel Schroder,
                    University of Texas at El Paso

     1350      Spatial Averaging of Soil Moisture
                    P. J. LaPotin, Dartmouth College and H. L. McKim, US Army
                    Cold Regions Research and Engineering Laboratory


     1410      Utility of an Artificial Intelligence System in Forecasting of
               Boundary-Layer Dynamics
                    M. D. McCorcle, S. E. Taylor and J. D. Fast, Iowa State
                    University

     1430      Optimal Pattern Recognition for Perceptron Neural Networks
                    Perry J. LaPotin, Harlan L. McKim, Dartmouth College and
                    Michael F. Masuch, University of Amsterdam

     1450      Break

     1520      Research in Terrain Knowledge Representation for Image
               Interpretation and Terrain Analysis
                    Olin Mintzer, US Army Engineer Topographic Laboratories

     1540      Ground Combat Vehicle Decision Support
                    Thomas R. Hester, The AI Center, FMC

     1600      Pattern Oriented Expert Systems
                    G. Morales and R. McIntyre, University of Texas

     1620      Improved Expert System Performance Through Knowledge Shaping
                    Joseph A. Vrba and Juan A. Herrera, Perceptics Corporation

     1640      Symposium Ends









Contact Science Technology Corporation for further information at (804) 865-8721.


--
John R. Benton  at the U.S. Army Engineer Topographic Laboratories
Fort Belvoir, VA 22060-5546      ARPA: john@etl.arpa

------------------------------

End of AIList Digest
********************

∂06-Nov-88  1446	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #119 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 6 Nov 88  14:46:30 PST
Date: Sun  6 Nov 1988 17:31-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #119
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 7 Nov 1988      Volume 8 : Issue 119

 Queries:

  References On Mass Terms (1 response)
  Medical ES
  ES in Dermatology
  ES Shells for Equipment Diagnosis
  Use of alternative metaphors/analogies
  Semantic databases, ADA and SETL
  Matching of knowledge representation structures
  How to fit ATMS into Frames
  Tomita's algorithm
  Method of Lauritzen and Spiegelhalter
  Valiant's Learning Model

----------------------------------------------------------------------

Date: 31 Oct 88 14:41:46 GMT
From: bpa!temvax!pacsbb!rkaplan@rutgers.edu  (Randy Kaplan)
Subject: References On Mass Terms


I am doing research on knowledge acquisition from NL text. I am in
need of references on MASS TERMS.  If anyone has any references, they
would be most helpful.

Randy Kaplan
kaplan@vuvaxcom.bitnet

------------------------------

Date: 3 Nov 88 17:56:49 GMT
From: attcan!utgpu!jarvis.csri.toronto.edu!neat.ai.toronto.edu!gh@uune
      t.uu.net  (Graeme Hirst)
Subject: Re: References On Mass Terms

>I am doing research on knowledge acquisition from NL text. I am in
>need of references on MASS TERMS. If anyone has any references they
>would be most helpful.

The work of Harry Bunt, published in his book "Mass Terms and
Model-Theoretic Semantics" (Cambridge UP), would be a start.  There is
also work by Francis Jeffry Pelletier, University of Alberta.  He is a
philosopher interested in NL issues; try looking him up in the
appropriate indexes.


\\\\   Graeme Hirst    University of Toronto    Computer Science Department
////   uunet!utai!gh  /  gh@ai.toronto.edu  /  416-978-8747

------------------------------

Date: Mon, 31 Oct 88 23:20:29 PST
From: Ed Ipser <ipser@vaxa.isi.edu>
Subject: Medical ES

I am interested in knowing what is the state of the science
in medical expert systems.  Specifically, I would like
some pointers to:

-  medical expert systems which are actually in use;

-  medical expert system shells which are available
   -  commercial products
   -  university prototypes

-  expert system shells which are available;
   e.g. KEE

I am more interested in working products or prototypes
than in current research in the field (which abounds
in the literature).

thanks,
ed ipser.

------------------------------

Date: 2 November 1988, 17:31:51
From: M. De Bernardinis         +39 0521-991261      CHIRUR2  at
      IPRUNIV
Subject: ES in Dermatology

A professor here is interested in Expert Systems in Dermatology.
Any help locating references, products or researchers on this topic
would be appreciated.
                                              Thanks
                                          M. De Bernardinis
                                         University of Parma
                                         CHIRUR2 AT IPRUNIV.EARN

------------------------------

Date: Tue, 1 Nov 88 12:54 CST
From: <A0J5791%TAMSTAR.BITNET@MITVMA.MIT.EDU>
Subject: ES Shells for Equipment Diagnosis

Has anyone used EXSYS or PERSONAL CONSULTANT (TI) shells for rule-based
ES for the diagnosis of industrial equipment production defects/failures?
I am trying to develop such a system for a Wave Soldering process for PWB
manufacture.  I am interested in learning about other people's experience
in using the above-mentioned shells (or any others), and the suitability
of these shells for process troubleshooting and diagnosis.
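
A diagnostic rule base of the sort described here boils down, at its simplest,
to condition/advice pairs matched against observed symptoms.  The sketch below
is only an illustration of that shape: the defect names, symptoms and remedies
are invented, and it has nothing to do with the internals of EXSYS or Personal
Consultant.

# A minimal forward-chaining sketch of a diagnostic rule base.
# The defects, symptoms and rules are invented for illustration only
# and are not real wave-soldering engineering data.
rules = [
    ({"solder bridges", "conveyor speed low"}, "reduce flux density (hypothetical)"),
    ({"icicles", "preheat temperature low"},   "raise preheat temperature (hypothetical)"),
    ({"non-wetting", "board contamination"},   "clean boards before soldering (hypothetical)"),
]

def diagnose(observed):
    """Return the advice of every rule whose conditions are all observed."""
    return [advice for conditions, advice in rules if conditions <= observed]

print(diagnose({"solder bridges", "conveyor speed low", "icicles"}))
# -> ['reduce flux density (hypothetical)']
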
    I convey my appreciation in advance for any comments or help provided.
                                                    Arshad Jamil
                                           Dept. of Industrial Engineering
                                         Texas A&M University, C.S., Texas

------------------------------

Date: 1 Nov 88 11:56:07 GMT
From: mcvax!ukc!eagle.ukc.ac.uk!icdoc!ivax!sme@uunet.uu.net  (Steve M
      Easterbrook)
Subject: Use of alternative metaphors/analogies

Hi. I am trying to recall the reference to a paper I read a while ago
which discussed the use of analogies in learning. In particular this
paper showed how different metaphors can be used to illustrate different
features of the same concept. I think the example used was that of the
behaviour of gas molecules, using such metaphors as crowded rooms, etc
to help understand such concepts as pressure. Or it might have been the
one which used an example of explaining how a variable works by
comparing it to a box, amongst other things. However, I may be mixing
these examples up with other papers on analogy.

The reason I am trying to recall this paper is that I am studying
how experts might use different abstractions of a concept when
explaining it to a knowledge engineer, where the explanations at
first appear to be in conflict, but the experts really agree with
each other at a deep level.

Any related references anyone can point me towards would be most
useful. Ta.

Steve

------------------------------

Date: 3 Nov 88 11:32:04 GMT
From: mcvax!unido!infhil!schunk@uunet.uu.net  (Michael Schunk)
Subject: Semantic databases, ADA and SETL

I have the task of making arbitrary data objects of the two
languages Ada and SETL (as the name implies, a set-oriented
language that uses sets, tuples and maps as type constructors)
persistent.
Programs in one of the languages should be able to access
persistent objects created by a program in the other language.
My idea was to use a semantic/object-oriented database and
to create an interface for each of the languages:
persistent objects are stored in the database and can
be accessed through the interface.
My problem is getting a semantic database.  We only have
a normal relational database, and it seems to be a lot
of work to store arbitrary objects with it.
Does anybody know where to get a prototype of a
semantic or object-oriented database system?
Advanced properties such as inheritance, generalization etc.
are nice but not necessary, because we do not plan to
make an object-oriented language persistent at the moment.
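
As a rough illustration of the language-neutral interface idea sketched above
(and nothing more; the names and storage format are invented, not any
product's API), such a store might look like:

# Minimal sketch of a language-neutral persistent object store.
# All names here are hypothetical illustrations, not a real product's API.
import json

class ObjectStore:
    """Stores tagged data objects under string keys in a flat file."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.objects = json.load(f)
        except FileNotFoundError:
            self.objects = {}

    def put(self, key, type_tag, value):
        # A SETL tuple or an Ada record would be flattened to a tagged value.
        self.objects[key] = {"type": type_tag, "value": value}
        with open(self.path, "w") as f:
            json.dump(self.objects, f)

    def get(self, key):
        entry = self.objects[key]
        return entry["type"], entry["value"]

# A SETL-side interface would call put()/get() for sets and tuples;
# an Ada-side interface would do the same for records and arrays.
store = ObjectStore("objects.db")
store.put("inventory", "setl_set", [1, 2, 3])
print(store.get("inventory"))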

Thanks in advance,

Michael Schunk

------------------------------

Date: Thu, 3 Nov 88 16:48 EDT
From: LEWIS@cs.umass.EDU
Subject: Matching of knowledge representation structures

Can anyone point me to some references on matching of subparts of
frame-based knowledge representation structures? Essentially what I'm
interested in is equivalent to finding some/all/the biggest of the
isomorphic subgraphs of two directed graphs, except that edges and vertices
are labeled, and there are restrictions on what labels are allowed to match.
For additional fun, there might be weights on the edges and vertices as
well, and you might not just be interested in large-sized isomorphic
subgraphs, but in maximal scoring ones.

Still more interesting would be whether anything has been done on the case where
you can apply inferences to the structures before matching, so that you actually
have to search a space of alternative representations, as well as comparing
them.

Suggestions? If text content matching had been a bigger application of NLP
in the past, there'd be a bunch of stuff on this, but as it is, I suspect that
vision or case-based reasoning people may have done more on this.
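
To make the flavour of the problem concrete, here is a tiny brute-force sketch
of matching two labeled directed graphs; the graphs, labels and scoring are
invented for illustration, and the exhaustive search obviously does not scale
to real knowledge bases.

# Brute-force search for a best-scoring correspondence between two small
# labeled directed graphs.  Labels, weights and graphs are made up for
# illustration; real frame systems would need far better algorithms.
from itertools import permutations, combinations

# graph = (vertex_labels, edges); edges map (u, v) -> edge_label
g1 = ({"a": "person", "b": "car", "c": "city"},
      {("a", "b"): "owns", ("a", "c"): "lives-in"})
g2 = ({"x": "person", "y": "car", "z": "road"},
      {("x", "y"): "owns", ("y", "z"): "on"})

def labels_match(l1, l2):
    return l1 == l2          # restriction on which labels may match

def score(mapping, ga, gb):
    s = len(mapping)         # one point per matched vertex
    for (u, v), lab in ga[1].items():
        if u in mapping and v in mapping:
            if gb[1].get((mapping[u], mapping[v])) == lab:
                s += 2       # a matched edge counts double
    return s

def best_match(ga, gb):
    v1, v2 = list(ga[0]), list(gb[0])
    best, best_s = {}, 0
    for k in range(1, min(len(v1), len(v2)) + 1):
        for subset in combinations(v1, k):
            for image in permutations(v2, k):
                m = dict(zip(subset, image))
                if all(labels_match(ga[0][u], gb[0][w]) for u, w in m.items()):
                    s = score(m, ga, gb)
                    if s > best_s:
                        best, best_s = m, s
    return best, best_s

print(best_match(g1, g2))    # -> ({'a': 'x', 'b': 'y'}, 4)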

Best,
David D. Lewis                                     ph. 413-545-0728
Computer and Information Science (COINS) Dept.     BITNET: lewis@umass
University of Massachusetts, Amherst               ARPA/MIL/CS/INTERnet:
Amherst, MA  01003                                        lewis@cs.umass.edu
USA
             UUCP: ...!uunet!cs.umass.edu!lewis@uunet.uu.net

------------------------------

Date: 3 Nov 88 16:27:53 GMT
From: haven!h.cs.wvu.wvnet.edu!b.cs.wvu.wvnet.edu!siping@purdue.edu 
      (Siping Liu)
Subject: How to fit ATMS into Frames

In frame knowledge representation systems, knowledge
can be inherited through tree-style world hierarchies,
i.e., each world has only one parent world.

The question is: if the intersection of the confined problem
spaces of two (or more) brother worlds is not empty, why can they
not have a common child world with the intersection as its
problem space?

BTW, the question arose while I was thinking about how to fit an ATMS
(Assumption-based Truth Maintenance System) into a frame system.
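
One way to picture the alternative hinted at in the question (a lattice of
worlds rather than a tree, where the common child assumes everything both
brothers assume and so describes the intersection of their problem spaces) is
the toy sketch below; it is not modelled on any particular frame or ATMS
package.

# A toy world lattice: unlike a tree-style hierarchy, a world here may have
# several parent worlds and inherits the facts (and, implicitly, the
# constraints) of all of them.  Purely illustrative.
class World:
    def __init__(self, name, facts=(), parents=()):
        self.name = name
        self.local_facts = set(facts)
        self.parents = list(parents)

    def all_facts(self):
        facts = set(self.local_facts)
        for p in self.parents:
            facts |= p.all_facts()
        return facts

root     = World("root",  {"gravity"})
brother1 = World("w1",    {"raining"},      [root])
brother2 = World("w2",    {"temp-below-0"}, [root])
# a common child whose assumptions are the union of both brothers'
child    = World("w1&w2", parents=[brother1, brother2])

print(sorted(child.all_facts()))
# ['gravity', 'raining', 'temp-below-0']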

------------------------------

Date: Tue, 01 Nov 88 21:33:57 EST
From: "James H. Coombs" <JAZBO%BROWNVM.BITNET@MITVMA.MIT.EDU>
Subject: Tomita's algorithm

Has anyone worked on this algorithm or published about it since IJCAI 85?
Also, just out of curiosity, does Tomita use an SLR(1), LR(1), or LALR(1) table?

--Jim

Dr. James H. Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet
Acknowledge-To: <JAZBO@BROWNVM>

------------------------------

Date: Wed, 2 Nov 88 16:24:57 EST
From: rpg@CS.BROWN.EDU
Subject: Method of Lauritzen and Spiegelhalter


In this year's JRSS, Lauritzen and Spiegelhalter give an algorithm for
computing probability distributions over bayes/causal/belief networks.
I'd very much like to experiment with this technique, but dread the
thought of implementing it.  ESPECIALLY if someone else has already
done it better.  Do any of you out there have code for this algorithm
that you wouldn't mind sharing?  It would be strictly for academic
research, of course, and I'd be willing to undertake any sensible
agreement to protect your ownership of the code.
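
For tiny test networks the answers can at least be checked without the
Lauritzen and Spiegelhalter machinery, by brute-force enumeration of the joint
distribution.  The sketch below does only that (the network and numbers are
made up) and is emphatically not the algorithm being asked for.

# Brute-force posterior computation on a toy belief network
# (Rain, Sprinkler -> WetGrass).  Only a check for tiny examples,
# NOT the Lauritzen-Spiegelhalter propagation algorithm.
from itertools import product

P_rain      = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True,  True): 0.99, (True,  False): 0.90,
    (False, True): 0.80, (False, False): 0.01,
}

def joint(rain, sprinkler, wet):
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# P(Rain=True | WetGrass=True) by summing the joint over Sprinkler
num   = sum(joint(True, s, True) for s in (True, False))
denom = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print("P(rain | wet grass) =", num / denom)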

Thanks,

Robert Goldman
        BITNET          rpg@BROWNCS.BITNET
        INTERNET        rpg@cs.brown.edu
        UUCP            {decvax,allegra}!brunix!rpg
        U.S. Mail:      Brown C.S. Dept.,
                        Box 1910, Providence, RI 02912
                        (401) 863-7669

------------------------------

Date: 6 Nov 88 06:55:45 GMT
From: techunix.BITNET!dario@ucbvax.berkeley.edu  (Dario Ringach)
Subject: Valiant's Learning Model

Is it fair to assume a constant probability distribution Px on space
X during the learning process?  I mean a *good* teacher would draw
points of X so as to minimize the error between the current hypothesis
and the concept to be learnt, so that the distribution Px could
change after presenting each sample (i.e. Px(n) is now a stochastic
process).  Are these two models equivalent in the sense that they can
learn the same classes of concepts?

Has anyone attempted to approach learning as a discrete-time Markov
process on the hypothesis space H?  For instance, at any time k let
h1=h(k) be the current hypothesis; then for any h2 in H there is a
transition probability P(h(k+1)=h2|h(k)=h1) that depends
on the probability distribution Px and the learning algorithm A.
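
A toy experiment in the fixed-distribution reading of the model is easy to set
up, as in the sketch below, which learns a threshold concept from samples drawn
i.i.d. from a fixed Px.  It illustrates the standard setting only and says
nothing about the varying-Px(n) or Markov-process variants raised above.

# Toy PAC-style experiment: learn a threshold concept c(x) = (x >= t)
# from examples drawn i.i.d. from a FIXED distribution Px (uniform here).
import random

t = 0.37                                   # unknown target threshold
concept = lambda x: x >= t

def draw(n):                               # i.i.d. samples from Px
    xs = [random.random() for _ in range(n)]
    return [(x, concept(x)) for x in xs]

def learn(sample):
    positives = [x for x, label in sample if label]
    return min(positives) if positives else 1.0   # tightest consistent threshold

def error(h, n=100000):                    # estimate Pr_x[ h(x) != c(x) ]
    xs = [random.random() for _ in range(n)]
    return sum((x >= h) != concept(x) for x in xs) / n

h = learn(draw(200))
print("hypothesis threshold:", h, " estimated error:", error(h))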

------------------------------

End of AIList Digest
********************

∂06-Nov-88  1708	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #120 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 6 Nov 88  17:08:02 PST
Date: Sun  6 Nov 1988 17:38-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #120
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 7 Nov 1988      Volume 8 : Issue 120

 Responses:

  Natural Language Universities
  ES for student advising (3 messages)
  Genetic Learning Algorithms
  Semantic Databases
  Poetry composing programs
  Use of alternative metaphors/analogies
  Summary of C-Linkable Expertshells

----------------------------------------------------------------------

Date: 31 Oct 88 23:22:45 GMT
From: sunybcs!rapaport@boulder.colorado.edu  (William J. Rapaport)
Subject: Natural Language Universities

In article <30300005@levison> levison@levison.applicon.UUCP writes:
>
>       I am interested in going back to school to get a Masters (and
>    possibly a PhD) in Computer Science.  Specifically I am interested in
>    the Natural Language branch of AI.

See:  Directory of Graduate Programs in Computational Linguistics, 2nd
ed., compiled by Martha Evens, in Computational Linguistics 12 (1986).

                                        William J. Rapaport
                                        Associate Professor

Dept. of Computer Science||internet:  rapaport@cs.buffalo.edu
SUNY Buffalo             ||bitnet:    rapaport@sunybcs.bitnet
Buffalo, NY 14260        ||uucp: {decvax,watmath,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180     ||fax:  (716) 636-3464

------------------------------

Date: 1 Nov 88 17:44:54 GMT
From: sm.unisys.com!aero!srt@oberon.usc.edu  (Scott R. Turner)
Subject: ES for student advising

In article <58885A5V@PSUVM> A5V@PSUVM.BITNET writes:
>I would appreciate receiving any lead to works done on expert systems
>apply to student advising (curriculum advising)

I believe that Harry Tennant's Ph.D. thesis from University of Illinois
from ages ago (70s?) was about a curriculum advising system.  This was
probably the first major attempt at such a system.  The thesis was
published in book form, but I doubt that it is still available.  Maybe
a good library will have a copy, or you could try ordering the dissertation
from U of I.

                                        -- Scott

------------------------------

Date: Wed, 02 Nov 88 09:18:19
From: GOLUMBIC%ISRAEARN.BITNET@CUNYVM.CUNY.EDU
Subject: ES for student advising

You may be interested in the paper "A knowledge-based expert system for
student advising" by M.C. Golumbic, M. Markovich, S. Tsur and U.J. Schild,
IEEE Trans. on Education 29 (1986) 120-123.

Two other technical reports have been submitted for publication this year.

------------------------------

Date: 4 Nov 88 18:03:55 GMT
From: finin%prc.unisys.com@burdvax.prc.unisys.com  (Tim Finin)
Subject: ES for student advising

In article <40522@aero.ARPA>, srt@aero (Scott R. Turner) writes:
 >In article <58885A5V@PSUVM> A5V@PSUVM.BITNET writes:
 > >I would appreciate receiving any lead to works done on expert systems
 > >apply to student advising (curriculum advising)
 >
 >I believe that Harry Tennant's Ph.D. thesis from University of Illinois
 >from ages ago (70s?) was about a curriculum advising system.  This was
 >probably the first major attempt at such a system.  The thesis was
 >published in book form, but I doubt that it is still available.  Maybe
 >a good library will have a copy, or you could try ordering the dissertation
 >from U of I.

Tennant's 1981 dissertation was about evaluating NLP systems.  The
exact title is "Evaluation of Natural Language Processors".  It was
available as report T-103, Coordinated Science Laboratory, University
of Illinois, Urbana, IL.  Tennant published a book in the same year
entitled "Natural Language Processing" (Petrocelli Books, Inc.,
Princeton; ISBN 0-89433-100-0), which was a survey of the NLP field.
In that book, he discussed the "Automatic Advisor", a system he did
(circa 1976-77) for his MS thesis at the U. of Illinois at Chicago
Circle.
--
  Tim Finin                     finin@prc.unisys.com
  Paoli Research Center         ..!{psuvax1,sdcrdcf,cbmvax}!burdvax!finin
  Unisys                        215-648-7446 (office)  215-386-1749 (home)
  PO Box 517, Paoli PA 19301    215-648-7412 (fax)

------------------------------

Date: 2 Nov 88 23:48:00 GMT
From: mailrus!caen.engin.umich.edu!brian@ohio-state.arpa  (Brian
      Holtz)
Subject: Genetic Learning Algorithms

In a previous article, Michael A. de la Maza writes:
>
> Has anyone compiled a bibliography of gla articles/books?


In "Classifier Systems and Genetic Algorithms" (Cognitive Science and
Machine Intelligence Laboratory Technical Report No. 8) Holland
lists some 80 or so applications of GAs, and offers a complete
bibliography to interested parties.  He can be reached at the EECS Dept.,
Univ. of Michigan, Ann Arbor MI 48109 (he doesn't seem to have an obvious
email address here...).  You can get a copy of the technical report from
Sharon_Doyle@ub.cc.umich.edu.

------------------------------

Date: 3 Nov 88 11:32:04 GMT
From: mcvax!unido!infhil!schunk@uunet.uu.net  (Michael Schunk)
Subject: Semantic Databases


You may find some literature in the ACM Computing Surveys:
        2/87 Atkinson / Buneman
        3/87 Hull / King
        3/88 Peckham / Maryanski

Springer's Lecture Notes in Computer Science, Volume 334,
contains papers from the second international
workshop on object-oriented database systems.

Michael Schunk

------------------------------

Date: 3 Nov 88 17:31:10 GMT
From: kww@amethyst.ma.arizona.edu (K Watkins)
Subject: Re: Poetry composing programs


I believe that Marie Borroff worked on a project to "teach a computer to write
poetry" something over a decade ago.  My vague recollection is that she
concluded that such an effort can produce verse but not poetry.  Sorry I don't
remember more.  If you follow up on this, I would be interested in hearing
about it!

I don't even know whether she is on the net, let alone her electronic address.
But you can reach her through the English department of Yale University in New
Haven, Connecticut.

Karellynne ("K") W. Watkins  -  watkins@rvax.ccit.arizona.edu
standard disclaimer

------------------------------

Date: 3 Nov 88 21:16:01 GMT
From: august@locus.ucla.edu  (Stephanie August)
Subject: Use of alternative metaphors/analogies

In article <488@gould.doc.ic.ac.uk> sme@doc.ic.ac.uk (Steve M Easterbrook)
writes:
    >Hi. I am trying to recall the reference to a paper I read a while ago
    >which discussed the use of analogies in learning. In particular this
    >paper showed how different metaphors can be used to illustrate different
    >features of the same concept. I think the example used was that of the
    >behaviour of gas molecules, using such metaphors as crowded rooms, etc
    >to help understand such concepts as pressure.

The article you want is

    Gentner, Dedre, and Gentner, Donald R.  (1983)
        Flowing Waters or Teeming Crowds: Mental Models of
        Electricity.  In Dedre Gentner and Albert L. Stevens (Eds.),
        _Mental Models_.  Hillsdale, N.J.: Lawrence Erlbaum Associates. p.99

    >Or it might have been the
    >one which used an example of explaining how a variable works by
    >comparing it to a box, amongst other things.

You might also be thinking of the programming examples in papers
on the GRAPES simulation of John Anderson's ACT theory of learning.  See:

    Anderson, John R. (1986)
        Knowledge compilation: the general learning mechanism.
        In R.S. Michalski, J.G. Carbonell, and T.M. Mitchell (Eds.),
        _Machine Learning: An Artificial Intelligence Approach_,
        Kaufmann, Los Altos CA.
    Anderson, John R., Farrell, Robert, and Sauers, Ron.  (1984)
        Learning to Program in LISP.  _Cognitive Science_, 8, 87-129.

                                        -- Stephanie E. August
                                        Computer Science Dept, UCLA
                                        <august@cs.ucla.edu>

------------------------------

Date: 4 Nov 88 09:54:08 GMT
From: mcvax!hp4nl!tnoibbc!sp@uunet.uu.net  (Silvain Piree)
Subject: Summary of C-Linkable Expertshells


Three weeks ago I posted a question about C-linkable shells; well, here's the
summary.  I would like to thank everybody who helped me.  (I am not responsible
for any errors in the summary.)

If anyone has information material for the mentioned shells, could you please
send it to me either by email or directly to the address shown at the
bottom of this letter, or use telefax (3115-620304).


Summary of C-linkable expert shells:

Name     : KES
Firm     : Software Architects and Engineering, Inc.
Address  : Sussex Suite, City Gates, 2-4 Southgate
           Chichester, West Sussex, PO19 2DJ
           United Kingdom
Computer : MS-DOS, DEC VAXes, Apollo, Sun
Linking  : Language hooks for C
Price    : Approximately 2300 pounds for PCs,
                         4000 pounds for workstations and
                         14200 pounds for VAXes

Name     : CxPERT
Firm     : Software Plus
Address  : 1652 Albermarie Dr
           Crofton, MD 21114
           (301) 261-0264
Computer : MS-DOS
Linking  : Produces C-source
Price    : $495

Name     : TRC ( "Translate rules to C" )
Firm     : Public domain ( uunet.uu.net, 1 or 2 years ago in comp.sources.unix )
             ( If somebody has the source could you please email it to me !!! )
Address  : --
Computer : Computers with C-compiler
Linking  : Produces C-source
Price    : --

Name     : Rulemaster
Firm     : Radian Corp.
Address  : PO Box 201088
           Austin, Texas 78720-1088

           512-454-4797
Computer : Unix, VMS, MS-DOS
Linking  : Produces C-source
Price    : ??

Name     : Stimulus
Firm     : National Engineering Laboratory
Address  : ??
Computer : ??
Linking  : Produces C-source
Price    : ??

Name     : ERS ( Embedded Rule-Based System )
Firm     : PAR Government Systems Corp.
Address  : 220 Seneca Turnpike
           New Hartford, NY 13413-1191
           (315)-738-0600
Computer : Vaxes, Suns, MS-DOS, UNIX or any computer with C-compiler
Linking  : They wrote to me that ERS can easily be integrated with C, but they
           didn't say how.  (ERS is written in C)
Price    : ??

Name     : CLIPS
Firm     : COSMIC
Address  : ??
           404-542-3265
Computer : MS-DOS, Amiga ( and I guess every computer with a C-compiler )
Linking  : Produces C-source
Price    : $250

Name     : NEXPERT OBJECT
Firm     : Neuron Data Inc.
Address  : 444 High St.
           Palo Alto
           California
           94301
Computer : IBM AT/PS2, Mac, Vax, HP, Sun, Apollo
Linking  : Language hooks for C
Price    : MS-DOS - 5800 ECU
           UNIX   - 9800 ECU

Name     : G2
Firm     : Gensym Corporation
Address  : ??
           Cambridge, MA
           (617) 547-9606
Computer : ??
Linking  : Link with C-code
Price    : ??

Name     : M.1, S.1 and COPERNICUS
Firm     : Teknowledge, Inc.
Address  : ??
           Los Angeles, CA
Computer : ??
Linking  : fully integrable with C ( All written in C )
Price    : ??

Name     : OPS83
Firm     : Production Systems Technologies
Address  : 642 Gettysburg Street
           Pittsburgh, Pennsylvania
           (412) 362-3117
Computer : ??
Linking  : Linked to C
Price    : ??

Name     : Sierra OPS5
Firm     : Inference Engines Technology
Address  : ??
Computer : MS-DOS
Linking  : Language hooks for C
Price    : ??
--
Silvain Piree: TNO - IBBC                   USENET : sp@tnosel
             : lange kleiweg 5              UUCP   : ..!mcvax!tnosel!sp
             : 2288 GH  Rijswijk
             : the Netherlands              VOICE  : +31 15 606405

------------------------------

End of AIList Digest
********************

∂06-Nov-88  1918	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #121 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 6 Nov 88  19:17:59 PST
Date: Sun  6 Nov 1988 17:45-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #121
To: AIList@AI.AI.MIT.EDU


AIList Digest             Monday, 7 Nov 1988      Volume 8 : Issue 121

 Philosophy:

  AI as CS and the scientific epistemology of the common sense world
  Lightbulbs and Related Thoughts
  Re: Bringing AI back home
  Computer science as a subset of artificial intelligence
  Re: Limits of AI -or- The Kaleidoscope of Ideas

----------------------------------------------------------------------

Date: 31 Oct 88  2154 PST
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: AI as CS and the scientific epistemology of the common sense
         world

[In reply to message sent Mon 31 Oct 1988 20:39-EST.]

Intelligence can be studied

(1) through the physiology of the brain,

(2) through psychology,

(3) through studying the tasks presented in the achievement of
goals in the common sense world.

No one of the approaches can be excluded by a priori arguments,
and I believe that all three will eventually succeed, but one
will succeed more quickly than the other two and will help mop up
the other two.  I have left out sociology, because I think its
contribution will be peripheral.

AI is the third approach.  It proceeds mainly in computer science
departments, and many of its methods are akin to other computer
science topics.  It involves experimenting with computer programs
and sometimes hardware and rarely includes either psychological
or physiological experiments with humans or animals.  It isn't
further from other computer science topics than they are from
each other and there are more and more hybrids of AI with
other CS topics all the time.

Perhaps Simon doesn't like the term AI, because his and Newell's
work involves a hybrid with psychology and has involved psychological
experiments as well as experimental computer programming.  Surely
some people should pursue that mixture, which has sometimes
been fruitful, but most AI researchers stick to experimental
programming and also AI theory.

In my opinion the core of AI is the study of the common sense world
and how a system can find out how to achieve its goals.  Achieving
goals in the common sense world involves a different kind of
information situation than science has had to deal with previously.
This fact causes most scientists to make mistakes in thinking about
it.  Some pick an aspect of the world that permits a conventional
mathematical treatment and retreat into it.  The result is that
their results often have only a metaphorical relation to intelligence.
Others demand differential equations and spend their time rejecting
approaches that don't have them.

Why does the common sense world demand a different approach?  Here are
some reasons.

(1) Only partial information is available.  It is partial not merely
quantitatively but also qualitatively.  We don't know all the
relevant phenomena.  Nevertheless, humans can often achieve goals
using this information, and there is no reason humans can't understand
the processes required to do it well enough to program them in computers.

(2) The concepts used in common sense reasoning have a qualitatively
approximate character.  This is treated in my paper ``Ascribing
Mental Qualities to Machines.''

(3) The theories that can be obtained will not be fully predictive
of behavior.  They will predict only when certain conditions are
met.  Curiously, while many scientists demand fully predictive theories,
when they build digital hardware, they accept engineering specifications
that aren't fully predictive.  For example, consider a flip-flop with
a J input, a K input and a clock input.  The manufacturer specifies
what will happen if the clock is turned on for long enough and then
turned off provided exactly one of the J and K inputs remains high
during this period and the other remains low.  The specifications
do not say what will happen if both are high or both are low or
if they change while the clock is turned on.  The manufacturer
doesn't guarantee that all the flip-flops he sells will behave
in the same way under these conditions or that he won't change
without notice how they behave.  All he guarantees is what
will happen when the flip-flop is used in accordance with the
``design rules''.  Computer scientists are also quite properly
uninterested in non-standard usage.  This contrasts with linear
circuit theory which in principle tells how a linear circuit will
respond to any input function of time.  Newtonian and
non-relativistic quantum mechanics tell how particles respond to
arbitrary forces.  Quantum field theory seems to be more picky.
Many programs have specified behavior only for inputs meeting
certain conditions, and some programming languages refrain
from specifying what will happen if certain conditions aren't
met.  The implementor makes the compiler do whatever is convenient,
or may not even figure out what will happen.

What we can learn about the common sense world is like what is
specified about the flip-flop, only even more limited.
Therefore, some people regard the common sense world as unfair
and refuse to do science about it.
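
The flip-flop example can be made concrete in a few lines.  The sketch below is
an illustration only (taking the usual set/reset reading of the specified
cases, not any manufacturer's data sheet): behaviour is defined exactly within
the design rules, and no promise at all is made outside them.

# A flip-flop specified only within its design rules, as described above.
# Inside the rules (exactly one of J, K held high across the clock pulse)
# it behaves as a set/reset element; outside them it makes no promise.
class JKFlipFlop:
    def __init__(self):
        self.q = 0

    def clock_pulse(self, j, k):
        if j == 1 and k == 0:
            self.q = 1                      # specified: set
        elif j == 0 and k == 1:
            self.q = 0                      # specified: reset
        else:
            raise ValueError("behaviour outside the design rules is unspecified")
        return self.q

ff = JKFlipFlop()
print(ff.clock_pulse(1, 0))   # 1
print(ff.clock_pulse(0, 1))   # 0
# ff.clock_pulse(1, 1) -> no guarantee; this model simply refuses to answer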

------------------------------

Date: 2 Nov 88 13:55:48 GMT
From: pitstop!sundc!rlgvax!tony@sun.com  (Tony Stuart)
Subject: Lightbulbs and Related Thoughts

On the way into work this morning I was stopped at a light
near an office building. They were washing the windows using
a large crane.  This led me to think about the time that a
light was out in CCI's sign and they used a crane to replace
it. I began to wonder whether they replace all the lights while
they have the crane up or just the one that is out. Maybe it
depends on how close the lights are to the end of their rated
life. This got me thinking about the lights in the vanity at
home. Two of the four have blown in the last couple of weeks.
I remarked to Anne how it was interesting that lightbulbs do
start to blow out at around the same time.  This led me to
suggest that we remember to replace the blown-out lightbulbs.

The point is that an external stimulus, seeing the men wash
the windows of the building, led to a problem to solve: replacing
the lights in the vanity. I have no doubt that if I had replaced
those lights already then the train of thought would have
continued until I encountered a problem that needed attention.
The mind seems optimized for problem solving and perhaps one
reason for miscellaneous ramblings is that they uncover problems.

On a similar track, I have often thought that once we find a
solution to a problem it is much more difficult to search for
another solution. Over evolutionary history it is likely that
life was sufficiently primitive that a single good solution was
sufficient. The brain might be optimized such that the first
good solution satisfies the problem-seeking mode and to go
beyond that solution requires conscious effort.  This is an
argument for not resorting to a textbook as the first line of
problem solving. The textbook is sure to give a good solution
but perhaps not the best. With the textbook solution in mind
it may be much more difficult to come up with an original
solution that is better than the textbook one. For this reason
it is best to try to solve the problem internally before going
to some external device.

There may also be some insight into how to make computers
think.  Let's say I designed my computer to follow trains of
thought and at each thought it looked for unresolved questions.
If there were no unresolved questions it would continue onto
the next linked thought. Otherwise it would look for the
solution to the problem. If the search did not turn up the
information in memory it would result in the formation of
a question. Anne suggests that these trains of thought are
often triggered by external stimuli that a computer would
not have.  She says that we live in a sea of stimuli.

I've often wondered about the differences between short term
and long term memory. Here's a computer model for it. Assume
that short term memory is information stored as sentences and
long term memory is information stored in data structures with
organized field name/field value/relationship links. Information
is initially stored in the sentence based short term memory.
In a background process, or when our minds are otherwise idle,
a task searches through the short term memory for data that
might resolve questions (holes) in the long term memory. (Which
is searched I don't really know.) This information in the short
term memory is then appropriately cataloged in the long term
memory. Another task is responsible for purging sentences
from the short term memory.  It could use a first-in, first-out
or, more likely, a least-frequently-used algorithm.

A side effect of this model is that information in short
term memory cannot be used unless there is a hole in the long
term memory. This leads to problems in bootstrapping the
process, but assuming there is a solution to that problem, it
also models behavior that is present in humans. This is the
case of feeling that one hears a word or phrase a lot after
he knows what it means. Another part of the side effect is
that one cannot use information that he has unless it fits.
This means that it must be discarded until the long term
memory is sufficiently developed to accept it.
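
The model in the last two paragraphs is simple enough to render directly; in
the toy version below (everything invented for illustration, with no claim
about actual minds), short term memory is a list of sentences, long term
memory is a set of topics with holes, and a background pass moves a sentence
across only when there is a hole for it.

# Toy rendering of the short-term / long-term memory model described above.
short_term = ["two bulbs in the vanity are blown",
              "the crane washed the windows",
              "the sign light was replaced"]

# long-term memory: topics with holes (None) waiting for information
long_term = {"vanity": None, "weather": None}

def consolidate():
    """Background pass: move sentences that answer a long-term hole
    out of short-term memory and into the long-term structure."""
    for topic in long_term:
        if long_term[topic] is None:
            for sentence in list(short_term):
                if topic in sentence:
                    long_term[topic] = sentence
                    short_term.remove(sentence)
                    break

consolidate()
print(long_term)    # {'vanity': 'two bulbs in the vanity are blown', 'weather': None}
print(short_term)   # sentences with no hole to fill stay until purged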

--

        Anthony F. Stuart, {uunet|sundc}!rlgvax!tony
        CCI, 11490 Commerce Park Drive, Reston, VA 22091

------------------------------

Date: 2 Nov 88 15:54:37 GMT
From: umix!umich!itivax!dhw@uunet.UU.NET (David H. West)
Subject: Re: Bringing AI back home


In article <1776@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>
>In a previous article, Ray Allis writes:
>>If AI is to make progress toward machines with common sense, we
>>should first rectify the preposterous inverted notion that AI is
>>somehow a subset of computer science,
>Nothing preposterous at all about this.  AI is about applications of
>computers, and you can't sensibly apply computers without using computer
>science.

All that this shows is that AI has a non-null intersection with CS,
not that it is a subset of it.

>  Many people would be happy if AI boy scouts came down
>from their technological utopian fantasies and addressed the sensible
>problem of optimising human-computer task allocation in a humble,
>disciplined and well-focussed manner.

Many people would have been happier (in the short term) if James
Watt had stopped his useless playing with kettles and gone out and got
a real job.

>There are tasks in the world.  Computers can assist some of these
>tasks, but not others.  Understanding why this is the case lies at the
>heart of proper human-machine system design.  The problem with hard AI is
>that it doesn't want to know that a real division between automatable
>and unautomatable tasks does exist in practice.

You seem to believe that this boundary is fixed.  Well, it will be
unless people work on moving it.

>  Why are so
>many AI workers so damned ignorant of the problems with
>operationalising definitions of intelligence, as borne out by nearly a
>century of psychometrics here?

There was a time when philosophers concerned themselves with
questions such as whether magnets move towards each other out
of love or against their will.  Why were those who wanted instead to
measure forces so damned ignorant of the problems with the
philosophical approach?   Maybe they weren't, and that's *why* they
preferred to measure forces.

>Common sense is a labelling activity
>for beliefs which are assumed to be common within a (sub)culture.

Partially.

>Such social constructs cannot have a machine embodiment, nor can any

Cannot? Why not?  "Do not at present" I would accept.

>The minute words like common sense and intelligence are used, the
>relevant discipline becomes the sociology of knowledge.

*A* relevant discipline.  AI is at present more concerned with the
structure and machine implementation of common sense than with its
detailed content.

-David West            dhw%iti@umix.cc.umich.edu
                       {uunet,rutgers,ames}!umix!itivax!dhw
CDSL, Industrial Technology Institute, PO Box 1485,
Ann Arbor, MI 48106

------------------------------

Date: Wed, 2 Nov 88 14:55:01 pst
From: Ray Allis <ray@BOEING.COM>
Subject: Computer science as a subset of artificial intelligence


In <1776@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:

>In a previous article, Ray Allis writes:
>>If AI is to make progress toward machines with common sense, we
>>should first rectify the preposterous inverted notion that AI is
>>somehow a subset of computer science,
>Nothing preposterous at all about this.  AI is about applications of
>computers,

I was disagreeing with that too-limited definition of AI.  *Computer
science* is about applications of computers, *AI* is about the creation
of intelligent artifacts.  I don't believe digital computers, or rather
physical symbol systems, can be intelligent.  It's more than difficult,
it's not possible.

>> or call the research something other than "artificial intelligence".
>Is this the real thrust of your argument?  Most people would agree,
>even Herb Simon doesn't like the term and says so in "Sciences of the
>Artificial".

No, "I said what I meant, and I meant what I said".  The insistence that
"artificial intelligence research" is subsumed under computer science
is preposterous.  That position precludes the development of intelligent
machines.

>> Computer science has nothing  whatever to say about much of what we call
>> intelligent behavior, particularly common sense.
>Only sociology has anything to do with either of these, so to
>place AI within CS is to lose nothing.

Only the goal.

>Intelligence is a value judgement, not a definable entity.

"Intelligence" is not a physical thing you can touch or put in a bottle,
like water or carbon dioxide.  "Intelligent" is an adjective, usually
modifying the noun "behavior", and it does describe something measurable;
a quality of behavior an organism displays in coping with its environment.
I think intelligent behavior is defined more objectively than, say, the
quality of an actor's performance in A Midsummer Night's Dream, which IS
a value judgement.

>                                Common sense is a labelling activity
>for beliefs which are assumed to be common within a (sub)culture.
>
>Such social constructs cannot have a machine embodiment, nor can any
>academic discipline except sociology sensibly address such woolly
>epiphenomena.  I do include cognitive psychology within this exclusion,
>as no sensible cognitive psychologist would use terms like common sense
>or intelligence.  The mental phenomena which are explored
>computationally by cognitive psychologists tend to be more basic and
>better defined aspects of individual behaviour.  The minute words like
>common sense and intelligence are used, the relevant discipline becomes
>the sociology of knowledge.

Common sense does not depend on social consensus.  I mean by common sense
those behaviors which nearly everyone acquires in the course of existence,
such as reluctance to put your hand into the fire.  I contend that symbol
systems in general, digital computers in particular, and therefore computer
science, are inadequate for artifacts which "have common sense".

Formal logic is only a part of human intellectual being, computer science
is about the mechanization of that part, AI is (or should be) about the
entire intellect.  Hence AI is something more ambitious than CS, and not
a subcategory.  That's why I used the word "inverted".

>Gilbert Cockton, Department of Computing Science,  The University, Glasgow
>       gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

Ray Allis
Boeing Computer Services, Seattle, Wa.
ray@boeing.com  bcsaic!ray

------------------------------

Date: 3 Nov 88 15:12:29 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Limits of AI -or- The Kaleidoscope of Ideas

In article <2211@datapg.MN.ORG> sewilco@datapg.MN.ORG writes:
> Life, and thus evolution, is merely random exceptions to entropy.

There is an emerging theory on the evolution of complex stable systems.
(See for example Ilya Prigogine's book, _Order out of Chaos_.)

The mathematical theory of fixed points, and the related system-theoretic
idea of eigenfunctions and eigenvalues suggest that stable, recurring
modes or patterns may emerge naturally from any system when "the outputs
are shorted to the inputs".

Consider for instance, the map whose name is "The Laws of Physics and
Chemistry".  Plug in some atoms and molecules into this map (or
processor) and you get out atoms and molecules.  By the Fixed Point
Theorem, one would expect there to  exist a family of atoms and
molecules which remain untransformed by this map.  And this family
could have arbitrarily complex members.  DNA comes to mind.  (Crystals
are another example of self-replicating, self-healing structures.)

So the "random exceptions to entropy" may not be entirely random.
They may be the eigenvalues and eigenfunctions of the system.  The
Mandelbrot Set has shown us how exquisitely beautiful and complex
structures can arise out of simple recursion and feedback loops.
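
The recursion-and-feedback point is easy to demonstrate: iterate z -> z*z + c
and keep the parameters c whose orbits stay bounded, which is the standard
approximate membership test for the Mandelbrot set.  The fragment below does
just that and nothing else.

# The Mandelbrot set as "outputs shorted to the inputs": iterate z -> z*z + c
# and keep the parameters c whose orbits stay bounded.
def in_mandelbrot(c, iterations=200):
    z = 0j
    for _ in range(iterations):
        z = z * z + c
        if abs(z) > 2:          # escaped: c is not in the set
            return False
    return True

# crude character picture of the set on a small grid
for im in range(10, -11, -2):
    row = ""
    for re in range(-20, 6):
        row += "*" if in_mandelbrot(complex(re / 10.0, im / 10.0)) else " "
    print(row)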

--Barry Kort

------------------------------

End of AIList Digest
********************

∂08-Nov-88  1434	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #122 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Nov 88  14:34:09 PST
Date: Tue  8 Nov 1988 17:10-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #122
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 9 Nov 1988     Volume 8 : Issue 122

  AI genealogy (Bibliographic Refs and Cultural Info)
  Spang Robinson Report Summary

 Chess:
  AI program solves Connect Four
  Desktop chess machines now rated above 2200
  Deep Thought makes it

----------------------------------------------------------------------

Date: Thu, 20 Oct 88 14:15:34 PDT
From: rik%cs@ucsd.edu (Rik Belew)
Subject: AI genealogy

                             AI GENEALOGY
                     Building an AI family tree

Over the past several years we have been developing a collection of
bibliographic references to the literature of artificial intelligence
and cognitive science. We are also in the process of developing a
system, called BIBLIO, to make this information available to
researchers over Internet. My initial work was aimed at developing
INDEXING methods which would allow access to these citations by
appropriate keywords. More recently, we have explored the use of
inter-document CITATIONS, made by the author of one document to
previous articles, and TAXONOMIC CLASSIFICATIONS, developed by editors
and librarians to describe the entire literature.

We would now like to augment this database of bibliographic information
with "cultural" information, specifically a family tree of the
intellectual lineage of the authors. I propose to operationalize this
tree in terms of each author's THESIS ADVISOR and COMMITTEE MEMBERS,
and also the RESEARCH INSTITUTIONS where they work. It is our thesis
that this factual information, in conjunction with bibliographic
information about the AI literature, can be used to characterize
important intellectual developments within AI, and thereby provide
evidence about general processes of scientific discovery. A nice
practical consequence is that it will help to make information
retrievals from bibliographic databases, using BIBLIO, smarter.

I am sending a query out to several EMail lists to ask for your help
in this enterprise. If you have a Ph.D. and consider yourself a
researcher in AI, I would like you to send me information about where
you got your degree, who your advisor and committee members were, and
where you have worked since then.  Also, please forward this query to
any of your colleagues that may not see this mailing list. The
specific questions are contained in a brief questionnaire below, and
this is followed by an example. I would appreciate it if you could
"snip" this (soft copy) questionnaire, fill it in and send back to me
intact because this will make my parsing job easier.

Also, if you know some of these facts about your advisor (committee
members), and their advisors, etc., I would appreciate it if you could
send me that information as well. One of my goals is to trace the
genealogy of today's researchers back as far as possible, to (for
example) participants in the Dartmouth conference of 1956, as well as
connections to other disciplines. If you do have any of this
information, simply duplicate the questionnaire and fill in a separate
copy for each person.

Let me anticipate some concerns you may have. First, I apologize for
the Ph.D. bias. It is most certainly not meant to suggest that only
Ph.D.'s are involved in AI research. Rather, it is a simplification
designed to make the notion of "lineage" more precise. Also, be
advised that this is very much a not-for-profit operation. The results
of this query will be combined (into an "AI family tree") and made
publicly available as part of our BIBLIO system.

If you have any questions, or suggestions, please let me know. Thank
you for your help.

Richard K. Belew
        Asst. Professor
        Computer Science & Engr. Dept. (C-014)
        Univ. Calif. - San Diego
        La Jolla, CA 92093
        619/534-2601
        619/534-5948  (messages)
        rik%cs@ucsd.edu

  --------------------------------------------------------------
                          AI Genealogy questionnaire
                        Please complete and return to:
                                rik%cs@ucsd.edu


NAME:

Ph.D. year:

Ph.D. thesis title:

Department:

University:
Univ. location:

Thesis advisor:
Advisor's department:

Committee member:
Member's department:

Committee member:
Member's department:

Committee member:
Member's department:

Committee member:
Member's department:

Committee member:
Member's department:

Committee member:
Member's department:

Research institution:
Inst. location:
Dates:

Research institution:
Inst. location:
Dates:

Research institution:
Inst. location:
Dates:


 --------------------------------------------------------------
                          AI Genealogy questionnaire
                                  EXAMPLE

NAME:                   Richard K. Belew

Ph.D. year:             1986

Ph.D. thesis title:     Adaptive information retrieval: machine learning
                        in associative networks

Department:             Computer & Communication Sciences (CCS)

University:             University of Michigan

Univ. location:         Ann Arbor, Michigan

Thesis advisor:         Stephen Kaplan
Advisor's department:   Psychology

Thesis advisor:         Paul D. Scott
Advisor's department:   CCS

Committee member:       Michael D. Gordon
Member's department:    Mgmt. Info. Systems - Business School

Committee member:       John H. Holland
Member's department:    CCS

Committee member:       Robert K. Lindsay
Member's department:    Psychology

Research institution:   Univ. California - San Diego
                        Computer Science & Engr. Dept.
Inst. location          La Jolla, CA
Dates:                  9/1/86 - present

------------------------------

Date: Mon, 7 Nov 88 14:28:18 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: Spang Robinson Report Summary

    Summary of Spang Robinson Report on Artificial Intelligence,
            September 1988, Volume 4, No. 9

The lead article is a report on AAAI-88.

AT AAAI-88
  Apollo, Apple, Data General, DEC, Hewlett-Packard, IBM, Sun Microsystems,
  Texas Instruments  (only one third of those marketing AI technologies
  or applications were there)

TI now has the Explorer II Plus and Explorer III Plus LX, the
Explorer MP (a 16-slot processor), and a new release of Personal Consultant +.

Symbolics introduced MacIvory, which combines Genera software and a Mac II.
The system costs $21,900 with one megabyte of memory and a 300-megabyte
disk.

Integrated Inference is selling a Peripheral Processor Unit that
has an SM45000 Microprogrammable Inference Processor, which
costs $9,950 to $39,950.

DEC will jointly market KEE from IntelliCorp and
Knowledge Craft from Carnegie Group.  They also will be selling VAX
Decision Expert (based on GE's GEN-X) and NEXPERT OBJECT.

IntelliCorp is promising KEE on the IBM mainframe by December.
It is now being beta-tested and will sell for $98,000.

Sun has six per cent of its business in AI.  They have 130 AI products
in their Catalyst third-party program.  ENVOS will be selling the following for
Sun workstations:
  Xerox's AI Development environment
  LOOPS
  ROOMS
  Flexis, manufacturing cell design
  Factories to model a complete factory line.
ENVOS is a Xerox spinoff which is partially owned by Xerox.

Data General will be jointly marketing Gold Hill Computers' GoldWorks on its
MV family.

Information Builders, known for FOCUS, has acquired Level 5 Research
and developed an interface between their respective products.

Neuron Data will work with Teknowledge to provide consulting and training.

Tree Age Software has produced a system that helps in the construction of
decision tables.  It allows equations to be attached to the nodes and used
in calculating the probable financial outcome of a course of decisions.

________________________________________
Common Lisp

The Common LISP committee accepted a working group report on the Common Lisp
Object System.  The information is in "Object-Oriented Programming
in Common Lisp."

Gold Hill is now selling a student version of Common Lisp for $49.95,
which will be upgraded to include CLOS.
They are waiting for an upgrade of Portable Common Loops to
include CLOS compatibility.
________________________________________
Neural Networks

Cognitive Software introduced a neural networking product for the Macintosh.
It uses the Levco transputer boards.

Brainmaker is a $99.95 product for neural networks.
________________________________________
Shorts:

TI will be adding to SPARC proprietary features of LISP machines
such as tags, memory management and garbage collection.

Professor Larry Lidsky has developed a commercial product that schedules
maintenance and issues the requisite daily work orders in nuclear power
plants.

DEC will be cutting AI activities due to general business conditions.

Symbolics had June 30, 1988 year-end revenues of $81,339,000
with a net loss of $46,036,000.  "Symbolics is looking for further
funding and may face the alternative of liquidation."


Palladian Software of Cambridge released version 2.0 of its Operations Planner,
which assesses the impact of changes on a PC-based system.
It now has assembly modeling capability and label tailoring.

IBM's network management system supports an IBM Knowledge Tool interface
to allow network management to be put into the system.

Gensym introduced the G2 network, which supports cooperating expert systems
for distributed applications operating in real time.

------------------------------

Date: 16 Oct 88 14:06:58 GMT
From: mcvax!hp4nl!botter!star.cs.vu.nl!victor%cs.vu.nl@uunet.uu.net
Subject: AI program solves Connect Four

An AI program has solved Connect Four for the standard 7 x 6 board.
The conclusion: White wins, was confirmed by the brute force check made by
James D. Allen, which has been published in rec.games.programmer at October 1st.

The program, called VICTOR, consists of a pure knowledge-based evaluation
function which can give three values to a position:
 1 won by White,
 0 still unclear,
-1 at least a draw for Black.

This evaluation function is based on 9 strategic rules concerning the game,
all nine of which have been (mathematically) proven to be correct.
This means that a claim made about the game-theoretical value of a position
by VICTOR is correct, although no search tree is built.
If the result 1 or -1 is given, the program outputs a set of rules applied,
indicating the way the result can be achieved.
This way one evaluation can be used to play the game to the end without any
extra calculation (unless the position was still unclear, of course).
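
For readers who want to experiment, the board representation and the basic
four-in-a-row test that any Connect-Four evaluator is built on look roughly
like the sketch below.  VICTOR's nine strategic rules are not reproduced here
(they are in the thesis); this is only the substrate such rules operate on.

# A building block any Connect-Four evaluator needs: a 7 x 6 board and a
# four-in-a-row test.
COLS, ROWS = 7, 6

def new_board():
    return [[None] * ROWS for _ in range(COLS)]   # board[col][row], row 0 = bottom

def drop(board, col, player):
    row = board[col].index(None)                  # first empty square in the column
    board[col][row] = player
    return row

def has_four(board, player):
    for c in range(COLS):
        for r in range(ROWS):
            for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)):
                if all(0 <= c + i * dc < COLS and 0 <= r + i * dr < ROWS
                       and board[c + i * dc][r + i * dr] == player
                       for i in range(4)):
                    return True
    return False

b = new_board()
for col, player in ((3, "W"), (3, "B"), (4, "W"), (4, "B"),
                    (5, "W"), (5, "B"), (6, "W")):
    drop(b, col, player)
print(has_four(b, "W"))    # True: d1, e1, f1, g1 is a horizontal four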

Using the evaluation function alone, it has been shown that Black can at least
draw the game on any 6 x (2n) board. VICTOR found an easy strategy for
these board sizes, which can be taught to anyone within 5 minutes.  Nevertheless,
this strategy had not been encountered before by any humans, as far as I know.

For 7 x (2n) boards a similar strategy was found, in case White does not
start the game in the middle column. In these cases Black can therefore at
least draw the game.

Furthermore, VICTOR needed only to check a few dozen positions to show
that Black can at least draw the game on the 7 x 4 board.

Evaluation of a position on a 7 x 4 or 7 x 6 board costs between 0.01 and 10
CPU seconds on a Sun4.

For the 7 x 6 board too many positions were unclear. For that reason a
combination of Conspiracy-Number Search and Depth First Search was used
to determine the game-theoretical value.  This took several hundred hours
on a Sun4.

The main reason for the large amount of search needed, was the fact that in
many variations, the win for White was very difficult to achieve.
This caused many positions to be unclear for the evaluation function.

Using the results of the search, a database will be constructed
of roughly 500,000 positions with their game-theoretical value.
Using this database, VICTOR can play against humans or other programs,
winning all the time (playing White).  The average move takes less
than a second of calculation (search in the database or evaluation
of the position by the evaluation function).

Some variations are given below (columns and rows are numbered as is customary
in chess):

1. d1, ..  The only winning move.

After 1. .., a1 wins 2. e1. Other second moves for White have not been
checked yet.
After 1. .., b1 wins 2. f1. Other second moves for White have not been
checked yet.
After 1. .., c1 wins 2. f1. Only 2 g1 has not been checked yet. All other
second moves for White give Black at least a draw.
After 1. .., d2 wins 2. d3. All other second moves for White give Black
at least a draw.

A nice example of the difficulty White has in winning:

1. d1, d2
2. d3, d4
3. d5, b1
4. b2!

The first three moves for White are forced, while alternatives at White's
fourth move have not been checked yet.

A variation which took much time to check and eventually turned out
to be at least a draw for Black was:

1. d1, c1
2. c2?, .. f1 wins, while c2 does not.
2. .., c3  The only move which gives Black the draw.
3. c4, .. White's best chance.
3. .., g1!! Only 3. .., d2 has not been checked completely, while all
            other third moves for Black have been shown to lose.

The project is described in my 'doctoraalscriptie' (Master's thesis),
which was supervised by Prof. Dr. H.J. van den Herik of the
Rijksuniversiteit Limburg (The Netherlands).

I will give more details if requested.

Victor Allis.
Vrije Universiteit van Amsterdam.
The Netherlands.
victor@cs.vu.nl

------------------------------

Date: Tue, 1 Nov 88 08:58:15 PST
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Desktop chess machines now rated above 2200


      The current issue of Chess Life reports some new ratings of commercial
chess machines.  There are now portable, desktop machines playing above 2200.
One of the National (US) Masters involved in the evaluation, after losing
2 out of 3 to a Fidelity unit, said that he was beginning to feel like
John Henry going up against the steam drill.

      It's getting embarrassing in the chess world.  It wasn't so bad
losing to a supercomputer that occupied a sizable building.  But losing
to some little box seems to be getting to some of the chess masters.
Especially when you know that next year's box will be even better.

                                        John Nagle

------------------------------

Date: Mon 7 Nov 88 08:20:39-PST
From: Stuart Cracraft <CRACRAFT@venera.isi.edu>
Subject: Deep Thought makes it

Article 1620 of rec.games.chess:
Relay-Version: version B 2.10.3 4.3bsd-beta 6/6/85; site venera.isi.edu
From: tsa@vlsi.cs.cmu.edu (Thomas Anantharaman)
Newsgroups: rec.games.chess
Subject: Deep Thought in Hall of Fame Chess Festival
Message-ID: <3504@pt.cs.cmu.edu>
Date: 7 Nov 88 01:26:41 GMT
Date-Received: 7 Nov 88 08:08:23 GMT
Sender: netnews@pt.cs.cmu.edu
Organization: Carnegie-Mellon University, CS/RI
Lines: 35

Deep Thought tied for first with IM Igor Ivanov (2618) in the Hall of Fame
Chess Festival over the weekend.  Deep Thought scored 4.5 out of 5.  In the
first three rounds it beat Hugon (2007), Papenhausen (2143) and Marshall
(2170). In the fourth round DT defeated IM Calvin Blocker (2515).  In the
final round DT drew against IM Igor Ivanov (2618).  Deep Thought has now
defeated 4 IMs in its past 22 games.

DT's performance rating in this tournament was 2610.  Our estimate of DT's
new established rating is about 2510.  This is the first time a computer has
crossed over the 2500 threshold in established rating.
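
(For the curious: the 2610 figure is consistent with the common linear
approximation for performance rating, namely average opponent rating plus
400 * (wins - losses) / games; the exact formula used by the USCF may
differ.)

    # Rough check of the reported performance rating (linear approximation
    # only, not necessarily the official rating formula).
    opponents = [2007, 2143, 2170, 2515, 2618]   # ratings listed above
    wins, losses, games = 4, 0, 5                # four wins and one draw
    perf = sum(opponents) / len(opponents) + 400 * (wins - losses) / games
    print(perf)   # 2610.6 -- about the reported 2610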

The games from the last two rounds are given below.

IM Calvin Blocker (2515) vs. Deep Thought (2495):
1. e4,Nf6; 2. e5,Nd5; 3. Nc3,N:c3; 4. b:c3,d5; 5. d4,c5;
6. h3,Nc6; 7. Nf3,e6; 8. Bd3,c:d4; 9. c:d4,Nb4; 10. Be2,Qc7;
11. o-o,N:c2; 12. Bb5,Bd7; 13. B:d7,K:d7; 14. Rb1,Rc8; 15. Rb3,b6;
16. Bb2,Bb4; 17. a3,Ba5; 18. Ng5,Rf8; 19. Qe2,h6; 20. Rc1,h:g5;
21. R:c2,Qb8; 22. Qb5,Ke7; 23. a4,Rc8; 24. Ba3,Kd8; 25. Bd6,Qb7;
26. Rb2,Bc3; 27. Rb1,B:d4; 28. Rbc1,R:c2; 29. R:c2,f6; 30. Qb4,Rh4;
31. Bc7,Ke8; 32. g4,f:e5; 33. Qd2,R:h3; 34. Q:g5,e4; 35. Qg6,Ke7;
36. Bf4,Rc3; 37. Rd2,Rc1; 38. Kh2,Bc3; 39. Re2,Qa6; 40. Bg5,Kd6;
41. Qf7,Q:e2; 42. Qe7,Kc6; 43. Qe6,Kc5; 44. Be7,Kc4; 45. Qc6,Kd4;
46. resigns.  (Blocker was under time pressure near the end.)

IM Igor Ivanov (2618) vs. Deep Thought (2495):
1. c4,e5; 2. Nc3,Bb4; 3. g3,Nf6; 4. Bg2,Nc6; 5. d3,d5;
6. c:d5,N:d5; 7. Bd2,Be6; 8. Nf3,N:c3; 9. b:c3,Be7; 10. Rb1,Rb8;
11. Qa4,o-o; 12. o-o,a6; 13. Be3,Qd7; 14. Rfd1,Qd5; 15. Rd2,b5;
16. Qd1,Qd8; 17. d4,e:d4; 18. B:d4,Qe8; 19. Be5,Bf5; 20. Rb3,Na5;
21. B:c7,N:b3; 22. a:b3,Rc8; 23. Ba5,Bc5; 24. Nd4,Be4; 25. Bh3,f5;
26. e3,g6; 27. Bf1,Qe5; 28. Nc2,Rf7; 29. c4,Bb7; 30. c:b5,Qe4;
31. f3,B:e3; 32. N:e3,Q:e3; 33. Kg2,a:b5; 34. B:b5,Rc1; 35. Rd8,Kg7;
36. Qd4,Q:d4; 37. R:d4,Rc2; 38. Bd2,Bc8; 39. Kf2,Be6; 40. b4,Ra7;
41. h4,Ra2; 42. Ke3,h5; 43. Be2,Kf7; 44. Bd1,Rb2; 45. Be2, draw agreed.

------------------------------

End of AIList Digest
********************

∂08-Nov-88  1729	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #123 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 8 Nov 88  17:29:24 PST
Date: Tue  8 Nov 1988 17:19-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #123
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 9 Nov 1988     Volume 8 : Issue 123

 Seminars:

  Towards a Theory of Syntactic Constructions (and more)   - Arnold Zwicky
  Review of the First Workshop on Artificial Intelligence and Music
  Specialization Is For Insects                            - Tom Knight
  Generation and Recognition of Affixational Morphology    - John Bear
  Why AI needs Connectionism?                              - Lokendra Shastri
  Writing and Reading: the View From the U.K.              - John Dixon
  SHERLOCK: an Environment for Electronics Troubleshooting - Susanne Lajoie

----------------------------------------------------------------------

Date: Mon, 24 Oct 88 17:05:15 EDT
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Towards a Theory of Syntactic Constructions (and more) -
         Arnold Zwicky


                         UNIVERSITY AT BUFFALO
                      STATE UNIVERSITY OF NEW YORK

                       DEPARTMENT OF LINGUISTICS
                  GRADUATE GROUP IN COGNITIVE SCIENCE
                                  and
   GRADUATE RESEARCH INITIATIVE IN COGNITIVE AND LINGUISTIC SCIENCES

                                PRESENT

                             ARNOLD ZWICKY

            Department of Linguistics, Ohio State University
             Department of Linguistics, Stanford University

            1.  TOWARDS A THEORY OF SYNTACTIC CONSTRUCTIONS

The past decade has seen the vigorous development of frameworks for syn-
tactic  description  that  not  only are fully explicit (to the point of
being easily modeled in computer programs) but also are integrated  with
an  equally explicit framework for semantic description (and, sometimes,
with equally explicit  frameworks  for  morphological  and  phonological
description).   This  has  made it possible to reconsider the _construc-
tion_ as a central concept in syntax.

Constructions are, like words, Saussurean signs--linkages of  linguistic
form  with  meanings  and pragmatic values.  The technical problem is to
develop the appropriate logics for the  interactions  between  construc-
tions,  both  with  respect  to  their  form  and  with respect to their
interpretation.  I am concerned here primarily with the formal  side  of
the  matter,  which turns out to be rather more intricate than one might
have  expected.   Constructions  are  complexes  of   categories,   sub-
categories, grammatical relations, conditions on governed features, con-
ditions on agreeing features, conditions on phonological  shape,  condi-
tions  on branching, conditions on ordering, _and_ specific contributory
constructions (so that, for example, the subject-auxiliary  construction
in  English  contributes  to  several  others, including the information
question construction, as in `What might you have seen?').  The  schemes
of  formal  interaction  I  will  illustrate  are overlapping, or mutual
applicability; superimposition, or invocation; and preclusion, or  over-
riding of defaults.

                       Thursday, November 3, 1988
                               5:00 P.M.
                       Baldy 684, Amherst Campus

       There will be an evening discussion on Nov. 3, 8:00 P.M.,
         at the home of Joan Bybee, 38 Endicott, Eggertsville.

=========================================================================

     2.  INFLECTIONAL MORPHOLOGY AS A (SUB)COMPONENT OF GRAMMAR

                        Friday, November 4, 1988
                               3:00 P.M.
                       Baldy 684, Amherst Campus

                       Wine and cheese to follow.

Call Donna Gerdts (Dept. of Linguistics, 636-2177) for further information.

------------------------------

Date: Wed, 26 Oct 88 12:04 EDT
From: "TSD::AIP1::\"Len@HEART-OF-GOLD\"%atc.bendix.com"@RELAY.CS.NET
Subject: Review of the First Workshop on Artificial Intelligence and
         Music

Date: Wed, 26 Oct 88 12:00 EDT
From: Len Moskowitz <Len@HEART-OF-GOLD>
Subject: Review of the First Workshop on Artificial Intelligence and Music
To: "3077::IN%\"AIList@ai.ai.mit.edu\""@TSD1
Message-ID: <19881026160039.2.LEN@HEART-OF-GOLD>

    The First Workshop on Artificial Intelligence and Music was held on August
24, 1988 during AAAI-88.  It brought together more than 40 researchers from the
U.S.A., Canada, Belgium, the U.K., and Israel.  The workshop was divided into
five sessions: expert systems; tutoring systems and languages; cognitive models
and knowledge representation; neural networks and parallelism; and a final
session covering perception, philosophy, and the symbiotic relationship between
music and artificial intelligence.  Many of the presentations included
audio/visual demonstrations.
    The workshop was sponsored by AAAI and organized by Mira Balaban (Ben
Gurion University, Israel), Kemal Ebcioglu (IBM Thomas J. Watson Research
Center), Marc Leman (University of Ghent, Belgium), and Linda Sorisio (IBM Los
Angeles Scientific Center).
    From an AI perspective, the workshop spanned a wide range of topics
including planning, machine learning, neural networks, tutoring systems,
knowledge representations, languages, parallelism, pattern recognition,
temporal reasoning, design, and expert systems.  From a music perspective, the
presenters focused on analysis of tonal and atonal music, composition, music
education, music perception and cognition, performance, automated
accompaniment, and user interfaces.
    This year's AAAI had a noteworthy event.  Thanks to the AI and Music
workshop and Harold Cohen's invited talk ("How to Draw Three People In a
Botanical Garden," part of AAAI proper), this was the first time, to my
knowledge, that research carried out within a humanities context received
significant attention.
    Many of the attendees expressed a desire for a similar workshop at next
year's IJCAI/AAAI gathering.  Anyone interested in organizing or assisting next
year's workshop should contact Dr. Ebcioglu (kemal@ibm.com).
    Copies of the workshop's proceedings are available from AAAI (445 Burgess
Drive, Menlo Park, CA 94025, U.S.A.) for $20 (U.S.) plus $2.40 shipping and
handling.

    A summary of the sessions follows:

    Otto Laske (New England Computer Arts Association) delivered an invited
talk.  Laske is perhaps the father of the AI/music synthesis and the field he's
dubbed cognitive musicology. He differentiated between traditional musicology
and the developing discipline of cognitive musicology in that the first is
narrowly centered around the artifacts that musical activity creates, while the
latter
seeks to understand music as one of the processes and structures resulting from
man's 'being in the world.'  Viewed in this way, the programs we write are
formal speech accounts of an activity (referents to a human activity) but not
that activity.  While programs are not music, they can help us to understand
music.  Therefore, if I understand Laske's viewpoint correctly, the goal of the
AI/music synthesis is to investigate, within the music framework, how we become
and are an integral part of the world.
    The session on expert systems included presentations describing a computer
simulation of perception of musical rhythm based on Lerdahl and Jackendorff's
generative theory of tonal music; a forward-chaining rule-based system that
performs harmonic chord function analysis for tonal music; a system for tonal
composition that learns by reordering its production rule priorities and by
generalizing new rules based on recurring patterns in its histories; and a
"Cybernetic Composer" that uses constraint satisfaction and backtracking to
compose realistic music in four different genres ('standard' jazz, Latin jazz,
rock, and ragtime).  During this last presentation, Charles Ames (Kurzweil
Foundation Automated Composition Project) played a wonderfully entertaining
audio tape of his system performing.
    The session on tutoring systems and languages included presentations about
an architecture for a tutoring system to teach musical structure analysis and
melody interpretation; a tutoring system to teach elementary aspects of music
theory and ear training; a method of generating music using linguistic
(syntactic) techniques; a preliminary description of a programming language for
generating and analyzing musical compositions; and a knowledge representation
for tutoring systems using constraint satisfaction embedded in a logic
programming language (applied to teach harmony).
    The session on cognitive models and knowledge representation had
presentations on modeling and generating music using multiple viewpoints based
on Markov models; a representation that provides sufficient expressiveness for
analysis of atonal music; and musical composition considered as problem
reduction.
    The session on neural networks and parallelism included papers on storing
and processing time-varying musical information in PDP networks; learning of
musical concepts (keys, major/minor, scales, chord expectancies) and cognitive
properties (pitch invariance) using neural nets; and a method of applying the
massively parallel Connection Machine to separate audio sources in polyphonic
music.  As part of this last presentation, Barry Vercoe (MIT Media Lab) showed
a videotape of an automatic accompanist that learns from rehearsal and adapts
to a performer's habitual idiosyncrasies.
    The final, rather eclectic session heard papers on the cross-fertilization
between the fields of music and AI; computer implementations of a cognitive
model of music perception; the use of spatial/visual representations and
improved user interfaces to aid sound analysis; and the need to realize the
limitations of computable functions as descriptions/simulations of music
cognition.

Len Moskowitz
                                                   Allied-Signal Aerospace
moskowitz@bendix.com (CSnet)                       Test Systems Division
moskowitz%bendix.com@relay.cs.net (ARPAnet)        Mail code 4/8
arpa!relay.cs.net!bendix.com!moskowitz (uucp)      Teterboro, NJ 07608

------------------------------

Date: Thu 3 Nov 88 11:17:04-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Specialization Is For Insects - Tom Knight

                    BBN Science Development Program
                       AI Seminar Series Lecture

                     SPECIALIZATION IS FOR INSECTS

                               Tom Knight
                    MIT Artificial Intelligence Lab
                           (tk@AI.AI.MIT.EDU)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Tuesday 8 November


        The chaos of the last decade in parallel computer architecture
is largely due to the premature specialization of parallel computer
architectures to support particular programming models.  The careful
choice of the correct primitives to support in hardware leads to a
general purpose parallel architecture which is capable of supporting a
wide variety of programming models.

        This talk will argue that low latency communication emerges as
the essential component in parallel processor design, and will
demonstrate how to use low latency communication to support other
programming models such as data level parallelism and coherent shared
memory in large processor arrays.

        We are now designing a very low latency, high bandwidth, fault
tolerant communications network, called Transit.  It forms the
communications infrastructure - the replacement of the bus - for a high
speed MIMD processor array which can be programmed using a wide variety
of parallel models.  Transit achieves its high performance through an
interdisciplinary approach to the problem of communications latency.

        The packaging of Transit is done using near isotropic density
three dimensional wiring, allowing much tighter packing of components,
and routing of wires on a 3-D grid.  The network is direct contact
liquid cooled with Fluorinert.  The use of custom VLSI pad drivers and
receivers provides very high speed signalling between chips.  The
topology of the network provides self-routing, fault tolerant, short
pipeline delay communications between pairs of processors.  And finally,
the design of the processor itself allows high speed message dispatching
and low latency context switching.

------------------------------

Date: Fri, 4 Nov 88 22:44:04 EST
From: finin@PRC.Unisys.COM
Subject: Generation and Recognition of Affixational Morphology - John
         Bear


                              AI SEMINAR
                     UNISYS PAOLI RESEARCH CENTER

                              John Bear
                          SRI International

        Generation and Recognition of Affixational Morphology

Koskenniemi's two-level morphological analysis system can be improved
upon by using a PATR-like unification grammar for handling the
morphosyntax instead of continuation classes, and by incorporating the
notion of negative rule feature into the phonological rule
interpreter.  The resulting system can be made to do generation and
recognition using the same grammars.
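
As a rough illustration of the unification step involved (a sketch only,
not Bear's system; the feature names below are made up), a PATR-style
grammar combines a stem and an affix by unifying their feature structures,
and a feature clash blocks the combination:

    FAIL = object()   # sentinel for a failed unification

    def unify(a, b):
        """Unify two feature structures (nested dicts or atomic values)."""
        if isinstance(a, dict) and isinstance(b, dict):
            out = dict(a)
            for key, value in b.items():
                if key in out:
                    merged = unify(out[key], value)
                    if merged is FAIL:
                        return FAIL
                    out[key] = merged
                else:
                    out[key] = value
            return out
        return a if a == b else FAIL

    stem  = {'cat': 'V', 'agr': {}}                       # made-up entries
    affix = {'cat': 'V', 'agr': {'num': 'sg', 'per': 3}}
    print(unify(stem, affix))                  # merged features for the form
    print(unify(stem, {'cat': 'N'}) is FAIL)   # True: category clash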

                     1:00 am  - November 7, 1988
                         R&D Conference Room
                     Unisys Paoli Research Center
                      Route 252 and Central Ave.
                            Paoli PA 19311

   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --

------------------------------

Date: 4 Nov 88 15:49:33 GMT
From: wucs1!loui@uunet.uu.net  (Ron Loui)
Subject: Why AI needs Connectionism? - Lokendra Shastri


                        COMPUTER SCIENCE COLLOQUIUM

                           Washington University
                                 St. Louis

                              4 November 1988


 TITLE: Why AI needs Connectionism? A Representation and Reasoning Perspective


                              Lokendra Shastri
                 Computer and Information Science Department
                           University of Pennsylvania



Any generalized notion of inference is intractable, yet we are capable of
drawing a variety of inferences with remarkable efficiency - often in a few
hundred milliseconds. These inferences are by no means trivial and support a
broad range of cognitive activity such as classifying and recognizing objects,
understanding spoken and written language, and performing commonsense
reasoning.  Any serious  attempt at understanding intelligence must provide a
detailed computational account of how such inferences may be drawn with
requisite efficiency.  In this talk we describe some  work within the
connectionist framework that attempts to offer such an account. We focus on
two connectionist knowledge representation and reasoning systems:

1) A connectionist semantic memory that computes optimal solutions to an
interesting class of inheritance and recognition  problems  extremely
fast - in time proportional to the depth of the conceptual hierarchy.  In
addition to being efficient, the connectionist realization is based on an
evidential formulation and provides a principled treatment of exceptions,
conflicting multiple inheritance, as well as the best-match or
partial-match computation.

2)  A connectionist system that represents knowledge in terms of multi-place
relations (n-ary predicates), and draws a limited class of inferences based on
this knowledge with extreme efficiency. The time taken by the system to draw
conclusions is proportional to the length of the proof, and hence,
optimal.  The system incorporates a solution to the "variable binding" problem
and uses the temporal  dimension to establish and maintain bindings.

We conclude that working within the connectionist framework is well motivated,
as it helps in identifying interesting classes of limited inference that can
be performed with extreme efficiency, and aids in discovering constraints
that must be placed on the conceptual structure in order to achieve extreme
efficiency.
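
As a loose illustration of the complexity claim in point 1 (this is only a
toy walk up an IS-A chain, not the connectionist realization, and it
ignores exceptions and evidence combination), the work needed to answer an
inheritance query can grow with the depth of the conceptual hierarchy
rather than with the total size of the memory:

    # Toy IS-A hierarchy: a property lookup visits at most one concept per
    # level, so the number of steps is bounded by the hierarchy's depth.
    isa   = {'canary': 'bird', 'bird': 'animal', 'animal': None}
    props = {'bird': {'can_fly'}, 'animal': {'breathes'}}

    def inherits(concept, prop):
        steps = 0
        while concept is not None:
            steps += 1
            if prop in props.get(concept, set()):
                return True, steps
            concept = isa[concept]
        return False, steps

    print(inherits('canary', 'breathes'))   # (True, 3): three levels visited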


host:  Ronald Loui

------------------------------

Date: Mon 7 Nov 88 16:51:26-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Writing and Reading: the View From the U.K. - John Dixon

                    BBN Science Development Program
                  AI/EDUCATION Seminar Series Lecture

              WRITING AND READING: THE VIEW FROM THE U.K.

                               John Dixon

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                     10:30 am, Thursday November 10


        ********************************************************
        *                                                      *
        *   No abstract was available for this presentation.   *
        *      Below is a short biography of the speaker.      *
        *                                                      *
        ********************************************************


John Dixon is an educational writer and consultant from London,
England, who has been a teacher in an inner-city school in London
as well as a Senior Lecturer in a teacher training college at Leeds.
Dixon is the author of "Growth through English", the major report of
the Anglo-American Dartmouth Seminar in 1966.  His writing since then
has included anthologies for school use and a number of books on the
teaching of writing, the most recent of which is "Writing Narrative -
and Beyond".

For many years a member of and then chair of The Schools Council
Committee on English, Dixon has directed research and studies on
Teaching English to the School Leaving Age and has investigated
the effect of the questions asked on university examinations on
the teaching of literature in schools.

------------------------------

Date: Mon 7 Nov 88 16:52:16-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: SHERLOCK: an Environment for Electronics Troubleshooting -
         Susanne Lajoie

                    BBN Science Development Program
                       AI Seminar Series Lecture

               SHERLOCK:  A COACHED PRACTICE ENVIRONMENT
                 FOR AN ELECTRONICS TROUBLESHOOTING JOB

                             Susanne P. Lajoie
               Learning Research and Development Center,
                        University of Pittsburgh
                 (LAJOIE%LRDCA@Vms.Cis.Pittsburgh.Edu)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Tuesday November 15


Sherlock is a computer-based practice environment for teaching
first-term airmen avionics troubleshooting skills.  Sherlock's
instructional goals were determined by a cognitive task analysis of
skill differences in this domain. The predominant instructional
strategy is to support holistic practice of troubleshooting rather
than train discrete knowledge skills. Instruction is based on complex
decision graphs of skilled and less skilled plans and actions for each
troubleshooting problem. As a trainee works through a problem Sherlock
observes the quality of decisions the trainee makes and uses that
information to provide the level of hint explicitness necessary at
particular decision points in the problem. In this way, specific
competency building is situated within the troubleshooting context and
is sharpened only to the extent that each individual needs.

Sherlock was field tested in a controlled study that compared tutored
trainees with a control group that received no extra training other than
"on-the-job" experience.  Pre- and post-tests of verbal troubleshooting
indicated that the tutored group performed better than the control group
on the post-tests of troubleshooting proficiency.  Not only were more
problems solved but there were several indications of emerging
competence over the course of tutoring that demonstrated that trainees
were becoming more "expert-like" in the overall troubleshooting process.
In an independent evaluation the Air Force found the Sherlock treatment
to be equivalent to 47-51 months of "on the job" experience.

Enhancements have been added to Sherlock that could increase its
effectiveness even more.  An explicit articulation of expert and
student problem-solving traces now exists that could facilitate the
comparison of different levels of expertise. At the completion
of each problem trainees will be able to interrogate the trace of the
expert problem solution and see why an expert would make a particular
move as well as see the mental models used by an expert to test
different paths in the problem space.

------------------------------------
This research was made possible through the combined efforts of the
following individuals:  Alan Lesgold, Jaya Bajpayee, Marilyn Bunzo,
Gary Eggan, Linda Greenberg, Debra Logan,  Thomas McGinnis, Cassandra
Stanley, Arlene Weiner, Richard Wolf, and Laurie Yengo, as well as
researchers at AFHRL Brooks, and the Air Force personnel that made our
study possible.

------------------------------

End of AIList Digest
********************

∂13-Nov-88  1837	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #124 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Nov 88  18:37:00 PST
Date: Sun 13 Nov 1988 21:20-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #124
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 14 Nov 1988      Volume 8 : Issue 124

 Announcements:

  Call for papers: AI and Communicating Process Architecture (OUG AISIG)
  CVPR 89 submission deadline - Nov. 16
  Call for papers:  AIRIES-89
  call for papers: Augmenting Human Intellect by Computer

----------------------------------------------------------------------

Date: Thu, 3 Nov 88 19:34:32 GMT
From: Steven Zenith <mcvax!inmos!zenith@uunet.UU.NET>
Subject: Call for papers: AI and Communicating Process Architecture
         (OUG AISIG)

                            occam user group
                      * artificial  intelligence *
                         special interest group

                             CALL FOR PAPERS
                 1st technical meeting of the OUG AISIG
                         ARTIFICIAL INTELLIGENCE
                                   AND
                   COMMUNICATING PROCESS ARCHITECTURE
         17th and 18th of July 1989, at Imperial College, London UK.
                        Keynote speakers will include
                            * Prof. Iann Barron *
                        "Inventor of the transputer"

The conference organising committee includes:
    Dr.med.Ulrich Jobst    Ostertal - Klinik fur Neurologie und
                           klinische Neurophysiologie
    Dr. Heather Liddell,   Queen Mary College, London.
    Prof. Dr. Y. Paker,    Polytechnic of Central London
    Prof. Dr. L. F. Pau,   Technical University of Denmark.
    Prof. Dr. Bernd Radig, Institut Fur Informatik, Munchen.
    Prof. Dr. David Warren, Bristol University.

Conference chairmen:
    Dr. Mike Reeve         Imperial College, London
    Steven Ericsson Zenith INMOS Limited, Bristol (Chairman OUG AISIG)

Topics include:
         The transputer and a.i.                 Real time a.i
          Applications for a.i.            Implementation languages
       Underlying kernel support          Underlying infrastructure
          Toolkits/environments                Neural networks

    Papers must be original and of high quality. Submitted papers
    should be about 20 to 30 pages in length, double spaced and single
    column, with an abstract of 200-300 words. All papers will be
    refereed and will be assessed with regard to their quality and
    relevance.

    A volume is being planned to coincide with this conference to be
    published by John Wiley and Sons as a part of their book series on
    Communicating Process Architecture.

    Papers must be submitted by the 1st of February 1989. Notification
    of acceptance or rejection will be given by March 1st 1989.
    Final papers (as camera ready copy) must be provided by April 1st
    1989.

Submissions to be made to either:
    Steven Ericsson Zenith                Mike Reeve
    INMOS Limited,                        Dept. of Computing,
    1000 Aztec West,                      Imperial College,
    Almondsbury,                          180 Queens Gate,
    Bristol BS12 4SQ,                     London SW7 2BZ,
    UNITED KINGDOM.                       UNITED KINGDOM.
    Tel. 0454 616616 x513                 Tel. 01 589 5111 x5033
    email: zenith@inmos.uucp              email: mjr@doc.ic.ac.uk

Regional Organisers:
    J.T Amenyo             Ctr. Telecoms Research, Columbia University,
                           Rm 1220 S.W. Mudd, New York, NY 10027-6699.
    Jean-Jacques Codani    INRIA, Domaine de Voluceau - Rocquencourt,
                           B.P.105-78153 Le Chesnay Cedex, France.
    Pasi Koikkalainen      Lappeenranta University of Technology,
                           Information Technology Laboratory,
                           P.O.BOX 20, 53851 Lappeenranta, Finland.
    Kai Ming Shea          Dept. of Computer Science,
                           University of Hong Kong, Hong Kong.
    Dr. Peter Kacsuk       Multilogic Computing, 11-1015 Budapest,
                           Csalogaiy u. 30-32. Hungary.

------------------------------

Date: 7 Nov 88 01:50:02 GMT
From: haven!uvaarpa!virginia!uvacs!wnm@mimsy.umd.edu  (Worthy N.
      Martin)
Subject: CVPR 89 submission deadline - Nov. 16


The following call for papers has appeared before,
however, it is being reissued to remind interested
parties of the first deadline, namely:

---->
----> November 16, 1988 -- Papers submitted
---->

This deadline will be firmly held to, with the submission
date determined by postmark.
Thank you for your interest in CVPR89.
   Worthy Martin


----------------------------------------------------------

                      CALL FOR PAPERS

              IEEE Computer Society Conference
                            on
          COMPUTER VISION AND PATTERN RECOGNITION

                    Sheraton Grand Hotel
                   San Diego, California
                      June 4-8, 1989.



                       General Chair


               Professor Rama Chellappa
               Department of EE-Systems
               University of Southern California
               Los Angeles, California  90089-0272


                     Program Co-Chairs

Professor Worthy Martin          Professor John Kender
Dept. of Computer Science        Dept. of Computer Science
Thornton Hall                    Columbia University
University of Virginia           New York, New York  10027
Charlottesville, Virginia 22901


                     Program Committee

Charles Brown         John Jarvis            Gerard Medioni
Larry Davis           Avi Kak                Theo Pavlidis
Arthur Hansen         Rangaswamy Kashyap     Alex Pentland
Robert Haralick       Joseph Kearney         Roger Tsai
Ellen Hildreth        Daryl Lawton           John Tsotsos
Anil Jain             Martin Levine          John Webb
Ramesh Jain           David Lowe



                    Submission of Papers

Four copies of complete drafts, not exceeding 25 double-spaced typed
pages, should be sent to Worthy Martin at the address given above by
November 16, 1988 (THIS IS A HARD DEADLINE).  All reviewers and authors
will be anonymous for the review process.  The cover page will be
removed for the review process.  The cover page must contain the title,
authors' names, primary author's address and telephone number, and
index terms containing at least one of the topics below.  The second
page of the draft should contain the title and an abstract of about
250 words.  Authors will be notified of acceptance by February 1, 1989,
and final camera-ready papers, typed on special forms, will be required
by March 8, 1989.

                  Submission of Video Tapes

As a new feature there will be one or two sessions where the authors
can present their work using video tapes only.  For information
regarding the submission of video tapes for review purposes, please
contact John Kender at the address above.



                 Conference Topics Include:

          -- Image Processing
          -- Pattern Recognition
          -- 3-D Representation and Recognition
          -- Motion
          -- Stereo
          -- Visual Navigation
          -- Shape from _____ (Shading, Contour, ...)
          -- Vision Systems and Architectures
          -- Applications of Computer Vision
          -- AI in Computer Vision
          -- Robust Statistical Methods in Computer Vision



                           Dates

      November 16, 1988 -- Papers submitted
      February 1, 1989  -- Authors informed
      March 8, 1989     -- Camera-ready manuscripts to IEEE
      June 4-8, 1989    -- Conference

------------------------------

Date: Thu, 10 Nov 88 16:11:01 MST
From: <cfields@NMSU.Edu>
Subject: Call for papers:  AIRIES-89

Third AI Research in Environmental Science Workshop

2-4 May 89, Crowne Plaza Hotel, Rockville, MD.


The workshop will address topics of interest to practitioners of AI
applied to environmental science.  The workshop will incorporate a
variety of presentation formats including invited and contributed
papers, panel discussions, and demonstrations.

Authors are requested to submit an abstract of approximately 300 words
in one of the following topic areas:

Demonstrable AI systems        Database usage and management
Work in progress               Data analysis systems
Knowledge engineering          Validation
Software engineering           Data quality control
Human factors                  Educational considerations
Hardware considerations        Research issues

to:

Dr. W. Moninger                           ***Deadline: 1 Jan. 89***
Environmental Research Laboratories
NOAA, R/E2
325 Broadway
Boulder, CO 80303-3328

303-497-6435

Authors should indicate whether the abstract is for oral, poster, or
demonstration presentation.  Authors will be notified of the status of
their presentations by 1 Mar. 89.  Written papers will not be
required, and there will be no proceedings.

------------------------------

Date: Fri, 11 Nov 88 11:40:24 MST
From: Emilie Young <young@boulder.Colorado.EDU>
Subject: call for papers: Augmenting Human Intellect by Computer


                       Call for Papers
          Fourth Annual Rocky Mountain Conference
                 on Artificial Intelligence

                      June 8 & 9, 1989
                       Clarion Hotel
                      Denver, Colorado

            Augmenting Human Intellect by Computer

This conference is designed to explore the means by which Artificial
Intelligence can enhance human cognitive abilities. We are particularly
interested in work that addresses the way computer systems can support
their users' problem-solving needs.  Relevant topics for this conference
are:

        - Intelligent support of human communication
        - Computer supported cooperative work
        - Automated reasoning and problem solving
        - User interfaces and user interface management systems
        - Tutoring, Training & Education
        - Design, Manufacturing & Control
        - Planning
        - Human Problem Solving

Although purely theoretical papers will be considered, all papers should
indicate how the technology will be used to change the way people
think and communicate.

Papers due: December 9, 1988
Authors must submit three copies of their paper, not to exceed 4000
words.  Each submission must include the keywords describing the research
and state whether it will be submitted to another conference.

Send to: James H. Alexander, RMCAI Program Chair
         U S WEST Advanced Technologies
         6200 S. Quebec #320, Englewood, CO 80111

------------------------------

End of AIList Digest
********************

∂13-Nov-88  2109	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #125 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 13 Nov 88  21:09:33 PST
Date: Sun 13 Nov 1988 21:51-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #125
To: AIList@AI.AI.MIT.EDU


AIList Digest            Monday, 14 Nov 1988      Volume 8 : Issue 125

 Philosophy:

  Epistemology of common sense
  The study of intelligence
  Computer science as a subset of artificial intelligence
  Lightbulbs and Related Thoughts
  IJCAI Panels

----------------------------------------------------------------------

Date: Mon, 7 Nov 88 11:21:08 EST
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: epistemology of common sense


In AIList Digest for Monday, 7 Nov 1988 (Volume 8, Issue 121), in a
message dated 31 Oct 88 2154 PST on the topic "AI as CS and the
scientific epistemology of the common sense world", John McCarthy
<JMC@SAIL.Stanford.EDU> has persuasive words for colleagues who prefer
to limit their research to things that are amenable to tidy mathematical
formulation.

The audience of "neats" he was addressing should ignore this.  I want to
talk about aspects of common sense that seem even less tidy.  (But there
is hope, cf. references at the end.)

JMC> Intelligence can be studied
   | . . .
   | (3) through studying the tasks presented in the achievement of
   | goals in the common sense world.
   | . . .
   |                     I have left out sociology, because I think its
   | contribution will be peripheral.
   | . . .
   | AI is the third approach.  It proceeds mainly in computer science

There is more to common sense than the study of tasks and goals
specified in physical terms.  Much of common sense involves social
facts, not just physical facts.  A telltale of social facts is that they
are matters of convention.  Absent intelligent agents conforming to
them, they do not exist.

Restricted to physical facts, common sense concerns things like "I can't
put the blue pyramid in the box, it's already in there" or "I can't put
the lintel on yet, I need to move the second column closer to the
first."

Suppose we had an AI equipped with common sense defined solely in terms
of physical facts.  This is somewhat like the proverbial person who
knows the price of everything but the value of nothing.

We deceive ourselves when we put labels on things like "road" or
"vehicle" or even "arch" in a knowledge base.  We have many expectations
and other associations with these terms that a knowledge base
lacks--unless we explicitly include those associations.

If and when we do begin to include such associations (that line defines
my lane, this is the slow-speed lane, drive on the right--unless in
England or Sweden or . . . that joker's trying to pass me in the
breakdown lane . . .  this must be Boston . . . ) we are involved with
the sociology of knowledge.

Look at Erving Goffman on, say, presentation of self or interaction
rituals.  Look at W. Pearce (UMass Amherst) on communication rules and
rules for constituting the social order.  For starters.

An AI must be responsive as a member of the social order if it is to be
regarded as intelligent by humans.  It does not need the physiological
or psychological mechanisms of humans, but it does need to understand
their conventions.

Bruce Nevin
bn@bbn.com
<usual_disclaimer>

------------------------------

Date: Mon, 7 Nov 88 10:22:14 PST
From: norman%ics@ucsd.edu (Donald A Norman-UCSD Cog Sci Dept)
Reply-to: danorman@ucsd.edu
Subject: The study of intelligence


Time for comment from a Cognitive Scientist on the appropriate
approach to the study of Intelligence.

As usual, John McCarthy has provided us with a cogent and coherent
analysis of the approaches one might take, but although his approach
appears sensible, I wish to disagree about the importance of several
aspects he downplayed.

McCarthy states:
     Intelligence can be studied
       (1) through the physiology of the brain,
       (2) through psychology,
       (3) through studying the tasks presented in the achievement of goals
           in the common sense world.
True enough, except that I would add several others:
        (4) through an analysis of intelligent behavior (in the abstract, as
          is most frequently done in philosophy, and in some AI and
          Cognitive Science endeavors)
        (5)  Through an analysis of how intelligent behavior results from
          an interaction of individual cognition, the cognitions of
          others, the social structures and cultures, and the physical
          environment,   [In part, what we here at UCSD call
          "Distributed Cognition," which is highly related to the
          recent work on "Situated Action" (See Lucy Suchman's book or
          the papers of Agre and Chapman, for example).]

Real intelligence takes place as an interaction among people, in a
social environment, constrained by the particular experiences of the
participants and by the biological structures of the organism (not
just the brain, but also the sensory systems, the locomotive and
grasping mechanisms, and the whole regulatory system which interacts
dramatically with our cognitions).

Traditional analyses of intelligent behavior leave out the role of
emotions, of limited sensory and reasoning capabilities, of the
example-driven aspects of interpretation and memory retrieval and
decision making.  These analyses make logical sense and can lead to
the development of intelligent machines, but they are not accurate
portrayals of human intelligence.  They also (and as a direct result)
miss the creative aspect of human intelligence and fail to
characterize properly real human behavior, both the insightful
variety, and the class of things called "human error."

McCarthy talks of "common sense" but has he really studied what common
sense is about?  One person's common sense is another's nonsense.
Common sense varies widely from culture to culture.  I highly
recommend the paper by Geertz (an anthropologist -- one field McCarthy
left out):
  Geertz, C.  (1983).  Local knowledge: Further essays in interpretive
     anthropology.  New York: Basic Books.  (Especially see the essay
     "Common sense as a cultural system," pp. 73-93.)

In conclusion: John McCarthy has given a logical set of procedures to
follow in the study of Artificial Intelligence.  They make sense and
will lead to advancement in the understanding of one form of
Artificial Intelligence.

But there are many possible forms of Artificial Intelligence, and it
is highly likely that other, dramatically different approaches will also
prove fruitful.

However, I am interested in Real Intelligence, and for this domain,
McCarthy's approach is much too limited, for it neglects the powerful
and important contribution of biological structure, of social
interaction, of the role of cultural knowledge, and of the interaction
among individuals and the environment.  We work in a world of
incomplete and erroneous knowledge, ambiguous situations and
communications, and partial specifications of all sorts, where much of
behavior is driven by the accidents of the environment or by
biological needs and limits.  And almost all of our intelligent
behavior results from social interaction and from the use of artificial
artifacts (which, of course, were created by us to aid our thought and
communication processes -- cognitive artifacts, I call them).

We can only study Real Intelligence by studying Real Organisms in
interaction with other organisms, their cultural knowledge, and their
environment.

don norman

Donald A. Norman        [ danorman@ucsd.edu   BITNET: danorman@ucsd ]
Department of Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093 USA

UNIX:  {gatech,rutgers,ucbvax,uunet}!ucsd!danorman
[e-mail paths often fail: please give postal address and all e-mail addresses.]

------------------------------

Date: 8 Nov 88 01:46:40 GMT
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Reply-to: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: Re: Computer science as a subset of artificial intelligence


In a previous article, Ray Allis writes:
>I was disagreeing with that too-limited definition of AI.  *Computer
>science* is about applications of computers, *AI* is about the creation
>of intelligent artifacts.  I don't believe digital computers, or rather
>physical symbol systems, can be intelligent.  It's more than difficult,
>it's not possible.

There being no other game in town, this implies that AI is impossible.
Let's face it, connectionist nets are rule-governed systems; anything a
connectionist net can do a collection of binary gates can do and vice
versa.  (Real neurons &c may be another story, or may not.)

------------------------------

Date: 10 Nov 88 12:23:41 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov  (David
      Harvey)
Subject: Re: Lightbulbs and Related Thoughts

In a previous article, Tony Stuart writes:
>
> On a similar track, I have often thought that once we find a
> solution to a problem it is much more difficult to search for
> another solution. Over evolutionary history it is likely that
> life was sufficiently primitive that a single good solution was
> sufficient. The brain might be optimized such that the first
> good solution satisfies the problem-seeking mode and to go
> beyond that solution requires conscious effort. This is an
> argument for not resorting to a textbook as the first line of
> problem solving.
>

Usually, advances by humans come on top of what has gone before,
not in a vacuum.  I realize that this is not exactly what you
intended to present here, but it comes out that way regardless.
As to the better solution, that usually is the way it happens.
For example, consider Kepler seeing inconsistencies between the
model proposed by Aristotle and the calculations (just think how
much faster his work would have been with a computer!) he made.
This of course prompted him to devise a new model.  Galileo and
Newton also saw inconsistencies between what was commonly believed
and the effects of gravity, i.e., that acceleration was a constant
not affected by the mass of the object.  Einstein saw inconsistencies
even in this model and developed the theory of relativity.  In other
words, these people KNEW the textbook solutions.  What characterized
them as being different from the masses is that they had the tenacity
to reject the 'textbook' solution when a better model came to mind.
Just how this can be emulated in a computer is not that easy.  The
only thing that can be said is that inconsistencies of data with the
rule base must allow for retraction of the rule and assertion of
new ones.

>
> I've often wondered about the differences between short term
> and long term memory.
>

Don't forget to include iconic memory.  This is, so to speak, the
buffer of our sensory processes.  I am sure that you have
seen many aspects of this phenomenon by now.  Examples are staring
at a flag of the United States for 30 seconds, then observing the
complementary colors of the flag if you then look at a blank wall
(usually works best if the wall is dark).  There are other ways
of observing that there really is such a thing as iconic memory,
but these must be performed in a lab setting with blind studies.
I helped perform one of these at the University of Utah.  How do
you implement this into your model?  I don't know, and I doubt
anyone else does either since much research must yet be done to
see the relationship between iconic, short term, and long term
memory.  Also, the differences between conscious and subconscious
memory processes must be considered.  Much of this iconic information
makes its way into memory via the subconscious track, for which I
would cite as evidence the studies being performed by various
researchers in Psychology.

You have observed the linking process that takes place in our
long term memories.  This is of course a dandy model until you
begin to look at some of our links.  They have some of the
following characteristics:

        [1]     Some of them seem to link together totally randomly.
                I am sure you have observed the phenomenon that some
                of your own links are rather mysterious, where the
                items are not logically related at all.  Nevertheless
                most of them ARE logically related.  Maybe we can
                randomly throw in a time frame for the other links.
                This of course supposes that we can prove that time
                is indeed the model that determines them.  By time
                I mean close time proximity for the linked structures.
        [2]     There is a massive number of them that we search,
                sometimes in vain.  As witness to this consider the
                tip-of-the-tongue phenomenon that we are cursed with.
                I am sure that we all have experienced it.  Perhaps
                those with photographic memory are not cursed with it,
                but not being so blessed I would not know.  Also, some
                of these structures unlink with time and fall away.
                This last tidbit of course goes against the conventional
                textbook wisdom that they stay there forever.
        [3]     Since there are so many, we MUST use parallel processing
                to search them all.  Also realize that they are massive
                in nature, perhaps to the point of exceeding most mass
                storage devices (disks) in use today.

The short term memory does not necessarily have to have a different
data representation.  It still has a linking type nature.  The main
difference I see between the two is that short term memory has far
fewer links than long term.  What needs to be done is to study why and
how this short term memory links up with the long term memory.  Perhaps
frequency of use could be researched as the causal factor.  Initially,
we must establish a linking base for these short term facts to attach
themselves to.  As I see it there are several ways it will link into
the established long term memory.  First of course is the logical link.
Another would be a time frame link where what was considered immediately
before or after would be what we attach it to.  Also, since it is a well
established fact that we can chain things much better via poetry than
prose, rhythm and actual morphemes must be considered for chaining.

> A side effect of this model is that information in short
> term memory cannot be used unless there is a hole in the long
> term memory. This leads to problems in bootstrapping the
> process, but assuming there is a solution to that problem, it
> also models behavior that is present in humans. This is the
> case of feeling that one hears a word or phrase a lot after
> he knows what it means. Another part of the side effect is
> that one cannot use information that he has unless it fits.
> This means that it must be discarded until the long term
> memory is sufficiently developed to accept it.
>

The problem is that there are more than enough holes for something
to be fit in.  Inconsistency seems to thrive in human beings.  It
is only when new information conflicts enough with old that we attempt
to rationalize the two conflicting 'facts'.  Unless the new information
outweighs the old in some way it never replaces it.  It can and does
coexist with the old in tension in many cases.  It is only when we reach
the discomfort level that we attempt to resolve the disparity of the
two in our fact base.

Well, now that I have given enough for Psychologists and AI researchers
to work on for the next 50 years (:-) I can go back to such mundane
chores as homework and sleeping.  Hmm, are we going to model the
activity of sleeping in our machine?

dharvey@wscss

The only thing you can know for sure,
is that you can't know anything for sure.

------------------------------

Date: 10 Nov 88 18:36:41 GMT
From: umix!umich!itivax!ttf@uunet.UU.NET (Fejel)
Subject: IJCAI Panels


At the IJCAI Local Arrangements Committee meeting this past Friday, we
were urged to submit panel suggestions.  I have noticed that the net
explodes whenever AI "infringes" on people's values.  It seems that
ethical and moral issues generally stir up much interest and
controversy, though unfortunately it is often the case that more heat
than light is generated.

With this in mind, I would like to propose a panel
discussion on AI, Ethics, and Morality.
It could have three viewpoints:
1) The ethics of AI applications (e.g., defense-related domains).
2) The ethics of building artificial persons
(as presented by Michael LaChat)(admittedly a bit blue-sky),
and most important,
3) The view of ethics and morality as cognitive tasks,
and therefore legitimate objects of research within
the AI community.

If you're interested, and especially if you are thinking of
coming to Detroit, please email me back your comments and
suggestions, and maybe send a panel discussion request to
the IJCAI organizing committee.

Tihamer

P.S. Unfortunately, I am not familiar with any work being done
in this domain.  Anyone have any pointers?

arpanet:  ttf%iti@umix.cc.umich.edu
uucp:  ...{pur-ee,well}!itivax!ttf (Tihamer T. Toth-Fejel)
Industrial Technologies Institute, Ann Arbor, Michigan  48106
work phone: (313) 769-4248 or 4345
*----*----*----*----*----*----*----*----*----*----*----*----*----*

------------------------------

End of AIList Digest
********************

∂14-Nov-88  1451	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #126 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Nov 88  14:51:17 PST
Date: Mon 14 Nov 1988 17:33-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #126
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 15 Nov 1988     Volume 8 : Issue 126

 Queries:

  Classifier Systems
  NEXPERT OBJECT experiences
  Help finding a reference
  Sample expert systems
  Backward or forward chaining ES
  Applied AI and CAM (1 response)
  Seth R. Goldman's e-mail address
  ES in Electric Power Distribution
  Smalltalk opinions
  Learning arbitrary transfer functions (2 responses)

----------------------------------------------------------------------

Date: 2 Nov 88 23:48:00 GMT
From: mailrus!caen.engin.umich.edu!brian@ohio-state.arpa  (Brian
      Holtz)
Subject: Classifier Systems

Does anyone know of any references that describe classifier systems whose
messages are composed of digits that may take more than two values?
For instance, I want to use a genetic algorithm to train a classifier
system to induce lexical gender rules in Latin.  Has any work been done
on managing the complexity of going beyond binary-coded messages, or
(better yet) encoding characters in messages in a useful, non-ASCIIish way?
I will summarize and post any responses.
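
(For concreteness, a classifier condition over a multi-valued digit
alphabet can keep the usual '#' wildcard matching rule of binary
classifier systems.  The toy sketch below, with made-up values and not
taken from any published system, shows one such encoding.)

    import random

    ALPHABET = '0123'   # four-valued digits instead of the usual {0, 1}
    WILDCARD = '#'

    def matches(condition, message):
        # A condition matches if every non-wildcard position agrees.
        return all(c == WILDCARD or c == m
                   for c, m in zip(condition, message))

    def random_condition(length, p_wild=0.3):
        # Random condition, as an initial classifier population might use.
        return ''.join(WILDCARD if random.random() < p_wild
                       else random.choice(ALPHABET)
                       for _ in range(length))

    message = '3102'
    rules = [random_condition(len(message)) for _ in range(10)]
    print([r for r in rules if matches(r, message)])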

------------------------------

Date: 3-NOV-1988 13:55:24
From: CDTPJB%CR83.NORTH-STAFFS.AC.UK@MITVMA.MIT.EDU
Subject: NEXPERT OBJECT experiences

I am forwarding this query on behalf of one of my research students, Anton
Grashion.

"I am currently engaged in a research project which combines causal geological
models with stochastic mathematical ones. We are proposing to use NEXPERT
OBJECT as a front/back-end to our present system written in 'C'. I would be
grateful if anyone who has experience of Nexpert would let me know their
opinions/complaints/plaudits regarding its use."

Many Thanks

Phil Bradley.                                | Post: Dept. of Computing
JANET: cdtpjb@uk.ac.nsp.cr83                 |       Staffordshire Polytechnic
DARPA: cdtpjb%cr83.nsp.ac.uk@cunyvm.cuny.edu |       Blackheath Lane
Phone: +44 785 53511                         |       Stafford, UK
                                             |       ST18 0AD

------------------------------

Date: 7 Nov 88 05:28:49 GMT
From: pollux.usc.edu!burke@oberon.usc.edu  (Sean Burke)
Subject: need help finding a reference


Dear Netland Folks,

   I've got a reference which I've had trouble locating in libraries and
publishers' catalogs, presumably because I've got some or all of it wrong.
If anyone can fill in the missing parts of this puzzle, I would be grateful.
The book is
        Title:          "Knowledge Acquisition for Rule-Based Systems"
        Editor:         Sandra L Marcus
        Publisher:      Kluwer Academic Publishing

   If you recognize this work, please email me the correct details.

Thanx,
Sean Burke

------------------------------

Date: 7 November 1988 23:15:53 CST
From: U23405 at UICVM (Michael J. Steiner  )
Subject: Sample expert systems...

I am trying to learn about expert systems, and I have found PILES of
literature/articles about expert system theory, but few actual sample
programs that I could pick apart (not literally) to learn about expert
systems. If anyone has any examples of expert systems (forward- or
backward-chaining), could you possibly send me a copy to fool around with?
The program should preferably be in C, although PASCAL, BASIC, FORTRAN,
LISP, and PROLOG would be acceptable also. (If anyone sends a program in
BASIC, I promise not to ruin his reputation by telling everyone :-))

Send all replies to:-----------------+           Michael Steiner
                                     +------>>   Email: U23405@UICVM.BITNET

------------------------------

Date: 8 Nov 88 05:26:00 GMT
From: osu-cis!killer!texbell!merch!cpe!hal6000!tdpvax!miker@ohio-state
      .arpa
Subject: backward or forward expert


I am investigating the building of an expert system as a
diagnostic tool (for a production software/hardware system).
From the literature I have read, it appears that a backward-chaining
expert shell would be the best method.  Have I come to the wrong
conclusion?  Is it feasible to use a forward-chaining inference
engine?
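
Either direction can work; for readers who have not seen the two side by side,
here is a toy sketch (in Python, with an invented diagnostic rule base; not a
recommendation of any particular shell) of data-driven forward chaining versus
goal-driven backward chaining over the same propositional rules:

    # Rules are (body, head) pairs: if every body item is known, the head holds.
    RULES = [(["fan_stopped", "power_ok"], "fan_fault"),
             (["fan_fault"], "replace_fan"),
             (["no_power"], "check_supply")]

    def forward_chain(facts):
        """Data driven: repeatedly fire any rule whose body is satisfied."""
        facts, changed = set(facts), True
        while changed:
            changed = False
            for body, head in RULES:
                if head not in facts and all(b in facts for b in body):
                    facts.add(head)
                    changed = True
        return facts

    def backward_chain(goal, facts):
        """Goal driven: try to prove one goal from the facts and rules."""
        if goal in facts:
            return True
        return any(all(backward_chain(b, facts) for b in body)
                   for body, head in RULES if head == goal)

    if __name__ == "__main__":
        observed = ["fan_stopped", "power_ok"]
        print(forward_chain(observed))                   # derives replace_fan
        print(backward_chain("replace_fan", observed))   # True

Backward chaining tends to fit diagnosis when there are few candidate
conclusions to verify; forward chaining fits better when new observations
should trigger whatever conclusions follow from them.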

------------------------------

Date: 11 Nov 88 09:45:40 GMT
From: martin@prodix.liu.se (Martin Wendel)
Subject: Applied AI and CAM


I am doing a project on applied AI and CAM.

It concerns building an AI planning system for automating
NC-Lathe operations planning. My goal is that the system
will be competent enough to support unmanned manufacturing
on our NC-Lathe.

I would be most grateful if you could send references
concerning this area of AI. If anyone is working on a
similar project, please mail me.

P.S. Does anyone know the e-mail address of Caroline Hayes
at Carnegie Mellon?

------------------------------

Date: 13 Nov 88 05:13:52 GMT
From: nau@mimsy.umd.edu  (Dana S. Nau)
Subject: Re: Applied AI and CAM

In article <76@prodix.liu.se> martin@prodix.liu.se (Martin Wendel) writes:
>I am doing a project on ... building an AI planning system for automating
>NC-Lathe operations planning. ...
>I would be most grateful if You could send references
>concerning this area of AI.

I and my students have done lots of research in this area.  A few of the
published papers are listed below.

D. S. Nau, ``Automated Process Planning Using Hierarchical Abstraction,''
{\em TI Technical Journal},  Winter 1987, pp. 39-46.  Award winner, Texas
Instruments 1987 Call for Papers on AI for Industrial Automation.

D. S. Nau and M. Gray,  ``Hierarchical Knowledge Clustering:  A Way to
Represent and Use Problem-Solving Knowledge,'' J. Hendler, ed., {\em Expert
Systems: The User Interface}, Ablex, Norwood, NJ, 1987, pp. 81-98.

D. S. Nau, R. Karinthi, G. Vanecek, and Q. Yang,
``Integrating AI and Solid Modeling for Design and Process Planning,''
{\em Second IFIP Working Group 5.2 Workshop on Intelligent CAD},
University of Cambridge, Cambridge, UK, Sept. 1988.
--
Dana S. Nau
Computer Science Dept.          ARPA & CSNet:  nau@mimsy.umd.edu
University of Maryland          UUCP:  ...!{allegra,uunet}!mimsy!nau
College Park, MD 20742          Telephone:  (301) 454-7932

------------------------------

Date: 9 Nov 88 09:35:06 GMT
From: andreas@kuling.UUCP (Andreas Hamfelt)
Reply-to: andreas@kuling.UUCP (Andreas Hamfelt)
Subject: Seth R. Goldman's e-mail address

Does anyone know the e-mail address of Seth R. Goldman at UCLA?

------------------------------

Date: Wed, 09 Nov 88 14:45:14 EST
From: <ganguly@ATHENA.MIT.EDU>
Subject: ES in Electric Power Distribution

I am posting this notice to get information on existing expert
systems in the area of operation and maintenance of statewide electric
power distribution networks.  Any information would be highly
appreciated.

Thanks in advance,

Jaideep Ganguly
ganguly@athena.mit.edu

------------------------------

Date: Thu, 10 Nov 88 08:35:30 EST
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Smalltalk opinions

Offline, 'cause this is old news to most I'm sure, could you send me
pros, cons, shrugs about:

        Smalltalk as an OO programming environment.
        The application area is network management with a graphical
        interface; a desideratum is providing a shell for
        supporting visual design input from nonprogrammers
        manipulating icons and widgets more or less directly.

        Specifically, the PC implementation of Smalltalk
        from Digitalk Inc.

Send replies to:

        bn@bbn.com
        Bruce Nevin

If this is an inappropriate forum for this query, please let the
lack of response indicate that.  Thanks

------------------------------

Date: 10 Nov 88 18:54:52 GMT
From: mailrus!uflorida!haven!uvaarpa!uvaee!aam9n@ohio-state.arpa 
      (Ali Minai)
Subject: Learning arbitrary transfer functions


I am looking for any references that might deal with the following
problem:

y = f(x);         f(x) is nonlinear in x

Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}

Can the network now produce ym given xm, even if it has never seen the
pair before?

That is, given a set of input/output pairs for a nonlinear function, can a
multi-layer neural network be trained to induce the transfer function
by being shown the data? What are the network requirements? What
are the limitations, if any? Are there theoretical bounds on
the order, degree or complexity of learnable functions for networks
of a given type?

Note that I am speaking here of *continuous* functions, not discrete-valued
ones, so there is no immediate congruence with classification. Any attempt
to "discretize" or "digitize" the function leads to problems because the
resolution then becomes a factor, leading to misclassification unless
the discretizing scheme was chosen initially with careful knowledge
of the function's characteristics, which defeats the whole purpose. It
seems to me that in order to induce the function correctly, the network
must be shown real values, rather than some binary-coded version (e.g.
in terms of basis vectors). Also, given that neurons have a logistic
transfer function, is there a theoretical limit on what kinds of functions
*can* be induced by collections of such neurons?

All references, pointers, comments, advice, admonitions are welcome.
Thanks in advance,

                    Ali


Ali Minai
Dept. of Electrical Engg.
Thornton Hall
University of Virginia
Charlottesville, VA 22901

aam9n@uvaee.ee.Virginia.EDU
aam9n@maxwell.acc.Virginia.EDU

------------------------------

Date: 11 Nov 88 16:33:46 GMT
From: bbn.com!aboulang@bbn.com  (Albert Boulanger)
Subject: Re: Learning arbitrary transfer functions

Check out the following report:

"Nonlinear Signal Processing Using Neural Networks:
Prediction and System Modelling"
Alan Lapedes and Robert Farber
Los Alamos Tech report LA-UR-87-2662

There was also a description of this work at the last Denver
conference on Neural Networks. Lapedes has a nice demonstration of
recovering the logistic map given a chaotic time series of the map. He
has also done this with the Mackey-Glass time-delay equation.
It is rumored that techniques like this (Doyne Farmer as well as James
Crutchfield have non-neural, dynamical-systems techniques for
doing this; cf. "Equations of Motion from a Data Series," James
Crutchfield & Bruce McNamara, Complex Systems, Vol. 3, June 1987,
pp. 417-452) are being used by companies to predict the stock market.

Albert Boulanger
BBN Systems & Technologies Corp.
10 Moulton St.
Cambridge MA, 02138
aboulanger@bbn.com

------------------------------

Date: 11 Nov 88 22:21:10 GMT
From: mailrus!umich!itivax!dhw@ohio-state.arpa  (David H. West)
Subject: Re: Learning arbitrary transfer functions

In a previous article, Ali Minai writes:
>
>I am looking for any references that might deal with the following
>problem:
>
>y = f(x);         f(x) is nonlinear in x
>
>Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}
>
>Can the network now produce ym given xm, even if it has never seen the
>pair before?
>
>That is, given a set of input/output pairs for a nonlinear function, can a
>multi-layer neural network be trained to induce the transfer function
                                                 ↑↑↑
An infinite number of transfer functions are compatible with any
finite data set.  If you really prefer some of them to others, this
information needs to be available in computable form to the
algorithm that chooses a function.  If you don't care too much, you
can make an arbitrary choice (and live with the result); you might
for example use the (unique) Lagrange interpolation polynomial of
order n-1 that passes through your data points, simply because it's
easy to find in reference books, and familiar enough not to surprise
anyone. It happens to be easier to compute without a neural network,
though :-)
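
For concreteness, here is a small sketch (in Python, with invented data points)
of the Lagrange interpolation polynomial mentioned above, the unique polynomial
of degree at most n-1 through n given points:

    def lagrange(points):
        """Return a function p(x) interpolating the given (x, y) pairs."""
        def p(x):
            total = 0.0
            for i, (xi, yi) in enumerate(points):
                term = yi
                for j, (xj, _) in enumerate(points):
                    if j != i:
                        term *= (x - xj) / (xi - xj)
                total += term
            return total
        return p

    if __name__ == "__main__":
        data = [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)]   # samples of y = x**2 + 1
        p = lagrange(data)
        print(p(1.5))                                  # 3.25, matching x**2 + 1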

-David West            dhw%iti@umix.cc.umich.edu
                       {uunet,rutgers,ames}!umix!itivax!dhw
CDSL, Industrial Technology Institute, PO Box 1485,
Ann Arbor, MI 48106

------------------------------

End of AIList Digest
********************

∂14-Nov-88  1740	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #127 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 14 Nov 88  17:40:32 PST
Date: Mon 14 Nov 1988 17:38-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #127
To: AIList@AI.AI.MIT.EDU


AIList Digest            Tuesday, 15 Nov 1988     Volume 8 : Issue 127


 Should AIList distribute job listings?

 Responses:

  Expert Systems in Scheduling
  Inherit through net
  References On Mass Terms
  Valiant's Learning Model
  Use of alternative metaphors and Analogies
  ES shells & C
  Congress on Cybernetics and Systems

----------------------------------------------------------------------

Date: Thu, 10 Nov 88 11:31 EST
From: steven horst                        
      <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: academic job listings and post-docs


The AI Digest recently hosted a set of inquiries and replies on the
topic of what universities offer programs in various areas of AI and
CS.  I should be interested in information on a similar and related
topic: namely, what universities are either (a) hiring for positions
in AI or CS (whether in CS departments or in traditional departments
like philosophy and psychology), or (b) offering fellowships for
researchers in AI or CS?

I am not really sure what sort of deluge of information I may be
inviting here.  I realize that the Digest cannot serve as a
comprehensive job listing service for the entire AI community.
I expect, however, that the list of jobs and fellowships for AI and
CS within academia would not be prohibitively long and would be
of interest to a large number of subscribers.

     Steven Horst                 bitnet: gkmarh@irishmvs
     Department of Philosophy
     Notre Dame, IN  46556
     219-239-7458



[Editor's Note:

         As a matter of policy, I have not been including messages
relating to employment.  However, if enough interest is shown, perhaps
the policy should be changed.

        To put things in perspective, let me point out that I receive an
average of about three or four such postings per week, and that I would
expect this number to increase if I actually started including such
messages.  Many of these are from companies (as opposed to
universities), or individuals seeking employment.

        There are already existing channels for the dissemination of
job-related messages.  Traffic on AIList is very high by any standards.
(I am trying very hard to reduce it.)

        Perhaps some generous person would volunteer to compile such a
list, and then I could simply pass interested parties a pointer ...


                - nick]

------------------------------

Date: 4 Nov 88 16:00:54 GMT
From: Pat Prosser <mcvax!cs.strath.ac.uk!pat@uunet.UU.NET>
Reply-to: Pat Prosser <mcvax!cs.strath.ac.uk!pat@uunet.UU.NET>
Subject: Expert Systems in Scheduling


>Subject: ES for Scheduling Flexible Manufacturing Systems

>Hi!... I am compiling a survey of expert systems for scheduling CIM in general
>and FMS in particular.
>I am particularly interested in the limitations and effectiveness
> ...... and what role a human scheduler would play in the highly complex
>and constantly evolving environment of FMS. Does anyone know if ISIS still
>being used at Westinghouse?

Some ES schedulers worthy of study are:
(1) SEMIMAN: a scheduler for ASIC production, in use by Plessey, developed
    by Peter Ellerby at Reading University. This views scheduling as a
    dynamic CSP. Peter is currently installing his system ... it exists,
    I've seen it.
(2) DAS: a Distributed Asynchronous Scheduler. This again views scheduling as
    a dynamic CSP but distributes the task over many processors. It
    has similarities with the contract net metaphor and blackboard
    systems. We in Strathclyde have developed/implemented a
    demonstrator.
(3) SONIA: Claude le Pape's system. This is a blackboard architecture,
    looking much like OPIS.
(4) OPIS: Stephen Smith. Apparently SS is now installing this for an IBM
    wafer fab plant (I think).
(5) YAMS: Parunak's system, a contract net.
(6) ISIS: The last I heard, Nancy Skoogland was re-implementing
    this in KEE.
(7) B.R. Fox and Kempf: robot assembly ... opportunistic scheduling
    (apparently Fox was opportunistic to the degree of walking away
     from FMC with a Ferrari!)

Limitations and Effectiveness:
How long is a piece of string? The limitations depend on the representations
used. Generally, representing the problem as constraints (temporal,
precedence, technological, preference) is rich enough to model most
domains. There is also the knowledge-elicitation problem: generally no one
knows what a good schedule is, though they do know what a satisfactory
schedule is. Therefore in many cases knowledge must be created via
simulation. Effectiveness depends on what state the company is in. Generally
most companies don't schedule, and even fewer schedule reactively. So
... expect a big win.

Human Role:
See Peter Ellerby's "junk box approach". Take the human out of the
system and you are doomed. Until you can capture all domain knowledge,
keep the user in there.

The appropriateness of AI technology to scheduling: Want to make loadsamoney?


Patrick Prosser
University of Strathclyde
Dept of Computer Science
Glasgow

------------------------------

Date: 7 Nov 88 03:51:24 GMT
From: sword!gamma!pyuxp!nvuxj!nvuxl!nvuxh!hall@faline.bellcore.com 
      (Michael R Hall)
Subject: Re: Inherit through net

In a previous article, Siping Liu writes:
>In frame knowledge representation systems, knowledge
>can be inherited through the tree-style world hierarchies.
>i.e., each world has only one parent world.
>
>The question is:
  [Why not allow multiple parents?]

Sure, you can have multiple parents in some frame-inheritance
implementations.  KEE lets you do it.  You should be able to find
some literature on the research problems associated with doing this
type of inheritance gracefully.
--
Michael R. Hall                               | Bell Communications Research
"I'm just a symptom of the moral decay that's | nvuxh!hall@bellcore.COM
gnawing at the heart of the country" -The The | bellcore!nvuxh!hall

------------------------------

Date: 7 Nov 88 11:06 PST
From: Halvorsen.pa@Xerox.COM
Subject: Re: AIList Digest   V8 #120: References On Mass Terms

An interesting and detailed analysis of mass terms can be found in Jan Tore
Loenning's "Mass Terms and Quantification",  pp 1-52, Linguistics and
Philosophy, Vol 10, No. 1, February 1987.  He has done more on mass terms
and on the semantics of plural, and you can try to reach him at:
m_loenning_j%use.uio.uninett@TOR.nta.no, or
m_loenning_j@use.uio.uninett.

--Per-Kristian

------------------------------

Date: 8 Nov 88 13:44:38 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Valiant's Learning Model

In a previous article, Dario Ringach writes:

 > Has anyone attempted to approach learning as a discrete time Markov
 > process on the hypothesis space H?  For instance, at any time k let
 > h1=h(k) be the current hypothesis; obviously there is defined for any
 > h2 in H a transition probability P(h(k+1)=h2|h(k)=h1) that depends
 > on the probability distribution Px and the learning algorithm A.

Look into Bayesian inference, Kalman filtering, and Kailath's
Innovations Process.  In each of these approaches, a current
best guess is updated as new information comes in.  I believe
Widrow's adaptive networks also exhibit such behavior.
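
As a concrete (and deliberately simplified) illustration of the "update the
current best guess as new information arrives" idea, here is a one-dimensional
scalar Kalman-style estimator in Python; the measurement values and noise
variance below are invented for the example:

    def kalman_updates(measurements, meas_var, init_mean=0.0, init_var=1e6):
        """Yield (mean, variance) of the estimate after each measurement."""
        mean, var = init_mean, init_var
        for z in measurements:
            gain = var / (var + meas_var)     # how much to trust the new datum
            mean = mean + gain * (z - mean)   # innovation-weighted correction
            var = (1.0 - gain) * var          # posterior uncertainty shrinks
            yield mean, var

    if __name__ == "__main__":
        readings = [4.9, 5.2, 5.0, 4.8, 5.1]  # noisy readings of a value near 5
        for m, v in kalman_updates(readings, meas_var=0.04):
            print(round(m, 3), round(v, 5))

The transition-probability view in the original question corresponds loosely to
how the estimate (here, the mean and variance) moves from one hypothesis to the
next as each datum arrives.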

--Barry Kort

------------------------------

Date: Wed, 9 Nov 88 21:32 EST
From: F1UPCHURCH%SEMASSU.BITNET@MITVMA.MIT.EDU
Subject: RE:use of alternative metaphors and Analogies


I would suggest starting with the Gentner and Stevens book Mental Models
(Lawrence Erlbaum Associates Publisher).  The Chapter by Gentner and Gentner
"Flowing Waters or Teeming Crowds:  Mental Models of Electricity" sounds
close to the description given.  J. M. Carroll has several papers on metaphors
and cognitive representations (Int. J. of Man-Machine Studies and IEEE
Trans. on Systems, Man, and Cybernetics) but these date back to 1982 and
1985. The ACM SIGCHI proceedings usually have articles related to metaphors
and analogies in learning to use software systems.

Richard Upchurch
F1UPCHURCH@SEMASSU.BITNET

------------------------------

Date: Thu, 10 Nov 88 13:09:29 PST
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: ES shells & C

In response to query by sp@uunet.uu.net:

Intelligence/Compiler (about $500) is a hybrid (rules + frames + inheritance)
shell which can call C routines;  I don't know whether it can be called by a C
routine itself, however.  For more information, contact:

IntelligenceWare, Inc.
9800 S. Sepulveda Blvd. Suite 730
Los Angeles, CA  90045.
Phone:  (213) 417-8896.

David R. Lambert
lambert@nosc.mil

------------------------------

Date: 11 Nov 88 03:43:22 GMT
From: avsd!childers@bloom-beacon.mit.edu  (Richard Childers)
Subject: Re: Congress on Cybernetics and Systems

In article <2412@cs.Buffalo.EDU> lammens@sunybcs.UUCP (Johan Lammens) writes:

>What on earth is psychocybernetics and sociocybernetics?

Psycho-: prefix, referring to that internal domain of reality recognized by
        the science of psychology.

Socio-: prefix, referring to that portion of the external reality pertaining
        to human relationships which has been individually mapped by each one
        of us into the aforementioned internal domain.

Cybernetics : the science of decision-making.

Psychocybernetics : The study of one's internal decision-making process(es),
        usually for the purposes of self-analysis and/or self-improvement.

Sociocybernetics : The study of one's society and its approach to making and
implementing decisions, usually for similar purposes.

>JL.

-- richard

------------------------------

End of AIList Digest
********************

∂15-Nov-88  1504	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #128 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 15 Nov 88  15:03:51 PST
Date: Tue 15 Nov 1988 17:46-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #128
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 16 Nov 1988    Volume 8 : Issue 128

 Philosophy:

  Lightbulbs and Related Thoughts
  Attn set comments from a man without any
  Artificial Intelligence and Intelligence

 Notes on Neural Networks

----------------------------------------------------------------------

Date: 14 Nov 88 15:42:51 GMT
From: rochester!uhura.cc.rochester.edu!sunybcs!lammens@cu-arpa.cs.corn
      ell.edu  (Johan Lammens)
Subject: Re: Lightbulbs and Related Thoughts

In article <778@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>Don't forget to include the iconic memory.  This is the buffers
>so to speak of our sensory processes.  I am sure that you have
>saw many aspects of this phenomenon by now.  Examples are staring
>at a flag of the United States for 30 seconds, then observing the
>complementary colors of the flag if you then look at a blank wall
>(usually works best if the wall is dark).  [...]

Perhaps this question is a witness to my ignorance, but isn't the phenomenon
you describe a result of the way the retina processes images, and if
so, do you mean to say that iconic memory is located in the retina?

------------------------------------------------------------------------------
Jo Lammens     Internet:  lammens@cs.Buffalo.EDU
               uucp    :  ..!{ames,boulder,decvax,rutgers}!sunybcs!lammens
               BITNET  :  lammens@sunybcs.BITNET

------------------------------

Date: Mon, 14 Nov 88 15:14:20 CST
From: alk@ux.acss.UMN.EDU
Subject: Attn set comments from a man without any


The problem of constraint of the attention set by prior knowledge
which was observed by Tony Stuart, i.e. that a known solution may inhibit
the search for an alternative, even when the known solution does not
have optimal characteristics, goes far beyond the range of David Harvey's
statement that 'the only thing that can be said is that insconsistencies of
data with the rule base must allow for retraction of the rule and assertion
for [sic] new ones.'  Stuart's observation, unless I misconstrue it [please
correct me], is not focused on the deduction of hypotheses, but extends also
to realms of problem-solving wherein the suitability of a solution is
(at the least) fuzzy-valued, if not outright qualitative.
The correctness of a solution is not so
much at issue in such a case as is the *suitability* of that solution.
Of course this suggests the use of fuzzy-valued backward-chaining
reasoning as a possible solution to the problem (the problem raised by
Tony Stuart, not the "problem" faced by the AI entity), but I am unclear
as to what semantic faculties are required to implement such a system.
Perhaps the most sensible solution is to allow resolution of all
paths to continue in parallel (subconscious work on the "problem")
for some number of steps after a solution is already discovered.
(David Harvey's discussion prompts me to think in Prolog terms here.)

Why do I quote "problem"?  Deconstruct!  In this broader context,
a problem may consist of a situation faced by the AI entity, without
the benefit of a programmatic goal in the classical sense.
What do I mean by this?  I'm not sure, but it's there, nagging me.
Of course goal-formulation must be driven, but at some point
the subgoal-goal path reaches an end.  This is where attention set
and sensation (subliminal suggestion?  or perhaps those continuing
resolution processes, reawakened by the satisfaction of current
goals--the latter being more practically useful to the human
audience of the AI entity) become of paramount importance.

Here I face the dilemma:  Are we building a practical, useful,
problem solving system, or are we pursuing the more elevated (???)
goal of writing a program that's proud of us?  Very different things!

Enough rambling.  Any comments?

--alk@ux.acss.umn.edu, BITNET: alk@UMNACUX.
U of Mn ACSS <disclaimer>
"Quidquid cognoscitur, cognoscitur per modum cognoscentis"

------------------------------

Date: 15 Nov 88 02:29:12 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Artificial Intelligence and Intelligence

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Definition of Intelligence:
>
>1. Know how to solve problems.
>2. Know which problems are unsolvable.
>3. Know #1, #2, and #3 defines intelligence.
>
>This is the correct definition of intelligence.  If anyone disagrees, please
>state so and why.
>
(Gilbert Cockton is going to love me for this, I can tell...)
Intelligence is a social construct, an ascription of value to certain
characteristics and behaviours deemed to be mental.  One child who has
memorized the periodic table of the elements will be deemed intelligent,
another child who has memorized baseball scores for the last N years
will be deemed sports-mad, even though they may have acquired comparable
bodies of information _by_comparable_means_.  If we have three people in
a room: Subject, Experimenter, and Informant, if Subject does something,
and Informant says "that was intelligent", Experimenter is left wondering
"is that a fact about Subject's behaviour, or about Informant's culture?"
The answer, of course, is "yes it is".

Dijkstra's favourite dictionary entry is
    "Intelligent, adj. ... able to perform the functions of a computer ..."
(Dijkstra doesn't think much of AI...)

In at least some respects, computers are already culturally defined as
intelligent.

>Human beings are not machines.
I agree with this.

>Human beings are capable of knowing which problems are unsolvable, while
>machines are not.
But I can't agree with this!  There are infinitely many unsolvable
problems, and determining whether a particular problem is unsolvable
is itself unsolvable.  This does _not_ mean that a machine cannot
determine that a particular problem _is_ solvable, only that there
cannot be a general procedure for classifying _all_ problems which is
guaranteed to terminate in finite time.  Human beings are also capable
of giving up, and of making mistakes.  Most of the unsolvable problems
I know about I was _told_; machines can be told!

Human beings are not machines, but they aren't transfinite gods either.

------------------------------

Date: Mon, 14 Nov 88 22:35:47 CST
From: David Kanecki <kanecki@vacs.uwp.wisc.edu>
Subject: Notes on Neural Networks


Notes on Neural Networks:


During the month of September, while trying various
experiments on neural networks, I made two observations:

1. Based on how the data for the A and B matrices
   are set up, the learning equation

       w(n) = w(n-1) + nn*(t(n) - o(n))*i^T(n)

   may take more presentations for the system to learn
   the A and B output.

2. Neural networks are self-correcting in that if an
   incorrect W matrix is given, then by using the presentation/
   update process the W matrix will come to give the correct answers,
   but the values of the individual elements will differ when
   compared to a correct W matrix.


Case 1: Different A and B matrix setups

For example, in applying neural networks to the XOR problem
I used the following A and B matrices:

A    H  | H  B
------- |------
0 0  0  | 0  0
0 1  0  | 0  1
1 0  0  | 0  1
1 1  1  | 1  1

My neural network learning system took 12 presentations to
arrive at the correct B matrix when presented with the corresponding
A matrix. The W matrix was:


W(12) =     |  -0.5  0.75 |
            |  -0.5  0.75 |
            |  3.5  -1.25 |


For the second test I set the A and B matrices as follows:

A    H  | B
------------
0 0  0  | 0
0 1  0  | 1
1 0  0  | 1
1 1  1  | 0

This setup took 8 presentations for my neural network learning
system to arrive at a correct B matrix when presented with the
corresponding A matrix. The final W matrix was:

W(8) = | -0.5 -0.5 2.0 |


Conclusion: These experiments indicate to me that a
system's learning rate can be increased by presenting the
least possible amount of extraneous data.
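
For readers who want to try the update rule above, here is a minimal sketch in
Python of the same kind of weight update, w(n) = w(n-1) + nn*(t(n) - o(n))*i^T(n),
applied to a single threshold unit.  The AND training set, learning rate, and
presentation count are invented for illustration and are not the poster's exact
A/B matrices (the poster's XOR tables above also supply an extra H column,
whereas this sketch uses a separable AND target so that one unit suffices):

    def train(patterns, nn=0.5, presentations=20):
        """Delta-rule / perceptron-style training of one linear threshold unit."""
        w = [0.0] * len(patterns[0][0])           # one weight per input element
        for _ in range(presentations):
            for i_vec, t in patterns:
                o = 1.0 if sum(wi * xi for wi, xi in zip(w, i_vec)) > 0 else 0.0
                w = [wi + nn * (t - o) * xi for wi, xi in zip(w, i_vec)]
        return w

    if __name__ == "__main__":
        # inputs are (x1, x2, bias); the target is x1 AND x2
        data = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 1), 0), ((1, 1, 1), 1)]
        w = train(data)
        for i_vec, t in data:
            o = 1.0 if sum(wi * xi for wi, xi in zip(w, i_vec)) > 0 else 0.0
            print(i_vec, "target", t, "output", o)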


--------------


Case 2: Self Correction of Neural Networks

In this second experiment I found that neural networks
exhibit great flexibility. This experiment turned out to
be a happy accident. Before I had developed my neural network
learning system I was doing neural network experiments by
spreadsheet and hand transcription. During the transcription,
three elements in the 6 x 5 W matrix were given the wrong sign.
The resulting W matrix was:


       | 0.0  2.0  2.0  2.0  2.0 |
       |-2.0  0.0  4.0  0.0  0.0 |
W(0)=  | 0.0  2.0 -2.0  2.0 -2.0 |
       | 0.0  2.0  0.0 -2.0  2.0 |
       |-2.0  4.0  1.0  0.0  0.0 |
       | 2.0 -4.0  2.0  0.0  0.0 |




W(24)   = | 0.0    2.0   2.0   2.0   2.0  |
          |-1.53  1.18  1.18  -0.25 -0.15 |
          | 0.64  0.12  -0.69  1.16 -0.50 |
          | 0.27 -0.26  -0.06 -0.53  0.80 |
          |-1.09  1.62   0.79 -0.43 -0.25 |
          | 1.53 -1.18  -0.68  0.25  0.15 |


By applying the learning algorithm, it took 24 presentations for
the W matrix to give the correct B matrix when presented with the
corresponding A matrix; W(24) above is the resulting matrix.


But, when the experiment was run on my neural network learning
system I had a W(0) matrix of:

W(0) =   | 0.0  2.0  2.0  2.0  2.0  |
         |-2.0  0.0  4.0  0.0  0.0  |
         | 0.0  2.0 -2.0  2.0 -2.0  |
         | 0.0  2.0 -2.0 -2.0  2.0  |
         |-2.0  4.0  0.0  0.0  0.0  |
         | 2.0 -4.0  0.0  0.0  0.0  |


After 5 presentations the W(5) matrix came out to be:

W(5) =   | 0.0   2.0  2.0  2.0  2.0  |
         |-2.0   0.0  4.0  0.0  0.0  |
         | 0.0   2.0 -2.0  2.0 -2.0  |
         | 0.0   2.0 -2.0 -2.0  2.0  |
         | 2.0  -4.0  0.0  0.0  0.0  |

Conclusion: Neural networks are self-correcting, but the final
W matrix may have different values. Also, if a W matrix does
not have to go through the test/update procedure, the W matrix
could be used both ways, in that an A matrix generates the B matrix
and a B matrix generates the A matrix, as in the second example.

----------------


I am interested in communicating and discussing various
aspects of neural networks. I can be contacted at:

kanecki@vacs.uwp.wisc.edu

or at:

David Kanecki
P.O. Box 93
Kenosha, WI 53140

------------------------------

End of AIList Digest
********************

∂17-Nov-88  2106	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #129 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 17 Nov 88  21:05:58 PST
Date: Thu 17 Nov 1988 23:44-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #129
To: AIList@AI.AI.MIT.EDU


AIList Digest            Friday, 18 Nov 1988      Volume 8 : Issue 129

 Queries and Responses:

  Evaluating Expert Systems (1 response)
  College Advisor
  Social Impact of A.I.
  Program for orifice-plate sizing
  "Iterative Deepening" reference wanted (1 response)
  AI & the DSM-III
  Project Planning: Request for Bibliography
  Questionaire (Neural Nets)
  Neural Nets and Search
  Learning arbitrary transfer functions (5 responses)

----------------------------------------------------------------------

Date: 15 Nov 88 17:06:44 GMT
From: alejandr@june.cs.washington.edu  (Alejandra Sanchez-Frank)
Subject: Evaluating Expert Systems


Do you know of any research, paper or publication on Expert System
Evaluation?
I'm trying to define what a "good" expert system should be (What
characteristics it should have, and what criteria one should use
to evaluate it).

I would appreciate any comments and/or references you may have.

Thanks,

Alejandra

Alejandra Sanchez-Frank (alejandr@june.cs.washington.edu)
Computer Science Department
University of Washington FR-35
Seattle, Washington 98105

------------------------------

Date: 16 Nov 88 13:46:42 GMT
From: osu-cis!dsacg1!ntm1169@ohio-state.arpa  (Mott Given)
Subject: Re: Evaluating Expert Systems


> Do you know of any research, paper or publication on Expert System
> Evaluation?
> I'm trying to define what a "good" expert system should be (What
> characteristics it should have, and what criteria one should use
> to evaluate it).

   I would recommend the book "Expert Systems: A Non-programmer's Guide
   to Development and Applications" by Paul Siegel.  It was published
   by TAB Professional and Reference Books (Blue Ridge Summit, PA).
   Chapter 10 has information on evaluating the knowledge base.

   I would also recommend Paul Harmon's books.  One of the books is called
   Expert Systems: Artificial Intelligence in Business, published by John
   Wiley.

   Finally, I would recommend a recently published book called Expert
   Systems for Experts.  I don't have the author or publisher for it.

--
Mott Given @ Defense Logistics Agency ,DSAC-TMP, P.O. Box 1605,
            Systems Automation Center, Columbus, OH 43216-5002
UUCP:  mgiven%dsacg1.uucp@daitc.arpa              I speak for myself
Phone:       614-238-9431

------------------------------

Date: 15 Nov 88 11:52:06 PST
From: oxy!chico@csvax.caltech.edu (Gary Patrick Simonian)
Subject: College Advisor

           I am an Occidental student and a senior in the Cognitive
           Science major.  As a requirement we are to present a paper
           or project for evaluation.  I have decided to program an
           expert system which performs the duties of a college
           cousellor - given the types of interests, the "counsellor"
           will advise the student (user) as to which major would be
           best to pursue.  Therefore, I would appreciate some
           suggestions on how to model my program, and which language
           would best suit my purpose.

------------------------------

Date: 16 Nov 88 03:04:18 GMT
From: mitel!sce!karam@uunet.uu.net  (Gerald Karam)
Subject: Social Impact of A.I.

I'm looking for articles or books on the social impact of A.I.
Recent literature preferred.  Please reply directly so the net
doesn't get clogged.  If there is a sufficient response I'll post a
summary.

thanks, gerald karam

karam@sce.carleton.ca
karam@sce.uucp

------------------------------

Date: 16 Nov 88 04:34:46 GMT
From: cunyvm!ndsuvm1!ndsuvax!ncsingh@psuvm.bitnet  (arun singh)
Subject: program for orifice-plate sizing


I am writing an expert system for the selection of flowmeters. I am looking
for an algorithmic program for orifice-plate sizing calculations.
Code for the above can be in C, Pascal, or Fortran.

Please send me email.
Thanks for your help in advance.

Arun
--
Arun Singh,                    BITNET: ncsingh@plains.nodak.edu.bitnet
Department Of Computer Science
300#Minard Hall, N.D.State University,Fargo
ND 58105.      ARPANET,CSNET: ncsingh%plains.nodak.edu.bitnet@cunyvm.cuny.edu

------------------------------

Date: 16 Nov 88 10:55:21 GMT
From: quintus!ok@unix.sri.com
Subject: "Iterative Deepening" reference wanted

In my Prolog tutorial, I described a search method intermediate between
depth first and breadth first, called
        Iterative Deepening
or      Consecutively Bounded Depth-First Search

Does anyone know who invented these terms,
and can you give me references to readily available books or journal
articles describing them?  (I know what iterative deepening is, I'd
just like to put proper references into the next draft of the
tutorial.)
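
For readers who have not seen the method, here is a small illustrative sketch
in Python (the graph, start node, and goal below are invented): the search runs
depth-first to a depth bound, and the bound is raised until the goal is found.

    def depth_limited(graph, node, goal, limit, path):
        """Plain depth-first search that gives up below the depth bound."""
        if node == goal:
            return path
        if limit == 0:
            return None
        for child in graph.get(node, []):
            if child not in path:                 # avoid trivial cycles
                found = depth_limited(graph, child, goal, limit - 1, path + [child])
                if found:
                    return found
        return None

    def iterative_deepening(graph, start, goal, max_depth=20):
        """Re-run depth-first search with successively larger depth bounds."""
        for limit in range(max_depth + 1):
            found = depth_limited(graph, start, goal, limit, [start])
            if found:
                return found
        return None

    if __name__ == "__main__":
        g = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "e": ["f"]}
        print(iterative_deepening(g, "a", "f"))   # ['a', 'c', 'e', 'f']

Like breadth-first search it finds a shallowest solution, while using only
depth-first amounts of memory; for branching factors above one, the work
repeated at shallow depths is a modest constant-factor overhead.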

------------------------------

Date: 16 Nov 88 16:15:51 GMT
From: mailrus!eecae!netnews.upenn.edu!linc.cis.upenn.edu!hannan@husc6.
      harvard.edu  (John Hannan)
Subject: Re: "Iterative Deepening" reference wanted

In article <688@quintus.UUCP> ok@quintus () writes:
>In my Prolog tutorial, I described a search method intermediate between
>depth first and breadth first, called
>       Iterative Deepening
>or     Consecutively Bounded Depth-First Search
>
>Does anyone know who invented these terms,

Check out "Depth-First Iterative Deepening: An Optimal Admissible Tree
Search," by R. Korf in AI Journal 27(1):97-109 and also a related
article by Korf in IJCAI85 proceedings.  In the intro to his journal
paper, Korf briefly describes the origin of the algorithm but he seems to
have been the first person to use this term.

------------------------------

Date: Wed, 16 Nov 88 14:07 CST
From: ANDERSJ%ccm.UManitoba.CA@MITVMA.MIT.EDU
Subject: AI & the DSM-III

Hi Again.  I have a colleague who is attempting to write a paper on
the use of AI techniques in psychiatric diagnosis in general, and
more specifically using the DSM-III.  He tells me he's having a
great deal of trouble finding any material on this, & is between
computer accounts at the moment, and I told him I would post something
for him.  If anybody has any references, material, or any info, it would
be greatly appreciated.  His address is:

               Ron Mondor
               Dept. of Computer Science,
               University of Manitoba,
               Winnipeg, Manitoba, Canada, R3T 2N2

E-mail may be forwarded thru me: <ANDERSJ@UOFMCC.BITNET> (John Anderson)
With Great thanks,
                 J. Anderson

------------------------------

Date: 17 Nov 88 14:47:03 GMT
From: paul.rutgers.edu!cars.rutgers.edu!byerly@rutgers.edu  (Boyce
      Byerly )
Subject: Project Planning: Request for Bibliography


I am currently working on a project in Object-Oriented programming.
The domain I chose was Office Automation; namely, how to represent all
the information-processing needs of a business as a series of discrete
"objects" which get their work done by passing messages among
themselves.

My attention is mainly focused on the idea of project planning and
management now.  I am interested in how different aspects of a project
(financial, personnel, management, capital items) interrelate when
working towards some goal (which might be defined in a slightly
"fuzzy" fashion).

It's my impression that a lot of software has been written along
the lines of project planning.  I would be interested in both academic
systems (built to demonstrate some theory) and real world systems
(built to get the job done).   An "AI Orientation" in these projects
isn't required for me to be interested in them.

Can anyone help me out with references to papers, books, projects, or
periodicals which give more information?

        Thanks in advance,

        Boyce

------------------------------

Date: 15 Nov 88 18:14:44 GMT
From: sdcc6!sdcc18!cs162faw@ucsd.edu  (Phaedrus)
Subject: Questionnaire


About two weeks ago, I posted a request for Axon/Netset information;
I'm afraid my scope was much too small, considering I only received
two responses.  I'm sorry to muck up the newsgroup, but I really do
need this information, and my posting disappeared after a week or so.
If you've ever used a neural network simulator, or if you have strong
opinions regarding representations, please read on.  Provided below is a
questionnaire regarding Neural-Networking/PDP.

Information from this questionnaire will be used to design a user
interface for an industrial neural network program which
may perform any of the traditional PDP tasks (e.g.,
back prop, counter prop, constraint satisfaction, etc.). The program
can handle connections set up in any fashion (e.g., fully
connected, feed-back connected, whatever), and it can also
toggle between synchronous and asynchronous modes.

What we're really interested in is what you feel is "hard"
or "easy" about neural net representations.

1. What type of research have you done ?

2. What type of research are you likely to do in the future ?

3. What is your programming background ?

4. What simulators have you used before ?
   What did you like about their interfaces ?

5. Have you used graphical interfaces before ?
   Did you like them ?
   Do you think that you could use them for research-oriented problems ?
   Why or why not ?

6. Do you prefer to work with numerical representations of
   networks ? Weight matrices ? Connection Matrices ?
   Why or why not ?

7. Would you like to use a graphical PDP interface if it could
   craft complicated networks easily ? Why or why not ?

8. Do you foresee any difficulties you might have with graphical
   interfaces ?
Any other comments along the same vein will be appreciated.

Your opinion is REALLY wanted, so please take 5 minutes and hit 'r-'!!!
Thank you,
James Shaw

------------------------------

Date: 15 Nov 88 23:44:51 GMT
From: ai!zeiden@speedy.wisc.edu  (Matthew Zeidenberg)
Subject: Neural Nets and Search

Does anyone know of any papers on neural network solutions to
problems involving heuristic search? I do not mean optimization problems
such as Traveling Salesman, although these are obviously related.

Please reply by e-mail and I will summarize for the net.

------------------------------

Date: 15 Nov 88 14:14:08 GMT
From: efrethei@afit-ab.arpa  (Erik J. Fretheim)
Subject: Re: Learning arbitrary transfer functions

In article <378@itivax.UUCP> dhw@itivax.UUCP (David H. West) writes:
>In article <399@uvaee.ee.virginia.EDU> aam9n@uvaee.ee.virginia.EDU (Ali Minai) writes:
>
>I am looking for any references that might deal with the following
>problem:
>
>y = f(x);         f(x) is nonlinear in x
>
>Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}
>
>Can the network now produce ym given xm, even if it has never seen the
>pair before?
>
>That is, given a set of input/output pairs for a nonlinear function, can a
>multi-layer neural network be trained to induce the transfer function
>                                                 ↑↑↑
I don't know about non-linear functions, but I did try to train a net
(back prop) to learn to compute sine(X) given X.  I trained it for two
weeks straight (virtually sole user) on an ELXSI.  The result was that in
carrying the solution to 5 significant decimal places I got a correct
solution 40% of the time.  Although this is somewhat better than random
chance, it is not good enough to be useful.  I will also note that the
solution did not improve dramatically in the last week of training, so I
feel I can safely assume that the error rate would not decrease.  I also
tried the same problem using a two's complement input/output and was able
to get about the same results in about the same amount of training.  The
binary representation needed a few more nodes, though.  I was not able to
spot any significant or meaningful patterns in the errors the net was
making, and do not feel that reducing the number of significant decimal places
would help (even if it were meaningful), as the errors made were not
consistently in the last couple of digits, but rather were spread throughout
the number (in both binary and decimal representations).
Based on these observations, I don't think
a net can be expected to produce any meaningful function.  Sure it can
do 1 + 1 and other simple things, but it trips when it hits something not
easily exhaustively (or nearly exhaustively) trained.

Just my opinion, but ...

------------------------------

Date: 15 Nov 88 20:59:46 GMT
From: mailrus!uflorida!novavax!proxftl!tomh@ohio-state.arpa  (Tom
      Holroyd)
Subject: Re: Learning arbitrary transfer functions

Another paper is "Learning with Localized Receptive Fields," by John Moody
and Christian Darken, Yale Computer Science, PO Box 2158, New Haven, CT 06520,
available in the Proceedings of the 1988 Connectionist Models Summer School,
published by Morgan Kaufmann.

They use a population of self-organizing local receptive fields that cover
the input domain, where each receptive field learns the output value for the
region of the input space covered by that field.  K-means clustering is used
to find the receptive field centers.  Interpolation is via a weighted average
of nearby fields.  They report convergence 1000 times faster than back-prop
with conjugate gradient.
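
The flavor of the method can be conveyed in a few lines.  The following is only
a rough sketch in Python (the Gaussian widths, the sin(x) training data, the
crude one-dimensional k-means pass, and the LMS learning rate are all invented
for illustration, and none of it is taken from the Moody and Darken paper):

    import math, random

    def kmeans_1d(xs, k, iters=20):
        """Crude 1-D k-means to place the receptive field centres."""
        centres = random.sample(xs, k)
        for _ in range(iters):
            buckets = [[] for _ in range(k)]
            for x in xs:
                buckets[min(range(k), key=lambda i: abs(x - centres[i]))].append(x)
            centres = [sum(b) / len(b) if b else centres[i]
                       for i, b in enumerate(buckets)]
        return centres

    def activations(x, centres, width):
        """Normalised Gaussian responses of the local fields."""
        a = [math.exp(-((x - c) / width) ** 2) for c in centres]
        s = sum(a)
        return [ai / s for ai in a]

    def train_rbf(data, k=8, width=0.5, lr=0.2, epochs=200):
        xs = [x for x, _ in data]
        centres = kmeans_1d(xs, k)
        values = [0.0] * k                        # one output value per field
        for _ in range(epochs):
            for x, y in data:
                a = activations(x, centres, width)
                err = y - sum(v * ai for v, ai in zip(values, a))
                values = [v + lr * err * ai for v, ai in zip(values, a)]
        return lambda x: sum(v * ai
                             for v, ai in zip(values, activations(x, centres, width)))

    if __name__ == "__main__":
        random.seed(0)
        data = [(x / 10.0, math.sin(x / 10.0)) for x in range(0, 63)]
        f = train_rbf(data)
        print(round(f(1.0), 3), round(math.sin(1.0), 3))   # should be close

Because each training example only adjusts the handful of fields that respond
to it, the fit is fast compared with adjusting every weight of a global
network, which is the intuition behind the reported speedup.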

Tom Holroyd
UUCP: {uflorida,uunet}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 15 Nov 88 22:08:20 GMT
From: phoenix!taiwan!hwang@princeton.edu  (Jenq-Neng Hwang)
Subject: Re: Learning arbitrary transfer functions


A. Lapedes and R. Farber from Los Alamos National
Lab have a technical report, LA-UR-87-2662, entitled
"Nonlinear Signal Processing Using Neural Networks: Prediction and System
Modelling", which addresses the problem mentioned.
They also have a paper published in
"Proc. IEEE Conf. on Neural Information Processing
  Systems -- Natural and Synthetic, Denver, November 1987",
entitled "How Neural Nets Work", pp. 442-456.

------------------------------

Date: 16 Nov 88 21:04:58 GMT
From: sword!gamma!pyuxp!nvuxj!nvuxl!nvuxh!hall@faline.bellcore.com 
      (Michael R Hall)
Subject: Re: Learning arbitrary transfer functions

In a previous article, Ali Minai writes:
>I am looking for any references that might deal with the following
>problem:
>
>y = f(x);         f(x) is nonlinear in x
>
>Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}
>
>Can the network now produce ym given xm, even if it has never seen the
>pair before?
>
>That is, given a set of input/output pairs for a nonlinear function, can a
>multi-layer neural network be trained to induce the transfer function
>by being shown the data? What are the network requirements? What
>are the limitations, if any? Are there theoretical bounds on
>the order, degree or complexity of learnable functions for networks
>of a given type?
>
>Note that I am speaking here of *continuous* functions, not discrete-valued
>ones, so there is no immediate congruence with classification.

The problem you raise is not just a neural net problem.  Your
function learning problem has been termed "concept learning" by
some researchers (e.g. Larry Rendell).  A concept is a function.
There are many nonneural learning algorithms (e.g. PLS1) that are
designed to learn concepts.  My opinion is that concept learning
algorithms generally work better, easier, and faster than neural
nets for learning concepts.  (Anybody willing to pit their neural
net against my implementation of PLS to learn a concept from natural
data?)  Neural nets are more general than concept learning
algorithms, and so it is only natural that they should not learn
concepts as quickly (in terms of exposures) and well (in terms of
accuracy after a given number of exposures).

Valiant and friends have come up with theories of the sort you
desire, but only for boolean concepts (binary y's in your notation)
and learning algorithms in general, not neural nets in particular.
"Graded concepts" are continuous.  To my knowledge, no work has
addressed the theoretical learnability of graded concepts.  Before
trying to come up with theoretical learnability results for neural
networks, one should probably address the graded concept learning
problem in general.  The Valiant approach of a Probably Approximately
Correct (PAC) learning criterion should be applicable to graded
concepts.
--
Michael R. Hall                               | Bell Communications Research
"I'm just a symptom of the moral decay that's | hall%nvuxh.UUCP@bellcore.COM
gnawing at the heart of the country" -The The | bellcore!nvuxh!hall

------------------------------

Date: Wed, 16 Nov 88 17:45:54 EST
From: Raul.Valdes-Perez@B.GP.CS.CMU.EDU
Reply-to: valdes@cs.cmu.edu
Subject: learning transfer functions

System identification treats the induction of a mathematical
characterization of a dynamical system from behavioral data.
A nice tutorial on system identification is the following article:
        K.J. Åström and P. Eykhoff
        System Identification - A Survey
        Automatica, Vol 7, 1971, 123-162

I recall that it includes a discussion on learning transfer functions.

        Raul Valdes-Perez

------------------------------

End of AIList Digest
********************

∂22-Nov-88  1940	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #130 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 22 Nov 88  19:40:07 PST
Date: Tue 22 Nov 1988 22:21-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #130
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 23 Nov 1988    Volume 8 : Issue 130

 Announcements:

  NSF Program in Knowledge Models and Cognitive Systems
  Computer Games Olympiad
  Call for Papers (ISMIS'89)
  Congress on Cybernetics and Systems

----------------------------------------------------------------------

Date: Tue, 15 Nov 88 11:26:05 -0500
From: "Henry J. Hamburger" <hhamburg@note.nsf.gov>
Subject: NSF Program in Knowledge Models and Cognitive Systems


                  NATIONAL SCIENCE FOUNDATION
                  ---------------------------
                           PROGRAM in
                           ----------
             KNOWLEDGE MODELS and COGNITIVE SYSTEMS
             --------------------------------------

Knowledge Models and Cognitive Systems is a relatively new name
at NSF, but the Program has significant continuity with earlier
related programs.  This holds for its scientific subject matter
and also with regard to its researchers, who come principally
from computer science and the cognitive sciences, each of these
emphatically including important parts of artificial intelligence.
Many such individuals are also interested in areas supported by
other NSF programs, especially in this division -- the Division
of Information, Robotics and Intelligent Systems (IRIS) -- and in
the Division of Behavioral and Neural Sciences.

This unofficial message has two parts.  The first is a top-down
description of the major areas of current Program support.  There
follows a list of some particular topics in which there is strong
current activity in the Program and/or perceived future
opportunity.  Anyone needing further information can contact the
Program Director, Henry Hamburger, who is also the sender of this
item.  Please use e-mail if you can: hhamburg@b.nsf.gov  or else
phone: 202-357-9569.  To get a copy of the Summary of Awards for
this division (IRIS), call 202-357-9572

Many of you will be hearing from me with requests to review
proposals.  To be sure they are of interest to you, feel free to
send me a list of topics or subfields.


                MAJOR AREAS of CURRENT SUPPORT
                ------------------------------

The Program in Knowledge Models and Cognitive Systems supports
research fundamental to the general understanding of knowledge
and cognition, whether in humans, computers or, in principle,
other entities.  Major areas currently receiving support include
(i) formal models of knowledge and information, (ii) natural
language processing and (iii) cognitive systems.  Each of these
areas is described and subcategorized below.

Applicants do not classify their proposals in any official way.
Indeed their work may be relevant to two or all three of the
categories (or conceivably to none of them).  In particular, it
is recognized that language is intertwined with (or part of)
cognition and that formality is a matter of degree.  For work
that falls only partly within the program, the program director
may conduct the evaluation jointly with another program, within
or outside the division.  Descriptions of the three areas follow.


FORMAL MODELS of KNOWLEDGE and INFORMATION:
-------------------------------------------

Recent work supported under the category Formal Models of
Knowledge and Information divides into formal models of three
things: (i) knowledge, (ii) information, and (iii) imperfections
in the two. In each case, the models may encompass both
representation and manipulation. For example, formal models of
both knowledge representation and inference are part of the
knowledge area.

The distinction between knowledge and information is that a piece
of knowledge tends to be more structured and/or comprehensive
than a piece of information.  Imperfections may include
uncertainty, vagueness, incompleteness and abductive rules.  Many
investigations contribute to two or all three categories, yet
emphasize one.


COGNITIVE SYSTEMS
-----------------
Four recognized areas currently receive support within Cognitive
Systems: (i) knowledge representation and inference, (ii)
highly parallel approaches, (iii) machine learning, and (iv)
computational characterization of human cognition.

The first area is characterized by symbolic representations and a
high degree of structure imposed by the programmer, in an attempt
to represent complex entities and carry out complex tasks
involving planning and reasoning.  The second area may have
similar long-term goals but takes a very different approach.  It
includes studies based on a high degree of parallelism among
relatively simple processing units connected according to various
patterns.  The third area, machine learning, has emerged as a
distinct area of study, though the choice between symbolic and
connectionist approaches is clearly relevant.  In all of the
first three areas, the research may be informed to a greater or
lesser degree by scientific knowledge of the nature of high-
level human cognition.  Characterizing such knowledge in
computational form is the objective of the fourth area.


NATURAL LANGUAGE PROCESSING
---------------------------
Recent work supported under the category Natural Language
Processing is in three overlapping areas: (i) computational
aspects of syntax, semantics and the lexicon, (ii) discourse,
dialog and generation, and (iii) systems issues.  The distinction
between the first two often involves such intersentential
concerns as topic, plan, and situation.  Systems issues include
the interaction and unified treatment of various kinds of
modules.


            TOPICS of STRONG CURRENT ACTIVITY and
            -------------------------------------
               OPPORTUNITY for FUTURE RESEARCH
               -------------------------------

Comments on this list are welcome.  It has no official status,
is subject to change, and, most important, is intended to be
suggestive, not prescriptive.  The astute reader will notice that
many of these topics transcend the neat categorization above.

Reasoning and planning in the face of
  imperfect information and a changing world

    - reasoning about reasoning itself: the time
        and resources taken, and the consequences

    - use and formal understanding of
        temporal and nonmonotonic logic

    - integration of numerical and symbolic approaches
        to uncertainty, imprecision and justification

    - multi-agent planning, reasoning,
        communication and coordination

Interplay of human and computational languages

    - commonalities in the semantic formalisms
        for human and computer languages

    - extending knowledge representation systems to
        support formal principles of human language

    - principles of extended dialog between humans
        and complex software systems, including
        those of the new computational sciences

Machine Learning of Classification,
  Problem-Solving and Scientific Laws

    - formal analysis of what features and parameter
        settings of both method and domain are
        responsible for successes.

    - reconciling and combining the benefits of
        connectionist, genetic and symbolic approaches

    - evaluating the relevance to learning of AI
        tools: planning, search, and learning itself

------------------------------

Date: 16 Nov 88 12:58:17 GMT
From: mcvax!ukc!eagle.ukc.ac.uk!icdoc!qmc-cs!pd@uunet.uu.net  (Paul
      Davison)
Subject: Computer Games Olympiad

[ I have been asked to post this but have nothing to do with the event, so
  please address any enquiries to the address given in the posting. ]

There is to be a Computer Games Olympiad in London in 1989.  The details
are as follows:

"The world's first Olympiad for computer programs will take place at the
Park Lane Hotel, London, from August 9th to 15th 1989.  This unique event
will feature tournaments for chess, bridge, backgammon, draughts, poker,
Go, and many other classic "thinking" games.  In every tournament all of
the competitors will be computer programs.  The role of the human operators
will merely be to tell their own programs what moves have been made by
their opponents.
The Computer Olympiad is organised by International Chess Master David Levy,
who is President of the International Computer Chess Association.
Anyone wanting more information on the event should send a large stamped
addressed envelope to:  Computer Olympiad, 11 Loudoun Road, London NW8 OLP,
England.

                        CALL FOR PAPERS

The 1st London Conference on Computer Games will take place as part of the
Computer Olympiad during the period August 9th to 15th 1989.  Papers are
invited on any aspect of programming computers to play "thinking" games
such as chess, bridge, Go, backgammon, etc.
The conference Chair will be Professor Tony Marsland, from the Computing
Science Department at the University of Alberta, Edmonton, Canada.  The
editor of the conference proceedings will be Don Beal, from the Computer
Science Department at Queen Mary College, London University.
Papers should preferably be 3000 to 4000 words in length, and if possible,
should be submitted with an IBM-PC format disk containing the text as a
file for a widely-used word-processor (e.g. Wordstar).  The closing date
for submissions is May 9th 1989.  Papers should be sent to: Computer
Olympiad, 11 Loudoun Road, London NW8 OLP, England.
--
Paul Davison

UUCP:      pd@qmc-cs.uucp                       | Computer Science Dept
ARPA:      pd%cs.qmc.ac.uk@nss.ucl.ac.uk        | Queen Mary College
JANET:     pd@uk.ac.qmc.cs                      | Mile End Road
Voice:     +44 1 980 4811 x5250                 | London E1 4NS

------------------------------

Date: 21 Nov 88 16:08:44 GMT
From: wong@cu-arpa.cs.cornell.edu  (Mike Wong)
Subject: Call for Papers (ISMIS'89)


                          CALL FOR PAPERS
           FOURTH INTERNATIONAL SYMPOSIUM ON METHODOLOGIES
                      FOR INTELLIGENT SYSTEMS
      Charlotte, North Carolina, Hilton Hotel, University Place
                        October 12-14, 1989

SPONSORS: Energy Division of the ORNL, Martin Marietta Energy Systems,
University of North Carolina - Charlotte, University of Turin (ITALY)

PURPOSE OF THE SYMPOSIUM: This Symposium is intended to attract individuals
who are actively engaged both in theoretical and practical aspects of
intelligent systems. The goal is to provide a platform for a useful exchange
between theoreticians and practitioners, and to foster the cross-fertilization
of ideas in the following areas: approximate reasoning, expert systems,
intelligent databases, knowledge representation, learning and adaptive
systems, logic for A.I., neural networks.

SYMPOSIUM CHAIRMAN: Zbigniew W. Ras (UNC-Charlotte)

ORGANIZING COMMITTEE: Bill Chu (UNC-C), Mary Emrich (ORNL),
       Attilio Giordana (Turin, Italy), Zbigniew Michalewicz (New Zealand),
       Alberto Pettorossi (Rome, Italy), Pietro Torasso (Turin, Italy),
       S.K.Michael Wong (Cornell), Maria Zemankova (NSF & UT-Knoxville),
       Jan Zytkow (George Mason)

PROGRAM COMMITTEE: Luigia Aiello (Italy), Andrew G. Barto (UM-Amherst),
       James Bezdek (Boeing), Alan W. Bierman (Duke), John Bourne (Vanderbilt),
       Jaime Carbonell (CMU), Peter Cheeseman (NASA), Su-shing Chen (UNC-C),
       Melvin Fitting (CUNY), Brian R. Gaines (Canada), Peter E. Hart
       (Syntelligence), Marek Karpinski (West Germany), Kurt Konolige (SRI),
       Catherine Lassez (IBM-T.J Watson), R. Lopez de Mantaras (Spain),
       Ryszard Michalski (George Mason), Jack Minker (Maryland),
       Jose Miro (Spain), Masao Mukaidono (Japan), Ephraim Nissan (Israel),
       Rohit Parikh (CUNY), Reind van de Riet (The Netherlands),
       Colette Rolland (France), Lorenza Saitta (Italy), Eric Sandewall
       (Sweden), Joachim W. Schmidt (West Germany), Richmond Thomason
       (Pittsburgh), David S. Warren (SUNY-Stony Brook)

INVITED SPEAKERS: Jon Doyle (MIT), Ryszard Michalski (George Mason),
       Richard Waldinger (SRI)

SUBMISSION AND INFORMATION: Send four copies of a complete paper to one of
the addresses below:
       Dr. S.K. Michael Wong, Cornell Univ., Comp. Sci., Upson Hall,
                              Ithaca, New York 14853-7501
                            or
       Dr. A. Giordana, Univ. of Turin, Comp. Sci., Corso Svizzera 185,
                        10149 Torino, Italy

TIME SCHEDULE:
       Submission of papers..........................March 15, 1989
       Notification of acceptance....................May 15, 1989
       Final paper to be included in proceedings.....June 15, 1989

------------------------------

Date: 21 Nov 88 19:18:17 GMT
From: spnhc@cunyvm.bitnet  (Spyros Antoniou)
Subject: Congress on Cybernetics and Systems


             WORLD ORGANIZATION OF SYSTEMS AND CYBERNETICS

         8 T H    I N T E R N A T I O N A L    C O N G R E S S

         O F    C Y B E R N E T I C S    A N D   S Y S T E M S

 JUNE 11-15, 1990 at Hunter College, City University of New York, USA

     This triennial conference is supported by many international
groups  concerned with  management, the  sciences, computers, and
technology systems.

      The 1990  Congress  is the eighth in a series, previous events
having been held in  London (1969),  Oxford (1972), Bucharest (1975),
Amsterdam (1978), Mexico City (1981), Paris (1984) and London (1987).

      The  Congress  will  provide  a forum  for the  presentation
and discussion  of current research. Several specialized  sections
will focus on computer science, artificial intelligence, cognitive
science, biocybernetics, psychocybernetics  and sociocybernetics.
Suggestions for other relevant topics are welcome.

      Participants who wish to organize a symposium or a section
are requested to submit a proposal (sponsor, subject, potential
participants, very short abstracts) as soon as possible, but not
later than September 1989.  All submissions and correspondence
regarding this conference should be addressed to:

                    Prof. Constantin V. Negoita
                         Congress Chairman
                   Department of Computer Science
                           Hunter College
                    City University of New York
             695 Park Avenue, New York, N.Y. 10021 U.S.A.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|   Spyros D. Antoniou  SPNHC@CUNYVM.BITNET  SDAHC@HUNTER.BITNET    |
|                                                                   |
|      Hunter College of the City University of New York U.S.A.     |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

------------------------------

End of AIList Digest
********************

∂22-Nov-88  2227	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #131 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 22 Nov 88  22:27:23 PST
Date: Tue 22 Nov 1988 22:36-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #131
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 23 Nov 1988    Volume 8 : Issue 131

 Philosophy:

  Limits of AI
  Epistemology of Common Sense (2 messages)
  Capabilities of "logic machines" (2 messages)
  Lightbulbs and Related Thoughts (2 messages)
  Computer science as a subset of artificial intelligence

----------------------------------------------------------------------

Date: 7 Nov 88 17:23:43 GMT
From: mcvax!ukc!etive!hwcs!nick@uunet.uu.net  (Nick Taylor)
Subject: Re: Limits of AI

In Article 2254 of comp.ai, Gilbert Cockton writes :
 "... intelligence is a social construct ... it is not a measure ..."

Hear, hear. I entirely agree. I suppose it was inevitable that this discussion
would boil down to the problem of defining "intelligence". Still, it was fun
watching it happen anyway.

I offer the following in an effort to clarify the framework within which we
must discuss this topic. No doubt to some people this will seem to obfuscate
the issue rather than clarify it but, either way, I am sure it will
generate some discussion.

Like Gilbert, most people treat the idea of intelligence as an intra-species
comparator.  This is all well and good so long as we remember that it is
just a social construct which we find convenient when comparing the
apparent intellectual abilities of two people or two dogs or two nematodes,
etc.

However, when we move outside a single species and attempt to say things
such as "humans are more intelligent than nematodes" we are in a very
different ball game. We are now using the concept of intelligence as an
inter-species comparator. Whilst it might seem natural to use the same
concept we really have no right to. One of the most important axioms of
any scientific method is that you cannot generalise across hierarchies.
What we know to be true of humans cannot be applied to other species
willy-nilly.

Until we generate a concept ('label') of inter-species intelligence which
cannot be confused with intra-species intelligence we will forever be
running around in circles discussing two different ideas as if they were
one and the same. Clearly, machine intelligence is also concerned with
a different 'species' to ourselves and as such could be a very useful
concept, but neither 'machine intelligence' nor 'human intelligence' is
useful in a discussion of which is, or might become, the more intelligent
(in the inter-species meaning of the word).

For more information on bogus reasoning about brains and behaviour
see Stephen Rose's "The Conscious Brain" (published by Penguin I think).

------------------------------

Date: Wed, 16 Nov 88 19:47 EDT
From: Dourson <"DPK100::DOURSON_SE%gmr.com"@RELAY.CS.NET>
Subject: Epistemology of Common Sense

Bruce Nevin (#125, Mon, 7 Nov 88 11:21:08 EST), responding to
McCarthy (#121, 31 Oct 88  2154 PST), compares the common sense of
"social facts" with the common sense of physical facts.  He
states,

   "Suppose we had an AI equipped with common sense defined
    solely in terms of physical facts.  This is somewhat like the
    proverbial person who knows the price of everything but the
    value of nothing."

Knowledge of physical facts is an essential condition for knowing
the value of things.  The value of any thing is determined by what
it contributes to a person's survival and happiness.  The price of
a thing is determined by the resources and effort required to
create it.  Money is the means by which we measure both a thing's
value and its price.

A person whose knowledge is defined solely in terms of physical
facts, i.e., in terms of reality, would know (or could determine)
both the value of anything (in terms of what it contributes to his
survival and happiness), and the price (in terms of his own
personal effort) he would have to pay to create it or to purchase
it from others.  Such a person knows that there is no such thing
as a "social _fact_"; and that survival and happiness are facts of
reality rooted in the natures of existence and man, not matters of
"social convention".

A person who knows prices of things without knowing their value,
has never had to earn his money, his survival, or his happiness.

A person whose values are based on social conventions, does not
think or act for himself, i.e., is not independent, and could not
survive and be happy on his own.

People talk a lot about equipping an AI with common sense and
goals, but seldom about equipping an AI with values and the
ability to make value judgements.  When they do mention values, it
is usually in terms such as "social 'facts'", "social
conventions", and so-called "higher values", all of which are a
lot of floating fuzzy abstractions that signify nothing.

A value judgement is a precondition for setting a goal, which in
turn is a precondition for thought and action carried out to
achieve the goal.  Knowledge, common sense, and effort are means
to achieve the goal.  A successful AI will have the ability to
identify and explain the value to itself of a thing, and to
measure the thing's price in terms of its own time and effort. The
values an AI holds will be based on its nature and the conditions
required for it to exist, function, and survive.  These values
will not be a matter of "social convention".

McCarthy stated that sociology is peripheral to the study of
intelligence.  I submit that it is irrelevant.


Stephen Dourson

------------------------------

Date: 19 Nov 88 02:45:39 GMT
From: quintus!ok@sun.com  (Richard A. O'Keefe)
Subject: Re: The epistemology of the common sense world

In a previous article, Gilbert Cockton writes:
>we require enlightenment on how AI workers are trained to study tasks
>in the common sense world.  Task analysis and description is an
>established component of Ergonomics/Human Factors.  I am curious as to
>how AI workers have absorbed the experience here into their work, so
>that unlike old-fashioned systems analysts, they automate the real
>task rather than an imaginary one which fits into the machine.

>If I were to choose 2 dozen AI workers at random and ask them for an
>essay on methodological hygiene in task analysis and description, what
>are my chances of getting anything ...

I'm far from sure that I could _write_ such an essay, but I'd very much
like to _read_ it.  (I was hoping to find topics like that discussed in
comp.cog-eng, but no such luck.)  Could you give us a reading list, please?

I may have misunderstood him, but Donald Michie talks about "The Human
Window", and seems to be saying that his view of AI is that it uses
computers to move a task in complexity/volume space into the human window
so that humans can finish the job.  This would suggest that MMI and that
sort of AI should have a lot in common, and that a good understanding of
task analysis would be very helpful to people trying to build "smart tools".

------------------------------

Date: 15 Nov 88 18:26:20 GMT
From: fluke!ssc-vax!bcsaic!ray@beaver.cs.washington.edu  (Ray Allis)
Subject: Re: Capabilities of "logic machines"

In article <393@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>In article <42136@yale-celray.yale.UUCP>, Krulwich-Bruce@cs.yale.edu
 (Bruce Krulwich) writes:
>
>[ in reply to my doubts about ``logic-machine'' approaches to learning ]
>
>> If you're claiming that it's possible to do something with connectionist
>> models that its not possible to do with "logical machines," you have to
>> define "logical machines" in such a way that they aren't capable of
>> simulating connectionist models.
>
>Good point, and since simulating a connectionist model can be easily
>expressed as a sequence of logical operations, I would have to be
>pretty creative to design a logical machine that could not do that.

Whoa!  Wrong!  (Well, sort of.)  I think you conceded much too quickly.
'Simulate' and 'model' are trick words here.  The problem is that most
'connectionist' approaches are indeed models, and logical ones, of some
hypothesized 'reality'.  There is no fundamental difference between such
models and more traditional logical or mathematical models; of course
they can be interchanged.

A distinction must be made between digital and analog; between form and
content; between symbol and referent; between model and that which is
modelled.

Suppose you want to calculate the state of a toy rubber balloon full of
air at ambient temperature and pressure as it is moved from your office
to direct sunlight outside.  To do a completely accurate job, you're
going to need to know the vector of every molecule of the balloon and
its contents, every external molecule which affects the balloon, or
affects molecules which affect the balloon, the photon flux, the effects
of haze and clouds drifting by, and whether passing birds and aircraft
cast shadows on the balloon.  And of course even that's not nearly enough,
or at fine enough detail.  To diminishing degrees, everything from
sunspots to lunar reflectivity will have some effect. Did you account for
the lawn sprinkler's effect on temperature and humidity? "Son of a gun!"
you say, "I didn't even notice the lousy sprinkler!"

Well, it's impossible.  In any case most of these are physical quantities
which we cannot know absolutely but can only measure to the limits of our
instruments.  Even if we could manage to include all the factors affecting
some real object or event, the values used in the arithmetic calculations
are approximations anyway.  So, we approximate, we abstract and model.
And arithmetic is symbolic logic, which deals, not directly with quantities,
but with symbols for quantities.

Now with powerful digital computers, calculation might be fast enough to
produce a pretty good fake, one which is hard for a person to distinguish
from "the real thing", something like a movie.  But I don't think this is
likely to be really satisfactory.  Consider another example I like, the
modelling of Victoria Falls.  Water, air, impurities, debris and rock all
interacting in real time on ninety-seven Cray Hyper-para-multi-3000s. Will
you be inspired to poetry by the ground shaking under your feet?  No?

You see, all the ai work being done on digital computers is modelling using
formal logic.  There is no reason to argue over whether one type of logical
model can simulate another.  The so-called "neurologically plausible"
approach, when it uses real, physical devices is an actual alternative to
logical systems.  In my estimation, it's the most promising game in town.

>much like a logical machine -- pushing symbols around, performing
>elementary operations on them one at a time, until the input vector
>becomes the output vector. I have trouble imagining that is what is
>going on when I recognize a friend's face, predict a driver's
>unsignaled turn by the sound of his motor, realize that a particular
>computer command applies to a novel problem, etc.

Me, too!

>Can a system that only does logical inferences on symbols with direct
>semantic significance achieve a similar information gain through
>experience?

Key here is "What constitutes experience?"  How is this system in touch
with its environment?

>I will appreciate pointers to significant results. Is anyone making
>serious progress with the classical approach in non-toy-problem
>domains?
[...]
>                                                         Can a
>purely logical machine demonstrate a convincing ability to spot
>analogies that don't follow directly from explicit coding or
>hand-holding?  Is any logical machine demonstrating information gain
>ratios exceeding (or even approaching) unity? Are any of these
>machines _really_ surprising their creators?
>
>Dan Mocsny

Excellent questions.  I'd also like to hear of any significant results.

Ray Allis, Boeing Computer Services, Seattle, Wa. ray@boeing.com

------------------------------

Date: 18 Nov 88 02:22:18 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Capabilities of "logic machines"

In article <8673@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
>Whoa!  Wrong!  (Well, sort of.)  I think you conceded much too quickly.
>'Simulate' and 'model' are trick words here.

Correct.  A better word would be _emulate_.
For any given electronic realisation of a neural net,
there is a digital emulation of that net which cannot be
behaviourally distinguished from the net.
The net is indeed an analogue device, but such devices are
subject to the effects of thermal noise, and provided the
digital emulation carries enough digits to get the
differences down below the noise level, you're set.

In order for a digital system to emulate a neural net adequately,
it is not necessary to model the entire physical universe, as Ray
Allis seems to suggest.  It only has to emulate the net.
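
To make the precision point concrete, here is a minimal sketch (Python; the
noise level, unit weights, and bit widths are illustrative assumptions, not
measurements of any actual device):

import math, random

sigma = 1e-3                      # assumed RMS thermal noise on a unit's output

def unit(x, bits=None, w=1.7, b=-0.4):
    # One sigmoid unit; if `bits` is given, the clean output is quantized first.
    clean = 1.0 / (1.0 + math.exp(-(w * x + b)))
    if bits is not None:
        step = 2.0 ** -bits
        clean = round(clean / step) * step
    return clean + random.gauss(0.0, sigma)

for bits in (4, 8, 16):
    q_err = 2.0 ** -(bits + 1)    # worst-case quantization error on [0,1]
    verdict = "below" if q_err < sigma else "above"
    print("%2d bits: max quantization error %.1e (%s the noise)"
          % (bits, q_err, verdict))

print("analogue vs 16-bit sample outputs:", unit(0.3), unit(0.3, bits=16))

At 16 bits the worst-case quantization error is two orders of magnitude below
the assumed thermal noise, which is the sense in which the digital emulation
cannot be behaviourally distinguished from the analogue net.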

>You see, all the ai work being done on digital computers is modelling using
>formal logic.

Depending on what you mean by "formal logic", this is either false or
vacuous.  All the work on neural nets uses formal logic too (whether the
_nets_ do is another matter).

>>much like a logical machine -- pushing symbols around, performing
>>elementary operations on them one at a time, until the input vector
>>becomes the output vector. I have trouble imagining that is what is
>>going on when I recognize a friend's face, predict a driver's
>>unsignaled turn by the sound of his motor, realize that a particular
>>computer command applies to a novel problem, etc.

>Me, too!

Where does this "one at a time" come from?  Most computers these days
do at least three things at a time, and the Connection Machine, for all
that it pushes bits around, does thousands and thousands of things at
a time.  Heck, most machines have some sort of cache which does
thousands of lookups at once.  Once and for all, free yourself of the
idea that "logical machines" must do "elementary operations one at a
time".

------------------------------

Date: Tue, 15 Nov 88 16:02:18 PST
From: norman%ics@ucsd.edu (Donald A Norman-UCSD Cog Sci Dept)
Reply-to: danorman@ucsd.edu
Subject: Lightbulbs and Related Thoughts


Iconic memory is the brief, reasonably veridical image of a sensory
event.  In the visual system, it has a time constant of somewhere
around 100 msec.  Visual iconic memory is what makes TV and motion
pictures possible: 30 to 60 images a second fuse into a coherent,
apparently continuous percept.  I demonstrate this in class by waving
a flashlight in a circle in a dark auditorium: I have to rotate about
3 to 5 times/second for the class to see a continuous image of a
circle (the tail almost dying away).

The illustration of seeing complementary colors after staring at, say,
an image of a flag, is called a visual after effect, and is caused by
entirely different mechanisms.

don norman

Donald A. Norman        [ danorman@ucsd.edu   BITNET: danorman@ucsd ]
Department of Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093 USA

UNIX:  {gatech,rutgers,ucbvax,uunet}!ucsd!danorman
[e-mail paths often fail: please give postal address and all e-mail addresses.]

------------------------------

Date: 21 Nov 88 23:56:31 GMT
From: hpda!hpcuhb!hp-sde!hpcea!hpcehfe!paul@bloom-beacon.mit.edu 
      (Paul Sorenson)
Subject: Re: Lightbulbs and Related Thoughts


In article <778@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>Don't forget to include the iconic memory.  This is the buffers
>so to speak of our sensory processes.  I am sure that you have
>saw many aspects of this phenomenon by now.  Examples are staring
>at a flag of the United States for 30 seconds, then observing the
>complementary colors of the flag if you then look at a blank wall
>(usually works best if the wall is dark).  [...]

Perhaps this question is a witness to my ignorance, but isn't the phenomenon
you describe a result of the way the retina processes images, and if
so, do you mean to say that iconic memory is located in the retina?

------------------------------------------------------------------------------
Jo Lammens     Internet:  lammens@cs.Buffalo.EDU
               uucp    :  ..!{ames,boulder,decvax,rutgers}!sunybcs!lammens
               BITNET  :  lammens@sunybcs.BITNET
----------

No, you are correct and the example is wrong.  Color afterimages like
those described are NOT instances of iconic memory.  Iconic memory is a
theoretical stage of memory, patterned after short term memory, that
functions as a limited capacity storage buffer for sensory information
(just as STM serves as a limited [7 + or - 2] capacity storage for
information prior to its being encoded into "Long Term Memory").
Presumably, iconic memory precedes STM, which precedes LTM, which
precedes ... (forgetting, making it up?).

------------------------------

Date: Thu, 17 Nov 88 10:49:42 pst
From: Ray Allis <ray@ATC.BOEING.COM>
Subject: Re: Computer science as a subset of artificial intelligence

In <639@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:

>In a previous article, Ray Allis writes:
>>I was disagreeing with that too-limited definition of AI.  *Computer
>>science* is about applications of computers, *AI* is about the creation
>>of intelligent artifacts.  I don't believe digital computers, or rather
>>physical symbol systems, can be intelligent.  It's more than difficult,
>>it's not possible.
>
>There being no other game in town, this implies that AI is impossible.
>Let's face it, connectionist nets are rule-governed systems; anything a
>connectionist net can do a collection of binary gates can do and vice
>versa.  (Real neurons &c may be another story, or may not.)

But there ARE other games.  I don't believe AI is impossible.  I'm convinced
on my interpretation of evidence that AI IS possible (i.e. artifacts that
think like people).  It's just that I don't think it can be done if methods
are arbitrarily limited to only formal logic.  If by "connectionist net" you
are referring to networks of symbols, such as semantic nets, implemented on
digital computers, then, in that tiny domain, they may well all be
rule-governed systems, interchangeable with "a collection of binary gates".
Those are not the same as "neural nets" which are modelled after real
organisms' central nervous systems.  Real neurons do indeed appear to be
another story.  In their domain, rules should not be thought of as governing,
but rather as *describing* operations which are physical analogs and not
symbols.  To be  repeatedly redundant, an organism's central nervous system
runs just fine without reference to explicit rules; rules DESCRIBE, to beings
who think with symbols (guess who) what happens anyway.  AI methodology must
deal with real objects and real events in addition to symbols and form.

------------------------------

End of AIList Digest
********************

∂23-Nov-88  1924	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #132 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 23 Nov 88  19:23:41 PST
Date: Wed 23 Nov 1988 22:05-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #132
To: AIList@AI.AI.MIT.EDU


AIList Digest           Thursday, 24 Nov 1988     Volume 8 : Issue 132

 Queries:

  BOOTSTRAP
  translating LISP to/in other languages
  Prolog on a Macintosh II
  Input refutations
  OPS and Prolog comparison

 Responses:

  ES for Student Advising
  Genetic Learning Algorithms
  Learning arbitrary transfer functions (2 responses)
  Iterative Deepening (2 responses)
  AI & the DSM-III

----------------------------------------------------------------------

Date: Wed, 16 Nov 88 20:21:35 EST
From: "Thomas W. Stuart" <C078D6S6@UBVM>
Subject: BOOTSTRAP

I'm passing along a query from Dr. William McGrath, here at the School
of Information and Library Studies, SUNY - Buffalo.  He is looking for
references or information about available programs and packages for
Efron's Bootstrap statistical procedures -- packages which might run
on micros or VAX systems.

------------------------------

Date: 21 Nov 1988 07:55:54 CDT
From: Walter.Daugherity@LSR.TAMU.EDU
Subject: translating LISP to/in other languages


I am looking for information about converting LISP to other languages
(C, PASCAL, ADA, etc.) or about LISP interpreters written in such
languages.

Thanks in advance,        Walter Daugherity

ARPA INTERNET:  daugher@cssun.tamu.edu
                Walter.Daugherity@lsr.tamu.edu
CSNET:  WCD7007%LSR%TAMU@RELAY.CS.NET
        WCD7007%SIGMA%TAMU@RELAY.CS.NET
BITNET: WCD7007@TAMLSR
        WCD7007@TAMSIGMA

------------------------------

Date: Tue, 22 Nov 88 14:33:50 +1000
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Prolog on a Macintosh II

I would like to communicate with users of the following PROLOGs on a MAC:

M-1.15 (from Advanced AI Systems Prolog)
IF/Prolog (from Interface Computer Gmbh.)
Prolog-1 (from Expert Systems International)
ZYX Macintosh Prolog 1.5 (from ZYX Sweden AB)

(Any other suggestions for a Prolog on a MAC II are welcome.)

I am porting a large (approx. 1MB source) MU-Prolog (almost exactly DEC-10
Edinburgh Prolog syntax) system to run on a MAC II.

Desirable features include: save states, no need to pre-declare dynamic
predicates (flexible assert and retract), reconsult, large stack space
and efficient execution.

Eric Tsui                               eric@aragorn.oz
Research Associate
Department of Computing and Mathematics
Deakin University
Geelong, Victoria 3217
AUSTRALIA

------------------------------

Date: 22 Nov 88 07:46:51 GMT
From: geoff@wacsvax.OZ (Geoff Sutcliffe)
Subject: Input refutations

I have been searching (in the wrong places obviously) for a proof that
resolution & paramodulation, or resolution & paramodulation & factoring,
form a complete input refutation system for sets of Horn clauses, and
that the single negative clause in a minimally unsatisfiable set of
Horn clauses may be used as the top clause in such refutations.

Refutation completeness, without specification of the top clause, is
in "Unit Refutations and Horn Sets" [Henschen 1974]. If set-of-support
is compatible with input resolution,paramodulation,factoring then it
is possible to choose the negative clause as the support set, and the problem
is solved. Is this compatibility known?

Any help, with this seemingly obvious result, would be appreciated.

Geoff Sutcliffe

Department of Computer Science,       CSNet:  geoff@wacsvax.oz
University of Western Australia,      ARPA:   geoff%wacsvax.oz@uunet.uu.net
Mounts Bay Road,                      UUCP:   ..!uunet!munnari!wacsvax!geoff
Crawley, Western Australia, 6009.
PHONE:  (09) 380 2305                 OVERSEAS: +61 9 380 2305

------------------------------

Date: 22 Nov 88 16:02:20 GMT
From: att!whuts!homxb!hou2d!shun@bloom-beacon.mit.edu  (S.CHEUNG)
Subject: OPS and Prolog comparison

I am looking for some information comparing OPS83
(including OPS5 and C5) and Prolog, such as speed,
the types of applications they are good for, availability,
ease of software maintenance, how easy to learn, etc.
I am also interested in statistics concerning
the number of existing applications using each language.

There might be articles on these topics already;
can someone let me know where to find them?

Thanks in advance.

-- Shun Cheung

--
-- Shun Cheung, AT&T Bell Laboratories, Middletown, New Jersey
     electronic: shun@hou2d.att.com  or   ... rutgers!mtune!hou2d!shun
       voice: (201) 615-5135

------------------------------

Date: Thu, 17 Nov 1988 20:22:35 EST
From: "Thomas W. Stuart" <C078D6S6@UBVM>
Subject: ES for Student Advising

William McGrath (School of Information and Library Studies, 309 Baldy,
SUNY at Buffalo, 14120) has created an ES knowledgebase (KB) for
advising students on what courses to take for a projected plan of study
in library and information science, particularly in reference to the
student's specific career objective.  The KB, created with 1stCLASS ES
shell, considers the type of job environment (academic, public,
corporate, sci-tech) and type of work (collection development,
cataloging, information retrieval, management, etc.), prerequisites,
hours needed to complete the program, need for faculty permission, and
other factors.  Planned modules: advice for resolving schedule
conflicts, list of job prospects -- given the student's program,
feedback and evaluation.

------------------------------

Date: 7 Nov 88 12:03:04 GMT
From: mcvax!ukc!strath-cs!pat@uunet.uu.net  (Pat Prosser)
Subject: Re: GENETIC LEARNING ALGORITHMS


Genetic Algorithms (GA's) traditionally represent the genetic string
(chromosome) using a binary alphabet; Holland has shown this to be
optimal.  It is not the only alphabet, however: a purely symbolic
alphabet is possible if appropriate genetic operators are defined.  For example:

[1] P. Prosser, "A Hybrid Genetic Algorithm for Pallet Loading"
    European Conference on Artificial Intelligence, 1988
[2] Derek Smith, "Bin Packing with Adaptive Search"
    Proceedings ICGAA 1985
[3] David Goldberg, "Alleles, Loci and the Travelling Salesman
    Problem"

The only problem with a non-binary alphabet is the limits of our
imagination.
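
For readers who want something concrete, here is a minimal sketch (Python; the
toy tour, the selection scheme, and all parameter values are illustrative
choices, not taken from the papers cited above) of a GA whose chromosome is a
permutation over a purely symbolic alphabet, with an order-preserving
crossover and a swap mutation serving as the "appropriate genetic operators":

import random

CITIES = {"A": (0, 0), "B": (1, 5), "C": (4, 3), "D": (6, 1), "E": (3, 7)}

def tour_length(tour):
    # Total Euclidean length of a closed tour over the city symbols.
    dist = 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):
        (x1, y1), (x2, y2) = CITIES[a], CITIES[b]
        dist += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return dist

def order_crossover(p1, p2):
    # Copy a slice from p1, fill the rest in p2's order: preserves permutations.
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(tour, rate=0.2):
    # Swap two positions: stays inside the symbolic alphabet.
    tour = tour[:]
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def evolve(pop_size=30, generations=100):
    pop = [random.sample(list(CITIES), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            children.append(mutate(order_crossover(p1, p2)))
        pop = parents + children
    return min(pop, key=tour_length)

if __name__ == "__main__":
    best = evolve()
    print(best, round(tour_length(best), 2))

Because both operators map permutations to permutations, no binary encoding
or repair step is needed.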

------------------------------

Date: Fri, 18 Nov 88 10:18:37 EST
From: alexis%yummy@gateway.mitre.org
Reply-to: alexis%yummy@gateway.mitre.org
Subject: Flaming on Neural Nets and Transfer Functions

I have to admit some surprise that so many people got this "wrong."
Our experience is that neural nets of the PDP/backprop variety are
at their *BEST* with continuous mappings.  If you just want classification
you might as well go with nearest-neighbor algorithms (or, if you want the
same thing in a net, try Nestor's Coulombic stuff).  If you can't learn
x=>sin(x) in a couple of minutes, you've done something wrong and should
check your code (I'm assuming you thought to scale sin(x) to [0,1]).
Actually, requiring a PDP net to output 1's and 0's means your weights
must be quite large, which takes a lot of time and puts you way out on
the tails of the sigmoids, where learning is slow and painful.
What I do for fun (?) these days is try to make nets output sin(t) {where
t is time} and other waveforms with static or "seed" wave inputs.

For those who like math, G. Cybenko (currently of U. Illinois and, starting
12/10/88, of Tufts) has a very good paper, "Approximation by Superpositions
of a Sigmoidal Function", where he gives an existence proof that you can
uniformly approximate any continuous function with support in the unit
hypercube.  This means a NN with one hidden layer (1 up from a perceptron).
Certainly more layers generally give more compact and robust codings ...
but the theory is *finally* coming together.
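
By way of illustration, here is a minimal sketch (Python/numpy; the layer
size, learning rate, and epoch count are illustrative choices, not Wieland's
code) of a one-hidden-layer backprop net fitted to sin(x) with the target
rescaled into [0,1], i.e. a finite superposition of sigmoids of the kind
Cybenko's theorem is about:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 64).reshape(-1, 1)
xin = x / (2.0 * np.pi)                      # normalise the input as well
t = (np.sin(x) + 1.0) / 2.0                  # scale sin(x) into [0,1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden = 10
W1 = rng.normal(0.0, 2.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 2.0, (hidden, 1)); b2 = np.zeros(1)
lr = 2.0

for _ in range(20000):
    h = sigmoid(xin @ W1 + b1)               # forward pass
    y = sigmoid(h @ W2 + b2)
    dy = (y - t) * y * (1.0 - y)             # backprop of squared error
    dh = (dy @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ dy) / len(xin); b2 -= lr * dy.mean(axis=0)
    W1 -= lr * (xin.T @ dh) / len(xin); b1 -= lr * dh.mean(axis=0)

print("mean abs error:", float(np.abs(y - t).mean()))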

    Alexis Wieland    ....    alexis%yummy@gateway.mitre.org

------------------------------

Date: 17 Nov 88 20:48:52 GMT
From: amos!joe@sdcsvax.ucsd.edu  (Shadow)
Subject: Re: Learning arbitrary transfer functions

in article, 399.uvaee.ee.virginia.EDU writes:

>>I am looking for any references that might deal with the following
>>problem:
>>
>>y = f(x);         f(x) is nonlinear in x
>>
>>Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}
>>
>>Can the network now produce ym given xm, even if it has never seen the
>>pair before?
>>
>>That is, given a set of input/output pairs for a nonlinear function, can a
>>multi-layer neural network be trained to induce the transfer function

my response:

1. Neural nets are an attempt to model brain-like learning
   (at least in theory).

   So, how do humans learn nonlinear functions?

      : you learn that x↑2, for instance, is X times X.

   And how about X times Y?  How do humans learn that?

      : you memorize it, for single digits, and
      : for more than a single digit, you multiply streams
        of digits together in a carry routine.

2. So the problem is a little more complicated. You might imagine
   a network which can perfectly learn non-linear functions if
   it has at its disposal various useful sub-networks (e.g., a
   network can learn x↑n if it has at its disposal some mechanism
   and architecture suitable for multiplying x & x.)

   (imagine a sub-network behaving as a single unit, receiving
    input and producing output in a predictable mathematical manner)

(promoting thought)


   What is food without the hunger ?
   What is light without the darkness ?
   And what is pleasure without pain ?

   joe@amos.ling.ucsd.edu

------------------------------

Date: Fri, 18 Nov 88 19:50:27 pst
From: purcell%loki.edsg@hac2arpa.hac.com (ed purcell)
Subject: iterative deepening for game trees, state-space graphs

Some observations on the request of quintus!ok@unix.sri.com (16 Nov 88)
for references on the term ``iterative deepening'':

In his IJCAI85 paper on the IDA* (Iterative Deepening A*) search
algorithm for state-space problem graphs, Rich Korf of UCLA
acknowledges early chess-playing programs as the first implementations
of the idea of progressively deeper searches.  (The history of
progressively deeper look-ahead searches for game trees is somewhat
reminiscent of the history of alpha-beta pruning -- these clever
algorithms were both implemented early but not immediately published
nor analyzed until many years later.)

The closely-related term ``progressive deepening'' also has been around
awhile; for example, this term is used in the 2nd edition (1984) of Pat
Winston's textbook ``An Introduction to AI.''

The contributions of Korf's IJCAI85 paper on IDA* are in the
re-formulation and analysis of progressively deeper depth-first search
for state-space graphs, using a heuristic evaluation function instead
of a fixed depth bound to limit node expansions.

It is interesting that Korf is now investigating the re-formulation of
minimax/alpha-beta pruning for state-space graphs.
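
For concreteness, here is a minimal sketch of the IDA* idea (Python; the toy
graph, edge costs, and heuristic values are illustrative, not from Korf's
paper): a depth-first search cut off at f = g + h, where the cutoff for the
next iteration is the smallest f value that exceeded the current one:

GRAPH = {                     # node -> [(neighbour, edge cost), ...]
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1)],
    "C": [("G", 3)],
    "G": [],
}
H = {"S": 4, "A": 3, "B": 3, "C": 2, "G": 0}   # admissible heuristic estimates

def ida_star(start, goal):
    def search(path, g, bound):
        node = path[-1]
        f = g + H[node]
        if f > bound:
            return f, None                     # report the smallest overrun
        if node == goal:
            return f, list(path)
        minimum = float("inf")
        for nxt, cost in GRAPH[node]:
            if nxt in path:                    # avoid cycles on the current path
                continue
            t, found = search(path + [nxt], g + cost, bound)
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = H[start]
    while True:
        bound, found = search([start], 0, bound)
        if found is not None:
            return found, bound                # path and its cost
        if bound == float("inf"):
            return None, bound                 # no solution

if __name__ == "__main__":
    path, cost = ida_star("S", "G")
    print(path, cost)

Each iteration is a plain depth-first search, so memory use stays linear in
the solution depth, which is the point of the reformulation.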

                                            Ed Purcell
                                            purcell%loki.edsg@hac2arpa.hac.com
                                            213-607-0793

------------------------------

Date: 19 Nov 88 02:16:34 GMT
From: korf@locus.ucla.edu
Subject: "Iterative-Deepening" Reference wanted

Another reference on this subject is: "An analysis of consecutively
bounded depth-first search with applications in automated deduction",
by Mark E. Stickel and W. Mabry Tyson, in IJCAI-85, pp. 1073-1075.

------------------------------

Date: 23 Nov 88 18:07:05 GMT
From: sire@ptsfa.PacBell.COM (Sheldon Rothenberg)
Subject: Re: AI & the DSM-III


In a previous article, ANDERSJ%ccm.UManitoba.CA@MITVMA.MIT.EDU writes:
> Hi Again.  I have a colleague who is attempting to write a paper on
> the use of AI techniques in psychiatric diagnosis in general, and
> more specifically using the DSM-III.


Todd Ogasawara, at U. of Hawaii, posted a 10 article biblio on related
topics. The article which appears most relevant is:

        Hardt, SL & MacFadden, DH
        Computer Assisted Psychiatric Diagnosis: Experiments in
        Software Design
        from "Computers in Biology and Medicine", 17, 229-237

A book by DJ Hand entitled "Artificial Intelligence and Psychiatry"
a 1985 publication of Cambridge University Press also looks promising.

Todd's e-mail address on INTERNET is: todd@uhccux.UHCC.HAWAII.EDU

Shelley Rothenberg
(415) 867-5708

------------------------------

End of AIList Digest
********************

∂29-Nov-88  2108	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #133 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 29 Nov 88  21:08:03 PST
Date: Tue 29 Nov 1988 23:37-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #133
To: AIList@AI.AI.MIT.EDU


AIList Digest           Wednesday, 30 Nov 1988    Volume 8 : Issue 133

 Seminars:

  Why AI may need Connectionism                          - Lokendra Shastri
  On the Semantics of Default Logic                      - Dr. Wiktor Marek
  Complexity and Decidability of Terminological Logics   - Peter F. Patel-Schneider
  Parallel Computation of Motion                         - Heinrich Bulthoff
  Planning and Plan Execution                            - Mark Drummond
  Computational Complexity in Medical Diagnostic Systems - Gregory Cooper
  Computation and Inference Under Scarce Resources       - Eric Horvitz
  Grammatical Categories and Shapes of Events            - Alexander Nakhimovsky

----------------------------------------------------------------------

Date: Thu, 10 Nov 88 16:26:34 EST
From: dlm@allegra.att.com
Subject: Why AI may need Connectionism - Lokendra Shastri


Why AI may need Connectionism? A Representation and Reasoning Perspective

                        Lokendra Shastri
           Computer and Information Science Department
                     University of Pennsylvania

                Tuesday, November 15 at 10:00 am.
        AT&T Bell Laboratories - Murray Hill  Room 3D436


Any generalized notion of inference is intractable, yet we are capable
of drawing a variety of inferences with remarkable efficiency.
These inferences are by no means trivial and support a broad range
of cognitive activity such as classifying and recognizing objects,
understanding spoken and written language, and performing commonsense
reasoning.  Any serious  attempt at understanding intelligence must
provide a detailed computational account of how such inferences may be
drawn with requisite efficiency.  In this talk we describe some work
within the connectionist framework that attempts to offer such an account.
We focus on two connectionist knowledge representation and reasoning systems:

1)  A connectionist system that represents knowledge in terms of
multi-place relations (n-ary predicates), and draws
a limited class of inferences based on this knowledge with extreme
efficiency. The time taken by the system to draw conclusions is
proportional to the @i(length) of the proof, and hence, optimal.
The system incorporates a solution to the "variable binding" problem
and uses the temporal  dimension to establish and maintain bindings.

2) A connectionist semantic memory that computes optimal solutions
to an interesting class of @i(inheritance) and @i(recognition) problems
extremely fast - in time proportional to the @i(depth) of the conceptual
hierarchy.  In addition to being efficient, the connectionist realization
is based on an evidential formulation and provides a principled
treatment of @i(exceptions), @i(conflicting multiple inheritance),
as well as the @i(best-match) or @i(partial-match) computation.

We conclude that working within the connectionist framework is well
motivated as it helps in identifying interesting classes of
limited inference that can be performed with extreme efficiency,
and aids in discovering constraints that must be placed on the
conceptual structure in order to achieve extreme efficiency.

Sponsor: Mark Jones - jones@allegra.att.com

------------------------------

Date: Tue, 15 Nov 88 18:03:12 EST
From: dlm@allegra.att.com
Subject: On the Semantics of Default Logic - Dr. Wiktor Marek


                      Dr. Wiktor Marek
               Department of Computer Science
                     University of KY

                  Monday, 11/21, 1:30 PM
         AT&T Bell Laboratories - Murray Hill 3D-473

             On the Semantics of Default Logic


     At least two different types of  structures  have  been
proposed   as  correct  semantics  for  logic  of  defaults:
Minimal sets of formulas closed  under  defaults  and  fixed
points  of  Reiter's  operator  GAMMA.  In  some cases these
notions coincide, but generally this is  not  the  case.  In
addition  Konolige  identified  Reiter's  fixed  points  (so
called extensions) with a  certain  class  of  autoepistemic
expansions of Moore.

     We identify another class of structures  for  defaults,
called  weak expansions and show a one to one correspondence
between Moore's autoepistemic  expansions  and  weak  expan-
sions.  This  functor extends Konolige's correspondence.  We
show that the expressive power of autoepistemic logic (with
expansions as the intended structures) is precisely the same as
that of default logic with weak expansions.

Sponsor: D. Etherington  -  ether@allegra.att.com

------------------------------

Date: Mon 21 Nov 88 17:40:09-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Complexity and Decidability of Terminological Logics - Peter
         F. Patel-Schneider

                    BBN Science Development Program
                       AI Seminar Series Lecture

          COMPLEXITY AND DECIDABILITY OF TERMINOLOGICAL LOGICS

                        Peter F. Patel-Schneider
                   AI Principles Research Department
                         AT&T Bell Laboratories
                         (pfps@allegra.att.com)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                     10:30 am, Tuesday November 29


Terminological Logics are important formalisms for representing
knowledge about concepts and objects, and are attractive for use in
Knowledge Representation systems.  However, Terminological Logics with
reasonable expressive power have poor computational properties, a fact
which has restricted their use and utility in Knowledge Representation
systems.  This talk gives a brief description of Terminological Logics,
presents some results concerning their tractability and decidability,
and discusses the role of Terminological Logics in Knowledge
Representation systems.

------------------------------

Date: Tue, 22 Nov 88 17:55:47 EST
From: kgk@CS.BROWN.EDU
Subject: Parallel Computation of Motion - Heinrich Bulthoff


Parallel Computation of Motion: computation, psychophysics and physiology

                   Heinrich H. B"ulthoff
                      Brown University
        Department of Cognitive and Linguistic Sciences

                Wednesday, November 30, 12PM.
                   Lubrano Conference Room
          4th Floor, Center for Information Technology
                       Brown University

The measurement of the 2D field of velocities -- which is the
projection of the true 3D velocity field -- from time-varying
2-dimensional images is in general impossible. It is, however,
possible to compute suitable ``optical flows'' that are
qualitatively similar to the velocity field in most cases. We
describe a simple, parallel algorithm that computes successfully an
optical flow from sequences of real images, is consistent with human
psychophysics and suggests plausible physiological models. In
particular, our algorithm runs on a Connection Machine supercomputer
in close to real time. It shows several of the same ``illusions''
that humans perceive. A natural physiological implementation of the
model is consistent with data from cortical areas V1 and MT.

Regularizing optical flow computation leads to a formulation which
minimizes matching error and, at the same time, maximizes smoothness
of the optical flow.  We develop an approximation to the full
regularization computation in which corresponding points are found
by comparing local patches of the images.  Selection between
competing matches is performed using a winner-take-all scheme.  The
algorithm accommodates many different image transformations
uniformly, with similar results, from brightness to edges. The
algorithm is easily implemented using local operations on a
fine-grained computer (Connection Machine) and experiments with
natural images show that the scheme is effective and robust against
noise. This work was done at the Artificial Intelligence Laboratory
at MIT in collaboration with Tomaso Poggio and James J. Little.
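
As a rough illustration of the matching step alone (a minimal sketch in
Python/numpy; patch size, search range, and the synthetic test images are
illustrative, and the smoothness/regularization term described above is
omitted), corresponding points are found by comparing local patches and
keeping the single best-scoring displacement, i.e. winner-take-all:

import numpy as np

def flow_winner_take_all(f1, f2, patch=3, max_disp=2):
    # For each pixel, compare a (patch x patch) window of f1 against windows
    # of f2 at every displacement in [-max_disp, max_disp]^2, keep the best.
    r = patch // 2
    H, W = f1.shape
    flow = np.zeros((H, W, 2), dtype=int)
    for y in range(r + max_disp, H - r - max_disp):
        for x in range(r + max_disp, W - r - max_disp):
            ref = f1[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, (0, 0)
            for dy in range(-max_disp, max_disp + 1):
                for dx in range(-max_disp, max_disp + 1):
                    cand = f2[y + dy - r:y + dy + r + 1,
                              x + dx - r:x + dx + r + 1]
                    err = np.sum((ref - cand) ** 2)      # matching error
                    if err < best:                       # winner-take-all
                        best, best_d = err, (dy, dx)
            flow[y, x] = best_d
    return flow

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame1 = rng.random((16, 16))
    frame2 = np.roll(frame1, shift=(1, 2), axis=(0, 1))  # shift down 1, right 2
    f = flow_winner_take_all(frame1, frame2)
    print(f[8, 8])                                       # expect [1 2]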

------------------------------

Date: 28 Nov 88 02:55:26 GMT
From: wucs1!loui@uunet.uu.net  (Ron Loui)
Subject: Planning and Plan Execution - Mark Drummond


                        COMPUTER SCIENCE COLLOQUIUM

                           Washington University
                                 St. Louis

                              2 December 1988


                        Planning and Plan Execution

                                Mark Drummond
                                NASA Ames

We are given a table on which to place three blocks (A, B, and C).  We begin
in a state where all the blocks are available for placement; there is also an
unspecified means of transporting each block to its target location on the
table.  We might imagine that there are an unlimited number of
interaction-free robot arms, or that each block may be levitated into place
once it is available.  The exact means for moving the blocks does not matter:
given that a block is available it may be placed.  The only constraint is that
B cannot be placed last.  We call this the "B-not-last" problem.

We must produce a plan which is as flexible as possible.  If a block can be
placed then our plan must so instruct the agent.  If a block cannot be placed
according to the constraints then our plan must prevent the agent from
attempting to place the block.  The agent must never lock up in a state from
which no progress is possible.  This would happen, for instance, if A were on
the table, and C arrived and was placed.  B could then not be placed last.

It takes four totally ordered plans or three partially ordered plans to deal
with the B-not-last problem.  In either representation there is no one plan
that can be given to the assembly agent which does not overly commit to a
specific assembly strategy.  Disjunction is not the only problem.  Actions
will often fail to live up to the planner's expectations.  An approach based
on relevancy analysis is needed, where actions are given in terms of the
conditions under which their performance is appropriate.  The problem is even
harder when there can be parallel actions.

Our approach uses a modified Condition/Event system (Drummond, 1986a,b) as a
causal theory of the application domain.  The C/E system is amenable to direct
execution by an agent, and can be viewed as a nondeterministic control
program.  For every choice point in the projection, we synthesize a "situated
control rule" that characterizes the conditions under which action execution
is appropriate.  This can be viewed as a generalization of STRIPS' algorithm
for building triangle tables from plan sequences (Nilsson, 1984).
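
To make the example concrete, here is a minimal sketch (Python; the rule
encoding and the arrival-order simulation reflect one reading of the problem,
assuming blocks become available in some external order) of a single situated
control rule that permits a placement only when it cannot force B to be
placed last:

import itertools

BLOCKS = {"A", "B", "C"}

def placement_ok(block, placed):
    # Situated control rule: placing `block` now is appropriate only if it
    # does not leave B as the sole unplaced block (forcing B to be last).
    remaining_after = BLOCKS - placed - {block}
    return remaining_after != {"B"}

def run(arrival_order):
    placed, available, history = set(), set(), []
    for arriving in arrival_order:
        available.add(arriving)
        # Place any available block whose placement the rule allows.
        progress = True
        while progress:
            progress = False
            for b in sorted(available - placed):
                if placement_ok(b, placed):
                    placed.add(b)
                    history.append(b)
                    progress = True
    return history

if __name__ == "__main__":
    for order in itertools.permutations(sorted(BLOCKS)):
        h = run(order)
        assert len(h) == 3 and h[-1] != "B", (order, h)
        print("arrivals", order, "-> placements", h)

Run over all six arrival orders, the agent never locks up and B is never
placed last, flexibility that no single totally or partially ordered plan
provides.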


________________________________________________________________________________

                                5 December 1988

        Coping with Computational Complexity in Medical Diagnostic Systems

                                Gregory Cooper
                Stanford University/Knowledge Systems Laboratory

Probabilistic networks will be introduced as a representation for medical
diagnostic knowledge.  The computational complexity of using general
probabilistic networks for diagnosis will be shown to be NP-hard.  Diagnosis
using several important subclasses of these networks will be shown to be
NP-hard as well.  We then will focus on some of the approximation methods
under development for performing diagnostic inference.  In particular, we will
discuss algorithms being developed for performing diagnostic inference using a
probabilistic version of the INTERNIST/QMR knowledge base.

--------------------------------------------------------------------------------

                Computation and Inference Under Scarce Resources

                                Eric Horvitz
                                Stanford University
                                Knowledge Systems Laboratory


I will describe research on Protos, a project focused on reasoning and
representation under resource constraints.  The work has centered on building
a model of computational rationality through the development of flexible
approximation methods and the application of reflective decision-theoretic
control of reasoning.  The techniques developed can be important for providing
effective computation in high-stakes and complex domains such as medical
decision making.  First, work will be described on the decision-theoretic
control of problem solving for classical computational tasks under
varying, uncertain, and scarce resources.  Afterwards, I will focus on
decision-theoretic reasoning under resource constraints.  I will present work
on the characterization of partial results generated by alternative
approximation methods.  The expected value of computation will be introduced
and applied to the selection and control of probabilistic inference.  Plans
for extending the work to inference in a large internal-medicine knowledge
base will be described.  Finally, I extend the techniques beyond the tradeoff
between computation time and quality of computational results to explore
issues surrounding complex reasoning under cognitive constraints.

________________________________________________________________________________

------------------------------

Date: Tue, 29 Nov 88 14:24:11 EST
From: rapaport@cs.Buffalo.EDU (William J. Rapaport)
Subject: Grammatical Categories and Shapes of Events - Alexander
         Nakhimovsky


                         UNIVERSITY AT BUFFALO
                      STATE UNIVERSITY OF NEW YORK

                  GRADUATE GROUP IN COGNITIVE SCIENCE

                                PRESENTS

                         ALEXANDER NAKHIMOVSKY

                     Department of Computer Science
                           Colgate University

              GRAMMATICAL CATEGORIES AND SHAPES OF EVENTS

                       Tuesday, December 13, 1988
                               4:30 P.M.
                     280 Park Hall, Amherst Campus

This talk traces recurrent patterns in two linguistic and two  ontologi-
cal  domains:   (1)  grammatical categories of the noun, (2) grammatical
categories of the verb, (3) shapes of visually  perceived  objects,  and
(4)   aspectual   classes   of  events.   Correspondences  between  noun
categories and visual properties of objects are shown by  comparing  the
semantics of noun classifiers in classifier languages with some computa-
tional objects and processes of early and late vision.

Among grammatical categories of the verb, only those having to  do  with
aspect  are  discussed,  and  three  kinds of phenomena identified:  the
perfective-imperfective distinction, corresponding to the  presence  vs.
absence  of  a contour, at a given scale, in the object domain (and thus
to the count-mass distinction in the noun-categories domain); the aspec-
tual  types  of  verb  meanings  (a.k.a. Aktionsarten); and coersion, or
nesting, of aspectual types.  Unlike previous treatments, a  distinction
is  drawn betweem aspectual coersion within the word (i.e., in word for-
mation and inflection) and aspectual coersion above the word  level,  by
verb  arguments  and  adverbial  modifiers.   This  makes it possible to
define the notion of an aspectual classifier and (on analogy with  noun-
classifier languages) the notion of an aspectual language.  Several pro-
perties of aspectual languages are identified, and a comparison is  made
between  the  ways  aspectual  distinctions  are  expressed in aspectual
languages (e.g.,  Slavic  languages),  predominantly  nominal  languages
(e.g., Finnish, Hungarian), and a weakly typed language like English.
The similarities between the  object-noun  domains  and  the  event-verb
domains  point to a need for topological (rather than logical) represen-
tations for aspectual classes, representations that  could  support  the
notions  of  connectedness, boundary, and continuous function.  One such
representation is presented and shown to  explain  several  facts  about
aspectual  classes.   Tentative  proposals  are made toward defining the
notion of an ``aspectually possible word''.  In  conclusion,  I  discuss
the implications of the presented material for the problem of naturalis-
tic explanation in linguistics and the modularity hypothesis.

     There will be an evening discussion at Stuart Shapiro's house,
               112 Park Ledge Drive, Snyder, at 8:15 P.M.

Contact Bill Rapaport, Dept. of Computer Science, 673-3193, for further details.

------------------------------

End of AIList Digest
********************

∂30-Nov-88  2049	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #134 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 30 Nov 88  20:48:45 PST
Date: Wed 30 Nov 1988 23:31-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #134
To: AIList@AI.AI.MIT.EDU


AIList Digest            Thursday, 1 Dec 1988     Volume 8 : Issue 134

 Announcements:

  Call for Papers: Languages for Architectures for Automation
  Conceptual Graphs Workshop '89 (Second call for papers)
  Clarification: IJCAI-89 paper submission deadline
  Call for IJCAI-89 Workshops
  Explanatory Coherence: BBS Call for Commentators
  Schwartz Associates Neural Net Promotional Advertisement is bogus

----------------------------------------------------------------------

Date: Mon, 07 Nov 88 15:53:42 EST
From: Mario Barbacci <mrb@SEI.CMU.EDU>
Subject: Call for Papers: Languages for Architectures for Automation

                 1989 INTERNATIONAL WORKSHOP ON
           LANGUAGES AND ARCHITECTURES FOR AUTOMATION

                       June 26 - 28, 1989
            Universidad Politecnica de Madrid, Spain


                       +-----------------+
                       | CALL FOR PAPERS |
                       +-----------------+


     The seventh IEEE Workshop on Languages and Architectures
for Automation (formerly the IEEE Workshop on Languages for
Automation) will be hosted by the Universidad Politecnica de
Madrid, Spain, in cooperation with the University of Connecticut
and the University of Dortmund.  The cooperation of the IEEE has
been applied for.

     You are invited to submit an original research paper on  any
of the following subjects:

                     MAN-MACHINE INTERFACES
                        VISUAL LANGUAGES
          PATTERN RECOGNITION METHODS AND ARCHITECTURES
                    KNOWLEDGE REPRESENTATION
          DECISION-SUPPORT and DECISION-MAKING SYSTEMS
                         NEURAL NETWORKS
                REAL TIME and PARALLEL PROCESSING
          APPLICATION-SPECIFIC INTEGRATED CIRCUIT DESIGN
               SOFTWARE TOOLS FOR HARDWARE DESIGN

     Authors are requested to submit four copies of their  double
spaced  typed  manuscript (in English) on 8.5 by 11  inch  or  A4
paper by January 2,  1989.  Each paper should include a 50 -  100
word  Abstract  and should be no  longer than  15  pages.  Papers
should be sent to the closest Program Chair:

Prof. T.C. Ting                                   Prof. C. Moraga
School of Engineering, U-237                        FB Informatik
University of Connecticut                   Universitaet Dortmund
Storrs, CT  06258                                4600 Dortmund 50
U.S.A.                                 Bundesrepublik Deutschland
(Ting@UCONN.Edu)                            (moraga@unido.bitnet)

     Authors will be notified by March 1, 1989.  Photo-ready copies of
accepted papers, prepared on the special stationery provided by the
publishers, are due early in April.  For more information
contact the Workshop Chairman:

                   Prof. Julio Gutierrez-Rios
                Universidad Politecnica de Madrid
                     Facultad de Informatica
                     Campus de Montegancedo
                          28660 Madrid
                              SPAIN
                        (jg@sei.cmu.edu)

------------------------------

Date: Thu, 24 Nov 88 14:49:17 +1000
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Conceptual Graphs Workshop '89 (Second call for papers)

Dear Colleague,

Last year, the second Annual Conference on conceptual graphs was organised by
Jean Fargues at the IBM Paris Scientific Center.

In 1989, I shall organise the Annual Conference on conceptual graphs at
Deakin University on the 9th and 10th of March, 1989.

I wish to invite you to attend this workshop, and I look forward to any
contribution you might propose, such as a 30-minute presentation with
some handouts or an article.  If you are interested in attending,
please notify

        Professor Brian J. Garner
        Division of Computing and Mathematics
        Deakin University
        Geelong, Victoria 3217
        Australia

        Phone: 61 52 471 383
        Telex: DUNIV AA35625
        FAX: 61 52 442 777
        Email: brian@aragorn.oz (CSNET)

Expenses will be the responsibility of the participants but there is no special
fee for attending the workshop. I am looking forward to your participation and
possible contribution.


                                        Brian J. Garner
                                        Professor of Computing
                                        Deakin University
                                        Geelong, Victoria 3217
                                        AUSTRALIA

------------------------------

Date: 24 Nov 88 00:35:06 PST (Thu)
From: sridhara@cel.fmc.com (Sridharan)
Subject: Clarification: IJCAI-89 paper submission deadline

Papers must be sent to the AAAI office:

AAAI Headquarters (IJCAI-89)
445 Burgess Drive
Menlo Park CA 94025 USA

so as to be received by December 12th.

Best regards,
N. S. Sridharan
Program Chair, IJCAI-89

------------------------------

Date: Fri, 25 Nov 88 14:51:47 EST
From: Joseph L. Katz. <katz@mitre-bedford.ARPA>
Subject: Call for IJCAI-89 Workshops

Please Post.

Note: This is the second time I sent this message.  I don't think the first
message went out.

                          IJCAI-89 Workshops:
                          Request for Proposals


        The IJCAI-89 Program Committee invites proposals for the Workshop
Program of the International Joint Conference on Artificial Intelligence
(IJCAI-89), to be held in Detroit, Michigan, from August 20 to August 25,
1989.

        Gathering in an informal setting, workshop participants will have the
opportunity to meet and discuss issues with a selected focus---providing for
active exchange among researchers and practitioners on topics of mutual
interest.  Members from all segments of the AI community are encouraged to
submit workshop proposals for review.

        To encourage interaction and a broad exchange of ideas, the workshops
will be kept small---preferably under 35 participants.  Attendance should be
limited to active participants only.  The format of workshop presentations
will be determined by the organizers proposing the workshop, but ample time
must be allotted for general discussion.  Workshops can range in length from
two hours to two days, but most workshops will last a half day or a full day.

        Proposals for workshops should be between 1 and 2 pages in length, and
should contain:

1. a brief description of the workshop, identifying the specific issues on
which it will focus,

2. a discussion of why the workshop would be of interest at this time,

3. the names and addresses of the organizing committee, preferably 3 or 4
people not all at the same organization,

4. a partial list of potential participants, and

5. a proposed schedule for getting the workshop organized and a proposed
workshop agenda.

        Workshop proposals should be submitted as soon as possible, but no
later than 1 February 1989.  Proposals will be reviewed as they are received
and resources allocated as workshops are approved. Organizers will be notified
of the committee's decision no later than 15 February 1989.

        Workshop organizers will be responsible for:

1. producing a Call for Participation in the workshop, which will be mailed to
AAAI members by AAAI,

2. reviewing requests to participate in the workshop, and determining the
workshop participants,

3. scheduling the activities of the workshop, and

4. preparing a review of the workshop, which will be printed in the AI
Magazine.

        IJCAI will provide logistical support, will provide a meeting place
for the workshop, and, in conjunction with the organizers, will determine the
date and time of the workshop.

        Please submit your workshop proposals, and enquiries concerning
workshops to:


       Joseph Katz
       MITRE Corporation
       MS L203
       Burlington Road
       Bedford, MA 01730
       USA

       Phone:   (617) 271 5200
       Fax:     (617) 271 5161
       Arpanet:  Katz@mbunix.mitre.org
                 Katz@mitre-bedford.arpa

------------------------------

Date: 27 Nov 88 17:50:59 GMT
From: elbereth.rutgers.edu!harnad@rutgers.edu  (Stevan Harnad)
Subject: Explanatory Coherence: BBS Call for Commentators


Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international,
interdisciplinary journal providing Open Peer Commentary on important
and controversial current research in the biobehavioral and cognitive
sciences. Commentators must be current BBS Associates or nominated by a
current BBS Associate. To be considered as a commentator on this article,
to suggest other appropriate commentators, or for information about how
to become a BBS Associate, please send email to:
         harnad@confidence.princeton.edu              or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542  [tel: 609-921-7771]
____________________________________________________________________

                  EXPLANATORY COHERENCE

                  Paul Thagard
                  Cognitive Science Laboratory
                  Princeton University
                  Princeton NJ 08542

This paper presents a new computational theory of explanatory coherence
that applies both to the acceptance and rejection of scientific
hypotheses and to reasoning in everyday life.  The theory
consists of seven principles that establish relations of local
coherence between a hypothesis and other propositions that explain it,
are explained by it, or contradict it.   An explanatory hypothesis is
accepted if it coheres better overall than its competitors.
The power of the seven principles is shown by their implementation in a
connectionist program called ECHO, which has been applied to
such important scientific cases as Lavoisier's argument for
oxygen against the phlogiston theory and Darwin's argument for evolution
against creationism, and also to cases of legal reasoning.  The
theory of explanatory coherence has implications for artificial
intelligence, psychology, and philosophy.
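
A minimal sketch of the kind of coherence network described above
(illustrative only: the update rule, weight values, and toy example are
assumptions, not Thagard's actual ECHO implementation):

def settle(nodes, links, evidence, steps=200, decay=0.05,
           excite=0.04, inhibit=-0.06):
    """Iteratively settle activations.  "explains" links are excitatory,
    "contradicts" links are inhibitory; evidence nodes are clamped to 1."""
    act = {n: 0.01 for n in nodes}
    for e in evidence:
        act[e] = 1.0
    weights = {}
    for p, q, kind in links:
        w = excite if kind == "explains" else inhibit
        weights[(p, q)] = weights[(q, p)] = w
    for _ in range(steps):
        new = {}
        for n in nodes:
            if n in evidence:
                new[n] = 1.0
                continue
            net = sum(w * act[m] for (m, k), w in weights.items() if k == n)
            a = act[n] * (1 - decay)
            a += net * (1 - a) if net > 0 else net * (a + 1)
            new[n] = max(-1.0, min(1.0, a))
        act = new
    return act

# Toy case: H1 explains both pieces of evidence, H2 explains only one, and
# H1 contradicts H2; H1 settles near 1 while H2 settles much lower.
nodes = ["H1", "H2", "E1", "E2"]
links = [("H1", "E1", "explains"), ("H1", "E2", "explains"),
         ("H2", "E1", "explains"), ("H1", "H2", "contradicts")]
print(settle(nodes, links, evidence={"E1", "E2"}))
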
--
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet      CSNET:  harnad%mind.princeton.edu@relay.cs.net
(609)-921-7771

------------------------------

Date: Mon, 28 Nov 88 23:18:08 EST
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: Schwartz Associates Neural Net Promotional Advertisement is
         bogus

        A company named Schwartz Associates of Mountain View, CA, has
been mailing a neural net promotional advertisement that prominently
displays my name on the envelope and every page inside, with statements
that broadly suggest that I am involved in their enterprise.  They
suggest that along with the $1495.00 collection of videos and reprints,
the customer is entitled to contact me for a final qualifying
examination.  Needless to say, I have nothing to do with them and it
would seem natural to presume that their wares are as shabby as their
disgraceful business practices.

------------------------------

End of AIList Digest
********************

∂01-Dec-88  1955	AILIST-REQUEST@MC.LCS.MIT.EDU 	AIList Digest   V8 #135 
Received: from MC.LCS.MIT.EDU by SAIL.Stanford.EDU with TCP; 1 Dec 88  19:55:23 PST
Date: Thu  1 Dec 1988 22:33-EST
From: AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>
Reply-to: AIList@AI.AI.MIT.EDU
US-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest   V8 #135
To: AIList@AI.AI.MIT.EDU


AIList Digest             Friday, 2 Dec 1988      Volume 8 : Issue 135

 Philosophy:

  Defining Machine Intelligence (6 messages)

----------------------------------------------------------------------

Date: 17 Nov 88 21:16:04 GMT
From: uwslh!lishka@speedy.wisc.edu  (Fish-Guts)
Subject: The difference between machine and human intelligence (was:
         AI and Intelligence)

In article <4216@homxc.UUCP> marty@homxc.UUCP (M.B.BRILLIANT) writes:
>
>Any definition of ``artificial intelligence'' must allow intelligence
>to be characteristically human, but not exclusively so.

     A very good point (IMHO).  I believe that artificial intelligence
is possible, but that machine intelligence will probably *NOT*
resemble human intelligence all that closely.  My main reason for this
is that unless you duplicate much of what a human is (i.e. the neural
structure, all of the senses, etc.), you will not get the same result.
I propose that a machine without human-like senses cannot "understand"
many ideas and imagery the way a human does, simply because it will
not be able to perceive its surroundings in the same way as a human.
Any comments?

                                        .oO Chris Oo.--
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

                 "I'm not aware of too many things...
                  I know what I know if you know what I mean"
                    -- Edie Brickell & the New Bohemians

------------------------------

Date: 19 Nov 88 17:06:03 GMT
From: uwslh!lishka@speedy.wisc.edu  (Fish-Guts)
Subject: Re: Defining Machine Intelligence.

In article <1111@dukeac.UUCP> sbigham@dukeac.UUCP (Scott Bigham) writes:
>In article <401@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>>I believe that artificial intelligence
>>is possible, but that machine intelligence will probably *NOT*
>>resemble human intelligence...
>
>So how shall we define machine intelligence?  More importantly, how will we
>recognize it when (if?) we see it?
>
>                                               sbigham

     A good question, to which I do not have a good answer.  I *have*
thought about it quite a bit; however, I haven't come up with much that
I am satisfied with.  Here are my current lines of thought on this
subject:

     Many (if not most) attempts at definitions of "machine
intelligence" relate it to "human intelligence." However, I have yet
to find a good definition of "human intelligence" that is less vague
than a dictionary's definition.  It would seem (to me at least) that
AI scientists (as well as scientists in many other fields) have yet to
come up with a good, working definition of "human intelligence" that
most will accept.  Rather, most AI people I have spoken with
(including myself ;-) have a vague notion of what "human intelligence"
is, or else have definitions of "human intelligence" that rely on
many personal assumptions.  I still do not think that the AI community
has developed a definition of "human intelligence" that can be
universally presented in an introductory course on AI.  It is no
wonder, then, that there is no commonly accepted definition of machine
intelligence (which would seem to be a crucial definition in AI, IMHO).

     So how do we define machine intelligence?  I propose that we
define it apart from human intelligence at first, and try to relate it
to human intelligence afterwards.  In my opinion, machine intelligence
does not have to be the same as human intelligence (and probably will
not), for reasons I have mentioned in other articles.  From what I
have read here, I believe that at least a few other people in this
group also feel this way.

     First, the necessary "features" of machine intelligence should be
discussed and decided upon.  It is important that this be done
*without* considering current architectures and AI knowledge; the
"features" should be for an ideal "machine intelligence," and not
geared towards something that can be achieved in fifty years.  Also,
human intelligence should be *considered* at this point, but not used
as a *basis* for defining machine intelligence; intelligence in other
beings (mammals, birds, insects, rocks (;-), whatever) should also be
considered.

     Second, after having figured out what we want machine
intelligence to be, we should then try and come up with some good
"indicators" that could be used to tell whether an AI system exhibits
machine intelligence.  These indicators can include specific tests,
but I have a feeling that tests for any form of intelligence have
never been very good indicators (note that I do not put that much
value on IQ tests as measures of intelligence).  Indicators of
intelligence in humans and other beings should be considered here as
well (i.e. what do we feel is a good sign that someone is intelligent?).

     After all that is done (and it may never get done ;-), then we
can try and compare it to human intelligence.  Chances are the two
definitions of intelligence (for machines and humans) will be
different.  Of course, if, in looking at human intelligence, some
important points of machine intelligence have been missed, then
revisions are in order ... there is always time to revise the
definition.

      I am sorry that I could not provide a concrete definition of
what machine intelligence is.  However, I hope I have provided a
small framework for discussions on how to go about defining machine
intelligence.  And of course all the above is only my view on the
subject, and is subject to change; do with it what you will ... if you
want to print it up and use it as bird-cage liner, well that is fine
by me ;-)

                                        .oO Chris Oo.--
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

                 "I'm not aware of too many things...
                  I know what I know if you know what I mean"
                    -- Edie Brickell & the New Bohemians

------------------------------

Date: 20 Nov 88 06:53:44 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Defining Machine Intelligence.

In article <404@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>     Many (if not most) attempts at definitions of "machine
>intelligence" relate it to "human intelligence." However, I have yet
>to find a good definition of "human intelligence" that is less vague
>than a dictionary's definition.  It would seem (to me at least) that
>AI scientists (as well as scientists in many other fields) have yet to
>come up with a good, working definition of "human intelligence" that
>most will accept.  Rather, most AI people I have spoken with
>(including myself ;-) have a vague notion of what "human intelligence"
>is, or else have definitions of "human intelligence" that rely on
>many personal assumptions.  I still do not think that the AI community
>has developed a definition of "human intelligence" that can be
>universally presented in an introductory course on AI.  It is no
>wonder, then, that there is no commonly accepted definition of machine
>intelligence (which would seem to be a crucial definition in AI, IMHO).

I think it is useful to bear in mind that "intelligence" is a _social_
construct.  We can identify particular characters which are associated
with it, and we may be able to measure those.  (For example, one of the
old intelligence tests identified knowing that Crisco (sp?) is a cooking
oil as a component of intelligence.)  It is _NOT_ the responsibility of
AI people to define "human intelligence".  It is the job of sociologists
to determine how the notion of "intelligence" is deployed in various
cultures, and of psychologists to study whatever aspects turn out to be
based on mental characteristics of the individual.

The field called "Machine Intelligence" or "Artificial Intelligence" is
something which originated in a particular related group of cultures and
took the "folk" notion of "intelligence" as its starting point.  We wave
our hands a bit, and say "you know how smart people are, and how dumb
machines are, well, we want to make machines smarter."  At some point we
will declare victory, and whatever we have at that point, _that_ will be
the definition of "machine intelligence".  ("Intelligent" is already used
to mean "able to perform the operations of a computer", so is "smart" in
the phrase "smart card".)

Let's face it, 13th century philosophers didn't have a definition of "mass",
"potential field", "tensor", or even "hadron" when they started out trying
to make sense of motion.  They used the ordinary language they had.  The
definitions came _last_.

There are at least two approaches to AI, which may be caricatured as
(1) "Let's build a god"
(2) "Let's build amplifiers for the mind"
I belong to the second camp:  I don't give a Continental whether we end
up with "machine intelligences" or not, just so long as we end up with
cognitive tools which are far more intelligible to humans than what we
have now.  For the first camp, the possibility of "inhuman" machine
intelligences is of interest.  It would definitely be a kind of success.
For the second camp, something which is not close enough to the human
style to be readily comprehended by an ordinary human would be an utter
failure.

We are still close enough to the beginnings of AI (whatever that is) that
both camps can pursue their goals by similar means, and have useful things
to say to each other, but don't confuse them!

------------------------------

Date: 23 Nov 88 11:49 EST
From: SDEIBEL%ZEUS.decnet@ge-crd.arpa
Subject: What the heck is intelligence and should we care?


  In Vol8 Issue 131 of the BITNET distribution of AILIST, Nick Taylor
mentioned the problem of defining intelligence.  This is indeed a problem:
what are we really talking about when we set ourselves apart from the
"animals," etc.?  I'm not foolish enough to pretend I have any answers, but
I did find some interesting ideas in Ray Jackendoff's book "Consciousness
and the Computational Mind".

  Jackendoff suggests (in Chapter 2, I believe) that one fundamental
characteristic of intelligence that separates the actions of humans (and
possibly other animals) from non-intelligent systems/animals/etc. is the
way in which components of intelligent entities interact.  The matter
of interest in intelligent entities is the way in which independently
acting sub-parts (e.g. neurons) interact and the way in which the states
of these sub-parts combine combinatorially.  On the other hand, the matter
of interest in non-intelligent entities (e.g. a stomach) is the way in
which the actions of sub-parts (e.g. secreting cells) SUM into a coherent
whole.

  While vague, this idea of intelligence as arising from complexity and
the interaction of independent units seemed interesting to me in that it
offers a nice and simple general description of intelligence.  Oh, yes,
it could start to imply that computers are intelligent, etc., etc., but one
must not forget the complexity gap between the brain and the most complex
computers in existence today!  Rather than wrestle with the subtleties and
complexities of words like "intelligence" (among others), it might be better
to accept the fact that we may never be able to decide what intelligence is.
How about "the sum total of human cognitive abilities"?  Then forget about
it and concentrate on deciding how humans might achieve some of their
cognitive feats.  Try deciding what we really mean when we say "bicycle"
and you'll run into monumental problems.  Why should we expect to be able
to characterise "intelligence" any more easily?

  Stephan Deibel (sdeibel%zeus.decnet@ge-crd.arpa)

------------------------------

Date: 25 Nov 88 08:55:39 GMT
From: tramp!hassell@boulder.colorado.edu  (Christopher Hassell)
Subject: Re: Intelligent Displacement of Dirt (was: Re: Artificial
         Intelligence and Intelligence)

In article <4561@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU
 (Eliot Handelman) writes:

>What I've come to admire as intelligence is the capacity to understand the
>nature of one's limitations, and through that understanding to construct
>alternative approaches to whichever goal one has undertaken to achieve.
>Where real intelligence begins, I think, is the capacity to apply this idea
>to itself, that is, the capacity to assess the machinery of discrimination
>and criticism itself. I surmise that a finite level of recursion is sufficient
>to justify intelligent behaviour.
>
>As an example, supposing that my goal is to displace an enormous pile of dirt
>in the course of an afternoon. I may know that it takes me an afternoon to
>displace a fraction of the total amount.  The questions are, how would I know
>this if I haven't tried, and how do I arrive at the idea of a shovel?  I invite
>discussion of this matter.

On the whole subject, this does appear to be one of the better definitions
of intelligence because it is self-propagating (it'll get smarter over time).

I still believe that this analysis, though requiring agile thought, isn't
even attempted by most of us 'intelligent' beings.  We have our general
inference mechanisms to say "well, that possibility is still not tried" or
"the outside world opened that option," etc. ... not too terribly difficult.
Myself, I am a pragmatist and find sufficient evidence for `getting'
intelligence from the outside world, given a critical mass of inherently
important initial syllogisms (i.e., the original `how to learn' questions).

I throw a verbose attempted `solution' to the world in this:
One realizes the fact of
     a homogeneous material (dirt) needing `transport' from one place
       to another, and the motor recognition of gravity and its effect
        on the dirt (needing a bowl-like container to 'move' it)
     the inability of anything DIRECTLY equalling the task
           (no BIG auto-digging-and-moving-and-dumping-bowls to control)

From this comes the reduction that, given the ability to 'integrate' over
  time the more human-sized act of moving 'some' dirt (a homogeneous material),
  one can break down this inhuman goal into a normal one.
(this state can be better than the original state .. so try it)
(this does come from some recognition of being able to manipulate dirt at all)

 hands are the first suggestion but upon experimentation (remembrance too)
    one gets "bored" <that beautifully intelligent perception>.
      <hope and need for something that works "better">
 upon this the extrapolations of 'what holds dirt' go on towards
    other objects; this, mixed with handiness, would lead to a shovel and
     maybe even a wheelbarrow (a larger 'bowl' but one that can't
      be used directly to get dirt with)

YES this is only the break-down-the-problem-to-manageable-sub-parts, but
  this is a general version: with X "resources", find a way Y fits into
  them upon a thing called an attempt. (Y being the problem)
(yes, "resources" are nice and static too.  Just change the problem
  to a set of responses that must propagate into themselves)
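
A toy rendering of the break-the-goal-into-human-sized-acts idea above
(the quantities, tool names, and selection rule are all invented for
illustration, not part of the original argument):

def plan_moves(total_dirt, tools):
    """Reduce an 'inhuman' goal to repeated human-sized acts by choosing
    the tool that carries the most dirt per trip (toy sketch only)."""
    tool = max(tools, key=tools.get)        # e.g. prefer a wheelbarrow to hands
    per_trip = tools[tool]
    trips = -(-total_dirt // per_trip)      # ceiling division
    return [(tool, per_trip)] * trips

tools = {"hands": 1, "shovel": 5, "wheelbarrow": 40}   # dirt units per trip
plan = plan_moves(1000, tools)
print(len(plan), "trips using the", plan[0][0])        # 25 trips using the wheelbarrow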

I hope this gets some opinions (not all of them unfavorable?? /:-)
--------------------------------------------------------------------------
In any situation the complete redefinition of the problem *IS* the answer
      itself, ... so let's get redefining.  :-)
{sunybcs, ncar, nbires}!boulder!tramp!hassell ## and oh so much of it ##
#### C. H. ####

------------------------------

Date: 30 Nov 88 18:04:02 GMT
From: uwslh!lishka@speedy.wisc.edu  (Fish-Guts)
Subject: Re: The difference between machine and human intelligence
         (was: AI and Intelligence)

In article <960@dgbt.uucp> thom@dgbt.uucp (Thom Whalen) writes:
>From article <401@uwslh.UUCP>, by lishka@uwslh.UUCP (Fish-Guts):
>> I propose that a machine without human-like senses cannot "understand"
>> many ideas and imagery the way a human does, simply because it will
>> not be able to perceive its surroundings in the same way as a human.
>> Any comments?
>
>Do you believe that Helen Keller "understood many ideas and imagery the
>way a human does"?  She certainly lacked much of the sensory input that
>we normally associate with intelligence.
>
>Thom Whalen

     I do not believe she *perceived* the world as most people with
full senses do.  I do believe she "understood many ideas and imagery"
the way humans do because she had (1) touch, (2) taste, and (3)
olfactory senses (she was not able to hear or see, if I remember
correctly), as well as other internal sensations (i.e. sickness, pain,
etc.).  The way I remember it, she was taught to speak by having her
"feel" the vibrations of her teacher's throat as words were said while
associating the words with some sensation (i.e., the "feeling" of
water as it ran over her hands).  Also (and this is a highly personal
judgement) I think the fact that she was a human, with a human nervous
system and human reactions to other sensations (i.e. a sore stomach,
human sicknesses, etc.), also added to her "human understanding."

                                        .oO Chris Oo.--
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

                 "I'm not aware of too many things...
                  I know what I know if you know what I mean"
                    -- Edie Brickell & the New Bohemians

------------------------------

End of AIList Digest
********************