AIList Digest Saturday, 10 May 1986 Volume 4 : Issue 120
Today's Topics:
Perception - Reductionist Predictions,
Policy - Improving Abstracts of Technical Talks,
Seminars - Analogical Reasoning (UPenn) &
Checking Goedel's Proof (SU) &
NL Interfaces to Software Systems (SU) &
The SNePS Semantic Network Processing System (ULowell)
----------------------------------------------------------------------
Date: Fri, 9 May 86 10:28:18 PDT
From: kube%cogsci@berkeley.edu (Paul Kube)
Subject: reductionist predictions: response to Nevin
From AIList Digest V4 #119:
>Date: Wed, 7 May 86 10:37:36 EDT
>From: Bruce Nevin <bnevin@cch.bbn.com>
>Subject: net intelligence
>A recent letter in Nature (318.14:178-180, 14 November 1985) illustrates
>nicely how behavior of a whole may not be predictable from behavior of
>its parts.
...
>The reductionist engineering prediction would be that the fish could
>respond no more quickly than its I/O devices allow, 2*10^-6 seconds.
>From the reductionist point of view, it is inexplicable how the fish
>in fact responds in 4*10^-7 seconds. Somewhat reminiscent of the old
>saw about it being aerodynamically impossible for the bumblebee to fly!
...
> Bruce E. Nevin bnevin@bbncch.arpa
Of course, if you have a bad enough theory, it can get pretty hard to
figure out bumblebees, fish, or anything else. In this case, however,
predicting the behavior of the whole from the behavior of the parts
requires nothing more than the most elementary signal detection theory.
First, note that the fish does not respond in 4*10^-7 seconds: the
latency of the jamming avoidance response (JAR) is only 3*10^-1
seconds. What the fish is able to do is reliably detect temporal
disparities in signal arrival on the order of 4*10^-7 seconds, and to
do this with arrival-time detectors having standard deviation of error no
better than 1*10^-5 seconds. The standard, `reductionist engineering'
explication of this goes as follows:
The fish has 3*10^-1 seconds to initiate JAR. In this time, it can
observe 100 electric organ discharges (EOD's) from the offending fish;
its job is to reliably and accurately (>90% confidence within 4*10^-7
seconds) figure out disparities in arrival times of (some component
of) the discharges between different regions of its body surface.
This will be done by taking the difference in firing time of
discharge-arrival detectors which have standard deviation of error of
1*10^-5 seconds.
It is well known that the average of N observations of a normally
distributed random variable with standard deviation sigma has standard
deviation sigma / sqrt(N); so here the average of the 100 observations
of arrival time at a single detector will have standard deviation
1*10^-5 / sqrt(100) = 1*10^-6 seconds (and so a 95% confidence
interval of two standard deviations = 2*10^-6 seconds, as reported by
Rose and Heiligenberg). Since the standard deviation of the difference
of two independent, identically distributed normal random variables is
sqrt(2) times the standard deviation of either variable, the temporal
disparity measurement has a 95% confidence interval of about
2.8*10^-6 seconds.
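The single-pair arithmetic can be checked numerically. The sketch
below is a modern illustration, not part of the original exchange; it
assumes independent Gaussian timing errors (as the argument does, with
the difference of two independent estimates carrying a factor of
sqrt(2)), simulates 100 noisy arrival times at each of two detectors,
and measures the spread of the resulting disparity estimate:

```python
import math
import random

# Parameters taken from the digest's argument (independence and
# Gaussian jitter are modeling assumptions).
SIGMA = 1e-5    # s.d. of a single arrival-time detector (seconds)
N_EOD = 100     # electric organ discharges observed before the JAR

# Analytic prediction: the mean of N_EOD observations has s.d.
# SIGMA / sqrt(N_EOD), and the difference of two independent such
# means has s.d. sqrt(2) times that.
sd_mean = SIGMA / math.sqrt(N_EOD)
sd_diff = math.sqrt(2) * sd_mean

# Monte Carlo check of the disparity estimate's spread.
random.seed(1)
trials = 5000
diffs = []
for _ in range(trials):
    left = sum(random.gauss(0.0, SIGMA) for _ in range(N_EOD)) / N_EOD
    right = sum(random.gauss(0.0, SIGMA) for _ in range(N_EOD)) / N_EOD
    diffs.append(left - right)
mean_d = sum(diffs) / trials
emp_sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (trials - 1))

print(f"analytic s.d. of disparity estimate: {sd_diff:.2e} s")
print(f"empirical s.d. over {trials} trials: {emp_sd:.2e} s")
```

The empirical spread comes out near sqrt(2)*10^-6 seconds, matching
the analytic standard error rather than the raw 10^-5 jitter of a
single detector.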
But that's only one pair of detectors, and the fish is paved with
detectors. If you want to reduce the 95% confidence interval by another
order of magnitude, you just need to average over 100 suitably located
detector pairs. (Mechanisms exploiting this fact are also almost
certainly responsible for some binaural stereo perception in humans,
where the jitter in individual phase-sensitive neurons is much worse
than what's required to reliably judge which ear is getting the
wavefront first.)
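The pooling step is the same sqrt(N) arithmetic applied across
detector pairs. A minimal sketch, assuming the 100 pairs contribute
independent estimates (an idealization):

```python
import math

# Parameters from the digest's argument (independent Gaussian errors
# assumed throughout).
sigma = 1e-5        # timing jitter of one arrival-time detector (seconds)
n_eod = 100         # electric organ discharges averaged per detector
n_pairs = 100       # suitably located detector pairs averaged together

# One pair: each detector's average has s.d. sigma / sqrt(n_eod); the
# difference of two independent averages is sqrt(2) times that.
sd_one_pair = math.sqrt(2) * sigma / math.sqrt(n_eod)

# Pooling n_pairs independent pair estimates shrinks the s.d. by
# a further factor of sqrt(n_pairs).
sd_pooled = sd_one_pair / math.sqrt(n_pairs)

print(f"95% CI, one pair: {2 * sd_one_pair:.2e} s")   # ~2.8e-06
print(f"95% CI, pooled:   {2 * sd_pooled:.2e} s")     # ~2.8e-07
```

The pooled 95% confidence interval lands comfortably below the
4*10^-7-second disparity the fish must resolve.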
--Paul Kube
Berkeley EECS
kube@berkeley.edu
------------------------------
Date: 9 May 86 10:24:22 EDT
From: PRSPOOL@RED.RUTGERS.EDU
Subject: Abstracts of Technical Talks Published on AI-LIST
Surely none of us can attend all of the talks announced via the
AI-LIST. The abstracts which appear have served as a useful pointer
for me to current research in many different areas, and I trust this
has been true for many of you as well. These abstracts could serve
this secondary purpose even better if those who post them to the
network made an effort to include two additional pieces of
information:
1) A Computer Network address of the speaker.
2) One or more references to any recently published material
with the same, or similar content to the talk.
I know that this information would help me enormously. I assume the
same is true of others.
Peter R. Spool
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903
PRSpool@RUTGERS.ARPA
------------------------------
Date: Thu, 8 May 86 14:57 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Analogical Reasoning (UPenn)
CIS Colloquium - University of Pennsylvania
3:00pm Friday, May 9 - 216 Moore School
ANALOGICAL REASONING
Stuart Russell
Stanford University
I show the need for the application of domain knowledge in analogical
reasoning, and propose that this knowledge must take the form of a new
class of rule called a "determination". Given a first-order
definition, determinations can be used to make valid analogical
inferences, which may be implemented within a logic programming
system. In such a system, analogical reasoning can be more efficient
than rule-based reasoning for some tasks.
Determinations appear to be a common form of regularity in the world, and form
a natural stage in the acquisition of knowledge. The overall approach taken in
this work can be extended to the general problem of the use of knowledge in
induction.
------------------------------
Date: Thu 8 May 86 07:50:34-PDT
From: Anne Richardson <RICHARDSON@SU-SCORE.ARPA>
Subject: Seminar - Checking Goedel's Proof (SU)
Natarajan Shankar will be visiting CSD on Thursday, May 15. While here, he
will be giving the following talk:
DAY: May 15, 1986
EVENT: AI Seminar
PLACE: Bldg. 380, Room 380 X
TIME: 5:15
TITLE: Checking the Proof of Godel's Incompleteness Theorem
with the Boyer-Moore theorem prover.
PERSON: Natarajan Shankar
FROM: The University of Texas at Austin
There is a widespread belief that computer proof-checking of significant
proofs in mathematics is infeasible. We argue against this belief by
presenting a formalization and proof of Godel's incompleteness theorem
that was checked with the Boyer-Moore theorem prover. This mechanical
proof establishes the essential incompleteness of Cohen's Z2 axioms for
hereditarily finite sets. The proof involves a metatheoretic formalization
of Shoenfield's first-order logic along with Cohen's Z2 axioms. Several
derived inference rules were proved as theorems about this logic. These
derived inference rules were used to develop enough set theory in order
to demonstrate the representability of a Lisp interpreter in this logic.
The Lisp interpreter was used to establish the computability of the
metatheoretic formalization of Z2. From this, the representability of
the Lisp interpreter, and the enumerability of proofs, an undecidable
sentence was constructed. The theorem prover was led to the observation
that if the undecidable sentence is either provable or disprovable, then
it is both provable and disprovable. The theory is therefore either
incomplete or inconsistent.
------------------------------
Date: Thu, 8 May 86 13:17:29 pdt
From: Premla Nangia <pam@su-whitney.ARPA>
Subject: Seminar - NL Interfaces to Software Systems (SU)
COMPUTER SCIENCE DEPARTMENT
COLLOQUIUM
Speaker: C. Raymond Perrault
SRI International and CSLI
Title: A Strategy for Developing Natural Language Interfaces
to Software Systems
Time: Tuesday, May 27, 1986 --- 4:15 p.m.
Place: Skilling Auditorium
Refreshments: 3rd floor Lounge, Margaret Jacks Hall --- 3:45 p.m.
The commonly accepted perspective on the semantics of natural language
interfaces is that it is derived from the semantics of the underlying
software, e.g. a database. Although there appear to be computational
advantages to this position, it limits the linguistic coverage of the
interface and presents severe obstacles to systematic construction
by confusing meaningful queries with answerable ones. We suggest
instead that interfaces be constructed by first defining the semantics
of the underlying software in terms of those of the interface language
and give criteria under which some of the computational advantage of
the meaningfulness-answerability confusion can be acceptably regained.
------------------------------
Date: Fri, 9 May 86 15:01 EDT
From: Graphics Research Lab x2681
<GRINSTEIN%ulowell.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - The SNePS Semantic Network Processing System
(ULowell)
The University of Lowell's seminar series continues through the
summer with
THE SNePS SEMANTIC NETWORK PROCESSING SYSTEM
Stuart C. Shapiro
William J. Rapaport
Department of Computer Science
State University of New York at Buffalo
The SNePS Semantic Network Processing System is a semantic network
knowledge representation and reasoning system with facilities for
building semantic networks to represent virtually any kind of
information, retrieving information from them, and performing
inference with them. Users can interact with SNePS in a variety of
interface languages, including a LISP-like user language, a menu-based
screen-oriented editor, a graphics-oriented editor, a
higher-order-logic language, and an extendible fragment of English.
We will discuss the syntax and semantics of SNePS considered as an
intensional knowledge-representation system and provide examples of
uses of SNePS for cognitive modeling, database management, pattern
recognition, expert systems, belief revision, and computational
linguistics.
in Olney 428
on May 20, 1986
from 9:00 to lunch with refreshment breaks
at the University of Lowell (Lowell MA)
For further information call Georges Grinstein at 617-452-5000
------------------------------
End of AIList Digest
********************