AIList Digest Tuesday, 12 May 1987 Volume 5 : Issue 118
Today's Topics:
Queries - Design Info for Expert Systems &
Publicly available Expert Systems? & KA Workshop Proceedings &
Post-Graduate Research in Visual Recognition using AI,
Conference - IJCAI Information,
Philosophy - The Symbol Grounding Problem
----------------------------------------------------------------------
Date: 7 May 87 04:53:54 GMT
From: oliveb!intelca!mipos3!omepd!psu-cs!psueea!lendaris@ames.arpa
(George G. Lendaris)
Subject: Wanted: Design Info re: Operational Expert Systems
For the last year I have been studying papers which give a high-level
treatment of design considerations for expert systems in various
contexts. Currently, I am studying a book by Sowa titled "Conceptual
Structures".
I am now interested in more detailed descriptions of operational expert
systems, to gain familiarity with the "lower level" issues that need
attention in actually realizing the representation and manipulation of
"knowledge".
I would like a reasonably detailed description (in English) of the
"nitty gritties" of such an implementation.
Leads to people who might have such information
will be greatly appreciated.
Please send to lendaris@psueea.UUCP
Thanks in advance.
George G. Lendaris
Professor of Systems Science
and Electrical Engineering
------------------------------
Date: 8 May 87 07:48:18 GMT
From: arman@locus.ucla.edu
Subject: Publicly available Expert Systems?
Dear colleagues,
I was wondering if there are any publicly available Expert Systems
out there. It doesn't really matter what machine it runs on or what
language it is written in; I just want to look at some real (and,
ideally, working) code. I would also appreciate pointers to places
(universities) from which I could obtain expert systems. Please email
responses and I shall summarize the results if there is any interest.
Thank You, (in advance)
Arman Bostani,
arman@cs.ucla.edu
[...]
------------------------------
Date: Fri, 8 May 87 09:46:24 edt
From: dg1v#@andrew.cmu.edu (David Greene)
Subject: KA workshop proceedings
Can anyone tell me where I can get the proceedings from :
Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Alberta,
Canada, November 2-7, 1986.
Also is there any information about future workshops?
Thanks in advance.
- David Greene
dg1v@andrew.cmu.edu
------------------------------
Date: Tue, 12 May 87 15:55:24 +1000
From: J.T. Teh <munnari!mulga.oz!jteh@seismo.CSS.GOV>
Subject: Information wanted on Post Graduate research in Visual
Recognition using AI
I am an Honours student at the University of Melbourne, Australia, and am
interested in pursuing a postgraduate degree in the field of visual
recognition systems using Prolog. My Honours project is to develop a
visual recognition system to run on the Apple Macintosh Plus using Prolog.
I am looking for information about universities in the United States, the UK
or Australia that are involved in, and have experience with, this area
of research. If anyone has any information or names of people whom I
should contact, could they mail me directly? Thanks in advance.
Or, if you know of anyone who might know, could you please redirect this
article to them? Thank you.
J.T. Teh
===========================
UUCP: {seismo,mcvax,ukc,ubc-vision}!munnari!jteh
or {seismo,mcvax,ukc,ubc-vision}!mulga!jteh
ARPA: jteh%munnari.oz@seismo.css.gov
or jteh%mulga.oz@seismo.css.gov
CSNET: jteh%munnari.oz@australia
or jteh%mulga.oz@australia
Postal Address:
J.T. Teh
c/o Department of Computer Science
University of Melbourne
Parkville,
Melbourne,
Australia 3052.
--------
J.T Teh
"He is no fool who gives up what he cannot keep to gain what he cannot lose."
- James Elliot
------------------------------
Date: Sun, 10 May 87 16:27:31 BST
From: "G. Joly" (Birkbeck) <gjoly@Cs.Ucl.AC.UK>
Subject: Re: IJCAI information (Vol 5 # 111).
Can I echo Chang Bang in a request for info on IJCAI-87? I would like
to know if there is any financial support available from IJCAII (the
sponsors). I note that US citizens can get support for air travel from
abroad (this was announced in the digest).
I have not seen any mention of sources of support in the conference
brochure. (I have applied to local (U.K.) sources for partial support.)
Gordon Joly,
Computer Science,
Birkbeck College,
Malet Street,
LONDON WC1E 7HX.
+44 1 631 6468
ARPA: gjoly@cs.ucl.ac.uk
BITNET: UBACW59%uk.ac.bbk.cu@AC.UK
UUCP: ...!seismo!mvcax!ukc!bbk-cs!gordon
------------------------------
Date: Mon 11 May 87 10:09:11-PDT
From: Georgia Navarro <NAVARRO@Stripe.SRI.COM>
Subject: Early Registration for IJCAI
[Forwarded by Laws@STRIPE.SRI.COM.]
The deadline for early registration for IJCAI in Milan
is JUNE 15. The registration fee has to be paid in lire. IT TAKES AT LEAST 3
WEEKS TO GET A CHECK FROM THE BofA. Also, there is a special rate on hotel
accommodations, but that check is supposed to be there no later than May 30.
[...]
------------------------------
Date: Sat, 9 May 87 11:45:17 EDT
From: harnad@Princeton.EDU
Subject: The Symbol Grounding Problem
[Also forwarded to the Neuron Digest. -- KIL]
To define a SUBsymbolic "level" rather than merely a NONsymbolic
process or phenomenon one needs a formal justification for the implied
up/down-ness of the relationship. In the paradigm case -- the
hardware/software distinction and the hierarchy of compiled
programming languages -- the requisite formal basis for the hierarchy is
quite explicit. It is the relation of compilation and implementation.
Higher-level languages are formally compiled into lower level ones
and the lowest is implemented as instructions that are executed by a
machine. Is there anything in the relation of connectionist processes
to symbolic ones that justifies calling the former "sub"-symbolic in
anything other than a hopeful metaphorical sense at this time?
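As an illustration of that paradigm case, here is a minimal sketch in
Python (not part of the original argument; all names are invented) of
what "formally compiled into a lower level and executed by a machine"
amounts to: a higher-level expression is translated into a lower-level
instruction sequence, which a toy stack machine then executes.

    def compile_expr(expr):
        """Compile a nested tuple such as ('+', 'a', ('*', 'b', 'c'))
        into lower-level stack-machine instructions."""
        if isinstance(expr, (int, float)):
            return [("PUSH", expr)]
        if isinstance(expr, str):
            return [("LOAD", expr)]
        op, left, right = expr
        return compile_expr(left) + compile_expr(right) + [("OP", op)]

    def run(code, env):
        """Execute the compiled instructions on a toy stack machine."""
        import operator
        ops = {"+": operator.add, "-": operator.sub, "*": operator.mul}
        stack = []
        for instr, arg in code:
            if instr == "PUSH":
                stack.append(arg)
            elif instr == "LOAD":
                stack.append(env[arg])
            else:  # "OP": apply the operator to the top two stack entries
                right_val, left_val = stack.pop(), stack.pop()
                stack.append(ops[arg](left_val, right_val))
        return stack.pop()

    # The high-level expression a + b * c, lowered and then executed:
    code = compile_expr(("+", "a", ("*", "b", "c")))
    print(run(code, {"a": 1, "b": 2, "c": 3}))   # prints 7

Here the down/up relation is explicit and formal: each level is defined
by a translation into the one below it.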
The fact that IF neural processes are really connectionistic (an
empirical hypothesis) THEN connectionist models are implementable in
the brain defines a super/sub relationship between connectionist
models and neural processes (conditional, of course, on the validity
-- far from established or even suggested by existing evidence -- of
the empirical hypothesis), but this would still have no bearing on
whether connectionism can be considered to stand in a sub/super relationship
to a symbolic "level." There is of course also the fact that any discrete
physical process is formally equivalent in its input/output relations
to some Turing machine state, i.e., some symbolic state. But that would
make every such physical process "subsymbolic," so surely Turing
equivalence cannot be the requisite justification for the putative
subsymbolic status of connectionism in particular.
[Has Turing-equivalence of connectionist systems been established?
My understanding is that asynchronous analog systems need not be
"discrete physical processes" or finite algorithms. -- KIL]
A fourth sense of down-up (besides hardware/software, neural
implementability and Turing-equivalence) is psychophysical
down-upness. According to my own bottom-up model, presented in the book I
just edited (Categorical Perception, Cambridge University Press 1987),
symbols can be "grounded" in nonsymbolic representations in the
following specific way:
Sensory input generates (1) iconic representations -- continuous,
isomorphic analogs of the sensory surfaces. Iconic representations
subserve relative discrimination performance (telling pairs of things
apart and judging how similar they are).
Next, constraints on categorization (e.g., either natural
discontinuities in the input, innate discontinuities in the internal
representation, or, most important, discontinuities *learned* on the
basis of input sampling, sorting and labeling with feedback) generate
(2) categorical representations -- constructive A/D filters which preserve
the invariant sensory features that are sufficient to subserve reliable
categorization performance. [It is in the process of *finding* the
invariant features in a given context of confusable alternatives that I
believe connectionist processes may come in.] Categorical
representations subserve identification performance (sorting things
and naming them).
Finally, the *labels* of these labeled categories -- now *grounded*
bottom/up in nonsymbolic representations (iconic and categorical)
derived from sensory experience -- can then be combined and recombined
in (3) symbolic representations of the kind used (exclusively, and
without grounding) in contemporary symbolic AI approaches. Symbolic
representations subserve natural language and all knowledge and
learning by *description* as opposed to direct experiential
acquaintance.
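To make the intended ordering concrete, here is a minimal sketch in
Python (not part of the original model; the prototypes, data and
matching rule are invented for illustration) of the three levels just
described: an iconic analog of the sensory input, a categorical filter
that sorts inputs into labeled categories, and symbol tokens that can
then be combined in descriptions.

    import numpy as np

    def iconic(sensory_input):
        """(1) Iconic representation: a continuous analog copy of the
        sensory surface, supporting discrimination (similarity judgments)."""
        return np.asarray(sensory_input, dtype=float)

    def categorical(icon, prototypes):
        """(2) Categorical representation: a filter that keeps only what is
        needed to sort the input reliably into one of the learned categories
        (here, crudely, nearest-prototype matching)."""
        distances = {label: np.linalg.norm(icon - proto)
                     for label, proto in prototypes.items()}
        return min(distances, key=distances.get)

    def symbolic(label, propositions):
        """(3) Symbolic representation: the arbitrary token naming the
        category can be combined with other tokens in rule-governed
        descriptions."""
        return [p for p in propositions if label in p]

    # Toy usage: two learned category prototypes and two stored descriptions.
    prototypes = {"apple": np.array([1.0, 0.2]), "banana": np.array([0.1, 1.0])}
    propositions = [("apple", "is-a", "fruit"), ("banana", "is-a", "fruit")]
    icon = iconic([0.9, 0.3])
    label = categorical(icon, prototypes)     # -> "apple"
    print(symbolic(label, propositions))      # -> [('apple', 'is-a', 'fruit')]

In this sketch the token "apple" is arbitrary, but its use is grounded
bottom-up in the nonsymbolic iconic and categorical representations.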
In response to my challenge to justify the "sub" in "subsymbolic" when
one wishes to characterize connectionism as subsymbolic rather than
just nonsymbolic, rik%roland@sdcsvax.ucsd.edu (Rik Belew) replies:
> I do intend something more than non-symbolic when I use the term
> sub-symbolic. I do not rely upon "hopeful neural analogies" or any
> other form of hardware/software distinction. I use "subsymbolic"
> to refer to a level of representation below the symbolic
> representations typically used in AI... I also intend to connote
> a supporting relationship between the levels, with subsymbolic
> representations being used to construct symbolic ones (as in subatomic).
The problem is that the "below" and the "supporting" are not cashed
in, and hence just seem to be synonyms for "sub," which remains to
be justified. An explicit bottom-up hypothesis is needed to
characterize just how the symbolic representations are constructed out
of the "subsymbolic" ones. (The "subatomic" analogy won't do,
otherwise atoms risk becoming subsymbolic too...) Dr. Belew expresses
some sympathy for my own grounding hypothesis, but it is not clear
that he is relying on it for the justification of his own "sub."
Moreover, this would make connectionism's subsymbolic status
conditional on the validity of a particular grounding hypothesis
(i.e., that three representational levels exist as I described them,
in the specific relation I described, and that connectionistic
processes are the means of extracting the invariant features underlying
the categorical [subsymbolic] representation). I would of course be
delighted if my hypothesis turned out to be right, but at this point
it still seems a rather risky "ground" for justifying the "sub" status of
connectionism.
> my interest in symbols began with the question of how a system might
> learn truly new symbols. I see nothing in the traditional AI
> definitions of symbol that helps me with that problem.
The traditional AI definition of symbol is simply arbitrary formal
tokens in a formal symbol system, governed by formal syntactic rules
for symbol manipulation. This general notion is not unique to AI but
comes from the formal theory of computation. There is certainly a
sense of "new" that this captures, namely, novel recombinations of
prior symbols, according to the syntactic rules for combination and
recombination. And that sense is certainly too vague and general to capture,
say, the human senses of "symbol" and "new symbol." In my model this combinatorial
property does make the production of new symbols possible, in a sense.
But combinatorics is limited by several factors. One factor is the grounding
problem, already discussed (symbols alone just generate an ungrounded,
formal syntactic circle that there is no way of breaking out of, just as
in trying to learn Chinese from a Chinese-Chinese dictionary alone). Other
limiting factors on combinatorics are combinatory explosion, the frame problem,
the credit assignment problem and all the other variants that I have
conjectured to be just different aspects of the problem of the
*underdetermination* of theory by data. Pure symbol combinatorics
certainly cannot contend with these. The final "newness" problem is of
course that of creativity -- the stuff that, by definition, is not
derivable by some prior rule from your existing symbolic repertoire. A
rule for handling that would be self-contradictory; the real source of
such newness is probably partly statistical, and again connectionism may
be one of the candidate components.
> It seems very conceivable to me that the critical property we will
> choose to ascribe to computational objects in our systems, symbols,
> is that we (i.e., people) can understand their semantic content.
You are right, and what I had inadvertently left out of my prior
(standard) syntactic definition of symbols and symbol manipulation was
of course that the symbols and manipulations must be semantically
interpretable. Unfortunately, so far that further fact has only led to
Searlian mysteries about "intrinsic" vs. "derived intentionality" and
scepticism about the possibility of capturing mental processes
with computational ones. My grounding proposal is meant to answer
these as well.
> the fact that symbols must be grounded in the *experience* of the
> cognitive system suggests why symbols in artificial systems (like
> computers) will be fundamentally different from those arising in
> natural systems (like people)... if your grounding hypothesis is
> correct (as I believe it is) and the symbols thus generated are based
> in a fundamental way on the machine's experience, I see no reason to
> believe that the resulting symbols will be comprehensible to people.
> [e.g., interpretations of hidden units... as our systems get more
> complex]
This is why I've laid such emphasis on the "Total Turing Test": toy
models and modules, based on restricted data and performance
capacities, may simply not be representative of and comparable to
organisms' complexly interrelated robotic and symbolic
functional capacities. The experiential base -- and, more
important, the performance capacity -- must be comparable in a viable
model of cognition. On the other hand, the "experience" I'm talking
about is merely the direct (nonsymbolic) sensory input history, *not*
"conscious experience." I'm a methodological epiphenomenalist on
that. And I don't understand the part about the comprehensibility of
machine symbols to people. This may be the ambiguity of the symbolic
status of putative "subsymbolic" representations again.
> The experience lying behind a word like "apple" is so different
> for any human from that of any machine that I find it very unlikely
> that the "apple" symbol used by these two system will be comparable.
I agree. But this is why I proposed that a candidate device must pass
the Total Turing Test in order to be said to capture mental function.
Arbitrary pieces of performance could be accomplished in radically different
ways and would hence be noncomparable with our own.
> Based on the grounding hypothesis, if computers are ever to understand
> NL as fully as humans, they must have an equally vast corpus of
> experience from which to draw. We propose that the huge volumes of NL
> text managed by IR systems provide exactly the corpus of "experience"
> needed for such understanding. Each word in every document in an IR
> system constitutes a separate experiential "data point" about what
> that word means. (We also recognize, however, that the obvious
> differences between the text-based "experience" and the human
> experience also imply fundamental limits on NL understanding
> derived from this source.)... In this application the computer's
> experience of the world is second-hand, via documents written by
> people about the world and subsequently through users' queries of
> the system
We cannot be talking about the same grounding hypothesis, because mine
is based on *direct sensory experience* ("learning by acquaintance")
as opposed to the symbol combinations ("learning by description"),
with which it is explicitly contrasted, and which my hypothesis
claims must be *grounded* in the former. The difference between
text-based and sensory experience is crucial indeed, but for both
humans and machines. Sensory input is nonsymbolic and first-hand;
textual information is symbolic and second-hand. First things first.
> I'm a bit worried that there is a basic contradiction in grounded
> symbols. You are suggesting (and I've been agreeing) that the only
> useful notion of symbols requires that they have "inherent
> intentionality": i.e., that there is a relatively direct connection
> between them and the world they denote. Yet almost every definition
> of symbols requires that the correspondence between the symbol and
> its referent be *arbitrary*. It seems, therefore, that your "symbols"
> correspond more closely to *icons* (as defined by Peirce), which
> do have such direct correspondences, than to symbols. Would you agree?
I'm afraid I must disagree. As I indicated earlier, icons do indeed
play a role in my proposal, but they are not the symbols. They merely
provide part of the (nonsymbolic) *groundwork* for the symbols. The
symbol tokens are indeed arbitrary. Their relation to the world is
grounded in and mediated by the (nonsymbolic) iconic and categorical
representations.
> In terms of computerized knowledge representations, I think we have
> need of both icons and symbols...
And reliable categorical invariance filters. And a principled
bottom-up grounding relation among them.
> I see connectionist learning systems building representational objects
> that seem most like icons. I see traditional AI knowledge
> representation languages typically using symbols and indices. One of
> the questions that most interests me at the moment is the appropriate
> "ontogenetic ordering" for these three classes of representation.
> I think the answer would have clear consequences for this discussion
> of the relationship between connectionist and symbolic representations
> in AI.
I see analog transformations of the sensory surfaces as the best
candidates for icons, and connectionist learning systems as
possible candidates for the process that finds and extracts the invariant
features underlying categorical representations. I agree about traditional
AI and symbols, and my grounding hypothesis is intended as an answer about
the appropriate "ontogenetic ordering."
> Finally, this view also helps to characterize what I find missing
> in most *symbolic* approaches to machine learning: the world
> "experienced" by these systems is unrealistically barren, composed
> of relatively small numbers of relatively simple percepts (describing
> blocks-world arches, or poker hands, for example). The appealing
> aspect of connectionist learning systems (and other subsymbolic
> learning approaches...) is that they thrive in exactly those
> situations where the system's base of "experience" is richer by
> several orders of magnitude. This accounts for the basically
> *statistical* nature of these algorithms (to which you've referred),
> since they are attempting to build representations that account for
> statistically significant regularities in their massive base of
> experience.
Toy models and microworlds are indeed barren, unrealistic and probably
unrepresentative. We should work toward models that can pass the Total
Turing Test. Invariance-detection under conditions of high
interconfusability is indeed the problem of a device or organism that
learns its categories from experience. If connectionism turns out to
be able to do this on a life-size scale, it will certainly be a
powerful candidate component in the processes underlying our
representational architecture, especially the categorical level. What
that architecture is, and whether this is indeed the precise
justification for connectionism's "sub" status, remains to be seen.
Stevan Harnad
{seismo, psuvax1, bellcore, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.princeton.edu
(609)-921-7771
------------------------------
End of AIList Digest
********************