AIList Digest            Friday, 26 Sep 1986      Volume 4 : Issue 199 

Today's Topics:
Review - Canadian Artificial Intelligence, June 1986,
Philosophy - Intelligence, Consciousness, and Intentionality

----------------------------------------------------------------------

Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Canadian Artificial Intelligence, June 1986

Summary:

Report of Outgoing and Incoming Presidents

Interact R&D Starts AI division

Review of the 1986 Canadian AI Conference in Montreal. It had 375
people registered. The best paper was by James Delgrande of Simon
Fraser University.

The Canadian Society for Computational Studies of Intelligence is
now up to 800 members, from 250 two years ago. (This count was taken
before including people who became members upon paying non-member fees
at the Canadian AI conference.)

Proceedings of the 1986 Conference cost $30.00

Contents

Why Kids Should Learn to Program,
Elliot Soloway, Yale University
Generative Structure in Enumerative Learning Systems
Robert C. Holte, Brunel University,
R. Michael Warton, York University
Detecting Analogous Learning
Ken Wellsch, Marlene Jones of University of Waterloo
GUMS: A General User Modeling System
Tim Finin, University of Pennsylvania
Dave Drager, Arity Corporation
An Efficient Tableau-Based Theorem Prover
Franz Oppacher, Ed Suen of Carleton University
Domain Circumscription Revisited
David Etherington, University of British Columbia
Robert Mercer, University of Western Ontario
A Propositional Logic for Natural Kinds
James Delgrande, Simon Fraser University
Fagin and Halpern on Logical Omniscience: A Critique with an Alternative
Robert F. Hadley, Simon Fraser University
Representing Contextual Dependencies in Discourse
Tomek Strzalkowski, Simon Fraser University
A Domain-Independent Natural Language Database Interface
Yawar Ali, Raymond Aubin, Barry Hall, Bell Northern Research
Natural Language Report Synthesis: An Application to Marine Weather Forecasts
R. Kittredge, A. Polguere of Universite de Montreal
E. Goldberg Environment Canada
What's in an Answer: A Theoretical Perspective on Deductive Question Answering
Lenhart Schubert, L. Watanabe of University of Alberta
A New Implementation for Generalized Phrase Structure Grammar
Philip Harrison, Michael Maxwell Boeing Artificial Intelligence Center
TRACK: Toward a Robust Natural Language Interface
Sandra Carberry, University of Delaware
Representation of Negative and Incomplete Information in Prolog
Kwok Hung Chan, University of Western Ontario
On the Logic of Representing Dependencies by Graphs,
Judea Pearl of University of California
Azaria Paz, Technion - Israel Institute of Technology
A Proposal of Modal Logic Programming (Extended Abstract)
Seiki Akama, Fujitsu Ltd., Japan
Classical Equality and Prolog
E. W. Elcock and P. Hoddinott of University of Western Ontario
Diagnosis of Non-Syntactic Programming Errors in the Scent Advisor
Gordon McCalla, Richard B. Bunt, Janelle J. Harms of University of
Saskatchewan
Using Relative Velocity Information to Constrain the Motion Correspondence
Problem
Michael Dawson and Zenon Pylyshyn, University of Western Ontario
Device Representation Using Instantiation Rules and Structural Templates
Mingruey R. Taie, Sargur N. Srihari, James Geller, Stuart C. Shapiro
of State University of New York at Buffalo
Machine Translation Between Chinese and English
Wanying Jin, University of Texas at Austin
Interword Constraints in Visual Word Recognition
Jonathan J. Hull, State University of New York at Buffalo
Sensitivity to Corners in Flow Patterns
Norah K. Link and Steve Zucker, McGill University
Stable Surface Estimation
Peter T. Sander, Steve Zucker, McGill University
Measuring Motion in Dynamic Images: A Clustering Approach
Amit Bandopadhay and R. Dutta, University of Rochester
Determining the 3-D Motion of a Rigid Surface Patch without Correspondence,
Under Perspective Projection
John Aloimonos and Isidore Rigoutsos, University of Rochester
Active Navigation
Amit Bandopadhay, Barun Chandra and Dana H. Ballard, University of Rochester
Combining Visual and Tactile Perception for Robotics
J. C. Rodger and Roger A. Browse, Queen's University
Observation on the Role of Constraints in Problem Solving
Mark Fox of Carnegie-Mellon University
Rule Interaction in Expert System Knowledge Bases
Stan Raatz, University of Pennsylvania
George Drastal, Rutgers University
Towards User-Specific Explanations from Expert Systems
Peter van Beek and Robin Cohen, University of Waterloo
DIALECT: An Expert Assistant for Information Retrieval
Jean-Claude Bassano, Universite de Paris-Sud
Subdivision of Knowledge for Igneous Rock Identification
Brian W. Otis, MIT Lincoln Lab
Eugene Freuder, University of New Hampshire
A Hybrid, Decidable, Logic-Based Knowledge Representation System
Peter Patel-Schneider, Schlumberger Palo Alto Research
The Generalized-Concept Formalism: A Frames and Logic Based Representation
Model
Mira Balaban, State University of New York at Albany
Knowledge Modules vs Knowledge-Bases: A Structure for Representing the
Granularity of Real-World Knowledge
Diego Lo Giudice and Piero Scaruffi, Olivetti Artificial Intelligence Center,
Italy
Reasoning in a Hierarchy of Deontic Defaults
Frank M. Brown, University of Kansas
Belief Revision in SNePS
Joao P. Martins, Instituto Superior Tecnico, Portugal
Stuart C. Shapiro, State University of New York at Buffalo
GENIAL: Un Generateur d'Interface en Langue Naturelle (A Natural Language
Interface Generator)
Bertrand Pelletier and Jean Vaucher, Universite de Montreal
Towards a Domain-Independent Method of Comparing Search Algorithm Run-times
H. W. Davis, R. B. Polack, D. J. Golden of Wright State University
Properties of Greedily Optimized Ordering Problems
Rina Dechter, Avi Dechter, University of California, Los Angeles
Mechanisms in ISFI: A Technical Overview (Short Form)
Gary A. Cleveland, The MITRE Corp.
Un Systeme Formel de Caracterisation de L'Evolution des Connaissances (A
Formal System for Characterizing the Evolution of Knowledge)
Eugene Chouraqui, Centre National de la Recherche Scientifique
Une Experience de l'Ingenierie de la Connaissance: CODIAPSY Developpe avec
HAMEX (A Knowledge Engineering Experiment: CODIAPSY Developed with HAMEX)
Michel Maury, A. M. Massote, Henri Betaille, J. C. Penochet and Michelle
Negre of CRIME and GRIP, Montpellier, France

__________________________________________________________________________
Report on University of Waterloo Research on Logic-Mediated Knowledge-Based
Personal Information Systems

They received a three-year $450,000 grant. They will prototype Theorist, a
Prolog-based system, in which they will implement a diagnostic system with a
natural language interface for complex systems, and a system to diagnose
children's reading disabilities. They will also develop a new Prolog in which
to write Theorist.

This group has already implemented DLOG, a "logic-based knowledge
representation system", and two Prologs (one of which will be distributed by
the University of Waterloo's Computer Systems Group); designed Theorist;
implemented an expert system for diagnosing reading disabilities (which will
be redone in Theorist); designed a new architecture for Prolog; and
implemented Concurrent Prolog.

__________________________________________________________________________
Reviews of John Haugeland's "Artificial Intelligence: The Very Idea",
"The Connection Machine" by W. Daniel Hillis, and "Models of the Visual
Cortex" by David Rose and Vernon G. Dobson

------------------------------

Date: 25 Sep 86 08:12:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Intelligence and Representation


This is in response to some points raised by Charles Kalish -
Allow a somewhat lengthy re-quotation to set the stage:

I think that Dennett (see "Brainstorms") is right in that intentions
are something we ascribe to systems and not something that is built in
or a part of that system. The problem then becomes justifying the use
of intentional descriptions for a machine; i.e. how can I justify my
claim that "the computer wants to take the opponent's queen" when the
skeptic responds that all that is happening is that the X procedure
has returned a value which causes the Y procedure to move piece A to
board position Q?...

I think the crucial issue in this question is how much (or whether)
the computer understands. The problem with systems now is that it is
too easy to say that the computer doesn't understand anything, it's
just manipulating markers. That is that any understanding is just
conventional -- we pretend that variable A means the Red Queen, but it
only means that to us (observers) not to the computer. ...

[Pirron's] idea is that you want to ground the computer's use of
symbols in some non-symbolic experience....

One is looking for pre-symbolic, biological constraints; Something
like Rosch's theory of basic levels of conceptualization. ....

The other point is that maybe we do have to stay within this symbolic
"prison-house" after all; even the biological concepts are still
represented, not actual (no food in the brain, just neuron firings).
The thing here is that, even though you could look into a person's
brain and, say, pick out the neural representation of a horse, to the
person with the open skull that's not a representation, it constitutes
a horse, it is a horse (from the point of view of the neural system).
And that's what's different about people and computers. ...

These seem to me the right sorts of questions to be asking - here's a stab
at a partial answer.

We should start with a clear notion of "representation" - what does it mean
to say that the word "rock" represents a rock, or that a picture of a rock
represents a rock, or that a Lisp symbol represents a chess piece?

I think Dennett would agree that X represents Y only relative to some
contextual language (very broadly construed as any halfway-coherent
set of correspondence rules), hopefully with the presence of
an interpreter. Eg, "rock" means rock in English to English-speakers.
opp-queen means opponent's queen in the mini-language set up by the
chess-playing program, as understood by the author. To see the point a
bit more, consider the word "rock" neatly typed out on a piece of paper
in a universe in which the English language does not and never will exist.
Or consider a computer running a chess-playing program (maybe against
another machine, if you like) in a universe devoid of conscious entities.
I would contend that such entities do not represent anything.

So, roughly, representation is a 4-place relation:

    R(representer,      represented,  language,            interpreter)
      "rock"            a rock        English              people
      picture of rock   a rock        visual similarity    people, maybe
                                                           some animals
      ...

and so on.
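
To make the shape of that relation concrete, here is a minimal Python
sketch (purely illustrative; the class name, field names, and example
rows below just restate the little table above):

    from typing import NamedTuple

    class Representation(NamedTuple):
        # One instance of the 4-place relation
        # R(representer, represented, language, interpreter).
        representer: str   # the thing doing the representing
        represented: str   # the thing represented
        language: str      # the correspondence rules, very broadly construed
        interpreter: str   # whoever is around to read the correspondence

    examples = [
        Representation('the word "rock"', "a rock", "English",
                       "English speakers"),
        Representation("picture of a rock", "a rock", "visual similarity",
                       "people, maybe some animals"),
        Representation("the Lisp symbol opp-queen", "the opponent's queen",
                       "the chess program's mini-language",
                       "the program's author"),
    ]

    for r in examples:
        print(f"{r.representer} represents {r.represented} "
              f"(language: {r.language}; interpreter: {r.interpreter})")

The only point the sketch makes visible is that the fourth slot has to be
fillable: in the empty-universe cases above there is no candidate for the
interpreter place, so the relation has no instance.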

Now... what seems to me to be different about people and computers is that
in the case of computers, meaning is derivative and conventional, whereas
for people it seems intrinsic and natural. (Huh?) ie, Searle's point is
well taken that even after we get the chess-playing program running, it
is still we who must be around to impute meaning to the opp-queen Lisp
symbol. And furthermore, the symbol could just as easily have been
queen-of-opponent. So for the four places of the representation relation
to get filled out, to ground the flying symbols, we still need people
to "watch" the two machines. By contrast two humans can have a perfectly
valid game of chess all by themselves, even if they're Adam and Eve.

Now people certainly make use of conventional as well as natural
symbol systems (like English, frinstance). But other representers in
our heads (like the perception of a horse, however encoded neurally)
seem to *intrinsically* represent. Ie, for the representation
relation, if "my perception of a horse" is the representer, and the
horse out there in the field is the represented thing, the language
seems to be a "special", natural one, namely the-language-of-normal-
veridical-perception. (BTW, it's *not* the case, as stated in
Charles's original posting, that the perception simply is the horse -
we are *not* different from computers with respect to
the-use-of-internal-things-to-represent-external-things.)
Further, it doesn't seem to make much sense at all to speak of an
"interpreter". If *I* see a horse, it seems a bit schizophrenic to
think of another part of myself as having to interpret that
perception. In any event, note that this is self-interpretation.

So people seem to be autonomous interpreters in a way that computers
are not (at least not yet). In Dennett's terminology, it seems that
I (and you) have the authority to adopt an intentional stance towards
various things (chess-playing machines, ailist readers, etc.),
*including* ourselves - certainly computers do not yet have this
"authority" to designate other things, much less themselves,
as intentional subjects.

Please treat the above as speculation, not as some kind of air-tight
argument (no danger of that anyway, right?)

John Cugini <Cugini@NBS-VMS>

------------------------------

Date: Thu 25 Sep 86 10:24:01-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Emergent Consciousness

Recent philosophical discussions on consciousness and intentionality
have made me wonder about the analogy between Man and Bureaucracy.
Imagine a large corporation. Without knowing the full internal chain of
command, an external observer could still deduce many of the following
characteristics.

1) The corporation is composed of hundreds of nearly identical units
(known as personnel), most of whom perform material-handling
or information-handling tasks. Although the tasks differ, the
processing units are essentially interchangeable.

2) The "intelligence" of this system is distributed -- proper functioning
of the organization requires cooperative action by many rational agents.
Many tasks can be carried out by small cliques of personnel without
coming to the attention of the rest of the system. Other tasks require
the cooperation of all elements.

3) Despite the similarity of the personnel, some are more "central" or
important than others. A reporter trying to discover what the
organization is "doing" or "planning" would not be content to talk
with a janitor or receptionist. Even the internal personnel recognize
this, and most would pass important queries or problems to more central
personnel rather than presume to discuss or set policy themselves.

4) The official corporate spokesman may be in contact with the most
central elements, but is not himself central. The spokesman is only
an output channel for decisions that occur much deeper or perhaps in a
distributed manner. Many other personnel seem to function as inputs or
effectors rather than decision makers.

5) The chief executive officer (CEO) or perhaps the chairman of the board
may regard the corporation as a personal extension. This individual
seems to be the most central, the "consciousness" of the organization.
To paraphrase Louis XIV, "I am the state."


It seems, therefore, that the organization has not only a distributed
intelligence but a localized consciousness. Certain processing elements
and their own thought processes control the overall behavior of the
bureaucracy in a special way, even though these elements (e.g., the CEO)
are physiologically indistinguishable from other personnel. They are
regarded as the seat of corporate consciousness by outsiders, insiders,
and themselves.

Consciousness is thus related to organizational function and information
flow rather than to personal function and characteristics. By analogy,
it is quite possible that the human brain contains a cluster of simple
neural "circuits" that constitute the seat of consciousness, even though
these circuits are indistinguishable in form and individual functioning
from all the other circuits in the brain. This central core, because of
its monitoring and control of the whole organism, has the right to
consider itself the sole autonomous agent. Other portions of the brain
would reject their own autonomy if they were equipped to even consider
the matter.
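
As a toy sketch of the structural point only (the names and the escalation
rule below are invented for illustration, not part of the analogy itself),
one can write down a population of units that run identical code and differ
only in where they sit in the routing graph; one of them ends up "central"
purely because the important traffic converges on it:

    import random

    class Unit:
        # Every unit runs exactly the same code; none is internally special.
        def __init__(self, name, superior=None):
            self.name = name
            self.superior = superior   # where this unit escalates important queries

        def handle(self, query, important=False):
            # Routine queries are settled locally; important ones are passed
            # upward, so they end up at whatever unit happens to sit at the top.
            if important and self.superior is not None:
                return self.superior.handle(query, important)
            return f"{self.name} decides: {query}"

    ceo = Unit("CEO")
    managers = [Unit(f"manager-{i}", superior=ceo) for i in range(3)]
    staff = [Unit(f"staff-{i}", superior=random.choice(managers))
             for i in range(9)]

    print(staff[0].handle("order paper clips"))                     # settled locally
    print(staff[0].handle("set corporate policy", important=True))  # reaches the CEO

The units are indistinguishable in form and individual functioning; the
"seat of consciousness" in the analogy is simply the node on which the
important queries converge.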

I thus regard consciousness as a natural emergent property of hierarchical
systems (and perhaps of other distributed systems). There is no need to
postulate a mind/body dualism or a separate soul. I can't explain how
this consciousness arises, nor am I comfortable with the paradox. But I
know that it does arise in any hierarchical organization of cooperating
rational agents, and I suspect that it can also arise in similar organizations
of nonrational agents such as neural nets or computer circuitry.

-- Ken Laws

------------------------------

Date: 25 Sep 1986 1626-EDT
From: Bruce Krulwich <KRULWICH@C.CS.CMU.EDU>
Subject: semantic knowledge


howdy.

i think there was a discussion on searle that i missed a month or so
ago, so this may be rehash. i disagree with searle's basic conjecture,
on which he bases all of his logic, namely that since computers represent
everything in terms of 1's and 0's they are by definition storing
knowledge syntactically and not semantically. this seems wrong to me.
as a simple counterexample, consider any old integer stored within a
computer. it may be stored as a string of bits, but the program
implicitly has the "semantic" knowledge that it is an integer.
similarly, the stored activation levels and connection strengths in a
connectionist model simulator (or better, in a true hardware
implementation) may be stored as a bunch of numerical values, but the
software (ie, the model, not the simulator) semantically "knows" what
each value is just as the brain knows the meaning of activation patterns
over neurons and synapses (or so goes the theory).
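
to make the integer counterexample concrete, here is a small python sketch
(illustrative only; the particular bit pattern is arbitrary): the very same
32 bits yield a different "meaning" depending on which type the program
implicitly takes them to be.

    import struct

    # one 32-bit pattern, stored as four raw bytes
    raw = struct.pack("<I", 1078530011)

    # the "knowledge" of what those bytes mean lives in the program, not the bytes
    as_int = struct.unpack("<I", raw)[0]    # read as an unsigned integer
    as_float = struct.unpack("<f", raw)[0]  # read as an IEEE-754 float

    print(as_int)    # 1078530011
    print(as_float)  # about 3.1415927 -- the same bits, under a different reading

the bits never change; only the program's implicit commitment about their
type does.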

i think the same can be said for data stored in a more conventional AI
program. in response to a recent post, i don't think that there is a
fundamental difference between a human's knowledge of a horse and a
computer's manipulation of the symbol it is using to represent it. the
only differences are the inherently associative nature of the brain and
the amount of knowledge stored in the brain. i think that it is these
two things that give us a "feel" for what a horse is when we think of
one, while most computer systems would make a small fraction of the
associations and would have much less knowledge and experience to
associate with. these are both computational differences, not
fundamental ones.

none of this is to say that we are close or getting close to a seriously
"intelligent" computer system. i just don't think that there are
fundamental philosophical barriers in our way.

bruce krulwich

arpa: krulwich@c.cs.cmu.edu
bitnet: krulwich%c.cs.cmu.edu@cmccvma

------------------------------

End of AIList Digest
********************
