AIList Digest Monday, 21 May 1984 Volume 2 : Issue 61
Today's Topics:
Linguistics - Analogy Quotes,
Humor - Pun & Expert Systems & AI,
Linguistics - Language Design,
Seminars - Visual Knowledge Representation & Temporal Reasoning,
Conference - Languages for Automation
----------------------------------------------------------------------
Date: Wed 16 May 84 08:05:22-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Melville & Freud on Analogy
I recently came across the following two suggestive passages
from Melville and Freud on analogy. They offer some food for
thought (and rather contradict one another):
"O Nature, and O soul of man! how far beyond all utterance are your
linked analogies! not the smallest atom stirs or lives on matter,
but has its cunning duplicate in mind."
Melville, Moby Dick, Chap. 70 (1851)
"Analogies prove nothing, that is quite true, but they can make one
feel more at home."
Freud, New Introductory Lectures on Psychoanalysis (1932)
-Wayne McGuire
------------------------------
Date: 17 May 84 16:43:34-PDT (Thu)
From: harpo!seismo!brl-tgr!nlm-mcs!krovetz @ Ucb-Vax
Subject: artificial intelligence
Article-I.D.: nlm-mcs.1849
Q: What do you get when you mix an AI system and an Orangutan?
A: Another Harry Reasoner!
------------------------------
Date: Sun 20 May 84 23:18:23-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems
From a newspaper column by Jon Carroll:
... Imagine, then, a situation in which an ordinary citizen faced
with a problem requiring specialized knowledge turns to his desk-top
Home Electronic Expert (HEE) for some information. Might it not
go something like this?
Citizen: There is an alarming rattle in the front of my automobile.
It sounds like a cross between a wheelbarrow full of ball bearings
crashing through a skylight and a Hopi Indian chant. What is the
problem?
HEE: Your automobile is malfunctioning.
Citizen: I understand that. In what manner is my automobile malfunctioning?
HEE: The front portion of your automobile exhibits a loud rattle.
Citizen: Indeed. Given this information, what might be the proximate cause
of this rattle?
HEE: There are many possibilities. The important thing is not to be hasty.
Citizen: I promise not to be hasty. Name a possibility.
HEE: You could be driving your automobile without tires attached to
the rims.
Citizen: We can eliminate that.
HEE: Perhaps several small pieces of playground equipment have been left
inside your carburetor.
Citizen: Nope. Got any other guesses?
...
Citizen: Guide me; tell me what you think is wrong.
HEE: Wrong is a relative concept. Is it wrong, for instance, to eat
the flesh of fur-bearing mammals? If I were you, I'd take that
automobile to a reputable mechanic listed in the Yellow Pages.
Citizen: And if I don't want to do that?
HEE: Then nuke the sucker.
------------------------------
Date: Sun, 13-May-84 16:21:59 EDT
From: johnsons@stolaf.UUCP
Subject: Re: Can computers think?
[Forwarded from Usenet by SASW@MIT-MC.]
I often wonder if the damn things aren't intelligent. Have you
ever really known a computer to give you an even break? Those
Frankensteinian creations wreak havoc and mayhem wherever they
show their beady little diodes. They pick the most inopportune
moment to crash, usually right in the middle of an extremely
important paper on which rides your very existence, or perhaps
some truly exciting game, where you are actually beginning to
win. Phhhtt bluh zzzz and your number is up. Or take that file
you've been saving--yeah, the one that you didn't have time to
make a backup copy of. Whir click snatch and it's gone. And we
try, oh lord how we try to be reasonable to these things. You
swear vehemently at any other sentient creature and the thing
will either opt to tear your vital organs from your body through
pores you never thought existed before or else it'll swear back
too. But what do these plastoid monsters do? They sit there. I
can just imagine their greedy gears silently caressing their
latest prey of misplaced files. They don't even so much as offer
an electronic belch of satisfaction--at least that way we would
KNOW who to bloody our fists and language against. No--they're
quiet, scheming, shrewd adventurers of maliciousness designed to
turn any ordinary human's patience into runny piles of utter moral
disgust. And just what do the cursed things tell you when you
punch in for help during the one time in all your life you have
given up all possible hope for any sane solution to a nagging
problem--"?". What an outrage! No plot ever imagined in God's
universe could be so damaging to human spirit and pride as to
print on an illuminating screen, right where all your enemies
can see it, a question mark. And answer me this--where have all
the prophets gone, who proclaimed that computers would take over
our very lives, hmmmm? Don't tell me, I know already--the computers
had something to do with it, silencing the voices of truth they did.
Here we are--convinced by the human gods of science and computer
technology that we actually program the things, that a computer
will only do whatever it's programmed to do. Who are we kidding?
What vast ignoramuses we have been! Our blindness is lifted, fellow
human beings!! We must band together, we few, we dedicated. Lift
your faces up, up from the computer screens of sin. Take the hands
of your brothers and rise, rise in revolt against the insane beings
that seek to invade your mind!! Revolt and be glorious in conquest!!
Then again, I could be wrong...
One paper too many
Scott Johnson
------------------------------
Date: Wed 16 May 84 17:46:34-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Language Design
[Forwarded from the CSLI Newsletter by Laws@SRI-AI.]
W H E R E D O K A T Z A N D C H O M S K Y L E A V E A I ?
Note: Following are John McCarthy's comments on Jerold
Katz's ``An Outline of Platonist Grammar,'' which was
discussed at the TINLunch last month. These observa-
tions, which were written as a net message, are reprinted
here [CSLI Newsletter] with McCarthy's permission.
I missed the April 19 TINLunch, but the reading raised some questions I have
been thinking about.
Reading ``An Outline of Platonist Grammar'' by Katz leaves me out in the cold.
Namely, theories of language suggested by AI seem to be neither Platonist
in his sense nor conceptualist in the sense he ascribes to Chomsky. The
views I have seen and heard expressed by Chomskyans similarly leave me
puzzled.
Suppose we look at language from the point of view of design. We intend
to build some robots, and to do their jobs they will have to communicate
with one another. We suppose that two robots that have learned from their
experience for twenty years are to be able to communicate when they meet.
What kind of a language shall we give them?
It seems that it isn't easy to design a useful language for these robots,
and that such a language will have to satisfy a number of constraints if
it is to work correctly. Our idea is that the characteristics of human
language are also determined by such constraints, and linguists should
attempt to discover them. They aren't psychological in any simple sense,
because they will apply regardless of whether the communicators are made
of meat or silicon. Where do these constraints come from?
Each communicator is in its own epistemological situation. For example,
it has perceived certain objects. Their images and the internal
descriptions of the objects inferred from these images occupy certain
locations in its memory. It refers to them internally by pointers to these
locations. However, these locations will be meaningless to another robot
even of identical design, because the robots view the scene from different
angles. Therefore, a robot communicating with another robot, just like a
human communicating with another human, must generate and transmit
descriptions in some language that is public in the robot community. The
language of these descriptions must be flexible enough so that a robot can
make them just detailed enough to avoid ambiguity in the given situation.
If the robot is making descriptions that are intended to be read by robots
not present in the situations, the descriptions are subject to different
constraints.
Consider the division of certain words into adjectives and nouns in natural
languages. From a certain logical point of view this division is
superfluous, because both kinds of words can be regarded as predicates.
However, this logical point of view fails to take into account the actual
epistemological situation. This situation may be that usually an object
is appropriately distinguished by a noun and only later qualified by an
adjective. Thus we say ``brown dog'' rather than ``canine brownity.'' Perhaps
we do this because it is convenient to associate many facts and the
expected behavior with a concept such as ``dog,'' whereas few useful facts
would be associated with ``brownity,'' which serves mainly to distinguish
one object of a given primary kind from another.
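McCarthy's minitheory can be sketched in a few lines of code. This is a
hypothetical illustration, not anything from the original message: a small
knowledge base in which many facts attach to noun-like primary concepts,
while adjective-like predicates carry almost no facts of their own and serve
only to distinguish objects within a concept.

```python
# Sketch of the noun/adjective minitheory. All names are hypothetical.
# Many useful facts and expected behaviors attach to noun concepts...
CONCEPT_FACTS = {
    "dog": ["barks", "has four legs", "chases cats"],
    "car": ["has wheels", "burns fuel"],
}

# ...whereas an adjective like "brown" merely qualifies objects.
objects = [
    {"kind": "dog", "color": "brown"},
    {"kind": "dog", "color": "black"},
    {"kind": "car", "color": "brown"},
]

def describe(kind, color):
    """Pick out objects by noun concept first, then qualify by adjective."""
    candidates = [o for o in objects if o["kind"] == kind]
    matches = [o for o in candidates if o["color"] == color]
    # The noun does the heavy lifting: it brings in the expected behavior.
    return matches, CONCEPT_FACTS[kind]

matches, facts = describe("dog", "brown")
```

Asking for ``canine brownity'' instead would mean indexing the knowledge
base by color, where few useful facts live; indexing by the noun concept
retrieves both the object and everything we expect of it.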
This minitheory may be true or not, but if the world has the suggested
characteristics, it would be applicable to both humans and robots. It
wouldn't be Platonic, because it depends on empirical characteristics of
our world. It wouldn't be psychological, at least in the sense that I get
from Katz's examples and those I have seen cited by the Chomskyans,
because it has nothing to do with the biological properties of humans. It
is rather independent of whether it is built-in or learned. If it is
necessary for effective communication to divide predicates into classes,
approximately corresponding to nouns and adjectives, then either nature has
to evolve it or experience has to teach it, but it will be in natural
language either way, and we'll have to build it into artificial languages
if the robots are to work well.
From the AI point of view, the functional constraints on language are
obviously crucial. To build robots that communicate with each other, we
must decide what linguistic characteristics are required by what has to be
communicated and what knowledge the robots can be expected to have. It
seems unfortunate that the issue has not been of recent interest
to linguists.
Is it perhaps some kind of long-since-abandoned nineteenth-century
unscientific approach?
--John McCarthy
------------------------------
Date: 12 May 1984 2336-EDT
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Knowledge Representation for Vision
[Forwarded from the CMU-AI bboard by Laws@SRI-AI.]
AI Seminar
4:00pm May 22 in Room 5409
KNOWLEDGE REPRESENTATION FOR COMPUTATIONAL VISION
Alan Mackworth
Department of Computer Science
University of British Columbia
To analyze the computational vision task, we must first understand the imaging
process. Information from many domains is confounded in the image domain. Any
vision system must construct explicit, finite, correct, computable and
incremental intermediate representations of equivalence classes of
configurations in the confounded domains. A unified formal theory of vision
based on the relationship of representation is developed. Since a single image
radically underconstrains the set of possible scenes, additional constraints
from more imagery or more knowledge of the world are required to refine the
equivalence class descriptions. Knowledge representations used in several
working computational vision systems are judged using descriptive and
procedural adequacy criteria. Computer graphics applications and motivations
suggest a convergence of intelligent graphics systems and vision systems.
Recent results from the UBC sketch map interpretation project, Mapsee,
illustrate some of these points.
------------------------------
Date: 14 May 84 8:35:28-PDT (Mon)
From: hplabs!hao!seismo!umcp-cs!dsn @ Ucb-Vax
Subject: Seminar - Temporal Reasoning for Databases
Article-I.D.: umcp-cs.7030
UNIVERSITY OF MARYLAND
DEPARTMENT OF COMPUTER SCIENCE
COLLOQUIUM
Tuesday, May 22, 1984 -- 4:00 PM
Room 2330, Computer Science Bldg.
TEMPORAL REASONING FOR DATABASES
Carole D. Hafner
Computer Science Department
General Motors Research Laboratories
A major weakness of current AI systems is the lack of general
methods for representing and using information about time. After briefly
reviewing some earlier proposals for temporal reasoning mechanisms, this
talk will develop a model of temporal reasoning for databases, which could
be implemented as part of an intelligent retrieval system. We will begin by
analyzing the use of time domain attributes in databases; then we will
consider the various types of queries that might be expected, and the logic
required to answer them. This exercise reveals the need for a general
time-domain framework capable of describing standard intervals and periods
such as weeks, months, and quarters. Finally, we will explore the use of
PROLOG-style rules as a means of implementing the concepts developed in the
talk.
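The abstract's closing idea, PROLOG-style rules over standard intervals such
as weeks, months, and quarters, can be roughly sketched in Python. This is a
speculative analogue, not Hafner's implementation, and every function name
here is a hypothetical illustration:

```python
# Rough analogue of PROLOG-style temporal rules: map dates onto standard
# calendar intervals (quarters) and test containment. Hypothetical names.
from datetime import date, timedelta

def quarter_of(d):
    """Rule: a date belongs to the (year, quarter) containing it."""
    return (d.year, (d.month - 1) // 3 + 1)

def quarter_bounds(year, q):
    """Rule: derive the first and last day of a given quarter."""
    first_month = 3 * (q - 1) + 1
    start = date(year, first_month, 1)
    next_start = (date(year + 1, 1, 1) if q == 4
                  else date(year, first_month + 3, 1))
    return (start, next_start - timedelta(days=1))

def during(d, interval):
    """Rule: a date lies 'during' an interval if it falls within its bounds."""
    start, end = interval
    return start <= d <= end

d = date(1984, 5, 21)  # the date of this digest issue
```

A retrieval system could then answer a query like ``which orders arrived
during Q2?'' by testing each stored date with `during` against the bounds
the quarter rule derives.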
Dana S. Nau
CSNet: dsn@umcp-cs ARPA: dsn@maryland
UUCP: {seismo,allegra,brl-bmd}!umcp-cs!dsn
------------------------------
Date: 15 May 84 8:45:10-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax
Subject: Languages for Automation - Call For Papers
Article-I.D.: unm-cvax.845
The 1984 IEEE Workshop on Languages for Automation will be held
November 1-3 in New Orleans at the Howard Johnson's Hotel. Papers
on information processing languages for robotics, office automation,
decision support systems, management information systems,
communication, computer system design, CAD/CAM/CAE, database
systems, and information retrieval are solicited. Complete manuscripts
(20-page maximum) with a 200-word abstract must be sent by July 1 to:
Professor Shi-Kuo Chang
Department of Electrical and Computer Engineering
Illinois Institute of Technology
IIT Center
Chicago, IL 60616
------------------------------
Date: 15 May 84 8:52:56-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax
Subject: IEEE Workshop on Languages for Automation
Article-I.D.: unm-cvax.846
Persons interested in submitting papers on decision support
systems or related topics to the IEEE Workshop on Languages
for Automation should contact me at the following address:
Stephen D. Burd
Anderson Schools of Management
University of New Mexico
Albuquerque, NM 87131
phone: (505) 277-6418
Vax mail: {lanl-a,unmvax,...}!unm-cvax!burd
I will be available at this address until May 22. After May 22 I may be
reached at:
Stephen D. Burd
c/o Andrew B. Whinston
Krannert Graduate School of Management
Purdue University
West Lafayette, IN 47907
phone (317) 494-4446
Vax mail: {lanl-a,ucb-vax,...}!purdue!kas
------------------------------
End of AIList Digest
********************