AIList Digest           Wednesday, 29 Aug 1984    Volume 2 : Issue 111 

Today's Topics:
Hardware - Touch Screen,
Games - Chess Notation,
Conferences - AAAI-84 Review,
Hardware - Cellular Logic,
AI Tools - Taxonomy Assistant,
Speech Understanding - Hearsay II,
Seminar - Speech Acts as Summaries of Plans
----------------------------------------------------------------------

Date: 20 August 1984 16:43-EDT
From: Roland Ouellette <ROLY @ MIT-MC>
Subject: Who knows about TOUCH SCREEN?

My group wants to buy a touch screen for our Symbolics 3600. I would
appreciate any information about interfacing one to a 3600 or any
other machine. Please also send me reviews (whose products are
great and whose aren't so hot), prices, and anything else you might
think of. If you could also send me information about whom to get in
touch with (i.e., address and/or phone), that would be fantastic.

Send mail to Roly at MIT-MC.ARPA

Many thanks in advance,
Roland Ouellette

------------------------------

Date: 26 August 1984 06:17-EDT
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: number-cruncher vs. humans: 9th move

Query: is there a program that can convert from algebraic
notation to descriptive notation? I learned P-K4 and like that,
and there is no possibility that I will ever have an intuitive
feel for cxd4 and the like. Can it be converted for those of us
who are algebraic cripples?
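
The coordinate mapping itself is mechanical; a rough Python sketch,
handling only simple piece and pawn moves and using invented names
(captures like cxd4 need the full game state to name the captured
piece), might look like:

    # Map algebraic files a..h onto descriptive file names, and flip
    # rank numbers when Black moves (descriptive ranks count from the
    # mover's side of the board).
    FILES = ["QR", "QN", "QB", "Q", "K", "KB", "KN", "KR"]  # files a..h

    def algebraic_to_descriptive(move, white_to_move=True):
        piece = move[0] if move[0] in "NBRQK" else "P"
        square = move if piece == "P" else move[1:]
        rank = int(square[1])
        if not white_to_move:
            rank = 9 - rank              # e.g. Black's ...e5 is P-K4
        return f"{piece}-{FILES[ord(square[0]) - ord('a')]}{rank}"

    # algebraic_to_descriptive("e4")  -> "P-K4"
    # algebraic_to_descriptive("Nf3") -> "N-KB3"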

------------------------------

Date: 21 Aug 84 13:38:10-PDT (Tue)
From: ihnp4!mhuxl!mhuxm!sftig!sfmag!eagle!prem @ Ucb-Vax.arpa
Subject: AAAI-84 - a short subjective report
Article-I.D.: eagle.1187

The feelings evoked by the tremendous increase in interest, funding, media
participation and products available are best described by this excerpt from
W. B. Yeats:

"And what rough beast, its hour come round at last,
Slouches toward Bethlehem to be born? "

- W. B. Yeats, from "The Second Coming"

allegra!eagle!prem

------------------------------

Date: 9 Aug 84 10:23:00-PDT (Thu)
From: hplabs!hp-pcd!hpfcnml!robert @ Ucb-Vax.arpa
Subject: Re: Hardware Implementations of Cellular
Article-I.D.: hpfcnml.3400002

The latest issue of Discover magazine mentions a hardware
implementation of cellular automata in its article on the topic.
Interesting, readily available, light reading.

-Robert (animal) Heckendorn
hplabs!hpfcla!robert

------------------------------

Date: Fri 24 Aug 84 22:26:52-EDT
From: Wayne McGuire <MDC.WAYNE%MIT-OZ@MIT-MC.ARPA>
Subject: Taxonomy Assistant

The problems of systematically representing the conceptual
relations among a set of abstract objects in any knowledge domain go
right to the heart of much leading-edge AI research. All inferencing
is based, among other things, on implicit taxonomic understanding.

It seems to me that the knowledge-base management systems which I
hope we will see developed in the near future will have embedded in
them rich resources for evoking and representing taxonomies. Semantic
nets provide an ideal scheme with which to do just that.
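
As a rough illustration (the concepts and relation names below are
invented), a semantic net can be as simple as concept nodes joined by
labeled edges, with taxonomic queries as walks over those edges:

    # Toy semantic net: nodes are concepts, edges are labeled relations.
    net = {
        ("alligator", "is-a"): "reptile",
        ("reptile", "is-a"): "animal",
        ("scales", "part-of"): "reptile",
    }

    def isa_chain(concept, net):
        """Follow is-a links upward to list a concept's ancestors."""
        chain = []
        while (concept, "is-a") in net:
            concept = net[(concept, "is-a")]
            chain.append(concept)
        return chain

    # isa_chain("alligator", net) -> ["reptile", "animal"]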

The most useful thinking about taxonomies and classification
theory appears not in the computer science literature, but at the
interface of library science, information science, and philosophy. The
leading journal in the field is _International Classification_ (which
should be available in any world class humanities library). It is
published (as I recall) three times a year, and is chock-full of
pointers to articles, books, dissertations, etc. in the world
literature on all aspects of classification theory.

You might want to scan the following subject headings in some
recent editions of the index _Library Literature_ (published by H. W.
Wilson): Classification analysis, Subject headings, Thesauri. File 57
(Philosopher's Index) and File 61 (LISA -- Library and Information
Science Abstracts) on Dialog are also fertile sources of information
on the literature about taxonomies and classification theory. There
are many insights in the theoretical writings on classification theory
in the library science literature which could be handily transferred
to advanced AI research and systems.

It occurs to me that what we need is a _meta-taxonomy_, that is,
a thorough inventory of all the fundamental conceptual structures by
which objects in _any_ domain can be taxonomically related.

One way a taxonomy assistant might operate is to combine each and
every significant term in a knowledge domain with every other term,
and offer a list of possible relations with which to tag each offered
matching set. Someday, perhaps, we will be able to buy off the shelf
"taxonomy packs" (dynamic thesauri) in many domains.

-- Wayne --

------------------------------

Date: Sat, 25 Aug 84 01:36 EDT
From: Sidney Markowitz <sidney%MIT-OZ@MIT-MC.ARPA>
Subject: Hearsay II question in AIList Digest V2 #110


    Date: 22 Aug 1984 22:05:18-PDT
    From: doshi%umn-cs.csnet@csnet-relay.arpa
    Subject: Question about HEARSAY-II.

    I have a question about the HEARSAY-II system [Erman et al. 1980].

    What exactly is the HEARSAY system required/supposed to do?
    That is, what is the meaning of the phrase
    "Speech Understanding system"?

I am not familiar with the HEARSAY-II system; however, I am answering
your question based on the following lines from the quotes you
provided, and on some comments of yours that indicate you are not
familiar with certain points of view common among natural language
researchers. The quotes:

(1) page 213: "The HEARSAY-II reconstructs an intention ...."
(2) on the strong syntactic/semantic/task constraints
(3) with a slightly artificial syntax and highly constrained task
(4) tolerating less than 10% semantic error

Researchers pretty much agree that in order to understand natural
language, we need an understanding of the meaning and context of the
communication. It is not enough to simply look up words in a
dictionary and/or apply rules of grammar to sentences. A classic
example is the pair of sentences "Time flies like an arrow." and
"Fruit flies like a banana." The problem with speech is even worse --
it turns out that even to separate the syllables in continuous speech
you need to have some understanding of what the speaker is talking
about! You can discover this for yourself by trying to hear the sounds
of the words when someone is speaking a foreign language. You can't
even repeat them correctly as nonsense syllables.

What this implies is an approach to speech recognition that goes
beyond pattern recognition to include understanding of utterances.
This in turn implies that the system has some understanding of the
"world view" of the speaker, i.e., common sense knowledge and the
probable intentions of the speaker. AI researchers have attempted to
make the problem tractable by restricting the "domain" of a system. A
famous example is the "blocks world" used by Terry Winograd in his
doctoral thesis on a natural language understanding system, SHRDLU.
All SHRDLU knew about was its little world of various shapes and
colors of blocks, its robot arm, and the possible actions and
interactions of those elements. Given those limitations, and the
additional assumption that anything said to it was either a question
about the state of its world or else a command, Winograd was able to
devise a system in which syntax, semantics, and task performance all
interacted. For example, an ambiguity in syntax could be resolved if
only one grammatical interpretation made semantic sense.
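
As a toy illustration of that last point, here is a sketch in Python
(the world model and parse representations are invented, not SHRDLU's):

    # Syntax proposes several parses; a toy world model vetoes those
    # whose referents don't exist, resolving the ambiguity.
    WORLD = {"blue cube", "red pyramid"}        # objects on the table

    def disambiguate(parses, world=WORLD):
        """Keep only parses whose referents all exist in the world."""
        viable = [reading for reading, referents in parses
                  if all(r in world for r in referents)]
        return viable[0] if len(viable) == 1 else None

    # Two bracketings of one command; only the second names real objects.
    candidates = [
        ("stack [the cube on the pyramid] ...", ["cube on pyramid"]),
        ("stack [the blue cube] on [the red pyramid]",
         ["blue cube", "red pyramid"]),
    ]
    # disambiguate(candidates)
    #   -> "stack [the blue cube] on [the red pyramid]"
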
You can see how this approach is implied by the four quotes above.
With this as background, let's proceed to your questions...


    Let me explain my confusion with examples. Does the system do one
    of the following:
    1) Accept speech as input; then try to output what(ever) was
       spoken or might have been spoken?
    2) Or, accept speech as input and UNDERSTAND it?
    Now, 1) above is, I think, speech RECOGNITION. DARPA did not want
    just that.

    Then, what is(are) the meaning(s) of UNDERSTAND?
    - If I say "Alligators can fly", should the system repeat this
      and also tell me that it is "not true"; is this called
      UNDERSTANDING?
    - If I say "I go house", should the system repeat this and also
      add that there is a "grammatical error"; is this called
      UNDERSTANDING?
    - Or, if HAYES-ROTH claims "I am ERMAN", the system should say
      "No, you are not ERMAN" - I don't think that HEARSAY was
      supposed to do this (it does not have Vision etc.). But you
      will agree that that is also UNDERSTANDING. Note that the above
      claim by HAYES-ROTH would be true if:
      - he had changed his last name
      - he was merely QUOTING what ERMAN might have said somewhere
      - etc.

    In light of the above examples, what does it mean to say that
    HEARSAY-II understands speech?


The references to "tasks" in the quotes you provided are a clue that
the authors are thinking of "understanding" in terms of the ability to
perform a task that is requested by the speaker. The examples in your
questions are statements that would need to be reframed as tasks. It
is possible that the system could be set up so that a statement like
"Alligators can fly" is an implied command to add that fact to the
knowledge base, perhaps first checking for contradictions. But you
probably ought to think of an example of a restricted task domain
first, and then think about what "understanding" would mean in that
context. For example, given a blocks world domain the system might
respond to a statement such as "Place a blue cube on the red pyramid"
by saying "I can't place anything on top of a pyramid". There's much
that can be done with modelling the speaker's intentions and
assumptions which would affect the sophistication of the resulting
system, but that's the general idea.
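
A minimal sketch of that implied-command idea, with an invented fact
representation:

    # Treat a declarative as an implied "add this fact" command,
    # checking the knowledge base for a contradiction first.
    KB = {("alligator", "can-fly"): False}  # prior: alligators can't fly

    def assert_fact(subject, predicate, value, kb=KB):
        """Add a fact, or report a contradiction with what is known."""
        prior = kb.get((subject, predicate))
        if prior is not None and prior != value:
            return f"Contradiction: I believe {subject} {predicate} " \
                   f"is {prior}"
        kb[(subject, predicate)] = value
        return "Noted."

    # assert_fact("alligator", "can-fly", True) -> contradiction message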

-- Sidney Markowitz <sidney%mit-oz@mit-mc.ARPA>

------------------------------

Date: 28 Aug 1984 15:31-EDT
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Speech Acts as Summaries of Plans

[Forwarded from the MIT bboard by SASW@MIT-MC.]


Speech Acts as Summaries of Plans

Phil Cohen

SRI International
and
Center for the Study of Language and Information
Stanford University


BBN AI Seminar
10:30 a.m. on Wednesday, September 5th
Third floor large conference room at 10 Moulton St., Cambridge.

Many theories of communication require a hearer to determine what
illocutionary act(s) (IA's) the speaker performed in making each
utterance. This talk will sketch joint work with Hector Levesque that
aims to call this presumption into question, at least for some
kinds of illocutionary acts. Such acts will be shown to be definable
on a "substrate" of interacting plans --- i.e., as beliefs about the
conversants' shared knowledge of the speaker's and hearer's goals and
the causal consequences of achieving those goals. In this formalism,
illocutionary acts are no longer conceptually primitive, but rather
amount to theorems that can be proven about a state-of-affairs. The
important point here is that the definition of, say, a request is
derived from an independently-motivated theory of action, rather than
stipulated. Just as one need not determine if a proof corresponds to
a prior lemma, a hearer need not actually characterize the
consequences of each utterance in terms of the IA theorems, but may
simply infer and respond to the speaker's goals. However, the hearer
could retrospectively summarize a complex of utterances as satisfying
an illocutionary act.

This move of defining illocutionary acts in terms of plans may
alleviate a number of technical obstacles in applying speech act
theory to extended discourse. It formally characterizes a range of
indirect requests in terms of conversants' plans, and demonstrates
how certain conventionalized forms can be derived from and integrated
with plan-based reasoning. Finally, it gives a formal foundation to the
view that speech act characterizations of discourse are not
necessarily those of the conversants but rather are the work of the
theorist.

------------------------------

End of AIList Digest
********************
