AIList Digest             Monday, 9 Nov 1987      Volume 5 : Issue 262 

Today's Topics:
Queries - Michael O. Rabin & Blackboard Sources & AI Programming Texts,
Neuromorphic Systems - Shift-Invariant Neural Nets for Speech Recognition,
Misc. - Indexing Schemes,
Applications - Speech Recognition,
Comments - Goal of AI & Humanist, Physicist, and Symbolic Models of the Mind

----------------------------------------------------------------------

Date: Thu, 5 Nov 87 08:56 EST
From: Araman@BCO-MULTICS.ARPA
Subject: Michael O. Rabin - location

One of my friends sent me this message. If anyone knows Mr. Rabin, or
if Mr. Rabin is reading this message, could you please send a response
to

Bensoussan -at BCO-Multics.ARPA

thanks

Date: Wednesday, 4 November 1987 10:03 est
From: Bensoussan
Subject: Michael O. Rabin
To: Araman

Does anyone know Michael O. Rabin's address? An AI award is waiting
for him!

A friend of mine, Monica Pavel, asked me to find him. My friend teaches
a class on Pattern Recognition at Paris University, and she has given
several classes and presentations in Japan. The Japanese government
decided to make an AI award available and asked her to select the
person who should receive it. Since she was impressed by one of Rabin's
publications, she selected him to receive the award... that is, if she
can find him.

Can anyone in the AI community help locate him?

------------------------------

Date: 6 Nov 87 23:46:39 GMT
From: teknowledge-vaxc!jlevy@beaver.cs.washington.edu (sleeze hack)
Subject: Shopping list of sources wanted

I'm looking for the following:

1. Sample blackboard systems
Ideally, small blackboard systems written using a blackboard
tool of some kind, but no examples refused! I'd like these to
use as test cases for various blackboard work I do. The only
good examples I've seen are the two "AGE Example Series" by
Nii & Co. at Stanford's HPP.

2. A frame system in C (or maybe PASCAL)
Something like a C translation of the PFL code published in AI
EXPERT, Dec. 1986 by Finin.

3. A yacc grammar for English or any subset of English
If someone has yaccized Tomita's "Efficient Parsing for Natural
Language", that would be ideal.

These are in order of importance. I might be willing to pay for the
sample blackboard systems. I'm posting this to comp.sources.wanted and
comp.ai because I think it belongs in both, and there is minimal overlap
in readership between the two. If I'm wrong, sorry.

Thanks in advance.

Name: Joshua Levy (415) 424-0500x357
Disclaimer: Teknowledge can plausibly deny everything.
Pithy Quote: "Give me action, not words."
jlevy@teknowledge-vaxc.arpa or {uunet|sun|ucbvax}!jlevy%teknowledge-vaxc.arpa

------------------------------

Date: 6 Nov 87 18:01:05 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu (Marty Hall)
Subject: AI Programming texts?

I am teaching an AI Programming course at Johns Hopkins this coming
semester, and was wondering whether people who have taught or taken a
similar course have any suggestions for texts. The course
will use Common LISP applied to AI programming problems. The
students have an Intro AI course as a prerequisite, and have only mild
exposure to LISP (Franz) at the end of that course. Both the
AI Programming course and the Intro are supposed to be graduate
level, but would probably be undergrad level in the day school.

My thoughts so far were to use the second edition of Charniak,
Riesbeck, et al.'s "Artificial Intelligence Programming", along with
Wilensky's "Common LISPCraft". Steele (CLtL) will be included
as an optional reference.

Any alternate suggestions? Send E-mail, and if there is a consensus,
I would be glad to post it to the net.

Thanks!
- Marty Hall
hall@hopkins-eecs-bravo.arpa

------------------------------

Date: Fri, 30 Oct 87 20:31:32+0900
From: kddlab!atr-la.atr.junet!waibel@uunet.UU.NET (Alex Waibel)
Subject: Shift-Invariant Neural Nets for Speech Recognition

A few weeks ago there was a discussion on AI-list about connectionist
(neural) networks being afflicted by an inability to handle shifted patterns.
Indeed, shift-invariance is of critical importance to applications such as
speech recognition. Without it, a speech recognition system has to rely
on precise segmentation, and in practice reliable, error-free segmentation
cannot be achieved. For this reason, methods such as dynamic time warping
and now Hidden Markov Models have been very successful and have achieved
high recognition performance. Standard neural nets have done well in speech
so far, but due to this lack of shift-invariance (as discussed on AI-list),
a number of these nets have been limping along in comparison to these
other techniques.

Recently, we have implemented a time-delay neural network (TDNN) here at
ATR, Japan, and demonstrated that it is shift-invariant. We have applied
it to speech and compared it to the best of our Hidden Markov Models. The
results show that its error rate is a factor of four lower than that of
the best of our Hidden Markov Models.
The abstract of our report follows:

Phoneme Recognition Using Time-Delay Neural Networks

A. Waibel, T. Hanazawa, G. Hinton^, K. Shikano, K. Lang*
ATR Interpreting Telephony Research Laboratories

Abstract

In this paper we present a Time Delay Neural Network (TDNN) approach
to phoneme recognition which is characterized by two important
properties: 1.) Using a 3 layer arrangement of simple computing
units, a hierarchy can be constructed that allows for the formation
of arbitrary nonlinear decision surfaces. The TDNN learns these
decision surfaces automatically using error backpropagation.
2.) The time-delay arrangement enables the network to discover
acoustic-phonetic features and the temporal relationships between
them independent of position in time and hence not blurred by
temporal shifts in the input.

As a recognition task, the speaker-dependent recognition of the
phonemes "B", "D", and "G" in varying phonetic contexts was chosen.
For comparison, several discrete Hidden Markov Models (HMM) were
trained to perform the same task. Performance evaluation over 1946
testing tokens from three speakers showed that the TDNN achieves a
recognition rate of 98.5 % correct while the rate obtained by the
best of our HMMs was only 93.7 %. Closer inspection reveals that
the network "invented" well-known acoustic-phonetic features (e.g.,
F2-rise, F2-fall, vowel-onset) as useful abstractions. It also
developed alternate internal representations to link different
acoustic realizations to the same concept.

^ University of Toronto
* Carnegie-Mellon University
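
To give a concrete feel for the time-delay arrangement, here is a
minimal sketch in Python. It is an illustration only, not the network
reported above: the layer sizes, the tanh nonlinearity, and all names
are invented, and only the idea of applying one shared set of weights
to every window of consecutive input frames is taken from the abstract.

import numpy as np

def tdnn_layer(frames, weights, bias):
    """frames: (T, n_coeff); weights: (delay, n_coeff, n_units).
    The same weights scan every window of `delay` consecutive
    frames, so a pattern excites the same units wherever it
    occurs in time."""
    delay, n_coeff, n_units = weights.shape
    T = frames.shape[0]
    out = np.empty((T - delay + 1, n_units))
    for t in range(T - delay + 1):
        out[t] = np.tanh(np.tensordot(frames[t:t + delay], weights,
                                      axes=2) + bias)
    return out

rng = np.random.default_rng(0)
w, b = rng.normal(size=(3, 16, 8)), np.zeros(8)
pattern = rng.normal(size=(5, 16))
early = np.zeros((20, 16)); early[2:7] = pattern   # pattern at frame 2
late = np.zeros((20, 16));  late[10:15] = pattern  # same pattern at frame 10
a1, a2 = tdnn_layer(early, w, b), tdnn_layer(late, w, b)
# Identical activations, merely displaced by 8 frames; summing over
# time (as a TDNN's output stage does) then removes the shift.
print(np.allclose(a1[0:10], a2[8:18]))              # True
print(np.allclose(a1.sum(axis=0), a2.sum(axis=0)))  # True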

For copies please write or contact:
Dr. Alex Waibel
ATR Interpreting Telephony Research Laboratories
Twin 21 MID Tower, 2-1-61 Shiromi, Higashi-ku
Osaka, 540, Japan
phone: +81-6-949-1830
Please send Email to my net-address at Carnegie-Mellon University:
ahw@CAD.CS.CMU.EDU

------------------------------

Date: 5 Nov 87 17:11:52 GMT
From: dbrauer@humu.nosc.mil (David L. Brauer)
Reply-to: dbrauer@humu.nosc.mil (David C. Brauer)
Subject: Indexing Schemes


Regarding the recent request for keyword/indexing schemes for AI
literature: look up the April 1985 issue of Applied Artificial Intelligence
Reporter. It contains an article describing the AI classification scheme
used by Scientific DataLink when compiling their collections of research
reports.

------------------------------

Date: Fri, 6 Nov 87 10:29:44 EST
From: hafner%corwin.ccs.northeastern.edu@RELAY.CS.NET
Subject: Practical effects of AI


In AIList V5 #255 Bruce Kirby asked what practical effects AI will have
in the next 10 years, and how that will affect society, business, and
government.

One practical effect that I expect to see is the integration of logic
programming with database technology, producing new deductive databases
that will replace traditional databases. (In my vision, in 15 years
no one will want to buy a database management system that does not support
a Prolog-like data definition and query language.)
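
To make the idea concrete, here is a toy sketch of what "deductive"
adds over a traditional database. It is plain Python, not any
product's query language, and the relation and rule are invented for
illustration: a set of base facts plus one Prolog-style recursive
rule, evaluated bottom-up to a fixpoint.

# Base facts: reports_to(X, Y) means X reports directly to Y.
reports_to = {("alice", "bob"), ("bob", "carol"), ("carol", "dana")}

def above(base):
    """Deduce above(X, Z) from the rules
         above(X, Z) :- reports_to(X, Z).
         above(X, Z) :- reports_to(X, Y), above(Y, Z)."""
    derived = set(base)
    while True:
        new = {(x, z) for (x, y) in base
                      for (y2, z) in derived if y == y2} - derived
        if not new:
            return derived        # fixpoint: nothing left to deduce
        derived |= new

# Query: everyone above alice. A conventional relational system
# would need explicit recursion or repeated self-joins for this.
print(sorted(z for (x, z) in above(reports_to) if x == "alice"))
# -> ['bob', 'carol', 'dana']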

David H. D. Warren wrote a paper on this in the VLDB conference in 1981,
and the database research community is busy trying to work out the details
right now. Of course, the closer this idea comes to a usable technology,
the less AIish it seems to many people.

I can speculate on how this will affect society, business, and government:
it will make many new applications of databases possible, for management,
manufacturing, planning, etc. Right now, database technology is
very hard to use effectively for complex applications. (Many application
projects are never successfully completed - they are crushed by the complexity
of getting them working right. Ordinary programmers simply can't hack
these applications, and brilliant programmers don't want to.)

Deductive databases will be so much easier to create, maintain, and use
that computers will finally be able to fulfill their promise of making
complex organizations more manageable. White collar productivity will
be improved beyond anyone's current expectations.

A negative side effect of this development (along with personal computers
and office automation) will be serious unemployment in the white collar
work force. The large administrative and middle management work force
will shrink permanently, just as the large industrial work force has.

All of the above, of course, is simply an opinion, backed up (I hope)
by common sense.

Carole Hafner
csnet: hafner@northeastern.edu

------------------------------

Date: 8 Nov 87 17:14:19 GMT
From: PT.CS.CMU.EDU!SPEECH2.CS.CMU.EDU!kfl@cs.rochester.edu (Kai-Fu Lee)
Subject: Re: Practical effects of AI (speech)

In article <930001@hpfcmp.HP.COM>, gt@hpfcmp.HP.COM (George Tatge) writes:
> >
> >(1) Speaker-independent continuous speech is much farther from reality
> > than some companies would have you think. Currently, the best
> > speech recognizer is IBM's Tangora, which makes about 6% errors
> > on a 20,000 word vocabulary. But the Tangora is for speaker-
> > dependent, isolated-word, grammar-guided recognition in a benign
> > environment. . . .
> >
> >Kai-Fu Lee
>
> Just curious what the definition of "best" is. For example, I have seen
> 6% error rates and better on grammar specific, speaker dependent, continuous
> speech recognition. I would guess that for some applications this is
> better than the "best" described above.
>

"Best" is not measured in terms of error rate alone. More effort and
new technologies have gone into the IBM's system than any other system,
and I believe that it will do better than any other system on a comparable
task. I guess this definition is subjective, but I think if you asked other
speech researchers, you will find that most people believe the same.

I know many commercial (and research) systems have lower error rates
than 6%. But you have to remember that the IBM system works on a 20,000
word vocabulary, and their grammar is a very loose one, accepting
arbitrary sentences in office correspondence. Their grammar has a
perplexity (the number of choices at each decision point, roughly
speaking) of several hundred. Nobody else has such a large vocabulary
or such a difficult grammar.
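
To pin the term down: perplexity is two raised to the entropy (in
bits) of the distribution over the next word, and it reduces to the
plain number of choices when those choices are equally likely. A few
lines of Python make this concrete (the numbers are illustrative,
not IBM's):

import math

def perplexity(probs):
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)  # bits
    return 2 ** entropy

print(perplexity([1/5] * 5))      # ~5:   a tight grammar, 5 choices
print(perplexity([1/400] * 400))  # ~400: a loose grammar, 400 choices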

IBM has experimented with tasks like the one you mentioned. In 1978,
they tried a 1000-word task with a very tight grammar (perplexity = 5?),
the same task CMU used on Hearsay and Harpy. They achieved a 0.1% error
rate.

> George (floundering in superlative ambiguity) Tatge

Kai-Fu Lee

------------------------------

Date: 29 Oct 87 14:22:46 GMT
From: clyde!watmath!utgpu!utcsri!utegc!utai!murrayw@rutgers.edu (Murray Watt)
Subject: Re: Goal of AI: where are we going? (the right way?)

In article <2072@cci632.UUCP> mdl@cci632.UUCP (Michael Liss) writes:
>I read an interesting article recently which had the title:
>"If AI = The Human Brain, Cars Should Have Legs"
>
>The author's premise was that most of our other machines that mimic human
>abilities do not do so through strict copying of our physical processes.
>
>What we have done, in the case of the automobile, is to make use of wheels and
>axles and the internal combustion engine to produce a transportation device
>which owes nothing to the study of human legs.
>
>In the case of AI, he states that artificial intelligence should not be
>assumed to be the equivalent of human intelligence and thus, the dissection of
>the human mind's functionality will not necessarily yield a solution to AI.
>
"THE USE AND MISUSE OF ANALOGIES"

Transportation (or movement) is not a property unique to human beings.
If one refines the goal, the analogy flips sides.
If the goal is to design a device that can climb rocky hills, it may
have something like legs. If the goal is to design a device that can
fly, it may have something like wings. (Okay, so they're not the same
type of wings, but what about streamlining?)

AS I UNDERSTAND IT, one goal of AI is to design systems that perform well
in areas where the human brain performs well. Current computer systems can
do some things (like add numbers) better than we can. I would not suggest
creating an A.I. system for generating telephone bills! However, don't
tell me that understanding the human brain doesn't tell me anything about
natural language!

The more analogies I see, the less I like them. However, they seem handy
for convincing the masses of completely false doctrines.

e.g. "Jesus accepted food and shelter from his friends, so sign over
your paycheck to me."
(I am waiting Michael) 8-)

Murray Watt (murrayw@utai.toronto.edu)

The views of my colleagues do not necessarily reflect my opinions.

------------------------------

Date: Fri, 6 Nov 87 02:44:05 PST
From: larry@VLSI.JPL.NASA.GOV
Subject: Success/Future of AI

NATURAL ENTITIES AS PROTOTYPES

Much of the confusion about the nature of intelligence seems to
be the result of dealing with it at abstraction levels that are
too low.

At a low level of detail an aircraft is obviously drastically
different from a bird, leading to the conclusion that a study of
birds has no relevance to aeronautical science. At a higher
level the relevance becomes obvious: air-flow over the chord of
birds' and aircrafts' wings produces lift in exactly the same
way. Understanding this process was crucial to properly
designing the first aircrafts' wings.

Once the basic form+function was understood, engineers could
produce artificial variations that surpassed those found in
nature--though with numerous trade-offs. Construction and repair
of artificial wings, for instance, are much more labor- and
capital-intensive.

Understanding birds' wings helped in other ways. Analytically
separating the lift and propulsion functions of wings allowed us
to create jet aircraft; combining them in creative ways gave us
rocket-ships (where propulsion IS lift) and helicopters.

THE NATURE OF INTELLIGENCE

The understanding of intelligence is less advanced than that of
flight, but some progress HAS been made. The quotes from Robert
Frost illuminate the basic nature of intelligence: the creation,
exploration, and manipulation, within an entity, of a model of the
Universe. He labels this model and its parts "metaphor." I
prefer "analog."

The mechanism that holds the analog we call memory. Though low-
level details (HOW memory works) are important, it is much more
important to first understand WHAT memory does. For instance,
there is a lot of evidence that there are several kinds of
memory, describable along several dimensions. One dimension,
obviously, is time.

This has a number of consequences that have nothing to do with,
for instance, the fact that deci-second visual memory works via
interactions of photons with visual purple. Eyes that used a
different storage mechanism but had the same black-box
characteristics (latency, bandwidth, communication protocol,
etc.) would present the same image to their owner.

One consequence of the time dimension of human memory is that
memory decays in certain ways. Conventionally memory units that
do not forget are considered good, yet forgetting is as important
as retention. Forgetting emphasizes the important by hiding the
unimportant; it supports generalization because essential
similarities are not obscured by inessential differences.
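
As a toy sketch of that point (my illustration, not any published
memory model; the decay constant and threshold are arbitrary): give
each remembered item an activation that decays exponentially and is
boosted on each re-encounter, and only recurring regularities stay
above the recall threshold.

import math

DECAY = 0.5   # per time step; an arbitrary illustrative constant

class DecayingMemory:
    def __init__(self):
        self.trace = {}    # item -> (strength, time last seen)
        self.now = 0

    def _current(self, strength, last_seen):
        return strength * math.exp(-DECAY * (self.now - last_seen))

    def see(self, item):
        s, t = self.trace.get(item, (0.0, self.now))
        self.trace[item] = (self._current(s, t) + 1.0, self.now)

    def tick(self):
        self.now += 1

    def recall(self, threshold=0.5):
        return [i for i, (s, t) in self.trace.items()
                if self._current(s, t) >= threshold]

m = DecayingMemory()
for step in range(10):
    m.see("essential-similarity")        # reinforced at every step
    if step == 0:
        m.see("inessential-difference")  # seen once, never again
    m.tick()
print(m.recall())   # ['essential-similarity'] -- the one-off has faded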

MECHANICAL NATURE OF INTELLIGENCE

There have been other real advances in scientifically understanding
intelligence, but I believe the above is enough to convince
the convincible. As to whether human intelligence is
mechanical--this depends on one's perception of machines. When
the word is used as an insult it usually calls up last-century
paradigms: the steam engine and other rigid, simple machines. I
prefer to think of the human hand, which can be soft and warm, or
the heart, which is a marvel of reliability and adaptability.

Scientific models of the mind can (and to be accurate, must) use
the more modern "warmware" paradigm rather than the idiotic
hand-calc simplicity of Behaviorism. One example is my memory-mask
model of creativity (discussed here a year ago).

ART AND INTELLIGENCE

The previous comments have (I perhaps naively believe) a direct
relevance to the near-future of AI. That can't be said of this
last section but I can't resist adding it. Though professionally
a software engineer, I consider myself primarily an artist (in
fiction-writing and a visual medium). This inside view and my
studies have convinced me over the years that art and cognition
are much closer than is widely recognized.

For one thing, art is as pervasive in human lives as air--though
this may not be obvious to those who think of haute culture when
they see/hear the word. Think of all the people in this
country who take a boombox/Walkman/stereo with them wherever they
stroll/jog/drive. True, the sound-maker often satisfies because
it gives an illusion of companionship, but it is more often
simply hedonically satisfying--though their "music" may sound
like audio-ordure to others. Think of all the doodling people
do, the small artworks they make (pastries, knitting,
sand-castles, Christmas trees, candy-striped Camaros), the photos
and advertising posters they tape to walls.

Art enhances our survival and evolution as a species, partly
because it is a source of pleasure that gives us another reason
for living. It also has intellectual elements. Poetic rules are
mnemonic enhancers, as all who survived high-school English know,
though nowadays these rules are most often used in prose, and so
reflexively, that they aren't recognized even by their users.

Artistic rules are also cognitive enhancers. One way they do
this is with a careful balance of predictability and surprise;
regularity decreases the amount of attention needed to remember
and process data, while discontinuities shock us enough to keep
us alert. Breaks can also focus attention where an artist
desires.

Larry @ jpl-vlsi

------------------------------

Date: Fri 6 Nov 87 12:47:35-EST
From: Albert Boulanger <ABOULANGER@G.BBN.COM>
Subject: Humanist, Physicist, and Symbolic Models of the Mind


Pat Hayes puts forth the view that the symbolic computational
model of the mind can bridge the gap between science and a
humanistic outlook. I see a FURTHER exciting bridge being
built, one that is actually more pervasive than just models of the
mind. Should the physicist's model of the mind be any different
from what one does when building models that use symbolic
representations? It is becoming clear that the answer is no.
There is a profound change happening in the natural sciences:
we are accepting non-linear phenomena for what they are. Amazing
behavior occurs in non-linear dynamical systems--behavior that is
changing the view of the world as simple rules with followable
outcomes. We now know that we can have simple rules with amazingly
complex behavior. Deterministic randomness sounds contradictory at
first, but it is a concept that non-linear phenomena are forcing us
to accept. The manifold emergent phenomena in non-linear systems,
including self-organization, are humbling. They provide the setting
in which we can see symbolic representations emerge. This should
not be too surprising, since the computers we build to host
computational models of the mind using symbolic representations
are themselves made of a very restrictive class of non-linear
switching circuits.
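
The canonical small example (an illustration, not from the note
above) is the logistic map: a one-line deterministic rule whose
trajectories at r = 4 are chaotic, so two starting points one part
in a billion apart soon wander completely independently.

def logistic(x, r=4.0, steps=40):
    # x -> r * x * (1 - x), iterated: deterministic at every step,
    # yet exquisitely sensitive to the initial condition.
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

print(logistic(0.300000000))   # two nearby starting points...
print(logistic(0.300000001))   # ...end up nowhere near each other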


Albert Boulanger
BBN Labs

------------------------------

End of AIList Digest
********************
