AIList Digest             Sunday, 3 Jul 1988        Volume 8 : Issue 2 

Today's Topics:

No digests for one week

Philosophy:
On applying AI
replicating the brain with a Turing machine
metaepistemology
ASL vs dance
Auto_Suggestion?

Announcements:
Directions and Implications of Advanced Computing - DIAC-88
Intermediate Mechanisms For Activation Spreading

----------------------------------------------------------------------

Date: Sun, 3 Jul 88 00:49 EDT
Subject: No digests for one week


I have been unexpectedly called away for a period of about one
week. Unless I am lucky and manage to obtain net access during that
time, there will be no digests sent out.

Apologies to all.

- nick

------------------------------

Date: Fri, 1 Jul 88 14:18:14 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: ON applying AI

The following is excerpted without permission from The Boston Globe
Magazine for June 26, 1988, pp. 39-42 of the cover article by D. C.
Denison entitled "The ON team, software whiz Mitch Kapor's new venture":

In the conference room, where the ON team has assembled for a group
interview, a question is posed: What developments in artificial
intelligence have made their project possible?

The first response comes immediately: "The lack of progress."

Then William Woods, ON's principal technologist, takes a turn. "What
came out of artificial intelligence that's useful to us is kind of
like what came out of the space program that's useful for everybody on
Earth --"

"Velcro," someone interrupts.

"Tang." From the other side of the table. "Are we the Tang of
artificial intelligence?"

Woods continues undaunted. "Artificial intelligence has given us a
tool kit of engineering techniques. AI has been driven by people
who've been tilting at windmills, but their techniques are pretty good
for what we want to do."

<Paragraph on Bill's background & experience omitted>

Although the early promise of artificial-intelligence research has
been tempered--we still don't have computers that can understand
English or reason like a human expert--the possibilities are so
seductive, so intriguing, and so potentially profitable that the field
continues to attract some of the best minds in the computer field.
Which is why it wasn't surprising that when Mitchell Kapor left Lotus
two years ago, he became a visiting scholar at MIT's Center for
Cognitive Science, a leading center of artificial-intelligence-related
research. And it's not at all surprising that when Kapor and [Peter]
Miller put together the ON team, at least one AI veteran of Woods'
stature was part of the group.

Yet "artificial intelligence" has become such a buzzword, such an
umbrella term, that when the topic is brought up, experts such as
Woods and interested explorers such as Kapor take deep breaths and try
to redefine the terms of discussion.

"When you talk about AI," Kapor says, "you're talking about many, many
things at once: a body of research, certain kinds of goals and
aspirations that are characteristic of the people who are in it,
certain fields of inquiry; you've got soft stuff, you've got hard
stuff, you have mythology--the term 'AI' casts a broad shadow.

"I also think there's a reason why so much attention is paid to the AI
question in the nontechnical press," he continues, "and that is that
there are some very bombastic people in the AI community who have
spoken incredibly irresponsibly, who've made careers out of that. But
if you make the assumption that because AI gets a lot of attention in
the press there's a lot going on in the field, you might be making a
big mistake."

When Kapor and Woods first met soon after Kapor left Lotus, they
discovered that they shared a similar view of the value of current AI
research. First of all, they both felt that the goal most often
attributed to artificial-intelligence research--the creation of a
computer that "thinks" just like a human--was so remote as to be
essentially impossible. Kapor's experience at MIT had convinced him
that scientists still have no idea how people really think.
Therefore, any attempt to design a computer that works the way people
think is doomed.

A more realistic approach, according to Kapor, would be to design
computer programs that are compatible with the way people think, that
help amplify a person's intelligence rather than try to duplicate it.
Kapor felt that some artificial-intelligence techniques, when applied
to that goal, could be very powerful.

<Another bg paragraph omitted>

Last year, Woods, who was working at Applied Expert Systems,
discovered that Kapor had leased a floor in the same building, and he
began stopping in for informal conversations. Eventually, after
discussing the ON Technology project with Kapor and Miller, and
studying their business plan, he accepted the position of principal
technologist and moved his things down two floors into a large corner
office.

<paragraph omitted about micros such as Mac II getting more memory>

One of the possibilities [that opens up], which Woods will be actively
working on during the next two years, is a more sympathetic fit
between people and their computers. "I want to take an abstract
perspective of what people's mental machinery does very well and what
a machine can do well," Woods says, "and design ways that you can
couple the two together to complement each other. For example,
machines can do long sequences of complicated steps without leaving
out one. People will forget something with frequency. On the other
hand, people can walk down the street without falling in holes. To
get a mechanical artifact to do that is a challenge that hasn't even
been approximately approached after several decades of research."

Woods pauses to frame his thoughts. "But if you could get the right
interface technology and conceptual framework, on the machine side, to
match up with what people really want to do on our side--that would be
a very nice arrangement."

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Fri, 1 Jul 88 09:55:16 EDT
From: Duke Briscoe <briscoe-duke@YALE.ARPA>
Subject: Re: replicating the brain with a Turing machine

>Date: Wed, 29 Jun 88 9:26:50 PDT
>From: jlevy.pa@Xerox.COM
>Subject: Re: AIList Digest V7 #46 replicating the brain with a
> Turing machine
>
>Andy Ylikoski asks why you can't replicate the brain's exact functions
>with a Turing machine. First off, the brain is not a single machine but
>a whole bunch of them. Therefore "replacing it with a Turing machine"
>wouldn't get you there.
I think this is not a valid point because a single Turing machine (TM) can
simulate the actions of a group of parallel TMs.
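
As a rough illustration of this point (a Python sketch added for
clarity, not part of the original message), one serial loop can
reproduce the joint behavior of several independent machines by
advancing each of them one step per round. Generators stand in for the
individual machines; the names are invented for the example.

def counter(name, limit):
    """A toy 'machine' that runs for `limit` steps, yielding after each one."""
    for step in range(limit):
        yield f"{name}: step {step}"

def run_on_one_machine(machines):
    """Round-robin interleaving: a single serial loop advances every
    machine one step per round until all of them have halted."""
    trace = []
    while machines:
        still_running = []
        for m in machines:
            try:
                trace.append(next(m))   # one step of this machine
                still_running.append(m)
            except StopIteration:       # this machine has halted
                pass
        machines = still_running
    return trace

if __name__ == "__main__":
    print("\n".join(run_on_one_machine([counter("A", 3), counter("B", 2)])))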

>Turing machines have an inherent limitation in that they are not
>reactive i.e. they are unable to react to the environment directly. On
>the other hand, the brain is in direct communication with a number of
>input devices (eyes, ears, nose, touch-sense, etc.), all of which are
>sending data at the same time.
TMs are usually used only as a theoretical tool. If you were actually
going to implement one, you could use a multi-track input tape, with
one track carrying an alphabet that represents sensory input sampled at
an appropriate rate. Issues of real-time response are discussed below.
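
To make the multi-track idea concrete, here is a small sketch (mine,
not the poster's) in which continuous sensory input is sampled at a
fixed rate, quantized into a small alphabet, and stored alongside the
ordinary working symbols. The alphabet and thresholds are invented.

def quantize(value):
    """Map a raw sensor reading onto a tiny tape alphabet."""
    if value < 0.3:
        return "low"
    if value < 0.7:
        return "mid"
    return "high"

def build_multitrack_tape(work_symbols, sensor_readings):
    """Pair each working-tape cell with the sensory sample for that step."""
    return list(zip(work_symbols, (quantize(v) for v in sensor_readings)))

if __name__ == "__main__":
    tape = build_multitrack_tape("abab", [0.1, 0.5, 0.9, 0.2])
    print(tape)   # [('a', 'low'), ('b', 'mid'), ('a', 'high'), ('b', 'low')]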

>An interesting question is whether the brain's software suffers from the
>Church-Rosser problem which is present in functional languages -
>basically, you cannot, in a functional language, see that a certain
>source of input is empty and later detect input on it. It seems that
>this is not so, since we are able to close our eyes and later open them,
>seeing again.
In a functional program to simulate a brain, you are assuming that
closing your eyes equates to closing an input stream, while in fact
real optic nerves continue sending information even when the eyes are
closed.
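
A toy illustration of this reply (my own, not in the original): the
optic nerve can be modelled as a stream that never closes; shutting the
eyes only changes the values it carries. The symbols "scene" and "dark"
are invented for the example.

from itertools import islice

def optic_nerve(eyes_open_at):
    """Yield one sample per time step, forever; the stream never ends."""
    t = 0
    while True:
        yield "scene" if eyes_open_at(t) else "dark"
        t += 1

if __name__ == "__main__":
    # Eyes open for three ticks, closed for three, then open again.
    schedule = lambda t: not (3 <= t < 6)
    print(list(islice(optic_nerve(schedule), 9)))
    # ['scene', 'scene', 'scene', 'dark', 'dark', 'dark', 'scene', 'scene', 'scene']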

Even though I have just argued that the points above are invalid, I'm
still not sure that brain functions can be theoretically modelled by a
TM. TMs operate in discrete steps, while material objects act in
continuous dimensions of time and space (as far as we know; otherwise
perhaps the universe is a giant, parallel, Turing-equivalent computer).
Assuming reality is continuous, a TM model might closely approximate
something material for some period of time, but would eventually
diverge.

There is also the problem that any physical TM implementation would
suffer unavoidable bit errors, which would invalidate its exact
correspondence to the abstract TM.

However, physical implementations of computers, even ones using
non-organic materials, should still theoretically be capable of the
same computing powers as organic brains. There just seem to be
limitations in using a restricted TM model to prove things about
brain-computable functions. Maybe an expanded TM model is needed which
takes into account physical properties of space-time. Or perhaps
space-time is discrete at some level we have not yet detected, in which
case the current plain TM would be adequate. After all, electric
charges seem to be discrete.

------------------------------

Date: 2 Jul 88 19:11:40 GMT
From: proxftl!bill@bikini.cis.ufl.edu (T. William Wells)
Subject: Re: metaepistemology


In a previous article, YLIKOSKI@FINFUN.BITNET writes:
> In AIList Digest V7 #41, John McCarthy <JMC@SAIL.Stanford.EDU>
> writes:
>
> >I want to defend the extreme point of view that it is both
> >meaningful and possible that the basic structure of the
> >world is unknowable. It is also possible that it is
> >knowable.

I did not see the origins of this debate but it appears to be
nothing more than an attempt to defend the Kantian noumenal vs.
phenomenal distinction. Instead of wasting time debating this
issue, why don't those of you who are interested go and study
some philosophy? And, for those of you who are going to say "but
I have", carefully compare this view with Kant and you will see
that they are in essence identical.

------------------------------

Date: Fri, 1 Jul 88 08:16:09 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: ASL vs dance


I have not studied ASL, but it seems prima facie likely that the gesture
system of a sign language used by the deaf would have both a formal and
an expressive aspect, just as the gesture system of ordinary spoken
phonology does.

In the phonology of a given language, there is a limited inventory of
usually <50 contrasts, differences that make a difference. The phonetic
`content' of these contrasts (the actual sounds used to embody the
contrasts in a given utterance by a given speaker at a given time) is
subject to remarkably free `stretching', which languages exploit for
expressive purposes as well as in dialect variation.

Leigh Lisker long ago speculated that the function of semantically empty
greeting rituals ("Hello, how are you?" "Fine, and you?") is to provide
an opportunity for conversants to tune in on the fundamental frequency
of each other's voice and calibrate for the relative location of
vowel formants. Calibrating for the phonetic envelope each uses to
embody the contrasts of their shared language is also a likely function.

I would expect that deaf folks have to attune themselves to the gestural
style and expressive range of conversants, but I can't think of anything
analogous to the fundamental frequency and vowel formants in phonology.
I would be astonished if there were no analogs of phonemic contrasts
in ASL utterances, no fundamental and stable `differences that make a
difference' to other ASL users, and I would be very interested to
learn what they are like.

In language, it is the formal aspect, the system of contrasts or
`differences that make a difference', and the information structures
that they support, that are the ostensive focus. This is surely the
case with the sign languages of the Deaf also. In dance, by contrast,
it is the expressive aspect that is typically the main point, and the
formal structure is subsidiary, merely a channel for expressive
communication, else the piece is seen as dry, technical, academic,
uninspired. One may apply such adjectives to a conversation, but with
scarcely the same devastating critical effect! Conversely, a critic who
discussed what a choreographer was saying without comment on how she or
he said it would generally be thought to be missing the point.

An interesting thing here is that the expressive aspects of language use
actually do influence people much more than the literal content (words
7%, tone 32%, kinesics 61%: Albert Mehrabian, _Public Places & Private
Spaces_; Ray Birdwhistell, _Kinesics & Context_). In this respect, we
very much need an understanding and representation of the expressive
`stretching' of a formal structure, since that is where most of human
communication takes place (as distinct from simple transmission of
literal information). This is a big part of the difference between
linguistic competence (Chomsky) and communicative competence (Hymes).
An AI that has the first (a hard enough problem!) but not the second
will always be missing the point and misconstruing the literal meaning
of what is said. I should think that the notations developed by Ray
Birdwhistell and his colleagues at the Annenberg School of Communication
would be more apt than Laban dance notation, because they concern the
unconscious, culturally inherited expressive art form of ordinary human
communication rather than a consciously cultivated art form. And of
course Manfred Clynes makes claims about the underlying form of all
communicative expression.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Sat, 02 Jul 88 16:05:09 +0100
From: "Gordon Joly, Statistics, UCL"
<gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Auto_Suggestion?


> From AIList Vol 3 # 161

gcj> Date: Mon, 4 Nov 85 09:58:29 GMT
gcj> From: gcj%qmc-ori.uucp@ucl-cs.arpa
gcj> Subject: Vision Systems and American Sign Language
gcj>
gcj> One of the goals of AI research is to produce speech recognition systems.
gcj> Has there been a proposal to produce a vision system that can ``read''
gcj> ASL?
gcj>
gcj> Gordon Joly

> From AIList Vol 4 # 49

ph> Date: 28 Jun 88 09:52 PDT
ph> From: hayes.pa@Xerox.COM
ph> Subject: Re: AIList Digest V7 #45
ph>
ph> On dance notation:
ph> A quick suggestion for a similar but perhaps even thornier problem:
ph> a notation for the movements involved in deaf sign language.
ph>

I am not sure if Pat and I are really thinking of the same thing...
Gordon Joly.

Surface mail: Dr. G.C.Joly, Department of Statistical Sciences,
University College London, Gower Street, LONDON WC1E 6BT, U.K.
E-mail: | Tel: +44 1 387 7050
JANET (U.K. national network) gcj@uk.ac.ucl.stats | extension 3636
(Arpa/Internet form: gcj@stats.ucl.ac.uk) |
Relays: ARPA,EAN: @nss.cs.ucl.ac.uk |
CSNET: %nss.cs.ucl.ac.uk@relay.cs.net |
BITNET: %ukacrl.bitnet@cunyvm.cuny.edu, @ac.uk
EARN: @ukacrl.bitnet, @AC.UK
By uucp/Usenet: ....!uunet!mcvax!ukc!stats.ucl.ac.uk!gcj

------------------------------

Date: 30 Jun 88 18:46:41 GMT
From: bcsaic!douglas@june.cs.washington.edu (Douglas Schuler)
Subject: Directions and Implications of Advanced Computing - DIAC-88


DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING

DIAC-88 Twin Cities, Minnesota August 21, 1988

Earle Browne Continuing Education Center, University of Minnesota


Advanced computing technologies are presented as instruments and images
of both near and distant futures. Some of these futures radically challenge
our conceptions of work, security, leisure, and common purpose. Will we be
drawn into these futures as passive participants or will we actively select
and shape alternative futures in our own interests?

Few computing disciplines lie so directly at the intersection of these issues
as does Artificial Intelligence. This summer thousands of computer
professionals will descend on the Twin Cities for the annual conference of
the American Association for Artificial Intelligence (AAAI). Sunday, August
21, the day before the AAAI conference, Computer Professionals for Social
Responsibility (CPSR) will sponsor a one day symposium, "Directions and
Implications of Advanced Computing." DIAC-88 aims to examine the social and
political contexts of advanced computing, asking what futures are obtainable,
for whom, and at what cost?

Douglas Engelbart, the DIAC-88 plenary speaker, will share his perspective on
using the computer to address global problems. Since the late 1950's,
Engelbart has worked with systems that augment the human intellect including
his NLS/Augment system, a hypertext system that pioneered "windows" and a
"mouse." The driving force behind Engelbart's professional career has been
his recognition of social impacts of computing technology. The plenary
session will be followed by presentations of research papers and a panel
discussion. The panel, John Ladd (Brown University), Deborah Johnson
(Rensselaer Polytechnic), Claire McInerney (College of St. Catherine) and Glenda
Eoyang (Excel Instruction) will address the question, "How Should Ethical
Values be Imparted and Sustained in the Computing Community?"

Presented Papers

Computer Literacy: A Study of Primary and Secondary Schools, Ronni
Rosenberg

Dependence Upon Expert Systems: The Dangers of the Computer as
an Intellectual Crutch, Jo Ann Oravec

Computerized Voting, Eric Nilsson

Computerization and Women's Knowledge, Lucy Suchman and Brigitte Jordan

Some Prospects for Computer Aided Negotiation, Douglas Schuler

Computer Accessibility for Disabled Workers: It's the Law (invited paper),
Richard E. Ladner

Send symposium registration to: DIAC-88, CPSR/Los Angeles, P.O. Box 66038
Los Angeles, CA 90066-0038. Enclose check payable to CPSR/DIAC-88 with
registration. For additional information, call David Pogoff, 612-933-6431.

NAME ___________________________________________________
ADDRESS _________________________________________________
________________________________________________________
________________________________________________________
Phone (home) _____________________ (work) ______________________

Please check one:
Symposium Registration                 Regular              O  $50
(Includes Proceedings and Lunch)       CPSR Member          O  $35
                                       Student/Low Income   O  $25

I cannot attend, but want the symposium proceedings O $15

There will be a reception following the symposium. Proceedings will be
distributed to registrants at the symposium. Non-attendees will receive
proceedings by October 15, 1988.
--
** MY VIEWS MAY NOT BE IDENTICAL TO THOSE OF THE BOEING COMPANY **
Doug Schuler (206) 865-3226
[allegra,ihnp4,decvax]uw-beaver!uw-june!bcsaic!douglas
douglas@boeing.com

------------------------------

Date: Fri, 1 Jul 88 09:10:39 EDT
From: dlm@research.att.com
Subject: talk announcement


Title: Intermediate Mechanisms For Activation Spreading
or
Why can't neural networks talk to expert systems?

Speaker: Jim Hendler
University of Maryland Institute for Advanced Computer Studies
University of Maryland, College Park

Date: Tuesday, July 19
Time: 1:30
Place: AT&T Bell Laboratories - Murray Hill 3D-473

Abstract:

Spreading activation, in the form of computer models and cognitive
theories, has recently been undergoing a resurgence of interest in the
cognitive science and AI communities. Two competing schools of thought
have been forming. One technique concentrates on the spreading of
symbolic information through an associative knowledge representation.
The other technique has focused on the passage of numeric information
through a network. In this talk we show that these two techniques can
be merged.

We show how an ``intermediate level'' mechanism, that of symbolic
marker-passing, can be used to provide a limited form of interaction
between traditional associative networks and subsymbolic networks. We
describe the marker-passing technique, show how a notion of
microfeatures can be used to allow similarity-based reasoning, and
demonstrate that a back-propagation learning algorithm can build the
necessary set of microfeatures from a well-defined training set. We
discuss several problems in natural language and planning research and
show how the hybrid system can take advantage of inferences that
neither a purely symbolic nor a purely connectionist system can make
at present.
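
The following toy sketch (added for illustration; it is not Hendler's
system) shows the two ingredients the abstract names side by side:
symbolic markers spread through a small hand-built associative network,
while a numeric "microfeature" vector per concept supports
similarity-based comparison. The network, concepts, and feature values
are all invented.

from collections import deque

NETWORK = {
    "bird":    ["animal", "wing"],
    "plane":   ["vehicle", "wing"],
    "animal":  [],
    "vehicle": [],
    "wing":    [],
}

MICROFEATURES = {        # tiny numeric vectors standing in for learned features
    "bird":  [1.0, 0.2, 0.9],
    "plane": [0.1, 1.0, 0.8],
}

def pass_markers(start, depth):
    """Symbolic marker passing: breadth-first spread of a marker from `start`."""
    marked, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for neighbor in NETWORK.get(node, []):
            if neighbor not in marked:
                marked.add(neighbor)
                frontier.append((neighbor, d + 1))
    return marked

def similarity(a, b):
    """Numeric side: dot product of the two concepts' microfeature vectors."""
    return sum(x * y for x, y in zip(MICROFEATURES[a], MICROFEATURES[b]))

if __name__ == "__main__":
    collisions = pass_markers("bird", 2) & pass_markers("plane", 2)
    print("marker collisions:", collisions)      # {'wing'}
    print("microfeature similarity:", round(similarity("bird", "plane"), 2))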

Sponsor: Diane Litman (allegra!diane)

------------------------------

End of AIList Digest
********************
