AIList Digest           Wednesday, 8 Jun 1988      Volume 7 : Issue 21 

Today's Topics:

Philosophy:
Human-human communication
Constructive Question
Why study connectionist networks?
Fuzzy systems theory
Ill-structured problems
The Social Construction of Reality
Definition of 'intelligent'
Artificial Intelligence Languages
Who else isn't a science?

----------------------------------------------------------------------

Date: 2 Jun 88 07:18:09 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: Human-human communication

In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>
>Name one thing that isn't expressible with language! :-)
Many things learnt by imitation, and taught by demonstration ;-)

I used to be involved in competitive gymnastics. Last year, I got
involved in coaching. The differences between the techniques I
learnt and the ones now taught are substantial. There is a lot less
talk, and much more video. Many moves are taught by "shaping"
gymnasts into "memory positions" (aiming for some of these positions
will actually put you somewhere else, but that's the intention). With
young children especially, trying to describe moves is pointless.
Even with adults, dance notations are a real problem.

We could get pedantic and say that ultimately this is describable.
For something to be USEFULLY describable by language

a) someone other than the author must understand it
(thus we comment programs in natural language)
b) it must be more accurate and more efficient than
other forms of communication.

Anyone who's interested in robot movement might find some inspiration
in gymnastic training programs for under-5s. The amount of knowledge and
skill required to chain a few movements together is intriguing. As
with all human learning, the only insights are from failures to learn
(you can't observe someone learnING). Perhaps the early mistakes of
young gymnasts may give a better insight into running robots :-)
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of humanity is humans, not machines

------------------------------

Date: 6 Jun 88 10:31:47 GMT
From: mcvax!ukc!its63b!epistemi!jim@uunet.uu.net (Jim Scobbie)
Subject: Re: Constructive Question


The difference between Cognitive Science and AI? The fact that the initials
"C.S." are already booked? :-)

(Actually, A.I. for artificial insemination is widespread in the real
world in my experience. When I visited the rural area my mother was
brought up in a few years ago, there were several people who had been
told I was working in A.I. and who wondered what had become of my
bowler hat and wellingtons.)

All this from a linguist, so in this case the normal disclaimers certainly
apply!

--
Jim Scobbie: Centre for Cognitive Science and Department of Linguistics,
Edinburgh University,
2 Buccleuch Place, Edinburgh, EH8 9LW, SCOTLAND
UUCP: ...!ukc!cstvax!epistemi!jim JANET: jim@uk.ac.ed.epistemi

------------------------------

Date: 6 Jun 88 12:54:40 GMT
From: craig@think.com (Craig Stanfill)
Subject: Why study connectionist networks?

It seems to me that it is proper for AI to study the degree
to which any computational method behaves in an
``intelligent'' manner. Connectionist methods are certainly
worth studying, regardless of the degree (small at present)
to which they mimic the behavior of actual neurons. I do,
however, balk at calling these networks ``neural networks,''
because that implies that an important criterion in judging
the research is how well these networks mimic the behavior
of neurons; if such is the case, then the majority of
existing connectionist research is deeply flawed. But let's
not get tangled up in words, and let's not let the level of
hype generated within the field obscure the fact that
connectionist networks have some very interesting
properties.

-Craig Stanfill

------------------------------

Date: 6 Jun 88 13:05:59 GMT
From: eagle!icdoc!qmc-cs!flash@bloom-beacon.mit.edu (Flash Sheridan)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <1073@usfvax2.EDU> Wayne Pollock writes:
>In article <487@sequent.cs.qmc.ac.uk> root@cs.qmc.ac.uk [ME] writes:
>>...
>>>>Because fuzzy logic is based on a fallacy
>>>Is this kind of polemic really necessary?
>>
>>Yes. The thing the fuzzies try to ignore is that they haven't established
>>that their field has any value whatsoever except a few cases of dumb luck.
>
>On the other hand, set theory, which underlies much of current theory, is
>also based on fallacies; (given the basic premises of set theory one can
>easily derive their negation).

Sorry, it's a lot more complicated than that. For more details, see my
D.Phil thesis when it exists. (For a start, if you think you know what
you're talking about, what _are_ these "basic premises"? From your
comment, I think you're about 70 years out of date.)

I'm the first to criticize Orthodox Set Theory. But its flaws are of an
entirely different kind from those of fuzzy logic. ST has what it takes,
almost, to be a mathematically respectable theory. FL isn't even close.

From: flash@ee.qmc.ac.uk (Flash Sheridan)
Reply-To: sheridan@nss.cs.ucl.ac.uk
or_perhaps_Reply_to: flash@cs.qmc.ac.uk

------------------------------

Date: 6 Jun 88 15:30:40 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu
(Stephen Smoliar)
Subject: Re: Ill-structured problems

In article <17481@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle)
writes:
>In article <5644@venera.isi.edu> Stephen Smoliar writes:
>>Take a look at Herb Simon's article in ARTIFICIAL INTELLIGENCE about
>>"ill-structured problems" and then decide whether or not you want to
>make that bet.
>
> A reference to the above would be helpful.
>
Herbert A. Simon
The Structure of Ill Structured Problems
ARTIFICIAL INTELLIGENCE 4 (1973), 181-201

Simon discusses houses, rather than steeples; but I think the difficulties
he encounters are still relevant.

------------------------------

Date: 6 Jun 88 16:31:46 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net (Simon Brooke)
Subject: Re: The Social Construction of Reality

Some time ago, on comp.lang.prolog, there was a discussion about the
supposed offensiveness of Richard O'Keefe's upper class English accent.
There are times, however, when such an accent appears to be called for,
and one of these occurred when T. William Wells posted (in
<218@proxftl.UUCP>):

#Oh, yeah. First, Kant is a dead horse.

Really, the arrogance of the ignorant knows no bounds.

What Wells was objecting to was the claim in an earlier posting
<1157@crete.cs.glasgow.ac.uk> by Gilbert Cockton, that reality was socially
constructed through a process of evolving consensus. Wells goes on to
ask:

#OK, answer me this: how in the world do they reach a consensus
#without some underlying reality which they communicate through.

This sentence demonstrates a profound misunderstanding. There may indeed
be a 'reality' out there, if by such you mean a system of material
(whatever that word means) objects, but if there is, we can never access
it except through our perceptions, and we have no way of verifying these.
Moreover, we can never verify that our own perceptions of phenomena agree
with those of other people. In order to make communication possible we
assume that our perceptions do accord; but the possibility of communication
between humans remains acutely mysterious in itself - see for example
Barwise and Perry's attempt (as yet unsuccessful) to formalise it.

Thus we are able to communicate not because of the existence of a
'reality' but despite the possible absence of it.

#And this: logic depends on the idea of noncontradiction. If you
#abandon that, you abandon logic.

Well, tough. Einstein claimed that 'God does not play with dice'; that was
simply a statement of belief. The assumption that reality - if it exists -
is either consistent or coherent is no more than an assumption. It would
be nice if it were true, and life is a lot more comfortable so long as we
believe that it is. But it is simply ideology to claim that it certainly
is.

Later in his piece, Wells (replying to Cockton) writes:

#You assert that consensus determines reality (whatever that means)......
#your proposition has no evidence to validate itself with.

Well, I (personally) would not assert quite that, so let me state the case
more carefully. In the absence of any verifiable access to a real world,
what we conventionally (that is, in normal conversation) refer to as
'reality' *can only be* a social construct - it can't be an individual
construct, for if I talked to you about a world constructed entirely in my
own imagination, no communication could take place (note that I am assuming
for the sake of the argument now that communication between humans *is*
possible, despite the fact that we don't understand how). Likewise, it
cannot be given, because we don't have access to any medium through which
it could be given. That's not much evidence, I agree - but as Sherlock
Holmes (or was it Sir Arthur Conan Doyle?) repeatedly said, when you have
eliminated the impossible, whatever remains, no matter how incredible,
must be true.

The most depressing thing of all in Wells's posting is his ending. He
writes:

#DO NOT BOTHER TO REPLY TO ME IF YOU WANT TO DEFEND CONSENSUS
#REALITY. The idea is so sick that I am not even willing to reply
#to those who believe in it.
#
#As you have noticed, this is not intended as a counter argument
#to consensus reality.

Unable to find a rational argument to defend the articles of his faith,
Wells, like fanatical adherents of other ideologies before him, first
hurls abuse at his opponents, and finally, defeated, closes his ears. I
note that he is in industry and not an academic; nevertheless he is
posting to the AI newsgroup, and must therefore be considered part of
the American AI community. I haven't visited the States; I wonder if
someone could tell me whether this extraordinary combination of ignorance
and arrogance is frequently encountered in American intellectual life?


** Simon Brooke *********************************************************
* e-mail : simon@uk.ac.lancs.comp *
* surface: Dept of Computing, University of Lancaster, LA 1 4 YW, UK. *
* *
* Thought for today: isn't it time you learned the Language *
********************* International Superieur de Programmation? *********

------------------------------

Date: Tue, 7 Jun 88 10:08 EDT
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: Definition of 'intelligent'


The current amusement here at work is found in >Webster's New Collegiate
Dictionary< published by Merriam:

intelligent: [...] 3: able to perform some of the functions of a computer

-Kurt Godden

------------------------------

Date: Tue, 07 Jun 88 11:01:04 HOE
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: Re: Artificial Intelligence Languages

In
AIList Digest Friday, 3 Jun 1988 Volume 7 : Issue 16
Ed King mentions a list of features an AI language must have:

< 1) True linked list capability.
<
< 2) Easy access to hardware
<
< 3) Easy to use string functions, or a library to do such.
<
<So, by this criteria, all the commonly held "AI languages" would fit
<(like PROLOG, LISP, POP, et cetera ad nauseum).

Please add APL2. It has all of the above, plus support for frames
as a basic data structure. After all, a frame is nothing but a matrix
of complex objects. Object-oriented programming is also trivial in APL2.
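
[A rough sketch of the "frame as a matrix of complex objects" idea, in
Python rather than APL2 (no APL2 code appears in this issue): rows are
frames, columns are slots, cells hold arbitrary objects, and an 'isa'
column supplies the usual slot inheritance. The frame and slot names
below are invented purely for illustration.]

    # Python sketch (not APL2): a frame system as a matrix of objects.
    # Rows are frames, columns are slots; the 'isa' column provides
    # default inheritance.  All names are hypothetical.

    slots = ["isa", "can_fly", "covering"]

    frames = [
        # name        isa        can_fly  covering
        ["animal",    None,      None,    "skin"],
        ["bird",      "animal",  True,    "feathers"],
        ["penguin",   "bird",    False,   None],
    ]

    def get(frame_name, slot):
        """Fetch a slot value, walking up the 'isa' chain when the
        local cell is empty (the usual frame-default mechanism)."""
        for row in frames:
            if row[0] == frame_name:
                value = row[1 + slots.index(slot)]
                if value is not None:
                    return value
                parent = row[1]                 # the 'isa' cell
                return get(parent, slot) if parent else None
        return None

    print(get("penguin", "covering"))   # feathers (inherited from "bird")
    print(get("penguin", "can_fly"))    # False    (local override)

[In APL2 the whole table would presumably be a single nested array,
which is the point of the comparison.]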



Regards,

Manuel Alfonseca, ALFONSEC at EMDCCI11

------------------------------

Date: 7 Jun 88 15:14:00 GMT
From: apollo!nelson_p@eddie.mit.edu (Peter Nelson)
Subject: who else isn't a science


>>regarded as adequate for the study of human reasoning. On what
>>grounds does AI ignore so many intellectual traditions?

> Because AI would like to make some progress (for a change!). I
> originally majored in psychology. With the exception of some areas
> in physiological pyschology, the field is not a science. Its
> models and definitions are simply not rigorous enough to be useful.

>Your description of psychology reminds many people of AI, except
>for the fact that AI's models end up being useful for many things
>having nothing to do with the motivating application.
>
>Gerald Edelman, for example, has compared AI with Aristotelian
>dentistry: lots of theorizing, but no attempt to actually compare
>models with the real world. AI grabs onto the neural net paradigm,
>say, and then never bothers to check if what is done with neural
>nets has anything to do with actual brains.

But we don't know enough about how real brains work yet, and it
may be quite a while until we do. Besides, neural net models
use far fewer nodes than a real brain does to solve a similar
problem, so we're probably not doing *exactly* the same thing as
a brain anyway.

I don't see why everyone gets hung up on mimicking natural
intelligence. The point is to solve real-world problems. Make
machines understand continuous speech, translate technical articles,
put together mechanical devices from parts that can be visually
recognized, pick out high priority targets in self-guiding missiles,
etc. To the extent that we understand natural systems and can use
that knowledge, great! Otherwise, improvise!

--Peter Nelson

------------------------------

Date: 7 Jun 88 16:21:52 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com (Hunter Barr)
Subject: Re: Human-human communication

In article <1315@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>>
>>Name one thing that isn't expressible with language! :-)
>Many things learnt by imitation, and taught by demonstration ;-)
>
>I used to be involved in competitive gymnastics. Last year, I got
>involved in coaching. The differences between the techniques I
>learnt and the ones now taught are substantial. There is a lot less
>talk, and much more video. Many moves are taught by "shaping"
>gymnasts into "memory positions" (aiming for some of these positions
>will actually put you somewhere else, but that's the intention). With
>young children especially, trying to describe moves is pointless.
>Even with adults, dance notations are a real problem.
>
>We could get pedantic and say that ultimately this is describable.
>For something to be USEFULLY describable by language
>
> a) someone other than the author must understand it
> (thus we comment programs in natural language)
> b) it must be more accurate and more efficient than
> other forms of communication.
>
>Anyone who's interested in robot movement might find some inspiration
>in gymnastic training programs for under-5s. The amount of knowledge and
>skill required to chain a few movements together is intriguing. As
>with all human learning, the only insights are from failures to learn
>(you can't observe someone learnING). Perhaps the early mistakes of
>young gymnasts may give a better insight into running robots :-)
>--
>Gilbert Cockton, Department of Computing Science, The University, Glasgow
> gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
>
> The proper object of the study of humanity is humans, not machines

----------------------------------------------------------------
First: thank you, Gilbert, for your contributions to this discussion.
You have been responsible for much of the life in it. I disagree with
most of your arguments and almost all of your conclusions, but I am
much indebted to you for stimulating my thoughts, and forcing some
rigor in justifying my own opinions. Without you (and other people
willing to dissent) we might all sit around nodding agreement at each other!

Now I must "get pedantic" by saying that body movement *is*
describable. As for part a), you are correct that someone other than
the author must understand it, otherwise we do not have communication.
But you ignore the existence of useful dance notations. I don't know
much about dance notation, and I am sure there is much lacking in it--
probably standardization for one thing. But the lack of a universally
intelligible *spoken* language does not make human speech fail the
"usefulness" test. Mandarin Chinese is an even bigger problem with
adults than dance notation! If one learned a common dance notation
from childhood, it would be every bit as useful as the Chinese
language. And I often interpret computer programs with *no* "natural
language" comments whatsoever. (Pity me if you wish, but don't say
that these computer programs fail to describe anything, simply because
they have no natural language in them.)

To further show that description of movement is possible, imagine that
I tell a gymnastics student this:

Run over to the rings, get into the Iron-Cross position, then lower
your body, letting the arms get straight-up before the legs start to
come down. At this point your toes should be pointing straight down.
Then lift your fingers from the rings until you are hanging by your
pinky-fingers. Then drop to the floor and come back over here.

Yes, it contains ambiguity, but it is pretty clear compared to much of
what passes for communication, even to people who know very little
about gymnastics (like me).

Now for part b). We need something more accurate and more efficient
than other forms of communication. Well, one could conceivably plot
out a system whereby different combinations and sequences of smells
stand for body movements. Cinnamon-sugar-lime, followed by peppermint
could mean "a one-handed cartwheel." Compared to this smell-system for
dance notation, spoken language is very accurate and efficient.

So your definition allows dance to be considered "USEFULLY
describable" by dance notation, because a) the notation *can* be
understood by those other than the author (namely, those educated to
understand the notation), and b) the notation *is* more accurate and
efficient than other forms of communication (e.g., the smell-system).

It looks like your "definition" is useless, even for your argument.
As it happens, there are many cases where a picture *is* worth a
thousand words, and using your hands to grab the student's body,
putting it into the correct position, *is* the best way to teach
gymnastics. And there are many cases where the real thing *is* more
useful than the symbols we make up for it. (This is in contrast to the
case where assembly-language mnemonics are easier to follow than the real,
bare bits in core memory.) But you have in no way shown that
gymnastics movement is not describable. So I join the challenge:

>>Name one thing that isn't expressible with language! :-)

Well (removing my tongue from my cheek), you don't have to *name* it,
but give us some sort of evidence that it exists, and that it cannot be
expressed with symbols.

To the audience at large: I hope you will all pardon me for quoting
the above posting *in toto*, but since I attack Gilbert Cockton I felt
it only fair to avoid taking his words out of context. Thank you for
reading. Please reply either to me directly, or to COMP.AI.

______
HUNTER

------------------------------

End of AIList Digest
********************
