AIList Digest             Monday, 5 Oct 1987      Volume 5 : Issue 227 

Today's Topics:
Philosophy - Goal of AI

----------------------------------------------------------------------

Date: Sun, 4 Oct 87 14:56:42 -0200
From: Jacob Levy <jaakov%wisdom.bitnet@jade.berkeley.edu>
Subject: Another Blooming Endless Argument

Please!

It seems we are getting ready for another deluge, this time under the title
of "Re: Goal of AI: where are we going?". While the subject line certainly
justifies its inclusion in AIList Digest, the discussion may easily get out
of hand and deteriorate into personal arguments. There are already the first
signs of superheated discussion and personal attacks in V5 #226.

The question I ask myself when reading these postings is "how much of this
material is AI-related, and how much is purely philosophical, psychological,
or whatever?" I suggest that authors participating in this discussion would
benefit from applying the same criterion: how much is this an ARTIFICIAL
intelligence-related posting? The word ARTIFICIAL is crucial and determines,
for me at least, whether I want to read on.

Another Please!

Remember that this is my personal opinion, and I am only "a small egg", so
no flames or personal attacks, please. Educate, not eradicate, OK?

P.S. Is the discussion entitled "Nature of Computer Science" really
appropriate for AIList?

Rusty Red (AKA Jacob Levy)

BITNET: jaakov@wisdom
ARPA: jaakov%wisdom.bitnet@wiscvm.wisc.edu
CSNET: jaakov%wisdom.bitnet@relay.cs.net

------------------------------

Date: 1 Oct 87 18:36:20 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: Goal of AI: where are we going?

In article <270@uwslh.UUCP>, lishka@uwslh.UUCP (Christopher Lishka) writes:
> ***Warning: FLAME ON***
>
> In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
> >>We all admit that the human mind is not flawless. Bias decisions...
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> The expression "we all" does not apply to me, at the very least. Some of
> us (at least myself) like to believe that the human mind should not be
> considered to be either flawed or flawless...it only "is." ....

Some interesting points here.

Point one, the human mind is in fact a phenomenon, and phenomena are
neither flawed nor perfect; they are the stuff that observation is made
of. Score one for Lishka.

Point two, we keep using the human mind as a tool, to solve problems.
As such, it is not merely a phenomenon, but a means to an end, and is
subject to judgments of its utility for that purpose. Now we can say
whether it is perfect or flawed. Obviously, it is not perfect, since
we often make mistakes when we use it. Score one for Ware.

Point three, when we try to make better tools, or tools to supplement
the human mind, all these improvements are created by the human mind.
In fact, the purposes of these tools are created by the human mind.
The human mind is thus the ultimate reasoning tool. Score one for the
human mind.

You might say the same of the human hand. As a phenomenon, it exists.
As a tool, it is imperfect. And it is the ultimate mechanical tool,
since all mechanical tools are directly or indirectly made by it.

It is from these multiple standpoints that we derive the multiple goals
of AI: to study the mind, to supplement the mind, and to serve the mind.

M. B. Brilliant                                     Marty
AT&T-BL HO 3D-520                                   (201)-949-1858
Holmdel, NJ 07733                                   ihnp4!houdi!marty1

------------------------------

Date: 1 Oct 87 13:22:00 GMT
From: uxc.cso.uiuc.edu!uxe.cso.uiuc.edu!morgan@a.cs.uiuc.edu
Subject: Re: Goal of AI: where are we going?


Maybe you should approach it as a scientist, rather than an engineer. Think
of the physicists: they aren't out to fix the universe, or construct an
imitation; they want to understand it. What AI really ought to be is a
science that studies intelligence, with the goal of understanding it by
rigorous theoretical work, and by empirical study of
systems that appear to have intelligence, whatever that is. The best work
in AI, in my opinion, has this scientific flavor. Then it's up to the
engineers (or society at large) to decide what to do with the knowledge
gained, in terms of constructing practical systems.

------------------------------

Date: 2 Oct 87 15:31:04 GMT
From: bloom-beacon!gatech!udel!montgome@husc6.harvard.edu (Kevin Montgomery)
Subject: Re: Goal of AI: where are we going?

In article <259@tut.cis.ohio-state.edu>
tanner%tut.cis.ohio-state.edu@osu-eddie.UUCP (Mike Tanner) writes:
>If you want to say that what I'm doing is not AI, fine. I think it is, but if
>you'll give me a better name I'll take it and leave AI to the logicians. It
>is not psychology (my experiments involve building programs and generally
>thinking about computational issues, not torturing college freshmen). And I'm
>not really interested in duplicating the human mind, it's just that the human
>mind is the only intelligence I know.

Welcome to the fascinating world of Cognitive Modelling!

If AI is to be pure logic, more power to it. But the "real world" usually
doesn't let one work with pure logic alone; the case of incomplete
information is an example. If you saw me driving towards work at around
9am, you might conclude that I'm going to work. However, from a purely
logical point of view, my direction of travel has little to do with my
final destination. Whatever. I think modelling's more fun anyway!

(no flames about AI handling the above situation and 'most-probable
scenario' stuff, please)
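
A toy sketch of that "most-probable scenario" reasoning, in Python; the
hypotheses, priors, and likelihoods are invented for illustration and
are not from any posting here:

    # Pick the hypothesis with the highest posterior, given the evidence
    # "driving toward the office at around 9am".  All numbers are
    # illustrative assumptions, not measurements.
    priors = {"going to work": 0.7, "running an errand": 0.2, "other": 0.1}
    likelihoods = {"going to work": 0.9,    # P(evidence | hypothesis)
                   "running an errand": 0.3,
                   "other": 0.1}

    posterior = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

    best = max(posterior, key=posterior.get)
    print(best, round(posterior[best], 2))  # going to work 0.9

Direction of travel proves nothing deductively, yet the most probable
reading still falls out; that is the gap between pure logic and the
default reasoning people actually use.
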
--
Kevin Montgomery

------------------------------

Date: 1 Oct 87 12:49:39 GMT
From: tanner@tut.cis.ohio-state.edu (Mike Tanner)
Reply-to: tanner%tut.cis.ohio-state.edu@osu-eddie.UUCP (Mike Tanner)
Subject: Re: Goal of AI: where are we going?

In article <270@uwslh.UUCP> lishka@uwslh.UUCP (Christopher Lishka) writes:
>In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
>>We all admit that the human mind is not flawless. Bias decisions...
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> [the underscored bit above indicates a number of faulty assumptions,
> e.g., that it makes sense to talk about "flaws" in the mind.]
>

I liked this reply. Whether the problem is "western" philosophy or not, I'm
not sure. It may be true for the casual AI dabbler; i.e., the average
intelligent person, on first hearing or thinking about AI, will often say
things like, "But people make mistakes; do you really want to build
human-like machines?"


Within AI itself this attitude manifests itself as rampant normativism.
Somebody adopts a model of so-called correct reasoning, e.g., Bayesian
decision theory, logic, etc., and then assumes that the abundant empirical
evidence that people are unable to reason this way shows human reasoning to be
flawed. These people want to build "correct" reasoning machines.
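
A small worked example of the mismatch, under Bayesian decision theory,
using the standard textbook base-rate figures (all numbers are chosen
only for illustration):

    # A disease with 1% prevalence; a test with 90% sensitivity and 90%
    # specificity.  Asked for P(disease | positive test), people
    # typically answer near 0.9; Bayes' rule gives roughly 0.083.
    prior = 0.01          # P(disease)
    sensitivity = 0.90    # P(positive | disease)
    false_pos = 0.10      # P(positive | no disease)

    evidence = sensitivity * prior + false_pos * (1 - prior)
    posterior = sensitivity * prior / evidence
    print(round(posterior, 3))  # 0.083

The systematic gap between the two answers is exactly the kind of
empirical evidence at issue: the normativist reads it as a flaw in human
reasoning; the alternative is to read it as data about how thinking
actually works.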

I say, OK, go ahead. But that's not what I want to do. I want to understand
thinking, intelligent information processing, problem-solving, etc. And I
think the empirical evidence is trying to tell us something important. I am
not sure just what. It seems clear that thinking is not logical (which is not
to say "flawed" or "incorrect", merely "not logical"). An interesting
question is, "why not?" People are able to use language, solve problems -- to
think -- but is that in spite of illogic or because of it or neither? I don't
think we're going to understand intelligence by adopting an a priori correct
model and trying to build machines that work that way (except by negative
results).

If you want to say that what I'm doing is not AI, fine. I think it is, but if
you'll give me a better name I'll take it and leave AI to the logicians. It
is not psychology (my experiments involve building programs and generally
thinking about computational issues, not torturing college freshmen). And I'm
not really interested in duplicating the human mind, it's just that the human
mind is the only intelligence I know.


-- mike tanner

Dept. of Computer and Info. Science     tanner@ohio-state.arpa
Ohio State University                   ...cbosgd!osu-eddie!tanner
2036 Neil Ave Mall
Columbus, OH 43210

------------------------------

Date: 3 Oct 87 06:14:38 GMT
From: vax1!czhj@cu-arpa.cs.cornell.edu (Ted Inoue)
Subject: Re: Goal of AI: where are we going? (Where should we go...)

In article <46400008@uxe.cso.uiuc.edu> morgan@uxe.cso.uiuc.edu writes:
>
>Maybe you should approach it as a scientist, rather than an engineer. Think
>...
>What AI really ought to be is a
>science that studies intelligence, with the goal of understanding it by
>rigorous theoretical work, and by empirical study of
>systems that appear to have intelligence, whatever that is. The best work
>in AI, in my opinion, has this scientific flavor. Then it's up to the
>engineers (or society at large) to decide what to do with the knowledge
>gained, in terms of constructing practical systems.


I wholeheartedly support this idea. I'd go even further, however, and say
that most "AI" research is a huge waste of time. I liken it to the
trial-and-error methods of Edison, which led him to try thousands of
possibilities before hitting on one that made a good lightbulb. With AI, the
problem is infinitely more complicated, and the chance of finding a solution
by blind experimentation is nil.

On the other hand, if we take an educated approach to the problem, and study
'intelligent' systems, we have a much greater chance of solving the mysteries
of the mind.

Some of you may remember my postings from last year, where I expounded on
the virtues of cognitive psychology. After investigating research in this
field in more detail, I came away very disillusioned. Here is a field of
study whose sole purpose is to scientifically discover the nature of
thought. Even with some very bright people working on these problems, I
found that the research left me cold. Paper after paper describes isolated
phenomena, then goes on to present some absurdly narrow-minded theory of how
such phenomena could occur.

I've reached the conclusion that we cannot study the mind as isolated pieces
which we then try to put together to form a whole; rather, we have to study
the interactions between the pieces in order to learn about the pieces
themselves. For example, take vision research. Many papers have been written
about edge detection algorithms, possible geometries, and similarly
reductionist algorithms for making sense of scenes. I assert that the
interplay between the senses and experiential memory is huge. Further,
because of these interactions, no simple approach will ever work well. In
fact, we need to study the entire set of processes involved in seeing before
we can determine how we perceive objects in space.
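
As a concrete instance of the reductionist algorithms in question, here
is a minimal Sobel edge detector in Python -- a standard textbook
sketch, not code from any particular paper. Note that each output pixel
depends on a 3x3 neighbourhood alone: no memory, no context, no
expectations.

    import numpy as np

    def sobel_edges(image, threshold=0.25):
        """Purely local edge detection over a 2-D grayscale array."""
        gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        gy_k = gx_k.T                      # vertical-gradient kernel
        h, w = image.shape
        mag = np.zeros((h, w))
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                patch = image[i-1:i+2, j-1:j+2]
                gx = (patch * gx_k).sum()  # horizontal gradient
                gy = (patch * gy_k).sum()  # vertical gradient
                mag[i, j] = np.hypot(gx, gy)
        return mag > threshold * mag.max()

Whatever such a detector reports, it reports without ever consulting
experiential memory; the claim here is that perception cannot be
assembled from pieces of this kind alone.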

This is but a single example of the complexity of studying such aspects of the
mind. I found that virtually every aspect of cognition has such problems.
That is, no aspect is isolated!

Because of this immensely complex set of interactions, I believe that the
connectionist theories are heading in the right direction. However, these
theories are somewhat too reductionistic for my tastes as well. I want to
understand how the mind works at a high level (if possible). The actual
implementation is the easy part. The understanding is the hard part.

---Ted Inoue

------------------------------

Date: 3 Oct 87 23:47:07 GMT
From: lishka@uwslh.UUCP (Christopher Lishka)
Reply-to: lishka@uwslh.UUCP (Christopher Lishka)
Subject: Re: Goal of AI: where are we going?

In article <46400008@uxe.cso.uiuc.edu> morgan@uxe.cso.uiuc.edu writes:
>
>Maybe you should approach it as a scientist, rather than an engineer. Think
>of the physicists: they aren't out to fix the universe, or construct an
>imitation; they want to understand it.

I think this is a good point. I have always thought that Science was a
method used to predict natural events with some accuracy (as opposed to
guessing). Whether this is understanding -- well, I guess that depends on
one's definition. I like this view because it (to me at least) parallels
the attempts by nearly all (if not all) religions to do the same thing,
and possibly to provide some form of meaning to this strange world we live
in. It also opens the possibility of scientists sharing views with other
people who explain the world they see with their own methods.

>What AI really ought to be is a
>science that studies intelligence, with the goal of understanding it by
>rigorous theoretical work, and by empirical study of
>systems that appear to have intelligence, whatever that is. The best work
>in AI, in my opinion, has this scientific flavor. Then it's up to the
>engineers (or society at large) to decide what to do with the knowledge
>gained, in terms of constructing practical systems.

I like this view also, and feel that A.I. might go a little further in
studying other areas in conjunction with the human mind. Maybe this isn't
pure A.I., but I'm not sure what pure A.I. is. One interesting note is that
maybe the people who are implementing various Expert Systems (which grew out
of A.I. research) for real-world applications are the "engineers" of whom
morgan@uxe speaks. More power, then, to both the "scientists" and the
"engineers", and to those in the gray area in between. It's good to be able
to work together like this, and not have the "scientists" only come up with
research that cannot be applied.

Disclaimer: I am sitting here typing this because my girlfriend's cat is
holding a gun to my head, and I am in no way responsible for the content.
;-)

[If anyone really wants to flame me, please mail me; if you really
think there is some benefit in posting the flame, go ahead. I reply
to all flames, but if my reply doesn't get to you, it is because I am
not able to find a reliable mail path (which is too damned often!)]

-Chris

--
Chris Lishka                     lishka@uwslh.uucp
Wisconsin State Lab of Hygiene   lishka%uwslh.uucp@rsch.wisc.edu
                                 {seismo,harvard,topaz,...}!uwvax!uwslh!lishka

------------------------------

Date: 1 Oct 87 09:11:37 GMT
From: mcvax!enea!kuling!waldau@uunet.uu.net (Mattias Waldau)
Subject: Re: Goal of AI: where are we going?

In article <178@usl> khl@usl.usl.edu.UUCP (Calvin Kee-Hong Leung) writes:
>Provided that we have the necessary technology to build robots that are
>highly intelligent, efficient, and reliable, and that do not possess any
>"bad" characteristic that man has. Then what roles will man play in a
>society where his intelligence can be viewed as a comparatively "lower
>form"?
>
One of the short stories in Asimov's "I, Robot" is about exactly the problem
raised in the previous paragraph. It concerns a robot and two humans on a
space station near our own sun. I cannot tell more without spoiling the fun.
It is very good!

------------------------------

Date: Sun, 4 Oct 87 20:54:01 -0200
From: Eyal mozes <eyal%wisdom.bitnet@jade.berkeley.edu>
Subject: Re: Goal of AI: Where are we Going?

> I believe that those "bad" characteristics of humans are necessary
> evils of intelligence. For example, although we still don't understand
> the function of emotion in the human mind, the psychologist Toda says
> that it is a device for survival. When an urgent danger is approaching,
> you don't have much time to think. You must PANIC! Emotion is a
> meta-inference device to control your inference mode (mainly the
> allocation of resources).
>
> If we ever make a really intelligent machine, I bet the machine will
> also have the "bad" characteristics. In summary, we have to study why
> humans have those characteristics in order to understand the mechanism
> of intelligence.

I think what you mean by "the bad characteristics" is, simply, free
will. Free will includes the ability to fail to think about some
things, and even to actively evade thinking about them; this is the
source of biased decisions and of all other "flaws" of human thought.

Emotions, by themselves, are certainly not a problem; on the contrary,
they're a crucial function of the human mind, and their role is not
limited to emergencies. Emotions are the result of subconscious
evaluations, caused by identifications and value-judgments made
consciously in the past and then automatized; their role is not "to
control your inference mode", but to inform you of your subconscious
conclusions. Emotional problems are the result of the automatization of
wrong identifications and evaluations, which may have been reached
either because of insufficient information or because of volitional
failure to think.

A theory of emotions and of free will, explaining their role in the
human mind, was developed by Ayn Rand, and the theory of free will was
more recently expanded by David Kelley.

Basically, the survival value of free will, and the reason why the
process of evolution had to create it, is man's ability to deal with a
wide range of abstractions. A man can form concepts, gain abstract
knowledge, and plan actions on a scale that is in principle unlimited.
He needs some control on the amount of time and effort he will spend on
each area, concept, or action. But because his range is unlimited, this
can't be controlled by built-in rules such as "always spend 1 hour thinking
about computers, 2 hours thinking about physics", etc.; man has to be free
to control it in each case by his own decision. And this necessarily also
implies the freedom to fail to think and to evade.

It seems, therefore, that free will is inherent in intelligence. If we
ever manage to build an intelligent robot, we would have to either
narrowly limit the range of thoughts and actions possible to it (in
which case we could create built-in rules for controlling the amount of
time it spends on each area), or give it free will (which will clearly
require some great research breakthroughs, probably in hardware as well
as software); and in the latter case, it will also have "the bad
characteristics" of human beings.

Eyal Mozes

BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.wisc.edu
UUCP: ...!ihnp4!talcott!WISDOM!eyal

------------------------------

End of AIList Digest
********************
