AIList Digest             Friday, 2 Oct 1987      Volume 5 : Issue 226 

Today's Topics:
Comments - Goal of AI & Nature of Computer Science

----------------------------------------------------------------------

Date: 28 Sep 87 14:36:36 GMT
From: nbires!isis!csm9a!bware@ucbvax.Berkeley.EDU (Bob Ware)
Subject: Re: Goal of AI: where are we going?

>We all admit that the human mind is not flawless. Bias decisions
>can be made due to emotional problems, for instance. ...

The above has been true for all of recorded history and remains true
for almost everyone today. While almost everyone's mind is flawed due
to emotional problems, new data is emerging that indicates the mind can
be "fixed" in that regard. To see what I am referring to, read L Ron
Hubbard's book on "Dianetics".

MAIL: Bob Ware, Colorado School of Mines, Golden, Co 80401, USA
PHONE: (303) 273-3987
UUCP: hplabs!hao!isis!csm9a!bware or ucbvax!nbires!udenva!csm9a!bware

------------------------------

Date: 29 Sep 87 17:25:55 GMT
From: eugene@pioneer.arpa (Eugene Miya N.)
Reply-to: eugene@pioneer.UUCP (Eugene Miya N.)
Subject: Re: Is Computer Science Science? Or is it Art? [sort of hope
not]

In article <8709290724.AA10633@ucbvax.Berkeley.EDU> solar!shf
(Stuart Ferguson) writes:
>+-- cdfk@hplb.CSNET (Caroline Knight) writes:
>| ... I believe that in software there is a better analogy with art
>| and illustration than engineering or science. I have noticed that this
>| is not welcomed by many people in computing but this might be because
>| they know so little of the thought processes and planning that go on
>| behind the development of, say, a still life or an advertising poster.
>
>This line of thinking appeals to me a lot (and I'm a "person in computing,"
>having 10+ years programming experience). I can appreciate this article
>because my own thinking has led me to somewhat the same place regarding
>"Computer Science."

I'm glad I waited a bit on this. Two years ago, I met Nico Habermann of
CMU. At that time I suggested CS could learn more from cognitive sciences
(psychology). Habermann has an EE PhD. He didn't like this idea due to
the softness. I suggest others try this question on other hard
CS-types. I only ask that you avoid analogies to introspection.

While the art analogy to computing has a certain appeal, especially the
iterative and prototypical aspects, and it also has Knuth behind it,
it also has some problems. Rather than mention them, I suggest you
send mail to DEK and report back.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"You trust the `reply' command with all those different mailers out there?"
"Send mail, avoid follow-ups. If enough, I'll summarize."
{hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

------------------------------

Date: 29 Sep 87 17:59:04 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: Goal of AI: where are we going?

In article <178@usl>, khl@usl (Calvin K. H. Leung) writes:
> Should the ultimate goal of AI be the perfecting of human intel-
> ligence, or the imitating of intelligence in human behavior?
>
> We all admit that the human mind is not flawless... So there is
> no point trying to imitate the human thinking process. Some
> current research areas (neural networks, for example) use the
> brain as the basic model. Should we also spend some time on the
> investigation of some other models which could be more efficient
> and reliable?

I always thought there were several different currents going in AI.
One stream is trying to learn how the human mind works and imitate it.
Another stream is trying to fill in the gaps in the capabilities of the
human mind by using unique machine capabilities in combination with
imitations of the mind. Some people are working with research
objectives, some have application objectives.

We don't need a unique goal for AI. We contain multitudes.

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 29 Sep 87 18:25:36 GMT
From: nysernic!rpicsb8!csv.rpi.edu!franklin@rutgers.edu (W. Randolph
Franklin ( WRF ))
Subject: Re: Is Computer Science Science? (Funding)

In article <2868@ames.arpa> eugene@pioneer.UUCP (Eugene Miya N.) writes:
>Status Quo? Hopefully a short note:
>The reason why you have to make some clear distinctions can partially
>be read in the latest CPSR [Computer Professionals for Social
>Responsibility] Newsletter. It appears in the halls of places like
>Ames, JPL, DOE Labs, the NAS (Natl. Acad. Sci), NSF, etc. Basically if
>you are not a science, you don't get funding from those Science
>Agencies.
>
>This is the difference between Geography (seen as an art) and Geology.
>I studied remote sensing for several years. The fact that it was in a
>geography --->cartography -->graph --> "art" department was a big
>minus. RS is pretty respectable in some circles, and like AI, disreputable

This may be improving. NSF is soliciting proposals to set up a center
for excellence in Geographic Information Systems.


Wm. Randolph Franklin
Preferred net address: Franklin@csv.rpi.edu
Alternate net: wrf@RPITSMTS.BITNET
Papermail: ECSE Dept, Rensselaer Polytechnic Institute,
Troy NY, 12180
Telephone: (518) 276-6077
Telex: 6716050 RPI TROU -- general RPI telex number.

Wm. Randolph Franklin, RPI, 6026 JEC, (518) 276-6077, Franklin@csv.rpi.edu

------------------------------

Date: 30 Sep 87 02:08:00 GMT
From: munnari!comp.vuw.ac.nz!lindsay@uunet.uu.net (Lindsay Groves)
Subject: Re: Is Computer Science Science?

In article <5068@jade.BERKELEY.EDU> ed298-ak@violet.berkeley.edu
(Edouard Lagache) writes:
>>>
>.... Does Computer Science have any laws?
>>>
>>"Anything that can go wrong will go wrong."
>> ...
>
> Hey those aren't laws from Computer Science, they are from the
> Science (Religion?) of Murphyology!
>
> E.L.

The August issue of the Communications of the ACM contains an article by
C.A.R.Hoare and eight others, entitled "Laws of Programming". One of their
laws (4) is:
ABORT U P = ABORT
where ABORT (which they denote by an upside down T) is a statement that can
do anything ("It places no constraint on the executing machine, which may do
anything, or fail to do anything; in particular, it may fail to terminate"),
and U is nondeterministic choice.

The text explaining this law says:
"This law is sometimes known as Murphy's Law, which states, "If it can go
wrong it will"; the left-hand side describes a machine that CAN go wrong
(or can behave like P), whereas the right-hand side might be taken to
describe a machine that WILL go wrong. But the true meaning of the law
is actually worse than this: The program ABORT will not always go wrong --
only when it is most disastrous for it to do so! The abundance of empirical
evidence for law (4) suggests that it should be taken as the first law of
computer programming."


It seems that being part of "Murphyology" doesn't preclude something from
being a law of Computer Science -- this one is given a very precise
statement and interpretation as a law of programming, which must also count
as a law of Computer Science. Given that Computer Science draws heavily on
such fields as mathematics, logic, linguistics (Chomsky's hierarchy has far
more relevance to Computer Science than it does to linguistics!), electrical
engineering etc., it is not surprising that laws in Computer Science should
bear similarity to laws in other areas.
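For concreteness, law (4) can be illustrated with a toy model (my own
sketch, not the formalism of Hoare et al.'s paper): treat a program as the
set of outcomes it might produce, ABORT as the set of all possible outcomes
(it places no constraint on the machine), and nondeterministic choice U as
set union. The law then falls out immediately:

```python
# Toy model of law (4): ABORT U P = ABORT.
# A "program" is the set of outcomes it may produce over a small finite
# universe; ABORT may do anything, so it is the whole universe, and
# nondeterministic choice (U) is set union. This is an illustrative
# sketch only -- the outcome names are invented for the example.

UNIVERSE = frozenset({"ok", "wrong_answer", "crash", "nonterminating"})
ABORT = UNIVERSE  # places no constraint on the executing machine

def choice(p, q):
    """Nondeterministic choice: the machine may behave like p or like q."""
    return p | q

P = frozenset({"ok"})  # a program that always succeeds

# Offering ABORT as one branch of a choice licenses the machine to do
# anything at all, so the choice is indistinguishable from ABORT itself.
assert choice(ABORT, P) == ABORT
```

In this reading Murphy's Law is just the observation that the union
swallows the well-behaved branch: the possibility of going wrong is never
cancelled by also being able to go right.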

Lindsay Groves
Logic programmers' theme song: "The first cut is the deepest"

------------------------------

Date: 30 Sep 87 17:42:21 GMT
From: uwslh!lishka@speedy.wisc.edu (Christopher Lishka)
Subject: Re: Goal of AI: where are we going?

***Warning: FLAME ON***

In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
>>We all admit that the human mind is not flawless. Bias decisions...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The expression "we all" does not apply to me, at very least. Some of
us (at least myself) like to believe that the human mind should not be
considered to be either flawed or flawless...it only "is." I feel
that making a judgement on whether or not everyone admits that the
human mind is flawed happens to be a biased decision on the above
net-reader's part. Realize that not everyone has the same views as
the above...

>>...can be made due to emotional problems, for instance. ...
^^^^^^^^^^^^^^^^^^

Is this statement to be read as "emotional problems can cause bias
decisions, which are flaws in the human mind"? If so, then I
heartily disagree, because I once again feel that emotional problems
and/or bias decisions are not indicative of flaws in the human
mind...see above for my reasons.

>
>The above has been true for all of recorded history and remains true
>for almost everyone today. While almost everyone's mind is flawed due
^^^^^^^^^^
>to emotional problems, new data is emerging that indicates the mind can...
^^^^^^^^^^^^^^^^^^^^^

Again, I don't feel that my mind is "flawed" by emotional problems.
To me that seems to be a very "Western" (and I am making a rather
stereotyped remark here) method of thinking. As I have grown up with
parents who have Buddhist values and beliefs, I think that making a
value judgement such as "human minds are flawed because of..." should
be indicated as such...there is no way to prove that sort of "fact."
For all I know or care, the human mind is neither perfect nor flawed;
it just "is," and I don't wish to make sweeping generalities such as
the above. There are many other views of the mind out there, and I
recommend looking into *all* Religious views as well as *all*
Scientific views before even attempting a statement like the above
(which would easily take more than a lifetime).

>...be "fixed" in that regard. To see what I am referring to, read L Ron
^^^^^
>Hubbard's book on "Dianetics".

To me this seems to be one of many problems in A.I.: the assumption
that the human mind can be looked at as a machine, and can be analyzed
as having flaws or not, and subsequently be fixed or not. That sort
of thinking in my opinion belongs more in one's Personal Philosophy and
probably should not be used in a "Scientific" (ugghh, another
hard-to-pin-down word) argument, because it is damned hard to prove,
if it is able to be proven at all.

I feel that the mind just "is," and one cannot go around making value
judgements on another's thoughts. Who gives anyone else the right to
say a person's mind is "flawed?" To me that kind of judgement can
only be made by the person "owning" the mind (i.e. who is thinking and
communicating with it!), and others should leave well enough alone.
Now I realize that this brings up arguments in other fields (such as
Psychology), but I feel A.I. should try and move away from these sort
of value judgements.

A comment: why don't A.I. "people" use the human mind as a model, for
better or for worse, and not try to label it as "flawed" or "perfect?"
In the first place, it is like saying that something big (like the
U.S. Government) is "flawed;" this kind of thing can only be proven
under *certain*conditions*, and is unlikely to hold for all possible
"states" that the world can be in. In the second place, making that
kind of judgement would seem to be fruitless given all that we
*do*not* know about the human brain/mind/soul. It seems to me to be
like saying "hmmmm, those damned quarks are fundamentally flawed", or
"neuronal activity is primarily flawed in the lipid bilayer membrane."
I feel that we as humans just do not know diddley about the world
around us, and to say it is flawed is a naive statement. Why not just
look at the human mind/brain as something that has evolved and existed
over time, and therefore may be a good model for A.I. techniques UNDER
CERTAIN CIRCUMSTANCES? A lot fewer people would be offended...

***FLAME*OFF***

Sorry if the above offends anyone...but the previous remarks offended
me enough to send a followup message around the world. If one is
going to make remarks based on very personal opinions, try to indicate
that they are such, and please remember that not everyone thinks the
way you do.

Of course, pretty much everything I said above is a personal opinion,
and I don't presume that even one other person thinks the same way as
I do (but it would be nice to know that others think similarly ;-).
Disclaimer: the above views are my thoughts only, and do not reflect
the views of my employer, although there is evidence that my
cockatiels are controlling my thoughts !!! ;-)

-Chris

--
Chris Lishka /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
\{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

------------------------------

Date: 30 Sep 87 22:09:09 GMT
From: topaz.rutgers.edu!josh@rutgers.edu (J Storrs Hall)
Subject: Re: Goal of AI: where are we going?

lishka@uwslh.UUCP (Christopher Lishka) writes:
In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
>>We all admit that the human mind is not flawless. Bias decisions...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The expression "we all" does not apply to me, at very least. Some of
us (at least myself) like to believe that the human mind should not be
considered to be either flawed or flawless...it only "is."

It seems to me that this simply means that you hold the words "flawed"
and "flawless" to be meaningless. It is as if Bob Ware were saying
that the human mind were not plegrontless. Only I don't see why I
would get so upset if I saw people saying that minds are plegronted
at best, even if I didn't understand what they meant by the term.
I would instead make an effort to comprehend the concepts being used.

>>...can be made due to emotional problems, for instance. ...

Is this statement to be read as "emotional problems can cause bias
decisions, which are flaws in the human mind"? If so, then I
heartily disagree, because I once again feel that emotional problems
and/or bias decisions are not indicative of flaws in the human
mind...see above for my reasons.

I would say that an emotional *problem* is by definition a flaw.
If you believe that Manson and Hitler and Caligula were not flawed,
but that is just the "way they were", and there is no reason to
prefer Thomas Aquinas over Lyndon LaRouche, then your own reasoning
is distinctly flawed.

To me that seems to be a very "Western" (and I am making a rather
stereotyped remark here) method of thinking. As I have grown up with
parents who have Buddhist values and beliefs, I think that making a
value judgement such as "human minds are flawed because of..." should
be indicated as such...there is no way to prove that sort of "fact."

Can you say "evangelical fundamentalist mysticism"? Your Eastern
values seem to be flavored by a strong Western intellectual
aggressiveness, which seems contradictory. Twice the irony in a
pound of holy calves liver.

There are many other views of the mind out there, and I
recommend looking into *all* Religious views as well as *all*
Scientific views before even attempting a statement like the above
(which would easily take more than a lifetime).

What an easy way to sidestep doing any real thinking. Do you suggest
that we should read all the religious writings having to do with
angels before we attempt to build an airplane? Do you think that one
must be an expert on faith healing and the casting out of demons
before he is allowed to make a statement about this interesting mold
that seems to kill bacteria?

In Western thought it has been realized at long and arduous last that
the appeal to authority is fallacious. Experiment works; the real
world exists; objective standards can be applied. Even to people.


>...be "fixed" in that regard. To see what I am referring to, read L Ron
>Hubbard's book on "Dianetics".

Experiment (the church of scientology) shows that Hubbard's ideas in
this regard are hogwash. Hubbard's phenomenon had much more to do
with the charismatic religious leaders of the past, than the rational
enlightenment of the future.

To me this seems to be one of many problems in A.I.: the assumption
that the human mind can be looked at as a machine, and can be analyzed
as having flaws or not, and subsequently be fixed or not.

Surely this is independent of the major thrust of AI, which is to
build a machine that exhibits behaviors which, in a human, would be
called intelligent. It is true that most AI researchers "believe that
the mind is a machine"
, but it seems that the alternative is to
suggest that human intelligence has a supernatural mechanism.

That sort
of thinking in my opinion belongs more in one's Personal Philosophy and
probably should not be used in a "Scientific" (ugghh, another
hard-to-pin-down word) argument, because it is damned hard to prove,
if it is able to be proven at all.

My personal philosophy *is* scientific, thank you, and it is an
objectively better one than yours is.

I feel that the mind just "is," and one cannot go around making value
judgements on another's thoughts. Who gives anyone else the right to
say a person's mind is "flawed?"

Who gives me the right to say that 2+2=4 when you feel that it should
be 5? If the Wisconsin State Legislature passed a law saying that it
was 5, they would be wrong; if everybody in the world believed it was
5, they would be wrong; if God Himself claimed it was 5, He would be
wrong.

A comment: why don't A.I. "people" use the human mind as a model, for
better or for worse, and not try to label it as "flawed" or "perfect?"
In the first place, it is like saying that something big (like the
U.S. Government) is "flawed;" this kind of thing can only be proven
under *certain*conditions*, and is unlikely to hold for all possible
"states" that the world can be in.

But the U.S. Government IS flawed...

In the second place, making that
kind of judgement would seem to be fruitless given all that we
*do*not* know about the human brain/mind/soul.

Back in the middle ages, we didn't know much about the Black Plague,
but it was obvious that someone who caught it became pretty flawed
pretty fast. Furthermore, this small understanding was considered
sufficient grounds to inflict the social snubs of not associating
with such a person.

It is incredibly arrogant to declare that we must not make any
judgements until we know everything. The whole point of having
a human mind rather than a rutabaga is that you *are* able to make
judgements in the absence of complete information. Brains evolving in
a natural setting have always had to make *life-and-death* decisions
on the spur of the moment with whatever information was available.
Is that large furry creature dangerous? You've never seen a grizzly
bear before. No time to consult the views of all the world's ancient
religions on the subject...

I feel that we as humans just do not know diddley about the world
around us, and to say it is flawed is a naive statement.

To say that it is not flawed is just simply idiotic. If you apply
enough sophistry you may manage to get the conversation to a level
where the original statement is meaningless. For example, there are
(or may be) no "flawed" atoms in a broken radio. But to change the
level of discussion as a rhetorical device is tantamount to lying.
To do it without realizing you are doing it is tantamount to
gibberish.

Sorry if the above offends anyone...

It offends me greatly. The anti-scientific mentality is an emotional
excuse used to avoid thinking clearly. It would be much more honest
to say "I don't want to think, it's too hard work." Can't you see the
contradiction involved in criticizing someone for exercising his
judgement?

The champions of irrationality, mysticism, and superstition have
emotional problems which bias their cognitive processes. Their minds
are flawed.

--JoSH

------------------------------

Date: 30 Sep 87 16:31:00 GMT
From: uxc.cso.uiuc.edu!osiris.cso.uiuc.edu!goldfain@a.cs.uiuc.edu
Subject: Re: Goal of AI: where are we going?


Bob Ware, of Colorado School of Mines, writes :

> ... While almost everyone's mind is flawed due to emotional problems, new
> data is emerging that indicates the mind can be "fixed" in that regard. To
> see what I am referring to, read L Ron Hubbard's book on "Dianetics".

I suppose that if someone feels they have emotional problems and turns to Mr.
Hubbard for help, there is some sense to that. He ought to know about them,
since reports have indicated over the years that he has more than his fair
share of them ... :-)

Alternatively, one could consult someone who actually has credentials in
psychology. "You pays your money and you takes yer choice."

- Mark Goldfain
(ARPA: goldfain@osiris.cso.uiuc.edu)

------------------------------

End of AIList Digest
********************
