AIList Digest            Friday, 18 Apr 1986       Volume 4 : Issue 94
Today's Topics:
Philosophy - Consciousness
----------------------------------------------------------------------
Date: 14 Apr 86 07:44:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: More wrangling on consciousness
> >me: Briefly, we believe other people are conscious
> >for TWO reasons: 1) they are capable of certain clever activities,
> >like holding English conversations in real-time, and 2) they
> >have brains, just like us, and each of us knows darn well that
> >he/she is conscious.
>
> Nigel Goddard: Personally I think that the only practical
> criteria (i.e. the ones we use when judging whether this
> particular human or robot is "conscious") are performance ones.
> Is a monkey conscious? If not, why not? There are people I
> meet who I consider to be very "unconscious", i.e. their stated
> explanations of their motives and actions seem to me to
> completely misunderstand what I consider to be the *real*
> explanations. Nevertheless, I still think they are conscious
> entities, and the only way I can rationalize this paradox is
> that I think they have the ability to learn to understand the
> *real* reasons for their actions. This requires an ability to
> abstract and to make an internal model of the self, which may be
> the main factors underlying what we call consciousness.
At the technical level, I think it's simply wrong to dismiss
brains as a criterion for consciousness - if mechanism M
causes C (consciousness) and enables P (performance), then
clearly it is an open question whether something that can do P,
but does not have M, does or does not have C.
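(Put propositionally: from M -> C and M -> P, nothing follows about
P -> C. A short Python check, purely illustrative, that enumerates
truth assignments and finds the premises consistent with P-without-C:

    # Enumerate all truth assignments to M, C, P and keep those where
    # the premises (M implies C, M implies P) hold, yet P holds and C
    # fails. A non-empty result means P-without-C is logically open.
    from itertools import product

    counterexamples = [
        (m, c, p)
        for m, c, p in product([False, True], repeat=3)
        if (not m or c) and (not m or p)   # premises: M -> C, M -> P
        and p and not c                    # yet: P without C
    ]
    print(counterexamples)   # e.g. [(False, False, True)]

The non-empty result is all "open question" means here: the premises
do not settle whether a P-performer without M has C.)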
At the "gut" level I think the whole tenor of the reply misses
the point that consciousness is a very "low-level", primitive
sort of phenomenon. Do severely retarded persons have "the
ability to learn to understand the *real* reasons for their
actions...an ability to abstract and to make an internal model
of the self" ? or cows, or cats? Yet no one, I hope, doubts
that they are conscious (eg, can feel pain, experience shapes,
colors, sounds). This has very little to do with any clever
information processing capabilities. And it is these "raw
feelings" that a) are essential to what most people mean by
consciousness and b) seem least susceptible to implementation by
Lisp machines, regardless of size.
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: 13 Apr 86 09:50:25 GMT
From: ihnp4!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue
I think Paul King is right on the mark in his comments about the nature
of feelings, instincts, and conscious awareness. Paul's point about
a system having a world-model which includes the system itself as an
entity within that world model is perhaps the most salient point in
his article. Self-diagnosis, self-reconfiguration, and self-repair
are already found in complex computer installations. Self-perpetuation
is the higher-level goal of those three capabilities. The first
industrial robots were put to work to build--you guessed it--more
industrial robots. So we have self-reproduction, as well. In the
case of industrial robots, evolution is speeded up by the hand of
the creator, who introduces new models through intelligent intervention.
We no longer have to wait for a serendipitous random perturbation to
yield a more successful offspring. In my original Computer Dialogues
#1 and #2, I playfully introduced a pair of self-programming computers
who gradually developed a protocol for mutual self-learning. I think
it may be possible, by the end of the millennium, to create the first
rudimentary Artificial Sentient Being.
--Barry Kort ...ihnp4!houxm!hounx!kort
------------------------------
Date: 13 Apr 86 22:07:13 GMT
From: tektronix!uw-beaver!ssc-vax!eder@ucbvax.berkeley.edu (Dani Eder)
Subject: Re: Computer Dialogue
> Before the recent tragedy, there had been a number of
> instances where the space shuttle computers aborted the mission in the
> final seconds before launch. My explanation for this was that the
> on-board computers were displaying a form of 'programmed survival
> instinct.' In short: they were programmed to survive, and if the
> launch had continued, they might not have.
>
In almost every countdown there have been delays because some
measured parameter of the vehicle was out of tolerance. The ground
launch sequencer, which controls events from T-9 minutes to T-25 seconds,
and the onboard computers, which control events in the last 25 seconds,
are required because there are too many time-critical events for humans
to handle. They command a series of actions, such as opening a valve, and
take measurements from sensors, such as the temperature in the
combustion chamber. When a sensor reading is outside allowable limits,
the software stops the countdown and attempts to return the vehicle to
a 'safe' condition.
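What Dani describes is a straightforward command-and-limit-check loop.
A minimal sketch in Python, with hypothetical sensor names, limits, and
callbacks -- not the actual ground launch sequencer software:

    # Hypothetical allowable ranges for two sensors (arbitrary units).
    SENSOR_LIMITS = {
        "combustion_chamber_temp": (0.0, 3300.0),
        "lox_valve_position":      (0.95, 1.0),   # fraction open
    }

    def countdown_step(read_sensor, command, safe_vehicle):
        """Command one event, then check every sensor against its limits."""
        command("open_lox_valve")
        for name, (low, high) in SENSOR_LIMITS.items():
            value = read_sensor(name)
            if not (low <= value <= high):
                safe_vehicle()   # stop the count, return to 'safe' state
                return False     # countdown halted
        return True              # all readings in tolerance; continue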
Earlier in the countdown, events occur at a slower pace, and humans
monitoring the data coming from the sensors have often called a halt to
the operation. The Shuttle system, men and machines, is designed to operate
under the rule 'do not launch unless all the data says it is safe to do so'.
Because the early 1970's technology used in the Shuttle is marginal for
a reusable transportation system, EVERYTHING has to be working just right
for a successful launch.
The computers used onboard the Shuttle are too dumb even to be programmed
for survival. If there is an in-flight abort that requires returning to the
ground from halfway to orbit, the pilot must turn a rotary switch on the
console to choose between returning to Florida and landing in Senegal. The
switch controls loading of data and routines into the computers. This was
required because the software for flying the Shuttle runs to ~500k of code, and
the computers can only handle 64k. The decision routines for which part of
the software to swap in were left in the pilot's head.
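For readers unfamiliar with overlays: only one abort mode's routines fit
in core at a time, and here the selection logic lives in the pilot rather
than the program. A toy illustration in Python (the names are invented,
not the shuttle's own):

    # The full program exceeds memory, so only the routines for the
    # chosen abort mode are loaded. The rotary switch, not software,
    # decides which overlay that is.
    OVERLAYS = {
        "RTLS": "return_to_launch_site.bin",   # back to Florida
        "TAL":  "transatlantic_abort.bin",     # land in Senegal
    }

    def select_abort_mode(switch_position, load_overlay):
        """Load the code for the abort mode the pilot selected."""
        load_overlay(OVERLAYS[switch_position])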
Dani Eder/Advanced Space Transportation/Boeing/ssc-vax!eder
------------------------------
Date: 14 Apr 86 19:26:11 GMT
From: decvax!linus!faron!rubenk@ucbvax.berkeley.edu (Ruben J. Kleiman)
Subject: Re: Natural Language processing
In article <3500011@uiucdcsp> bsmith@uiucdcsp.CS.UIUC.EDU writes:
>
>You are probably correct in your belief that Wittgenstein is closer to
>the truth than most current natural language programming. I also believe
>it is impossible to go through Wittgenstein with a fine-enough-toothed
>comb. However, there are a couple of things to say. First, it is
>patently easier to implement a computer model based on 2-valued logic.
>The Investigations have not yet found a universally acceptable
>interpretation (or anything close, for that matter). To try to implement
>the theories contained within would be a monumental task. Second, in
>general it seems that much AI programming starts as an attempt to
>codify a cognitive model. However, considering such things as grant
>money and egos, when the system runs into trouble, an engineering-type
>solution (i.e., make it work) is usually chosen. The fact that progress
>in AI is slow, and that the great philosophical theories have not yet
>found their way into the "state of the art," is not surprising. But
>give it time--philosophers have been working hard at it for 2500 years!
>
>Barry Smith
Whoever believes that "engineering-type solution[s]" are the consequence
of small grants or large egos:
1. should be able to conceive of an implementation of some concept (or the
concept of an implementation) which does not involve
"engineering-type solutions."
2. should NOT be able to give form to the notion of a "lag"
between research ("great philosophical theories") and
implementation ("state of the art").
- Ruben
------------------------------
Date: Tue, 15 Apr 86 15:08:54 EST
From: tes%bostonu.csnet@CSNET-RELAY.ARPA
Subject: please include
In Volume 4, Issue 87, Ray Trent wrote
> Please define this concept of "consciousness" before
> using it. Please do so in a fashion that does not resort
> to saying that human beings are mystically different
> from other animals or machines. Please also avoid self-
> important definitions. (e.g. consciousness is what humans
> have)
...
> The above request also applies to the term "desire".
...
> My definition of these concepts ["desires" and "feelings"]
> would say that they "are" the actions that a life process
> takes in response to certain stimuli.
Bravo, but with qualification:
Mr. Trent has the "physicalist" point of view, which recognizes ONLY
objects and phenomena describable in the "language of science" (what
I mean by this is the language that deals exclusively with molecules,
gravity, velocity, entropy, etc.).
This general view of the universe is FINE, and it's obviously powerful
and result-oriented (look at what we have done with it in the last
few hundred years). BUT - this view is not the only one, or the
"right" one, in any sense.
I'll bet that the term "consciousness" is undefinable in the language of
science, and therefore useless to the physicalists. (I have a hunch
that physicalists cannot get any further than a behavioral or
mechanistic description of conscious beings). Therefore,
in discussions about the mind, like the one going on in AIList,
perhaps each participant should make clear whether
he or she is assuming the "scientific" or some "non-scientific" viewpoint.
If one is to adopt the physicalist approach, I agree with Ray Trent that
terms like "desire" and "feeling" and "consciousness" can only be used
if they have been (sorry if I'm putting words into Ray's mouth here)
mechanistically defined.
Tom Schutz
CSNET: tes@bu-cs
ARPA: tes%bu-cs@csnet-relay
UUCP: ...harvard!bu-cs!tes
------------------------------
Date: Tue, 15 Apr 86 15:29:13 EST
From: tes%bostonu.csnet@CSNET-RELAY.ARPA
Subject: One more little thing
Just a brief question here:
Nigel Goddard wrote in Volume 4 Issue 87
> I meet [people] who I consider to be very "unconscious",
> i.e. their stated explanations of their motives and actions
> seem to me to completely misunderstand what I consider to
> be the *real* explanations.
What, by Jove, is a "*real* explanation" ??????????????????????
I can't digest my food properly until I find out.
Tom Schutz
CSNET: tes@bu-cs
------------------------------
Date: 14 Apr 86 13:19:55 GMT
From: ihnp4!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue
I enjoyed Ray Trent's rejoinder to Paul King's article on computer
feelings and self-awareness. In particular the description of the
relational database system--as an entity that collects and organizes
information into an abstract model that it then uses to interact with
the world--was most suggestive. Now if we give that system some further
rules of logic and assign it some goals, could we turn it into a "rational
database system"? (I would give it the goal of nudging the external world
into one which operates more successfully than the current implementation.)
--Barry Kort ...ihnp4!houxm!hounx!kort
------------------------------
End of AIList Digest
********************