AIList Digest            Monday, 14 Apr 1986       Volume 4 : Issue 87 

Today's Topics:
Philosophy - Wittgenstein & Computer Consciousness

----------------------------------------------------------------------

Date: 9 Apr 86 00:18:00 GMT
From: pur-ee!uiucdcs!uiucdcsp!bsmith@ucbvax.berkeley.edu
Subject: Re: Natural Language processing


You are probably correct in your belief that Wittgenstein is closer to
the truth than most current natural language programming. I also believe
it is impossible to go through Wittgenstein with too fine-toothed a
comb. However, there are a couple of things to say. First, it is
patently easier to implement a computer model based on 2-valued logic.
The Investigations have not yet found a universally acceptable
interpretation (or anything close, for that matter). To try to implement
the theories contained within would be a monumental task. Second, in
general it seems that much AI programming starts as an attempt to
codify a cognitive model. However, considering such things as grant
money and egos, when the system runs into trouble, an engineering-type
solution (i.e., make it work) is usually chosen. The fact that progress
in AI is slow, and that the great philosophical theories have not yet
found their way into the "state of the art," is not surprising. But
give it time--philosophers have been working hard at it for 2500 years!

Barry Smith

------------------------------

Date: 8 Apr 86 00:32:00 GMT
From: ihnp4!inuxc!iubugs!iuvax!marek@ucbvax.berkeley.edu
Subject: Re: Natural Language processing


Interestingly enough, sentiments similar to your endorsement of L.W. are
strongly voiced with respect to Charles Sanders Peirce by semioticians.
From what I can surmise about Peircean thought, their thrust (or, trust)
appears questionable. I am not implying that this necessarily casts a pall
on the Vienna School, but my present inclination is to read the Dead Greats
for inspiration, not vindication or ready-made answers.

-- Marek Lugowski

Indiana U. CS Dept.
Bloomington, Indiana 47405
marek@indiana.csnet
--------
``I mistrust all systematizers and avoid them. The will to a system is
a lack of integrity'' -- Friedrich Nietzsche (``Twilight of the Idols, or
How One Philosophizes with a Hammer'')

``Onwards, hammerheads, bright and dangerous, we're big and strong and
we're sure of something'' -- Shriekback (``Oil and Gold'')

------------------------------

Date: Sat, 12 Apr 86 23:06:42 est
From: Nigel Goddard <goddard@rochester.arpa>
Reply-to: goddard@rochester.UUCP (Nigel Goddard)
Subject: Re: computer consciousness

In article <8604110647.AA25206@ucbvax.berkeley.edu> "CUGINI, JOHN"
<cugini@nbs-vms.ARPA> writes:
>
>Thought I'd jump in here with a few points.
>
...

>
>3. Taking up the epistemological problem for the moment, it
>isn't as obvious as many assume that even the most sophisticated
>computer performance would constitute *decisive* evidence for
>consciousness. Briefly, we believe other people are conscious
>for TWO reasons: 1) they are capable of certain clever activities,
>like holding English conversations in real-time, and 2) they
>have brains, just like us, and each of us knows darn well that
>he/she is conscious. Clearly the brain causes/supports
>consciousness and external performance in ways we don't
>understand. A conversational computer does *not* have a brain;
>and so one of the two reasons we have for attributing
>consciousness to others does not hold.
>

It is not just having a brain (for which most of us have no direct evidence
anyway), but having a head, body, mouth, eyes, voice, emotional sensitivity
and many other supporting factors (no one of which is *necessary*, but the
more of them there are, the better the evidence). I guess a brain is necessary,
but were one to come across a brain with no body, eyes, voice, ears or other
means for verifying its activity, would one consider it to be conscious?
Personally I think that the only practical criteria (i.e., the ones we use
when judging whether this particular human or robot is "conscious") are
performance ones. Is a monkey conscious? If not, why not? There are
people I meet whom I consider to be very "unconscious", i.e. their stated
explanations of their motives and actions seem to me to completely
misunderstand what I consider to be the *real* explanations.
Nevertheless, I still think they are conscious
entities, and the only way I can rationalize this paradox is that I think
they have the ability to learn to understand the *real* reasons for their
actions. This requires an ability to abstract and to make an internal model
of the self, which may be the main factors underlying what we call
consciousness.

Nigel Goddard

------------------------------

Date: 8 Apr 86 09:57:10 GMT
From: hplabs!qantel!lll-lcc!lll-crg!styx!lognet2!seismo!ll-xn!topaz!harvard!h-sc1!pking@ucbvax.berkeley.edu
Subject: Re: Computer Dialogue


In all this discussion of "feelings," "survival instinct," and
"consciousness," one point is being overlooked. That is, can you
really say that a behavioral reaction (survival instinct) is a
feeling if the animal or computer has no consciousness?

Joseph Mankoski asked whether or not one could say that the
shuttle's computers were displaying a form of "programmed
survival instinct." I think that the answer is yes. This does
not mean that shuttle missions were aborted because the computer
wanted to save itself. Biologists, however, are quick to point
out that cats run away from dogs not because they want to save
themselves, but because the sight of a dog triggers a cat's
flight (abort) mechanism. The net effect of the cat's behavior
is to increase its chances of survival, but the cat (and the
shuttle's computer) has no "desire to survive."

But we, as humans, DO have a desire to survive, don't we? When
faced with danger, we do everything in our power to avoid it. The
difference is that we are conscious of our attempts to avoid
danger, even if we do not understand them. "Why did you run away
from that snake," someone might ask. "To escape possible injury,"
we rationalize. The more truthful answer, however, is
"It just happened -- it was the first thing that came to mind."

But what of the sensation of fear that comes over us in such
situations? "Fear" is just a name we have given to the sensation
of anxiety coupled with avoidance-behavior. For the most part,
we are observers of our own behavior (and our own thoughts, for
that matter: introspection). Sure, we have control over our
instinctual tendencies, but not as much as we would like to
think. Witness the acrophobic "unable" to climb a fire-escape.
Why would courage be such an envied quality if it weren't so hard
to defeat one's instinctual (intuitive) reactions?

Unfortunately, gut-feeling tendencies can backfire, as in the
case of drug addiction. In this case, the emotional mind sets
the goal ("get drugs") and the rational mind does what it can to
satiate the emotional mind despite knowledge of the damage
being done. Phobias aren't so desirable either.

What I'm getting at is that "desires" and "feelings" are how we
experience the state of our mind, just as colors are the way we
experience light frequency and pain is the way we experience
tissue damage. To say a computer has feelings is incorrect
unless the computer is AWARE of its behavior. You could possibly
say that the shuttle's computer aborted the mission to prevent
its own death (i.e. it felt fear) if one of the sensory inputs
to the computer was the fact that it was entering the abort-state.
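
A minimal sketch in C of this arrangement (the sensor names and
thresholds here are invented purely for illustration) might be:

#include <stdio.h>

/* Hypothetical abort-decision loop: the controller's own previous
   abort state is wired back in as one of its "sensory inputs",
   per the criterion above. */
struct sensors {
    double engine_temp;    /* external reading */
    double fuel_pressure;  /* external reading */
    int    was_aborting;   /* the system's own prior state */
};

static int decide_abort(const struct sensors *s)
{
    int abort_now = s->engine_temp > 900.0 || s->fuel_pressure < 20.0;
    if (abort_now && !s->was_aborting)
        printf("entering abort-state -- and sensing that fact\n");
    return abort_now;
}

int main(void)
{
    struct sensors s = { 950.0, 35.0, 0 };
    s.was_aborting = decide_abort(&s);   /* its own state becomes an input */
    return decide_abort(&s);
}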

The same argument could be made for consciousness: to be
conscious is to be aware of one's own thought process and state
of mind (a sixth sense?). Computers (and Barry Kort's gigantic
telephone switching system) are not conscious. While they receive
input from the various "senses" (telephone exchanges, disk-
drives, users), they receive no information about themselves. One
could say that a time-sharing system that monitors its own status
is "conscious" but this is a very limited consciousness, since
the system cannot construct an abstract world-model that would
include itself, a requirement for personal identity.

If a computer could compile sensory information about itself and
the world around it into an abstract model of the "world," and
then use this model to interact with the world, then it would be
conscious. Further, if it could associate pieces of its model to
words, and words to a grammar, then it could communicate with
people and let us know "what it's like to be a computer."
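
A minimal sketch in C of such a model (a toy, with entity names
invented for illustration) might look like:

#include <stdio.h>
#include <string.h>

/* A toy "world-model": a table of named entities, one entry of
   which denotes the modeling machine itself. */
struct entity { const char *name; double x, y; };

int main(void)
{
    struct entity world[] = {
        { "dog",  1.0, 2.0 },
        { "door", 4.0, 0.0 },
        { "self", 0.0, 0.0 }   /* the model includes the modeler */
    };
    int n = sizeof world / sizeof world[0];
    int i;

    for (i = 0; i < n; i++)
        if (strcmp(world[i].name, "self") == 0)
            printf("my model places me at (%.1f, %.1f)\n",
                   world[i].x, world[i].y);
    return 0;
}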

-------
I would appreciate any reactions.


Paul King

UUCP: {seismo,harpo,ihnp4,linus,allegra,ut-sally}!harvard!h-sc4!pking
ARPA: pking@h-sc4.harvard.EDU
BITNET: pking@harvsc4.BITNET

------------------------------

Date: 9 Apr 86 23:18:21 GMT
From: decvax!linus!philabs!cmcl2!seismo!ll-xn!cit-vax!trent@ucbvax.berkeley.edu (Ray Trent)
Subject: Re: Computer Dialogue

In article <1039@h-sc1.UUCP> pking@h-sc1.UUCP (paul king) writes:
>"consciousness," one point is being overlooked. That is, can you
>really say that a behavioral reaction (survival instinct) is a
>feeling if the animal or computer has no consciousness?

Please define this concept of "consciousness" before using it.
Please do so in a fashion that does not resort to saying that
human beings are mystically different from other animals or
machines. Please also avoid circular definitions (e.g.,
consciousness is what humans have).

>is to increase its chances of survival, but the cat (and the
>shuttle's computer) has no "desire to survive."

The above request also applies to the term "desire".

>difference is that we are conscious of our attempts to avoid
...
>"It just happened -- it was the first thing that came to mind."

Huh? This pair of sentences seems to say that your definition of
"consciousness" is that consciousness is "the first thing that
[comes] to mind." I don't think that split-second decisions are a
good measure of what most people call consciousness.

> [two paragraphs that seem to reinforce the idea that
> consciousness has much to do with "gut-level reactions" and
> "instincts"]

>What I'm getting at is that "desires" and "feelings" are how we

My definition of these concepts would say that they "are" the
actions that a life process takes in response to certain stimuli.

>tissue damage. To say a computer has feelings is incorrect
>unless the computer is AWARE of its behavior. You could possibly

No, to say that a computer has self-awareness is to say that it
is AWARE of its feelings. Unless, of course, this is yet another
self-defined concept.

>say that the shuttle's computer aborted the mission to prevent
>its own death (i.e. it felt fear) if one of the sensory inputs
>to the computer was the fact that it was entering the abort-
>state.

[reductio ad absurdum] You could possibly say that a human
entered abort mode (felt fear) if one of its sensory inputs was
the fact that it was entering abort mode (feeling fear).

>telephone switching system) are not conscious. While they receive
>input from the various "senses" (telephone exchanges, disk-
>drives, users), they receive no information about themselves. One

Telephone systems receive no inputs about themselves? What about
routing information derived from information the system has about
its own damaged components?
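
Concretely, a minimal sketch of that sort of self-information
(the trunk layout is invented for illustration):

#include <stdio.h>

#define NTRUNKS 4

int main(void)
{
    /* the switch's record of its own components' health */
    int trunk_ok[NTRUNKS] = { 1, 0, 1, 1 };   /* trunk 1 is damaged */
    int t;

    for (t = 0; t < NTRUNKS; t++) {
        if (trunk_ok[t]) {
            printf("routing call over trunk %d\n", t);
            return 0;
        }
    }
    printf("no healthy trunk available\n");
    return 1;
}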

>the system cannot construct an abstract world-model that would
>include itself, a requirement for personal identity.

Here is a simple program to construct an abstract world-model
that includes the machine:

#include <stdio.h>

int main(void)
{
    /* the machine's entire "world-model": one assertion about itself */
    printf("I think, therefore I am.\n");
    return 0;
}

Try to convince me that humans do something fundamentally
different here. (seriously)

>If a computer could compile sensory information about itself and
>the world around it into an abstract model of the "world," and
>then use this model to interact with the world, then it would be
>conscious. Further, if it could associate pieces of its model to
>words, and words to a grammar, then it could communicate with
>people and let us know "what it's like to be a computer."

I give as an example the relational database program. It collects
sensory information about the world into an abstract model of the
"world" and then uses this model to interact with the world. Is
it therefore conscious? I don't think so. (how self-referential
of me) In fact, I will go further...such a program associates
pieces of its model to words and words into a grammar, and with
the appropriate database, could indeed let us know "what it's
like to be a computer," but I don't think that most people would
call it conscious.

>I would appreciate any reactions.

Ask, and you shall receive.
--
../ray\..
(trent@csvax.caltech.edu)
"The above is someone else's opinion only at great coincidence"

------------------------------

Date: 13 Apr 86 17:25:09 GMT
From: dali.berkeley.edu!regier@ucbvax.berkeley.edu (Terrance P. Regier)
Subject: Re: Computer Dialogue


trent@csvax.caltech.edu writes:

> Here is a simple program to construct an abstract world-model
> that includes the machine:
>
> #include <stdio.h>
>
> int main(void)
> {
>     /* the machine's entire "world-model": one assertion about itself */
>     printf("I think, therefore I am.\n");
>     return 0;
> }
>
> Try to convince me that humans do something fundamentally
> different here. (seriously)
                   ^^^^^^^^^

Descartes' famous assertion was the result of a period of admirably
honest introspection: After allowing himself to doubt the veracity
of his beliefs, senses, etc., he found that some things (well, at
least one thing) CANNOT be doubted. I think, therefore I am. Your
admittedly concise and elegant program fails to capture the integrity
and awareness of self implicit in the statement. It is closer in
spirit to an involuntary burp.

-- Terry

------------------------------

End of AIList Digest
********************
