AIList Digest            Sunday, 22 Feb 1987       Volume 5 : Issue 49 

Today's Topics:
Philosophy - Consciousness & Other Minds
Scientific Method - Formalization in AI

----------------------------------------------------------------------

Date: 14 Feb 87 03:33:12 GMT
From: well!wcalvin@LLL-LCC.ARPA (William Calvin)
Subject: Re: More on Minsky on Mind(s)


Stevan Harnad replies to my Darwin Machine proposal for consciousness
(2256@well.uucp) as follows:
> Summary: No objective account of planning for the future can give an
> independent causal role to consciousness, so why bother?
> wcalvin@well.UUCP writes:
>
>> Rehearsing movements may be the key to appreciating the brain
>> mechanisms [of consciousness and free will]
>
> But WHY do the functional mechanisms of planning have to be conscious?
> ...Every one of the internal functions described for a planning,
> past/future-oriented device of the kind Minsky describes (and we too
> could conceivably be) would be physically, causally and functionally EXACTLY
> THE SAME -- i.e., would accomplish the EXACT same things, by EXACTLY the same
> means -- WITHOUT being interpreted as being conscious. So what functional
> work is the consciousness doing? And if none, what is the justification
> for the conscious interpretation of any such processes...?
>
Why bother? Why bother to talk about the subject at all? Because one
hopes to understand the subject, and maybe to extend our capabilities by
appreciating the mechanistic underpinnings a little better. I am describing a
stochastic-plus-selective process that, I suggest, accounts for many of the
things which are ordinarily subsumed under the topic of consciousness. I'd
like the reactions of people who've argued consciousness more than I have,
who could perhaps improve on my characterization or point out what it can't
subsume.
I don't claim that these functional aspects of planning (I prefer to
just say "scenario-spinning" rather than something as purposeful-sounding as
planning) are ALL of consciousness -- they seem a good bet to me, worthy of
careful examination, so as to better delineate what's left over after such
stochastic-plus-selective processes are accounted for. But to talk about
consciousness as being purely personal and subjective and hence beyond
research -- that's just a turn-off to developing better approaches that are
less dependent on slippery words.
That's why one bothers. We tend to think that humans have something
special going for them in this area. It is often confused with mere
appreciation of one's world (perceiving pain, etc.) but there's nothing
uniquely human about that. The world we perceive is probably a lot more
detailed than that of a spider -- and even of a chimp, thanks to our constant
creation of new schemata via word combinations. But if there is something
more than that, I tend to think that it is in the area of scenario-spinning:
foresight, "free will" as we choose between candidate scenarios, self-
consciousness as we see ourselves poised at the intersection of several
scenarios leading to alternative futures. I have proposed a mechanistic
neurophysiological model to get us started thinking about this aspect of
human experience; I expect it to pare away one aspect of "consciousness" so
as to better define, if anything, what remains. Maybe there really is a
little person inside the head, but I am working on the assumption that such
distributed properties of stochastic neural networks will account for the
whole thing, including how we shift our attention from one thing to another.
Even William James in 1890 saw attention as a matter of competing scenarios:
     "[Attention] is the taking possession by the mind, in
     clear and vivid form, of one out of what seem several
     simultaneously possible objects or trains of thought."
To those offended by the notion that "chance rules," I would point out
that it doesn't: like mutations and permutations of genes, neural stochastic
events serve as the generators of novelty -- but it is selection by one's
memories (often incorporated as values, ethics, and such) that determines what
survives. Those values rule. We choose between the options we generate, and
often without overt action -- we just form a new memory, a judgement on file
to guide future choices and actions.
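
To make the generate-and-select loop concrete, here is a minimal sketch in
Python (a toy rendering, not the neurophysiological model itself; the value
memory and its numbers are hypothetical stand-ins for one's accumulated
judgements):

import random

# Hypothetical value memory: the learned worth of elementary acts.
MEMORY = {"grasp": 0.9, "throw": 0.4, "wait": 0.1}

def spin_scenarios(n=10):
    """Stochastic generator of novelty: random strings of candidate acts."""
    acts = list(MEMORY)
    return [tuple(random.choice(acts) for _ in range(3)) for _ in range(n)]

def select(scenarios):
    """Selection by memory: the best-valued candidate scenario survives."""
    return max(scenarios, key=lambda s: sum(MEMORY[a] for a in s))

chosen = select(spin_scenarios())
print("acted-on scenario:", chosen)

# Choosing need not issue in overt action; it may simply file a judgement
# that guides future choices:
MEMORY[" ".join(chosen)] = 1.0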
And apropos chance, I cannot resist quoting Halifax:
"
He that leavth nothing to chance
will do few things ill,
but he will do very few things."
He probably wasn't using "
chance" in quite the sense that I am, but it's
still appropriate when said using my stochastic sense too.

William H. Calvin             BITNET: wcalvin@uwalocke
University of Washington      USENET: wcalvin@well.uucp
Biology Program NJ-15         206/328-1192 or 543-1648
Seattle WA 98195

------------------------------

Date: 12 Feb 87 19:12:48 GMT
From: mcvax!ukc!reading!onion!minster!adt@seismo.css.gov
Subject: Re: Harnad's epiphenomenalism

In article <4021@quartz.Diamond.BBN.COM> aweinste@Diamond.BBN.COM
(Anders Weinstein) writes:

>Well, I don't think we ought to give this up so easily. I would urge that
>cognitivists *not* buy into the premise of so many of Harnad's replies: the
>existence of some weird parallel universe of subjective experience.
>(Actually, *multiple* such universes, one per conscious subject, though of
>course the existence of more than my own is always open to doubt.) We should
>recognize no such private worlds. The most promising prospect we have is that
>conscious experiences are either to be identified with functional states of
>the brain or eliminated from our ultimate picture of the world. How this
>reduction is to be carried out in detail is naturally a matter for
>empirical study to reveal, but this should remain one (distant) goal of
>mind/brain inquiry.
>
>Anders Weinstein aweinste@DIAMOND.BBN.COM
>BBN Labs, Cambridge MA

Why is it necessary to assert that there are no subjective universes? All that
is necessary is that everyone in their own subjective universe agrees on the
definition of consciousness as they perceive it. Eliminating conscious
experiences from our ultimate picture of the world sounds like throwing away
half the results so that the theory fits. The analogy of our understanding of
gold in terms of its atomic structure is a useful one, but it does not require
the rejection of subjective universes. If objectivism is taken to its limit as
above, then surely it must be possible to define "beautiful" in terms of
physical states of mind; or "beautiful" should be eliminated from our ultimate
picture of the world; or "beautiful" is not a conscious experience.
I would be interested to know which of these possibilities you support.

------------------------------

Date: 14 Feb 87 21:57:45 GMT
From: brothers@topaz.rutgers.edu (Laurence R. Brothers)
Subject: Submission for mod-ai

So...? I think you've basically restated a number of properties of
intelligence which AI researchers have been exploring for some time,
with varying degrees of success.

There are two REAL reasons why you can't build an "intelligent"
machine today:

1) Since no one really knows how people think, we can't build machines
which accurately model ourselves.

2) Current machines do not have anything like the kind of computing
power necessary for intelligence.

Ray@Boeing says:
>Manipulation of symbols is insufficient by itself to duplicate human
>performance; it is necessary to treat the perceptions and experiences the
>symbols *symbolize*. Put a symbol for red and a symbol for blue in a pot,
>and stir as you will, there will be no trace of magenta.

Look, manipulation of symbols by a program is analogous to
manipulation of neural impulses by a brain. When you reduce far
enough, EVERYTHING is typographical/syntactical. The neat thing about
brains is that they manipulate so MANY symbols at once.
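
For concreteness, Ray's pot metaphor rendered as a toy sketch (my
construction; the RGB triples are hypothetical stand-ins for grounded
percepts). Stirring the bare tokens yields no magenta, while combining the
values they symbolize does; the open question is whether that second step is
still "just" symbol manipulation one level down:

percepts = {"red": (255, 0, 0), "blue": (0, 0, 255)}

# Pure token manipulation: no trace of magenta in the syntax alone.
print("red" + "blue")                       # -> 'redblue'

# Combining what the tokens *symbolize* (hypothetical RGB percepts):
mix = tuple((a + b) // 2 for a, b in
            zip(percepts["red"], percepts["blue"]))
print(mix)                                  # -> (127, 0, 127), magenta-ish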

General arguments against standard AI techniques are all well and good
(viz. Hofstadter's position), but keep in mind that while mainstream
AI has not produced much wonderful stuff, the old neural-net
research was even less impressive.

My own view regarding true machine intelligence is that there is no
particular reason why it's not theoretically possible, but given
an "
intelligent" machine, one should not expect it to be able to
do anything weird like passing a Turing Test. The hypothetical
intelligent machine won't be anything like a human -- different
architecture, different i/o bandwidths, different physical
manifestation, so it is philosophically deviant to expect it
to emulate a human.

Anyhow, as a putative AI researcher (so I'm only 1st year, so sue me),
it seems to me that decades of work have to be done on both hardware
and cognitive modeling before we can even set our sights on
HAL-9000.... Give me another ring when those terabyte RAM, femtosecond
CAML cycle optical computers come out -- until then the entire
discussion is numinous....
--
Laurence R. Brothers
brothers@topaz.rutgers.edu
{harvard,seismo,ut-sally,sri-iu,ihnp4!packard}!topaz!brothers
"
The future's so bright, I gotta wear shades!"

------------------------------

Date: Mon, 16 Feb 87 18:50:21 n
From: DAVIS%EMBL.BITNET@wiscvm.wisc.edu
Subject: reply to harnad


I'm afraid that Stevan Harnad still appears not to grasp the irrelevance
of asking 'WHY' questions about consciousness.

> ...I am not asking a teleological question or even an evolutionary one.
> [In prior iterations I explained why evolutionary accounts of the origins
> and "
survival value" of consciousness are doomed: because they're turing-
> indistinguishable from the IDENTICAL selective-advantage scenario minus
> conciousness.]

Oh dear. In my assertion that there is a *biological* dimension to the
current existence (or illusion) of consciousness, I had hoped that Harnad
would understand the idea of evolutionary events being 'frozen in'.
Sure - there is no advantage in a conscious system doing what can be done
unconsciously. BUT, and it's a big but, if the system that gets to do
trick X first *just happens* to be conscious, then all future systems
evolving from that one will also be conscious. This is true of all aspects
of biological selection, and would be true in any context of natural
selection operating on an essentially random feature generator.
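
In code, the hitchhiking argument looks like this (a toy simulation of my own
devising, not a biological model): fitness depends only on trick X, yet a
causally idle flag C goes to fixation because the first X-bearer happened to
carry it:

import random

pop = [{"x": False, "c": False} for _ in range(100)]
pop[0] = {"x": True, "c": True}        # first X-bearer *happens* to carry C

for generation in range(50):
    # Selection sees only X; C is never selected for or against.
    parents = [ind for ind in pop if ind["x"]] or pop
    pop = [dict(random.choice(parents)) for _ in range(100)]

print(sum(ind["c"] for ind in pop), "of 100 now carry the neutral trait C")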

There need be NO 'why' as to the fact that consciousness is with us now
- there is every reason to suppose that we are looking at a historical
accident that is frozen-in by the irreversibility of a system evolving in
a biological context. In fact, it may not even be an accident - when you
consider the sort of complexity involved in building a 'turing-
indistinguishable' automaton, versus the slow, steady progress possible
with an evolving, conscious system, it may very well be that the ONLY
reason for the existence of conscious systems is that they are *easier*
to build within an evolutionary, biochemical context.

Hence, we have no real reason to suppose that there is a 'why' to be
answered, unless you have an interest in asking 'why did my random number
generator give me 4.5672?'. Consciousness appears to be with us today -
the "justification for the conscious interpretation of the 'how'" (Harnad)
is simply this:

- as individuals we experience self-consciousness,
- other systems' behaviour is so similar to our own that we may
reasonably make the assumption of consciousness there too,
- the *a priori* existence of consciousness is supported by
(i) our own belief in our own experience

and hence

(ii) the evolutionary parallels with other biological features
such as the pentadactyl limb, globin and histone
structures and the use of DNA.

Voila - Occam's razor meets the blind watchmaker, and gives us conscious
machines, not because there is any reason 'why' this should be so, but just
because it worked out like that.

Like it - or lump it!

As for the question of knowledge & consciousness: I did not intend the
word 'know' to be used in its epistemological sense, merely to point out
that our VAXcluster has access to information, but (appears not to) KNOW
anything. The mystery of the 'C-1' is that we can be aware, that it is
'like something to be us', period.

We don't know how yet, and we will probably never know why, beyond the likelihood
of our ancestral soup bowl being pretty good at coming up with bright ideas,
like us! (no immodesty intended here.....)

regards,

Paul Davis

netmail: davis@embl.bitnet
wetmail: embl, postfach 10.2209, 6900 heidelberg, west germany
petmail: homing pigeons to ......

------------------------------

Date: Sun, 15 Feb 87 17:50:00 EST
From: Raul.Valdes-Perez@B.GP.CS.CMU.EDU
Subject: Formalization in AI (Not Philosophy)

I believe it is wrong to say that the importance of formalization to AI
is overstated; formalization is our secret weapon. Let's say that AI is
the science of codifying human knowledge in an effective manner, where by
'effective' is meant able to effect a result, rather than, say, to be
listed on paper and hung in a museum.

Our secret weapon is formalization by embedding knowledge in a computer
program, in accordance with our theories of how best to organize the
embedding. We then run the program to test our theories. This embedding
is a formalization; we are able to discover qualitative properties of the
knowledge and organization by syntactic manipulation, i.e., execution of
the computer program. These qualitative properties would not otherwise
be discovered by us because of our limited capacity to sustain complex
thought.
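
As a tiny illustration of embedding-as-formalization (a sketch with toy
facts of my own invention, not a claim about any particular system):
knowledge written as if-then rules can be run to a fixed point, and
consequences of the knowledge emerge by purely syntactic manipulation.

facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "flies(tweety)"),
    ({"flies(tweety)"}, "can_leave_cage(tweety)"),
]

changed = True
while changed:                    # forward-chain until a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # properties of the knowledge, discovered by running it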

Programming may not seem formal, because few theorems follow from its
exercise. This difficulty is due to our programming languages, which lack
useful mathematical properties. Our resulting insights are qualitative;
nevertheless they are achieved by formalization.

My conclusion is that everyone in AI believes in formalization, whether he
knows it or not.

-- Raul E. Valdes-Perez --
-- CMU CS --

------------------------------

Date: Mon, 16 Feb 87 07:41:29 PST
From: ames!styx!lll-lcc!ihnp4!hounx!kort@cad.Berkeley.EDU
Subject: Re: Other Minds


Ray Allis has brought up one of my favorite subjects: the creation
of an artificial mind.

I agree with Ray that symbol manipulation is insufficient. In last
year's discussion of the Chinese Room, we identified one of the
shortcomings of the Room: it was unable to learn from experience
and tell the stories of its own adventures.

The cognitive maps of an artificial mind are the maps and models of
the external world. It is one thing to download a map created by
an external mapmaker. It is quite another thing to explore one's
surroundings with one's senses and construct an internal representation
which is analogically similar to the external world.

An Artificial Sentient Being would be equipped with sensors (vision,
audition, olfaction, tactition), and would be given the goal of
exploring its environment, constructing an internal map or model
of that environment, and then using that map to navigate safely.
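
A minimal sketch of what such a being's first step might look like (a toy
construction; the grid world and the single "touch" sense are assumptions,
not a design): wander, sense, accumulate an internal map, then recite it.

import random

WORLD = ["#####",
         "#...#",
         "#.#.#",
         "#...#",
         "#####"]                   # hypothetical external reality

internal_map = {}                   # the being's own representation
x, y = 1, 1
for _ in range(200):                # explore with the senses
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    nx, ny = x + dx, y + dy
    blocked = WORLD[ny][nx] == "#"  # the tactile sense
    internal_map[(nx, ny)] = "#" if blocked else "."
    if not blocked:
        x, y = nx, ny

# "Telling its life story": recite the map in symbolic language.
for (mx, my), what in sorted(internal_map.items()):
    print(f"at ({mx},{my}) I found {'a wall' if what == '#' else 'open space'}")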

Finally, like Marco Polo, the Artificial Sentient Being would describe
to others, in symbolic language, the contents of its internal map:
it would tell its life story.

I personally would like to see us build an Artificial Sentient Being
who was able to do Science. That is, it would observe reality and
construct accurate theories (mental models) of the dynamics that
govern external reality.

Suppose we had two such machines, and we set them to explore each
other. Would each build an accurate internal representation of the
other? (That is, could a Turing Machine construct a mathematical
model of (another) Turing Machine?) Would the Sentient Being
recognize the similarity between itself and the Other? And in seeing
its soul-mate, would it come to know itself for the first time?

Barry Kort
...ihnp4!houxm!hounx!kort

A door opens. You are entering another dementia.
The dementia of the mind.

------------------------------

End of AIList Digest
********************
