AIList Digest Volume 2 Issue 080


AIList Digest            Tuesday, 26 Jun 1984      Volume 2 : Issue 80 

Today's Topics:
Expert Systems - Request for Abstracts,
Reasoning - Checking for Inconsistencies,
AI Programming & Turing Tests - Spelling Correction,
Business - Softwar,
Cognition - Intuition & Hypnosis & Unconscious Mind,
Games - Optimal Strategies,
Philosophy - Purpose & Relation to AI
----------------------------------------------------------------------

Date: 25 Jun 84 17:05:31 EDT (Mon)
From: Dana S. Nau <dsn@umcp-cs.arpa>
Subject: expert computer systems

I am currently writing a revised and updated version of my tutorial on
expert computer systems (which appeared in IEEE Computer in Feb. 1983). As
part of the tutorial I plan to include a list of current expert computer
systems, including both their domains of expertise and references to any
available current papers describing them. If you know of any successful
expert computer systems which you would like me to mention, please send me a
brief note giving the name of the system, the domain area, what kind of
success the system has had, and journal-style reference listings for any
relevant published papers.

------------------------------

Date: 23 Jun 84 16:24:59-PDT (Sat)
From: hplabs!tektronix!orca!shark!brianp @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: shark.845

When presented with a problem like the 'if 3 is half of 5' one,
how many dive right in and try to solve something, and how many
start by checking for inconsistencies? Solving problems that
are 'inconsistent' sounds like it goes in the same pile as working
with insufficient data. (problems with problems :-)
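
[Editorial aside: the split Brian describes can be sketched as a solver that
refuses to dive in until its premises pass a consistency check. The check
below is hypothetical and purely arithmetic, aimed at claims of the form
"3 is half of 5"; nothing here comes from the original post.]

```python
# Sketch of "check for inconsistencies first": each premise pairs a claimed
# value with the value its expression actually has, and the solver refuses
# to proceed when any pair disagrees. (Hypothetical example.)

def premises_consistent(premises):
    """Each premise is (claimed, actual); reject any pair that disagrees."""
    return all(abs(claimed - actual) < 1e-9 for claimed, actual in premises)

def solve(premises):
    if not premises_consistent(premises):
        return "inconsistent premises -- refuse to dive in"
    return "ok to solve"

# "3 is half of 5": claimed value 3, actual value 5 / 2 = 2.5
print(solve([(3, 5 / 2)]))    # inconsistent premises -- refuse to dive in
print(solve([(2.5, 5 / 2)]))  # ok to solve
```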

Brian Peterson
...ucbvax!tektronix!shark!brianp

------------------------------

Date: Mon, 25 Jun 84 09:09 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: Spelling Correction vs. Fact Correction

"It might well be one doesn't want to call a system that uses this
strategy to proofcheck student's essays about geography an AI program,
but it sure would be hard to tell from its performance whether it
was an AI program or a non-AI program `pretending' to be an AI program."


-- Robert Amsler <AMSLER@SRI-AI>

If one cannot distinguish a non-artificial intelligence program from an
artificial intelligence program by, say, interacting with it freely for
a couple of hours, then would not one be compelled to conclude that the
non-artificial intelligence program was displaying true artificial
artificial intelligence?

Mark

------------------------------

Date: Sun, 24 Jun 84 18:38:06 pdt
From: syming%B.CC@Berkeley
Subject: Re: Softwar

Four years ago, I worked as a programmer for the Business School at Ohio State U.
When we ordered SAS/ETS (Statistical Analysis System/Econometric and Time Series)
from the SAS company, they sent us a tape with a payment notice for a fixed term
(two months or so?) and a warning that the program would vanish after that time.
Of course, we paid on time, and they sent us a 20(?)-digit key word with
instructions for turning our trial copy into a one-year program, since the
service contract ran year by year. I had not realized this was a rare case.
Isn't it common practice for a company to protect its products?
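
[Editorial aside: a minimal sketch of the time-lock-plus-key scheme described
above. The key derivation as a truncated hash of a vendor secret and the new
expiry date is entirely an assumption; the real SAS mechanism is not documented
here, and `SECRET`, `make_key`, and `may_run` are invented names.]

```python
# Minimal sketch of a time-limited trial plus unlock key, loosely modeled on
# the scheme described above. The hash-based key is an assumption, not the
# actual SAS mechanism.
import hashlib
from datetime import date

SECRET = "vendor-secret"  # hypothetical secret shared by vendor and program

def make_key(expiry: date) -> str:
    """Vendor side: derive a 20-digit key from the new expiry date."""
    digest = hashlib.sha256(f"{SECRET}:{expiry.isoformat()}".encode()).hexdigest()
    return str(int(digest, 16))[:20]

def accept_key(key: str, expiry: date) -> bool:
    """Program side: a key is valid only if it matches the claimed expiry."""
    return key == make_key(expiry)

def may_run(today: date, expiry: date, key: str) -> bool:
    return accept_key(key, expiry) and today <= expiry

exp = date(1985, 6, 24)
key = make_key(exp)
print(may_run(date(1984, 12, 1), exp, key))       # True: valid key, in term
print(may_run(date(1984, 12, 1), exp, "0" * 20))  # False: forged key
```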

-- syming hwang

------------------------------

Date: 23 Jun 84 8:13:06-PDT (Sat)
From: hplabs!hao!seismo!ut-sally!utastro!bill @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: utastro.127

Apropos this discussion, there has been research into hypnotically
aided recall that casts serious doubt on its reliability.
Two recent articles in *Science* magazine directly address this issue:
"The Use of Hypnosis to Enhance Recall", Oct 14, 1983, pp. 184-185 and
"Hypnotically Created Memory Among Highly Hypnotized Subjects", Nov 4,
1983, pp. 523-524.

Bill Jefferys 8-%
Astronomy Dept, University of Texas, Austin TX 78712 (USnail)
{allegra,ihnp4}!{ut-sally,noao}!utastro!bill (uucp)
utastro!bill@ut-ngp (ARPANET)

------------------------------

Date: 22 Jun 84 10:56:44-PDT (Fri)
From: ihnp4!houxm!mhuxl!mhuxm!mhuxi!charm!slag @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: charm.380

There seems to be some consensus here that however mind
and brain are related, there is more processing going
on than we are directly aware of. In some way, a filtering
mechanism in our mind/brain extracts certain salient images
from all the associations and connections. It is these
structures (thoughts?) that I would call consciousness or
awareness. Would anybody care to take a stab at a
model for this?
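
[Editorial aside: one deliberately crude stab at such a filtering model,
offered only as a sketch. The idea is that many weakly activated associations
compete and only the few most salient reach "awareness"; all names, numbers,
and the top-k rule are invented for illustration.]

```python
# Crude sketch of the filtering idea: of many simultaneous activations,
# only the k most salient become conscious "images". (Invented example.)

def aware_of(activations, k=3):
    """Return the k most strongly activated items -- the 'salient images'."""
    return sorted(activations, key=activations.get, reverse=True)[:k]

activations = {
    "coffee smell": 0.9, "deadline": 0.8, "chair pressure": 0.2,
    "hum of fan": 0.1, "unfinished reply": 0.7,
}
print(aware_of(activations))  # ['coffee smell', 'deadline', 'unfinished reply']
```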


Logic is a bunch of pretty flowers that smell bad.

slag heap.

------------------------------

Date: 22 Jun 84 13:58:21-PDT (Fri)
From: ihnp4!houxm!mhuxl!ulysses!gamma!pyuxww!pyuxn!rlr @ Ucb-Vax.arpa
Subject: Re: Intuition
Article-I.D.: pyuxn.773

[from Ned Horvath:]
> I will (like others) recommend "The Mind's I". The issue
> is addressed until ANYBODY will get confused. You may come away with the
> same belief, but you will have DOUBTS, regardless of your current position.
> As for "intuition," we are (so far) using an inaccurate picture: those
> "leaps of imagination" are not necessarily correct insights! Have you never
> had an intuitive feeling that was WRONG in the face of additional data?

> 1. Intuition is just deduction based on data one is not CONSCIOUSLY aware of.
> Body language is a good example of data we all collect but often are not
> aware of consciously; we may use terms like "good/bad vibes"...
> 2. Intuition is just induction based on partial data and application of a
> "model" or "pattern" from a different experience.
> 3. Intuition is a random-number-generator along with some "sanity checks"
> against internal consistency and/or available data.

> I submit that about the only thing we KNOW about intuition is that it is
> not a consciously rational process. Introspection, by definition, will not
> yield up any distinctions between any of the above three mechanisms, or
> between them and the effects of a soul or divine inspiration.

Thanks, Ned, for putting together what I was trying to say about intuition
in a clearer manner than I could. The three examples you cite sound like
rationally feasible constructs to describe what we call intuition. As far
as external possibilities (souls and deities), it seems sufficient to say that
until we see a facet which internal biochemical physical processes cannot
account for, there is no reason to presuppose the supernatural/external.

"So, it was all a dream!" --Mr. Pither
"No, dear, this is the dream; you're still in the cell." --his mother
Rich Rosen pyuxn!rlr

------------------------------

Date: 22 Jun 84 8:36:42-PDT (Fri)
From: ihnp4!cbosgd!rbg @ Ucb-Vax.arpa
Subject: Re: Mind and Brain
Article-I.D.: cbosgd.42

The distinction between conscious and subconscious components of the mind
is an important one. The substrate for consciousness is basically cortical,
which implies that it has access to language and reasoning processes, but
only some of the information about emotional states processed
primarily in lower brain centers. To restate it: consciousness can monitor
only a fraction of the activity of the brain, and can effectively control
only a fraction of our behavior. The example of body language not being
conscious is a good one (although trained observers can learn to make
conscious interpretations of some of these signals).

>2. Intuition is just induction based on partial data and application of a
> "model" or "pattern" from a different experience.
>
>3. Intuition is a random-number-generator along with some "sanity checks"
> against internal consistency and/or available data.
>
>I submit that about the only thing we KNOW about intuition is that it is
>not a consciously rational process.
> ech@spuxll.UUCP (Ned Horvath)

There is a variety of evidence that human memory is content addressable.
The results of the association process whereby different memories are
compared or brought together are accessible to consciousness, and indeed
may even make up a significant component of the "stream of consciousness".
The "sanity checks" are the conscious, rational evaluation of the
associations. A lot of intuitions and ideas get junked...

The control of this association process is not rational: how many times
have you known that you knew a fact, but were unable to produce it on the
spot? There may well be an element of randomness to this process (Hinton
at CMU has suggested a model based on statistical mechanics), but there
are also constraints on the patterns to be matched against. You don't
generate lots of inappropriate associations, or you would not be very
successful in competing for survival. And that is the force that shaped
our brain and thought capacity.
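
[Editorial aside: the content-addressable recall plus "sanity check" described
above can be sketched as matching a partial cue against stored feature sets.
The overlap score and the example memories are invented stand-ins, not a claim
about the actual mechanism.]

```python
# Sketch of content-addressable recall: retrieve the stored memory that best
# overlaps a partial cue, then apply a conscious "sanity check". The overlap
# score is an invented stand-in for whatever the brain actually does.

memories = [
    {"animal", "stripes", "africa", "zebra"},
    {"animal", "stripes", "cat", "tiger"},
    {"fruit", "yellow", "banana"},
]

def recall(cue):
    """Return the memory sharing the most features with the cue."""
    return max(memories, key=lambda m: len(m & cue))

def sanity_check(memory, cue):
    """Reject an association that shares nothing with the cue."""
    return bool(memory & cue)

cue = {"animal", "stripes", "africa"}
hit = recall(cue)
print(sorted(hit))             # ['africa', 'animal', 'stripes', 'zebra']
print(sanity_check(hit, cue))  # True
```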

--Rich Goldschmidt cbosgd!rbg a former brain hacker (now reformed?)

------------------------------

Date: 25 Jun 1984 10:39-EST
From: Robert.Frederking@CMU-CS-CAD.ARPA
Subject: Intuition; Hans Berliner

There is a good article in the Winter 83 AI Magazine (4:4) about
non-logical AI (it is a rebuttal to Nils Nilsson's Presidential Address
at AAAI-83). The authors point out that certain problems are intractable if
dealt with symbolically, whereas they are easily solved if one uses
real numbers and ordinary math. I suspect that the human brain uses a
combination of analog and digital/symbolic processing, and that some
cases of intuition might arise from the results of an analog
computation into which introspection is not possible.
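
[Editorial aside: a small illustration of the numeric-vs-symbolic point. The
example is mine, not the article's: x = cos(x) has no closed-form symbolic
solution, yet plain iteration on real numbers pins it down immediately.]

```python
# Illustration of the point above: x = cos(x) resists symbolic solution,
# but ordinary fixed-point iteration on real numbers converges quickly.
import math

x = 0.5
for _ in range(100):
    x = math.cos(x)  # fixed-point iteration: x_{n+1} = cos(x_n)

print(round(x, 6))   # 0.739085
```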
As for Ken Laws's comment about switching to a new optimal
strategy at each step (rather than Berliner's smoothing of
transitions), one of the things he is trying to get around is the
"horizon effect", where the existence of a sharp cut-off in the
program's evaluation makes it think that postponing a problem solves it
(since you no longer see the problem if it is pushed back over your
horizon). In other words, perhaps the optimal strategy at each point *is*
a non-linear combination of several discrete strategies.
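
[Editorial aside: the horizon effect shows up even in a toy depth-limited
search. The tree and scores below are contrived, and a single-agent max
search stands in for real game search: a delaying move pushes an unavoidable
loss past the search depth, so the shallow search prefers it.]

```python
# Toy illustration of the horizon effect (contrived tree): a loss is coming
# either way, but "delay" pushes it one ply deeper -- and actually makes it
# slightly worse -- yet a depth-2 search prefers delay.

# Each node is either a leaf score or a dict of moves -> subtree.
tree = {
    "face it": {"end": -100},             # loss visible within depth 2
    "delay":   {"check": {"end": -101}},  # same loss, one ply later and worse
}

def value(node, depth):
    if not isinstance(node, dict):
        return node
    if depth == 0:
        return 0  # static evaluation: the quiet position "looks fine"
    return max(value(child, depth - 1) for child in node.values())

def best_move(tree, depth):
    return max(tree, key=lambda m: value(tree[m], depth - 1))

print(best_move(tree, 2))  # 'delay'   -- the loss sits past the horizon
print(best_move(tree, 3))  # 'face it' -- deeper search sees through it
```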
Also, I think it is a mistake to say that "pattern-matching"
and "reasoning" are different things. After all, one must
pattern-match in order to find appropriate objects to combine with an
inference rule (obvious in OPS5, but also true in PROLOG). The
question at hand is perhaps more whether one is allowed to use logically
unsound inferences (a.k.a. heuristics).
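
[Editorial aside: the pattern-matching step inside rule application can be
sketched as a single forward-chaining move: find all facts that bind a rule's
premise variables consistently, then instantiate its conclusion. This is a
generic sketch, not OPS5 or Prolog syntax.]

```python
# Sketch of the matching step inside inference: variables start with '?',
# and firing a rule means finding facts that bind them consistently.

facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
rule = {"if": [("parent", "?x", "?y"), ("parent", "?y", "?z")],
        "then": ("grandparent", "?x", "?z")}

def match(pattern, fact, env):
    """Extend env so pattern equals fact, or return None on conflict."""
    if len(pattern) != len(fact):
        return None
    env = dict(env)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if env.get(p, f) != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def fire(rule, facts):
    """Find every binding satisfying all premises; instantiate conclusions."""
    envs = [{}]
    for premise in rule["if"]:
        envs = [e2 for e in envs for fact in facts
                if (e2 := match(premise, fact, e)) is not None]
    return {tuple(env.get(t, t) for t in rule["then"]) for env in envs}

print(fire(rule, facts))  # {('grandparent', 'ann', 'cal')}
```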

------------------------------

Date: Mon 25 Jun 84 08:10:45-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: ``Mind and brain'' mumbo-jumbo

> From: Michael Dyer <dyer@UCLA-CS.ARPA>
> The task of AI researchers
> is to show how such vague notions CAN be understood computationally,
> not to go around arguing against this simply because such notions
> as "intuition" are so vague as to be computationally useless at
> such at a bs level of discussion. It's like my postulating the
> notion of "radio" and then looking at each transistor, crystal, wire or
> what-have-you inside the radio, and then saying "THAT part can't be a
> radio; that OTHER part there can't be one either.

Just so!

> From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax.arpa
> Is it possible that "intuition" is the word we
> use to explain what cannot be explained more
> formally or logically?

Why do these discussions always degenerate into suggestions of
absolute limits to reason, perception or what not? That the task is
*very* difficult we know, but we should not claim (without proof) that
something *cannot* be done just because we cannot see how it could be
done (within our lifetime...). Reminds me of those old ``if God had
intended man to fly...'' arguments... Let's replace those ``what
*cannot* be explained'' by ``what we can't yet explain''!

-- Fernando Pereira
pereira@sri-ai

------------------------------

Date: 25 Jun 84 16:27:57 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Philosophy and other amusements.

Judging from the responses on this net, the audience is evenly split between
those who consider philosophy a waste of time in the context of AI, and those
who love to dig up and discuss the same old chestnuts and conundrums that
have amused amateur philosophers for many years now.

First, any AI program worthy of that appellation is in fact an implementation
of a philosophical theory, whether the implementer is aware of that fact or
not. It is unfortunate that most implementers do *NOT* seem to be
aware of this.

Take something as apparently clear and unphilosophical as a vision program
trying to make sense out of a blocks-world. Well, all that code deciding
whether this or that junction of line segments could correspond to a corner
is ultimately based on the (usually subconscious) presumption that there
is a "
real" world, that it exhibits certain regularities whether perceived
by man or machine, that these regularities correspond to arrangements of
"
matter" and "energy", and that some aspects of these regularities can and
should serve to constrain the behavior of some machine. There are even
more buried assumptions about the time invariance of physical phenomena,
the principle of causation, and the essential equivalence of "intelligent"
behavior realized by different kinds of hardware/mushware (i.e. cells vs.
transistors). ALL of these assumptions represent philosophical positions,
which at other times, and in other places would have been severely
questioned. It is only our common western heritage of rationalism and
materialism that cloaks these facts, and makes it appear that the matter is
settled. The unfortunate end-effect of this is that some of our more able
practitioners (hackers) are unable to critically examine the foundations
on which they build their systems, leading to ever more complex hacks, with
patches applied where the underlying fabric of thought becomes threadbare.

Second, for those who are fond of unscrewing the inscrutable, it should be
pointed out that philosophy has never answered any fundamental questions
(i.e. identity, duality, one vs. many, existence, essence etc. etc.).
That is not its purpose; instead it should be an attempt to critically
examine the foundations of our opinions and beliefs about the world, and
its meaning. Take a real hard look at why you believe that "...Intuition
is nothing more than..." thus-and-such, and if you come up with 'it is
intuitively obvious' or 'everybody knows that', you've uncovered a mental
blind spot. You may in the end confirm your original views, but at least
you will know why you believe what you do, and you will have become aware
of alternative views.

Consider a solipsist AI program: philosophically unassailable, logically
self-consistent, but functionally useless and indistinguishable from
an autistic program. I'm afraid that some of the AI program approaches
are just as dead-end, because they reflect only too well the simplistic
views of their authors.

Pete BIESEL@RUTGERS.ARPA


(quick, more gasoline, I think the flames are dying down...)

------------------------------

End of AIList Digest
********************
