AIList Digest Monday, 25 Jun 1984 Volume 2 : Issue 79
Today's Topics:
Combinatory Logic - Request,
AI Tools - NIAL,
AI and Society - Relevance of "souls" to AI,
Problem Solving - Commonsense Reasoning,
AI Programming - Spelling Correction,
Cognition - Intuition & Mind vs. Brain
----------------------------------------------------------------------
Date: 28 Jun 84 6:56:08-EDT (Thu)
From: hplabs!hao!seismo!cmcl2!floyd!vax135!ukc!srlm @ Ucb-Vax.arpa
Subject: combinatory logic
Article-I.D.: ukc.4280
[-: kipple :-] [I couldn't bear to delete this one. -- KIL]
In the hope that many of you are also interested in combinatory logic...
please have a look at this and mail me any suggestions, references, etc.
------------------
[by a. pettorossi in notre dame j. form. logic 22 (4) 81]
define:
marking
is a function that assigns to each combinator in a term (tree)
the number of left choices (of path) one must make to go
from the root to that combinator.
ex.:
marking SII = <S,2><I,1><I,0>
the set of right applied subterms of a combinator X is defined as:
1) if X is a basic combinator or a variable, ras(X) = {X}
2) if X is (YZ), then ras(YZ) = ras(Y) union {Z}
a combinator X with reduction axiom X x1 x2 x3 ... xk -> Y
has non-ascending property iff
for all i, 1<=i<=k, if <xi,p> occurs in marking (X x1...xk)
and <xi,q> occurs in marking Y, then p >= q.
a combinator (X x1 x2 ... xk -> Y) has compositive effect iff
a right applied subterm of Y is not a variable.
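[The two definitions can be sketched in code. A minimal version (my own
representation, not from the paper): an application (YZ) is a nested pair,
and a basic combinator or variable is a string. It reproduces the example
marking SII = <S,2><I,1><I,0>. -- KIL]

```python
def marking(term, lefts=0):
    """Pairs <combinator, left-choice count>, left to right in the term."""
    if isinstance(term, str):
        return [(term, lefts)]
    y, z = term
    # descending into Y (the operator) is a left choice; into Z is not
    return marking(y, lefts + 1) + marking(z, lefts)

def ras(term):
    """Right applied subterms: ras(YZ) = ras(Y) union {Z}; ras(atom) = {atom}."""
    if isinstance(term, str):
        return {term}
    y, z = term
    return ras(y) | {z}

SII = (("S", "I"), "I")
print(marking(SII))   # [('S', 2), ('I', 1), ('I', 0)]
print(ras(SII))       # {'S', 'I'} (in some order)
```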
------------------
Theorem:
given a subbase B={X1,...Xk} such that all Xi in B have non-ascending
property and no compositive effect, every reduction strategy applied
to any Y in B+ leads to normal form.
------------------
Open Problem:
does the theorem hold if non-ascending property is the only condition?
------------------
My personal questions:
if one specifies leftmost-outermost reduction only, would the Open
Problem be any easier?
how much of combinatory logic can we do with B?
and with non-ascending property only?
silvio lemos meira
UUCP: ...!{vax135,mcvax}!ukc!srlm
Post:
computing laboratory
university of kent at canterbury
canterbury ct2 7nf uk
Phone:
+44 227 66822 extension 568
------------------------------
Date: 20 Jun 84 10:35:51-PDT (Wed)
From: decvax!linus!utzoo!utcsrgv!qucis!carl @ Ucb-Vax.arpa
Subject: what is NIAL?
Article-I.D.: qucis.70
Nial is the "Nested Interactive Array Language."
It is based on the nested, rectangular arrays of T. More, and
has aspects of Lisp, APL, FP, and Pascal.
Nial runs on lots of Unix(&etc) systems, VAX/VMS, PC-DOS, and
VM/CMS (almost).
Nial is being used primarily for prototyping and logic programming.
Distribution is through Nial Systems Limited, PO Box 2128, Kingston,
Ontario, Canada, K7L 5J8. (613) 549-1432.
Here are some trivial samples (names in uppercase are second order
functions, called transformers):
5 in 0 1 2 5 = truth
1 3 5 EACHLEFT in 0 1 2 5 = truth falsehood truth
average is divide [sum, tally]
average 1 2 3 4 5 = 3.
[sum, tally] 1 2 3 4 5 = 15 5
divide 15 5 = 3.
MONO is equal EACH
MONO type 1 2.0 3.1j4.3 `a "phrase ?fault truth = falsehood
MONO type 1 3 5 2 = truth
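[The flavor of these samples can be mimicked in Python. This is an assumed
analogy, not Nial's implementation; `atlas`, `tally`, and `EACH` below are
my own names for the corresponding constructs, and the last line plays the
role of `1 3 5 EACHLEFT in 0 1 2 5`. -- KIL]

```python
def atlas(*fns):
    """An atlas [f, g]: applied to x, it yields the list [f(x), g(x)]."""
    return lambda x: [f(x) for f in fns]

def tally(x):
    """Nial's tally: the number of items in an array."""
    return len(x)

def average(x):
    """average is divide [sum, tally]"""
    s, t = atlas(sum, tally)(x)
    return s / t

def EACH(f):
    """The EACH transformer: map f over the items of an array."""
    return lambda xs: [f(x) for x in xs]

print(atlas(sum, tally)([1, 2, 3, 4, 5]))             # [15, 5]
print(average([1, 2, 3, 4, 5]))                       # 3.0
print(EACH(lambda n: n in [0, 1, 2, 5])([1, 3, 5]))   # [True, False, True]
```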
------------------------------
Date: 10 Jun 84 11:39:00-PDT (Sun)
From: hplabs!hp-pcd!hpfcla!hpfclq!robert @ Ucb-Vax.arpa
Subject: Re: Re: Relevance of "souls" to AI
Article-I.D.: hpfclq.68500003
Is a soul going to be the real issue here?
> I submit that the concept of "soul" is irrelevant only if AI is doomed
> to utter failure. Use your imagination and consider a computer program
> that exhibits many of the characteristics of a human being in
> its ability to reason, to converse, and to be creative and unexpected in
> its actions. How will you AI-ers defend yourself if a distinguished
> theologian asserts that G-d has granted to your computer program a soul?
To those AIers who don't believe in God it probably won't matter much what a
distinguished theologian asserts. I think many who believe in God will
wonder why God would come down and bless a computer program with a soul.
They will doubt the theologian. And for those that do believe that
the program has a soul, what are they to defend themselves from? Are they
to defend God for doing it? Or they may just agree with the theologian
saying, "Yep, that sure is neat that it has a soul."
I think a bigger problem will be empathy for the program. A program that
is your friend could be just as hard to kill as any other being.
This could be particularly true of people who are only end users of
these friend programs and don't understand how they work. It is hard
to guess the psychological effects of man-machine friendships. It is a very
lonely world, and a computer might be your only friend in the world!
> If he might be right, the program, and its hardware must not be destroyed.
Is cremation bad because that destroys the hardware of
something that had a soul?
> Perhaps it should not be altered either, lest its soul be lost.
> The casual destruction, recreation and development of computer programs
> containing souls will horrify many people.
Altering, such as in psychotherapy for humans and mods to code or inference
tables in programs, is bad? Operating on people or making mods to hardware
is bad? I would imagine not. What we do have is the possibility
of modifying and experimenting with models of human psychologies to a
degree never before available. What are the issues involved in the
torture of beings created out of software? The indiscriminate
experimentation on man-made psyches may bring about a new form of the
antivivisectionist movement. This is all independent of the soul issue
for many people. "If it really appears to be human how can you kill it?"
will be the underlying measure, I think. Again, who knows how the
intervening history will condition man to the thought of man-made intelligence.
> You will face demonstrations,
> destruction of laboratories, and government interference of the worst kind.
Nice drama here.
> Start saving up now, for a defense fund for the first AI-er accused by
> a district attorney of soul-murder.
Now I speak from the point of view of someone who doesn't hold much stock in
the idea of a soul. I do believe in the importance of the human as a
thinking, feeling being, so we may really agree. A lot of what you said
seems to be all based on the issue of a soul. I'm just not convinced that
that many people will see it as an issue of the soul. I can see more easily
the DA above arguing that the man-made intelligence is alive
and therefore can be murdered.
> On second thought, you have nothing to fear; no one in AI is really trying
> to make computers act like humans, right?
You bet AIers are out to make computers act like humans, bit by bit
and byte by byte. They are also studying
even more general concepts. What is intelligence? What is
the nature of thought? This goes beyond just making a machine act like
a human.
-Robert (animal) Heckendorn
hplabs!hpfcla!robert
[A couple of notes here: First, SF writers have certainly tried to
explore the man/machine friendship issue in many forms. I remember
stories about robots, computer environments (e.g., HAL), direct
computer/brain links, relationships with intelligent spaceships, etc.
Second, the churches have seldom been strongly opposed to killing
either in war or as capital punishment. At times they have taken the
position that torture and death are unimportant as long as confession
has cleared the soul for entry to heaven. They have been less tolerant
of the torture of soulless animals. -- KIL]
------------------------------
Date: 21 Jun 84 13:58:15-PDT (Thu)
From: ihnp4!houxm!mhuxl!mhuxm!mhuxi!charm!slag @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning?
Article-I.D.: charm.377
In solving a puzzle like:
If 3 is half of 5, what is a third of ten?
One might try a series of solutions like the ones suggested,
but I would consider them incorrect if they were logically
inconsistent. The meaning of the problem would be undermined
if one redefined three but not two, five, ten, half or third.
One approach I would take would be to explore
alternate bases. For instance, in base nine, three is a third
of ten. This approach does not solve the above problem, though
(in base nine, three is still not half of five), so it must be
marked as wrong and thrown out.
At what point should a problem like that be given up
on as illogical?
------------------------------
Date: 21 Jun 84 12:45:00-PDT (Thu)
From: pur-ee!uiucdcs!uicsl!keller @ Ucb-Vax.arpa
Subject: Re: Commonsense Reasoning? - (nf)
Article-I.D.: uicsl.12300001
1/2 * 5 = 2.5 round up to 3
1/3 * 10 = 3.333... round down to 3
Just another possible interpretation.
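[Both values do land on 3 under this reading; a minimal check, with a
helper name of my own. -- KIL]

```python
import math

def nearest(x):
    """Round to the nearest integer, with halves rounding up."""
    return math.floor(x + 0.5)

print(nearest(5 / 2))    # 3  (2.5 rounds up)
print(nearest(10 / 3))   # 3  (3.333... rounds down)
```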
-Shaun Keller
------------------------------
Date: Sun 24 Jun 84 22:34:57-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Commonsense Reasoning?
1/2 * 5 = 2.5 round up to 3
1/3 * 10 = 3.333... round down to 3
-Shaun Keller
Shaun's solution is the same as Richard Treitel's solution in the
previous issue, derived independently. I like it better than my
own solution except for the fact that it makes the problem less
metaphysical.
Roger Hale's solution of (temporarily) subtracting one from each
number was essentially a solution to "If 3-X were half of 5-X, what
would X plus a third of 10-X be?" It seems as valid as my own
solution to "If 3 were half of 5X, what would a third of 10X be?"
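Both readings can be checked by direct substitution (a sketch; the
variable names are mine), and each comes out to 4:

```python
# Roger Hale's reading: with X = 1, 3-X is indeed half of 5-X
X = 1
assert (3 - X) == (5 - X) / 2       # 2 is half of 4
hale = X + (10 - X) / 3             # 1 + 9/3

# My reading: choose X so that 3 is half of 5X
X = 2 * 3 / 5                       # X = 6/5
laws = 10 * X / 3                   # 12/3

print(hale, laws)                   # 4.0 4.0
```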
I am surprised that such good alternatives to my explanation were found,
especially after I had exposed everyone to my own way of thinking.
For 18 years I've thought I had >>the<< answer.
-- Ken Laws
------------------------------
Date: 21 Jun 84 17:15:09-PDT (Thu)
From: decvax!mcnc!unc!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax.arpa
Subject: Re: Commonsense reasoning
Article-I.D.: eosp1.955
>> Q: If you call a tail a leg, how many legs does a sheep have?
>> A: Four. Calling a tail a leg doesn't make it a leg.
I find this answer less satisfactory than the two given below.
It seems to me that "calling an X a Y" is exactly how we define
what most things are. SO:
A: One. A tail is a leg; those other four things are obviously
something else.
OR:
A: Five. If you call it a leg, it is a leg (albeit of a different
kind), in addition to those other four legs.
- Toby Robison (not Robinson!)
allegra!eosp1!robison
decvax!ittvax!eosp1!robison
princeton!eosp1!robison
------------------------------
Date: Sun 24 Jun 84 22:30:21-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: Spelling Correction vs. Fact Correction
If one changed the content of a Spelling corrector to be a list of
predicates containing `facts' rather than sequences of letters, and then
one used such a program against the output of a parser which reduced
incoming text to similarly structured predicates, and the `fact checker'
then emitted confirmations or `corrections' of the facts in the parsed text
(e.g. South-Of San-Francisco San Jose; Capital-of USSR Moscow; etc.)
would this be a knowledge-based system? What has changed from sequences
of letters being acceptable `truths' to the mechanical use of predicates?
I fail to see how this is very different from having a spelling corrector
look over a string of letters and note that MAN and DOG are correct truths
whereas DOA (= Capital-Of USSR San-Francisco) and MNA (= South-Of
San-Jose San-Francisco) are actually `misspellings' of DOG and MAN.
It might well be that one doesn't want to call a system that uses this
strategy to proofread students' essays about geography an AI program,
but it sure would be hard to tell from its performance whether it
was an AI program or a non-AI program `pretending' to be an AI program.
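[A minimal sketch of the analogy. The fact list, the distance measure,
and the function names below are all invented for illustration: known
predicates play the role of the word list, and a parsed fact is
"corrected" to the nearest known one. -- KIL]

```python
KNOWN_FACTS = {
    ("South-Of", "San-Francisco", "San-Jose"),
    ("Capital-Of", "USSR", "Moscow"),
}

def distance(a, b):
    """Count of differing slots -- the analogue of differing letters."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def check_fact(fact):
    """Confirm a known fact, or 'correct' it to the nearest known one."""
    if fact in KNOWN_FACTS:
        return "confirmed", fact
    best = min(KNOWN_FACTS, key=lambda known: distance(fact, known))
    return "corrected", best

print(check_fact(("Capital-Of", "USSR", "San-Francisco")))
# ('corrected', ('Capital-Of', 'USSR', 'Moscow'))
```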
------------------------------
Date: 21 Jun 84 13:30:56-PDT (Thu)
From: hplabs!hao!seismo!cmcl2!floyd!whuxle!spuxll!ech @ Ucb-Vax.arpa
Subject: Re: Intuition
Article-I.D.: spuxll.510
We have a couple of different issues here: is there a distinction between
'mind' and 'brain', and -- if you advocate the position that there is no
difference -- what possible mechanisms account for intuition?
On the first, I will (like others) recommend "The Mind's I". The issue
is addressed thoroughly enough there to confuse ANYBODY. You may come away
with the same belief, but you will have DOUBTS, regardless of your current
position.
As for "intuition," we are (so far) using an inaccurate picture: those
"leaps of imagination" are not necessarily correct insights! Have you never
had an intuitive feeling that was WRONG in the face of additional data?
Let's look at a few candidates; are any of these either supported or
disproved by current evidence?
1. Intuition is just deduction based on data one is not CONSCIOUSLY aware of.
Body language is a good example of data we all collect but often are not
aware of consciously; we may use terms like "good/bad vibes"...
2. Intuition is just induction based on partial data and application of a
"model" or "pattern" from a different experience.
3. Intuition is a random-number-generator along with some "sanity checks"
against internal consistency and/or available data.
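[Candidate 3 is easy to caricature as a program; a sketch, with checks
and numbers invented for illustration. -- KIL]

```python
import random

def consistent(candidate, checks):
    """Sanity checks: reject anything contradicting the available data."""
    return all(check(candidate) for check in checks)

def intuit(checks, tries=100_000, seed=0):
    """Candidate 3: random proposals filtered by consistency checks."""
    rng = random.Random(seed)
    for _ in range(tries):
        candidate = rng.randint(0, 1000)
        if consistent(candidate, checks):
            return candidate    # the "flash": a guess that survives
    return None

# 'intuit' a number from partial data: divisible by 7, between 50 and 60
guess = intuit([lambda n: n % 7 == 0, lambda n: 50 < n < 60])
print(guess)   # 56
```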
I submit that about the only thing we KNOW about intuition is that it is
not a consciously rational process. Introspection, by definition, will not
yield up any distinctions between any of the above three mechanisms, or
between them and the effects of a soul or divine inspiration. The traditional
technical and ethical constraints against breaking open that skull to measure
it are only beginning to break down (the technical ones, that is!).
I'll add one thing, then get off the box. I USE my intuition: I am willing
to take ideas whether I can account for the source/process or not. However,
I apply the usual rational processes to the intuitive notion before swearing to
its truth: check for self-consistency, consistency with available data,
and where possible set up "experiments" that might falsify the premise.
The Son of Sam had the divine inspiration that he had to kill a few folks...
=Ned=
------------------------------
Date: Sun, 24 Jun 84 13:17:28 PDT
From: Michael Dyer <dyer@UCLA-CS.ARPA>
Subject: Intuition
Those who are trying to argue that "intuition" is something that cannot
be mechanized or understood in terms of computational structures and
operations should try substituting the word "soul" everywhere for
"intuition" and see if they still believe their own arguments.
If they still do, then I ask them to re-read Minsky's comments
on the "soul" a few digest issues back. The task of AI researchers
is to show how such vague notions CAN be understood computationally,
not to go around arguing against this simply because such notions
as "intuition" are so vague as to be computationally useless at
such a b.s. level of discussion. It's like my postulating the
notion of "radio" and then looking at each transistor, crystal, wire or
what-have-you inside the radio, and then saying "THAT part can't be a
radio; that OTHER part there can't be one either. I guess the idea of
'radio' can never be realized by the combination of such parts."
I second the suggestion that amateur philosophers of mind read
Hofstadter, or better yet, start building computer programs which
exhibit aspects of "intuition" and then discuss their own programs.
------------------------------
Date: 22 Jun 84 8:41:28-PDT (Fri)
From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax.arpa
Subject: Re: Mind and Brain
Article-I.D.: ccivax.171
In reference to Mr. Robison's comments:
Is it possible that "intuition" is the word we
use to explain what cannot be explained more
formally or logically?
I'm thinking of the explanation of evolution
based on Natural Selection. An explanation based
on probability is NOT an explanation at all.
It is an admission that there is no logical or
formal explanation possible. Of course, we
still accept evolution as a fact of life, but
we don't have any mechanical (or dynamical in the
sense of physics) model for it.
Perhaps the same is true of our experience of
intuition. Something is going on when we have
a flash of insight, but we don't have any
dynamical model that can be used for prediction.
I think that Mr. Robison is correct when he says
that we just don't know much about how our
mind/brain system works. We need to keep asking
any and all questions that come to mind (pun not
intended) -- that's what science is all about.
Bill Anderson
...!{ {ucbvax | decvax}!allegra!rlgvax }!ccivax!band
------------------------------
Date: 22 Jun 84 10:11:16-PDT (Fri)
From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxn!rlr @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: pyuxn.770
[from shark!hutch]
> | Intuition is nothing more than one's subconscious employing logical
> | thought faster than the conscious brain can understand or realize it.
> | What's all the fuss about? And where's the difference between the
> | "brain" and the "mind"? What can this "mind" do that the physical brain
> | doesn't?
> | Rich Rosen pyuxn!rlr
>
> Thank you, Rich, for so succinctly laying to rest all the questions
> mankind has ever had about self and mind and consciousness.
You're welcome. It only takes a minuscule amount of logic and a careful
shave with my Occam's Electric Razor. The point is, for all this talk of
"soul" and "mind", I've never seen anything that points to a *need* (from a
logical point of view) for anything external to "physicalism" to describe
the goings-on in the human brain.
> Now, how about proving it. Oh, and by the way, what is a "subconscious"
> and how do you differentiate between a "conscious" brain and a "subconscious"
> in any meaningful way?
> And once you have told us exactly what a physical brain can do, then we
> can tell you what a mind could do that it doesn't.
Let's place the burden of proof on the proper set of shoulders. If anyone is
proposing a view of intelligence involving a "mind" (defined as that part of
intellect not part of the physical brain), then they had better describe some
phenomena which physical processes cannot account for.
[from eosp1!robison]
> I'm not comfortable with Rich Rosen's assertion that intuition
> is just the mind's unconscious LOGICAL reasoning that happens
> too fast for the conscious to track. If intuition is simply
> ordinary logical reasoning, we should be just as able to
> simulate it as we can other types of reasoning. In fact, attempts
> to simulate intuition account for some rather noteworthy successes
> and failures, and seem to require a number of discoveries before
> we can make much real progress. E.g.:
My statement was probably a little too concise there. It seems like the
brain may be able to extract patterns through an elaborate pattern matching
process that can be triggered by random (or pseudo-random) "browsing", such
that a small subsection of a matched thought pattern can trigger the recall
(or synthesis) of an entire thought element. (Whatever that means...)
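One way to caricature that guess in code (the stored patterns and names
below are invented; no claim about actual brains): a fragment of a stored
pattern "triggers" recall of the best-matching whole.

```python
PATTERNS = [
    "the quick brown fox",
    "a stitch in time saves nine",
    "out of sight out of mind",
]

def overlap(fragment, pattern):
    """How many of the fragment's words appear in the stored pattern."""
    return len(set(fragment.split()) & set(pattern.split()))

def recall(fragment):
    """A partial match 'triggers' the best-matching whole pattern."""
    return max(PATTERNS, key=lambda p: overlap(fragment, p))

print(recall("brown fox"))   # 'the quick brown fox'
```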
> Artists and composers use intuition as part of the process of
> creating art. It is likely that one of the benefits they gain
> from intuition is that a good work of art has many more internal
> relationships among its parts than the creator could have planned.
> It is hard to see how this result can be derived from "logical"
> reasoning of any ordinary deductive or inductive kind. It is
> easier to see how artists obtain this result by making various
> kinds of intuitive decisions to limit their scope of free choice
> in the creative process.
Logical may not be the right word, since the process does not seem to be
either conscious or intentional. The "click" or "flash" that often is said to
coincide with intuitive realizations seems like an interrupt from a sub-
conscious process that, after random (or pseudo-random) searching, has found
a "match".
"Submitted for your approval..." Rich Rosen pyuxn!rlr
------------------------------
End of AIList Digest
********************