AIList Digest Saturday, 11 Jun 1988 Volume 7 : Issue 27
Today's Topics:
Philosophy:
education .vs. programming
who else isn't a science
Human-human communication
Consensus and Reality
Emotion
the brain
deduction .vs. inference
Queries:
Re: Fuzzy systems theory was (Re: Alternative to Probability)
AI & Software Engineering
Seminar:
Model Based Diagnostic Reasoning -- Phylis Koton (BBN)
----------------------------------------------------------------------
Date: 7 Jun 88 15:06:23 GMT
From: USENET NEWS <linus!news@harvard.harvard.edu>
Subject: education .vs. programming
Path: linus!mbunix!bwk
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Newsgroups: comp.ai.digest
Subject: Re: Sorry, no philosophy allowed here]
Summary: AI systems will require education, not programming.
Keywords: Programming, Learning, Education, Self Study
Message-ID: <33761@linus.UUCP>
Date: 7 Jun 88 15:06:22 GMT
References: <19880606032026.6.NICK@INTERLAKEN.LCS.MIT.EDU>
Sender: news@linus.UUCP
Reply-To: bwk@mbunix (Barry Kort)
Organization: IdeaSync, Inc., Chronos, VT
Lines: 13
Eric Barnes writes:
>Can we build machines with original thought capabilities,
>and what is meant by `program'? I think that it is possible
>to build machines who will think originally. The question
>now becomes: "Is what we do to set these "free thinking" machines up
>considered programing".
I suggest that the process of empowering intelligent systems to
think would be called "education" rather than "programming".
And one of our goals would be the creation of autodidactic
systems (that is, systems who are able to learn on their own).
--Barry Kort
------------------------------
Date: 9 Jun 88 18:34:03 GMT
From: well!sierch@lll-lcc.llnl.gov (Michael Sierchio)
Subject: Re: who else isn't a science
One of the criticisms of AI is that it is too engineering oriented -- it
is a field that had its origins in deep questions about intelligence and
automata. Like many fields, the seminal questions remain unanswered, while
Ph.D.s are based on producing Yet Another X-based Theorem Prover/Xprt System/
whatever.
The problem has enormous practical consequences, since practice follows
theory. For instance, despite all the talk about it, why is it that
cognition is mimicked as the basis for intelligence? What about cellular
intelligence, memory, etc of the immune system? Einstein's "positional
and muscular" thinking?
I think that there is, ideally, an interplay between praxis and theory.
But Computer SCIENCE is just that -- or should be -- it has, lamentably,
become an engineering discipline. Just so you know, I pay the rent through
the practical application of engineering knowledge. But I love to ponder
those unanswered questions. And never mind my typing -- just wait till I
get my backspace key fixed!
--
Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
2733 Fulton St / Berkeley / CA / 94705 (415) 845-1755
sierch@well.UUCP {..ucbvax, etc...}!lll-crg!well!sierch
------------------------------
Date: 9 Jun 88 23:06:37 GMT
From: esosun!kobryn@seismo.css.gov (Cris Kobryn)
Subject: Re: Human-human communication
In article <198@esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
>
>Some obvious examples of things inexpressible in language are:
>
>How to recognize the color red on sight (or any other color)..
>
>How to tell which of two sounds has a higher pitch by listening..
>
>And so on...
>
>--Jerry Jackson
All communication is based on common ground between the communicator
and the audience. . . .
I will now express in language:
"How to recognize the color red on sight (or any other color)..":
Find a person who knows the meaning of the word "red." Ask her to
point out objects which are red, and objects which are not,
distinguishing between them as she goes along. If you are physically
able to distinguish colors, you will soon get the hang of it.
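The ostension procedure just described is, in computational terms, learning a
category from labeled examples rather than from a verbal definition. A minimal
sketch (the RGB triples below are invented stand-ins for the pointed-at
objects, not anything from the original posting):

```python
# Learn "red" purely from examples labeled by someone who already
# knows the word -- no verbal definition of the color is ever given.

def learn_color(examples):
    """Average the RGB values of the positively labeled examples."""
    reds = [rgb for rgb, is_red in examples if is_red]
    n = len(reds)
    return tuple(sum(c[i] for c in reds) / n for i in range(3))

def looks_red(rgb, prototype, threshold=100.0):
    """Classify a new object by distance to the learned prototype."""
    dist = sum((a - b) ** 2 for a, b in zip(rgb, prototype)) ** 0.5
    return dist < threshold

# The person "pointing out objects which are red, and objects which
# are not" supplies the labels:
lessons = [((250, 20, 30), True), ((200, 10, 10), True),
           ((20, 30, 240), False), ((30, 200, 40), False)]
prototype = learn_color(lessons)
print(looks_red((230, 25, 25), prototype))   # a new reddish object
```

After a few lessons the learner "gets the hang of it" in exactly the sense of
the procedure above: it generalizes from the pointed-at instances.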
Right. However, I suspect Mr. Jackson was pointing to a more difficult
problem than the simple one for which you have offered a solution.
I believe he was addressing the well-known problem of expressing
_inexpressible_ (i.e., ineffable) entities such as sensations and
emotions. This is the sort of problem with which writers contend.
While a few writers have made a reasonable dent in the problem,
it remains far from resolution. (Had it been resolved, the word
_ineffable_ would have been made an _archaic usage_ dictionary entry.)
A concrete expression of the problem follows:
How does one verbally explain what the color blue is to someone
who was born blind?
The problem here is to explain a sensory experience (e.g. seeing
"blue") to someone lacking the corresponding sensory facility
(e.g., vision). This problem is significantly more difficult than the
one you addressed. (Although a reasonable explanation has been offered.)
-- Cris Kobryn
------------------------------
Date: 9 Jun 88 16:14 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Consensus and Reality
I suspect we agree, but are using words differently. Let me try to state a few
things I think and see if you agree with them. First, what we believe ( know )
about the world - or, indeed, about anything else - can only be believed by
virtue of it being expressed in some sort of descriptive framework, what is
often called a `language of thought': hence, we must apprehend the world in
some categorical framework: we think of our desks as being DESKS. Second, the
terms which comprise our conceptual framework are often derived from
interactions with other people: many - arguably, all - indeed were learned from
other people, or at any rate during experiences in which other people played a
central part. ( I am being deliberately vague here because almost any position
one can take on nature/nurture is violently controversial: all I want to say is
that either view is acceptable. )
None of this is held as terribly controversial by anyone in AI or cognitive
science, so it may be that by your lights we are all consensual realists. I
suspect that the difference is that you think that when we talk of reality we
mean something more: some `absolute Reality', whatever the hell that is. All I
mean is the physical world in which we live, the one whose existence no-one, it
seems, doubts.
One of the useful talents which people have is the ability to bring several
different categorical frameworks to bear on a single topic, to think of things
in several different ways. My CRT screen can be thought of ( correctly ) as an
ensemble of molecules. But here is where you make a mistake: because the
ensemble view is available, it does not follow that the CRT view is wrong, or
vice versa. You say:
BN> From one very valid perspective there is no
BN> CRT screen in front of you, only an ensemble of
BN> molecules.
No: indeed, there is a collection of molecules in front of me, but it would be
simply wrong to say that that was ALL there was in front of me, and to deny
that this collection also comprises a CRT. That perspective isn't valid.
Perhaps we still agree. Let me in turn agree with something else which you
seem to think we realists differ from: neither of these frameworks IS the
reality. Of course not: no description of something IS that thing. We don't mix
up beliefs about a world with the world itself: what makes you think we do? But
to say that a belief about ( say ) my CRT is true is not to say that the belief
IS the CRT.
I suspect, as I said in my earlier note, that you have a stronger notion of
Truth and Reality than I think is useful, and you attribute deep significance to
the fact that this notion - "absolute Reality" - is somehow forever ineffable.
We can never know Reality ( in your sense ): true, but this could not possibly
be otherwise, since to know IS to have a true belief, and a belief is a
description, and a description is couched in a conceptual framework. And as
Ayer says, it is perverse to attribute tragic significance to what could not
possibly be otherwise.
When your discussion moves on to the evolution of nature, citing Pask, Winograd
and Flores and other weirdos, I'm afraid we just have to agree to silently
disagree.
Pat
------------------------------
Date: 10 Jun 88 03:48:07 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Emotion (was Re: Language-related capabilities)
Emotions are more basic than language; horses, for example, have
what certainly appear to be emotions, but are not particularly verbal
(although see Talking With Horses, by Henry Blake). It may be fruitful
to research the idea of emotions such as fear being useful in the
construction of robots. I am working in this direction, but it will
be some time before I have results. I would like to hear from others
contemplating serious work in this area.
John Nagle
------------------------------
Date: Fri, 10 Jun 88 08:31 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: the brain
In AIList Digest V7 #5, Ken Laws writes:
>Are my actions fully determined by external context, my BDI state,
>and perhaps some random variables? Yes, of course -- what else is there?
Don't forget the machinery which carries out your actions - the brain,
the neurological engine.
Its way of operating seems to differ from person to person - think of
Einstein vs. a ballet prima donna.
Remember the structure of a typical expert system - in addition to the
rule base and the data there is the inference engine.
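The analogy can be made concrete with a toy version of that structure: the rule
base and the facts are inert data, and the inference engine is the separate
machinery that drives them. The rules and facts below are invented
illustrations, and the engine is a bare forward-chainer:

```python
# Rule base: each rule is (set of premises, conclusion).
RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]

def inference_engine(facts, rules):
    """Forward-chain: apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(inference_engine({"has_fur", "gives_milk", "eats_meat"}, RULES))
```

The same engine runs over any rule base, just as the same brain runs very
different "rule bases" in Einstein and in the prima donna.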
Antti Ylikoski
------------------------------
Date: Fri, 10 Jun 88 08:16:19 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: Consensus and Reality, Consensus and Reality
From: hayes.pa@Xerox.COM
Subject: Re: Consensus and Reality
PH> First, what we believe ( know ) about the world - or, indeed, about
PH> anything else - can only be believed by virtue of it being expressed in
PH> some sort of descriptive framework, what is often called a `language of
PH> thought': hence, we must apprehend the world in some categorical
PH> framework: we think of our desks as being DESKS.
I would add that we must distinguish this 'language of thought' from our
various languages of communication. They are surely related: our
cognitive categories surely influence natural language use, and the
influence may even go the other way, though the Whorf-Sapir hypothesis
is certainly controversial. But there is no reason to suppose that they
are identical, and many reasons to suppose that they differ. (Quests
for a Universal Grammar Grail notwithstanding, languages and cultures do
differ in sometimes quite profound ways. Different paradigms do exist
in science, different predilections in philosophy, though the same
natural language be spoken.)
Note also that what we know about a 'language of thought' is inferred
from natural language (problematic), from nonlinguistic human behavior,
and sometimes introspection (arguably a special case of the first two).
If we have some direct evidence on it I would like to know.
I agree with your second statement that learning occurs in a social
matrix. It is not clear that all the "terms which comprise our
conceptual framework" are learned, however. Some may be innate, either
as evolutionary adaptations or as artefacts of the electrochemical means
our bodies seem to use (such as characteristics of neuropeptides and
their receptors in the lower brain, at the entry points of sensory
nerves to the spinal cord, in the kidney, and elsewhere throughout the
body, for instance, apparently mediating emotion--cf recent work of
Candace Pert & others at NIH). I also agree that the nature/nurture
controversy (which probably has the free will controversy at its root)
is unproductive here.
PH> I suspect that the difference is that you think that when we talk of
PH> reality we mean something more: some `absolute Reality', whatever the
PH> hell that is. All I mean is the physical world in which we live, the
PH> one whose existence no-one, it seems, doubts.
No, I only want to establish agreement that we are NOT talking about
some 'absolute Reality' (Ding an sich), whatever the hell that is. That
we are constrained to talking about something much less absolute. That
is the point.
The business about what you are looking at now being an ensemble of
molecules >>instead of<< a CRT screen is an unfortunate red herring. I
did not express myself clearly enough. Of course it is both or either,
depending on your perspective and present purposes. If you are a
computer scientist reading mail, one is appropriate and useful and
therefore "correct". If you are a chemist or physicist contemplating it
as a possible participant in an experiment, the other "take" is
appropriate and useful and therefore "correct". And the Ultimate
Reality of it (whatever the hell that is) is neither, but it lets us get
away with pretending it "really is" one or the other (or that it "really
is" some other "take" from some other perspective with some other
purposes). We are remarkably adept at ignoring what doesn't fit so long
as it doesn't interfere, and that is an admirably adaptive, pro-survival
way to behave. Not a thing wrong with it. But I hope to reach
agreement that that is what we are doing. Maybe we already have:
PH> . . . neither of these frameworks
PH> IS the reality. Of course not: no description of something IS that
PH> thing. We don't mix up beliefs about a world with the world itself: what
PH> makes you think we do? But to say that a belief about ( say ) my CRT is
PH> true is not to say that the belief IS the CRT.
But we do mix up our language of communication with our 'language of
thought' (first two paragraphs above), perhaps unavoidably since we have
only the latter as means for reaching agreement about the former, and
only the former (adapted to conduct in an environment) for cognizing
itself. And although you and I agree that we do not and cannot know
what is "really Real" (certainly if we could we could not communicate
about it or prove it to anyone), my experience is that many folks do
indeed mix up beliefs about a world with the world itself. They want a
WYSIWYG reality, and react with violent allergy to suggestions that what
they see is only a particular "take" on what is going on. They never
get past that to hear the further message that this is OK; that it has
survival value; that it is even fun.
Ad hominem comments ("weirdos") are demeaning to you. I will be glad to
reach an agreement to disagree about what Prigogine, Pask, Winograd &
Flores, Maturana & Varela, McCulloch, Bateson, or anyone else has said,
but I have to know >what< it is that you are disagreeing with--not just
who.
Bruce
------------------------------
Date: 10 Jun 88 12:50:31 GMT
From: news@mind.Princeton.EDU
Subject: deduction .vs. inference
Path: mind!confidence!ghh
From: ghh@confidence.Princeton.EDU (Gilbert Harman)
Newsgroups: comp.ai.digest
Subject: Re: construal of induction
Summary: Deductive implication, not inference
Keywords: deduction, induction, reasoning
Message-ID: <2531@mind.UUCP>
Date: 10 Jun 88 12:50:31 GMT
References: <19880609224213.9.NICK@INTERLAKEN.LCS.MIT.EDU>
Sender: news@mind.UUCP
Reply-To: ghh@confidence.UUCP (Gilbert Harman)
Organization: Cognitive Science, Princeton University
Lines: 26
In article <19880609224213.9.NICK@INTERLAKEN.LCS.MIT.EDU>
Raul.Valdes-Perez@B.GP.CS.CMU.EDU writes:
>The concept of induction has various construals, it seems. The one I am
>comfortable with is that induction refers to any form of ampliative
>reasoning, i.e. reasoning that draws conclusions which could be false
>despite the premises being true. This construal is advanced by Wesley
>Salmon in the little book Foundations of Scientific Inference. Accordingly,
>any inference is, by definition, inductive xor deductive.
This same category mistake comes up time and time again.
Deduction is the theory of implication, not the theory of
inference. The theory of inference is a theory about how to
change one's view. The theory of deduction is not a theory
about that.
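One way to see the distinction (a toy illustration of my own, not Harman's): a
deductive checker can tell you that q follows from {p, p -> q}, but it cannot
tell you what to do on observing not-q -- accept a contradiction, or give up
p? That decision about changing one's view is inference:

```python
# Deduction: a relation between premises and consequences.
def implies(beliefs, rules, goal):
    """Is `goal` a modus-ponens consequence of the beliefs?
    (One pass suffices for this single-rule toy case.)"""
    derived = set(beliefs)
    for p, q in rules:
        if p in derived:
            derived.add(q)
    return goal in derived

# Inference: a policy for revising beliefs, which deduction alone
# does not dictate.
def revise(beliefs, rules, observation):
    """Accommodate an observation, retracting premises that now clash."""
    beliefs = set(beliefs) | {observation}
    for p, q in rules:
        if p in beliefs and ("not_" + q) in beliefs:
            beliefs.discard(p)   # give up p rather than derive q
    return beliefs

print(implies({"p"}, [("p", "q")], "q"))     # True: p, p -> q |- q
print(revise({"p"}, [("p", "q")], "not_q"))  # p is abandoned, not q derived
```

The `implies` relation never changes; the retraction policy in `revise` is one
choice among many, which is exactly why a theory of inference is a separate
theory.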
For further elaboration, see Gilbert Harman, CHANGE IN VIEW,
MIT Press: 1986, chapters 1 and 2. Also Alvin I. Goldman,
EPISTEMOLOGY AND COGNITION, chapter 13.
Gilbert Harman
Princeton University Cognitive Science Laboratory
221 Nassau Street, Princeton, NJ 08542
ghh@princeton.edu
HARMAN@PUCC.BITNET
------------------------------
Date: 8 Jun 88 18:08:36 GMT
From: hpda!hp-sde!hpfcdc!hpfclp!jorge@bloom-beacon.mit.edu (Jorge Gautier)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)
> Sorry, it's a lot more complicated than that. For more details, see my
> D.Phil thesis when it exists.
When and where will it be available?
Jorge
------------------------------
Date: Fri, 10 Jun 88 09:30:40 PDT
From: rbogen@Sun.COM (Richard Bogen)
Subject: AI & Software Engineering
I am interested in leads on any papers or research concerned with applications
of AI to the production of complex software products such as Operating Systems.
I feel that there is a tremendous amount of useful information in the heads of
the software developers concerning dependencies between the various data
structures and procedures which compose the system. Much of this could also be
derived automatically from the compiler and linker when the OS is built. It
would require a rather large database to store all of this but imagine how
useful it would be to the support people and to the people developing products
dependent upon the OS (such as datacomm subsystems). With a front-end query
language they could check the database for the expected impact of any changes
they were about to make to the system, possibly avoiding the time-consuming
debugging process of reading a dump later on. Furthermore it would be possible
to automatically generate a dump formatting program and an online monitor
program from the data structure declarations in the source code. By using
include files these programs could always be kept up-to-date whenever the
OS was changed.
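The impact query described above reduces to a transitive-closure walk over an
inverted dependency graph. A sketch, with invented module names standing in
for the data the compiler and linker would supply:

```python
from collections import defaultdict

# depends_on[x] = modules that x uses; in practice this table would be
# extracted from compiler and linker output when the OS is built.
DEPENDS_ON = {
    "datacomm":  ["os_kernel", "buffers"],
    "os_kernel": ["buffers", "scheduler"],
    "scheduler": [],
    "buffers":   [],
}

def impact_of(changed, depends_on):
    """Everything that transitively depends on `changed`."""
    used_by = defaultdict(set)               # invert the dependency table
    for mod, deps in depends_on.items():
        for d in deps:
            used_by[d].add(mod)
    affected, stack = set(), [changed]
    while stack:
        m = stack.pop()
        for user in used_by[m]:
            if user not in affected:
                affected.add(user)
                stack.append(user)
    return affected

print(impact_of("buffers", DEPENDS_ON))   # datacomm and os_kernel affected
```

A front-end query language would amount to wrapping calls like this, letting a
developer ask "what breaks if I change `buffers`?" before reading any dumps.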
------------------------------
Date: Fri 10 Jun 88 13:52:39-EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar -- Phylis Koton
BBN Science Development Program
AI Seminar Series Lecture
MODEL-BASED DIAGNOSTIC REASONING USING PAST EXPERIENCES
Phylis Koton
MIT Lab for Computer Science
(ELAN@XX.LCS.MIT.EDU)
BBN Labs
10 Moulton Street
2nd floor large conference room
10:30 am, Tuesday June 14
The problem-solving performance of most people improves with experience.
The performance of most expert systems does not. People solve
unfamiliar problems slowly, but recognize and quickly solve problems
that are similar to those they have solved before. People also remember
problems that they have solved, thereby improving their performance on
similar problems in the future. This talk will describe a system,
CASEY, that uses case-based reasoning to recall and remember problems it
has seen before, and uses a causal model of its domain to justify
re-using previous solutions and to solve unfamiliar problems.
CASEY overcomes some of the major weaknesses of case-based reasoning
through its use of a causal model of the domain. First, the model
identifies the important features for matching, and this is done
individually for each case. Second, CASEY can prove that a retrieved
solution is applicable to the new case by analyzing its differences from
the new case in the context of the model. CASEY overcomes the speed
limitation of model-based reasoning by remembering a previous similar
case and making small changes to its solution. It overcomes the
inability of associational reasoning to deal with unanticipated problems
by recognizing when it has not seen a similar problem before, and using
model-based reasoning in those circumstances.
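The control loop the abstract describes might be sketched as follows (the
data, the similarity measure, and the threshold are all invented stand-ins;
CASEY itself is far more sophisticated, in particular in using its causal
model to pick the matching features and to prove a retrieved solution
applicable):

```python
CASE_MEMORY = []   # remembered (features, solution) pairs

def similarity(a, b):
    """Fraction of features two cases share (a crude stand-in for
    CASEY's model-guided matching)."""
    return len(a & b) / len(a | b)

def model_based_solve(features):
    """Stand-in for slow first-principles, model-based diagnosis."""
    return "diagnosis_for_" + "_".join(sorted(features))

def solve(features, threshold=0.6):
    features = frozenset(features)
    best = max(CASE_MEMORY, key=lambda c: similarity(c[0], features),
               default=None)
    if best and similarity(best[0], features) >= threshold:
        return best[1]                        # reuse a previous solution
    solution = model_based_solve(features)    # unfamiliar: use the model
    CASE_MEMORY.append((features, solution))  # remember for next time
    return solution

first  = solve({"fever", "cough"})            # slow, model-based
second = solve({"fever", "cough", "ache"})    # similar enough: reused
```

The speedup claimed in the abstract comes from the first branch: once a case
is in memory, similar problems skip the expensive model-based path entirely.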
The techniques developed for CASEY were implemented in the domain of
medical diagnosis, and resulted in solutions identical to those derived
by a model-based expert system for the same domain, but with an increase
of several orders of magnitude in efficiency. Furthermore, the methods
used by the system are domain-independent and should be applicable in
other domains with models of a similar form.
------------------------------
End of AIList Digest
********************