AIList Digest Monday, 5 Sep 1988 Volume 8 : Issue 78
Philosophy:
Two Points (ref AI Digests passim).
Navigation and symbol manipulation
New books and reviews thereof, from Nature
Can we human being think two different things in parallel?
Newell's Knowledge Level (2)
----------------------------------------------------------------------
Date: 28 Aug 88 19:33:18 GMT
From: bph@buengc.bu.edu (Blair P. Houghton)
Reply-to: bph@buengc.bu.edu (Blair P. Houghton)
Subject: Re: Two Points (ref AI Digests passim).
In a previous article, "Gordon Joly, Statistics, UCL" writes:
>[b] With regard to what Einstein said, Heisenberg's uncertainty principle
> is also pertinent to "AI". The principle leads to the notion that the
> observer influences that which is observed. So how does this affect the
> observer who performs a self analysis?
C'mon; Heisenberg said nothing of the kind. He was talking about tiny
little particles with minuscule kinetic energies. (...or maybe not |^D )
"Know thyself" is more like Shakespeare than Heisenberg, and likely
as old as Egypt.
Physical self-analysis at the scale Heisenberg addressed is moot. Electrons
"know" where they are and where they are going. They don't have, nor
do they need, self-analysis.
I do wish people would keep *recursion* and *perturbation* straight,
and distinct from the Uncertainty Principle. It's a very
poor metaphor (kind of like what Freud did to Oedipus' reputation...)
--Blair
------------------------------
Date: 30 Aug 88 02:02:27 GMT
From: josh@klaatu.rutgers.edu (J Storrs Hall)
Subject: Re: navigation and symbol manipulation
> How much more pleasant to think deep philosophical thoughts.
>Perhaps, if only the right formalization could be found, the problems
>of common-sense reasoning would become tractable. One can hope.
>The search is perhaps comparable to the search for the Philosopher's Stone.
>One succeeds, or one fails, but one can always hope for success just ahead.
>Bottom-up AI is by comparison so unrewarding. "The people want epistemology",
>as Drew McDermott once wrote. It's depressing to think that it might take
>a century to work up to a human-level AI from the bottom. Ants by 2000,
>mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
>and it gives an idea of what might be a realistic rate of progress.
>
> I think it's going to be a long haul. But then, so was physics.
>So was chemistry. For that matter, so was electrical engineering. We
>can but push onward. Maybe someone will find the Philosopher's Stone.
>If not, we will get there the hard way. Eventually.
There is much to speak for this point of view. However, halfway
through the historical life of the steam engine, thermodynamics was
put on a sound basis. In Physics, we had Newton, in Chemistry,
Mendeleev. In EE there were Maxwell's equations. There is a two-way
feedback here: sufficient practical experience allows one to
formulate general principles, which then inform and amplify practical
efforts.
I think the robot and the expert system are the Newcomen engines of
AI. Our "science" may be all epicycles and alchemy but what we are
after is not a Philosopher's Stone but a periodic table and a
calculus.
There was a feeling among some of the people I polled at AAAI this
year that there is a bit of a malaise in "theoretical AI". My guess
is that we have our phlogiston and caloric theories that can be turned
into the real thing with some more work and insight.
--JoSH
------------------------------
Date: Wed, 31 Aug 88 16:00:27 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: New books and reviews thereof, from Nature
Two books relevant to the AI field are reviewed in Nature this week
(25 August). Philip Kitcher reviews "Patterns, Thinking, and Cognition:
A Theory of Judgement", by Howard Margolis. Margolis proposes the idea
that thinking and judgement are species of pattern recognition. Whether or
not one agrees with this, the reviewer claims that the idea is presented
in a sufficiently thorough manner to justify a careful study of the work.
Drew McDermott (whose name appears in this newsgroup now and then)
reviews "Logical Foundations of Artificial Intelligence", by Genesereth
and Nilsson. The book is a presentation of the Stanford version of
the "logicism" approach to AI. McDermott is not impressed.
John Nagle
------------------------------
Date: 1 Sep 88 02:31:08 GMT
From: temvax!pacsbb!tlohrbe@bpa.bell-atl.com (trevor lohrbeer)
Subject: Re: Can we human being think two different things in
parallel?
In another article, Ken Johnson says:
>> Can we human being think two different things in parallel?
>
>I think most people have had the experience of suddenly gaining insight
>into the solution of a problem they last deliberately chewed over a few
>hours or days previously. I'd say this was evidence for the brain's
>ability to work at two or more (?) high-order tasks at the same time.
>But I look forward to reading what Real Psychologists say.
In response to this, Jeff Hartung writes:
>The above may demonstrate that the brain can "process" two jobs
>simultaneously, but is this what we mean by "think"? If so, this still
>doesn't demonstrate adequately that parallel processing is what is
>going on. It may be equally true that serial processing on several
>jobs is happening, only some processing is below the threshold of
>awareness. Or, there may be parallel processing, but with a limited
>number of processes at the level of awareness of the "thinker".
I think the problem does indeed lie in what we mean by "thinking". But
if we define thinking in terms of working out a definite, solvable problem,
such as working out a math problem (a large one consisting of, say,
multiplying two three-digit numbers, not something that can be recalled
from memory), and also append the notion that one must be consciously
thinking it for it to be "thinking", then the problem is solvable.
To solve it, try to do the problem. Try, for example, multiplying 356 x 674
and 965 x 3124 at the same time. A way to be pretty sure that you are
working the problems serially is to see whether you come out with the
answers to both problems at the same time. Try it and you'll find
that even for a mathematical wizard it is impossible to work out the two
problems simultaneously at the conscious level.
At the unconscious level, though, it is possible to think in parallel. Take
an instance of walking and talking at the same time. The brain must send
messages to the legs, mouth, heart, and many other muscles, all at the same
time. It must also take in the senses of touch (for balance), of vision (to
see where you're going), and sometimes smell. It then has to analyze it all
while still keeping all the muscles moving and taking in more data. So at
the unconscious level, the number of things that can be done in parallel
is innumerable.
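The serial-versus-parallel distinction above can be loosely illustrated in code. This is only an analogy (the helper `multiply` and the dictionaries are my own invention, not anything from the post): the "conscious" path finishes one multiplication before starting the next, while the "unconscious" path has both in flight at once.

```python
import threading

def multiply(a, b, results, key):
    # One "task": a multi-step multiplication.
    results[key] = a * b

# Serial: finish one problem before starting the next,
# like conscious arithmetic.
serial = {}
multiply(356, 674, serial, "first")
multiply(965, 3124, serial, "second")

# Parallel: both problems in flight at once, like the
# unconscious coordination of walking and talking.
parallel = {}
threads = [
    threading.Thread(target=multiply, args=(356, 674, parallel, "first")),
    threading.Thread(target=multiply, args=(965, 3124, parallel, "second")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(serial == parallel)  # True: same answers either way
```

Either schedule reaches the same answers; the argument in the post is about which schedule a conscious mind can actually run, not about the results.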
Trevor Lohrbeer
------------------------------
Date: 3 Sep 88 15:06:22 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Newell's Knowledge Level
Much the same idea has been referred to as "deep understanding" by
the rule-based knowledge representation people. The term "deep structure"
is sometimes used by those working on natural language understanding. In
both cases, the limitations of the superficial representations in use today are
being recognized. The remark "Mycin doesn't know about bacteria" dates
from the previous decade, but is still applicable. Many critics of AI,
from Weizenbaum to Dreyfus, have noted this problem, which some refer to
as the "knowledge representation problem". This is a key unsolved problem
in AI. A recent posting here by McCarthy indicates that he considers it
the key unsolved problem, and that effort should be directed toward the
development of a formal language suitable for the representation of
"deep understanding" of the real world.
I have not heard of any system where "deep understanding" or "deep
structure" or a "knowledge level" were implemented in any general way.
In a very few systems, always ones where the underlying domain is formalizable,
there is some notion of deep understanding. Eurisko (Lenat) comes close.
When people use these terms, they are usually talking about the parts of the
problem for which no useful approaches are known.
John Nagle
------------------------------
Date: 4 Sep 88 15:53:41 GMT
From: mohan@boc.rutgers.edu (Sunil Mohan)
Subject: Re: Newell's Knowledge Level
The Knowledge Based Software Development Environment (KBSDE) group at
Rutgers University are strong believers in the separation of the
specification of knowledge from the specification of its use. I
believe that that is the underlying theme of Newell's "Knowledge
Level". Marr has also talked about the specification of a system in
different levels, separating knowledge from algorithm from
implementation. This allows a partitioning of the concerns involved in
developing a system. As a simple example, it allows one to decide
whether inability to solve a particular problem is due to lack of
knowledge or an inherently `incomplete' algorithm that uses that
knowledge. Describing your research along these levels will also help
you and the reader decide where the contribution lies. See for example
the paper "Learning At The Knowledge Level" by Dietterich (I think).
How many levels you choose to have depends entirely on how finely you
wish to partition your concerns. There is no "right" partitioning. The
eventual aim is clarity.
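The separation described above can be sketched in a toy program. Everything here is hypothetical (the rule set, and the name `forward_chain`) and is not drawn from Newell or the KBSDE group; the point is only that the declarative knowledge and the algorithm that uses it are distinct pieces.

```python
# Knowledge level: what is known, stated declaratively.
# (Toy facts and rules, invented for illustration.)
facts = {"bird(tweety)"}
rules = [
    ("bird(tweety)", "flies(tweety)"),  # (premise, conclusion)
]

# Symbol level: one particular algorithm that uses the knowledge.
# A different algorithm (backward chaining, say) could use the
# same facts and rules unchanged -- that is the separation.
def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("flies(tweety)" in forward_chain(facts, rules))  # True
```

A failure to derive some conclusion can then be diagnosed as either a missing fact or rule (knowledge level) or a weakness of the chaining procedure (symbol level), which is the partitioning of concerns the post describes.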
As for logic belonging at the Knowledge Level: insofar as logic is used
as a declarative specification of knowledge and its implications, that
is precisely the purpose of the Knowledge Level. I would
tend to think that logic may also be used to specify the algorithm at
the symbol level, thus allowing the capability of reasoning about the
algorithm.
I don't know what you mean by "extra-logical". Could you perhaps be
taking the terms too literally? Remember that the Knowledge Level in
itself is not interesting. It is interesting because of what it
achieves (viz. clarity, focussing attention). Logic is just a
specification and reasoning device. Any form of logic should do, so
long as you are aware of its capabilities and limitations.
Sunil
------------------------------
End of AIList Digest
********************