AIList Digest Thursday, 15 Sep 1988 Volume 8 : Issue 83
Philosophy:
The Uncertainty Principle.
Newell's response to KL questions
Pinker & Prince: The final remark ...
Navigation and symbol manipulation
I got rhythm
Robotics and Free Will
----------------------------------------------------------------------
Date: Mon, 05 Sep 88 14:38:35 +0100
From: "Gordon Joly, Statistics, UCL"
<gordon%stats.ucl.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: The Uncertainty Principle.
In Vol 8 # 78 Blair Houghton cries out:-
> I do wish people would keep *recursion* and *perturbation* straight
> and different from the Uncertainty Principle.
Perhaps... But what is the *perturbation* in question? "Observation"?
Blair also observes
> Electrons "know" where they are and where they are going.
And I know where I'm coming from too, Man!
On page 55 (of the American edition) of "A Brief History of Time",
Professor Stephen Hawking says
``The uncertainty principle had profound implications for the way in
which we view the world... The uncertainty principle signaled an
end to Laplace's dream of a theory of science, a model of the
universe that could be completely deterministic: one certainly
cannot predict future events exactly if one cannot even measure
the present state of the universe precisely!''
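For reference (this is not part of the quotation), the standard quantitative
form of the principle bounds the product of the uncertainties in position and
momentum:

    \Delta x \, \Delta p \ge \hbar / 2

so measuring the present state arbitrarily precisely in one variable forces
arbitrarily large uncertainty in its conjugate, which is what blocks Laplace's
exact prediction.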
And what of "chaos"?
Gordon Joly.
------------------------------
Date: Mon, 05 Sep 88 12:14:10 EDT
From: Allen.Newell@CENTRO.SOAR.CS.CMU.EDU
Subject: Newell's response to KL questions
> From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
> Subject: Newell's Knowledge Level
> From: Andrew Basden, I.T. Institute, University of Salford, Salford.
> Please can anyone help clarify a topic?
> In 1982 Allen Newell published a paper, 'The Knowledge Level' (Artificial
> Intelligence, v.18, p.87-127), in which he proposed that there is a level
> of description above and separate from the Symbol Level. He called this
> the Knowledge Level. I have found it a very important and useful concept
> in both Knowledge Representation and Knowledge Acquisition, largely
> because it separates knowledge from how it is expressed.
>
> But to my view Newell's paper contains a number of ambiguities and
> apparent minor inconsistencies as well as an unnecessary adherence to
> logic and goal-directed activity which I would like to sort out. As
> Newell says, "to claim that the knowledge level exists is to make a
> scientific claim, which can range from dead wrong to slightly askew, in
> the manner of all scientific claims." I want to find a refinement of it
> that is a bit less askew.
>
> Surprisingly, in the 6 years since the idea was introduced there has
> been very little discussion about it in AI circles. In psychology
> circles likewise there has been little detailed discussion, and here the
> concepts are only similar, not identical, and bear different names. SCI
> and SSCI together give only 26 citations of the paper, of which only four
> in any way discuss the concepts, most merely using various concepts in
> Newell's paper to support their own statements. Even in these four there
> is little clarification or development of the idea of the Knowledge
> Level.
[[AN: I agree there has been very little active use or development of the
concept in AI, although it seems to be increasing somewhat. The two most
important technical uses are Tom Dietterich's notion of KL vs SL learning and
Hector Levesque's work on knowledge bases. Zenon Pylyshyn uses the notion
as an appropriate way to discuss foundation issues in an upcoming book
on the foundations of cognitive science (while also using the term
semantic level for it). And David Kirsh (now at MIT AIL) did a thesis in
philosophy at Oxford on the KL some time ago, which has not been published,
as far as I know. We have continued to use the notion in our own research
and it played a strong role in my William James Lectures (at Harvard). But,
importantly, the logicists have not found it very interesting (with the
exception of Levesque and Brachman). I would say the concept is doing
about as well as the notion of weak methods did, which was introduced in
1969 and didn't begin to play a useful role in AI until a decade later.
I might say that the evolution of the KL in our own thinking has been
(as I had hoped) in the direction of seeing the KL as just another systems
level, with no special philosophical character setting it apart from the other levels.
In particular, there seems to me no more reason to talk about an observer
taking an intentional stance when using the knowledge level to describe
a system than there is to talk about an engineer taking the electronic-
circuits stance when he says "consider the circuit used for ...". It is
ok, but the emphasis is on the wrong syllABLE. One other point might be
worth making. The KL is above the SL in the systems hierarchy. However,
in use, one often considers a system whose internal structure is described
at the SL as a collection of components communicating via languages and
codes. But the components may themselves be described at the KL, rather
than at any lower level. Indeed, design is almost always an approximation
to this situation. Such usage doesn't stretch the concept of KL and SL in
any way or put the KL below the SL. It is just that the scheme to be used
to describe a system and its behavior is always pragmatic, depending on
what is known about it and what purposes the description is to serve.
]]
> So I am turning to the AILIST bulletin board. Has anyone out there any
> understanding of the Knowledge Level that can help in this process?
> Indeed, is Allen Newell himself listening to the board?
[[AN: No, but one of my friends (Anurag Acharya) is and forwarded it to me,
so I return it via him.]]
> Some of the questions I have are as follows:
>
> 1. Some (eg. Dennett) mention 3 levels, while Newell mentions 5. Who is
> 'right' - or rather, what is the relation between them?
[[AN: The computer systems hierarchy (sans the KL), which is what I infer
the "5" refers to, is familiar, established, and technical (i.e., welded
into current digital technology). There may also exist other such
systems hierarchies. Dennett (and Pylyshyn, loc cit) talk about 3, simply
because the details of the lower implementations are not of interest to
them, so they simply talk about some sort of physical systems. There is
no doubt that the top two levels correspond: the program or symbol level,
and above that the knowledge, semantic or intentional systems level. That
does not say the formulations or interpretations of the intentional systems
level and the KL are identical, but they are aimed at the same phenomena
and the same systems possibilities. There is an upcoming Behavioral and
Brain Sciences treatment of Dennett's new book on the Intentional
Stance, in which my own (short) commentary raises the question of the
relation of these two notions, but I do not know what Dennett says about
it, if anything.]]
> 2. Newell says that logic is at the Knowledge Level. Why? I would have
> put it, like mathematics, very firmly in the Symbol Level.
[[AN: Here Basden mystifies me. However obscure I may have been in the KL
paper, I did not say that logic was at the KL. On the contrary, as
the paper says in section 4.4, "A logic is just a representation of
knowledge. It is not the knowledge itself, but a structure at the symbol
level."]]
> 3. Why the emphasis on logic? Is it necessary to the concept, or just
> one form of it? What about extra-logical knowledge, and how does his
> 'logic' include non-monotonic logics?
[[AN: Again, the paper seems to me rather clear about this. Logics are
simply languages that are designed to be clear about what knowledge
they represent. They have lots of family resemblances, because certain
notions (negation, conjunction, disjunction, functions and parameters)
are central to saying things about domains. Nonmonotonic logics are so called
because they are members of this family. I don't have any special 'logic'
that I am talking about, just what the culture calls logic. The emphasis
on logic is real, just like the emphasis on analysis (the mathematics of
the continuum) is real for physics. But there are lots of other ways of
representing knowledge, for example, modeling the situations being known.
And there is plenty of evidence that logics are not necessarily efficient
for extracting new useful expressions. This evidence is not just from AI,
but from all of mathematics and science, which primarily use formalisms
that are not logics. As to "extra-logical" knowledge, I understand that
term ok as a way of indicating that some knowledge is difficult to express
in logics, but I do not understand it in any more technical way. Certainly,
the endeavor of people like McCarthy has been to seek ways to broaden
the useful expressiveness of logic -- to bring within logic kinds of
knowledge that heretofore seemed "extra-logical". Certainly, there is lots
of knowledge we use for which we have not yet developed ways of expressing it in
external languages (data structures outside the head); and, having not done
it, we cannot be quite sure that it can be done.
I should say that in other people's (admittedly rare) writings about the
KL there sometimes seems to be a presumption that logic is necessary and
that, in particular, some notion of implicational closure is necessary.
Neither is the case. Often (read: usually) agents have an indefinitely
large body of knowledge if expressed in terms of ground expressions of
the form "in situation S with goal G take action A". Thus, such knowledge
needs to be represented (by us or by the agent itself) by a finite physical
object plus some processes for extracting the applicable ground expressions
when appropriate. With logics this is done by taking the knowledge to be
the implicational closure over a logic expression (usually a big
conjunction). But, it is perfectly possible to have other productive ways
(models with legal transformations), and it is perfectly possible to
restrict logics so that modus ponens does not apply (as Levesque and others
have recently emphasized). I'm not quite sure why all this is difficult to
be clear about. It may indeed be because of the special framing role of
logics, where, to be clear in our analyses of what knowledge is there, we
always return to the fact that other representations can be transduced
to logic in a way that preserves knowledge (though it does not preserve
the effort profile of what it takes to bring the knowledge to bear).]]
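To make that last point concrete, here is a small sketch (the rules and names
are invented for illustration; nothing below is from the paper). The agent's
knowledge, viewed at the KL, is an indefinitely large set of ground expressions
"in situation S with goal G take action A"; what physically exists is a finite
symbol-level structure plus a process that extracts the applicable ground
expression when a situation and goal actually arise.

    # Finite symbol-level structure: a short table of (condition, action-maker) rules.
    RULES = [
        (lambda s, g: g in s,      lambda s, g: "do nothing (goal already holds)"),
        (lambda s, g: "door" in s, lambda s, g: "open the door, then pursue " + g),
        (lambda s, g: True,        lambda s, g: "search the situation for " + g),
    ]

    def applicable_ground_expression(situation, goal):
        """Extraction process: produce the ground expression for this (S, G) on demand."""
        for condition, make_action in RULES:
            if condition(situation, goal):
                return (situation, goal, make_action(situation, goal))

    # The set of ground expressions this can yield is unbounded (one per possible
    # situation/goal pair), but the structure above is finite.
    print(applicable_ground_expression({"door", "corridor"}, "the exit"))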
> 4. The definition of the details of the Knowledge Level is in terms of
> the goals of a system. Is this necessary to the concept, or is it just
> one possible form of it? There is much knowledge that is not goal
> directed.
[[AN: In the KL formulation, the goals of the system are indeed a necessary
concept. The KL is a systems level, which is to say, it is a way of
describing the behavior of a system. To get from knowledge to behavior
requires some linking concept. This is all packaged in the principle
of rationality, which simply says that an agent uses its knowledge to
take actions to attain its goals. You can't get rid of goals in that
formulation. Whether there are other formulations of knowledge that might
dispense with this I don't rightly know. Basden appears to be focussing
simply on the issue of a level that abstracts from representation and
process. With only that said, it would seem so. And certainly, generally
speaking, the development of logic and epistemology has not taken goals as
critical. But if one attempts to formulate a system level and not just
a level of abstraction, then some laws of behavior are required. And
knowledge in action by agents seems to presuppose something in the agents
that impels them to action.
References:
Dennett, D. The Intentional Stance, Cambridge, MA: Bradford Books MIT
Press, 1988 (in press).
Dietterich, T. G. Learning at the knowledge level. Machine Learning,
1986, v1, 287-316.
Levesque, H. J. Foundations of a functional approach to knowledge
representation, Artificial Intelligence, 1984, v23, 155-212.
Levesque, H. J. Making believers out of computers, Artificial Intelligence,
1987, v30, 81-108.
Newell, A. The intentional stance and the knowledge level: Comments on
D. Dennett, The Intentional Stance. Behavioral and Brain Sciences (in
press).
Newell, A., Unified Theories of Cognition, The William James Lectures.
Harvard University, Spring 1987 (to be published). (See especially
Lecture 2 on Foundations of Cognitive Science.)
Pylyshyn, Z., Computing in cognitive science, in Posner, M. (ed) Foundations
of Cognitive Science, MIT Bradford Press (forthcoming).
Rosenbloom, P. S., Laird, J. E., & Newell, A. Knowledge-level learning
in Soar, AAAI87.
Rosenbloom, P. S., Newell, A., & Laird, J. Towards the knowledge level
in Soar: The role of architecture in the use of knowledge, in VanLehn, K.,
(ed), Architectures for Intelligence, Erlbaum (in press).
]]
------------------------------
Date: 7 Sep 88 06:10:53 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: Pinker & Prince: The final remark...
Posted for Pinker & Prince by S. Harnad:
__________________________________________________________________
From: Alan Prince <prince@cogito.mit.edu>
Cc: steve@ATHENA.MIT.EDU
Here is a final remark from us. I've posted it to connectionists and will
leave it to your good offices to handle the rest of the branching factor.
Thanks, Alan Prince.
``The Eye's Plain Version is a Thing Apart''
Whatever the intricacies of the other substantive issues that
Harnad deals with in such detail, for him the central question
must always be: "whether Pinker & Prince's article was to be taken
as a critique of the connectionist approach in principle, or just of
the Rumelhart & McClelland 1986 model in particular" (Harnad 1988c, cf.
1988a,b).
At this we are mildly abashed: we don't understand the continuing
insistence on exclusive "or". It is no mystery that our paper
is a detailed analysis of one empirical model of a corner (of a
corner) of linguistic capacity; nor is it obscure that from time
to time, when warranted, we draw broader conclusions (as in section 8).
Aside from the 'ambiguities' arising from Harnad's humpty-dumpty-ish
appropriation of words like 'learning', we find that the two modes
of reasoning coexist in comfort and symbiosis. Harnad apparently
wants us to pledge allegiance to one side (or the other) of a phony
disjunction. May we politely refuse?
S. Pinker
A. Prince
______________________________________________________________
Posted for Pinker & Prince by:
--
Stevan Harnad ARPA/INTERNET: harnad@mind.princeton.edu harnad@princeton.edu
harnad@confidence.princeton.edu srh@flash.bellcore.com harnad@mind.uucp
CSNET: harnad%mind.princeton.edu@relay.cs.net UUCP: princeton!mind!harnad
BITNET: harnad@pucc.bitnet harnad@pucc.princeton.edu (609)-921-7771
------------------------------
Date: Wed, 7 Sep 88 23:59 EDT
From: Michael Travers <mt@media-lab.media.mit.edu>
Subject: Re: navigation and symbol manipulation
Date: 23 Aug 88 06:05:43 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
It's depressing to think that it might take
a century to work up to a human-level AI from the bottom. Ants by 2000,
mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
and it gives an idea of what might be a realistic rate of progress.
Well, we're a little ahead of schedule. I'm working on agent-based
systems for programming animal behavior, and ants are my main test case.
They're pretty convincing, and have been blessed by real ant
ethologists. But I won't make any predictions as to how long it will
take to extend this methodology to mice or humans.
------------------------------
Date: Fri, 9 Sep 88 10:12 EDT
From: PGOETZ%LOYVAX.BITNET@MITVMA.MIT.EDU
Subject: I got rhythm
Here's a question for anybody: Why do we have rhythm?
Picture yourself tapping your foot to the tune of the latest Top 40 trash hit.
While you do this, your brain is busy processing sensory inputs, controlling
the muscles in your foot, and thinking about whatever you think about when
you listen to Top 40 music.
If you could write a conventional program to do all those things, each task
would take a different amount of time. It would "consciously" perceive
time at varying rates, since a lot of time spent processing one task would
leave it less time for "consciousness" (whatever that is). So if this program
were solving a system of 100 equations and 100 unknowns while tapping its
simulated foot, the foot taps would be at a slower rate than if it were
doing nothing else at all.
I suspect that the same would hold true of parallel programs and expert-
system paradigms. For neural networks, an individual task would take the
same amount of time regardless of data, but some things require more subtasks.
It comes down to this: Different actions require different processing
overhead. So why, no matter what we do, do we perceive time as a constant?
Why do we, in fact, have rhythm? Do we have an internal clock, or a
"main loop" which takes a constant time to run? Or do we have an inadequate
view of consciousness when we see it as a program?
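One way to sharpen that last question is a toy simulation (purely illustrative;
nothing here is claimed about real brains): a tapper paced only by its own main
loop slows down as its workload grows, while one paced by an internal clock
keeps a constant interval as long as the workload fits inside it.

    import time

    START = time.monotonic()

    def tap():
        print("tap at %.2f s" % (time.monotonic() - START))

    def loop_paced(workload, beats=4):
        """Tap once per pass of the main loop: the rhythm tracks the workload."""
        for _ in range(beats):
            time.sleep(workload)   # stand-in for solving 100 equations, etc.
            tap()

    def clock_paced(workload, beats=4, interval=0.5):
        """Tap when the internal clock says a beat is due, regardless of workload."""
        next_beat = time.monotonic()
        for _ in range(beats):
            time.sleep(workload)   # same variable workload
            next_beat += interval
            time.sleep(max(0.0, next_beat - time.monotonic()))
            tap()

    loop_paced(0.2)    # tap interval equals the workload: changes whenever it does
    clock_paced(0.2)   # tap interval stays near 0.5 s while workload < interval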
Phil Goetz
PGOETZ@LOYVAX.bitnet
------------------------------
Date: Sun, 11 Sep 88 17:32:38 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi@MITVMA.MIT.EDU>
Subject: Robotics and Free Will
In a recent AIList issue, John McCarthy presented the problem of how a robot
could use information about its previous actions to improve its
behaviour in the future. Here is an idea.
Years ago, an acquaintance of mine came across a very simple computer game
which was maddeningly hard for its human opponent to beat.
The human chose either 0 or 1. The computer tried to guess, in advance,
which alternative he had chosen. The human then told the computer his
choice, and the computer reported whether it had guessed right or
wrong.
The human got a point if the guess of the machine was incorrect; otherwise
the machine got a point.
After a number of rounds, the computer started to play very well, correctly
guessing the alternative that the human had chosen in some 60-70 per cent of
the rounds.
Neither of us ever got to know how the game worked. I would guess it had a
model of the behaviour of the human opponent. Perhaps the model was a Markov
process with states "human chooses 0" and "human chooses 1"; maybe the
program performed a Fourier analysis of the time series.
This suggests an answer to McCarthy's problem. Give the robot a
model of the behaviour of its environment. Calculate the parameters of
the model from the history data with a best-fit approach. The robot
might also have several candidate models and choose the one which
best fits the history data.
If the environment is active (other robots, humans) one also could
apply game theory.
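For what it is worth, here is a minimal sketch of the kind of predictor guessed
at above (every detail is invented; the actual game's mechanism was never known):
a first-order Markov model of the human's choices whose transition counts are
fitted from the history data, always guessing the more frequent successor of the
previous choice.

    from collections import defaultdict

    class MarkovGuesser:
        def __init__(self):
            # transition_counts[previous][next] = times "next" followed "previous" (+1 prior)
            self.transition_counts = defaultdict(lambda: [1, 1])
            self.previous = None

        def guess(self):
            """Guess the human's next choice before it is revealed."""
            if self.previous is None:
                return 0
            counts = self.transition_counts[self.previous]
            return 0 if counts[0] >= counts[1] else 1

        def observe(self, choice):
            """Fit the model to the history: update counts after the choice is revealed."""
            if self.previous is not None:
                self.transition_counts[self.previous][choice] += 1
            self.previous = choice

    # Example: a human who tends to alternate is soon guessed right most rounds.
    guesser, correct = MarkovGuesser(), 0
    for choice in [0, 1, 0, 1, 1, 0, 1, 0, 1, 0]:
        correct += (guesser.guess() == choice)
        guesser.observe(choice)
    print("machine scored", correct, "of 10")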
------------------------------------------------------------------------------
Antti Ylikoski !YLIKOSKI@FINFUN (BITNET)
Helsinki University of Technology !OPMVAX::YLIKOSKI (DECnet)
Digital Systems Laboratory !mcvax!hutds!ayl (UUCP)
Otakaari 5 A !
SF-02150 Espoo, Finland !
------------------------------
End of AIList Digest
********************