AIList Digest Monday, 23 Feb 1987 Volume 5 : Issue 51
Today's Topics:
News - Impact of Artificial Intelligence,
Philosophy - Design Stance on Consciousness,
Review - Society of Mind
----------------------------------------------------------------------
Date: Sat, 21 Feb 1987 09:59 CST
From: Laurence L. Leff <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Impact of Artificial Intelligence
"Artificial Intelligence" is the computer journal with the highest
impact factor, according to the latest issue of the Journal Citation
Reports put out by the Institute for Scientific Information. It beats
the runner-up, IEEE Transactions on Pattern Analysis, by a good
margin.
The Impact Factor is a measure of how often people cite the journal and is
proportional to the number of citations per article published; i.e., we
are looking at how often someone deems an article in "Artificial Intelligence"
of sufficient significance to cite it in their own article.
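The citations-per-article calculation described above can be sketched as follows. The figures in this example are hypothetical, not taken from the Journal Citation Reports (which, as I understand it, counts citations in one year to a journal's articles from the two preceding years):

```python
# Sketch of an ISI-style impact factor: citations received in a given
# year to a journal's recent articles, divided by the number of
# articles the journal published in that window.

def impact_factor(citations_to_recent_articles, articles_published):
    """Citations per article, as described in the posting."""
    return citations_to_recent_articles / articles_published

# Hypothetical example: 360 citations to 92 articles gives roughly 3.9,
# the order of magnitude reported for "Artificial Intelligence".
print(round(impact_factor(360, 92), 3))  # 3.913
```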
To put this in perspective, here are the numbers for some familiar computer
journals.
Artificial Intelligence 3.914
IEEE T Pattern Anal 2.374
IEEE T Computers 1.654
Comput Surv 1.545
Commun ACM 1.528
SIAM J Comput 1.349
INT J Robot Res 1.314
J Assoc Comput Mach 1.282
Comput Vision Graph 1.170
IEEE T Syst Man Cyb 1.168
Computer 1.161
Pattern Recogn 1.092
IBM J Res Dev 1.087
IEEE T Software Eng 0.963
Acta Inform 0.627
J Comput Syst Sci 0.613
J Robotic System 0.600
Int J. Syst Sci 0.428
Software Pract Experience 0.253
Kybernetika 0.171
AT&T Tech Journal 0.080
So who is citing "Artificial Intelligence", you might ask.
Of a total of 924 citations in 1985 to Artificial Intelligence, here is
a breakdown of some of the frequent and interesting citing journals:
Artificial Intelligence 103
IEEE T Pattern Analysis 62
Comput Vision Graph 50
Int J Man Mach Stud 37
Comput Aided Design 29
P. Soc. Photo-Opt Inst 25
TSI-Tech Science Inf 23
Comput Math Appl 20
IEEE T Syst Man Cybern 19
Lect Notes Comput Sc 18
Comput Surv 15
J Assoc Comput Mach 12
J Symb Comput 7
Environ Plann B 6
It is also interesting to note whom authors publishing in "Artificial
Intelligence" cite. When one compares the list of items cited within
"Artificial Intelligence" to the corresponding lists in other fields,
one is impressed by the importance of the conference literature to
artificial intelligence.
Of a total of 997 citations by "Artificial Intelligence" articles, here
are the numbers for some of the more noteworthy sources of these
citations:
Artificial Intelligence 103
IJCAI and AAAI conferences 76
P Int S Robotics Res 33
Mach Intell 23
Commun ACM 16
Cognitive Science 15
Comput Surv 7
Handbook of Artificial Int. 6
------------------------------
Date: 22 Feb 87 1150 PST
From: John McCarthy <JMC@SAIL.STANFORD.EDU>
Subject: consciousness
This discussion of consciousness considers AI as a branch of
computer science rather than as a branch of biology or philosophy.
Therefore, it concerns why it is necessary to provide AI programs
with something like human consciousness in order that they should
behave intelligently in certain situations important for their
utility. Of course, human consciousness presumably has accidental
features that there would be no reason to imitate and other features
that are perhaps necessary consequences of its having evolved that
aren't necessary in programs designed from scratch. However, since
we don't yet understand AI very well, we shouldn't jump to conclusions
about what features of consciousness are unnecessary in order to
have the intellectual capabilities humans have and that we want our
programs to have.
Consciousness has many aspects; here are some of them.
1. We think about our bodies as physical objects to which
the same physical laws apply as apply to other physical objects.
This permits us to predict the behavior of our bodies in certain
situations, e.g. what might break them, and also permits us to
predict the behavior of other physical objects, e.g. we expect
them to have similar inertia. AI systems should apply physics
to their own bodies to the extent that they have them. Whether
they will need to use the analogy may depend on what knowledge
we choose to build in and what we will expect them to learn from
experience.
2. We can observe in a general way what we have been thinking
about and draw conclusions. For example, I have been thinking
about what to say about consciousness in this forum, and at present
it seems to be going rather well, so I'll continue composing
my comment rather than think about some specific aspect of
consciousness. I am, however, concerned that when I finish this
list I may have left out important aspects of consciousness that
we shall want in our programs. This kind of general observation
of the mental situation is important for making intellectual
plans, i.e. deciding what to think about. Very intelligent computer
programs will also need to examine what they have been thinking
about and reason about this information in order to decide whether
their intellectual goals are achievable. Unfortunately, AI isn't
ready for this yet, because we must solve some conceptual problems
first.
3. We compare ourselves intellectually with other people.
The concepts we use to think about our own minds are mainly learned
from other people. As with information about our bodies, we infer
from what we observe about ourselves to the mental qualities of
other people, and we also learn about ourselves from what we
learn about others. In so far as programs are made similar to
people or other programs, they may also have to learn from interaction.
4. We have goals about our own mental functioning. We would
like to be smarter, nicer and more content. It seems to me that
programs should also have such meta-goals, but I don't see that
we need to make them the same as people's. Consider that many
people have the goal of being more rational, e.g. less driven
by impulses. When we find ourselves with circular preferences,
e.g. preferring A to B, B to C, and C to A, we chide ourselves and
try to change. A computer program might well discover that its
heuristics give rise to circular preferences and try to modify
them in service of its grand goals. However, while people are not
fully rational to begin with, because our heritage provides direct
connections between our disparate drives and the actions that achieve
the goals they generate, there seems to be no reason to imitate all
these features in computer programs.
Thus our programs should be able to compare the desirability
of future scenarios more readily than people do.
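The circular-preference discovery described above amounts to cycle detection over a directed preference relation. A minimal sketch, with hypothetical preference data (the function name and representation are mine, not McCarthy's):

```python
# Treat "A preferred to B" as a directed edge A -> B and look for a
# cycle; a cycle means the preferences are circular.

def has_circular_preference(prefs):
    """prefs: iterable of (better, worse) pairs. True if they form a cycle."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())
    # Depth-first search; a node revisited while still on the recursion
    # stack indicates a back edge, i.e. a cycle.
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph[node]:
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

# The example from the text: A over B, B over C, C over A is circular.
print(has_circular_preference([("A", "B"), ("B", "C"), ("C", "A")]))  # True
print(has_circular_preference([("A", "B"), ("B", "C")]))              # False
```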
5. Besides our direct observations of our own mental
states, we have a lot of general information about them. We
can predict whether problems will be easy or difficult for us
and whether hypothetical events will be pleasing or not.
Programs will require similar capabilities.
Finally, it seems to me that the discussion of consciousness
in this digest has been too much an outgrowth of the ordinary
traditional philosophical discussions of the subject. It hasn't
sufficiently been influenced by Dennett's "design stance". I'm
sure that more aspects of human consciousness than I have been
able to list will require analogs in robotic systems. We should
also be alert to provide forms of self-observation and reasoning
about the program's own mental state that go beyond those evolution
has given us.
------------------------------
Date: Sun, 22 Feb 87 20:21 EST
From: ANK%CUNYVMS1.BITNET@wiscvm.wisc.edu
Subject: N.Y.Times review of SOCIETY OF MIND
Today's (22 Feb. 87) New York Times Book Review section carried
a full-page review of Minsky's "Society of Mind" {339 pp., Simon &
Schuster, $19.95} by James W. Lance, a professor of neurology from
Australia.
Since the beginning of this year, over a score of people have
devoted a cumulative 100+ hours to debating Marvin's comments on
consciousness. With that as a backdrop, I wanted to see what Dr. Lance
had to say. Well, nothing much that readers of AIList Digest do not
already know.
In all fairness to the reviewer, I must say he did a good job of
filling a page with bits and pieces from the book. But what he did not
accomplish is to critique the book as a scholarly work (am I
right? ...Well, many may think not..). The New York Times, I must
complain, has not been very serious in the past two years when it
comes to reviews on such topics, in comparison to other scientific
books that pass across their tables.
What, then, is my gripe? I think "consciousness" is a very
serious matter. Furthermore, the classical Mind-Body question will
re-occur every few decades in the light of a new philosophical
construct. Therefore, to place the onus of *defining* "consciousness"
on Minsky's posting in AI Digest is wrong. I did not see much debate
when PDP was published by M.I.T. Press! Listen, folks: I think there
is more mileage to be got from the two volumes of Parallel Distributed
Processing than from "Society....".
I rather suspect that we in academia expect great architecture
every Monday morning. Similarly, Minsky's book is *not* supposed to be
taken as the final word or *official* pronouncement on the "mind-brain debate".
The purpose, as I understood it from reading the book, was
to generate ideas and to reflect on the homilies and aphorisms that
the book is so full of. It is true that many everyday phenomena
relating to memory fall outside most models of memory. Let me illustrate:
" I forgot my telephone number of two years ago in
Cambridge.... and last week, right in the middle of
Fifth Ave. and 42nd, it came back in a flash.."
Many theories of memory explain one or the other of these
phenomena, but none, in the classical sense, addresses all the
issues. (Yes! Not even the latest theories. That's the complexity of
studying Man and his mind with empirical tools.)
The point I wish to make is simple. Many of us (graduate-level
students) could get many a germ-of-an-idea from his book. Let's keep
it at that. All many of us need is a metaphor or a notion, and off we
go. His book does that rather neatly. It should be required reading,
along with Dreyfus's, if we are to go beyond satisfying our Ph.D.
requirements.
Let me paraphrase the last paragraph of Lance's review:
"This is a disturbing book for a neurologist to read
because the summation of mathematics + psychology
+ philosophy still does not approach the complexities
of neurology. And yet the text pursues an exciting
trail to the elusive goal"
Sure enough, I guess Minsky did not expect to give one either (or so I
presume..). I'm sure it is easy for Harnad to reduce all "books in
Psychology, Philosophy, Biology....theatre, music..." to the MIND-BODY
problem. Not that I personally mind, but it is better that we limit
the domain.
Finally, I wonder whether *intensional-realists* like Harnad (maybe
I'm wrong) really have a plausible model of the mind.
Anil Khullar
{Ph.D. Program in Psychology
C.U.N.Y. Graduate Center.
New York NY 100036 }
ank%cunyvms1.BITNET@wiscvm.edu
BITNET: ank@cunyvms1
PS: I personally think Harnad
has given me enough insights
for my term-paper.......
------------------------------
End of AIList Digest
********************