AIList Digest Saturday, 5 May 1984 Volume 2 : Issue 55
Today's Topics:
AI Support - The End of British AI?,
Expert Systems - English Conference Reference,
AI Jobs - Noncompetition Clauses,
Review - HEURISTICS by Judea Pearl,
Humor - Computers and Incomprehensibility,
Consciousness - Reply to Phaedrus (long)
----------------------------------------------------------------------
Date: Thu 3 May 84 11:30:40-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: The End of British AI?
I think Mr. Pereira is being more than a little paranoid here (and he need not
imagine that AI research is the only type for which industry sometimes shows
little enthusiasm). That pronouncement sounds as if it were politically
motivated, and therefore should not be taken too literally; it will be
forgotten as soon as convenient. Not that I think my government's policy on
computer science research is sound -- quite the reverse -- but I don't think it
has suddenly become a lot worse.
- Richard
------------------------------
Date: 30 Apr 84 8:07:16-PDT (Mon)
From: decvax!decwrl!rhea!bartok!shubin @ Ucb-Vax
Subject: Info on Expert Systems conference in England
Article-I.D.: decwrl.7512
| In Bruce Buchanan's Partial Bibliography on Expert Systems (Nov. 82)
| he cited the Proceedings for the Colloquium on Application of Knowledge
| Based (or Expert) Systems, London, 1982. Does anybody out in netland
| know who sponsored this colloquium or, more importantly, how I can get
| a hold of these proceedings?
| Charlie Berg
| Expert Systems
| Automatix, Inc.
| ...{allegra, linus}!vaxine!chb
We gave a paper at a conference called "Theory and Practice of Knowledge
Based Systems", which was held 14-16 Sep 82 at Brunel University, which is
*near* London. The chair of the conference was Dr. Tom Addis, also of
Brunel University. The conference was sponsored (or approved or whatever)
by ACM, IEEE and SPL International.
I found two addresses. The first is where the conference was, and (I
believe) the second is where the Computer Science department is:
Brunel University
Shoreditch Campus
Coopers Hill, Englefield Green
Egham, Surrey
ENGLAND
or Brunel University
Department of Computer Science
Uxbridge, Middlesex
ENGLAND
hal shubin
UUCP: ...!decwrl!rhea!bartok!shubin
ARPAnet: hshubin@DEC-MARLBORO
------------------------------
Date: Fri, 4 May 84 10:51 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: Non-competition clauses
You have constructed a very good argument for nondisclosure agreements.
The issue, however, was non-competition clauses, for which your only
justification seems to be that "[h]istory shows that many ex-employees
are unscrupulous. . .". I find this less than compelling.
The successful legal actions you cite demonstrate that recourse is
available to the company damaged by such actions by ex-employees. The
risk that *full* compensation for such damage may not be forthcoming is
a risk of doing business, and must be managed as such.
By the way, nondisclosure of personal data by the company is much more
closely analogous to nondisclosure of proprietary information by the
employee than it is to noncompetition by the employee. (Do you think I
could talk Xerox into agreeing not to employ anyone in my present
capacity for two years if I should leave?)
Mark
------------------------------
Date: Fri, 4 May 84 15:32:32 PDT
From: Anna Gibbons <anna@UCLA-CS.ARPA>
Subject: HEURISTICS/Dr. Judea Pearl
FROM: Judea Pearl@UCLA-SECURITY.
Those who have inquired about my new book "HEURISTICS" may be
interested to know that it is finally out and can be obtained from
Addison-Wesley Publishing Company, Reading, Mass. 01867, Tel.
(617) 944-8660. The title is "Heuristics: Intelligent Search
Strategies for Computer Problem Solving", the ISBN is
0-201-05594-5, and the price is $38.95. For those unfamiliar with the
book's content, the following are excerpts from the cover description.
This book presents, characterizes, and analyzes problem solving
strategies that are guided by heuristic information. It provides a
bridge between heuristic methods developed in artificial intelligence,
optimization techniques used in operations research, and
complexity-analysis tools developed by computer theorists and
mathematicians.
The book is intended to serve both as a textbook for classes in AI
Control Strategies and as a reference for the professional/researcher
who seeks an in-depth understanding of the power of heuristics and
their impact on various performance characteristics.
In addition to a tutorial introduction of standard heuristic search
methods and their properties, the book presents a large collection of
new results which have not appeared in book form before. These include:
* Algorithmic taxonomy of basic search strategies, such as
backtracking, best-first, and hill-climbing, their variations and
hybrid combinations.
* Searching with distributions and with nonadditive evaluation
functions.
* The origin of heuristic information and the prospects for automatic
discovery of heuristics.
* Applications of branching processes to the analysis of path-seeking
algorithms.
* The effect of errors on the complexity of heuristic search.
* The duality between games and mazes.
* Recreational aspects of recursive minimaxing.
* Average performance analysis of game-playing strategies.
* The benefits and pitfalls of look-ahead.
Each chapter contains annotated references to the literature and a
set of nontrivial exercises chosen to enhance skill, insight, and
curiosity.
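For readers who want a concrete picture of the kind of heuristic-guided
strategy named above, the following is a minimal best-first search sketch
in Python. It is not from the book; the toy graph, goal node, and heuristic
are invented purely for illustration.

import heapq

def best_first_search(start, goal, neighbors, h):
    # Expand nodes in order of the heuristic estimate h(node),
    # keeping parent links so the path can be reconstructed.
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:          # walk parent links backward
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None                              # goal unreachable

# Toy problem: integer nodes, goal 7, heuristic = distance to the goal.
print(best_first_search(0, 7, lambda n: [n + 1, n + 2], lambda n: abs(7 - n)))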
Enjoy your reading and, please, let me know if you have suggestions
for improving the form or content. Judea Pearl @ UCLA-SECURITY.
------------------------------
Date: 3 May 1984 20:50:55-EDT
From: walter at mit-htvax
Subject: Seminar - Computers and Incomprehensibility
[Forwarded from the MIT bboard by SASW@MIT-MC.]
GRADUAL STUDENT LUNCH SEMINAR SERIES
The G0001 Project:
An Experiment in G0002 and Creative G0003
A G0004 is described, in which many G0003 involving strikingly
different G0005 and levels of G0006 can be made. The question "What
differentiates the good G0003 from the bad G0003?" is discussed, and
the problem of how to G0008 a G0009 G0010 of the G0011 G0012 to come
up with such G0003 (and to have a sense for their quality) is
considered. A key part of the proposed system, now under
development, is its dependence on G0013 G0014 G0015 of G0016
interacting "G0017" (selected at random to G0019 with G0020
proportional to G0021 assigned "G0022"). Another key G0023 is a
G0024 of linked G0005 of varying levels of "G0025", in which G0026
spreads and G0027 controls the G0028 of new G0017. The shifting of
(1) G0033 G0034 inside structures, (2) descriptive G0005 chosen to
apply to G0030, and (3) G0043 perceived as "G0031" or not, is called
"G0032". What can G0031, and how, are G0014 G0033 of the interaction
of (1) the temporary ("G0034") structures involved in the G0003 with
(2) the permanent ("G0035") G0005 and links in the G0036 network, or
"G0037 network". The G0038 of this system is G0039 as a general
G0038 suitable for dealing not only with fluid G0003, but also with
other types of G0039 G0040 and G0041 tasks, such as musical G0040,
G0041 G0042, Bongard problems and others.
12:00 NOON 8TH FLOOR PLAYROOM FRIDAY 5/5
Hosts: Harry Voorhees and Dave Siegel
------------------------------
Date: 27 Apr 84 20:51:58-PST (Fri)
From: harpo!ulysses!burl!clyde!akgua!sdcsvax!davidson @ Ucb-Vax
Subject: Re: New topic for discussion (long)
Article-I.D.: sdcsvax.736
This is a response to the submission by Phaedrus at the University of
Maryland concerning speculations about the nature of conscious beings.
I would like to take some of the points in his/her submission and treat
them very skeptically. My personal bias is that the nature of
conscious experience is still obscure, and that current theoretical
attempts to deal with the issue are far off the mark. I recommend
reading the book ``The Mind's I'' (Hofstadter & Dennett, eds.) for
some marvelous thought experiments which (for me) debunk most current
theories, including the one referred to by Phaedrus. The quoted
passages which I am criticizing are excerpted from an article by J. R.
Lucas entitled ``Minds, Machines, and Goedel'' which was excerpted in
Hofstadter's Goedel, Escher, Bach and found there by Phaedrus.
    the concept of a conscious being is, implicitly, realized to be
    different from that of an unconscious object
This statement begs the question. No rule is given to distinguish conscious
and unconscious objects, nothing is said about the nature of either, and
nothing indicates that consciousness is or is not a property of all or no
objects.
    In saying that a conscious being knows something we are saying not
    only does he know it, but he knows that he knows it, and that he
    knows that he knows that he knows it, and so on ....
First, I don't accept the claim that people possess this meta-knowledge more
than a (small) finite number of levels deep at any time, nor do I accept
that human beings frequently engage in such meta-awareness. The fact that
human beings can pursue this abstraction process arbitrarily deeply (though
in practice they get lost fairly quickly) does not mean that any process or
structure of infinite extent is present.
Second, such a recursive process is straightforward to simulate on a
computer or to build into an AI system. I don't see any reason to regard such
systems as conscious, even though they do it better than we do (they
don't have our short-term memory limitations).
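As a minimal illustration, the nesting can be generated mechanically to any
finite depth. The nested-tuple representation below is invented for the
example and makes no claim about how such knowledge is actually held.

def meta_knowledge(proposition, depth):
    # Wrap a proposition in `depth` levels of "knows(...)".
    fact = proposition
    for _ in range(depth):
        fact = ("knows", "system", fact)
    return fact

# Three levels of meta-awareness about one base fact.
print(meta_knowledge("grass is green", 3))
# -> ('knows', 'system', ('knows', 'system', ('knows', 'system', 'grass is green')))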
    we insist that a conscious being is a unity, and though we talk
    about parts of our mind, we do so only as a metaphor, and will not
    allow it to be taken literally.
Well, this is hardly in accord with my experience. I often become aware of
having been pursuing parallel thought trains, but until they merge back
together again, neither was particularly aware of the other. Marvin Minsky
once said the same thing after a talk claiming that the conscious mind is
inherently serial. Superficially, introspection may seem to show a unitary
process, but more careful introspection dissolves this notion.
    The paradoxes of consciousness arise because a conscious being can
    be aware of itself, as well as of other things, and yet cannot
    really be construed as being divisible into parts.
The word ``aware'' is an implicit reference to the unknown mechanism of
consciousness. This is part of the apparent paradox. Again, there's
nothing mysterious about a system having a model of itself and being able to
do reasoning on that model the same way it does reasoning on other models.
Also again, nothing here supports the claim that the conscious mind is not
divisible.
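A minimal sketch of that idea (the model contents and names here are
invented for the example): the same query routine serves whether the model
consulted describes some external object or the system itself.

models = {
    "thermostat": {"kind": "device", "senses": ["temperature"]},
    "self":       {"kind": "program", "senses": ["its own model store"]},
}

def query(model_name, attribute):
    # Look up an attribute in whichever model is named; no special
    # machinery is needed when the model happens to describe the system.
    return models[model_name].get(attribute, "unknown")

print(query("thermostat", "senses"))   # ['temperature']
print(query("self", "senses"))         # ['its own model store']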
    It means that a conscious being can deal with Godelian questions in
    a way in which a machine cannot, because a conscious being can
    consider itself and its performance and yet not be other than that
    which did the performance.
Whatever the conscious mind is, it appears to be housed in a physical
information processing system, to wit, the human brain. If our current
understanding about the kind of information processing brains are capable of
is correct, brains fall into the class of automata and cannot ultimately do
any processing task that cannot be done with a computer. The conscious mind
can scrutinize its internal workings to an extent, but so can computer
programs. Presumably the Goedelian & (more to the point) Turing limitations
apply in principle to both.
    no extra part is required to do this: it is already complete, and
    has no Achilles' heel.
This is an unsupported statement. The whole line of reasoning is rather
loose; perhaps the author simply finds it psychologically difficult to
suppose that he has any fundamental limitations.
    When we increase the complexity of our machines, there may, perhaps,
    be surprises in store for us.... Below a certain ``critical'' size,
    nothing much happens.... Turing is suggesting that it is only a
    matter of complexity [before?] a qualitative difference appears.
Well, it's very easy to build machines that are infeasible to predict. Such
machines do not even have to be very complex in construction to be highly
complex in behavior. Las Vegas is full of such machines.
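One stock illustration (not tied to any particular Las Vegas device) is the
logistic map: a one-line deterministic rule whose long-run behavior is
effectively unpredictable, because tiny differences in the starting point
grow exponentially.

def logistic(x, r=4.0):
    # One-line deterministic update rule; chaotic for r = 4.
    return r * x * (1.0 - x)

a, b = 0.300000000, 0.300000001   # starting points differing by one part in a billion
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(a, b)   # after 60 steps the two trajectories no longer resemble each other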
The idea that complexity in itself can result in a system able to escape
Goedelian and Turing limitations is directly contradicted by the
mathematical induction used in their proofs: the limitations apply to
*arbitrary* automata, not just to automata simple enough for us to
inspect.
Charlatans can claim any properties they want for mechanisms too complex for
direct disproofs, but one need not work hard before dismissing them with
indirect disproofs. This is why the patent office rejects claimed perpetual
motion machines which supposedly operate merely by the complexities of their
mechanical or electromagnetic design. It is also why journals of
mathematics reject ridiculously long proofs which claim to supply methods of
squaring the circle, etc. No one examines such proofs to find the flaw; it
would be a thankless task, and it is not necessary.
    It is essential for the mechanist thesis that the mechanical model
    of the mind shall operate according to ``mechanical principles,''
    that is, we can understand the operation of the whole in terms of
    the operation of its parts....
Certainly one expects that the behavior of physical objects can be explained
at any level of reduction. However, consciousness is not necessarily a
behavior, it is an ``experience'', whatever that is. Claims of
consciousness, as in ``I assert that I am conscious'' are behavior, and can
reasonably be subjected to a reductionist analysis. But whether this will
shed any light on the nature of consciousness is unclear. A useful analogy
is whether attacking a computer with a voltmeter will teach you anything
about the abstractions ``program'', ``data structure'', ``operating
system'', etc., which we use to describe the nature of what is going on
there. These abstractions, which we claim are part of the nature of the
machine at the level we usually address it, are not useful when examining
the machine below a certain level of reduction. But that is no paradox,
because these abstractions are not physical structure or behavior, they are
our conceptualizations of its structure and behavior. This is as mystical
as I'm willing to get in my analysis, but look at what Lucas does with it:
    if the mechanist produces a machine which is so complicated that
    this [process of reductionist analysis] ceases to hold good of it,
    then it is no longer a machine for the purpose of our discussion,
    no matter how it was constructed. We should say, rather, that he
    had created a mind, in the same sort of sense as we procreate
    people at present.
If someone produces a machine which exhibits behavior that is
infeasible to predict through reductionist methods, there is nothing
fundamentally different about it. It is still obeying the laws of physics
at all levels of its structure, and we can still in principle apply to it
any desired reductionist analysis. We should certainly not claim to have
produced anything special (such as a mind) just because we can't easily
disprove the notion.
    When talking of [human beings and these specially complex machines]
    we should take care to stress that although what was created looked
    like a machine, it was not one really, because it was not just the
    total of its parts: one could not even tell the limits of what it
    could do, for even when presented with the Goedel type question, it
    got the answer right.
There is simply no reason to believe that people can answer Goedelian
questions any better than machines can. This bizarre notion that conscious
objects can do such things is unproven and dubious. I assert that people
cannot do these things, and neither can machines, and that the ability to
escape from Goedel or Turing restrictions is irrelevant to questions of
consciousness, since we are (experientially) conscious but cannot do such
things.
I find that most current analyses of consciousness are either mystical like
the one I've addressed here, or simply miss the phenomenon by attacking the
system at a level of reduction beneath the level where the concept seems to
apply. It is tempting to think we can make scientific statements about
consciousness just because we can experience consciousness ourselves. This
idea runs aground when we find that it depends on capturing
scientifically the phenomena of ``experience'', ``consciousness'' or
``self'', which I have not yet seen adequately done. Whether consciousness
is a phenomenon with scientific existence, or whether it is an abstract
creation of our conceptualizations with no external or reductionist
existence is still undetermined.
-Greg Davidson (davidson@sdcsvax.UUCP or davidson@nosc.ARPA)
------------------------------
End of AIList Digest
********************