AIList Digest Saturday, 12 Apr 1986 Volume 4 : Issue 80
Today's Topics:
Queries - Shape & LOOPS on a XEROX,
Application - Automatic Documentation,
Policy - Press Releases,
Journal - AI Expert,
Review - Spang Robinson Report, Volume 2 No. 4,
Philosophy - Lucas on AI & Computer Consciousness
----------------------------------------------------------------------
Date: Thu 10 Apr 86 09:52:08-PST
From: Ken Laws <LAWS@SRI-IU.ARPA>
Subject: Shape
Jerry Hobbs has asked me "What is a hook and what is a ring that we know
the ring can hang on the hook?" More specifically, what do we have to
know about hooks and rings in general (for default reasoning) and
about a particular hook-like object and ring-like object (dimensions,
radius of curvature, surface normals, clearances, tolerances, etc.)
in order to say whether a particular ring may be placed on a particular
hook, and whether it is likely to stay in place once put there? Can we
reason about the functionality of shapes (in this and in other "mechanics"
problems) without resorting to full CAD/CAM descriptions, physics,
and simulation? How do people (e.g., children) reason about shape,
particularly in the intuitively obvious cases where tolerances are not
critical? Can anyone suggest a good lead?
-- Ken Laws
------------------------------
Date: Fri, 11 Apr 86 15:03 CST
From: Brick Verser <BAV%KSUVM.BITNET@WISCVM.WISC.EDU>
Subject: LOOPS running on a XEROX
Does anybody have any information pertaining to applications running
under Loops on Xerox hardware?
------------------------------
Date: Thu, 10 Apr 86 20:30:50 pst
From: saber!matt@SUN.COM (Matt Perez)
Subject: Re: towards better documentation
>
> I am interested in creating an expert system to serve as on-line
> documentation. The intent is to abrogate the above law and
> corollaries. Does anyone know of such a system or any effort(s) to
> produce one?
>
Contact Mark Miller of Computer*Thought, in Dallas,
Texas. They may have what you are looking for. Mark
is a pretty friendly guy and may also point you to the
right literature, etc.
The other place I can think of where there's something
like this in development is the work being done by
Prof. Wilensky at UC Berkeley: The Unix Consultant.
Matt Perez
------------------------------
Date: Thu 10 Apr 86 09:03:06-PST
From: GARVEY@SRI-AI.ARPA
Subject: Re: Policy - Press Releases
Eschew Policy!! Let Ken handle it; if you don't like what he lets
through, don't read it (I have ^O on my terminal for just such
situations). Nobody wastes your time but you....
(And maybe me.)
Cheers,
Tom
[Unfortunately ^O doesn't work for those reading the "unexploded"
digest format. Most mail programs haven't adapted to the digest
formats yet. -- KIL]
------------------------------
Date: Thu 10 Apr 86 11:17:53-PST
From: Ken Laws <LAWS@SRI-IU.ARPA>
Subject: AI Expert
The April issue of CACM has an ad (p. A-31) for AI Expert, a new journal
for AI programmers. "No hype, no star wars nonsense, no pipe dreams.
AI Expert will focus on practical AI programming methods and applications."
AI Expert, 2443 Fillmore Street, Suite 500, San Francisco, CA 94115;
$27 for 13 issues, $33 Canada, $39 worldwide, $57 airmail.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Report, Volume 2 No. 4 Summary
Summary of The Spang Robinson Report, Volume 2 No. 4
April 1986
Packaging Financial Expertise
Activities at specific companies and available products:
Applied Expert Systems (APEX) Plan Power:
Expert system for personal financial planning. (Less than $50K, including
the Xerox 1186 on which it runs.)
Arthur D. Little:
Personal Financial Planning System (in test); Equity Trader's
Assistant, Cash Trader's Assistant, insurance personnel selection
system, investment manager's workstation, and bond indenture advice
system (in development). (Will run on a Symbolics 3670 and use databases
residing on an IBM mainframe.)
Cognitive Systems:
Courtier, a stock portfolio management system. One version is designed
for individual use at public terminals; another assists bank
portfolio managers. Runs on Apollo workstations and the DEC VAX.
Human Edge Software:
is supporting development of Financial Statement Analysis, an expert
business evaluation program, and a business plan expert for IBM PCs.
Palladian Software:
Financial Advisor, which is designed to help with corporate financial
decision-making and project evaluation. It is based upon
net present value.
Prophecy Development Corporation:
Profit tool, a brokerage and financial services shell. Runs on MS-DOS
computers and costs $1995.00.
Sterling Wentworth Corporation:
Planman, $4500.00
Database $2000.00
For CPAs to produce "comprehensive financial planning reports." Runs on MS-DOS;
400 units sold.
Syntelligence:
Underwriting Advisor System
Lending Advisor System.
Delivered on IBM 30xx- and 43xx-series machines with connections to PC/ATs.
Nikko Securities Co and Fujitsu:
(under development) a system for selecting stocks for investment.
Daiwa Securities:
Placed a system to provide investment counseling into operation last
month.
Yamaichi Securities:
developing AI-based investment products in collaboration with Hitachi
and Nippon Univak.
Nomura Securities:
This is Japan's largest stockbroker, now embarking on a broad-based
AI R&D program.
The total revenue for financial expert systems is five million
dollars. In 1985, financial applications were five percent of all
expert systems; in 1986 they are expected to be twenty percent. One in
five large financial institutions has applied expert systems.
__________________________________________________________________________
Japan Watch:
Nomura Research Institute has developed a tool called DORSAI for assisting
in the production of expert systems in PROLOG.
Hitachi's Energy Research Institute has developed a system for
proving theorems at high speed. Hitachi has applied for a patent; the
system employs the "connection graph" technique. Hitachi will use this
system for VLSI logic design, factory automation, real-time failure
diagnostics, chemical compound synthesis, and hardware applications.
60 percent of Japanese corporations are beginning to utilize AI or are
studying such a move. 28 percent of Japanese hardware and software
companies and heavy computer users have plans to enter AI, while
32 percent are currently involved in AI. 52 percent of the companies
with plans to enter AI expressed an interest in expert systems.
__________________________________________________________________________
Micro Trends:
Discussion of Borland's Turbo Prolog, including reactions from Arity, Quintus
Prolog, Gold Hill, and CIGNA.
__________________________________________________________________________
News:
Amoco Corporation and IntelliCorp announced a joint venture to market AI
products for molecular biology. The first new product will be Strategene.
Boeing Computer Services and Carnegie Federal Systems Corporation will
be working together on a Rome Air Development Center contract to
develop a "new engineering environment."
Carnegie Federal Systems will support TRW in developing AI software for
tactical mission planning and resource allocation functions.
Quintus Computer Systems has over 270 users, 170 of them using Quintus
on workstations. Revenues for 1985 were $2.1 million with
an 18 percent profit margin.
Rapport, a DBMS, may soon be available for Symbolics machines. It will run
not only in single-user mode but also as a multi-user file server.
UC Berkeley is developing a RISC-based LISP machine with multiple
processors (the SPUR project). The Aquarius project involves using separate
processors for numeric, Prolog, and LISP processing, each optimized for
its specific role.
Frank Spitznogle, formerly President and chief operating officer of Lisp
Machines, is now President and chief operating officer of Knowledge Systems of
Houston, Texas. They will be applying AI to the oil and gas industry.
Their first product will be an exploration-potential evaluation
consultant.
Thomas Kehler is now chairman and CEO of IntelliCorp.
------------------------------
Date: Thu, 10 Apr 86 11:33:07 est
From: John McLean <mclean@nrl-css.ARPA>
Subject: Lucas on AI
> From: Stanley Letovsky <letovsky@YALE.ARPA>
> At the conference on "AI and the Human Mind" held at Yale early in
> March 1986, a paper was presented by the British mathematician John
> Lucas. He claimed that AI could never succeed, that a machine was in
> principle incapable of doing all that a mind can do. His argument went
> like this. Any computing machine is essentially equivalent to a system
> of formal logic. The famous Godel incompleteness theorem shows that for
> any formal system powerful enough to be interesting, there are truths
> which cannot be proved in that system. Since a person can see and
> recognize these truths, the person can transcend the limitations of the
> formal system. Since this is true of any formal system at all, a person
> can always transcend a formal system, therefore a formal system can
> never be a model of a person.
Stanley Letovsky tries to refute this argument by showing that a
formal description that describes Lucas' beliefs may have unprovable
assertions that Lucas nevertheless believes.
> What is critical to realize, however, is that the Godel sentence
> for our model of Lucas is not a belief of Lucas' according to the model.
> The form of the Godel sentence
> G: not(provable(G))
> is syntactically distinct from the form of an assertion about Lucas'
> beliefs,
> believes(p,t)
> Nothing stops us from having
> believes(G,t)
> be provable in the system, despite the fact that G is not itself
> provable in the system.
This view of what G must look like is too restrictive. Note that the
Godel sentence for a system of first order arithmetic is an assertion
in number theory ("There is no integer such that..."). The fact that
the assertion numeralwise represents an assertion about provability
takes a great deal of showing. Similarly, the Godel sentence for our
model of Lucas may be an assertion about what Lucas believes. Since
Lucas is going to have beliefs about his beliefs and what is provable
in the system, it's not hard to believe that we can construct a
self-referential sentence G such that Lucas believes G at t but
believes(G,t) is not a theorem. This is particularly plausible since
there is a strong connection between what Lucas believes and what is
provable in the system. In particular, believes(believes(x,y),t) will
be provable iff Lucas believes that he believes x at y. But it is
plausible to assume that Lucas believes that he believes x at y iff
he believes x at y, i.e., iff believes(x,y) is provable. In other words,
the belief predicate is a provability predicate for the system restricted
to statements about beliefs.
To fill this out, note that we will probably have that if
believes("not(believes(x,y))",t) then not(believes(x,y)) since if Lucas
believes that he doesn't believe p, then he doesn't believe p. Now consider
G: believes("not(believes(G,t))",t).
If our system is consistent and such a G exists, G is not provable. If
G were provable, then not(G) would also be provable given our observation
since G is a statement about belief.
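One way to spell this out (a sketch only; it leans on the restricted
provability property assumed above) is:

```latex
% G is a fixed point: it asserts that Lucas believes, at t, that he
% does not believe G at t.
G \;\equiv\; \mathrm{believes}\bigl(\ulcorner\neg\,\mathrm{believes}(G,t)\urcorner,\; t\bigr)

% Suppose, for contradiction, that \vdash G.
% (1) G is itself a statement about belief, so by the restricted
%     provability property, \vdash \mathrm{believes}(G,t).
% (2) Applying the observation
%       \mathrm{believes}(\ulcorner\neg\,\mathrm{believes}(x,y)\urcorner,t)
%         \rightarrow \neg\,\mathrm{believes}(x,y)
%     with x = G, y = t gives \vdash \neg\,\mathrm{believes}(G,t).
% (1) and (2) contradict consistency, so G is not provable.
```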
I believe that it is possible to construct such a sentence G, but this
does not imply that we can't dismiss Lucas. Lucas' argument is unconvincing
since there is no reason to believe that for any formal system, I can see and
recognize the Godel sentence for that system. Godel sentences for a particular
system are long and complicated. Hence, there is no reason to believe
that Lucas surpasses every formal system. In fact, it is clear that
there is at least one formal system that can recognize as true a sentence
that Lucas can't. Consider the system that contains one axiom:
"is a sentence that Lucas will never recognize as true when appended
to its own quotation" is a sentence that Lucas will never recognize
as true when appended to its own quotation.
The system recognizes the sentence as true since it's an axiom; Lucas
doesn't.
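The "appended to its own quotation" device above is Quine's standard
trick for building a self-referential sentence without using "this
sentence." A minimal sketch (the particular predicate string is just
the one from the axiom; any predicate works):

```python
# Quine's construction: form a sentence by appending a predicate phrase
# to its own quotation, so the resulting sentence describes itself.
def quine_sentence(predicate):
    """Append `predicate`'s fragment to its own quotation, yielding a
    sentence whose subject (the quoted part) is the sentence itself."""
    fragment = ("is a sentence that " + predicate +
                " when appended to its own quotation")
    return '"' + fragment + '" ' + fragment

s = quine_sentence("Lucas will never recognize as true")
print(s)
# The quoted subject of `s`, appended to its own quotation, is exactly
# `s`, so the sentence is talking about itself.
```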
John McLean
mclean@nrl-css
...!decvax!nrl-css!mclean
------------------------------
Date: Thu, 10 Apr 86 11:19:13 EST
From: tes%bostonu.csnet@CSNET-RELAY.ARPA
Subject: Computer Consciousness
Informal talk on computer consciousness:
The whole family of questions like "Can computers feel emotion?"
and "Is it possible for a computer to be conscious?" define a loaded,
emotionally-charged subject. Some people (especially "artistic" folks, in
my experience) give an immediate emphatic "NO!" to these questions;
other people (many Science-is-The-Answer-to-Everything sorts)
devise computational models that parallel what we know about physical
brain structure and conclude "yes, of course"; and other folks remain
somewhere in the middle or profess "it's too complicated - I don't know."
My main beef with some physicalist or reductionist opinions is
the *assumption* that nothing except physical events exists in the universe,
and that a physical or functional description of a system describes its
essence entirely, and therefore if the human brain's neural interactions are
simulated by some machine then this machine is for all intents and purposes
equivalent to a human mind. To me, the phenomenological red that I perceive
when looking at an apple is OBVIOUSLY real, as is my consciousness. It is
ridiculous to conclude that consciousness and phenomenological experiences
do not exist simply because they cannot be easily described with mathematics
or the English language.
My main beef with immediate emphatic "NO"s is that it may reflect
an emotional fear of examining "taboo" territories, perhaps
because such inquiry threatens the Meaning of Life or the sovereignty
of the human mind. (There is no need to expound on how much suffering this
attitude has brought upon our ancestors throughout history.) To find out
that the human mind is "just this" or "just that" would significantly alter
certain worldviews.
The possibilities that are left to me are that
1) Consciousness "emerges" from the functionality of the
brain's neural interactions (if this is true, then it
would be entirely possible, in principle, for a computer
program with the same functionality to generate consciousness),
2) There is a dualism of the mental and the physical with
mysterious interactions between the two realms, and
3) Other possibilities which no one has thought of yet.
Now the first two may seem ridiculous, and I have no idea how to
prove or disprove them, but they remain *possibilities* for me because
they are not yet disproven. The physicalist proposal, on the other hand, is
proven wrong (or rather its absolute universality is proven wrong) by the
simplest introspective observation.
I am not campaigning for an end to all brain research or
cognitive science; these sciences will continue to yield useful information.
But I hope that these researchers and their fans do not delude themselves
into thinking that the only aspect of the universe which exists is the
aspect that science can deal with.
Tom Schutz
CSNET: tes@bu-cs
ARPA: tes%bu-cs@csnet-relay
UUCP: ...harvard!bu-cs!tes
------------------------------
End of AIList Digest
********************