AIList Digest Sunday, 9 Oct 1988 Volume 8 : Issue 96
Today's Topics:
  Spang Robinson Report
  Philosophy:
    Re: common sense "reasoning"
    Followup on JMC/Fishwick Diffeq
    Re: Newell's response to KL questions
----------------------------------------------------------------------
Date: Sun, 18 Sep 88 08:07:46 CDT
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: bm965
The Spang Robinson Report on Artificial Intelligence, August 1988,
Volume 4, No. 8.
The lead article is on the "New AI Industry".
Revenue List ($ millions):

                                     1987   1989
  Expert System Development Tools     139    278
  Natural Language                     49     95
  Symbolic Processing Languages        51    145
  AI Services                         150    336
  Symbolic Processors                 170    161
  General Workstations                 81    277

Number of companies selling AI technology or applications:

  1986    80
  1988  ~160
Discussions of Carnegie Group, Inference, IntelliCorp, Teknowledge,
and Lucid (Lisp). Lucid revenues in 1988 were $1.4 million.
________________________________________
Neural Networks:
Discussion of various neural network products, which collectively have
a 10,000-unit installed base. Expert systems took 30 months to reach
10,000 installed units, compared with 13 months for neural networks.
MIT sent out 7,000 copies of the software accompanying Explorations in
Parallel Distributed Processing.
NeuralWare has shipped 1,000 copies of its NeuralWorks tool, priced
from $195 to $2,995.
Neuronics has sold 500 units of MacBrain.
TRW has sold 40 units but is third in dollar volume.
________________________________________
Hypertext and AI:
CogentTEXT is a hypertext system embedded in Prolog. Each hypertext
button causes execution of an appropriate segment of Prolog code. The
system is distributed as shareware and can be obtained from Cogent
Software for $35.00 (508-875-6553).
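To make the button mechanism concrete, here is a minimal sketch in
Python of the dispatch idea (illustrative only: CogentTEXT binds
buttons to Prolog goals, and the button names and handlers below are
hypothetical):

  # Each hypertext button is bound to a segment of code that runs
  # when the button is activated.

  def show_definition():
      print("displaying the linked definition card")

  def run_search():
      print("searching the card index")

  # Hypothetical button table: button name -> code segment.
  BUTTONS = {
      "define": show_definition,
      "search": run_search,
  }

  def press(name):
      """Activate a button by running the code bound to it."""
      action = BUTTONS.get(name)
      if action is None:
          print("no action bound to button", repr(name))
      else:
          action()

  press("define")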
________________________________________
Third Millenium is a venture capital fund still interested in AI
start-ups (as well as neural networks).
________________________________________
Shorts:
IntelliCorp reports a profit of $416,000 for its fourth quarter.
Lucid has a product called Distill!, which removes the development
environment from the runtime executable. SUN renewed its ongoing OEM
agreement. Lucid has sold a total of 3,000 products, of which 2,000
went to SUN. CSK will be selling LUCID in Japan.
Neuron Data has integrated Neuron OBJECT with ORACLE, SYBASE and Ingres.
The interfaces cost $1000 each.
KDS has released a version of its expert system shell with a
blackboard facility.
Logicware ported MPROLOG and TWAICE (expert system shell) to IRIS
systems.
Flavors Technology has introduced a system that can perform real-time
inference over 10,000 rules in 10 milliseconds. A Japanese company has
ordered the product.
Inference has ported ART to IBM mainframes and PC (under MS-DOS).
The Spang Robinson Report has a two-page list of AI companies broken
down into the following fields: expert system tools, expert system
applications, languages (e.g., PROLOG), natural language systems, and
hardware.
------------------------------
Date: 26 Sep 88 14:59:56 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: common sense "reasoning"
Use of the term "common-sense reasoning" presupposes that common sense
has something to do with reasoning. This may not be the case. Many animals
exhibit what appears from the outside to be "common sense". Even insects
seem to have rudiments of common sense. Yet at this level reasoning seems
unlikely.
The models of behavior developed by Rod Brooks with his artificial
insects (there's a writeup on this in the current issue of Omni), and
by Hans Moravec in his new book "Mind Children", offer an alternative.
I won't attempt to summarize that work here, but it bears looking at.
I would encourage workers in the field to consider models of common
sense that don't depend heavily on logic. There are alternative ways to
look at this class of problem. Both Brooks and Moravec use approaches
that are spatial in nature, rather than propositional. This seems to be
a good beginning for dealing with the real world.
The energetic methods Witkin and Kass use in vision processing are
another kind of model which offers a spatial orientation, an internal
drive toward consistency, and the ability to deal with noisy data. These
are promising beginnings for common-sense processing.
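(To give the flavor of such energetic methods, here is a minimal
sketch in Python of a toy one-dimensional version, my own illustration
rather than Witkin and Kass's actual formulation: a curve u is fitted
to noisy data d by gradient descent on an energy that trades data
fidelity against smoothness. The smoothness term supplies the internal
drive toward consistency; the data term keeps the fit anchored to the
noisy observations.)

  import random

  random.seed(0)
  d = [0.1 * i + random.gauss(0, 0.2) for i in range(20)]  # noisy ramp

  u = d[:]                # start the estimate at the data
  lam, rate = 2.0, 0.05   # smoothness weight, descent step size

  for _ in range(500):
      # gradient of E(u) = sum((u[i]-d[i])**2) + lam*sum((u[i+1]-u[i])**2)
      g = [2.0 * (u[i] - d[i]) for i in range(len(u))]
      for i in range(len(u) - 1):
          diff = 2.0 * lam * (u[i + 1] - u[i])
          g[i] -= diff
          g[i + 1] += diff
      u = [u[i] - rate * g[i] for i in range(len(u))]

  print("noisy:   ", [round(v, 2) for v in d[:6]])
  print("smoothed:", [round(v, 2) for v in u[:6]])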
John Nagle
------------------------------
Date: 28 Sep 88 2:21 +0100
From: ceb%ethz.uucp@RELAY.CS.NET
Subject: Followup on JMC/Fishwick Diffeq
Apropos using diffeqs or other mathematical models to imbue a robot
with the ability to reason about observations of continuous phenomena:
in John McCarthy's message <cdydW@SAIL.Stanford.EDU>, JMC states that
(essentially) diffeqs are not enough and must be embedded in
"something" larger, which he calls "common sense knowledge". He also
states that diffeqs are inappropriate because "no one could acquire the
initial [boundary?] conditions and integrate them fast enough".
I would like to pursue this briefly by asking the question:
Just how much of this something-larger (JMC's framework of common
sense knowledge) could be characterized as descriptions of domains in
which such equations are in force, and of the interactions between
neighboring domains?
I ask because I observe in my colleagues (and sometimes in myself)
that an undying fascination with the diffeq "as an art form" can lead
one to think about them `in vitro', i.e. isolated on paper, with all
those partial-signs standing so proud. You have to admit, the idea as
such gets great mileage: you have a symbolic representation of
something continuous, and we really don't have another good way of
doing this. Notwithstanding, in order to use them, you've got to
describe a domain, the boundary conditions (bc's), and so on.
This bias towards setting diffeqs up on a stage may also stem from
practical grounds: in numerical-analysis work, even having described
the domain and the bc's, you're not home free yet - the equations
have to be discretized, which leads to huge, impossible-to-solve
matrices, etc. There are many who spend the bulk of their working
lives trying to find discretizations which behave well for certain
ill-behaved but industrially important equations. Such research is
done by trial-and-error, with verification through computer
simulation. In such simulations, to try out new discretizations, the
same simple sample domains are used over and over again, in order to
try to get results which *numerically* agree with some previously
known answer or somebody else's method. In short, you spend a lot of
time tinkering with the equation, and the domain gets pushed to the
back of your mind.
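(To make the discretization step concrete: a minimal sketch in Python,
using a textbook example of my own choosing rather than anything from
the discussion above - the 1D heat equation u_t = c*u_xx, discretized
with an explicit finite-difference scheme. Even this toy case shows
how the continuous equation becomes repeated updates over an array;
the ill-behaved industrial equations mentioned above need implicit
schemes, and those bring the huge matrices with them.)

  c, dx, dt = 1.0, 0.1, 0.004     # dt satisfies c*dt/dx**2 <= 0.5
  nx, steps = 11, 100             # grid points, time steps

  u = [0.0] * nx
  u[nx // 2] = 1.0                # initial condition: spike of heat

  for _ in range(steps):
      new_u = u[:]                # u = 0 held at both boundaries
      for i in range(1, nx - 1):  # update interior points
          new_u[i] = u[i] + c * dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
      u = new_u

  print([round(v, 3) for v in u])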
In the case of the robot, two things are different:
1. No one really cares about the numerical accuracy of the results:
something qualitative should be sufficient.
2. The modelled domains are *not* simple, and do not stay the same.
There can also be quite a lot of them.
I would wager that, if the relative importance of modelling the domain
and modelling the intrinsic behavior that takes place within it were
turned around, and given that you could do a good enough job of
modelling such domains, then:
a. only a very small subset of diffeqs, not scientifically accurate
but very easy to integrate, would be needed to give good performance,
b. in this case, integration in real time would be a possibility (see
the sketch below), and,
c. something like this will be necessary. I believe this supports the
position taken by Fishwick, as near as I understood it.
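(As a concrete reading of wagers a. and b., a sketch of my own, not
anything proposed in the original messages: a deliberately crude drag
model v' = -k*v, stepped by explicit Euler in Python. It is not
scientifically accurate, but it is trivially cheap to integrate in
real time, and it yields the kind of qualitative prediction a robot
needs: the rolling object slows and stops.)

  k, dt = 0.5, 0.05     # drag coefficient and time step (illustrative)
  v, x = 2.0, 0.0       # initial velocity and position

  for _ in range(100):  # 5 simulated seconds
      x += v * dt       # advance position
      v += -k * v * dt  # Euler step of v' = -k*v

  print("after 5.0 s: x = %.2f, v = %.3f" % (x, v))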
One might wonder idly if the Navier-Stokes equations (even in laminar
form) would be among the small set in wager a. Somehow I doubt it, but
this is not really so important, and certainly need not be decided
in advance. It may even be that you can get around using anything at
all close to differential equations.
What does seem important, though, is the need to be able to
geometrically describe domains at least qualitatively accurately, and
this `on the fly'. I am not claiming this would cover all "common
sense knowledge", just a big part of it.
ceb
P.S. I would also be interested to hear of anyone working on such
modelling --- preferably by mail.
------------------------------
Date: 30 Sep 88 04:06:59 GMT
From: goel-a@tut.cis.ohio-state.edu (Ashok Goel)
Subject: Re: Newell's response to KL questions
I appreciate Professor Allen Newell's explanation of his scheme of
knowledge, symbolic, and device levels for describing the architecture
of intelligence. More recently, Prof. Newell has proposed a scheme
consisting of bands, specifically, the neural, cognitive, rational,
and social bands, for describing the architecture of the mind-brain.
Each band in this scheme can have several levels; for instance, the
cognitive band contains (among others) the deliberation and the
operation levels. What is not clear (at least not to me) is the
relationship between the two schemes. One possible relationship is
collinearity, in that the device level corresponds to the neural band,
the symbolic level to the cognitive band, and the knowledge level to
the rational band. Another possibility is containment, in the sense
that each band consists of (the equivalents of) knowledge, symbolic,
and device levels. Yet another possibility is orthogonality of one
kind or another. Which relationship (if any)
between the two schemes does Prof. Newell imply?
A commonality between Newell's two schemes is their emphasis on
structure. A different scheme, David Marr's, focuses on the
processing and functional aspects of cognition. Again, what (if any)
is the relationship between Newell's levels/bands and Marr's levels?
Collinearity, containment, or some kind of orthogonality?
--ashok--
------------------------------
End of AIList Digest
********************