AIList Digest           Wednesday, 10 Jun 1987    Volume 5 : Issue 140 

Today's Topics:
AI Tools - ID3 vs C4 & Expert Systems for CAD,
Comment - Precision in Writing,
Theory - Complexity Theory &
Applying AI Models to Biology &
Symbol Grounding and Physical Invertibility

----------------------------------------------------------------------

Date: 4 Jun 87 13:04:17 GMT
From: reiter@endor.harvard.edu (Ehud Reiter)
Subject: Re: ID3 vs C4

In article <114@upba.UUCP> damon@upba.UUCP (Damon Scaggs) writes:
>I understand that Ross Quinlan, author of the ID3 classification algorithm
>has developed a better version with the designation C4. I am looking for
>any papers or references about this new algorithm as well as any comments
>about what it does better.

The best reference I've seen on statistical algorithms for learning decision
trees is

CLASSIFICATION AND REGRESSION TREES
by L. Breiman, J. Friedman, R. Olshen, C. Stone
Wadsworth Press, 1984

The book makes no specific mention of ID3 or C4, but it gives much more
detail about this class of learning algorithms than I've seen in any of
Quinlan's papers.
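
For those who haven't met this class of algorithms: both ID3 and the
CART procedure described in the book grow a decision tree greedily,
at each node choosing the attribute whose split most reduces an
impurity measure. Below is a minimal Python sketch of the ID3-style
information-gain computation (the toy data and names are made up for
illustration; CART uses the Gini index rather than entropy):

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon entropy (in bits) of a list of class labels.
        n = len(labels)
        return -sum((c / n) * math.log2(c / n)
                    for c in Counter(labels).values())

    def information_gain(examples, attribute, labels):
        # Entropy reduction from partitioning the examples on one
        # attribute. `examples` is a list of dicts (attribute -> value);
        # `labels` is the parallel list of class labels.
        parts = {}
        for ex, lab in zip(examples, labels):
            parts.setdefault(ex[attribute], []).append(lab)
        n = len(labels)
        remainder = sum(len(p) / n * entropy(p) for p in parts.values())
        return entropy(labels) - remainder

    # Toy data: ID3 splits on the highest-gain attribute, then recurses
    # on each partition until the labels at a node are pure.
    examples = [{"outlook": "sunny", "windy": "no"},
                {"outlook": "sunny", "windy": "yes"},
                {"outlook": "rain",  "windy": "no"},
                {"outlook": "rain",  "windy": "yes"}]
    labels = ["play", "play", "stay", "stay"]
    best = max(["outlook", "windy"],
               key=lambda a: information_gain(examples, a, labels))
    print(best)  # -> outlook (gain 1.0, vs 0.0 for windy)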

I'm posting this response to the net because I really do think this is a
superb book.
Ehud Reiter
reiter@harvard (ARPA,BITNET,UUCP)
reiter@harvard.harvard.EDU (new ARPA)

------------------------------

Date: 4 Jun 87 19:16:24 GMT
From: bolasov@athena.mit.edu (Benjamin I Olasov)
Reply-to: aphrodite!bolasov@RUTGERS.EDU (Benjamin I Olasov)
Subject: Re: Wanted: Information on current work in Expert Systems
for CAD


In article <8705300459.AA08391@ucbvax.Berkeley.EDU> SPANGLER@gmr.COM writes:
>I am beginning a survey of the current status of work in applying Expert
>Systems technology to Computer Aided Design. This survey is being done
>for the Knowledge Engineering group at General Motors.
>
>I would greatly appreciate any descriptions of or references to research
>in this area, as well as information on what CAD expert systems and
>expert system shells are available for purchase.
>
> -- Scott Spangler, spangler@gmr.com
> -- Advanced Engineering Staff, GM


I just finished writing a master's thesis on exactly this topic;
however, it focuses primarily on applications for architectural
practice, especially design with a structural pre-cast concrete panel
system. You may also write or send e-mail to Professor Sriram in the
Civil Engineering Department here at MIT, who has written an expert
system for structural design called DESTINY. His e-mail address is
sriram@athena.mit.edu; mine is bolasov@aphrodite.mit.edu or
Olasov@MIT-MULTICS.ARPA.

Good luck!

Ben

------------------------------

Date: Tue, 09 Jun 87 11:26:05 n
From: DAVIS%EMBL.BITNET@wiscvm.wisc.edu
Subject: precision in writing


I have no wish to make a big deal out of this point, but I feel that Laura
Creighton's remarks on precision in writing/expression must be dealt with.
She writes:

> Precision is a good thing. If one can say precisely what one means then
> one will not be misunderstood. This, alas, is a pipe dream. There
> is no way to say precisely what one means -- what one says does not
> have precise meaning embedded in the words or the relationships between
> the words. Rather, one shares a linguistic context with one's
> audience. This means that the search for precision is never ending.

I'm afraid that there is a gross difference between the precise delineation
of an idea, and over-precise word usage. To be sure, all of human activity
is constantly capable of generating new words, and new uses for old words
(radical! barf! hack! bug!) - but this alone does not justify the
`jargonising' of debate. I believe that if two (or more) people wish to
debate any issue, then they have a responsibility to do so on as much common
ground as humanly possible. Do you think that Bertrand Russell was any less
capable of a meaningful debate on various aspects of philosophy/cognition
because he didn't have access to computerese? The delineation of an idea
is capable of being precise through carefully chosen analogy and metaphor.
Such a route is actually better than jargonising since the writer/speaker
stands a better chance of getting the audience to appreciate the *core*
of an idea, rather than sit back satisfied that they *think* they understand
his words...

Sorry to go on about this one, but so much of the debate in and around
AI/cognitive science/philosophy of mind gets bogged down by people jargonising
their positions, which forces replies to first hack through the cloud
that surrounds potentially good (or bad) opinions.

yours for jargon free AI,

paul davis

"i wash my own clothes, i wash my own hair, i wash my own opinions"
nb: but my employers provide the washing machine, the shower & the computer

davis@embl.bitnet

------------------------------

Date: 3 Jun 87 17:15:21 GMT
From: mcvax!botter!klipper!biep@seismo.css.gov (J. A. "Biep" Durieux)
Subject: What philosophical problems does complexity theory yield?

Suppose P != NP. Then some things will take a long time to compute.
But so what?

Suppose someone finds out that not all problems can be solved in constant
time. Now that comes as a philosophical shock, of course. That has
lots of implications.

But once one has overcome that shock, finding that some problems cannot
be solved in linear time may be annoying; since the possibility of
constant time has already been ruled out, though, it's no great news.

As, one by one, all sorts of upper bounds on exponents prove false, and
finally it seems even polynomial time isn't good enough, doesn't one
simply get bored by all those variations on the same theme? So what
exactly is so exciting about that polynomial limit?


About constant time solutions: Seemingly linear-time solutions can often be
turned into constant-time solutions by applying parallelism. This is the
way the universe is able to simulate itself, however big it (the object)
may be or grow. I don't know what the complexity of the collapse of a
wave-function would be, supposing that all "space-time points" of it
worked on it in parallel.
But, isn't anything which cannot be turned into a constant-time process
philosophically annoying? Why all the fuss about non-polynomial time
solutions? Am I missing something? (Shouldn't I have asked that at the
beginning of this article? :-))
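
To put a number on how far parallelism actually goes: a pointwise
operation with one processor per element does collapse to constant
depth, but even a plain sum of n numbers still needs about log2(n)
parallel rounds of pairwise combining, so not everything flattens to
constant time. A toy sketch (sequential Python standing in for an
idealised parallel machine; the names are mine):

    def parallel_sum(xs):
        # Pairwise tree reduction: within a round, every pair is
        # combined "simultaneously", so the depth is ceil(log2(n))
        # rounds rather than the n - 1 steps of a sequential sum.
        rounds = 0
        while len(xs) > 1:
            paired = [xs[i] + xs[i + 1]
                      for i in range(0, len(xs) - 1, 2)]
            if len(xs) % 2:           # odd element carries over
                paired.append(xs[-1])
            xs = paired
            rounds += 1
        return xs[0], rounds

    total, depth = parallel_sum(list(range(16)))
    print(total, depth)  # -> 120 4: sixteen numbers, four rounds
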
--
Biep. (biep@cs.vu.nl via mcvax)
Popper tells us how science ought to be done - Kuhn tells us how it *is* done.
And I ... I read neither.

------------------------------

Date: 4 Jun 87 16:09:44 GMT
From: ramesh@cvl.umd.edu (Ramesh Sitaraman)
Subject: Re: What philosophical problems does complexity theory yield?

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:

>But, isn't anything which cannot be turned into a constant-time process
>philosophically annoying? Why all the fuss about non-polynomial time
>solutions? Am I missing something? (Shouldn't I have asked that at the
>beginning of this article? :-))
>--

Yes, you are missing the point !!

The difference between a polynomial and a non-polynomial solution for
a problem is the difference between structure and a complete lack of
it. If P != NP, we would have shown that some problems can be solved
only by something similar to a dumb exhaustive search over the
solution space, i.e., there is not enough structure in the problem to
constrain its solutions.

Graph theorists have found Eulerian circuits very interesting,
and very strong theorems have been proved about graphs with this
property. However, the seemingly similar problem of Hamiltonian
circuits has almost no characterisation in spite of diligent efforts
for the past 100 years or so. The theory of NP-completeness explains
this anomaly: the Eulerian circuit problem can be solved in linear
time, while the Hamiltonian circuit problem is NP-complete !!!

Ramesh

(Defn:
Eulerian ckt: A circuit passing through all edges of a graph
without repeating an edge.
Hamiltonian ckt: A circuit passing through all the vertices
of a graph without repeating a vertex.
)
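
The contrast is easy to see in code. A minimal Python sketch (the
graph encoding and names are mine): the Eulerian test is a local,
linear-time degree check, while the obvious Hamiltonian test falls
back on exhaustive search over vertex orderings -- exactly the lack
of exploitable structure described above.

    from itertools import permutations

    def has_eulerian_circuit(adj):
        # adj: dict mapping vertex -> set of neighbours (undirected,
        # assumed connected). Euler's theorem: a circuit using every
        # edge exactly once exists iff every vertex has even degree --
        # a linear-time, purely local test.
        return all(len(nbrs) % 2 == 0 for nbrs in adj.values())

    def has_hamiltonian_circuit(adj):
        # Brute force over all vertex orderings: factorially many
        # candidates, and nothing essentially better is known in
        # general.
        vertices = list(adj)
        first = vertices[0]
        for perm in permutations(vertices[1:]):
            cycle = [first, *perm, first]
            if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
                return True
        return False

    # A 4-cycle has both: every degree is even, and the cycle itself
    # visits every vertex exactly once.
    square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(has_eulerian_circuit(square), has_hamiltonian_circuit(square))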

------------------------------

Date: 6 Jun 87 20:52:53 GMT
From: umnd-cs!umn-cs!moll@ucbvax.Berkeley.EDU (Rick Moll)
Subject: Re: What philosophical problems does complexity theory yield?

In article <789@klipper.cs.vu.nl> biep@cs.vu.nl (J. A. "Biep" Durieux) writes:
>Suppose P != NP. Then some things will take a long time to compute.
>But so what?
>As, one by one, all sorts of upper bounds on exponents prove false, and[...]

You seem to be implying that if P=NP then *all* problems can be solved in
polynomial time. This is certainly not so. Given any computable function
f(n), one can construct (by diagonalization) a problem which can be solved,
but cannot be solved in time f(n) on a Turing machine. I believe
the same proof would work for parallel machines.
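
For concreteness, the diagonal construction can be sketched as follows
(my reconstruction, glossing over simulation overhead; f is assumed
computable):

    \[
      D_f = \{\, \langle M \rangle \;:\; M \text{ does not accept }
            \langle M \rangle \text{ within } f(|\langle M \rangle|)
            \text{ steps} \,\}
    \]

D_f is decidable: compute f(|<M>|) and simulate M on <M> for that many
steps. But no machine decides D_f in time f(n): a machine M0 that did
would, on input <M0>, accept within f(|<M0>|) steps iff <M0> is in D_f,
i.e. iff M0 does *not* accept <M0> within that many steps -- a
contradiction.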

>About constant time solutions: Seemingly linear-time solutions can often be
>turned into constant-time solutions by applying parallelism. [...]

Note that P=NP is stated as a problem about Turing machines (or sometimes
single-processor random access machines). Any problem in the class NP
can definitely be solved in polynomial time if one is allowed to use an
arbitrary number (varying with the size of the problem instance) of
processors.
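
A toy way to see this (my sketch, not from the original post): a
problem in NP has polynomial-time-checkable certificates, so an
idealised machine with one processor per candidate certificate could
verify all of them simultaneously in one polynomial-time parallel
step -- at the cost of exponentially many processors. Simulated
sequentially in Python:

    from itertools import product

    def sat_parallel(clauses, n_vars):
        # clauses: list of lists of signed ints, e.g. [[1, -2], [2, 3]]
        # for (x1 or not x2) and (x2 or x3). Each truth assignment is
        # one candidate certificate; the any() below stands in for
        # 2^n_vars processors each doing one polynomial-time check.
        def satisfies(assignment):
            return all(any((lit > 0) == assignment[abs(lit) - 1]
                           for lit in clause)
                       for clause in clauses)
        return any(satisfies(a)
                   for a in product([False, True], repeat=n_vars))

    print(sat_parallel([[1, -2], [2, 3], [-1, -3]], 3))  # -> True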

------------------------------

Date: 7 Jun 87 00:28:36 GMT
From: chiaraviglio@husc4.harvard.edu (lucius chiaraviglio)
Subject: Re: Taking AI models and applying them to biology...

In article <622@unicus.UUCP> craig@unicus.UUCP (Craig D. Hubley) writes:
>I've heard that mammal cells appear to suffer a "hard" reproductive limit
>of 52 mitosis operations, and that meiosis "resets this counter" to 0.
>
>- any comment on this, bio-med types? Is it true?
>- Would a theory assuming a simple variable or random "counter" in each cell
>limiting its reproductive span better explain aging (programmed cells...)

Random failure may be a significant factor in aging, but a hard limit
on the number of times a cell may divide before it self-destructs has been
observed in tissue culture, where the cells are for the most part not
dependent on each other. Those cells which manage to get past the hard limit
are abnormal (although not necessarily cancerous) in ways beyond their mere
ability to keep dividing after they were supposed to self-destruct. I don't
remember most of the details of this, but I do remember that they tend to
become tetraploid (I think also aneuploid) due to an increase in the rate of
mitotic failure.
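
As a toy version of the "counter" theory Craig proposed, one can
simulate a population in which each cell carries a division budget
(all numbers below are made up for illustration; the limit observed in
cultured human cells is on the order of fifty doublings):

    import random

    def simulate(n_cells=4, limit=5, p_divide=0.5, generations=20):
        # Each cell carries a mitosis budget; a division produces two
        # daughters with the budget decremented, and a cell whose
        # budget reaches 0 stops dividing (senescence). On the view
        # quoted above, meiosis would reset the budget.
        cells = [limit] * n_cells
        for _ in range(generations):
            next_gen = []
            for budget in cells:
                if budget > 0 and random.random() < p_divide:
                    next_gen += [budget - 1, budget - 1]
                else:
                    next_gen.append(budget)
            cells = next_gen
        return cells

    random.seed(1)
    cells = simulate()
    print(len(cells), "cells,",
          sum(1 for b in cells if b == 0), "of them senescent")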

-- Lucius Chiaraviglio
lucius%tardis@harvard.harvard.edu
seismo!tardis.harvard.edu!lucius

------------------------------

Date: 9 Jun 87 00:02:19 GMT
From: pixar!davel@ucbvax.Berkeley.EDU (David Longerbeam)
Subject: Re: Taking AI models and applying them to biology...

In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:
>
> This description of the human memory system, though cloaked in vaguer terms,
> corresponds more or less one-to-one with the traditional computer
> architecture we all know and love. To wit:

[description deleted]

> At least this far, this theory appears to owe a lot to computer science.
> Granted, there is lots of empirical evidence in favour, but we all know
> how a little evidence can go far too far towards developing an analogy.

One of my philosophy professors in college offered the observation that
models for the human mind have always seemed to correspond to the most
advanced form of technology at that given point in history. He could
recall that when he was young, this technology was the combustion engine,
and lo, the cognitive psychologists' model at that time was the combustion
engine.

Of course, this technology is now the digital computer, and many psychologists,
linguists and computer scientists use it as a model to explain activities
of the human mind. Some go so far as to say that intelligence is nothing
more than the result of following the same sorts of syntactical rules as
performed by a computer!

But I stray...
I wanted to point out that you didn't give the source of the above model/
comparison, and that if it is not entirely empirical in nature, it
may be a case of "use the latest technology as the best model".
--
David Longerbeam || The opinions expressed above
Pixar || are not to be construed as the
San Rafael, CA || opinions, stated or otherwise,
ucbvax!pixar!davel (415) 499-3600 || of Pixar.

------------------------------

Date: 5 Jun 87 04:45:45 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: symbol grounding and physical invertibility


John Cugini <Cugini@icst-ecf.arpa> asks:

> (1) I wonder why the grounding is to depend on invertibility rather than
> causation and/or resemblance?
>
> (2) Isn't it true that physically distinct
> kinds of light (eg. #1 red-wavelength and green-wavelength vs.
> #2 yellow-wavelength) can cause completely indistinguishable
> sensations (ie subjective yellow)? Is this not, then, a non-invertible,
> but nonetheless grounded sensation?

(1) According to my view, invertibility (and perhaps inversion)
captures just the relevant features of causation and resemblance that
are needed to ground symbols. The relation is between the proximal
projection (of a distal object) onto the sensory surfaces -- let's
call it P -- and an invertible transformation of that projection [I(P)].
The latter is what I call the "iconic representation." Note that the
invertibility is with the sensory projection, *not* the distal object. I
don't believe in distal magic. My grounding scheme begins at the
sensory surfaces ("skin and in"). No "wider" metaphysical causality is
involved, just narrow, local causality.

Of course the story is more complicated, because iconic
representations are not sufficient to ground a symbol referring to
an object. They're not even enough to allow a device to reliably pick
out the object and give it the right name (i.e., to categorize or
identify it). "Categorical representations" are needed next, but these
are no longer invertible into the sensory projection. They are
feature-filters preserving only the (proximal) properties of the object's
sensory projection that reliably distinguish the object (let's say
it's an "X") from the other objects that it can be confused with
(i.e., relevant "non-X's" in the particular context of confusable
alternatives sampled to date). Then finally the labels ("X," "non-X")
can be used as the primitive symbols in a (now *grounded*) symbol
system, to be combined and otherwise syntactically manipulated into
meaningful composite symbol-strings (descriptions).
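
A crude computational caricature of the two stages (entirely my own
gloss, not Harnad's): the iconic stage keeps an invertible transform
of the sensory projection P, while the categorical stage is a lossy
feature filter from which P cannot be recovered.

    import numpy as np

    rng = np.random.default_rng(0)

    P = rng.normal(size=4)        # proximal sensory projection

    # Iconic representation I(P): any invertible transform will do for
    # the caricature; invertibility means nothing about the
    # *projection* (not the distal object) is lost.
    A = rng.normal(size=(4, 4))   # nonsingular with probability 1
    iconic = A @ P
    assert np.allclose(np.linalg.inv(A) @ iconic, P)  # P recoverable

    # Categorical representation: a feature filter keeping only what
    # is needed to sort X from non-X in the contexts sampled so far.
    # Here a single threshold on one feature -- many projections
    # collapse onto the same label, so the map is not invertible.
    def categorize(projection, threshold=0.0):
        return "X" if projection[0] > threshold else "non-X"

    print(categorize(P))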

(2) Your question about indistinguishable but distinct colors mistakes my
grounding scheme for a "wide" metaphysical grounding scheme -- one
where the critical "causality" would be in the relation between distal
objects and our internal representations of them, whereas mine is a narrow,
skin-and-in grounding proposal. I have dubbed this view
"approximationism," and, without going into details (for which you may
want to consult the CP book or a reprint of the theoretical chapter),
the essence of the idea is that internal representations are
always approximate rather than "exact," in two important senses. The
iconic representation is approximate up to its grain of resolution
(the "jnd" or "just-noticeable-difference"): Think of it as a Principle
of the "Iconic Identity of Iconic Indiscernibles": What you can't tell
apart is the same to you.
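
In computational terms the jnd grain is just quantization (my gloss):
stimuli that land in the same resolution bin receive identical
internal codes.

    def iconic_code(x, jnd=0.1):
        # Quantize a stimulus magnitude to its jnd bin: values in the
        # same bin are indiscernible -- "the same to you".
        return round(x / jnd)

    print(iconic_code(0.42) == iconic_code(0.44))  # True: same bin
    print(iconic_code(0.42) == iconic_code(0.58))  # False: discriminable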

The categorical representations are approximate in an even more important
sense: The only features the category filter picks out are the ones
that are needed in order to identify the confusable alternatives in
the context you have sampled to date. Hence an X is just what your
current, approximate, provisional context-dependent feature-filter picks
out reliably from among the X's and Non-X's you have encountered so far:
"The Categorical Identity of Unsorted or Unsortable Members" (i.e.,
X's are identically X's unless and until reliably identified or identifiable
otherwise).

Since this is not a "wide" grounding, there is nothing oracular or
omniscient or metaphysical about what the X is that this picks out.
There is no God's-eye view from which you can say what X's "really"
are. There's just the mundane historical fact -- available to an
outside observer, if there is one -- about what the actual distal objects
were whose proximal projections you were sampling. Those furnished your
context, and your fallible, context-dependent representations will
always be approximate relative to those objects.

In conclusion, the only differences in the object that are reflected
in the iconic and categorical representations are the ones present in
the proximal projection of the alternatives sampled to date
(and preserved by the category-feature filter). The representations
are approximate (i.e., indifferent) with respect to any further distal
differences. Symbolic discourse may serve to further tighten the
approximation, but even that cannot be "exact," if for no other
reason than that there's always a tomorrow, in which the
context may be widened and the current representation may have to be
revised. -- But that's another story, and no longer concerns the
grounding problem but what's called "inductive risk."
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************
