AIList Digest           Saturday, 30 May 1987     Volume 5 : Issue 133 

Today's Topics:
Queries - Pattern Recognition Keynote Speaker Wanted &
Expert Systems for CAD & Approximate Structure Matching,
Philosophy - Complexity Theory and Philosophy,
Ethics - Text Critiquing and Eliza,
Humor - Artificial Stupidity,
Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 21 May 87 16:00:42 GMT
From: sun!sunburn!gtx!al@seismo.CSS.GOV (Al Filipski 839-0732)
Reply-to: al@gtx.UUCP (Al Filipski 839-0732)
Subject: keynote speaker wanted

I am looking for a keynote speaker for a Symposium on Pattern Recognition
and Machine Intelligence to be held in Wichita, Kansas in the Spring of
1988. The Symposium will be part of the annual meeting of the Southwest
and Rocky Mountain Division of the American Association for the Advancement
of Science and will be attended mostly by scientists from the Midwest.
The speaker should be someone with a national reputation
and a historical perspective on the field (PR/AI) and its relation to problems
of interest to scientists. I would appreciate any advice and suggestions as
to qualified speakers who might not be too expensive.

Richard Duda (co-author of the text "Pattern Classification and Scene
Analysis") has been recommended, but I can't find him. I tried SRI,
but he is not there. Does anyone know where he is now?

[Syntelligence; 1000 Hamlin Court; Sunnyvale, CA 94088.
Phone (408) 745-6666. -- KIL]


--------------------------------------------------------------
| Alan Filipski, GTX Corporation,                            |
| 2501 W. Dunlap,                                            |
| Phoenix, Arizona 85021, USA                                |
|                                                            |
| (602)870-1696                                              |
|                                                            |
| {ihnp4,cbosgd,decvax,hplabs,seismo}!sun!sunburn!gtx!al     |
--------------------------------------------------------------

------------------------------

Date: Fri, 29 May 87 09:15 EST
From: SPANGLER%gmr.com@RELAY.CS.NET
Subject: Wanted: Information on current work in Expert Systems for CAD

I am beginning a survey of the current status of work in applying Expert
Systems technology to Computer Aided Design. This survey is being done
for the Knowledge Engineering group at General Motors.

I would greatly appreciate any descriptions of or references to research
in this area, as well as information on what CAD expert systems and
expert system shells are available for purchase.

-- Scott Spangler, spangler@gmr.com
-- Advanced Engineering Staff, GM

------------------------------

Date: Thu, 28 May 87 09:28 EDT
From: Roland Zito-Wolf <RJZ@JASPER.PALLADIAN.COM>
Reply-to: Roland Zito-Wolf <RJZ%JASPER@LIVE-OAK.LCS.MIT.EDU>
Subject: references re (approximate) structure matching


I am looking for references regarding the matching of complex structures
(matching on semantic networks or portions of networks) such as arise in doing
retrieval operations on knowledge-bases so represented.
Since the general matching problem is most likely intractable, I'm
looking for approximate or incomplete techniques, such as partial match,
resource-bounded match, matches using preference rules, etc.
References which explore algorithms in detail, and implemented systems,
would be especially useful. For example, does anyone know of a
detailed description of the KRL matcher?

Information on the more general problem of query/data-retrieval from
semantic networks would also be useful.
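
To illustrate the kind of thing I mean by "partial match", here is a
toy sketch in Python -- the triple representation and the scoring
scheme are my own illustrative assumptions, not the KRL matcher or
any published system:

    # Toy partial matcher over a semantic network stored as
    # (subject, relation, object) triples. Exhaustive backtracking --
    # the general problem is intractable, which is exactly why
    # approximate strategies are interesting.

    def is_var(term):
        # Variables are written as strings starting with '?'.
        return isinstance(term, str) and term.startswith('?')

    def match_triple(pattern, fact, bindings):
        # Unify one pattern triple with one fact, extending bindings.
        new = dict(bindings)
        for p, f in zip(pattern, fact):
            if is_var(p):
                if p in new and new[p] != f:
                    return None
                new[p] = f
            elif p != f:
                return None
        return new

    def partial_match(query, network):
        # Best partial match: the binding satisfying the largest
        # fraction of the query's triples (unmatched triples may be
        # skipped, which is what makes the match "partial").
        best = (0.0, {})

        def extend(remaining, bindings, matched):
            nonlocal best
            score = matched / len(query)
            if score > best[0]:
                best = (score, bindings)
            if not remaining:
                return
            head, rest = remaining[0], remaining[1:]
            for fact in network:
                b = match_triple(head, fact, bindings)
                if b is not None:
                    extend(rest, b, matched + 1)
            extend(rest, bindings, matched)  # skip an unmatchable triple

        extend(list(query), {}, 0)
        return best

    network = [('clyde', 'isa', 'elephant'),
               ('elephant', 'color', 'gray')]
    query = [('?x', 'isa', 'elephant'), ('?x', 'color', 'white')]
    print(partial_match(query, network))  # (0.5, {'?x': 'clyde'})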

If there's sufficient interest, I'll post the results to the digest.
Thanks in advance.

Roland J. Zito-wolf
Palladian Software
4 Cambridge Center
Cambridge, Mass 02142
617-661-7171
RJZ%JASPER@LIVE-OAK.LCS.MIT.EDU

------------------------------

Date: 28 May 87 10:13:15 GMT
From: tedrick@ernie.Berkeley.EDU (Tom Tedrick)
Reply-to: tedrick@ernie.Berkeley.EDU (Tom Tedrick)
Subject: Complexity and Philosophy


>Lately I've been chatting informally to a philosopher/friend about
>common interests in our work. He was unfamiliar with the concept of the
>TIME TO COMPUTE consequences of facts. Furthermore, the ramifications of
>intractability (i.e., if P != NP is, as we all suspect, true) seemed to
>be new to my friend. The absolute consequences are hard to get across
>to a non-computer scientist; they always say "but computers are getting
>faster all the time..."
>
>I'm digging around for references in AI on these ideas. This isn't my area.
>Can anyone suggest some?

I believe the philosophical consequences of complexity theory are
enormous and that the field is wide open for someone with the
ambition to pursue it.
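
A back-of-the-envelope calculation makes the point concrete (the step
budgets below are made-up illustrative numbers):

    # Why "computers are getting faster" doesn't rescue an exponential
    # algorithm: the largest feasible n grows only logarithmically in
    # the number of steps we can afford.
    import math

    def max_feasible_n(budget_steps):
        # Largest n such that 2**n <= budget_steps.
        return int(math.log2(budget_steps))

    budget = 10**12                       # steps affordable today
    print(max_feasible_n(budget))         # 39
    print(max_feasible_n(budget * 1000))  # 49: a 1000x faster machine
                                          # buys only ~10 more on n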

------------------------------

Date: 29 May 87 15:48:17 GMT
From: tanner@osu-eddie.UUCP (Mike Tanner)
Reply-to: tanner@osu-eddie.UUCP (Mike Tanner)
Subject: Re: Philosophy, Artificial Intelligence and Complexity Theory


We have a paper to appear in this year's IJCAI called "On the
computational complexity of hypothesis assembly", by Allemang, Tanner,
Bylander, and Josephson.

Hypothesis assembly is a part of many problem-solving tasks. E.g., in
medical diagnosis the problem is to find a collection of diseases
which are consistent, plausible, and collectively explain the
symptoms.

Our paper analyzes a particular algorithm we've developed for solving
this problem. The algorithm turns out to be polynomial under certain
assumptions. But the problem of hypothesis assembly is shown to be
NP-complete, by reduction from 3-SAT, when those assumptions are
violated. In particular, if there are hypotheses which are
incompatible with each other it becomes NP-complete. (Another
well-known algorithm for the same problem, Reggia's generalized set
covering model, is shown to be NP-complete also, by reduction from
vertex cover.)

The interesting part of the paper is the discussion of what this
means. The bottom line is, people solve problems like this all the
time without apparent exponential increase in effort. We take this to
mean human problem solvers are taking advantage of features of the
domain to properly organize their knowledge and problem solving
strategy so that these complexity issues don't arise.

In the particular case discussed in the paper the problem is the
identification of antibodies in blood prior to giving transfusions.
There exist pairs of antibodies that, for genetic reasons, people
simply cannot have both of. So we're in the NP-complete case. But it
is possible to reason up front about the antibodies and typically rule
out one of each pair of incompatible antibodies (the hypotheses), and
then do the assembly of a complete explanation. This results in the
assembly being polynomial.
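
For concreteness, here is a toy sketch of the two-phase flavor of this
approach in Python. It is an illustrative reconstruction from the
description above, not our published algorithm, and the antibody names
and findings are made up:

    # Phase 1: up-front reasoning removes hypotheses that have been
    # independently ruled out, so no incompatible pair survives.
    def prune(hypotheses, ruled_out):
        return {h: f for h, f in hypotheses.items() if h not in ruled_out}

    # Phase 2: greedy assembly -- repeatedly add the hypothesis that
    # explains the most still-unexplained findings. Polynomial, but
    # only adequate under assumptions like those discussed above.
    def assemble(hypotheses, findings):
        if not hypotheses:
            return None
        explanation, uncovered = [], set(findings)
        while uncovered:
            best = max(hypotheses,
                       key=lambda h: len(hypotheses[h] & uncovered))
            gained = hypotheses[best] & uncovered
            if not gained:
                return None      # some finding cannot be explained
            explanation.append(best)
            uncovered -= gained
        return explanation

    # anti_X1/anti_X2 would be an incompatible pair, but suppose prior
    # reasoning has already ruled out anti_X2.
    hypotheses = {'anti_X1': {'f1', 'f2'},
                  'anti_X2': {'f2'},
                  'anti_Y':  {'f3'}}
    surviving = prune(hypotheses, ruled_out={'anti_X2'})
    print(assemble(surviving, {'f1', 'f2', 'f3'}))
    # ['anti_X1', 'anti_Y']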

If you send me your land mail address I can send you a copy of the
paper. Or you can wait for the movie. (Dean Allemang will present it
in Milan.)

-- mike

ARPA: tanner@ohio-state
UUCP: ...!cbosgd!osu-eddie!tanner

------------------------------

Date: Thu, 28 May 87 19:29:27 pdt
From: Ethan Scarl <ethan@BOEING.COM>
Subject: Text Critiquing and Eliza

The "grammar checker" discussions were stirring some old memories which I
finally pinpointed: a 1973 debate (centered on Joe Weizenbaum and Ken
Colby) over whether Eliza should be used for actual therapy.

The heart of the grammar checker issue is whether a computational package of
limited or doubtful competence should be given an authoritative role for some
vulnerable part of our population (young students, or confused adults). What
was most shocking in the Eliza situation (and may be true here as well) was
the quick and profound acceptance of a mechanical confidante by naive users.
Competent and experienced writers have no trouble discarding (or
extrapolating from) Rightwriter's sillier outputs; the problem is with
inexperienced or disadvantaged users. Many of us were (are) enraged at this
automated abuse as "absurd, irresponsible, and even inhuman," only to be
stopped short by a sobering argument: "if competent human help is scarce,
then isn't this better than nothing?"

The Rightwriter discussion summarizes/coheres rather well: Such systems are
suggestive aids for competent writers and may be useful in tutoring the
incompetent. Such systems will be unsuitable as replacement tutors for some
time to come, but may be worthwhile (in time and effort expended for results
achieved) as aids to be used by a competent tutor or under the tutor's
supervision.

We are in deep trouble if there are no competent humans available to help
others who need it. But the secondary question: "
Is sitting in front of a
CRT better than sitting in a closet?" can at least be tested empirically.

In the Rightwriter case, I would expect that most students will quickly
understand the program's analytic limitations after they are pointed out
by a teacher. However, the human teacher's perspective is essential.

------------------------------

Date: Thu, 28 May 87 09:35:48 pdt
From: Eugene Miya N. <eugene@ames-pioneer.arpa>
Subject: Re: Humor - Artificial Life: actually artificial stupidity

In article <23@aimmi.UUCP> Gilbert Cockton writes:
>
>Nah - that's not all the way. We also need
>
> 3. Artificial reasoning.
>
>This is when people who know nothing about epistemology (philosophical and
>anthropological/sociological aspects) or psychology lock themselves away on
>an AI project and make things up about how people reason. I may be
>old-fashioned, but I do miss empirical substance and conceptual coherence
>:-) :-):-) :-) :-):-) :-) :-):-) :-) :-):-) :-) :-):-) :-) :-):-) :-) :-):-)

Permit me to add what I mentioned to John Pierce when he was a Chief
Engineer over me:

4. Artificial stupidity

And I got the comment about there being enough natural stupidity in the
world.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"
You trust the `reply' command with all those different mailers out there?"
"
Send mail, avoid follow-ups. If enough, I'll summarize."
{hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

------------------------------

Date: 28 May 87 05:46:28 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


Anders Weinstein of BBN wrote:

> a point that I thought was clearly made in our earlier
> discussion of the A/D distinction: loss of information, i.e.
> non-invertibility, is neither a necessary nor sufficient condition for
> analog to digital transformation.

The only point that seems to have been clearly made in the sizable discussion
of the A/D distinction on the Net last year (to my mind, at least) was that no
A/D distinction could be agreed upon that would meet the needs and
interests of all of the serious proponents and that perhaps there was
an element of incoherence in all but the most technical and restricted
of signal-analytic candidates.

In the discussion to which you refer above (a 3-level bottom-up model
for grounding symbolic representations in nonsymbolic -- iconic and
categorical -- representations) the issue was not the A/D
transformation but A/A transformations: isomorphic copies of the
sensory surfaces. These are the iconic representations. So whereas
physical invertibility may not have been more successful than any of
the other candidates in mapping out a universally acceptable criterion
for the A/D distinction, it is not clear that it can be faulted as a
criterion for physical isomorphism.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 29 May 87 00:46:47 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem

Replying to my claim that
>> ...loss of information, i.e.
>> non-invertibility, is neither a necessary nor sufficient condition for
>> analog to digital transformation.

in article <786@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>The only point that seems to have been clearly made in the sizable discussion
>of the A/D distinction on the Net last year (to my mind, at least) was that no
>A/D distinction could be agreed upon ...
>
>In the discussion to which you refer above ... the issue was not the A/D
>transformation but A/A transformations: isomorphic copies of the
>sensory surfaces. These are the iconic representations. So whereas
>physical invertibility may not have been more successful than any of
>the other candidates in mapping out a universally acceptable criterion
>for the A/D distinction, it is not clear that it can be faulted as a
>criterion for physical isomorphism.

Well, the point is just the same for the A/A or "physically isomorphic"
transformations you describe. Although the earlier discussion admittedly did
not yield a positive result, I continue to believe that it was at least
established that invertibility is a non-starter: invertibility has
essentially *nothing* to do with the difference between analog and digital
representation according to anybody's intuitive use of the terms.

The reason I think this is so clear is that for any one of the possible
transformation types -- A/D, A/A, D/A, or D/D -- one can find paradigmatic
examples in which invertibility either does or does not obtain. A blurry
image is uncontroversially an analog or "iconic" representation, yet it is
non-invertible; a digital recording of sound in the audible range is surely
an A/D transformation, yet it is completely invertible, etc. All the
invertibility or non-invertibility of a transformation indicates is whether
or not the transformation preserves or loses information in the technical
sense. But loss of information is of course possible (and not necessary) in
any of the 4 cases.
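
The asymmetry is easy to exhibit in toy form (the "signals" below are
just short number sequences; nothing hinges on real images or audio):

    # Invertibility is orthogonal to the analog/digital character of a
    # transformation. Toy illustration.

    def blur(signal):
        # "Analog-style" smoothing: average adjacent samples.
        # Many-to-one, hence non-invertible.
        return [(a + b) / 2 for a, b in zip(signal, signal[1:])]

    def digitize(signal, levels):
        # A trivial A/D step over a known, discrete alphabet: replace
        # each sample by its index in a code table. One-to-one, hence
        # completely invertible.
        return [levels.index(s) for s in signal]

    def undigitize(codes, levels):
        return [levels[c] for c in codes]

    print(blur([1, 3, 1]), blur([2, 2, 2]))
    # [2.0, 2.0] [2.0, 2.0] -- two different inputs, one output

    levels = [-1, 0, 1]
    codes = digitize([1, -1, 0, 1], levels)
    print(undigitize(codes, levels))
    # [1, -1, 0, 1] -- the original is fully recovered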

I admit I don't know what the qualifier means in your criterion of
"physical invertibility"; perhaps this alters the case.

Anders Weinstein

------------------------------

Date: 29 May 87 15:27:31 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Laboratories, Inc.,
Cambridge, MA writes:

> invertibility has essentially *nothing* to do with the difference
> between analog and digital representation according to anybody's
> intuitive use of the terms... A blurry image is uncontroversially
> an analog or "
iconic" representation, yet it is non-invertible;
> a digital recording of sound in the audible range is surely an A/D
> transformation, yet it is completely invertible. [I]nvertibility...
> [only] indicates whether... the transformation preserves or loses
> information in the technical sense. But loss of information is...
> possible in any of the 4 cases... A/D, A/A, D/A, D/D...
> I admit I don't know what the qualifier means in your criterion
> of "
physical invertibility"; perhaps this alters the case.

I admit that the physical-invertibility criterion is controversial and
in the end may prove to be unsatisfactory in delimiting a counterpart
of the technical A/D distinction that will be useful in formulating
models of internal representation in cognitive science. The underlying
idea is this:

There are two stages of A/D even in the technical sense: signal
quantization (making a continuous signal discrete) and symbolization
(assigning names and addresses to the discrete "chunks"). Unless the
original signal is already discrete, the quantization phase involves a
loss of information. Some regions of input variation will not be retrievable
from the quantized image. The transformation is many-to-fewer instead
of one-to-one. A many-to-few mapping cannot be inverted so as to
recover the entire original signal.
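
In toy form, the two stages and the information loss in the first
might look as follows (an illustrative sketch only, not a model of any
real transducer):

    # Stage 1, quantization: map each continuous sample to the nearest
    # multiple of `step`. Distinct inputs can collide, so the mapping
    # is many-to-fewer and cannot be inverted.
    def quantize(signal, step=1.0):
        return [round(x / step) * step for x in signal]

    # Stage 2, symbolization: give each distinct chunk a name. This
    # stage is one-to-one and invertible, given the code table.
    def symbolize(quantized):
        table = {v: 'chunk_%d' % i
                 for i, v in enumerate(sorted(set(quantized)))}
        return [table[v] for v in quantized], table

    a, b = [0.9, 1.2, 2.1], [1.1, 0.8, 1.9]
    print(quantize(a), quantize(b))
    # [1.0, 1.0, 2.0] [1.0, 1.0, 2.0] -- two signals, one quantized image

    symbols, table = symbolize(quantize(a))
    print(symbols)  # ['chunk_0', 'chunk_0', 'chunk_1']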

Now I conjecture that it is this physical invertibility -- the possibility
of recovering all the original information -- that may be critical in
cognitive representations. I agree that there may be information loss in
A/A transformations (e.g., smoothing, blurring or loss of some
dimensions of variation), but then the image is simply *not analog in
the properties that have been lost*! It is only an analog of what it
preserves, not what it fails to preserve.

A strong motivation for giving invertibility a central role in
cognitive representations has to do with the second stage of A/D
conversion: symbolization. The "symbol grounding problem" that has
been under discussion here concerns the fact that symbol systems
depend for their "meanings" on only one of two possibilities: One is
an interpretation supplied by human users -- "`Squiggle' means `animal' and
`Squoggle' means `has four legs'" -- and the other is a physical, causal
connection with the objects to which the symbols refer. The first
source of "meaning" is not suitable for cognitive modeling, for
obvious reasons (the meaning must be intrinsic and self-contained, not
dependent on human mental mediation). The second has a surprising
consequence, one that is either valid and instructive about cognitive
representations (as I tentatively believe it is), or else a symptom of
the wrong-headedness of this approach to the grounding problem, and
the inadequacy of the invertibility criterion.

The surprising consequence is that a "dedicated system" -- one that is
hard-wired to its transducers and effectors (and hence their
interactions with objects in the world) -- may be significantly different
from the very *same* system as an isolated symbol-manipulating module,
cut off from its peripherals -- different in certain respects that could be
critical to cognitive modeling (and cognitive modeling only). The dedicated
system can be regarded as "analog" in the input signal properties that are
physically recoverable, even if there have been (dedicated) "digital" stages
of processing in between. This would only be true of dedicated systems, and
would cease to be true as soon as you severed their physical connection to
their peripherals.

This physical invertibility criterion would be of no interest whatever
to ordinary technical signal processing work in engineering. (It may
even be a strategic error to keep using the engineering "A/D"
terminology for what might only bear a metaphorical relation to it.)
The potential relevance of the physical invertibility criterion
would only be to cognitive modeling, especially in the constraint that
a grounded symbol system must be *nonmodular* -- i.e., it must be hybrid
symbolic/nonsymbolic.

The reason I have hypothesized that symbolic representations in cognition
must be grounded nonmodularly in nonsymbolic representations (iconic and
categorical ones) is based in part on the conjecture that the physical
invertibility of input information in a dedicated system may play a crucial
role in successful cognitive modeling (as described in the book under
discussion: "
Categorical Perception: The Groundwork of Cognition,"
Cambridge University Press 1987). Of course, selective *noninvertibility*
-- as in categorizing by ignoring some differences and not others --
plays an equally crucial complementary role.

The reason the invertibility must be physical rather than merely
formal or conceptual is to make sure the system is grounded rather
than hanging by a skyhook from people's mental interpretations.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************
