AIList Digest            Monday, 20 Apr 1987      Volume 5 : Issue 101 

Today's Topics:
Comments - Text Critiques & Statistical Expert Systems & Demons,
AI Tools - Demons in Simulated Annealing Optimization &
Neural Networks Survey Paper & Multilayer Connectionist Theory

----------------------------------------------------------------------

Date: Thu 16 Apr 87 17:45:55-PDT
From: PAT <HAYES@SPAR-20.ARPA>
Reply-to: HAYES@[128.58.1.2]
Subject: Re: AIList Digest V5 #95

Let me briefly add a seconding voice to Linda Means' comments on the horrible
output of the style-criticising programs illustrated a while ago. That
people should suggest using such things to influence children almost makes
me agree with Weizenbaum. The thesis behind AI is that intelligence is
computation, but not TRIVIAL computation. Obviously nothing that could run
on a PC could possibly do a good job of such a subtle and information-rich
task as critiquing English style, and these things do a TERRIBLE job.
But perhaps the worst aspect of using them as pedagogical tools is not how
well they do the job, but that they so obviously work by applying some simple
and superficial rules in a context-insensitive fashion. Any kid who was
'taught' by one of these would quickly learn these rules. A few experiences
like this, though, and (s)he would learn that most problems are solved by
applying a few superficial rules without any need for deeper thinking, which is
a worse and more dangerous lesson. I'm all for the application of AI to
education, but let's not get it confused with the thoughtless use of mediocre
code to subvert education.
Pat Hayes

------------------------------

Date: Fri, 17 Apr 87 00:52:10 EDT
From: lubinsky@topaz.rutgers.edu (David Lubinsky)
Subject: Re: Statistical Expert Systems

I have been working in approximately this field for the last year
at Bell Labs in Murray Hill with Bill Gale and Daryl Pregibon, two of the
most active researchers in the field. Hand is probably aware of Bill's recent
book 'AI in Statistics', where you can find a whole host of people working
in this area.

Personally, I have been working on a system called TESS (Tree-based
Environment for Statistical Strategy). TESS allows an expert statistician
to define, implement and critique strategies for particular data analysis
tasks. So far we have implemented two strategies, one for analysing
univariate batches, and one for bivariate analysis.

I can send copies of tech reports if you want; or, if you can wait, look
out for a paper called "Data Analysis as Search" in a special statistical
computing edition of Technometrics in the fall.

David

------------------------------

Date: Sat 18 Apr 87 08:46:45-PST
From: Oscar Firschein <FIRSCHEIN@IU.AI.SRI.COM>
Subject: re demons

The Selfridge citation should be 1958 not 1858.
Oscar

------------------------------

Date: Mon 13 Apr 87 10:03:12-PST
From: Stephen Barnard <BARNARD@IU.AI.SRI.COM>
Subject: demons

Everyone is familiar with Maxwell's demon, the tiny sprite that
reverses the increase of entropy dictated by the 2nd Law of
Thermodynamics. He sits at a trapdoor between two chambers containing
a gas in equilibrium (i.e., with maximum entropy) and segregates the
molecules into low- and high-energy populations, thereby moving the
system away from equilibrium and *decreasing* its entropy. If
Maxwell's demon could exist (and be tamed), we could build perpetual
motion machines. The thermal gradient between the chambers could
drive a heat engine.

Brillouin killed the demon by considering the connection between
thermodynamics and information theory. The demon would have to
acquire information about the position and velocity of the molecules
(for example, by bouncing photons off them), but this information would
be gained only at a cost that would balance the decrease in entropy
due to its actions.

Demons of another kind are still alive and well in physics, however.
Creutz described a Monte Carlo algorithm that simulates a system in
thermal equilibrium, much like the Metropolis algorithm used in
simulated annealing. The difference is that Creutz samples from the
*microcanonical ensemble*, in which the system is considered to be
thermally insulated (constant energy). When the state of the system
changes randomly, its potential energy usually changes, and this
difference is absorbed or emitted by a demon, which carries kinetic
energy.

What does this have to do with AI? Simulated annealing is an
effective optimization technique. It's been used for several vision
problems. Creutz's algorithm can be used in a new variant of
simulated annealing that is simpler, more efficient, and more easily
controlled than the standard Metropolis version.
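To make the contrast with Metropolis concrete, here is a minimal sketch of
Creutz-style demon dynamics on a toy one-dimensional Ising chain. The chain,
the function name, and the demon's energy cap are my own illustrative
assumptions, not taken from the digest or from Creutz's paper. The key point
is that no temperature parameter or random acceptance test appears: a move is
accepted only when the demon can absorb or supply the energy difference, so
the combined system-plus-demon energy stays constant.

```python
import random

def demon_monte_carlo(n=50, sweeps=200, demon_max=4.0, seed=0):
    """Toy microcanonical ("demon") Monte Carlo on a 1-D Ising chain.

    Illustrative sketch only: the demon carries a bounded store of
    kinetic energy in place of the Metropolis exp(-dE/T) acceptance test.
    """
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    demon = demon_max  # demon's current kinetic energy

    for _ in range(sweeps):
        for i in range(n):
            # Energy change from flipping spin i (periodic boundaries,
            # nearest-neighbour ferromagnetic coupling J = 1).
            left, right = spins[(i - 1) % n], spins[(i + 1) % n]
            dE = 2 * spins[i] * (left + right)
            # Accept only if the demon can pay for (or absorb) dE while
            # staying within [0, demon_max]; total energy is conserved.
            if 0.0 <= demon - dE <= demon_max:
                spins[i] = -spins[i]
                demon -= dE

    return spins, demon
```

In an annealing variant one would gradually lower the demon's energy cap
instead of lowering a temperature; since the update needs no exponential or
random-number draw per acceptance, it is cheap and easy to control.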

references:

Brillouin, L., Science and Information Theory, Academic Press, New
York, 1962.

Creutz, M., Microcanonical Monte Carlo simulation, Physical Review
Letters, vol. 50, no. 19, May 9, 1983, pp. 363-373.

Barnard, S., Stereo matching by hierarchical, microcanonical
annealing, SRI technical note 414 (to appear in Proc. IJCAI87).

------------------------------

Date: 13-APR-1987 15:46
From: SIMPSONP@COD.NOSC.MIL
Subject: ANS Survey Paper

[Forwarded from the Neuron Digest by Laws@STRIPE.SRI.COM.]


A Survey of Artificial Neural Systems

Patrick K. Simpson

Unisys
San Diego Systems Engineering Center
4455 Morena Boulevard
San Diego, CA 92117
619/483-0900


Abstract

This paper is a survey of the field of Artificial
Neural Systems (ANSs). ANSs have a large number of highly
interconnected processing elements that demonstrate the
ability to learn and generalize from presented patterns.
ANSs represent a possible solution to previously difficult
problems in areas such as speech processing and natural
language understanding. This paper presents a brief history
of ANSs, examples of ANS models, and areas where the
technology has been applied. Also discussed are the connection
between Artificial Intelligence (AI) and ANS, computer
architectures that are evolving from this field, and two ANS
algorithms.

[Copies are available from simpson@cod.nosc.mil or the address
listed above - MTG]

------------------------------

Date: 3-APR-1987 14:46
From: @C.CS.CMU.EDU:JOSE@LATOUR.ARPA
Subject: Multilayer Connectionist Theory

[Forwarded from the Neuron Digest by Laws@STRIPE.SRI.COM.]


Knowledge Representation in Connectionist Networks


Stephen Jose Hanson and David J. Burr

Bell Communications Research
Morristown, New Jersey 07960

Abstract

Much of the recent activity in connectionist models stems
from two important innovations. First, a layer of
independent, modifiable units (hidden layer) that can model
the statistics of the domain and in turn perform significant
associative mapping between stimulus pairs. Second, a
learning rule that dynamically creates representation in the
hidden layer based upon constraints from a teacher
signal. Both Boltzmann machine and back-propagation models
share these two innovations, which, interestingly, were
apparently well known to Rosenblatt [14]. Although
many complex perceptual and cognitive models have now
been constructed using these methods, the exact
computational nature of the networks, in terms of their
clustering, partitioning, and generalization behavior, is not
well understood.

In this paper we present a uniform view of the
computational power of multi-layered learning (MLL)
models. We show that MLL models represent knowledge by
applying Boolean combination rules to partition the problem
space into regions. A by-product of these rules is that
knowledge is represented as distributed patterns of
activation in the hidden layers. Their partitioning
capability is related to both the neural device model and
the network complexity in terms of numbers and layers of
neurons. The device model determines the shape of an
elementary boundary segment and the network determines how
to combine the segments into region boundaries.

For continuous problem spaces two hidden layers are
sufficient to form arbitrary regions (or Boolean functions)
in the space, and for binary-valued spaces a single layer
suffices. Finally we show that networks can produce
probabilistic combination rules which closely approximate
the Bayes risk.
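The partitioning claim can be illustrated with a tiny hand-wired example.
The weights and function names below are my own illustrative choices, not
from the paper: two linear-threshold hidden units each cut the plane with a
line, and the output unit applies a Boolean combination of those cuts to
carve out the region between the lines, which happens to solve XOR.

```python
def step(x):
    """Linear-threshold (Heaviside) unit: fires iff its net input > 0."""
    return 1 if x > 0 else 0

def xor_net(x, y):
    """Two hidden threshold units cut the plane with the lines
    x + y = 0.5 and x + y = 1.5; the output unit computes
    h1 AND NOT h2, i.e. the band between the two cuts."""
    h1 = step(x + y - 0.5)   # fires when x + y > 0.5
    h2 = step(x + y - 1.5)   # fires when x + y > 1.5
    return step(h1 - h2 - 0.5)
```

Each hidden unit contributes one elementary boundary segment (here a straight
line, because the units are linear-threshold devices), and the output layer
combines the segments into a region boundary, matching the abstract's account
of how the device model and the network structure divide the labor.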


You can get a copy of this paper by replying to this message
or writing to jose@bellcore or djb@bellcore, comments
appreciated.

------------------------------

End of AIList Digest
********************
