Neuron Digest   Thursday, 12 Aug 1993                Volume 11 : Issue 46 

Today's Topics:
Administrivia - We're back
Whence cybernetics?
Re: Whence cybernetics?
Re: Whence cybernetics?
Re: Whence cybernetics?
Research posts at U. of Central England
PC-Based Neural Software
Basins of Attraction of Cellular Automata
CASE project available for students
Training of Nets and Multiple Solutions


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: Administrivia - We're back
From: "Neuron-Digest Moderator, Peter Marvit" <neuron@cattell.psych.upenn.edu>
Date: Wed, 11 Aug 93 13:30:57 -0500

Dear readers,

Again, I thank you for your patience during my slightly longer than
expected "holiday." The Neuron Digest resumes with this issue and will
be coming rather thickly in the next few weeks. Most of the issues will
be, as I previously wrote, long-overdue paper and technical report
announcements as well as conference announcements.

However, I will start off with the usual set of discussion items
from you readers, plus a short set of postings about cybernetics from a
related list. Enjoy...

-Peter

: Peter Marvit, Neuron Digest Moderator <neuron-request@psych.upenn.edu> :
: Courtesy of the Psychology Department, University of Pennsylvania :
: 3815 Walnut St., Philadelphia, PA 19104 w:215/898-6274 h:215/387-6433 :



------------------------------

Subject: Whence cybernetics?
From: gal2@kimbark.uchicago.edu (Jacob Galley)
Organization: University of Chicago Computing Organizations
Date: 04 Jul 93 03:45:55 +0000

I have been studying linguistics and cognitive science type stuff for
about two years in college, and I am just now becoming aware of the
long line of cybernetic thought which runs parallel to "good
old-fashioned" symbolic AI. Why is this work now (and apparently
always since the schism) more obscure than work done in symbolic,
serial cognitive modelling?

I quote from _Foundations of Neural Networks_ by Tarun Khanna
(Addison-Wesley 1990):

#This continuous/symbolic dichotomy gave rise to and was then
#reinforced by other concurrently existing dichotomies. The
#cyberneticians dealt primarily with pattern recognition and were
#concerned with developing systems that learned. The AI community, on
#the other hand, concentrated on problem solving and therefore on
#creating systems that performed specific tasks demanding intelligence,
#for example, theorem-proving and game-playing. Predictably, each of
#these groups of tasks was easier to tackle in its specific class of
#systems. For example, it is easier to tackle a game-playing exercise
#in a programming system than in a continuous system. Simultaneously,
#cyberneticians were preoccupied with the neurophysiology and the AI
#community with psychology. While the former connection is easier to
#understand, the latter arose primarily because it is easier to
#postulate psychologically meaningful results using programming systems
#than it is to postulate physiological ones. Their preoccupation with
#neurophysiology led cyberneticians to deal primarily with highly
#parallel systems. The programming systems employed by the AI community
#were, on the other hand, inherently serial. (page 4)

[Khanna goes on to portray connectionism as a new hybrid between the
two traditions.]

I am amazed that this alternative to symbolic AI is so obscure. Why
are (symbolic) artificial intelligence classes, theories, and opinions
so easy to find, while cybernetic thought has faded away and become
esoteric?

There are lots of reasons I can think of which seem reasonable, but I
don't know enough of the history to be sure:

* Cybernetic theory is more abstract, difficult, vague. (No idea yet
if this is even true.)

* The "Chomskyan Revolution" in linguistics and/or the "Cognitive
Revolution" in psychology tipped the scales in the symbolic AI
tradition's favor. (No idea what the causal relationships are
between the three symbolic schools, if any can be clearly
attributed.)

* The foundations of serial programming caught on before the
foundations of parallel programming (which we are still hammering
out today, imho), so applications of symbolic AI were more
successful, more glamorous, sooner.

Does anyone have any thoughts on this?

Jake.
--
* What's so interdisciplinary about studying lower levels of thought process?
<-- Jacob Galley * gal2@midway.uchicago.edu

------------------------------

Subject: Re: Whence cybernetics?
From: gal2@kimbark.uchicago.edu (Jacob Galley)
Organization: University of Chicago
Date: 04 Jul 93 18:01:20 +0000


I received the following reply, and figured I might as well post it.
(I've added comp.ai.neural-nets to the list, since I now know it exists.)

---------

Date: Sun, 4 Jul 1993 00:41:16 -0700 (PDT)
From: Melvin Rader <radermel@u.washington.edu>
Subject: Re: Whence cybernetics

First off, I'm replying by mail rather than posting because I can't
post yet: I just found this modem in my parents' computer a couple of
days ago, and I haven't yet figured out how to deal with my system's
editor for posting to Usenet.

Anyway, in response to your question:

By cybernetics, I take you to mean the study of neural networks
and connectionist models of artificial intelligence. By no means is it
dead, or even all that obscure. As an undergraduate at the Evergreen
State College in Olympia, WA this year I took four credits of
'Connectionism' and another four of programming of neural networks. I
believe there's a newsgroup devoted to neural networks as well.
Seymour Papert has written a whimsical account of the history of
network vs. symbolic approaches to artificial intelligence:

"Once upon a time two daughter sciences were born to the new
science of cybernetics. One sister was natural, with features inherited
from the study of the brain, from the way nature does things. The other
was artificial, related from the beginning to the use of computers. Each
of the sister sciences tried to build models of intelligence, but from
very different materials. The natural sister built models (called neural
networks) out of mathematically purified neurones. The artificial sister
built her models out of computer programs.
"In their first bloom of youth the two were equally successful and
equally pursued by suitors from other fields of knowledge. They got on
very well together. Their relationship changed in the early sixties when
a new monarch appeared, one with the largest coffers ever seen in the
kingdom of the sciences: Lord DARPA, the Defence Department's Advanced
Research Projects Agency. The artificial sister grew jealous and was
determined to keep for herself the access to Lord DARPA's research funds.
The natural sister would have to be slain.
"The bloody work was attempted by two staunch followers of the
artificial sister, Marvin Minsky and Seymour Papert, cast in the role of
the huntsman sent to slay Snow White and bring back her heart as proof of
the deed. Their weapon was not the dagger but the mightier pen, from which
came a book - Perceptrons ..."

Minsky and Papert's book did effectively kill further research
into neural networks for about two decades. The thrust of the book
was that with the learning algorithms that had been developed then, neural
networks could only learn linearly separable problems, which are always
simple (this was proved mathematically). Networks existed which could
solve more complicated problems, but they had to be "hard wired" - the
person setting up the network had to set it up in such a way that the network
already "knew" everything that it was going to be tested on; there was
no way for such a network to learn. (The book also raised some other,
more philosophical concerns.) Since learning was basically the only
advantage neural network models had over symbolic models (aside from an
aesthetic appeal due to their resemblance to natural models), research into
neural networks died out. (Also, NN research is associated
philosophically with behaviorism - NNs solve through association. When
behaviorism died, it also helped bring down the NN field.)
However, in the late 70's (I think) the 'backpropagation training
algorithm' was developed. Backpropagation allows the training of neural
networks which are powerful enough to solve non-linearly separable
problems, although it has no natural equivalent. With the development of
backpropagation, and with the association of several big names with the
field, research into network models of artificial intelligence revived.
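The gap described above can be seen in a few lines of code. The following is my own minimal sketch, not part of the original post: Rosenblatt's perceptron rule fails on XOR, the textbook non-linearly-separable problem, while a small two-layer sigmoid net trained by plain backpropagation solves it.

```python
import numpy as np

# XOR truth table: the canonical problem that no single-layer
# perceptron can learn, since no line separates the two classes.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# --- Rosenblatt perceptron rule: provably cannot reach 100% on XOR ---
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, ti in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += (ti - pred) * xi
        b += (ti - pred)
perceptron_acc = ((X @ w + b > 0).astype(float) == y).mean()

# --- Two-layer sigmoid net trained by backpropagation ---
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # hidden activations
    out = sigmoid(h @ W2 + b2)[:, 0]     # network output
    d_out = (out - y) * out * (1 - out)  # squared-error output delta
    d_h = np.outer(d_out, W2[:, 0]) * h * (1 - h)  # hidden delta
    W2 -= lr * (h.T @ d_out)[:, None]
    b2 -= lr * d_out.sum()
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mlp_out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0]
print("perceptron accuracy:", perceptron_acc)  # always < 1.0 on XOR
print("backprop outputs:", np.round(mlp_out, 2))
```

The perceptron's failure is guaranteed by the linear-separability argument, regardless of training time; the hidden layer is what lets the second net carve out the non-convex decision region.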
I understand the term 'Connectionism' to apply to a field which
draws from neural network research and research into the brain. In
contrast to whatever book you were quoting from, I understand
connectionist thought to be at odds with the symbolic approach to
artificial intelligence. A good book to read on the subject is
Connectionism and the Mind by Bechtel and Abrahamsen. It is a good
introduction to connectionism and goes into the philosophy behind it all,
although some of the math is off.

--Kimbo



--
* What's so interdisciplinary about studying lower levels of thought process?
<-- Jacob Galley * gal2@midway.uchicago.edu

------------------------------

Subject: Re: Whence cybernetics?
From: wbdst+@pitt.edu (William B Dwinnell)
Organization: University of Pittsburgh
Date: 04 Jul 93 22:08:00 +0000


The passage you posted concerning cybernetics is somewhat misleading. The
term "cybernetics" was coined by Norbert Wiener in the 1940's, defining it
as "the entire field of control and communication theory, whether in the
machine or in the animal". In its narrowest sense, as Wiener wrote about
it, cybernetics might be thought of as a precursor to modern information
theory (he mentions Shannon, by the way, in his book "Cybernetics"), control
theory (including what we now call robotics), and, to some degree, prediction.
In the most general sense, "cybernetics" may be construed as covering all
of computer science, and more. It is common today for people to present
cybernetics in light of AI or robotics, but there is no reason to put
this special slant on cybernetics. Probably the most accurate short
definition of "cybernetics", using contemporary terminology, would be:
a proto-science concerning information theory and communication theory.

------------------------------

Subject: Re: Whence cybernetics?
From: minsky@media.mit.edu (Marvin Minsky)
Organization: MIT Media Laboratory
Date: 06 Jul 93 06:05:52 +0000

>Date: Sun, 4 Jul 1993 00:41:16 -0700 (PDT)
>From: Melvin Rader <radermel@u.washington.edu>
>Subject: Re: Whence cybernetics

> By cybernetics, I take you to mean the study of neural networks
>and connectionist models of artificial intelligence. By no means is it
>dead, or even all that obscure. As an undergraduate at the Evergreen
>State College in Olympia, WA this year I took four credits of
>'Connectionism' and another four of programming of neural networks. I

> Minsky and Papert's book did effectively kill further research
>into neural networks for about two decades. The thrust of the book
>was that with the learning algorithms that had been developed then, neural
>networks could only learn linearly separable problems, which are always
>simple (this was proved mathematically). Networks existed which could
>solve more complicated problems, but they had to be "hard wired" - the
>person setting up the network had to set it up in such a way that the network
> etc.

You'd better give those credits back. The book (1) explained some
theory of which geometric problems are linearly separable (and the
results were not notably simple), (2) derived lower bounds on how the
size of networks and coefficients grows with the size of certain
problems, and (3) showed that these results have nothing whatever to
do with the learning algorithms involved, because they only discuss
the existence of suitable networks.

There was not so much research in neural networks between 1969, when
the book was published, and around 1980 or so. This may have been
partly because we showed that feedforward nets are impractical for
various kinds of invariant recognitions on large retinas, but they are
useful for many other kinds of problems. The problem was that too
many people propagated absurdly wrong summaries of what the book said
-- as in the above account. There were some gloomy remarks near the
end of the book about the unavailability of convergence guarantees for
multilayer nets (as compared with the simple perceptron procedure,
which always converges for separable patterns), and this might have
discouraged some theorists. There still are no such guarantees for
learning algorithms of practical size -- but for many practical
purposes, no one cares much about that.
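For readers unfamiliar with the convergence guarantee mentioned above, a standard statement of the result (Novikoff's perceptron convergence theorem, added here for reference and not part of the original post) bounds the number of mistakes on separable data:

```latex
% Novikoff's bound: training pairs (x_i, y_i) with y_i \in \{-1, +1\}.
\[
\exists\, w^{*},\ \|w^{*}\| = 1,\ \gamma > 0:\
y_i \langle w^{*}, x_i \rangle \ge \gamma \ \text{and}\
\|x_i\| \le R \ \text{for all } i
\;\Longrightarrow\;
\#\text{mistakes} \le \left( \frac{R}{\gamma} \right)^{2}.
\]
```

So on separable patterns the procedure halts after finitely many updates, whatever the presentation order; the point above is that no comparable guarantee exists for multilayer learning procedures.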

------------------------------

Subject: Research posts at U. of Central England
From: "r.kelly" <DFCA4601G@UNIVERSITY-CENTRAL-ENGLAND.AC.UK>
Date: Tue, 20 Jul 93 11:04:57


University of Central England
Neural Net Applications: Research Posts


Monitoring of Gas Cylinder Production:

This programme is concerned with the use of ANNs to identify a
number of problems commonly found during the production of aluminium
gas cylinders. ANNs will be trained to recognise and classify
problems from historical process data. Later stages of the work
will involve real-time data.

This is a 3 year SERC studentship that will be converted to a CASE
Award. Applicants will be expected to register for MPhil/PhD.


Monitoring and Control of Aluminium Spot-Welding:

This programme is concerned with the use of ANNs for the monitoring
and control of resistance spot-welding of aluminium. The work is
primarily concerned with the use of ANNs to provide a novel method of
NDT for spot-welds. Initial work will concentrate on historical
process data but it is anticipated that the later stages of the work
will involve enhancement to real-time operation. This program is
funded by Alcan International.

Although initially planned as a ONE year Research Assistantship, it
would be possible to convert the post into a TWO or THREE year
studentship if preferred, to allow registration for MPhil/PhD.


Candidates for both posts should have an interest in ANNs and good
mathematical skills, ideally with a knowledge of MATLAB.

Enquiries to: Dr Keith Osman, Manufacturing Technology, UCE Birmingham
tel: 021-331-5662 fax: 021-331-6315 email: k.a.osman@uk.ac.uce





------------------------------

Subject: PC-Based Neural Software
From: "R.S.Habib" <R.S.Habib@lut.ac.uk>
Date: Thu, 22 Jul 93 16:12:56 +0000

Hello There,


Can anybody advise me on how I can get a list of available PC-based
neural network software tools, preferably (but not necessarily)
backprop-based? Any advice is appreciated.


Thank you


R. Istepanian
E-mail : R.S.Habib@lut.ac.uk


------------------------------

Subject: Basins of Attraction of Cellular Automata
From: jboller@is.morgan.com (John Boller)
Date: Fri, 30 Jul 93 15:37:18 -0500

Hi,
I am looking for references comparing the basins of attraction of
cellular automata and neural networks. I would greatly appreciate it
if anyone could point me in the right direction.

thanks, john boller

email: jboller@is.morgan.com



------------------------------

Subject: CASE project available for students
From: John Shawe-Taylor <john@dcs.rhbnc.ac.uk>
Date: Tue, 03 Aug 93 14:00:39 +0000

The following quota CASE project title and description has been agreed
with an industrial partner. Any interested students should contact
either John Shawe-Taylor by email (john@dcs.rhbnc.ac.uk) or Dieter
Gollmann by telephone (0784-443698) as soon as possible.

Many thanks,
John Shawe-Taylor
Department of Computer Science,
Royal Holloway, University of London,
Egham, Surrey TW20 0EX UK
Fax: (0784) 443420


Constrained Models of Time Series Prediction: Theory and Practice

The project will examine neural models for time series prediction. The
problem of introducing constraints will be examined: in particular,
criteria for choosing appropriate cost functions and a posteriori
estimates of confidence levels will be studied, including situations
where shifts occur in the underlying distributions. These principles will
be addressed in the context of applications to real data arising from
complex predictions of market developments.




------------------------------

Subject: Training of Nets and Multiple Solutions
From: jboller@is.morgan.com (John Boller)
Date: Thu, 05 Aug 93 15:07:18 -0500

Hi,
I was wondering if anyone had anecdotes or sources about the behaviour
of neural nets when the training set permits a number of equally
likely solutions (same utility value). I am looking at basins of
attraction, and this case would seem to be related.

Thanks, John Boller
email: jboller@is.morgan.com



------------------------------

End of Neuron Digest [Volume 11 Issue 46]
*****************************************
