NEURON Digest       1 December 1986      Volume 1 - Number 1 

Topics in this NEURON Digest --
Moderator's comments
Excerpts from recent AILIST Digests (Thanks KEN)
Partial trip review from Fall Joint Computer Conference
Available Information

----------------------------------------------------------------------

NEURON Digest subscribers;

Welcome. This is the premiere edition of the NEURON Digest.
Much of what appears in this volume has been collected over the
past month from various sources. Hopefully, each of you will
begin to send messages/notices/requests in the near future and
this digest will get off to a spectacular start.

I received approximately 200 responses from my mailings (US
Postal and AILIST Digest). Many of these were re-distribution
addresses, so we can guess that the subscription to this list is
around 300. I have "lost" at least 20 of the mail messages
(through my own fault) and hope to get re-mailings of those folks'
requests.

There were quite a few comments mixed in with the requests, from
remarks about the name (some liked NEURON, some didn't), to
questions regarding the operations of ARPANet (most of which I
couldn't answer). For the most part, I chose the FROM: field
from the messages as my TO: field, even when the body of the
message read differently. The reasoning for this is that I
figured that if the networks could get the message to me from
that address, then I could get it back the same way. If any of
you want to change the address I have for you, or change over to
a local re-distribution system, simply send a note to me at
NEURON-REQUEST. Hopefully, over the next couple of months, I can
learn enough about the networks to be able to "read" an address
accurately.

Well, keep those cards and letters coming. I will send out a
Digest as soon as enough (?) information comes in!

Regards,
Michael T. Gately

p.s. One comment I received mentioned that the address:
NEURON%TI-CSL.CSNET@CSNET-RELAY.ARPA would work just as well as
the one I originally sent out, and is shorter. However, as I
mentioned, I trust the FROM: field the most.

------------------------------

[These are excerpts from AILIST Digest V4 #240. - MG]


Date: 28 Oct 86 21:05:49 GMT
From: uwslh!lishka@rsch.wisc.edu (a)
Subject: Re: simulating a neural network


I just read an interesting short blurb in the most recent BYTE issue
(the one with the graphics board on the cover)...it was in Bytelines or
something. Now, since I skimmed it, my info is probably a little sketchy,
but here's about what it said:

Apparently Bell Labs (I think) has been experimenting with neural
network-like chips, with resistors replacing bytes (I guess). They started
out with about 22 'neurons' and have gotten up to 256 or 512 (can't
remember which) 'neurons' on one chip now. Apparently these 'neurons' are
supposed to run much faster than human neurons...it'll be interesting to see
how all this works out in the end.

I figured that anyone interested in the neural network program might
be interested in the article...check Byte for actual info. Also, if anyone
knows more about this experiment, I would be interested, so please mail me
any information at the below address.

--
Chris Lishka                     lishka@uwslh.uucp
Wisconsin State Lab of Hygiene   lishka%uwslh.uucp@rsch.wisc.edu
                                 {seismo,harvard,topaz,...}!uwvax!uwslh!lishka

------------------------------

Date: 27 Oct 86 19:50:58 GMT
From: yippee.dec.com!glantz@decwrl.dec.com
Subject: Re: Simulating neural networks

*********************

Another good reference is:

Martin, R., Lukton, A., and Salthe, S.N., "Simulation of
Cognitive Maps, Concept Hierarchies, Learning by Simile, and
Similarity Assessment in Homogeneous Neural Nets," Proceedings
of the 1984 Summer Computer Simulation Conference, Society for
Computer Simulation, vol. 2, pp. 808-821.

In this paper, Martin discusses (among other things) simulating
the effects of neurotransmitters and inhibitors, which can have
the result of generating goal-seeking behavior, which is closely
linked to the ability to learn.

Mike Glantz
Digital Equipment Centre Technique Europe
BP 29 Sophia Antipolis
06561 Valbonne CEDEX
France

My employer is not aware of this message.

*********************

------------------------------

Date: 27 Oct 86 17:36:23 GMT
From: zeus!berke@locus.ucla.edu (Peter Berke)
Subject: Glib "computation"

In article <1249@megaron.UUCP> wendt@megaron.UUCP writes:
>Anyone interested in neural modelling should know about the Parallel
>Distributed Processing pair of books from MIT Press. They're
>expensive (around $60 for the pair) but very good and quite recent.
>
>A quote:
>
>Relaxation is the dominant mode of computation. Although there
>is no specific piece of neuroscience which compels the view that
>brain-style computation involves relaxation, all of the features
>we have just discussed have led us to believe that the primary
>mode of computation in the brain is best understood as a kind of
>relaxation system in which the computation proceeds by iteratively
>seeking to satisfy a large number of weak constraints. Thus,
>rather than playing the role of wires in an electric circuit, we
>see the connections as representing constraints on the co-occurrence
>of pairs of units. The system should be thought of more as "settling
>into a solution" than "calculating a solution". Again, this is an
>important perspective change which comes out of an interaction of
>our understanding of how the brain must work and what kinds of processes
>seem to be required to account for desired behavior.
>
>(Rumelhart & McClelland, Chapter 4)
>
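
[If you want to see the "settling" idea in miniature, here is a toy
sketch (in Python, with all numbers invented for illustration; it is
not taken from the book) of a net whose connections encode weak
constraints and which relaxes by accepting any state change that
lowers its energy. - MG]

  # Toy "settling" network: three on/off units; weights encode weak
  # constraints (positive = the two units prefer to agree, negative =
  # they conflict). All values here are made up for illustration.
  W = {(0, 1): +1.0,    # units 0 and 1 support each other
       (0, 2): -1.0,    # units 0 and 2 conflict
       (1, 2): -0.5}    # units 1 and 2 weakly conflict

  def energy(s):
      # Lower energy = more of the weak constraints satisfied.
      return -sum(w * s[i] * s[j] for (i, j), w in W.items())

  s = [1, 0, 0]                   # arbitrary starting state
  while True:                     # settle: take any flip that helps
      for i in range(3):
          t = s[:]; t[i] = 1 - t[i]
          if energy(t) < energy(s):
              s = t
              break
      else:
          break                   # no single flip lowers energy: stable

  print(s, energy(s))             # the state the net "settled into"

[Note that nothing above "calculates" an answer; the loop simply stops
when no constraint can be satisfied any better. - MG]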

Isn't 'computation' a technical term? Do R&Mc prove that PDP is
equivalent to computation? Would Turing agree that "settling into
a solution" is computation? Some people have tried to show that
symbols and symbol processing can be represented in neural nets,
but I don't think anyone has proved anything about the problems
they purportedly "solve," at least not to the extent that Turing
did for computers in 1936, or Church in the same year for lambda
calculus.

Or are R&Mc using 'computing' to mean 'any sort of machination whatever'?
And is that a good idea?

Church's Thesis, that computing and lambda-conversion (or whatever he
calls it) are both equivalent to what we might naturally consider
calculable, could be extended to say that neural nets "settle" into
the same solutions for the same class of problems. Or, one could
maintain, as neural netters tend to implicitly, that "settling" into
solutions IS what we might naturally consider calculable, rather than
being merely equivalent to it. These are different options.

The first adds "neural nets" to the class of formalisms which can
express solutions equivalent to each other in "power," and is thus
a variant on Church's Thesis. The second actually refutes Church's
Thesis, by saying this "settling" process is clearly defined and
that it can realize a different (or non-comparable) class of problems,
in which case computation would not be (provably) equivalent to it.

Of course, if we could show BOTH that:
(1) "settling" is equivalent to "computing" as formally defined by Turing,
and (2) that "settling" IS how brains work,
then we'd have a PROOF of Church's Thesis.

Until that point it seems a bit misleading or misled to refer to
"settling" as "computation."

Peter Berke

------------------------------

[The following are excerpts from the AILIST Digest V4 #257 - MG]


Date: 4 Nov 86 11:25:18 GMT
From: mcvax!ukc!dcl-cs!strath-cs!jrm@seismo.css.gov (Jon R Malone)
Subject: Request for information (Brain/Parallel fibers)

<<<<Lion eaters beware>>>>
Nice guy, into brains, would like to meet similarly minded people.
Seriously: considering some simulation of neural circuits. Would
like pointers to any REAL work that is going on (PS I have read the
literature).
Keen to run into somebody that is interested in simulation at a low level.
Specifically:
* mossy fibers/basket cells/Purkinje cells
* need to find out parallel fiber details:
  * length of
  * source of/destination of

Any pointers or info would be appreciated.

------------------------------

Date: 4 Nov 86 18:40:15 GMT
From: mcvax!ukc!stc!datlog!torch!paul@seismo.css.gov (paul)
Subject: Re: THINKING COMPUTERS ARE A REALITY (?)

People who read the original posting in net.general (and the posting about
neural networks in this newsgroup) may be interested in the following papers:

Boltzmann Machines: Constraint Satisfaction Networks that Learn.
by Geoffrey E. Hinton, Terrence J. Sejnowski and David H. Ackley
Technical Report CMU-CS-84-119
(Carnegie-Mellon University, May 1984)

Optimization by Simulated Annealing
by S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi
Science Vol. 220, No. 4598 (13th May 1983).

...in addition to those recommended by Jonathan Marshall.

Personally I regard this type of machine learning as something of a holy grail.
In my opinion (and I stress that it IS my own opinion) this is THE way to
get machines that are both massively parallel and capable of complex tasks
without having a programmer who understands the ins and outs of the task
to be accomplished and who is prepared to spend the time to hand code (or
design) the machine necessary to do it. The only reservation I have is whether
or not the basic theory behind Boltzmann machines is good enough.

Paul.
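
[As a rough illustration of the annealing idea behind both papers
above: a Boltzmann-machine unit turns on stochastically, with a
probability that depends on its energy gap and on a "temperature"
that is gradually lowered. The Python below is a toy sketch; the
weights and the cooling schedule are my own invented values, not
anything from the papers. - MG]

  import math, random

  # Toy Boltzmann-style net: symmetric weights among three 0/1 units.
  # All values here are made up for illustration.
  W = [[ 0.0,  1.0, -1.0],
       [ 1.0,  0.0,  0.5],
       [-1.0,  0.5,  0.0]]
  s = [random.randint(0, 1) for _ in range(3)]

  T = 5.0                                # start hot: moves nearly random
  while T > 0.05:
      i = random.randrange(3)
      gap = sum(W[i][j] * s[j] for j in range(3))  # energy gap of unit i
      p_on = 1.0 / (1.0 + math.exp(-gap / T))      # logistic acceptance
      s[i] = 1 if random.random() < p_on else 0
      T *= 0.95                          # cool gradually (annealing)

  print(s)    # with luck, a low-energy state: settled, not searched

[At high T the unit all but ignores its inputs; as T falls the update
becomes a hard threshold. That is what lets the net escape poor local
minima early on and still lock in to a good state at the end. - MG]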

------------------------------

From: NGSTL1::JIMCL "VMS -- Love it or Leave it" 12-NOV-1986 07:39
To: CRL1::GATELY,JIMCL
Subj: RE: @ai >> FJCC Trip Report now available free of charge

[The following is a segment of a trip report on the Fall Joint
Computer Conference. - MG]


Un-Trip Report -- Fall Joint Computer Conference 1986

INFOMART, Dallas November 2-6

Jim Carlsen-Landy


Critic's Choice Awards
----------------------
Best Word: "hopeware" -- software that would be great if only it
were actually implemented (courtesy of Gary Cottrell, UC San Diego;
used to describe his parallel distributed processing approach to
natural language processing)

Best Quote: "FORTRAN is an insult to the term language." -- Dr. Wilson,
in the Plenary Address on Nov. 5

Second Best Quote: "People are smarter ... because they have brains."
-- Gary Cottrell (again)


Technical Session Reviews
-------------------------

"Connectionist Approaches to Natural Language Processing"

Gary Cottrell, U. Cal. San Diego, Cognitive Science Institute


Natural Language Track, "Text Processing"
Session Chair: Richard Granger, Univ. of California at Irvine

Session 2, Day 1, fourth speaker

Impressions: absolutely wonderful, saved the whole session; good (though
unplanned) intro to parallel distributed processing


1. Parallel distributed processors

a. Networks of simple modules

b. High degree of connectivity - weighted links

c. Only activations can cross links

d. No interpretation is done
-- all units compute activation internally based on their inputs
-- "solution" is stable network state

e. Knowledge is encoded in connections
-- units represent hypotheses
-- connections encode constraints between hypotheses
-- the network "relaxes" to a stable state
-- performs parallel constraint satisfaction


2. PDP in natural language processing

a. Connectionist parsing

syntax ======== semantics
      \            /
       \          /
        word-sense
            |
         lexical

b. Word-sense ambiguity resolved by PDP relaxation
-- related meanings of words in sentence "vote" for each other
   (a toy sketch of this voting appears after this outline)
-- many small aspects of phrase will coalesce into a larger
   symbol (meaning)

c. PDP demonstrates human-like behavior with regard to NL processing

d. Can perform multiple constraint relaxation

e. Can represent meaning effectively
-- shades of meaning
-- smoothly varying constraints
-- filling in default values (extreme ellipsis)

f. Demonstrates rule-like behavior without entering rules
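
[Here is the toy sketch of the "voting" mentioned in 2.b: sense units
for an ambiguous word get support from the senses of neighboring
words, and rival senses of the same word inhibit each other. The
words, senses, and weights below are invented examples in Python, not
Cottrell's actual network. - MG]

  # Disambiguating "bank" in "deposit money in the bank".
  act = {"bank/finance": 0.5, "bank/river": 0.5, "money": 1.0}

  links = [("money", "bank/finance", +0.4),      # related senses support
           ("money", "bank/river",   -0.1),      # unrelated ones do not
           ("bank/finance", "bank/river", -0.6)] # rival senses compete

  for _ in range(20):                            # let the network relax
      new = dict(act)
      for a, b, w in links:
          new[b] += 0.1 * w * act[a]             # b receives a's vote
          new[a] += 0.1 * w * act[b]             # links are symmetric
      act = {u: max(0.0, min(1.0, v)) for u, v in new.items()}

  print(act)    # "bank/finance" ends up near 1, "bank/river" at 0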

3. Parting comments

a. Buy the "Parallel Distributed Processing" book (MIT Press)

b. All of the above is still in the "hopeware" stage (i.e. it
sounds great, but nobody's actually tried it yet)



No corresponding paper in Proceedings.
What I left out of the notes was an interesting example of how a connectionist
system, given the results of people's feelings about what kind of furniture
belonged in different kinds of rooms (e.g. a bathtub has a high incidence
in bathrooms), was able to build a complete room given one or two items
of furniture. The interesting thing was that when they gave it conflicting
initial information (e.g. a room with a bathtub and a TV), it made up a
new kind of room, and filled in the furniture it "thought" should be in
it. Fun stuff.
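
[The room-completion behavior Jim describes is essentially pattern
completion, which even a tiny Hebbian network can show. The Python
sketch below stores two invented "room" patterns and reconstructs a
whole room from a single item of furniture; it is a toy of my own,
not the system from the talk. - MG]

  # Store two "rooms" as +1/-1 feature vectors, then recall one
  # from a partial cue. Features and rooms are invented examples.
  features    = ["bathtub", "toilet", "tv", "sofa"]
  bathroom    = [+1, +1, -1, -1]
  living_room = [-1, -1, +1, +1]

  n = len(features)
  W = [[0.0] * n for _ in range(n)]     # Hebbian outer-product weights
  for p in (bathroom, living_room):
      for i in range(n):
          for j in range(n):
              if i != j:
                  W[i][j] += p[i] * p[j]

  s = [+1, 0, 0, 0]                     # cue: all we know is "bathtub"
  for _ in range(5):                    # settle to the nearest stored room
      s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
           for i in range(n)]

  print({f: v for f, v in zip(features, s)})   # fills in the bathroom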

There are also some notes available about his work, in TROFF (UNIX) format,
that you can get from him directly. His email address is:

cottrell@nprdc.arpa


Enjoy
Jim C-L

------------------------------
From: WATROUS%ENIAC.SEAS.UPENN.EDU%LINC.CIS.UPENN.EDU@CSNET-RELAY.ARPA
To: NEURON@TI-CSL
Subj: NEURON Digest

I recently completed a Technical Report entitled
"Learning Phonetic Features Using Connectionist Networks: An
Experiment in Speech Recognition"
which will be of interest
to some of the subscribers to the digest. It is available as
MS-CIS-86-78 from the University of Pennsylvania. Dr. Lokendra
Shastri is the co-author.


Here is the mail address I use for my location at Siemens:

princeton!siemens!rlw.uucp@CSNET-Relay

You probably already know about the connectionist mailing
list maintained by D. Touretzky at CMU: the contact address is

Connectionists-Request@C.CS.CMU.EDU



Cheers,

Ray Watrous

*****************
END NEURON DIGEST
