AIList Digest            Friday, 24 Oct 1986      Volume 4 : Issue 232 

Today's Topics:
Queries - Neuron Chip & Neural Nets,
Learning - Neural Network Simulations & Cellular Automata,
Psychology - Self-Awareness,
Logic Programming - Bratko Review & Declarative Languages Bibliography

----------------------------------------------------------------------

Date: Tue, 21 Oct 86 20:11:17 pdt
From: Robert Bryant - Cross 8/87 <rbryant%wsu.csnet@CSNET-RELAY.ARPA>
Subject: Neuron Chip

INSIGHT magazine, Oct 13, 1986, page 62, had a brief article about a neuron
chip being tested by AT&T Bell Labs: "...registers, the electronic equivalent
of nerve cell synapses..."  If anyone has more detailed information on
this, please respond.
Rob Bryant
rbryant@wsu.csnet

[I believe Bell Labs was among the places putting one of Hopfield's
relaxation nets on a chip. They have also recently announced the
construction of an expert system on a chip (10,000 times as fast ...),
which I assume is a different project. -- KIL]

------------------------------

Date: Thu, 23 Oct 86 15:42:05 -0100
From: "Michael K. Jackman" <mkj%vax-d.rutherford.ac.uk@Cs.Ucl.AC.UK>
Subject: Knowledge representation and Sowa's conceptual graphs



A number of us at Rutherford Appleton Laboratory (IKBS section)
have become interested in Sowa's approach to knowledge representation,
which is based on conceptual graphs (see Clancey's review in AI 27,
1985; Fox, Nature 310, 1984). We believe it to be a particularly powerful
and useful approach to KR, and we are currently implementing
some of his ideas.

We would like to contact other workers in this field
and exchange ideas on Sowa's approach. Anyone interested should
contact me at Rutherford.

Michael K. Jackman
IKBS section - Rutherford Appleton Laboratory (0235-446619)

------------------------------

Date: 20 Oct 86 18:25:50 GMT
From: jam@bu-cs.bu.edu (Jonathan Marshall)
Subject: Re: simulating a neural network

In article <223@eneevax.UUCP> iarocci@eneevax.UUCP (Bill Dorsey) writes:
>
> Having recently read several interesting articles on the functioning of
>neurons within the brain, I thought it might be educational to write a program
>to simulate their functioning. Being somewhat of a newcomer to the field of
>artificial intelligence, my approach may be all wrong, but if it is, I'd
>certainly like to know how and why.
> The program simulates a network of 1000 neurons. Any more than 1000 slows
>the machine down excessively. Each neuron is connected to about 10 other
>neurons.
> .
> .
> .
> The initial results have been interesting, but indicate that more work
>needs to be done. The neuron network indeed shows continuous activity, with
>neurons changing state regularly (but not periodically). The robot (!) moves
>around the screen, generally winding up in a corner somewhere, where it
>occasionally wanders a short distance away before returning.
> I'm curious if anyone can think of a way for me to produce positive and
>negative feedback instead of just feedback. An analogy would be pleasure
>versus pain in humans. What I'd like to do is provide negative feedback
>when the robot hits a wall, and positive feedback when it doesn't. I'm
>hoping that the robot will eventually 'learn' to roam around the maze with-
>out hitting any of the walls (i.e. learn to use its senses).
> I'm sure there are more conventional ai programs which can accomplish this
>same task, but my purpose here is to try to successfully simulate a network
>of neurons and see if it can be applied to solve simple problems involving
>learning/intelligence. If anyone has any other ideas for which I may test
>it, I'd be happy to hear from you.
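[The kind of reward/punishment feedback asked about above can be sketched
in a few lines of modern code. This is not the poster's program, just a
minimal illustration: threshold units with random connections, plus a
Hebbian-style weight update whose sign depends on a reward signal. All
names and constants here are illustrative assumptions, and the "environment"
is a toy stand-in for the robot hitting a wall. -- Ed.]

```python
import random

random.seed(0)

N = 50          # neurons (the original uses 1000; kept small here)
K = 10          # connections per neuron, as in the posting
THRESH = 0.5    # firing threshold

# Random weighted connections: each neuron listens to K random others.
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
weights = [[random.uniform(-1, 1) for _ in range(K)] for _ in range(N)]
state = [random.random() < 0.5 for _ in range(N)]

def step(state):
    """One synchronous update: a neuron fires if its weighted input
    from currently-active neurons exceeds the threshold."""
    return [sum(w for src, w in zip(inputs[i], weights[i]) if state[src]) > THRESH
            for i in range(N)]

def reinforce(reward):
    """Hebbian-style feedback: strengthen (reward > 0) or weaken
    (reward < 0) the weights between pairs of co-active neurons."""
    for i in range(N):
        if not state[i]:
            continue
        for j, src in enumerate(inputs[i]):
            if state[src]:
                weights[i][j] += 0.1 * reward

for t in range(100):
    state = step(state)
    # Toy environment: punish if "too many" neurons fire (a stand-in
    # for hitting a wall), reward otherwise.
    reinforce(-1.0 if sum(state) > N // 2 else +1.0)

print(sum(state), "of", N, "neurons active after 100 steps")
```

[With positive and negative reward driving the same update rule, the net
tends to drift away from punished activity patterns; whether it "learns"
anything useful depends heavily on how the reward is defined. -- Ed.]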


Here is a reposting of some references from several months ago.
* For beginners, I especially recommend the articles marked with an asterisk.

Stephen Grossberg has been publishing on neural networks for 20 years.
He pays special attention to designing adaptive neural networks that
are self-organizing and mathematically stable. Some good recent
references are:

(Category Learning):----------
* G.A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for
a Self-Organizing Neural Pattern Recognition Machine." Computer
Vision, Graphics, and Image Processing. In Press.
G.A. Carpenter and S. Grossberg, "Neural Dynamics of Category Learning
and Recognition: Structural Invariants, Reinforcement, and Evoked
Potentials." In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds),
Pattern Recognition in Animals, People, and Machines. Hillsdale, NJ:
Erlbaum, 1986.
(Learning):-------------------
* S. Grossberg, "How Does a Brain Build a Cognitive Code?" Psychological
Review, 1980 (87), p.1-51.
* S. Grossberg, "Processing of Expected and Unexpected Events During
Conditioning and Attention." Psychological Review, 1982 (89), p.529-572.
S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning,
Perception, Development, Cognition, and Motor Control. Boston:
Reidel Press, 1982.
S. Grossberg, "Adaptive Pattern Classification and Universal Recoding:
I. Parallel Development and Coding of Neural Feature Detectors."
Biological Cybernetics, 1976 (23), p.121-134.
S. Grossberg, The Adaptive Brain: I. Learning, Reinforcement, Motivation,
and Rhythm. Amsterdam: North Holland, 1986.
* M.A. Cohen and S. Grossberg, "Masking Fields: A Massively Parallel Neural
Architecture for Learning, Recognizing, and Predicting Multiple
Groupings of Patterned Data." Applied Optics, In press, 1986.
(Vision):---------------------
S. Grossberg, The Adaptive Brain: II. Vision, Speech, Language, and Motor
Control. Amsterdam: North Holland, 1986.
S. Grossberg and E. Mingolla, "Neural Dynamics of Perceptual Grouping:
Textures, Boundaries, and Emergent Segmentations." Perception &
Psychophysics, 1985 (38), p.141-171.
S. Grossberg and E. Mingolla, "Neural Dynamics of Form Perception:
Boundary Completion, Illusory Figures, and Neon Color Spreading."
Psychological Review, 1985 (92), 173-211.
(Motor Control):---------------
S. Grossberg and M. Kuperstein, Neural Dynamics of Adaptive Sensory-
Motor Control: Ballistic Eye Movements. Amsterdam: North-Holland, 1985.


If anyone's interested, I can supply more references.

--Jonathan Marshall

harvard!bu-cs!jam

------------------------------

Date: 21 Oct 86 17:22:54 GMT
From: arizona!megaron!wendt@ucbvax.Berkeley.EDU
Subject: Re: simulating a neural network

Anyone interested in neural modelling should know about the Parallel
Distributed Processing pair of books from MIT Press. They're
expensive (around $60 for the pair) but very good and quite recent.

A quote:

Relaxation is the dominant mode of computation. Although there
is no specific piece of neuroscience which compels the view that
brain-style computation involves relaxation, all of the features
we have just discussed have led us to believe that the primary
mode of computation in the brain is best understood as a kind of
relaxation system in which the computation proceeds by iteratively
seeking to satisfy a large number of weak constraints. Thus,
rather than playing the role of wires in an electric circuit, we
see the connections as representing constraints on the co-occurrence
of pairs of units. The system should be thought of more as "settling
into a solution" than "calculating a solution". Again, this is an
important perspective change which comes out of an interaction of
our understanding of how the brain must work and what kinds of processes
seem to be required to account for desired behavior.

(Rumelhart & McClelland, Chapter 4)
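
[The "settling into a solution" described in the quote can be illustrated
with a Hopfield-style relaxation net. This is a sketch, not the PDP
authors' code: symmetric weights act as weak pairwise constraints, and
asynchronous updates reduce an energy function until the net settles into
a stored pattern. -- Ed.]

```python
import random

random.seed(1)

# Symmetric weights encode soft constraints between pairs of +/-1 units;
# the net relaxes by flipping units until no flip lowers the energy
#   E = -1/2 * sum_ij w[i][j] * s[i] * s[j]

pattern = [1, -1, 1, -1, 1, -1, 1, -1]   # the pattern to store
n = len(pattern)

# Hebbian outer-product weights, zero diagonal.
w = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

def energy(s):
    return -0.5 * sum(w[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

# Start from a corrupted version of the pattern (two units flipped).
s = pattern[:]
s[0], s[3] = -s[0], -s[3]

# Asynchronous relaxation: update units until none wants to change.
changed = True
while changed:
    changed = False
    for i in range(n):
        new = 1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
        if new != s[i]:
            s[i] = new
            changed = True

print(s == pattern)   # the net settles back into the stored pattern
```

[Each update satisfies as many of the weak pairwise constraints as it can;
the computation is the settling process itself, not a calculation routed
along "wires". -- Ed.]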

Alan Wendt
U of Arizona

------------------------------

Date: 22 Oct 86 13:58:12 GMT
From: uwmcsd1!uwmeecs!litow@unix.macc.wisc.edu (Dr. B. Litow)
Subject: cellular automata

Ed. Stephen Wolfram. Contains many papers by Wolfram.
Available from Taylor & Francis, Intl. Publications Service, 242 Cherry St.,
Philadelphia, PA 19106-1906.


------------------------------

Date: 19 Oct 86 23:10:13 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Subject: A pure conjecture on the nature of the self


Conjecture: the "sense of identity" comes from the same
mechanism that makes tickling yourself ineffective.

This is not a frivolous comment. The reflexes behind tickling
seem to be connected to something that has a good way of deciding
what is self and what isn't. There are repeatable phenomena here that
can be experimented with. This may be a point of entry for work on some
fundamental questions.


John Nagle

[I apologize for having sent out a reply to this message before
putting this one in the digest. -- KIL]

------------------------------

Date: 21 Oct 86 18:19:52 GMT
From: cybvax0!frog!tdh@eddie.mit.edu (T. Dave Hudson)
Subject: Re: A pure conjecture on the nature of the self

> Conjecture: the "sense of identity" comes from the same
> mechanism that makes tickling yourself ineffective.

Suppose that tickling yourself may be ineffective because of your
mental focus. Are you primarily focusing on the sensations in the
hand that is doing the tickling, not focusing, focusing on the idea
that it will of course be ineffective, or focusing on the sensations
created at the tickled site?

One of my major impediments to learning athletics was that I had no
understanding of what it meant when those rare competent teachers told
me to feel the prescribed motion. It requires an act of focusing on
the sensations in the different parts of your body as you move. Until
you become aware of the sensations, you can't do anything with them.
(Once you're aware of them, you have to learn how to deal with a
multitude of them, but that's a different issue.)

Try two experiments.

1) Walk forward, and concentrate on how your back feels. Stop, then
place your hand so that the palm and fingertips cover your lower
back at the near side of the spine. Now walk forward again.
Notice anything new?

2) Run one hand's index fingertip very lightly over the back of the
other hand, so lightly that you can barely feel anything on the back
of the other hand, so lightly that maybe you're just touching the
hairs on that hand and not the skin. Close your eyes and try to
sense where on the back of that hand the fingertip is as it moves.
Now do you feel a tickling sensation?

David Hudson

------------------------------

Date: 16 Oct 86 07:48:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Reviews

[Forwarded from the Prolog Digest by Laws@SRI-STRIPE.]

I'm in the middle of reading the Bratko book, and I would give
it a very high rating. The concepts are explained very clearly,
there are lots of good examples, and the applications covered
are of high interest. Part I (chapters 1-8) is about Prolog
per se. Part II (chapters 9-16) shows how to implement many
standard AI techniques:

chap. 9 - Operations on Data Structures
chap. 10 - Advanced Tree Representations
chap. 11 - Basic Problem-solving Strategies
chap. 12 - Best-first: a heuristic search principle
chap. 13 - Problem reduction and AND/OR graphs
chap. 14 - Expert Systems
chap. 15 - Game Playing
chap. 16 - Pattern-directed Programming

Part I has 188 pages; Part II has 214.

You didn't mention Programming in Prolog by Clocksin & Mellish -
this is also very good, and covers some things that Bratko
doesn't (it's more concerned with non-AI applications), but all
in all, I slightly prefer Bratko's book.

-- John Cugini

------------------------------

Date: Mon, 6 Oct 86 15:47:15 MDT
From: Lauren Smith <ls%lambda@LANL.ARPA>
Subject: Bibliography on its way

[Forwarded from the Prolog Digest by Laws@SRI-STRIPE.]

I have just sent out the latest update of the Declarative
Languages bibliography. Please notify the appropriate
people at your site - especially if there were several
requests from your site, and you became the de facto
distributor. Again, the bibliography is 24 files.

This is the index for the files, so you can verify that you
received everything.

ABDA76a-AZAR85a BACK74a-BYTE85a CAMP84a-CURR72a DA83a-DYBJ83b
EGAN79a-EXET86a FAGE83a-FUTO85a GABB84a-GUZM81a HALI84a-HWAN84a
ICOT84a-IYEN84a JACOB86a-JULI82a KAHN77a-KUSA84b LAHT80a-LPG86a
MACQ84a-MYCR84a NAGAI84a-NUTE85a OHSU85a-OZKA85a PAPAD86a-PYKA85a
QUI60 RADE84a-RYDE85a SAIN84a-SZER82b TAGU84a-TURN85b
UCHI82a-UNGA84 VALI85-VUIL74a WADA86a-WORL85a YAGH83a-YU84a

There has been a lot of interest regarding the formatting of
the bibliography for various types of word processing systems.
The biblio is maintained (in the UK) in a raw format, hence that
is the way that I am distributing it. Since everyone uses
different systems, it seems easiest to collect a group of macros
that convert RAW FORMAT ===> FAVORITE BIBLIO FORMAT and distribute
them. So, if you have a macro that does the conversion please
advertise it on the net or better yet, let me know so I can let
everyone else know about it.

If you have any additions to make, please send them to:

-- Andy Cheese at
abc%computer-science.nottingham.ac.uk@cs.ucl.ac.uk
or Lauren Smith at ls@lanl.arpa

Thank you for your interest.

-- Lauren Smith

[ Starting with the next issue, I will be including one file per
issue of the Digest until all twenty-four files are distributed. -ed ]

[AIList will not be carrying this bibliography. -- KIL]

------------------------------

End of AIList Digest
********************
