Neuron Digest Volume 02 Number 15


NEURON Digest	Wed Jul  8 08:13:34 CDT 1987 
Volume 2 / Issue 15
Today's Topics:

Connectionist AI Grad Schools
Call for Papers - Neural Networks
Borrowing from Biology [Half in Jest]
Tech. Reports Available
Tech Report: Generalization of Backpropagation

Taking AI models and applying them to biology...
Re: Taking AI models and applying them to biology...
Models of biological aging
Neurons can't regenerate? (was: Re: Taking AI models...)
Re: Models of biological aging
Re: Neurons can't regenerate? (was: Re: Taking AI models...)

----------------------------------------------------------------------

Date: 29 May 1987 13:26-EDT
From: Brian.Yamauchi@speech2.cs.cmu.edu
Subject: Connectionist AI Grad Schools

I will be graduating from Carnegie-Mellon next May, with a BS in applied
math/computer science, and I am planning to attend graduate school with the
goal of a PhD in computer science.

My field of interest is artificial intelligence, specifically, connectionist
artificial intelligence. I am currently considering Carnegie-Mellon, MIT,
Caltech, Stanford, UCSD, and the University of Rochester. Are there any
other universities that I should be considering? Are there any universities
conducting connectionist AI research that I have missed?

I would greatly appreciate any information that anyone could provide. Also,
I would be interested in hearing any opinions about the relative merits of
the computer science graduate programs at these universities, both in
general and relative to my specific interests.

Thanks in advance,

Brian Yamauchi
yamauchi@speech2.cs.cmu.edu

------------------------------

Date: 8 June 1987, 16:25:07 EDT
From: Bruce Shriver <SHRIVER@ibm.com>
Subject: Call for Papers - Neural Networks

Call for Papers and Referees

Special Issue of Computer Magazine
on Neural Networks

The March, 1988 issue of Computer magazine will be devoted
to a wide range of topics in Neural Computing. Manuscripts
that are either tutorial, survey, descriptive, case-study,
applications-oriented or pedagogic in nature are immediately
sought in the following areas:

o Neural Network Architectures
o Electronic and Optical Neurocomputers
o Applications of Neural Networks in Vision, Speech
Recognition and Synthesis, Robotics, Image Processing,
and Learning
o Self-Adaptive and Dynamically Reconfigurable Systems
o Neural Network Models
o Neural Algorithms and Models of Computation
o Programming Neural Network Systems

INSTRUCTIONS FOR SUBMITTING MANUSCRIPTS

Manuscripts should be no more than 32-34 typewritten,
double-spaced pages in length including all figures and
references. No more than 12 references should be cited.
Papers must not have been previously published nor currently
submitted for publication elsewhere. Manuscripts should have
a title page that includes the title of the paper, full name
of its author(s), affiliation(s), complete physical and
electronic address(es), telephone number(s), a 200-word
abstract, and a list of keywords that identify the central
issues of the manuscript's content.

DEADLINES

o A 200-word abstract of the manuscript is due as soon
as possible.
o Eight (8) copies of the full manuscript are due by
August 30, 1987.
o Notification of acceptance is November 1, 1987.
o Final version of the manuscript is due no later than
December 1, 1987.

SEND SUBMISSIONS AND QUESTIONS TO

Bruce D. Shriver
Editor-in-Chief, Computer
IBM T. J. Watson Research Center
P. O. Box 704
Yorktown Heights, NY 10598
Phone: (914) 789-7626

Electronic Mail Addresses:
arpanet: shriver@ibm.com
bitnet: shriver at yktvmh
compmail+: b.shriver

-------

------------------------------

Date: Wed, 10 Jun 87 09:51 EDT
From: Seth Steinberg <sas@bfly-vax.bbn.com>
Subject: Borrowing from Biology [Half in Jest]

Actually, the biologists have been borrowing from the history of the
Roman Empire. Cincinnatus comes down from his farm and codifies the
laws for the Republic and creates a nearly perfect mechanism which
starts taking over the Mediterranean basin. By providing for a means
of succession (read "DNA replication"), the Empire is able to achieve
higher levels of organization. Unfortunately, the military (read "the
immune system") slowly grows in strength as the Empire expands and
finally reaches a limit to its expansion and spends the next millennium
rotting away in Byzantium.

Theories about entropy are about complex systems in general, not just
the behavior of energy in steam engines. Biologists have latched onto
them to account for aging in organisms and to explain the epochs of
evolution. (Why aren't there any new phyla being created?) If you've
ever tried to make a major change in a decade old program think of what
the biologists are up against with their billion year old kludges.
Last month, an article in Scientific American described a glucose
complex based aging mechanism, arguing that many aging effects could be
caused by very slow chemical reactions induced by the operating
environment. Next month we may discover an actual internal counter
within each cell. It is quite probable that there are dozens of
mechanisms at work. With 90% of the genome encoding for garbage,
elegant design is more of a serendipity than the norm.

Seth Steinberg
sas@bbn.com

P.S. Did you notice the latest kludge? They've found a gene whose DNA
complement also encodes a gene! Kind of like a 68000 program you can
execute if you put a logical complement on each instruction fetch.
Neat, huh?
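The overlapping-gene kludge can be made concrete with a toy sketch (the sequence below is invented for illustration, not a real gene):

```python
# Toy illustration (not a real gene): a DNA strand and its reverse
# complement can each be read 5'->3' as a run of codons, so in
# principle one stretch of DNA can encode two different products.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the 5'->3' sequence of the opposite strand."""
    return seq.translate(COMPLEMENT)[::-1]

def codons(seq: str):
    """Split a sequence into consecutive 3-base codons."""
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

gene = "ATGGCATTGTAA"          # hypothetical 12-base reading frame
antigene = reverse_complement(gene)

print(codons(gene))      # ['ATG', 'GCA', 'TTG', 'TAA']
print(codons(antigene))  # ['TTA', 'CAA', 'TGC', 'CAT']
```

The 68000 analogy is apt in that the "complemented" reading is a different, but equally well-formed, instruction stream.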

------------------------------

Date: Thu, 28 May 87 11:34:54 EDT
From: sg@corwin.ccs.northeastern.edu
Subject: Tech. Reports Available

The following reports are available:

1. Random Cells: An Idea Whose Time Has Come and Gone
... And Come Again?

2. Sequential Associative Memories

3. Automated Generation of Connectionist Expert Systems
For Problems Involving Noise and Redundancy

Abstracts follow.

-------------------------------------------------------------------------


Random Cells: An Idea Whose Time Has Come and Gone ... And Come Again?

Steve Gallant and Donald Smith

ABSTRACT:
Frank Rosenblatt proposed using random functions in a neural (or connectionist)
network in order to improve the learning capability of other cells. This
proposal was not successful, for reasons reviewed here. A modification of
Rosenblatt's idea is to: 1) use random linear discriminants, each of which is
connected to all inputs, for fixed cells, and 2) use the pocket algorithm, a
well-behaved modification of perceptron learning, to generate coefficients
for cells that learn. The main idea is to have a sufficiently rich
distributed representation in the activations of the random discriminants
rather than having individual random cells compute individual features. To
emphasize this distinction, the procedure has been named the Distributed
Method. This revised approach overcomes or sidesteps the main theoretical
problems with Rosenblatt's original proposal.

Experimental results for some of the most difficult classical learning
problems are presented. The Distributed Method appears to
be a practical way to construct network models for complex problems.
[to appear IEEE-ICNN]
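A minimal sketch of the idea as the abstract describes it (the cell count, the XOR task, and the simplified pocket-style learner below are my assumptions, not the paper's actual experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_discriminants(X, n_cells, rng):
    """Fixed cells: random linear discriminants connected to all inputs."""
    W = rng.normal(size=(X.shape[1], n_cells))
    b = rng.normal(size=n_cells)
    return np.where(X @ W + b > 0, 1.0, -1.0)

def pocket_perceptron(H, y, epochs=200):
    """Perceptron learning that keeps ("pockets") the best weights seen."""
    w = np.zeros(H.shape[1])
    best_w, best_correct = w.copy(), -1
    for _ in range(epochs):
        for i in range(len(y)):
            if np.sign(H[i] @ w) != y[i]:
                w = w + y[i] * H[i]
        correct = np.sum(np.sign(H @ w) == y)
        if correct > best_correct:
            best_correct, best_w = correct, w.copy()
    return best_w

# XOR is not linearly separable in the raw inputs, but typically is
# in a sufficiently rich random distributed representation.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

H = random_discriminants(X, n_cells=20, rng=rng)
w = pocket_perceptron(H, y)
print(np.sign(H @ w))  # ideally matches y
```

The point the abstract stresses is visible here: no single random cell computes a useful feature; the separability lives in the distributed pattern of activations.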

-----------------------------------------------------------------------


Sequential Associative Memories

Steve Gallant

Tech. Report NU-CCS-87-20

ABSTRACT:
Humans are very good at manipulating sequential information, but sequences
present special problems for connectionist models.

This paper examines subnetworks of connectionist models called
Sequential Associative Memories (SAM's) that have feedback and high
connectivity. The coefficients for SAM cells are unmodifiable and are
generated at random.

SAM's serve two functions:

1. Their activations determine a state for the network which permits
previous inputs and outputs to be recalled, and
2. They increase the dimensionality of input and output representations
to make it possible for other (modifiable) cells in the network to learn
difficult tasks.

The second function is similar to what has been called the distributed
method, a way of generating intermediate cells for problems not involving
sequences.

To illustrate the underlying difficulties of learning various tasks we
examine several ways of learning to add, the most difficult of which serves
as an indication of how well SAM's can manipulate general sequences.
The question of network macrostructure is also addressed.

Our conclusion is that SAM's represent a promising approach for sequence
manipulation by connectionist models.

-----------------------------------------------------------------------

Automated Generation of Connectionist Expert Systems
For Problems Involving Noise and Redundancy

Steve Gallant

ABSTRACT:
When creating an expert system, the most difficult and
expensive task is constructing a knowledge base. This is
particularly true if the problem involves noisy data and redundant
measurements.

This paper shows how the MACIE process for generating connectionist
expert systems from training examples can be modified to accommodate
noisy and redundant data. The basic idea is to dynamically generate
appropriate training examples by constructing both a `deep' model
and a noise model for the underlying problem. Learning with
winner-take-all groups of variables is also required.

These techniques are illustrated with a small
example which would be very difficult for standard expert system
approaches.
[to appear: AAAI Workshop on Uncertainty in AI, Seattle, 1987.
This is an 80% revised version of TR SG-86-38]

-----------------------------------------------------------------------
Copies are available from:

 _____________________________________
|                                     |
| Steve Gallant                       |
| College of Computer Science, 221 CN |
| Northeastern University             |
| Boston, MA 02115                    |
|_____________________________________|

CSNet: SG@CORWIN.CCS.NORTHEASTERN.EDU


------------------------------

Date: Mon, 1 Jun 87 16:29:35 EDT
From: Fernando Pineda <fernando@aplvax.arpa>
Subject: Tech Report: Generalization of Backpropagation

GENERALIZATION OF BACKPROPAGATION TO
RECURRENT NEURAL NETWORKS

Fernando J. Pineda

Johns Hopkins University, Applied Physics Laboratory

5/27/1987


ABSTRACT

The feedforward neural network of Rumelhart, Hinton, and Williams (1)
is generalized to networks with feedback loops. Activation is propagated
throughout the network by differential equations which are of the Hopfield
form. The backpropagation of error is performed by a second set of
differential equations which depend on the steady state solutions of the
forward propagation equations. The forward and backward propagation equations
can be viewed as the dynamical equations for two interacting neural
networks: one which performs the memory recall task and one which adjusts
the synaptic weights of the first.
The new neural network bears a resemblance to the network proposed
by Lapedes and Farber (2). However, the new algorithm requires considerably
less memory and execution time.

(1) D.E. Rumelhart, G.E. Hinton and R.J. Williams, in Parallel Distributed
Processing, eds. D.E. Rumelhart and J.L. McClelland, MIT Press, (1986)

(2) Alan Lapedes and Robert Farber, Physica D22, 247-259 (1986)


[Copies are available from FERNANDO@APLVAX.ARPA or by writing to the
address listed above. Ask for memo S1A-63-87 ]
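A discrete-time sketch of the scheme the abstract describes (assumption-laden: the relaxation loops below are stand-ins for the paper's differential equations, and the network and task are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def settle_forward(W, I, steps=200):
    """Relax x = tanh(W x + I) to its steady state."""
    x = np.zeros(len(I))
    for _ in range(steps):
        x = np.tanh(W @ x + I)
    return x

def settle_backward(W, x, e, steps=200):
    """Relax the adjoint equation y = W.T @ (f' * y) + e."""
    fprime = 1.0 - x ** 2          # tanh' at the fixed point
    y = np.zeros_like(e)
    for _ in range(steps):
        y = W.T @ (fprime * y) + e
    return y, fprime

n = 5
W = 0.1 * rng.normal(size=(n, n))  # small weights keep dynamics stable
I = rng.normal(size=n)             # fixed external input
target = np.zeros(n)

err0 = np.sum((settle_forward(W, I) - target) ** 2)
for _ in range(200):               # gradient descent on the weights
    x = settle_forward(W, I)       # first network: memory recall
    y, fprime = settle_backward(W, x, x - target)  # second network
    W -= 0.02 * np.outer(fprime * y, x)            # weight adjustment
errf = np.sum((settle_forward(W, I) - target) ** 2)

print(err0, errf)  # the steady-state error decreases
```

The two relaxations are the "two interacting neural networks" of the abstract: one settles to the recall state, the other settles to the error signal that drives learning.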


------------------------------

Date: 6 Jun 87 01:38:26 GMT
From: mnetor!yetti!unicus!craig@seismo.css.gov (Craig D. Hubley)
Subject: Taking AI models and applying them to biology...

[The remainder of this NEURON Digest is a series of messages forwarded
to me by Ken Laws of AILIST Digest. - MTG]

Forgive the wide cross posting, net.gods, but I am interested in gathering
an opinion from biological and artificial intelligence people on a model
that arises from AI but has (possibly) biological implications:

Foreword or WHY I'M WRITING THIS.
--------------------------------
I was semi-surprised in recent months to discover that cognitive psychology,
far from developing a bold new metaphor for human thinking, has (to a degree)
copied at least one metaphor from third-generation computer science.

This description of the human memory system, though cloaked in vaguer terms,
corresponds more or less one-to-one with the traditional computer
architecture we all know and love. To wit:

- senses have "iconic" and "echoic" memories analogous to buffers.
- short term memory holds information that is organized for quick
processing, much like main storage in a computing system.
- long term memory holds information in a sort of semantic
association network where large related pieces of information
reside, similar to backing or "archived" computing storage.

At least this far, this theory appears to owe a lot to computer science.
Granted, there is lots of empirical evidence in favour, but we all know
how a little evidence can go far too far towards developing an analogy.

What I think we may need are good parallel connectionist computing models
for the social sciences to copy, rather than these old ones that we are
beginning to fuse and modify and discard. After all, engineering can
construct and test artifacts much quicker than psychologists can. And
investigate their insides and their performance as well...

The Point or WHAT I'M THINKING ABOUT
------------------------------------
Single cells are constructed according to instructions resident
in their own DNA. When their reproductive process fails, they
die, become cancerous, etc...

In computing terms, a self-reproducing program messes up the code
and therefore fails to function (it does not reproduce). Or, it may
continue to reproduce a flawed cell (cancer...).

But a biological mechanism such as, say, a muscle or a brain is
a massively parallel system consisting of many many redundant cells,
each of which is capable of performing (at least almost) the same
function.

So many many parts would have to fail before the effect was enough
to endanger the system as a whole. That is, it degrades gracefully.
This effect has been observed in parallel sensing systems, which
use several low-resolution phased fields that redundantly cover
the same area. Removing one such field results in a loss of
resolution, but not utter failure to detect a stimulus. Details
in Geoffrey Hinton and others... (Byte AI issue, 1985?)

At some point of degradation, the whole parallel system will collapse.
Or an aged human being will die of a cold.

The Question or WHAT DO YOU THINK?
----------------------------------
Apparently, all human organ weights begin to decline shortly after puberty.
The cumulative effect of this seeming reduction of resources isn't felt so
strongly until middle-age, when we become more susceptible to disease.

So far, this is just a statement of the nature of parallel systems.

But does it hold up as a theory of aging?

- Is mitosis sufficiently prone to failure to account for organ decline?
- Statistically, one would expect exponential distribution for
failure of single cells, the rate dependent on mitosis failure,
and perhaps modified by other cell-killing factors
- Does organ failure, medically, occur at the point where
a parallel processing system, mathematically, would fail?
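The statistical questions above can be sketched numerically (all parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sketch: each cell's lifetime is exponentially
# distributed, and the "organ" fails only when the surviving
# fraction of cells drops below a threshold.
N_CELLS = 1000       # redundant, interchangeable cells
MEAN_LIFE = 70.0     # mean single-cell lifetime, arbitrary units
THRESHOLD = 0.2      # organ fails below 20% surviving cells

def organ_failure_time(rng):
    lifetimes = rng.exponential(MEAN_LIFE, size=N_CELLS)
    # the organ survives until only THRESHOLD of its cells remain,
    # i.e. until the (1 - THRESHOLD) quantile of cell lifetimes
    return np.quantile(lifetimes, 1.0 - THRESHOLD)

organs = np.array([organ_failure_time(rng) for _ in range(200)])
cells = rng.exponential(MEAN_LIFE, size=200)

# Single cells die at wildly scattered times; the redundant organ's
# failure times cluster tightly around MEAN_LIFE * ln(1/THRESHOLD).
print(cells.std() / cells.mean())    # large relative spread
print(organs.std() / organs.mean())  # much smaller relative spread
```

This is the parallel-system signature: individually unreliable parts, but a sharply concentrated, much later failure time for the redundant whole.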

I've heard that mammal cells appear to suffer a "hard" reproductive limit
of 52 mitosis operations, and that meiosis "resets this counter" to 0.

- any comment on this, bio-med types? Is it true?
- Would a theory assuming a simple variable or random "counter" in each cell
limiting its reproductive span better explain aging (programmed cells...)

It doesn't seem so... regardless of the origin of the failure, the observed
degradation of the system as a whole would still follow this pattern.

The upshot of this is that a potentially useful life science model may have
just materialized in artificial intelligence.

The main flaw that I can see in it is that a cell is a complex mechanism in
and of itself, and so the success/failure of each might be subject to
many factors in parallel as well. That is, it might not fail the way a
short subroutine would were it copied badly, which is the gist of this.
But then one might find a lower level where the parts were sufficiently
monolithic that the analogy held.

This seems to kick the butt of the good old 'Entropy' theory... cop-out.
Incompetent nineteenth-century philosophers leaned heavily on entropy.

Comments? Flames? The name of a good shrink?

Musing,
Craig.

------------------------------

Date: 10 Jun 87 02:49:55 GMT
From: hao!boulder!pell@husc6.harvard.edu (Anthony Pelletier)
Subject: Re: Taking AI models and applying them to biology...

(Craig D. Hubley) writes:
(on cognitive psychology)
>far from developing a bold new metaphor for human thinking, has (to a degree)
>copied at least one metaphor from third-generation computer science.
>

one of the things that has always amused me is that, to the extent that
I understand the structuring of computers, it seems that the cell
and the computer scientists have come up with similar solutions to
many of the same questions. This is particularly true when one looks
at information flow in the cell. I feel comfortable in assuming that
the cell had little help from the CS types in solving problems of information
flow.
It is likely to be true that contemporaries in different scientific
fields play with each other's ideas. This is why "Nature" insists on being so
broad and why F.H.C.C. can get work.

But I should stay more to the point.

>The Question or WHAT DO YOU THINK?
>----------------------------------
>Apparently, all human organ weights begin to decline shortly after puberty.
>The cumulative effect of this seeming reduction of resources isn't felt so
>strongly until middle-age, when we become more susceptible to disease.

>- Is mitosis sufficiently prone to failure to account for organ decline?
>
>I've heard that mammal cells appear to suffer a "hard" reproductive limit
>of 52 mitosis operations, and that meiosis "resets this counter" to 0.
>

It would seem to me that the step that is likely to give the cell trouble
is not mitosis but DNA replication. If a whole chromosome is lost or
non-disjoined, that cell is in some serious trouble. Progressive
accumulation of mistakes through replication and general maintenance seems
a more likely culprit.

I confess that once the topic turns to anything outside the single cell or
involves more than, say, two cells, I am hopelessly lost.
So the question of aging is outside my capabilities. This will not, of course,
stop me from volunteering the following:
I have never liked the "hard-wired-number-of-mitosis" model.
I am not sure why; it just seems implausible, or worse yet, unnecessary.
Supposedly "immortal" cells, like bacteria, actually have a rather high death
rate in the population (try doing a particle count and then plating them out
to see how many are actually able to continue dividing).
Their apparent immortality is the result of unrestrained growth.
I suspect the failure rate is similar between bacteria and individual cells
of a metazoan. The difference may be simply that a metazoan cannot tolerate
unrestrained growth of cell populations. The cells are forced to stop
dividing when in contact with other cells. They can be induced to re-enter
the cycle by growth factors released, for example, when the skin is cut.
I would guess that if one coupled the limitations on growth necessary to
be a metazoan with accumulated errors, both during replication and
simple maintenance, one could explain gradual breakdown of tissue without
invoking the "hard-wire" model.

oh well, I've gone on too long already.


tony (few degrees are worth remembering--and none are worth predicting)

Pelletier
Molecular etc. Bio
Boulder, Co. 80309-0347

P.S. I think alot about information flow problems and would enjoy
discussions on that...if anyone wants to chat.

------------------------------

Date: 7 Jun 87 00:28:36 GMT
From: chiaraviglio@husc4.harvard.edu (lucius chiaraviglio)
Subject: Re: Taking AI models and applying them to biology...
References: <622@unicus.UUCP>

In article <622@unicus.UUCP> craig@unicus.UUCP (Craig D. Hubley) writes:
>I've heard that mammal cells appear to suffer a "hard" reproductive limit
>of 52 mitosis operations, and that meiosis "resets this counter" to 0.
>
>- any comment on this, bio-med types? Is it true?
>- Would a theory assuming a simple variable or random "counter" in each cell
>limiting its reproductive span better explain aging (programmed cells...)

Random failure may be a significant factor in aging, but a hard limit
on the number of times a cell may divide before it self-destructs has been
observed in tissue culture, where the cells are for the most part not
dependent on each other. Those cells which manage to get past the hard limit
are abnormal (although not necessarily cancerous) in ways beyond their mere
ability to keep dividing after they were supposed to self-destruct. I don't
remember most of the details of this, but I do remember that they tend to
become tetraploid (I think also aneuploid) due to an increase in the rate of
mitotic failure.

-- Lucius Chiaraviglio
lucius%tardis@harvard.harvard.edu
seismo!tardis.harvard.edu!lucius

------------------------------

Date: 11 Jun 87 15:22:13 GMT
From: hao!boulder!eddy@ames.arpa (Sean Eddy)
Subject: Models of biological aging
References: <622@unicus.UUCP>

In article <622@unicus.UUCP> craig@unicus.UUCP (Craig D. Hubley) writes:
>- Is mitosis sufficiently prone to failure to account for organ decline?
> - Statistically, one would expect exponential distribution for
> failure of single cells, the rate dependent on mitosis failure,
> and perhaps modified by other cell-killing factors
> - Does organ failure, medically, occur at the point where
> a parallel processing system, mathematically, would fail?
>
>I've heard that mammal cells appear to suffer a "hard" reproductive limit
>of 52 mitosis operations, and that meiosis "resets this counter" to 0.
>
>- any comment on this, bio-med types? Is it true?
>- Would a theory assuming a simple variable or random "counter" in each cell
>limiting its reproductive span better explain aging (programmed cells...)

The 'hard reproductive limit' you refer to is called the Hayflick
limit, and is by no means a hard and fast law.

Weismann (1891) proposed that aging was the result of our cells'
having finite replicative capacities. The advent of tissue culture
techniques showed however that certain cell lines (for instance,
the now-famous HeLa line) can be maintained in culture through
essentially an infinite number of divisions. It was not realized
until later that these cell lines were actually highly abnormal,
immortalized cells.

Hayflick (1965) did an experiment in which he took cells from
human embryos and grew them in vitro. He observed that the cells
would go through about 50 divisions (40-60) and then die. Thus
this limit of 50 CPD (cell population doublings) is called the
Hayflick limit. In actual practice, it is not thought that any
human cell approaches 50 divisions during the human lifetime.

Cells from older humans will undergo fewer divisions in vitro;
the older the source, the fewer divisions. There is considerable
variation in this data. Also, the slope of the loss of replicative
potential is small, at 0.2CPD lost/year of age.

A look at animals other than humans shows a rough correspondence
between Hayflick limit and lifespan. Mice have a maximum life span
of about two years, and a Hayflick limit of about 9.2 CPD;
Galapagos tortoises live to be older than 150 years and have
a Hayflick limit of 120 CPD.

Individuals afflicted with certain syndromes that accelerate the
rate of aging (progeria; Werner's syndrome; Down's syndrome) show
lower than normal Hayflick limits.

It is, as yet, not clear whether cell death, aging, and the Hayflick limit
are the result of a specific 'death program' in the genetics of
the individual cells; or whether they are the consequence of
progressive accumulation of damage.

Programmed cell death is now accepted as a real feature of
developmental biology. A well known example is the metamorphosis
of amphibians (the loss of a tadpole's tail). If one fuses
two cells, one immortalized and one normal, the resulting fusion
cell will be mortal; this result (of immortalization being
recessive) has been taken to suggest that mortality is due
to a genetic program. However, the model suffers from a
difficulty in that it becomes necessary to propose how the cell
knows when it is time to die.

The model of accumulated random damage explains the timing
of cell death, and different Hayflick limits can be explained
by different repair efficiencies. It becomes difficult to
explain cell immortalization by this model however, while
programmed cell death can explain immortalization as an escape from
the control of the program.

I hope that this information is of use in the discussion,
though I confess my negligible knowledge of AI makes it unclear
to me what we're really discussing... I wanted to point out,
however, that researchers in the field seem to consider
the models of random damage accumulation and genetically
programmed death as being opposed.


- Sean Eddy
- MCD Biology; U. of Colorado at Boulder; Boulder CO 80309
- eddy@boulder.colorado.EDU !{hao,nbires}!boulder!eddy
-
- "Why should the government subsidize intellectual curiosity?"
- - Ronald Reagan

------------------------------

Date: 12 Jun 87 16:08:04 GMT
From: hao!boulder!eddy@ames.arpa (Sean Eddy)
Subject: Re: Taking AI models and applying them to biology...
References: <622@unicus.UUCP>, <1331@sigi.Colorado.EDU>

In article <1331@sigi.Colorado.EDU> pell@boulder.Colorado.EDU writes:
>It would seem to me that the step that is likely to give the cell trouble
>is not mitosis but DNA replication. If a whole chromosome is lost or
>non-disjoined, that cell is in some serious trouble. Progressive
>accumulation of mistakes through replication and general maintenance seems
>a more likely culprit.

"General maintenance" is a very important thing to bring up. It seems
to me that replication/mitosis can't be the whole story in aging. One
must also propose other models because there are cells that do not
divide after a certain point, yet still age and die. Neurons are the
classic example; not only do they not divide, they cannot even
be replaced (in humans) if damaged.

- Sean Eddy
- MCD Biology; U. of Colorado at Boulder; Boulder CO 80309
- eddy@boulder.colorado.EDU !{hao,nbires}!boulder!eddy
-
- "So what. Big deal."
- - Emilio Lazardo

------------------------------

Date: 14 Jun 87 05:33:39 GMT
From: ihnp4!cuae2!ltuxa!ttrdc!levy@ucbvax.Berkeley.EDU (Daniel R. Levy)
Organization: AT&T, Skokie, IL
Subject: Neurons can't regenerate? (was: Re: Taking AI models...)
References: <622@unicus.UUCP>, <1331@sigi.Colorado.EDU>, <1349@sigi.Colorado.EDU>

In article <1349@sigi.Colorado.EDU>, eddy@boulder.Colorado.EDU (Sean Eddy) writes:
> ...there are cells that do not
> divide after a certain point, yet still age and die. Neurons are the
> classic example; not only do they not divide, they cannot even
> be replaced (in humans) if damaged.

Am I misinformed, then, when I hear about nerves growing back together in
people who have an accidentally severed appendage surgically reattached?
Also, what about the nerves which grow back into a wounded region of the
body, say an area of burned flesh?
--
|------------dan levy------------| Path: ..!{akgua,homxb,ihnp4,ltuxa,mvuxa,
| an engihacker @ | vax135}!ttrdc!ttrda!levy
| at&t data systems division | Disclaimer: try datclaimer.
|--------skokie, illinois--------|
-------

------------------------------

Date: 14 Jun 87 05:20:21 GMT
From: ihnp4!cuae2!ltuxa!ttrdc!levy@ucbvax.Berkeley.EDU (Daniel R. Levy)
Subject: Re: Models of biological aging
References: <622@unicus.UUCP>, <1343@sigi.Colorado.EDU>

In article <1343@sigi.Colorado.EDU>, eddy@boulder.Colorado.EDU (Sean Eddy) writes:
< Hayflick (1965) did an experiment in which he took cells from
< human embryos and grew them in vitro. He observed that the cells
< would go through about 50 divisions (40-60) and then die. Thus
< this limit of 50 CPD (cell population doublings) is called the
< Hayflick limit. In actual practice, it is not thought that any
< human cell approaches 50 divisions during the human lifetime.

Is this true even for skin cells? Where DOES all the new skin come
from as the old skin cells constantly die and flake off, then? (At
least I was under the impression that skin cells do this. I also once read
an article [Scientific American?] which said, as best as I could understand
it, that intestinal cells continually regenerate and get sloughed off during
the normal digestive process. That's a lot of cell division, or am I
mistaken?)
--
|------------dan levy------------| Path: ..!{akgua,homxb,ihnp4,ltuxa,mvuxa,
| an engihacker @ | vax135}!ttrdc!ttrda!levy
| at&t data systems division | Disclaimer: try datclaimer.
|--------skokie, illinois--------|
-------

------------------------------

Date: 14 Jun 87 16:37:31 GMT
From: hao!boulder!eddy@ames.arpa (Sean Eddy)
Subject: Re: Neurons can't regenerate? (was: Re: Taking AI models...)
References: <1331@sigi.Colorado.EDU>, <1349@sigi.Colorado.EDU>, <1757@ttrdc.UUCP>

In article <1757@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
>In article <1349@sigi.Colorado.EDU>, (Sean Eddy) writes:
>> ...there are cells that do not
>> divide after a certain point, yet still age and die. Neurons are the
>> classic example; not only do they not divide, they cannot even
>> be replaced (in humans) if damaged.
>
>Am I misinformed, then, when I hear about nerves growing back together in
>people who have an accidentally severed appendage surgically reattached?
>Also, what about the nerves which grow back into a wounded region of the
>body, say an area of burned flesh?

"Nerves growing back together" is different from neurons being
replaced. You can sever an axon, and the part of the axon
still attached to the cell body will regrow and form new
attachments, usually in the right way. But if you destroy the cell
body, end of story. To my knowledge, the cell bodies themselves
are not replaceable (yet; it would be nice if we could find
a way).


- Sean Eddy
- MCD Biology; U. of Colorado at Boulder; Boulder CO 80309
- eddy@boulder.colorado.EDU !{hao,nbires}!boulder!eddy
-
- Go Celtics!!
-------

------------------------------

Date: 15 Jun 87 00:11:40 GMT
From: muscat!sniff!warren@decwrl.dec.com (warren sypteras)
Organization: DEC Westfields ULTRIX System.
Subject: Re: Neurons can't regenerate? (was: Re: Taking AI models...)
References: <1331@sigi.Colorado.EDU>, <1349@sigi.Colorado.EDU>, <1757@ttrdc.UUCP>

In article <1757@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
>In article <1349@sigi.Colorado.EDU>, eddy@boulder.Colorado.EDU (Sean Eddy) writes:
>> ...there are cells that do not
>> divide after a certain point, yet still age and die. Neurons are the
>> classic example; not only do they not divide, they cannot even
>> be replaced (in humans) if damaged.
>
>Am I misinformed, then, when I hear about nerves growing back together in
>people who have an accidentally severed appendage surgically reattached?
>Also, what about the nerves which grow back into a wounded region of the
>body, say an area of burned flesh?
>--

No, you are not misinformed. Damaged nerves can heal, but only to a very
limited extent.

So you're both right. In Sean's case, neurons do die and are not replaced
through normal cell division. However, the brain is so..... redundant
that you could kill off billions of neurons and very possibly never know
it. Wars have historically been very beneficial in gaining insights into
brain (neuron) function. Also, in a classic experiment called the Island
Experiment, a monkey (I think) was taught a particular trick. The monkey
underwent a series of operations, each one of which removed a piece of
the monkey's cortex. To everyone's surprise, in the end, when all that was
left was a little island of cortex, the monkey could still do the trick.

In your case Daniel, you know that there are different degrees of
burns. Small amounts of nerve damage from small burns can be tolerated.
However, you must also know that severe burns cause irreparable damage
to nerves.

Think about this the next time you have a beer. The alcohol, it is said,
kills neurons. :-) :-)

A good book to read for those interested in the brain is "Mind and
Supermind" by Albert Rosenfield, Holt Rinehart Winston. It's easy
reading and very interesting.

Warren (an amateur doctor at best)
-------

------------------------------

End of NEURON-Digest
********************
