Neuron Digest	Monday, 22 Jan 1990		Volume 6 : Issue 4 

Today's Topics:
Research Assistant job (VLSI Implem. of Arti. Neural Nets)
Seminar: Applications of Neural Networks
Textbook announcement and outline
PhD program
NATO Conference Announcement
COGNITIVE NEUROSCIENCE RFP
TR: Networks That Learn Phonology


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Research Assistant job (VLSI Implem. of Arti. Neural Nets)
From: marwan@extro.ucc.su.oz.au (Marwan Jabri,
Sydney Univ. Elec. Eng., Tel: +61-2-692 2240)
Organization: University Computing Service, Uni. of Sydney, Australia.
Date: 05 Dec 89 05:13:32 +0000


Research Assistant


Microelectronic Implementations of Artificial Neural Networks

Systems Engineering and Design Automation Laboratory
Sydney University Electrical Engineering


Applications are invited from enthusiastic persons to work on the
development and implementation of analog/digital microelectronic building
blocks and simulation models for artificial neural networks. The project
aims at developing a multi-level simulation environment (mathematical,
structural and physical) for artificial neural networks.

Applicants should have an electrical engineering degree or equivalent.
The appointee may apply for enrollment towards a postgraduate degree
(part-time).

Preference will be given to applicants who have experience in artificial
neural networks, MOS analog or digital integrated circuit design or VLSI
computer aided design.

The appointment is for one year with prospect of renewal.

Salary range (according to qualifications)
Research Assistant Grade I ($22,585-$25,271)

Applications including Academic Record and CV may be sent to:


Dr M.A. Jabri,
Sydney University Electrical Engineering
NSW 2006 Australia
Tel: (+61-2) 692-2240
Fax: (+61-2) 692-3847
Email: marwan@ee.su.oz.au


------------------------------

Subject: Seminar: Applications of Neural Networks
From: MLWLG@CUNYVM.CUNY.EDU
Organization: The City University of New York - New York, NY
Date: 08 Dec 89 21:01:38 +0000


NEW YORK CHAPTER IEEE COMPUTER SOCIETY
Eighteenth Semiannual Seminar


APPLICATIONS OF NEURAL NETWORKS


Wednesday, February 21, 1990 --- 9:00am - 4:30pm
United Engineering Center Auditorium
345 East 47th Street, New York, N.Y.


There has been considerable publicity given recently to the field of
neural networks and neurocomputing. While the promises made have been
great, what is the true measure of this technology? What can actually
be accomplished, and what is only hyperbole? In this seminar you will
not only learn the basic theory of neurocomputing but also gain an
understanding of the ways in which it is being successfully applied in
real-world situations.

The Keynote Address will be given by Casimir C. "Casey" Klimasauskas,
the President and founder of NeuralWare, a leading developer of
neural network tools.


Fee: $125 for IEEE members; $150 for non-members, with $25 discount
for early registration with payment before February 7, 1990. Fee
includes seminar proceedings, lunch and coffee. For special student
and group rates, and for further information contact:

Jim Barbera (212) 395-8765

Andrew Weigel (212) 440-8533

Larry Muller MLWLG@CUNYVM.CUNY.EDU

Note: The above program is subject to change.
- ----------------------------------------------------------------------

Registration for "Applications of Neural Networks"

To: Andrew Weigel, c/o Eclipse Software, 30 West 15th Street,
Suite 5N, New York, N.Y. 10011.

Make checks payable to "IEEE Computer Society"


Name:_______________________________ IEEE No. ___________

Addr:_______________________________

_______________________________

_______________________________ Phone _____________

------------------------------

Subject: Textbook announcement and outline
From: B344DSL%UTARLG.ARL.UTEXAS.EDU@ricevm1.rice.edu
Date: Sun, 10 Dec 89 13:56:00 -0600


This message is an announcement of a forthcoming graduate
textbook on neural networks by Daniel S. Levine. The title
of the book is Introduction to Neural and Cognitive
Modeling, and the publisher is Lawrence Erlbaum Associates,
Inc. The book should be in production early in 1990 and so,
with luck, should be ready by the start of the Fall 1990
semester at universities. Chapters 2 to 7 will contain
homework exercises. Some of the homework problems will
involve computer simulations of models already in the
literature. Others will involve thought experiments about
whether a particular network can model a particular
cognitive process, or how that network might be modified to
do so.

The table of contents follows. Please contact the author
or publisher for further information.

Author: Daniel S. Levine
Department of Mathematics
University of Texas at Arlington
Arlington, TX 76019-9408
817-273-3598
b344dsl@utarlg.bitnet

Publisher: Lawrence Erlbaum Associates, Inc.
365 Broadway
Hillsdale, NJ 07642
201-666-4110



Table of Contents:

PREFACE

CHAPTER 1: BRAIN AND MACHINE: THE SAME PRINCIPLES?

What are Neural Networks?
What are Some Principles of Neural Network Theory?
Methodological Considerations


CHAPTER 2: HISTORICAL OUTLINE

2.1 -- Digital Approaches

The McCulloch-Pitts network
Early Approaches to Modeling Learning: Hull and Hebb
Rosenblatt's Perceptrons
Some Experiments with Perceptrons
The Divergence of Artificial Intelligence and Neural
Modeling

2.2 -- Continuous and Random-net Approaches

Rashevsky's Work
Early Random Net Models
Reconciling Randomness and Specificity

2.3 -- Definitions and Detailed Rules for Rosenblatt's
Perceptrons


CHAPTER 3: ASSOCIATIVE LEARNING AND SYNAPTIC PLASTICITY


3.1 -- Physiological Bases for Learning

3.2 -- Rules for Associative Learning

Outstars and Other Early Models of Grossberg
Anderson's Connection Matrices
Kohonen's Work

3.3 -- Learning Rules Related to Changes in Node Activities

Klopf's Hedonistic Neurons and the Sutton-Barto
Learning Rule
Error Correction and Back Propagation
The Differential Hebbian Idea
Gated Dipole Theory

3.4 -- Associative Learning of Patterns

Kohonen's Recent Work: Autoassociation and
Heteroassociation
Kosko's Bidirectional Associative Memory


3.5 -- Equations and Some Physiological Details

Neurophysiological Principles
Equations for Grossberg's Outstar
Kohonen's Early Equations (Example: Simulation of Face
Recognition)
Derivation of the Back Propagation Learning Law Due to
Rumelhart, Hinton, and Williams
Equations for Sutton and Barto's Learning Network
Gated Dipole Equations Due to Grossberg
Kosko's Bidirectional Associative Memory (BAM)
Kohonen's Autoassociative Maps


CHAPTER 4: COMPETITION, INHIBITION, SPATIAL CONTRAST
ENHANCEMENT, AND SHORT-TERM MEMORY

4.1 -- Early Studies and General Themes (Contrast
Enhancement, Competition, and Normalization)

Nonrecurrent Versus Recurrent Lateral Inhibition

4.2 -- Lateral Inhibition and Excitation Between
Sensory Representations

Wilson and Cowan's Work
Work of Grossberg and Colleagues
Work of Amari and Colleagues
Energy Functions in the Cohen-Grossberg and Hopfield-
Tank Models
The Implications of Approach to Equilibrium

4.3 -- Competition and Cooperation in Visual Pattern
Recognition Models

Visual Illusions
Boundary Detection Versus Feature Detection
Binocular and Stereoscopic Vision
Comparison of Grossberg's and Marr's Approaches

4.4 -- Uses of Lateral Inhibition in Higher-level
Processing

4.5 -- Equations for Various Competitive and Lateral
Inhibition Models

Equations of Sperling and Sondhi
Equations of Wilson and Cowan
Equations of Grossberg and his Co-workers: Analytical
Results
Equations of Hopfield and Tank
Equations of Amari and Arbib


CHAPTER 5: CONDITIONING, ATTENTION, REINFORCEMENT, AND
COMPLEX ASSOCIATIVE LEARNING

5.1 -- Network Models of Classical Conditioning

Early Work: Uttley and Brindley
Rescorla and Wagner's Psychological Model
Grossberg: Drive Representations and Synchronization
Aversive Conditioning and Extinction
Differential Hebbian Theory Versus Gated Dipole Theory

5.2 -- Attention and Short Term Memory in the Context of
Conditioning

Grossberg's Approach to Attention
Sutton and Barto's Approach to Blocking
Some Contrasts Between the Above Two Approaches
Further Connections with Invertebrate Neurophysiology
Gated Dipoles and Aversive Conditioning

5.3 -- Equations for Some Conditioning and Associative
Learning Models

Klopf's Drive-reinforcement Model
Some Later Variations of the Sutton-Barto model:
Temporal Difference
The READ Circuit of Grossberg, Schmajuk, and Levine
The Aplysia Model of Gingrich and Byrne


CHAPTER 6: CODING AND CATEGORIZATION

6.1 -- Interactions Between Short and Long Term Memory in
Code Development: Examples from Vision

Malsburg's Model with Synaptic Conservation
Grossberg's Model with Pattern Normalization
Mathematical Results of Grossberg and Amari
Feature Detection Models with Stochastic Elements
From Feature Coding to Categorization

6.2 -- Supervised Classification Models

The Back Propagation Network and its Variants
Some Models from the Anderson-Cooper School

6.3 -- Unsupervised Classification Models

The Rumelhart-Zipser Competitive Learning Algorithm
Adaptive Resonance Theory
Edelman and Neural Darwinism

6.4 -- Translation and Scale Invariance

6.5 -- Equations for Various Coding and Categorization
Models

Malsburg's and Grossberg's Development of Feature
Detectors
Some Implementation Issues for Back Propagation
Equations
Brain-state-in-a-box Equations
Rumelhart and Zipser's Competitive Learning Equations
Adaptive Resonance Equations


CHAPTER 7: OPTIMIZATION, CONTROL, DECISION MAKING, AND
KNOWLEDGE REPRESENTATION

7.1 -- Optimization and Control

Hopfield, Tank, and the Traveling-Salesman Problem
Simulated Annealing and Boltzmann Machines
Motor Control: the Example of Eye Movements
Motor Control: Arm Movements
Speech Recognition and Synthesis
Robotic Control

7.2 -- Decision Making and Knowledge Representation

What, if Anything, do Biological Organisms Optimize?
Affect, Habit, and Novelty in Neural Network Theories
Neural Control Circuits, Neurochemical Modulation, and
Mental Illness
Some Comments on Models of Specific Brain Areas
Knowledge Representation: Letters and Words
Knowledge Representation: Concepts and Inference

7.3 -- Equations for a Few Neural Networks Performing
Complex Tasks

Hopfield and Tank's "Traveling Salesman" Network
The Boltzmann Machine
Grossberg and Kuperstein's Eye Movement Network
VITE and Passive Update of Position (PUP) for Arm
Movement Control
Affective Balance and Decision Making Under Risk


CHAPTER 8: A FEW RECENT ADVANCES IN NEUROCOMPUTING AND
NEUROBIOLOGY

8.1 -- Some "Toy" and Real World Computing Applications
8.2 -- Some Biological Discoveries


APPENDIX 1: BASIC FACTS OF NEUROBIOLOGY

The Neuron
Synapses, Transmitters, Messengers, and Modulators
Invertebrate and Vertebrate Nervous Systems
Functions of Subcortical Regions
Functions of the Mammalian Cerebral Cortex


APPENDIX 2: DIFFERENCE AND DIFFERENTIAL EQUATIONS IN NEURAL
NETWORKS

Example: the Sutton-Barto Difference Equations
Differential Versus Difference Equations
Outstar Equations: Network Interpretation and Numerical
Implementation
The Chain Rule and Back Propagation
Dynamical Systems: Steady States, Limit Cycles, and
Chaos


------------------------------

Subject: PhD program
From: mccoy@aristotle.ils.nwu.edu (Jim McCoy)
Date: 11 Dec 89 16:47:19 +0000

NEW! NEW! NEW! NEW! NEW! NEW! NEW! NEW! NEW!

NORTHWESTERN UNIVERSITY ANNOUNCES THE FORMATION OF

THE INSTITUTE FOR THE LEARNING SCIENCES



The Institute for the Learning Sciences at Northwestern University is an
interdisciplinary center devoted to research in artificial intelligence,
cognitive science, education, and educational software. Reflecting
its interdisciplinary nature, the faculty of the Institute is drawn from
several different departments and schools at Northwestern, including the
Department of Electrical Engineering and Computer Science, the School of
Education and Social Policy, and the Department of Psychology. The
Institute was formed in September, 1989, under the direction of Professor
Roger C. Schank, and is currently home to approximately 50 researchers,
including 25 graduate students.

The Institute for the Learning Sciences coordinates a graduate program
leading to a Ph.D. in any of three fields: Computer Science, Psychology,
or Education. Graduate students will follow a core curriculum for the
first year which is independent of the particular Ph.D. they have decided
to pursue. In general, 3-6 quarter courses will be taken in the second
year, based on consultation with the student and the student's advisor.
The majority of the second year and the remainder of the graduate career
are
dedicated to research under the direction of the faculty of the
Institute.

Fields of study related to the Institute: The Institute's mission
encompasses both basic and applied research, including a major focus on
the application of artificial intelligence to problems of education and
training. The Institute staff includes professional programmers devoted
to the development of educational software systems for schools and
corporate training. Other key research areas include:

- Scientific problems of language, thought, and memory.
- The construction of computer programs that reason, learn, conduct
conversations, display characteristics of human memory, plan, and
contain realistic models of the world.
- Understanding how children learn language, learn to think, plan and
reason.
- Education, especially the development of effective teaching methods.
- Computer vision.
- Models of emotion, human problem-solving and decision-making.

Faculty members, department and research interests:

Roger C. Schank, Director
Electrical Engineering and Computer Science, Psychology, and
Education and Social Policy; AI in education; memory processing,
case-based reasoning; natural language.

Lawrence Birnbaum
Electrical Engineering and Computer Science; Natural language
understanding; opportunistic planning; acquisition of planning
strategies.

Allan Collins
Education and Social Policy; Computers and education; human semantic
information processing; teaching and learning.

Gregg Collins
Electrical Engineering and Computer Science; Machine learning,
planning, problem-solving.

Paul Cooper
Electrical Engineering and Computer Science; Connectionist models;
computer vision.

Lawrence Henschen
Electrical Engineering and Computer Science; Automated reasoning and
inference; logic and applications.

Marvin Manheim
J.L. Kellogg Graduate School of Management; Computer-assisted human
problem solving; use of information technologies in organizations;
manufacturing, logistics, transportation, and global enterprise.

Gail McKoon
Psychology; Psycholinguistics, memory, cognition.

Andrew Ortony
Education and Social Policy, and Psychology; Knowledge representation;
language comprehension; models of cognition, emotion, and motivation.

Roger Ratcliff
Psychology; Memory; language and cognition; mathematical and
connectionist modeling.

William Revelle
Psychology; Interrelationships of personality, motivation, and
cognitive performance.

Christopher K. Riesbeck
Electrical Engineering and Computer Science; Natural language
analysis; case-based reasoning; intelligent interfaces for training
and tutoring.

Dirk Ruiz
J.L. Kellogg Graduate School of Management; Learning; problem-solving;
integrated cognitive architectures (SOAR); managerial cognition.

Applications for the Ph.D. program are currently being accepted for Fall,
1990. Full funding is available, including summer support. Preferred
applicants will have practical experience in a relevant area, such as
computer programming or classroom education, an undergraduate degree in a
field of study related to the Institute's research mission, and a record
of high academic achievement. For additional information, write to the
Graduate Programs Coordinator at The Institute for the Learning Sciences,
Northwestern University, 1890 Maple Avenue, Evanston, IL 60201; phone
(708) 491-3500.

------------------------------

Subject: NATO Conference Announcement
From: R09614%BBRBFU01.BITNET@vma.CC.CMU.EDU
Date: Mon, 18 Dec 89 14:24:49 +0100

ANNOUNCEMENT:

_______________________________________
NATO Advanced Research Workshop on

Self-organization, Emerging Properties and Learning.

Center for Studies in Statistical Mechanics and Complex Systems

The University of Texas
Austin, Texas, USA

March 12-14, 1990
_______________________________________
Topics
- Self-Organization and Dynamics in Networks of Interacting Elements
- Dynamical Aspects of Neural Activity: Experiments and Modelling
- From Statistical Physics to Neural Networks
- Role of Dynamical Attractors in Cognition and Memory
- Dynamics of Learning in Biological and Social Systems

The goal of the workshop is to review recent progress on self-
organization and the generation of spatio-temporal patterns in multi-unit
networks of interacting elements, with special emphasis on the role of
coupling and connectivity on the observed behavior. The importance of
these findings will be assessed from the standpoint of information and
cognitive sciences, and their possible usefulness in the field of
artificial intelligence will be discussed. We will compare the collective
behavior of model networks with the dynamics inferred from the analysis
of cortical activity. This comparison should lead to the design of
more realistic networks, sharing some of the basic properties of
real-world neurons.

Sponsors
--------
- NATO International Scientific Exchange Programmes
- International Solvay Institutes for Physics and Chemistry,
  Brussels, Belgium
- Center for Statistical Mechanics and Complex Systems, The
  University of Texas at Austin
- IC2 Institute of The University of Texas at Austin

International Organizing Committee
----------------------------------
Ilya Prigogine, The University of Texas at Austin and Free
University of Brussels
Gregoire Nicolis, Free University of Brussels
Agnes Babloyantz, Free University of Brussels
J. Demongeot, University of Grenoble, France
Linda Reichl, The University of Texas at Austin

Local Organizing Committee
--------------------------
Ilya Prigogine, George Kozmetsky, Ping Chen, Linda Reichl,
William Schieve, Robert Herman, Harry Swinney, Fred Phillips

For Further Information Contact:
--------------------------------
Professor Linda Reichl
Center for Statistical Mechanics
The University of Texas
Austin, TX 78712, USA

Phone: (512) 471-7253; Fax: (512) 471-9637;
Bitnet: CTAA450@UTA3081 or PAPE@UTAPHY


------------------------------

Subject: COGNITIVE NEUROSCIENCE RFP
From: Steve Hanson <jose@neuron.siemens.com>
Date: Tue, 19 Dec 89 17:01:37 -0500


MCDONNELL-PEW PROGRAM IN COGNITIVE NEUROSCIENCE

December 1989

Individual Grants-in-Aid for Research and Training

Supported jointly by the James S. McDonnell Foundation and The Pew
Charitable Trusts

INTRODUCTION

The McDonnell-Pew Program in Cognitive Neuroscience has been created
jointly by the James S. McDonnell Foundation and The Pew Charitable
Trusts to promote the development of cognitive neuroscience. The
foundations have allocated $12 million over an initial three-year period
for this program.

Cognitive neuroscience attempts to understand human mental events by
specifying how neural tissue carries out computations. Work in cognitive
neuroscience is interdisciplinary in character, drawing on developments
in clinical and basic neuroscience, computer science, psychology,
linguistics, and philosophy. Cognitive neuroscience excludes
descriptions of psychological function that do not address the underlying
brain mechanisms and neuroscientific descriptions that do not speak to
psychological function.

The program has three components.

(1) Institutional grants have been awarded for the
purpose of creating centers where cognitive scientists and
neuroscientists can work together.

(2) To encourage Ph.D. and M.D. investigators in
cognitive neuroscience, small grants-in-aid will be
awarded for individual research projects.

(3) To encourage Ph.D. and M.D. investigators to acquire
skills for interdisciplinary research, small training
grants will be awarded.

During the program's initial three-year period, approximately $4 million
will be available for the latter two components -- individual
grants-in-aid for research and training -- which this announcement
describes.

RESEARCH GRANTS

The McDonnell-Pew Program in Cognitive Neuroscience will issue a limited
number of awards to support collaborative work by cognitive
neuroscientists. Applications are sought for projects of exceptional
merit that are not currently fundable through other channels, and from
investigators who are not already supported by institutional grants under
this Program.

Preference will be given to projects requiring collaboration or
interaction between at least two subfields of cognitive neuroscience.
The goal is to encourage broad, national participation in the development
of the field and to facilitate the participation of investigators outside
the major centers of cognitive neuroscience.

Submissions will be reviewed by the program's advisory board. Grant
support under this component is limited to $30,000 per year for two
years, with indirect costs limited to 10 percent of direct costs. These
grants are not renewable.

The program is looking for innovative proposals that would, for example:

-- combine experimental data from cognitive psychology
   and neuroscience;

-- explore the implications of neurobiological methods for
   the study of the higher cognitive processes;

-- bring formal modeling techniques to bear on cognition;

-- use sensing or imaging techniques to observe the brain during
   conscious activity;

-- make imaginative use of patient populations to analyze cognition;

-- develop new theories of the human mind/brain system.

This list of examples is necessarily incomplete but should suggest the
general kind of proposals desired. Ideally, a small grant-in-aid for
research should facilitate the initial exploration of a novel or risky
idea, with success leading to more extensive funding from other sources.

TRAINING GRANTS

A limited number of grants will also be awarded to support the training
of investigators in cognitive neuroscience. Here again, the objective is to
support proposals of exceptional merit that are underfunded or unlikely
to be funded from other sources.

Some postdoctoral awards for exceptional young scientists will be
available; postdoctoral stipends will be funded at prevailing rates at
the host institution, and will be renewable annually for periods up to
three years. Highest priority will be given to candidates seeking
postdoctoral training outside the field of their previous training.
Innovative programs for training young scientists, or broadening the
experience of senior scientists, are also encouraged. Some examples of
appropriate proposals follow.

-- Collaboration between a junior scientist in a relevant
   discipline and a senior scientist in a different discipline
   has been suggested as an effective method for developing the field.

-- Two senior scientists might wish to learn each other's
   discipline through a collaborative project.

-- An applicant might wish to visit several laboratories
   in order to acquire new research techniques.

-- Senior researchers might wish to investigate new methods
   or technologies in their own fields that are
   unavailable at their home institutions.

Here again, examples can only suggest the kind of training experience
that might be considered appropriate.

APPLICATIONS

Applicants should submit five copies of a proposal no longer than 10
pages (5,000 words).

Proposals for research grants should include:

-- a description of the work to be done and where it might
   lead;

-- an account of the investigator's professional qualifications
   to do the work.

Proposals for training grants should include:

-- a description of the training sought and its relationship to the
   applicant's work and previous training;

-- a statement from the mentor as well as the applicant concerning
   the acceptability of the training plan.

Proposals for both research grants and training grants should include:

-- an account of any plans to collaborate with other cognitive
   neuroscientists;

-- a brief description of the available research facilities;

-- no appendices.

The proposal should be accompanied by the following separate information:

-- a brief, itemized budget and budget justification for the
   proposed work, including direct costs, with indirect costs not
   to exceed 10 percent of direct costs;

-- curricula vitae of the participating investigators;

-- evidence that the sponsoring organization is a
   nonprofit, tax-exempt, public institution;

-- an authorized form indicating clearance for the use
   of human and animal subjects;

-- an endorsement letter from the officer of the sponsoring
   institution who will be responsible for administering the
   grant.

Applications received on or before March 1 will be acted on by the
following September 1; applications received on or before September 1
will be acted on by the following March 1.

INFORMATION
For more information contact:

McDonnell-Pew Program in Cognitive Neuroscience
Green Hall 1-N-6
Princeton University
Princeton, New Jersey 08544-1010
Telephone: 609-258-5014
Facsimile: 609-258-3031
Email: cns@confidence.princeton.edu

ADVISORY BOARD

Emilio Bizzi, M.D.
Eugene McDermott Professor in the Brain Sciences and Human Behavior
Chairman, Department of Brain and Cognitive Sciences
Whitaker College
Massachusetts Institute of Technology, E25-526
Cambridge, Massachusetts 02139

Sheila Blumstein, Ph.D.
Professor of Cognitive and Linguistic Sciences
Dean of the College
Brown University
University Hall, Room 218
Providence, Rhode Island 02912

Stephen J. Hanson, Ph.D.
Group Leader
Learning and Knowledge Acquisition Research Group
Siemens Research Center
755 College Road East
Princeton, New Jersey 08540

Jon Kaas, Ph.D.
Centennial Professor
Department of Psychology
Vanderbilt University
Nashville, Tennessee 37240

George A. Miller, Ph.D.
James S. McDonnell Distinguished University
Professor of Psychology
Department of Psychology
Princeton University
Princeton, New Jersey 08544

Mortimer Mishkin, Ph.D.
Laboratory of Neuropsychology
National Institute of Mental Health
9000 Rockville Pike
Building 9, Room 1N107
Bethesda, Maryland 20892

Marcus Raichle, M.D.
Professor of Neurology and Radiology
Department of Radiology
Washington University School of Medicine
Barnes Hospital
510 S. Kingshighway, Campus Box 8131
St. Louis, Missouri 63110

Endel Tulving, Ph.D.
Department of Psychology
University of Toronto
Toronto, Ontario M5S 1A1
Canada

------------------------------

Subject: TR: Networks That Learn Phonology
From: Michael Gasser <gasser@iuvax.cs.indiana.edu>
Date: Tue, 19 Dec 89 21:32:02 -0500


NETWORKS THAT LEARN PHONOLOGY

Michael Gasser
Chan-Do Lee
Computer Science Department
Indiana University
Bloomington, IN 47405

Technical Report 300
December 1989

Abstract:

Natural language phonology presents a challenge to connectionists
because it is an example of apparently symbolic, rule-governed
behavior. This paper describes two experiments investigating the
power of simple recurrent networks (SRNs) to acquire aspects of
phonological regularity. The first experiment demonstrates the
ability of an SRN to learn harmony constraints, restrictions on the
cooccurrence of particular types of segments within a word. The
second experiment shows that an SRN is capable of learning the kinds
of phonological alternations that appear at morpheme boundaries, in
this case those occurring in the regular plural forms of English nouns.
This behavior is usually characterized in terms of a derivation from a
more to a less abstract level, and in previous connectionist treatments
(Rumelhart & McClelland, 1986; Plunkett & Marchman, 1989) it has been
treated as a process of deriving the combined form (plural) from the
simpler form (stem). Here the behavior takes the more psychologically
plausible form of producing a sequence of segments given a meaning, or
a meaning given a sequence of segments. This is accomplished by having
both segmental and semantic inputs and outputs in the network. The
network is trained to auto-associate the
current segment and the meaning and to predict the next phoneme.
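
As a concrete illustration of this setup, here is a minimal Python/NumPy
sketch, assuming an Elman-style SRN trained with one-step (truncated)
backpropagation. All names, layer sizes, and the toy data below are
illustrative assumptions, not details taken from the report.

  import numpy as np

  rng = np.random.default_rng(0)

  # Illustrative sizes (assumptions): a small segment inventory,
  # a short meaning vector, and a modest hidden layer.
  N_SEG, N_SEM, N_HID = 8, 5, 16
  N_IN = N_SEG + N_SEM            # current segment + meaning
  N_OUT = N_SEG + N_SEM + N_SEG   # auto-association + next-segment prediction

  W_ih = rng.normal(0.0, 0.1, (N_HID, N_IN + N_HID))  # input+context -> hidden
  W_ho = rng.normal(0.0, 0.1, (N_OUT, N_HID))         # hidden -> output

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def train_word(segments, meaning, lr=0.5):
      """One pass over a word: at each step the net sees the current
      segment plus the (static) meaning vector and is trained to
      reproduce both and to predict the next segment."""
      global W_ih, W_ho
      context = np.zeros(N_HID)   # Elman context units
      for t in range(len(segments) - 1):
          x = np.concatenate([segments[t], meaning, context])
          h = sigmoid(W_ih @ x)
          y = sigmoid(W_ho @ h)
          target = np.concatenate([segments[t], meaning, segments[t + 1]])
          # One-step backpropagation; the context is treated as a fixed
          # input (truncated BPTT of depth 1, a common SRN simplification).
          d_out = (y - target) * y * (1.0 - y)
          d_hid = (W_ho.T @ d_out) * h * (1.0 - h)
          W_ho -= lr * np.outer(d_out, h)
          W_ih -= lr * np.outer(d_hid, x)
          context = h

  # Toy data (assumption): one three-segment "word" paired with a random
  # meaning vector; real training would use a phonological corpus.
  word = [np.eye(N_SEG)[i] for i in (1, 4, 2)]
  meaning = rng.random(N_SEM)
  for _ in range(500):
      train_word(word, meaning)

After training, the last N_SEG output units at each step hold the
network's prediction of the following segment; this is where
regularities such as harmony constraints or the English plural
alternation would show up.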



To order copies of this tech report, send mail to Nancy Garrett at
nlg@cs.indiana.edu / Computer Science Department, Indiana University,
Bloomington, IN 47405.




------------------------------

End of Neuron Digest [Volume 6 Issue 4]
***************************************
