NEURON Digest	Thu Apr  2 10:29:33 CST 1987  Volume 2 Number 9 

Today's Topics:

Administrivia
Videos, films and tapes
Technical Report announcement
UCB CogSci Seminar
Connectionist forum at University of Toronto, March 30th
IEEE Conference on Neural Information Processing
colloquium : Integrating psychology with a cognitive science model
Scheduled Talk.
Terrence J. Sejnowski talks
Colloquium: VLSI Implementation of Neural Systems (ASU)
Neuro-session, IEEE/Bio-Med Soc.
categories and counterfactuals


----------------------------------------------------------------------

Date: Thu, 02 Apr 87, 10:42:51 CST
From: gately@ti-csl.csnet (Michael Gately)
Subject: administrivia

This brief note is just to let you know about the 'tardiness' of
NEURON over the past few weeks. I have been trying to move the
files from one VAX (VMS) to another (Unix). In the process I
have had to relearn Unix and its various eccentricities
(anyone ever try to append files with CAT FILE.1 >FILE.2?). I
think that I have all these problems solved now and should be
able to get this digest out in a more timely manner.

I will be sending another note in the near future in an attempt
to consolidate the many sites and perhaps to unload some of the
BITNET traffic. But this won't be until next month.

Finally, a request: Would each of you try to send requests, etc.
to NEURON@TI-CSL.CSNET from now on? It seems that if you send
information to GATELY... I must then forward it to the other
computer, and when the digest formatter tries to work over the
'forwarded' data, I get all kinds of gibberish. Thank you.

Regards,
Mike


------------------------------

From: tenorio@ee.ecn.purdue.edu (Manoel F Tenorio)
Subject: Videos, films and tapes
Date: Wed, 01 Apr 87 11:48:12 EST

We are looking for interesting samples of different NN applications to be
presented at in-house courses and seminars at Purdue. If you have an
interesting application (tape, film or just a description) that you
would allow us to include in these presentations, contact us at:
M. F. Tenorio
School of Electrical Engineering
Purdue University
W. Lafayette, IN 47907

Please include details about the terms and conditions you would like us
to observe in using your material, and whether you want to be informed
about the presentations and the exposure your material is getting.


------------------------------

Date: 30-MAR-1987 14:36
From: JAB%INDIANA.CSNET
Subject: Technical Report announcement


COMPLEX COGNITIVE INFORMATION-PROCESSING:
A COMPUTATIONAL ARCHITECTURE WITH A CONNECTIONIST IMPLEMENTATION

John Barnden,
TR 211, Computer Science Department,
Indiana University, Bloomington, IN 47405-4101,
Dec. 1986.


ABSTRACT

Much of human cognition appears to involve the rapid processing of
temporary, complex information structures. These might, for instance,
express the meanings of heard utterances, or represent the current
environment during common-sense reasoning or planning. There is a need to
explain how data structures of appropriate complexity and expressive power
might be realized in the physiological ``hardware'' of the brain. The work
reported here provides one partial though detailed answer. It is centered
on an abstract computational architecture or system that can embody complex
data structures and their manipulations. The architecture can in turn be
realized straightforwardly in neural or connectionist networks. The
resulting mapping from data structures to neural/connectionist networks
differs significantly from those previously entertained, and successfully
addresses some contemporary challenges to the field of connectionism.

Facts about brain circuitry provide useful constraints on the architecture
and the information processing it supports. Conversely, the architecture's
neural implementation leads to detailed [tentative!] suggestions about the
operation of brain circuitry. The architecture has special psychological
significance in its support of spatial-analogue representation, where an
array-like medium acts as an internal model of an array of spatial regions.
In particular, the ``spatial mental models'' of the psychologist
Johnson-Laird are easily accommodated. The architectural (and hence the
neural) realization of Johnson-Laird's theory is a major focus of the
research.
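
As a rough illustration only, and not Barnden's architecture itself,
the following Python sketch shows the kind of operation an array-like
spatial medium can support: symbolic tokens are placed in a small grid
and a simple spatial relation is read off it, in the spirit of
Johnson-Laird's spatial mental models. The grid size, token names and
helper functions are invented for the example.

    # A toy "spatial mental model": a 2-D array medium holding symbolic tokens.
    # Purely illustrative; the grid size and token names are invented here.

    grid = [[None] * 5 for _ in range(5)]      # 5x5 array of spatial regions

    def place(token, row, col):
        """Put a token into one region of the array medium."""
        grid[row][col] = token

    def find(token):
        """Return the (row, col) of a token, or None if it is absent."""
        for r, row in enumerate(grid):
            for c, cell in enumerate(row):
                if cell == token:
                    return (r, c)
        return None

    def left_of(a, b):
        """Is token a in a column strictly to the left of token b?"""
        pa, pb = find(a), find(b)
        return pa is not None and pb is not None and pa[1] < pb[1]

    # "The fork is to the left of the plate; the knife is to the right of it."
    place("fork", 2, 0)
    place("plate", 2, 2)
    place("knife", 2, 4)
    print(left_of("fork", "knife"))            # True, read off the array medium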


------------------------------

Date: Fri, 20 Mar 87 10:05:47 PST
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science Program)
Subject: UCB CogSci Seminar


BERKELEY COGNITIVE SCIENCE PROGRAM

Cognitive Science Seminar - IDS 237B

Tuesday, March 31, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
2515 Tolman Hall

``From Signals to Symbols in Neural Network Models''
Terrence J. Sejnowski
Division of Biology
California Institute of Technology

At the earliest stages of sensory processing and at the final
common motor pathways, neural computation is best described as
signal processing. Somewhere in the nervous system these signals
are used to form internal representations and to make decisions
that appear symbolic. A first step toward understanding the
transition from signals to symbols can be made by studying the
development and internal structure of massively parallel nonlinear
networks that learn to solve difficult signal identification and
categorization problems. The concept of ``feature detector'' is
explored in a problem concerning sonar target identification that
appears to be solved by humans and network models in similar ways.
The concept of a ``semi-distributed population code'' is
illustrated by the problem of pronouncing English text, in which
invariant internal codes emerge not at the level of single
processing units, but at the level of cell assemblies.
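
As a generic textbook illustration of a coarse population code, and
not the particular codes discussed in the talk, the Python sketch
below encodes a scalar value across several units with overlapping
Gaussian tuning curves and recovers it from the ensemble activity;
the number of units and the tuning width are invented for the example.

    import numpy as np

    # Preferred values of the units, spanning 0..1 (numbers invented).
    preferred = np.linspace(0.0, 1.0, 8)
    width = 0.15                                   # tuning-curve width

    def encode(x):
        """Activity of each unit for stimulus x: overlapping Gaussian tuning."""
        return np.exp(-((x - preferred) ** 2) / (2.0 * width ** 2))

    def decode(activity):
        """Population read-out: activity-weighted average of preferred values."""
        return float(np.sum(activity * preferred) / np.sum(activity))

    a = encode(0.37)
    print(np.round(a, 2))       # several units are partially active at once
    print(round(decode(a), 3))  # the value is recovered from the ensemble
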
---------------------------------------------------------------

UPCOMING TALKS

Apr. 28: Eran Zaidel, Psychology Dept., Brain Research Institute, UCLA
---------------------------------------------------------------
ELSEWHERE ON CAMPUS

SESAME Colloquium: Robbie Case, Ontario Institute for Studies in
Education, Monday, March 30, at 4:00 p.m., 2515 Tolman.
---------------------------------------------------------------


------------------------------

Date: Mon, 16 Mar 87 21:36:48 est
From: Graeme Hirst <im4u!ut-sally!seismo!utai!gh>
Subject: Connectionist forum at University of Toronto, March 30th

CONNECTIONISM AND COMPUTATION: MODELS OF MIND IN THE
COGNITIVE SCIENCES

Monday, March 30, 1987, West Hall, University College, University
of Toronto

Sponsored by: The McLuhan Program in Culture and Technology
The Specialist Program in Cognitive Science and
Artificial Intelligence.
Dept. of Psychology, University of Toronto
Dept. of Computer Science, University of Toronto

PROGRAM:

Morning Session: Chair: Alison Gopnik, Depts. of Psychology
and Linguistics, University of Toronto.

9:30 Geoffrey Hinton, Dept. of Computer Science, Carnegie-Mellon
University.

What's in a Symbol: Representing Hierarchical Structures in
Connectionist Networks

10:45 David Kirsh, Artificial Intelligence Laboratory, M.I.T.

Some Mechanisms of Thought

Afternoon Session: Chair: Lynd Forguson, Dept. of Philosophy,
University of Toronto

2:00 Paul Smolensky, Dept. of Computer Science, University of
Colorado.

On the Relation Between Symbolic and Connectionist
Computation

3:15 Zenon Pylyshyn, Center for Cognitive Science, University of
Western Ontario

What are Connectionist Models About?

4:30 Discussion



For further details contact Sylvia Wookey, McLuhan Program in
Culture and Technology, 39a Queens Park Circle, Toronto, Ont.,
M5S 1A1, 416-978-7026.


------------------------------

Date: 23 Mar 1987 19:14-EST
From: CLAU@a.isi.edu
Subject: IEEE Conference on Neural Information Processing

This is a reminder that there will be an IEEE Conference on

Neural Information Processing Systems - Natural and Synthetic

November 8-12, 1987 (Sun - Thur), Boulder, Colorado

General Chairman: Edward C. Posner

Program Chairman: Yaser Abu-Mostafa


Submission of Contributed Papers -

Authors should send six copies of a 500-word summary and one copy of a
50-100 word abstract clearly stating their results to the Program
Chairman:

Professor Yaser S. Abu-Mostafa
Caltech 116-81
Pasadena, CA 91125

The deadline for receiving the abstract and summary is May 1, 1987.
Earlier submission is encouraged.


For information concerning the conference, please contact:

Dr. Edward C. Posner
Neural Networks Meeting
Caltech 116-81
Pasadena, CA 91125
Telephone (818) 356-4852 or (818) 354-6224


------------------------------

Date: Wed 25 Mar 87 12:35:58-CST
From: Nandu Desai <CC.DESAI@R20.UTEXAS.EDU>
Subject: colloquium : Integrating psychology with a cognitive science model

****


COLLOQUIUM

Earl B. Hunt
University of Washington

Thursday, March 26, 4 p.m., GSB 3.106

INTEGRATING PSYCHOLOGY WITH A COGNITIVE SCIENCE MODEL


ABSTRACT:

The blackboard architecture used to model high-level cognition has been
applied to the simulation of human performance in a variety of real-time
situations ranging from simple choice reaction times to text comprehension.
This has forced a consideration of the relation between connectionist
models and the blackboard architecture. The problem and potential of the
approach will be discussed.


------------------------------

Date: 30 Mar 1987 1411-EST
From: Wendy Gissendanner <WLG@C.CS.CMU.EDU>
Subject: Scheduled Talk.


CONNECTED SPEECH RECOGNITION: EXPERTISE ACQUIRED IN PHONEMIC MARKOV
MODELS AND FIRST EXPERIMENTS WITH CONNECTIONISM.


Speaker: Herve Bourlard, Philips Research Laboratory, Brussels
Date: Friday, April 10
Time: 3 p.m.
Place: Wean Hall 5409
Host: Dave Touretzky

ABSTRACT

Hidden Markov Models (HMMs) are now widely used for automatic
isolated and connected speech recognition. Depending on the level at
which they are defined, these models represent words, phonemes or
any other subword unit [Bahl et al. 1983, Bourlard et al. 1985].
One of their main advantages lies in their ability to take account
of the time-sequential order of speech signals and to include the
"time warping" process [Bourlard et al. 1985]. Another important
characteristic of these models is their learning ability. Given
several presentations of patterns (i.e., strings of acoustic vectors
or symbols), Baum's training algorithm [Baum L.E. 1972] adjusts the
parameters of the HMMs to increase the probability that each model
produces its associated data. In the learning phase, known sentences
are matched against their models (obtained by concatenation of the
models of their constituent words or phonemes) using Baum's
algorithm or a simplified version (known as the Viterbi criterion),
which can be formulated in terms of dynamic programming [Bourlard et
al. 1985]. In this way, the statistical variations in speaking rate
and pronunciation are taken into account.
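
As an illustration of the Viterbi criterion mentioned above,
formulated as dynamic programming, the short Python sketch below
computes the most likely state path of a small discrete HMM; it is a
generic textbook version, not the Philips system, and the toy model
parameters are invented.

    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely state path of a discrete HMM, by dynamic programming.
        obs: observation indices; pi: initial probs (N,);
        A: transition probs (N, N); B: emission probs (N, M)."""
        N, T = len(pi), len(obs)
        delta = np.zeros((T, N))               # best log-score ending in each state
        psi = np.zeros((T, N), dtype=int)      # back-pointers
        delta[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            scores = delta[t - 1][:, None] + np.log(A)   # score of i -> j moves
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
        path = [int(delta[T - 1].argmax())]    # trace back the best path
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return list(reversed(path))

    # Toy two-state model over three observation symbols (numbers invented).
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(viterbi([0, 1, 2, 2], pi, A, B))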

However, the a priori choice of the model topology (number of
states, allowable transitions, probability distributions and
transition rules) limits the flexibility of the models and makes it
difficult to include non-explicit knowledge of the speech production
and recognition processes.

Connectionist machines [Hinton et al. 1984, Rumelhart et al. 1986]
could circumvent this drawback. Their appeal for speech recognition
problems lies in their ability to acquire knowledge of patterns by
learning and to recognize patterns similar to those presented in the
learning set. Already, these networks have proved useful in several
applications and, more particularly, in speech synthesis [Sejnowski
et al. 1986] and speech recognition [Prager et al. 1986, Watrous et
al. 1986].

In this talk, after an overview of results obtained at Philips using
Phonemic Markov Models for continuous speech recognition, several
experiments using connectionist models on speech data will be
reported. Compared with HMMs, the main distinguishing feature of
these models is their ability to memorize and generalize high-order
constraints (implicit knowledge) between features. Their learning
procedure [Rumelhart et al. 1986] simply performs a least-squares
error optimization in the weight space. However, the difficulty is
to capture the sequential and time-distortion aspects of speech.
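
To make concrete what a least-squares error optimization in the
weight space looks like, here is a minimal Python sketch of error
back-propagation in the style of Rumelhart et al. [1986] on a tiny
two-layer network; the task (XOR), the network size and the learning
rate are invented for the example and are unrelated to the speech
experiments reported in the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny training set (XOR), standing in for feature vectors and targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    W1 = rng.normal(scale=0.5, size=(2, 4))    # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(4, 1))    # hidden -> output weights
    lr = 0.5

    for epoch in range(10000):
        H = sigmoid(X @ W1)                    # hidden activations
        Y = sigmoid(H @ W2)                    # network outputs
        err = Y - T                            # gradient of 0.5 * sum of squared errors
        dY = err * Y * (1 - Y)                 # output deltas
        dH = (dY @ W2.T) * H * (1 - H)         # hidden deltas, back-propagated
        W2 -= lr * H.T @ dY                    # gradient steps in weight space
        W1 -= lr * X.T @ dH

    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))   # should approach the targets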


------------------------------

Date: Mon, 30 Mar 87 08:43:50 PST
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science Program)
Subject: Terrence J. Sejnowski talks

SPECIAL TALK by Terrence Sejnowski presented by:

Parallel Computation: Neural Nets, Vision, and Optimization
Affinity Group in Biological Information Processing

Title: Computing Shape from Shading with a Neural Network Model
Speaker: Terrence J. Sejnowski, Dept. Biophysics,
Johns Hopkins Univ. and Div. Biol., Caltech.
Time: Mon., 30-Mar-87, 2:00--3:30 PM. Sibley Aud., Bechtel Bldg., UCB.

Abstract:
A layered feedforward network was constructed that extracts principal
curvatures and the direction of maximal curvature using shading
information contained in images of simple objects independent of the
direction of illumination. The input to the network is from an array
of processing units having overlapping concentric on-center and
off-center receptive fields similar to those of principal cells in the
lateral geniculate nucleus. The network's output is from a population
of units that conjointly represent the curvature and orientation in a
coarsely coded representation. An intermediate layer of ``hidden''
units has oriented receptive fields resembling those of simple cells
in the visual cortex of cats and monkeys. These units respond maximally
to oriented bars and edges, but in this network their function is
not to detect bounding contours but to extract curvature information
from shaded images.

References:
1. Ikeuchi, K. & Horn, B. K. P., "Numerical shape from shading and
occluding boundaries", Artificial Intelligence 15 (1981)
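
For readers unfamiliar with on-center/off-center receptive fields,
the Python sketch below gives a rough illustration of the input
representation described above using a difference-of-Gaussians filter
applied to a small shaded image; the kernel sizes and the test image
are invented and are not the network's actual input stage.

    import numpy as np

    def gaussian_kernel(size, sigma):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return k / k.sum()

    def on_center_response(image, size=9, sigma_c=1.0, sigma_s=3.0):
        """Difference of Gaussians: narrow excitatory center minus broad
        inhibitory surround, applied at every image location."""
        dog = gaussian_kernel(size, sigma_c) - gaussian_kernel(size, sigma_s)
        pad = size // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.zeros_like(image, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] = np.sum(padded[i:i + size, j:j + size] * dog)
        return out

    # A shaded "image": an intensity ramp with a brighter square patch.
    img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
    img[12:20, 12:20] += 0.5
    print(np.round(on_center_response(img)[14:18, 14:18], 2))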

Our speaker will be available for informal discussions from approx.
1:00-1:45 PM and 3:15-5:00 PM in room 120C Bechtel.

Note: If your name or address is incorrect, if you wish to be
added to or deleted from our list, or if you need further information
on this series, please contact:
Prof. Shankar Sastry. EECS. 261M Cory 642-1857 {sastry@esvax}
Grad. Student Curt Deno. EECS. 346 Cory 642-8216 {dcdeno@esvax}

****PLEASE NOTE, a REMINDER:

Terrence Sejnowski will be speaking at the COGNITIVE SCIENCE SEMINAR,
Tuesday, March 31, 11:00-12:30. His title is: "From Signals to Symbols
in Neural Network Models".


------------------------------

Date: 2-APR-1987 04:23
From: CLARK@ASU.CSNET
Subject: Colloquium: VLSI Implementation of Neural Systems (ASU)

VLSI IMPLEMENTATION OF NEURAL SYSTEMS

L.A. Akers

Center for Solid State Electronics Research
Arizona State University

The Computer and the Brain
An International Symposium
1:30pm
Tempe Mission Palms Hotel
Tempe, AZ.
April 13, 1987

Very Large Scale Integration (VLSI) and Ultra Large Scale Integration
(ULSI) are allowing hundreds of thousands to millions of semiconductor
switches to be fabricated on a chip. While some very sophisticated
architectures and novel systems have been implemented on a chip,
serious problems relating to yields and testing exist. These problems
are expected to seriously affect future endeavors. We need to use the
tremendous technology currently available in a more intelligent
fashion.

This talk will briefly review the historical development of integrated
circuits and their projected future. A comparison between ULSI and
neural switching elements and architectures will be made. The
performance, advantages, and problems of von Neumann and non-von
Neumann (neuromorphically inspired) architectures will be compared,
with special attention to the requirements for VLSI implementation.
Lastly, the design methodology necessary to combine neural-type
computation with standard ICs will be discussed.



MORE***************


Conference Announcement:

The ASU College of Engineering and Applied Sciences and the Arizona Center
for Medieval and Renaissance Studies present:

THE COMPUTER AND THE BRAIN

An International Symposium
April 12-15, 1987

In Commemoration of John von Neumann (1903-1957)

Tempe Mission Palms Hotel - near the campus of Arizona State University

Scheduled to speak:
Eugene Wigner
Nicholas von Neumann
Edward Teller - Lawrence Livermore
Robert P. Multhauf - Smithsonian Institution
Terrence Sejnowski - Johns Hopkins University
L.A. Akers - Arizona State University
Lawrence D. Jackel - AT&T Bell Labs
S.K. Heinger - Univ. of North Carolina, Chapel Hill
John C. Haugeland - Univ. of Pittsburgh
Ray Jackendoff - Brandeis University
Wendy Wilkins - Arizona State University
Yorick Wilks - New Mexico State Univ. - Las Cruces
Andras Pellionisz - New York University School of Medicine
David Hestenes - Arizona State University
Peter Killeen - Arizona State University
Rodolfo Llinas - New York University School of Medicine
Lynn Nadel - University of Arizona
David Rumelhart - University of California, San Diego
Steven Jobs - NeXT, Palo Alto, California

Registration Fees:
$50 - Before April 6
(ASU faculty, staff and students and employees of sponsors free
with ID)
$65 - After April 6

$35 - Banquet, Tuesday, April 14 (entertainment provided)

For more information contact: Jeanie R. Brink, Arizona Center for
Medieval and Renaissance Studies, Arizona State University,
Tempe, AZ 85287

Hotel:
Tempe Mission Palms Hotel
60 E. Fifth Street
Tempe, AZ 85281
Phone: 1-800-547-8705; in Arizona: 1-800-826-5839

ask for ACMRS rate : $70 single or double




------------------------------

Date: 31-MAR-1987 00:12
From: SKRZYPEK@LOCUS.UCLA.EDU
Subject: Neuro-session, IEEE/Bio-Med Soc.


LATE CALL FOR PAPERS

IEEE/Engineering in Medicine and Biology Society Ninth Annual
Conference, Boston, Nov. 13-16, 1987.

ABSTRACT SUBMISSIONS are invited for Technical program on Neural
Networks and Connectionist Models (NNCM).

DEADLINES: April 15, 1987 is the firm deadline for receipt of
initial abstracts. Authors will be notified of acceptance by May
1, 1987, and furnished with an AUTHOR'S KIT for preparation of
the two-page Short Paper, which will then be due by June 15,
1987.

SUBMISSION: A 250-word ABSTRACT must be submitted for critical
review of proposed presentations. Use the form provided below,
following its instructions carefully. If the proposed presentation
is accepted, a two-page Short Paper will be required of the authors
for publication in the official conference proceedings. (The
initial abstract will not be published.) Use an asterisk to
indicate authors who are IEEE members. This abstract is for
critical review only, and will not be reproduced. Therefore, exact
margins and format are not critical. Clarity is paramount. Oral
presentations of accepted papers are expected to be limited to 15
minutes in traditional meeting sessions.

CRITERIA FOR ACCEPTANCE: Abstracts must be substantive in nature,
presenting in concise form the objectives of the work reported, the
methodology utilized, the results obtained, and the significance of
these results. Abstracts reporting substantial current progress are
encouraged. Abstracts must contain results of work done prior to
submission, although preliminary data are acceptable. Abstracts
clearly commercial in nature will not be accepted.

Send Abstract and Submission Form to:

Prof. Josef Skrzypek
Computer Science Dept.
UCLA
L.A., CA 90024
(213) 825-2381
SKRZYPEK@CS.UCLA.EDU


ABSTRACT SUBMISSION FORM

Title: ____________________________________________________________
(do not exceed 80 characters, including spaces)

Author #1 Name: ___________________________________________________
last first initial

Additional Authors: ________________________________
last initials

_______________________________
last initials

Address corresp. to: _______________________________________
degree/title
at: _______________________________________________________
___________________________________________________________
___________________________________________________________
_____________________________________________________ __________
(city) (state) (country)
__________________ ____________________________________________
(zip or mail code) (phone)

If my abstract is accepted, I will attend the IEEE-EMBS Ninth Annual
Conference in Boston for presentation.

Signature: ___________________________________ Date _______________
(presenting author)


------------------------------

Date: Tue, 17 Mar 87 00:17:10 est
From: Bob French <french@farg.umich.edu>
Subject: categories and counterfactuals


The Role of Categories in the Generation of Counterfactuals:
A Connectionist Interpretation

by Robert M. French and Mark Weaver

Department of Electrical Engineering and Computer Science
University of Michigan
Ann Arbor, Michigan 48109
Tel. (313) 763-5875

Keywords: counterfactuals, norm theory, connectionism, categories

Abstract

This paper proposes that a fairly standard connectionist category model
can provide a mechanism for the generation of counterfactuals --
non-veridical versions of perceived events or objects. A distinction is
made between evolved counterfactuals, which generate mental spaces (as
proposed by Fauconnier), and fleeting counterfactuals, which do not. This
paper explores only the latter in detail. A connection is made with the
recently proposed counterfactual theory of Kahneman and Miller;
specifically our model shares with theirs a fundamental rule of
counterfactual production based on normality. The relationship between
counterfactuals and the psychological constructs of ``schema with
correction'' and ``goodness'' is examined. A computer simulation in support
of our model is included.


The paper has been submitted to the 1987 Cognitive Science Society
Conference, to be held in Seattle, WA, in July.


Anyone interested in a copy of the paper should get in touch with
Bob French as follows: french@farg.umich.edu



------------------------------


End of NEURON-Digest
********************
