Neuron Digest Friday, 13 Aug 1993 Volume 11 : Issue 47
Today's Topics:
NIPS91 paper in neuroprose
IJNS contents volume 2, issue 4 (1991)
TR - Training Second-Order Recurrent Neural Networks Using Hints
TR - connectionist model for commonsense reasoning with rules
Paper - Logics and Variables in Connectionist Models
TR - Fuzzy Evidential Logic: A Model of Causality for Commonsense Reasoning
Preprint available: A network for velocity vector-field correction
neural-oscillator network, reprints available
ALOPEX algorithm solves the MONK's problems
TR - Modelling the Development of Topography and Ocular Dominance
Neural Chess: Paper Presentation
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: NIPS91 paper in neuroprose
From: giles@research.nj.nec.com (Lee Giles)
Date: Wed, 05 Feb 92 15:29:51 -0500
The following paper has been placed in the Neuroprose archive.
Comments and questions are invited.
*******************************************************************
--------------------------------------------
EXTRACTING AND LEARNING AN "UNKNOWN" GRAMMAR
WITH RECURRENT NEURAL NETWORKS
--------------------------------------------
C.L. Giles*, C.B. Miller, D. Chen, G.Z. Sun, H.H. Chen, Y.C. Lee
NEC Research Institute *Institute for Advanced Computer Studies
4 Independence Way Dept. of Physics & Astronomy
Princeton, N.J. 08540 University of Maryland
giles@research.nj.nec.com College Park, Md 20742
___________________________________________________________________
------------------------------
Subject: IJNS contents volume 2, issue 4 (1991)
From: Benny Lautrup <LAUTRUP@nbivax.nbi.dk>
Date: Mon, 09 Mar 92 17:28:00 +0100
INTERNATIONAL JOURNAL OF NEURAL SYSTEMS
The International Journal of Neural Systems is a quarterly journal
which covers information processing in natural and artificial neural
systems. It publishes original contributions on all aspects of this
broad subject which involves physics, biology, psychology, computer
science and engineering. Contributions include research papers, reviews
and short communications. The journal takes a fresh, undogmatic
attitude towards this multidisciplinary field, with the aim of being a
forum for novel ideas and an improved understanding of collective and
cooperative phenomena with computational capabilities.
ISSN: 0129-0657 (IJNS)
----------------------------------
Contents of Volume 2, issue number 4 (1991):
1. N. Burgess, M.N. Granieri & S. Patarnello:
3-D object classification: Application of a constructor algorithm.
2. R. Meir:
On deriving deterministic learning rules from stochastic systems.
3. E.M. Johansson, F.U. Dowla & D.M. Goodman:
Backpropagation learning for multi-layer feed-forward neural networks
using the conjugate gradient method.
4. W. Banzhaf & M. Schmutz:
Some notes on competition among cell assemblies.
5. M. Bengtsson:
Asymptotic properties of a third order neural network.
6. C.J.P. Vicente, J. Carrabina & E. Valderrama:
Discrete learning in feed-forward neural networks.
7. J. Chen, M.A. Shanblatt & C-H Maa:
Improved neural networks for linear and non-linear programming.
8. M. Bahrami:
Recognition of rules and exceptions by neural networks.
9. A.V. Robins:
Multiple representations in connectionist systems.
10. D.G. Stork:
Book Review
Evolution of the first nervous systems by P.A.V. Anderson (ED).
----------------------------------
Editorial board:
B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge)
S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge)
D. Stork (Stanford) (Book review editor)
Associate editors:
B. Baird (Berkeley)
D. Ballard (University of Rochester)
E. Baum (NEC Research Institute)
S. Bjornsson (University of Iceland)
J. M. Bower (CalTech)
S. S. Chen (University of North Carolina)
R. Eckmiller (University of Dusseldorf)
J. L. Elman (University of California, San Diego)
M. V. Feigelman (Landau Institute for Theoretical Physics)
F. Fogelman-Soulie (Paris)
K. Fukushima (Osaka University)
A. Gjedde (Montreal Neurological Institute)
S. Grillner (Nobel Institute for Neurophysiology, Stockholm)
T. Gulliksen (University of Oslo)
D. Hammerstrom (Oregon Graduate Institute)
D. Horn (Tel Aviv University)
J. Hounsgaard (University of Copenhagen)
B. A. Huberman (XEROX PARC)
L. B. Ioffe (Landau Institute for Theoretical Physics)
P. I. M. Johannesma (Katholieke Univ. Nijmegen)
M. Jordan (MIT)
G. Josin (Neural Systems Inc.)
I. Kanter (Princeton University)
J. H. Kaas (Vanderbilt University)
A. Lansner (Royal Institute of Technology, Stockholm)
A. Lapedes (Los Alamos)
B. McWhinney (Carnegie-Mellon University)
M. Mezard (Ecole Normale Superieure, Paris)
J. Moody (Yale, USA)
A. F. Murray (University of Edinburgh)
J. P. Nadal (Ecole Normale Superieure, Paris)
E. Oja (Lappeenranta University of Technology, Finland)
N. Parga (Centro Atomico Bariloche, Argentina)
S. Patarnello (IBM ECSEC, Italy)
P. Peretto (Centre d'Etudes Nucleaires de Grenoble)
C. Peterson (University of Lund)
K. Plunkett (University of Aarhus)
S. A. Solla (AT&T Bell Labs)
M. A. Virasoro (University of Rome)
D. J. Wallace (University of Edinburgh)
D. Zipser (University of California, San Diego)
----------------------------------
CALL FOR PAPERS
Original contributions consistent with the scope of the journal are
welcome. Complete instructions as well as sample copies and
subscription information are available from
The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73, Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)81-446-2461
or
World Scientific Publishing Co. Inc.
Suite 1B
1060 Main Street
River Edge
New Jersey 07661
USA
Telephone: (1)201-487-9655
or
World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone (65)382-5663
------------------------------
Subject: TR - Training Second-Order Recurrent Neural Networks Using Hints
From: omlinc@cs.rpi.edu (Christian Omlin)
Date: Fri, 08 May 92 13:22:03 -0500
The following paper has been placed in the Neuroprose archive.
Comments and questions are encouraged.
*******************************************************************
--------------------------------------------
TRAINING SECOND-ORDER RECURRENT NEURAL
NETWORKS USING HINTS
--------------------------------------------
C.W. Omlin* C.L. Giles
Computer Science Department *NEC Research Institute
Rensselaer Polytechnic Institute 4 Independence Way
Troy, N.Y. 12180 Princeton, N.J. 08540
omlinc@turing.cs.rpi.edu giles@research.nj.nec.com
Abstract
--------
We investigate a method for inserting rules into discrete-time
second-order recurrent neural networks which are trained to
recognize regular languages. The rules defining regular languages
can be expressed in the form of transitions in the corresponding
deterministic finite-state automaton. Inserting these rules as hints
into networks with second-order connections is straightforward.
Our simulation results show that even weak hints can improve
convergence time by an order of magnitude.
(To be published in Machine Learning: Proceedings of the Ninth
International Conference (ML92),D. Sleeman and P. Edwards (eds.),
Morgan Kaufmann, San Mateo, CA 1992).
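For readers who want a concrete picture before fetching the paper, here is our
illustrative reading of the second-order update and of rule insertion (a sketch
only, not the authors' code; the network sizes, hint strength, and weight values
below are invented):

```python
import numpy as np

# Illustrative sketch (not the authors' code) of a discrete-time
# second-order recurrent network, S_j(t+1) = g(sum_{i,k} W[j,i,k] *
# S_i(t) * I_k(t)), with one DFA transition inserted as a hint.

def g(x):
    return 1.0 / (1.0 + np.exp(-x))       # sigmoid discriminant

n_states, n_symbols = 4, 2
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(n_states, n_states, n_symbols))

# Hint: "from state 0, on symbol 1, go to state 2".
# A weak hint would use a smaller bias; here it is deliberately strong.
W[2, 0, 1] += 2.0

def step(S, symbol):
    I = np.zeros(n_symbols)
    I[symbol] = 1.0                        # one-hot input symbol
    return g(np.einsum('jik,i,k->j', W, S, I))

S = np.zeros(n_states)
S[0] = 1.0                                 # start in state 0
S = step(S, 1)                             # the hinted transition dominates
```

The product form W[j,i,k]*S_i*I_k is what makes the insertion direct: each
(state, symbol) pair indexes its own weight slice, so one DFA transition maps
onto one weight.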
********************************************************************
Filename: omlin.hints.ps.Z
----------------------------------------------------------------
FTP INSTRUCTIONS
unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get omlin.hints.ps.Z
ftp> bye
unix% zcat omlin.hints.ps.Z | lpr
(or whatever *you* do to print a compressed PostScript file)
----------------------------------------------------------------
------------------------------------------------------------------------------
Christian W. Omlin Troy, NY 12180 USA
Computer Science Department Phone: (518) 276-2930 Fax: (518) 276-4033
Amos Eaton 119 E-mail: omlinc@turing.cs.rpi.edu
Rensselaer Polytechnic Institute omlinc@research.nj.nec.com
------------------------------------------------------------------------------
------------------------------
Subject: TR - connectionist model for commonsense reasoning with rules
From: rsun@athos.cs.ua.edu (Ron Sun)
Date: Fri, 05 Jun 92 10:09:22 -0600
TR available:
A Connectionist Model for Commonsense Reasoning Incorporating Rules
and Similarities
by Ron Sun
For the purpose of modeling commonsense reasoning, we investigate
connectionist models of rule-based reasoning, and show that while such
models can usually carry out reasoning in exactly the same way as
symbolic systems, they have more to offer in terms of commonsense
reasoning. A connectionist architecture, {\sc CONSYDERR}, is proposed
for capturing certain commonsense reasoning competence, which partially
remedies the brittleness problem in traditional rule-based systems. The
architecture employs a two-level, dual representational scheme, which
utilizes both localist and distributed representations and explores the
synergy resulting from the interaction between the two. {\sc CONSYDERR}
is therefore capable of accounting for many difficult patterns in
commonsense reasoning with this simple combination of the two levels.
This work also shows that connectionist models of reasoning are not just
``implementations'' of their symbolic counterparts, but better
computational models of commonsense reasoning.
It is FTPable from archive.cis.ohio-state.edu
in: pub/neuroprose (Courtesy of Jordan Pollack)
No hardcopy available.
FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.ka.ps.Z
ftp> quit
unix> uncompress sun.ka.ps.Z
unix> lpr sun.ka.ps (or however you print postscript)
------------------------------
Subject: Paper - Logics and Variables in Connectionist Models
From: rsun@athos.cs.ua.edu (Ron Sun)
Date: Fri, 05 Jun 92 10:09:35 -0600
Beyond Associative Memories:
Logics and Variables in Connectionist Models
Ron Sun
abstract
This paper demonstrates the role of connectionist (neural network) models
in reasoning beyond that of an associative memory. First we show that
there is a connection between propositional logics and the weighted-sum
computation customarily used in connectionist models. Specifically, the
weighted-sum computation can handle Horn clause logic and Shoham's logic
as special cases. Secondly, we show how variables can be incorporated
into connectionist models to enhance their representational power. We
devise solutions to the connectionist variable binding problem, enabling
connectionist networks to handle variables and dynamic bindings in
reasoning. A new formalism, the Discrete Neuron, an extension of the
weighted-sum models, is employed to deal with the variable binding
problem. Formal definitions are presented, and examples are analyzed in
detail.
To appear in: Information Sciences,
special issues on neural nets and AI
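The claimed connection between weighted-sum computation and Horn clause logic
can be made concrete with a toy example. The encoding below (unit weights, a
threshold equal to the number of premises) is our illustrative choice, not
necessarily the paper's:

```python
# Toy encoding (ours, for illustration) of a Horn clause as a
# weighted-sum threshold unit: "A AND B -> C" fires C only when the
# sum of the premise activations reaches the threshold.

def horn_unit(premises, threshold):
    """Return 1 iff the weighted sum (unit weights) of the premise
    activations reaches the threshold."""
    return 1 if sum(premises) >= threshold else 0

A, B = 1, 1
C = horn_unit([A, B], threshold=2)      # both premises hold, C fires
notC = horn_unit([A, 0], threshold=2)   # one premise fails, C does not
```

Forward chaining then amounts to repeatedly propagating activations through
such units until no new unit fires.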
It is FTPable from archive.cis.ohio-state.edu
in: pub/neuroprose
No hardcopy available.
FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.beyond.ps.Z
ftp> quit
unix> uncompress sun.beyond.ps.Z
unix> lpr sun.beyond.ps (or however you print postscript)
------------------------------
Subject: TR - Fuzzy Evidential Logic: A Model of Causality for Commonsense Reasoning}
From: rsun@athos.cs.ua.edu (Ron Sun)
Date: Fri, 05 Jun 92 10:09:48 -0600
TR available:
Fuzzy Evidential Logic: A Model of Causality for Commonsense Reasoning
Ron Sun
This paper proposes a fuzzy evidential model for commonsense causal
reasoning. After an analysis of the advantages and limitations of
existing accounts of causality, a generalized rule-based model FEL ({\it
Fuzzy Evidential Logic}) is proposed that takes into account the
inexactness and the cumulative evidentiality of commonsense reasoning.
It corresponds naturally to a neural (connectionist) network. Detailed
analyses are performed regarding how the model handles commonsense causal
reasoning.
To appear in Proc. of the 14th Cognitive Science Conference, 1992
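One plausible reading of such a graded, cumulative rule is sketched below. The
exact FEL combination rule is defined in the TR; the weights, confidences, and
the "wet grass" example are invented for illustration:

```python
# Hypothetical illustration of graded, cumulative evidence: each
# premise carries a confidence in [0, 1] and a weight, and the
# conclusion's confidence is the weighted sum, clipped to [0, 1].
# The actual FEL combination rule is specified in the TR.

def fel_rule(confidences, weights):
    s = sum(w * c for w, c in zip(weights, confidences))
    return max(0.0, min(1.0, s))

rain, sprinkler = 0.7, 0.4              # two partially believed causes
wet_grass = fel_rule([rain, sprinkler], [0.6, 0.5])
# 0.6 * 0.7 + 0.5 * 0.4 = 0.62
```

Because such a rule is a clipped weighted sum, it maps directly onto a single
connectionist unit, which is the correspondence the abstract points out.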
----------------------------------------------------------------
It is FTPable from archive.cis.ohio-state.edu
in: pub/neuroprose (Courtesy of Jordan Pollack)
No hardcopy available.
FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.cogsci92.ps.Z
ftp> quit
unix> uncompress sun.cogsci92.ps.Z
unix> lpr sun.cogsci92.ps (or however you print postscript)
------------------------------
Subject: Preprint available: A network for velocity vector-field correction
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Thu, 11 Jun 92 17:04:12 +0100
The following paper has been accepted for publication in the proceedings
of the International Conference on Artificial Neural Networks '92 in
Brighton:
Relaxation in 4D state space - A competitive network
approach to object-related velocity vector-field correction
by Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
and Astrid Lehmann Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, Germany
ABSTRACT:
A standard principle of (energy-)minimization is applied to the
problem of visual motion analysis. In contrast to well-known
mathematical optimization procedures and universal optimizing
networks it is proposed to use a problem-adapted network
architecture. Owing to the bilocal coincidence-type motion
detector considered here the task of object-related motion
analysis appears as a geometric correspondence problem. Hence,
the correct spatio-temporal correspondences between elements in
consecutive images must be selected from all possible ones. This
is performed by neighborhood operations that are repeatedly
applied to the instantaneous signal representation in the
space/velocity-domain until an estimate of the actual flow-field
is reached.
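The repeated neighborhood operations can be pictured with a toy relaxation
scheme. This is our 1D caricature with two velocity candidates and invented
numbers; the network in the paper operates in a 4D space/velocity domain:

```python
# Toy relaxation by neighborhood support: each element holds evidence
# for two candidate velocities; neighbour support is multiplied in and
# renormalized until a consistent assignment remains. All values are
# invented for illustration.

support = [
    [0.60, 0.40],
    [0.55, 0.45],
    [0.50, 0.50],   # ambiguous element, settled by its neighbours
]

def relax(s, iters=20):
    s = [row[:] for row in s]
    for _ in range(iters):
        new = []
        for i in range(len(s)):
            nb = [s[j] for j in (i - 1, i + 1) if 0 <= j < len(s)]
            row = [s[i][v] * (0.5 + sum(x[v] for x in nb) / len(nb))
                   for v in range(2)]
            z = sum(row)
            new.append([x / z for x in row])
        s = new
    return s

field = relax(support)   # the coherent candidate wins at every element
```

The ambiguous middle element is disambiguated purely by its neighbours, which
is the sense in which the correct correspondences are "selected from all
possible ones".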
Hardcopies of the paper are available. Please send requests
to the following address in Germany:
Helmut Gluender
Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
or via email to:
alfred@lnt.e-technik.tu-muenchen.de
communicated by Alfred Nischwitz
------------------------------
Subject: neural-oscillator network, reprints available
From: Lambert Schomaker <SCHOMAKER@NICI.KUN.NL>
Date: Wed, 17 Jun 92 10:42:00 +0700
Reprints of the following publication are available:
Schomaker, L.R.B., 1992. A neural-oscillator network model of
temporal pattern generation. Human Movement Science 11, 181-192.
Abstract.
Most contemporary neural network models deal with essentially static,
perceptual problems of classification and transformation. Models such as
multi-layer feedforward perceptrons generally do not incorporate time as an
essential dimension. Where time is involved, the proposed solutions suffer
from serious limitations. The TDNN solution for the representation of time is
limited by its a priori fixed time window, whereas recurrent networks of the
Jordan or Elman kind are particularly difficult to train. Biological neural
networks, however, are inherently temporal systems. In modelling motor
behaviour, it is essential to have models that are able to produce temporal
patterns of varying duration and complexity. A model is proposed, based on a
network of pulse oscillators consisting of neuron/interneuron (NiN) pairs.
Due to its inherent temporal properties, a simple NiN net, taught by a
pseudo-Hebbian learning scheme, could be used to simulate the handwriting
pen-tip displacement of individual letters.
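A minimal caricature of a neuron/interneuron pulse oscillator is easy to write
down (parameters invented; the model in the paper is considerably richer): an
excitatory unit with constant drive excites an interneuron, which in turn
suppresses it, yielding a periodic pulse train.

```python
# Minimal neuron/interneuron (NiN) caricature: constant drive excites
# e, e excites the interneuron i, and i suppresses e. With these
# invented parameters the discrete dynamics settle into a period-4
# pulse train.

def relu(x):
    return max(0.0, x)

drive, w_ei, w_ie = 1.0, 1.0, 2.0
e, i = 0.0, 0.0
trace = []
for _ in range(12):
    e, i = relu(drive - w_ie * i), relu(w_ei * e)   # simultaneous update
    trace.append((e, i))
```

The point of such units is that time arises from the dynamics themselves,
rather than from an externally imposed time window as in a TDNN.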
------------------------------
Subject: ALOPEX algorithm solves the MONK's problems
From: unni@neuro.cs.gmr.com (K.P.Unnikrishnan)
Date: Thu, 18 Jun 92 13:30:36 -0500
In a recent issue of Neuron Digest, S.B. Thrun reported
performance comparisons of different learning algorithms (both machine
learning and neural network) on the MONK's problems. Though a number of
algorithms (for example, AQ17-DCI, AQ17-HCI, AQ15-GA, Assistant
Professional, Backpropagation and Cascade correlation) were found to give
100% correct results on two of the three problem sets, none of them
gave 100% correct classifications on all three. We have found that a
multi-layer perceptron trained using the
ALOPEX algorithm gives 100% correct classification of all three data
sets.
The details of the ALOPEX algorithm can be found in the paper titled
'LEARNING IN CONNECTIONIST NETWORKS USING THE ALOPEX ALGORITHM' (Proc.
IJCNN '92, pp. 926-931). A copy of this paper has been placed in the
NEUROPROSE ftp archive under the name unni.alopex.ps.Z. If you would like
a copy of the simulation program, send a note to unni@neuro.cs.gmr.com
K.P. Unnikrishnan
GM Research Labs.
ABSTRACT
----------
LEARNING IN CONNECTIONIST NETWORKS USING THE ALOPEX ALGORITHM
K. P. Unnikrishnan & K. P. Venugopal
We describe the Alopex algorithm as a universal learning algorithm for
neural networks. The algorithm is stochastic and it can be used for
learning in networks of any topology, including those with feedback. The
neurons could contain any transfer function and the learning could
involve minimization of any error measure. The efficacy of the algorithm
is investigated by applying it on multilayer perceptrons to solve
problems such as XOR, parity, and the encoder. These results are compared
with those obtained using the backpropagation learning algorithm. The
scaling properties of Alopex are studied using encoder problems of
different sizes. Taking the specific case of the XOR problem, it is shown
that a smoother error surface, with fewer local minima, could be obtained
by using an information-theoretic error measure. An appropriate
'annealing' scheme for the algorithm is described, and it is shown that
Alopex can escape from local minima.
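For readers who want the flavour of the algorithm before fetching the paper,
here is a heavily simplified sketch of the idea as we read the abstract; the
exact update rule, constants, and annealing schedule are in the paper, and the
quadratic "error measure" below is invented:

```python
import math
import random

# Simplified ALOPEX-flavoured sketch (not the paper's exact rule):
# every parameter takes a +/-delta step. The next step is +delta with
# probability 1 / (1 + exp(prev_step * dE / T)), so directions that
# were correlated with a DECREASE in the error tend to be kept, while
# the temperature T keeps the process stochastic.

random.seed(0)

def error(w):
    return sum(x * x for x in w)          # toy error measure to minimize

w = [2.0, -3.0]
delta, T = 0.02, 1e-4
dw = [random.choice((-delta, delta)) for _ in w]
errors = [error(w)]
for _ in range(4000):
    for j in range(len(w)):
        w[j] += dw[j]
    errors.append(error(w))
    dE = errors[-1] - errors[-2]          # effect of the last step
    dw = [delta if random.random() <
          1.0 / (1.0 + math.exp(min(500.0, max(-500.0, d * dE / T))))
          else -delta
          for d in dw]
```

Note that the update uses only the scalar error change, never a gradient, which
is why the abstract can claim indifference to network topology and transfer
function.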
FTP INSTRUCTIONS
----------------
neuro% ftp archive.cis.ohio-state.edu
Name: anonymous
Password: guest
ftp> binary
ftp> cd pub/neuroprose
ftp> get unni.alopex.ps.Z
ftp> quit
neuro% uncompress unni.alopex.ps.Z
neuro% lpr unni.alopex.ps
------------------------------
Subject: TR - Modelling the Development of Topography and Ocular Dominance
From: Geoffrey Goodhill <gjg@cns.edinburgh.ac.uk>
Date: Tue, 23 Jun 92 18:49:54 +0000
The following technical report version of my thesis is now available
in neuroprose:
Correlations, Competition, and Optimality:
Modelling the Development of Topography and Ocular Dominance
CSRP 226
Geoffrey Goodhill
School of Cognitive and Computing Sciences
University of Sussex
ABSTRACT
There is strong biological evidence that the same mechanisms underlie
the formation of both topography and ocular dominance in the visual
system. However, previous computational models of visual development
do not satisfactorily address both of these phenomena
simultaneously. In this thesis we discuss in detail several
models of visual development, focussing particularly on the form
of correlations within and between eyes.
Firstly, we analyse the "correlational" model for ocular dominance
development recently proposed in [Miller, Keller & Stryker 1989].
This model was originally presented for the case of identical
correlations within each eye and zero correlations between the eyes.
We relax these assumptions by introducing perturbative correlations
within and between eyes, and show that (a) the system is unstable to
non-identical perturbations in each eye, and (b) the addition of small
positive correlations between the eyes, or small negative correlations
within an eye, can cause binocular solutions to be favoured over
monocular solutions.
Secondly, we extend the elastic net model of [Goodhill 1988, Goodhill
and Willshaw 1990] for the development of topography and ocular
dominance, in particular considering its behaviour in the
two-dimensional case. We give both qualitative and quantitative
comparisons with the performance of an algorithm based on the
self-organizing feature map of Kohonen, and show that in general the
elastic net performs better. In addition we show that (a) both
algorithms can reproduce the effects of monocular deprivation, and (b)
that a global orientation for ocular dominance stripes in the elastic
net case can be produced by anisotropic boundary conditions in the
cortex.
Thirdly, we introduce a new model that accounts for the development of
topography and ocular dominance when distributed patterns of activity
are presented simultaneously in both eyes, with significant
correlations both within and between eyes. We show that stripe width
in this model can be influenced by two factors: the extent of lateral
interactions in the postsynaptic sheet, and the degree to which the
two eyes are correlated. An important aspect of this model is the form
of the normalization rule to limit synaptic strengths: we analyse this
for a simple case.
The principal conclusions of this work are as follows:
1. It is possible to formulate computational models that account for
(a) both topography and stripe formation, and (b) ocular dominance
segregation in the presence of *positive* correlations between
the two eyes.
2. Correlations can be used as a ``currency'' with which to compare
locality within an eye with correspondence between eyes. This
leads to the novel prediction that stripe width can be influenced
by the degree of correlation between the two eyes.
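As a sketch of the elastic net idea the thesis builds on (the Durbin-Willshaw
formulation; all constants below are our invented choices, and the real model
maps a feature space that includes eye of origin):

```python
import numpy as np

# Sketch of a Durbin-Willshaw-style elastic net (illustrative
# constants, not the thesis' settings): net points are pulled toward
# input points through Gaussian-weighted matching, an elasticity term
# keeps neighbouring net points together, and the scale k is annealed.

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 2))           # input ("retinal") points
Y = 0.5 + 0.01 * rng.standard_normal((30, 2))     # net starts clustered

alpha, beta, k = 0.1, 2.0, 0.2
for _ in range(200):
    diff = X[:, None, :] - Y[None, :, :]          # (inputs, net, 2)
    w = np.exp(-(diff ** 2).sum(-1) / (2.0 * k * k))
    w /= w.sum(axis=1, keepdims=True) + 1e-12     # each input picks net points
    pull = alpha * (w[:, :, None] * diff).sum(axis=0)
    tension = np.zeros_like(Y)
    tension[1:-1] = Y[2:] - 2.0 * Y[1:-1] + Y[:-2]
    Y += pull + beta * k * tension
    k *= 0.995                                    # slow annealing
```

The competition between the matching ("pull") term and the elasticity
("tension") term is what lets one algorithm trade locality within an eye
against correspondence between eyes.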
Instructions for obtaining by anonymous ftp:
% ftp cheops.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> binary
ftp> cd pub/neuroprose
ftp> get goodhill.thesis.tar
ftp> quit
% tar -xvf goodhill.thesis.tar (This creates a directory called thesis)
% cd thesis
% more README
WARNING: goodhill.thesis.tar is 2.4 Megabytes, and the thesis takes up
13 Megabytes if all files are uncompressed (there are only 120 pages
- the size is due to the large number of pictures). Each file within
the tar file is individually compressed, so it is not necessary to
have 13 Meg of spare space in order to print out the thesis.
The hardcopy version is also available by requesting CSRP 226 from:
Berry Harper
School of Cognitive and Computing Sciences
University of Sussex
Falmer
Brighton BN1 9QN
GREAT BRITAIN
Please enclose a cheque for either 5 pounds sterling or 10 US dollars,
made out to "University of Sussex".
Geoffrey Goodhill
University of Edinburgh
Centre for Cognitive Science
2 Buccleuch Place
Edinburgh EH8 9LW
email: gjg@cns.ed.ac.uk
------------------------------
Subject: Neural Chess: Paper Presentation
From: David Kanecki <kanecki@cs.uwp.edu>
Date: Sun, 12 Jul 92 18:53:47 -0600
[[ Editor's Note: Long-term readers will remember David's thoughtful
postings in the past and his long-term work with applying connectionist
models to the game of chess. Unfortunately this announcement sat in my
queue too long for it to be timely. However, I encourage any of you who
have a serious interest in this problem to get in touch with David.
Perhaps he will be kind enough to provide a recap of his talk to Digest
readers? -PM ]]
PAPER PRESENTATION ANNOUNCEMENT
"Simulation as an Intelligent, Thinking Computer Program as
Neural Chess"
By
David H. Kanecki, Bio. Sci., A.C.S.
40th Summer Computer Simulation Conference
Society for Computer Simulation
July 27-31, 1992
Nugget Hotel, Sparks, NV
Group 4, Session 7, Tuesday, July 28, 3:30-5:00
Carson Room, Nugget Hotel, Sparks, NV
In the above presentation, I will present results obtained from my
ten-year development project. In addition, I will present a new methodology
for modelling cognitive reasoning.
If anyone wishes a copy of this paper or to attend the conference, please
contact the Society for Computer Simulation, San Diego, CA. As to
electronic distribution of my paper, I will not have any information until
I check with the society after August 1st.
This work is a major breakthrough in intelligent thinking systems that
can be used for applications of navigation, logistics, etc.
* * *
The conference will cover 15 topics, 2 of which relate to neural network
applications: intelligent systems (group 3) and AI/KBS in simulation
(group 4), for which 25 seminars are scheduled.
"As we learn and teach, we move to the next higher level of intelligence."
David H. Kanecki, Bio. Sci., A.C.S.
P.O. Box 93
Kenosha, WI 53141
kanecki@cs.uwp.wisc.edu
------------------------------
End of Neuron Digest [Volume 11 Issue 47]
****************************************