Neuron Digest	Wednesday, 30 May 1990		Volume 6 : Issue 36 

Today's Topics:
Modelling the hippocampus
Re: Modelling the hippocampus
Re: Modelling the hippocampus
Summary: Getting confidence measures, probabilities, back-prop, etc.
No More Backprop Blues -- with BackPercolation
Looking for Genesis/Exodus
Re: Looking for Genesis/Exodus
Re: Looking for Genesis/Exodus
request for help - NNs in signal recognition
Student looking for simulators
Where can I get: Farmer, Sidorowich, "Predicting Chaotic Time Series"?
Re: Where can I get: Farmer, Sidorowich, "Predicting Chaotic Time Series"?
Parallel NN-simulator
Neural_nets in Teletraffic Science/Engineering?
Share room at IJCNN San Diego?
Connectionism, Explicit Rules, and Symbolic Manipulation
Re: Back Propagation for Training Every Input Pattern with Multiple Output
Re: Back-propagation/NN benchmarks
Neural Nets and forecasting
Re: Neural Nets and forecasting
Help needed on Counter-Propagation networks
TR - BP with Dynamic Topology
IJCNN Reminder
AI Workshop


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Modelling the hippocampus
From: stucki@allosaur.cis.ohio-state.edu (David J Stucki)
Organization: Ohio State University Computer and Information Science
Date: 25 Apr 90 16:02:45 +0000


I am interested in any pointers to current research/literature on computational
modelling of the hippocampus, especially connectionist models.

thanks in advance,

dave...
-=-
David J Stucki /\ ~~ /\ ~~ /\ ~~ /\ ~~ c/o Dept. Computer and
537 Harley Dr. #6 / \ / \ / \ / \ / Information Science
Columbus, OH 43202 \/ \ / \ / \ / 2036 Neil Ave.
stucki@cis.ohio-state.edu ~ \/ ~~ \/ ~~ \/ Columbus, OH 43210

------------------------------

Subject: Re: Modelling the hippocampus
From: steinbac@hpl-opus.HP.COM (Gunter Steinbach)
Organization: HP Labs, High Speed Electronics Dept., Palo Alto, CA
Date: 25 Apr 90 18:33:45 +0000


I just clipped an article from -don't laugh- the 1st issue of
"Workstation News" that mentions work at IBM's T.J. Watson Research
Center in Yorktown Heights, NY. They modeled 10000 cells in the
hippocampus on an IBM 3090 supercomputer. They say they found brain
waves in the simulation, and quote one Dr. Robert Traub as saying "now
we are beginning to do experiments on it as if it were an organism in
its own right."


No other names or refs were given. I can FAX the article if you want,
but it's not much more than what I quoted, just a "newsbite".

Guenter Steinbach gunter_steinbach@hplabs.hp.com

------------------------------

Subject: Re: Modelling the hippocampus
From: lyle@wheaties.ai.mit.edu (Lyle J. Borg-Graham)
Organization: MIT Artificial Intelligence Laboratory
Date: 26 Apr 90 19:44:54 +0000

I have worked on a model of the somatic properties of hippocampal
pyramidal cells, focusing on the interaction of the myriad voltage-,
time-, and calcium-dependent channels, with some suggestions as to
possible computational roles of these mechanisms:

"Modelling the Somatic Electrical Response of Hippocampal
Pyramidal Neurons"


MIT Artificial Intelligence Laboratory Technical Report, AI-TR 1161

Also, there is a brief paper on this in the first NIPS proceedings (1987):

"Simulations Suggest Information Processing Roles for the Diverse
Currents in Hippocampal Neurons"


Lyle Borg-Graham, MIT Center for Biological Information Processing
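
[[ Editor's sketch: for readers unfamiliar with conductance-based
modelling, the flavor of such simulations can be shown in a few lines of
Python. This is a minimal single-compartment sketch with a leak and one
voltage-dependent potassium channel, integrated by forward Euler; all
parameters are illustrative and are not taken from the reports above. ]]

import math

def simulate(T=100.0, dt=0.01):
    """Single compartment: C dV/dt = I_inj - g_L*(V-E_L) - g_K*n*(V-E_K)."""
    C, g_L, E_L = 1.0, 0.1, -65.0       # capacitance and leak channel
    g_K, E_K = 2.0, -80.0               # one voltage-dependent K channel
    V, n = -65.0, 0.0                   # membrane voltage, gating variable
    I_inj = 1.5                         # constant injected current
    trace = []
    for _ in range(int(T / dt)):
        n_inf = 1.0 / (1.0 + math.exp(-(V + 30.0) / 10.0))   # steady state
        n += dt * (n_inf - n) / 5.0     # first-order gating, tau = 5 ms
        I_ion = g_L * (V - E_L) + g_K * n * (V - E_K)
        V += dt * (I_inj - I_ion) / C   # forward Euler step
        trace.append(V)
    return trace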

------------------------------

Subject: Summary: Getting confidence measures, probabilities, back-prop, etc.
From: irani@umn-cs.cs.umn.edu (Erach Irani)
Organization: University of Minnesota, Minneapolis - CSCI Dept.
Date: 26 Apr 90 00:36:39 +0000

[[ A summary of responses from a question asked many moons ago... ]]

1. There's a lot out there. If you're interested, reply and I'll get a
bibliography together for you. <I've asked for the bibliography. If
you're interested perhaps you should ask
cjoslyn@bingvaxu.cc.binghamton.edu directly>.

2. I'm not certain what you mean by these exactly, but if you mean by
confidence measures what I think you might, you might also post this to
sci.psychology, because cognitive psychologists have been concerned with
the validity of these measures for decades.... <I am posting this,
thanks>

3. I'm not sure what your biases are, but if you're interested in looking
at heuristic approaches, I recommend:

Carver, N. Evidence-Based Plan Recognition. COINS Technical Report 88-13,
February 1988, Dept. of Computer and Information Science, University of
Massachusetts, Amherst MA.

Cohen, P.R. Heuristic Reasoning about Uncertainty: An Artificial
Intelligence Approach. 1985, Morgan Kaufmann Publishers, Inc., Los Altos
CA.

4. See Wipke, W. T.; Dolata, D. P. "A Multi-Valued Logic Predicate
Calculus Approach to Synthesis Planning", in Applications of Artificial
Intelligence in Chemistry; Pierce, T.; Hohne, B., Eds.; American Chemical
Society, Symposium Series No. 306: 1986, pp 188-208.

and Dolata's thesis.

5. I don't know exactly what you are doing, but you might benefit
greatly from looking at some references on Pattern Recognition. This
stuff is written so even us CS guys can usually understand it.

6. D. Dubois & H. Prade, Possibility Theory, Plenum Press 1986. The book
explains among other things the relations and differences between
Dempster/Shafer, probability and possibility theories.

For current research, see the International Journal of Approximate
Reasoning.


7. I've decided to rely heavily on probabilistic inference -
specifically, Judea Pearl's Bayesian network approach. Far from being an
AI invention (or reinvention), probability theory has ....

I have a textbook to recommend for you. The book is: Probabilistic
Reasoning in Intelligent Systems: Networks of Plausible Inference, by
Judea Pearl, published by Morgan Kaufmann in 1988. Lots of good points
are described. [[ A small worked example appears at the end of this
message. ]]

Another good reference is: Uncertainty in Artificial Intelligence, which
is a series put out by Morgan Kaufmann. Peter Cheeseman and others have
published critiques of fuzzy logic and Dempster-Shafer. Other articles
defending these approaches are also included. The book from this series
that I've read gave me a broad view of the approaches available, and I
recommend you take a look at it. Pearl had an article in the book I read,
but it wasn't nearly as good as his book at introducing his approach.

8. I did get references to 2 programs. I'm following up on them. <I'll
post information on them to the net when I get it and if they do not
mind.> Also, I am posting to sci.psychology to obtain information on
evidential measures.

A few people mentioned that it wasn't very clear what I was doing. I'm
sorry. I'm doing this work as part of my thesis research and did not
want to reveal too much of it at present. <If you do want the highlights
of my thesis proposal, email me a note, and I will mail them to you when
my proposal is ready and approved. If you want to know more then, I'll
be happy to discuss it with you>
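
[[ Editor's worked example, as promised under point 7: in the two-node
case, Pearl's network update reduces to Bayes' rule. A minimal Python
sketch, with made-up numbers: ]]

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | e) for a hypothesis node H with a single evidence child E."""
    num = p_e_given_h * p_h
    return num / (num + p_e_given_not_h * (1.0 - p_h))

# prior 0.01, likelihoods 0.9 and 0.1: the posterior rises to ~0.083
print(posterior(0.01, 0.9, 0.1))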

Thanks for your replies,
erach (irani@cs.umn.edu)



------------------------------

Subject: No More Backprop Blues -- with BackPercolation
From: mgj@cup.portal.com (Mark Gregory Jurik)
Organization: The Portal System (TM)
Date: 28 Apr 90 07:51:28 +0000

Backpercolation, a complement to Backpropagation, adjusts weights on the
basis of reducing error assigned locally to each cell in the network. In
contrast, backpropagation is based on adjusting weights in proportion to
each cell's output error gradient.

Here are some answers to the more common questions I have received:

1. Although either method works fairly well alone, when combined they
provide amazing convergence. The reason is that, when combined, the
weight vectors are given both a direction (from Backprop) and a distance
(from Backperc) for their next update. [[ A sketch of the standard
Backprop half appears after this list. ]]

2. Learning rate can be fixed to 1.0 for some problems.

3. Cells employ the tanh(x) output function because the symmetry of its
output lets 0 signify "no correlation", which is intuitively appealing.
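
[[ Editor's sketch, as noted under point 1: BackPercolation itself is
available only through the report below, so this shows just the standard
Backprop half -- one gradient step for a one-hidden-layer net with tanh
units, with the learning rate fixed at 1.0 as in point 2. The shapes and
names are illustrative. ]]

import numpy as np

def backprop_step(x, t, W1, W2, lr=1.0):
    # forward pass; tanh output is symmetric, so 0 can mean "no correlation"
    h = np.tanh(W1 @ x)
    y = np.tanh(W2 @ h)
    # output error gradient, chained through the tanh derivative (1 - out^2)
    d_y = (y - t) * (1.0 - y ** 2)
    d_h = (W2.T @ d_y) * (1.0 - h ** 2)
    # gradient-descent updates: Backprop supplies the direction
    W2 -= lr * np.outer(d_y, h)
    W1 -= lr * np.outer(d_h, x)
    return W1, W2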

I am looking for adventurous souls willing to experiment with the
algorithm, and maybe publish results, in exchange for keeping me informed
about the algorithm's performance.

Domestic requests for the preliminary report: Send a self-addressed
stamped envelope to PO Box 2379, Aptos, CA 95001. Overseas requests: Skip
the SASE; I will air-mail it to you. Sorry, the report will not be
e-mailed.

Mark Jurik

------------------------------

Subject: Looking for Genesis/Exodus
From: bmcprog@bru.mayo.edu (Bruce Cameron)
Organization: Mayo Foundation Biotechnology Computing Resource
Date: 08 May 90 15:05:51 +0000

I am trying to locate a neural-net simulator that runs under X. I believe
there is a package available called Genesis/Exodus. Does anyone know
where I could get this via anonymous ftp? Many thanks for your help.

--Bruce
----------------------------------------------------
Bruce M. Cameron bmc@bru.mayo.edu
Medical Sciences 1-14 (507) 284-3288
Mayo Foundation WD9CKW
Rochester, MN 55905
----------------------------------------------------

------------------------------

Subject: Re: Looking for Genesis/Exodus
From: bmcprog@bru.mayo.edu (Bruce Cameron)
Organization: Mayo Foundation Biotechnology Computing Resource
Date: 09 May 90 19:16:53 +0000


I would like to thank all those who responded to my previous post. The
response was quick and overwhelming.

A summary of publicly available neural net simulators follows:

1) Genesis & Xodus:
available from genesis.cns.caltech.edu (131.215.135.64) on an
as-is basis. You will be required to register for ftp access.
To do so, login as "genesis" and follow the instructions.


2) Connectionist Simulator:
available from cs.rochester.edu. SunView based interface

3) UCLA SFINX:
available from retina.cs.ucla.edu as sfin_v2.0.tar.Z. Requires
a color display?

4) SunNet:
available from boulder.colorado.edu or try 128.138.240.1 as
SunNet5.5.tar.Z.


----------------------------------------------------
Bruce M. Cameron bmc@bru.mayo.edu
Medical Sciences 1-14 (507) 284-3288
Mayo Foundation WD9CKW
Rochester, MN 55905
----------------------------------------------------

------------------------------

Subject: Re: Looking for Genesis/Exodus
From: smagt@fwi.uva.nl (Patrick van der Smagt)
Date: 10 May 90 06:59:45 +0000

>A summary of publicly available neural net simulators follows:
>
>2) Connectionist Simulator:
> available from cs.rochester.edu. SunView based interface

The new version (4.2) also runs with X windows.

Patrick van der Smagt

------------------------------

Subject: request for help - NNs in signal recognition
From: mjb@goanna.oz.au (Mat Boek)
Organization: Comp Sci, RMIT, Melbourne, Australia
Date: 16 May 90 03:12:25 +0000



I am about to do some work on using neural networks to recognise and
identify different types of continuous signals (sinusoidal, square,
sawtooth, etc.), independent of amplitude and frequency. I would
appreciate any help in the way of suggestions, and especially references
to any similar work.
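
[[ Editor's sketch: one common route to amplitude- and frequency-
independent waveform recognition is to classify ratios of harmonic
magnitudes, which are invariant to overall gain and, for ideal periodic
signals, to the fundamental frequency. A hypothetical feature extractor
in Python (the function and its parameters are assumptions, not from the
post): ]]

import numpy as np

def harmonic_features(signal, n_harmonics=4):
    """Ratios of harmonic magnitudes to the fundamental, from the FFT."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    f0 = int(np.argmax(spec[1:])) + 1          # bin of the fundamental
    top = len(spec) - 1
    return np.array([spec[min(k * f0, top)] / spec[f0]
                     for k in range(2, n_harmonics + 2)])

# A sine gives near-zero ratios; a square wave gives roughly
# (0, 1/3, 0, 1/5). Such vectors can feed a small backprop classifier.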

Thanks in advance,

Mat.

------------------------------

Subject: Student looking for simulators
From: frankt@cs.eur.nl (Frank van Tol)
Organization: Erasmus Universiteit Rotterdam, dept. CS (Informatica)
Date: 16 May 90 09:17:53 +0000

Student(s) looking for NN simulators.

I'm looking for (sources of) NN simulators that make it possible to set
up a net, train it, 'ask' it some questions, modify the net (e.g. add a
node, manually adjust a weight), and go back to training to start all
over again with the modified net. Currently I have a program for the
IBM PC, but it can only train, and it is very slow. So C sources for Unix
or the PC would be welcome.

-Frank van Tol
frankt@cs.eur.nl
bitol@hroeur5.bitnet

------------------------------

Subject: Where can I get: Farmer, Sidorowich, "Predicting Chaotic Time Series"?
From: rca@cs.brown.edu (Ronald C.F. Antony)
Organization: Brown University Department of Computer Science
Date: 16 May 90 23:40:40 +0000

I need some help in getting the article mentioned above. I found it
referenced in the paper by Lapedes and Farber, "Nonlinear Signal
Processing Using Neural Networks: Prediction and System Modelling".

If anyone could tell me where I can get the paper by Farmer and
Sidorowich, please mail me, or even better, send me a copy at the
following address:

Ronald C.F. Antony
Brown University
P.O.Box #234
Providence, RI 02912-0234

Thanks a lot!

Ronald
-----------------------------------------------------------------------------
"The reasonable man adapts himself to the world; the unreasonable one persists
in trying to adapt the world to himself. Therefore all progress depends on the
unreasonable man."
Bernard Shaw | rca@cs.brown.edu or antony@browncog.bitnet

------------------------------

Subject: Re: Where can I get: Farmer, Sidorowich, "Predicting Chaotic Time Series"?
From: rca@cs.brown.edu (Ronald C.F. Antony)
Organization: Brown University Department of Computer Science
Date: 19 May 90 17:16:09 +0000

Here is the reference to the article I'm looking for:

D. Farmer, J. Sidorowich,
preprint "Predicting Chaotic Time Series",
Los Alamos National Lab,
5/87

This is the paper I found the reference in:

A. Lapedes, R. Farber,
preprint "Nonlinear Signal Processing Using Neural Networks:
Prediction and System Modelling",
Los Alamos National Lab,
7/87

Ronald
------------------------------------------------------------------------------
"The reasonable man adapts himself to the world; the unreasonable one persists
in trying to adapt the world to himself. Therefore all progress depends on the
unreasonable man."
Bernard Shaw | rca@cs.brown.edu or antony@browncog.bitnet

------------------------------

Subject: Parallel NN-simulator
From: crwth@kuling.UUCP (Olle Gällmo)
Organization: DoCS, Uppsala University, Sweden
Date: 18 May 90 08:39:11 +0000

Hi there!

I am about to design a parallel computer (conventional microprocessors/
controllers) dedicated to simulating neural networks. This is a request
for ideas, references to articles, and answers to questions of the
following kind:

What is your idea of a suitable architecture?
How do you cope with the high fan-in/fan-out of the nodes?
Communication: Bus? Serial?
How much memory (to store weights in) would each processor
need? Lower limit?
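
[[ Editor's note: the weight-memory question has a quick back-of-envelope
answer. Assuming 16-bit fixed-point weights, a processor holding P nodes
of fan-in F needs P x F x 2 bytes; e.g. 256 nodes of fan-in 256 need
128 KB, double that if the learning rule keeps a momentum term per
weight. In Python: ]]

def weight_memory_bytes(nodes_per_proc, fan_in, bytes_per_weight=2):
    # 16-bit fixed-point weights assumed; the numbers are illustrative
    return nodes_per_proc * fan_in * bytes_per_weight

print(weight_memory_bytes(256, 256))    # 131072 bytes = 128 KB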

Please answer by Email to crwth@DoCS.UU.SE

Thank you!

/Crwth

---- "If God is perfect -- why did He create discontinous functions?" ----
Olle Gallmo, Dept. of Computer Systems, Uppsala University
Snail Mail: Box 520, S-751 20 Uppsala, Sweden
Email: crwth@DoCS.UU.SE

------------------------------

Subject: Neural_nets in Teletraffic Science/Engineering?
From: jhaynes@surf.sics.bu.oz (John Haines)
Organization: School of Info. & Computing Science, Bond University, Australia.
Date: 24 May 90 02:20:12 +0000


Does anyone have any details of any Neural Network applications in the
area of Teletraffic Science/Engineering?

Thank you

John Haynes
Computing Sciences
Bond University
Gold Coast, Australia, 4229.

------------------------------

Subject: Share room at IJCNN San Diego?
From: sdd.hp.com!zaphod.mps.ohio-state.edu!samsung!umich!umeecs!msi-s0.msi.umn.edu!mv10801@ucsd.edu (Jonathan Marshall [Learning Center])
Organization: Center for Research in Learning, Perception, and Cognition
Date: 24 May 90 16:48:29 +0000

I am looking for a roommate to share a double hotel room at the IJCNN
(International Joint Conference on Neural Networks) in San Diego, June
17-21. I already have a (cancellable) reservation at the San Diego
Marriott; the cost to each of us would be $65/night plus tax. I can
be reached at mv10801@uc.msc.umn.edu, or at 612-626-1565 (office) or
612-724-5742 (home). Thanks a lot!

o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o
o o
o Jonathan Marshall mv10801@uc.msc.umn.edu o
o Center for Research in Learning, Perception, and Cognition o
o 205 Elliott Hall, University of Minnesota, Minneapolis, MN 55455, USA o
o o
o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o

------------------------------

Subject: Connectionism, Explicit Rules, and Symbolic Manipulation
From: hadley@fornax.UUCP (Bob Hadley)
Organization: School of Computing Science, SFU, Burnaby, B.C. Canada
Date: 24 May 90 19:43:59 +0000


Connectionism, Rule Following, and Symbolic Manipulation

by

Robert F. Hadley

School of Computing Science
Simon Fraser University
Burnaby, Canada V5A 1S6
hadley@cs.sfu.ca


Abstract

At present, the prevailing Connectionist methodology for representing
rules is to implicitly embody rules in "neurally-wired" networks. That
is, the methodology adopts the stance that rules must either be
hard-wired or "trained into" neural structures, rather than represented
via explicit symbolic structures. Even recent attempts to implement
production systems within connectionist networks have assumed that
condition-action rules (or rule schema) are to be embodied in the
structure of individual networks. Such networks must be grown or trained
over a significant span of time. However, arguments are presented herein
that humans sometimes follow rules which are very rapidly assigned
explicit internal representations, and that humans possess general
mechanisms capable of interpreting and following such rules. In
particular, arguments are presented that the speed with which humans are
able to follow rules of novel structure demonstrates the existence of
general-purpose rule following mechanisms. It is further argued that the
existence of general-purpose rule following mechanisms strongly indicates
that explicit rule following is not an isolated phenomenon, but may well
be a pervasive aspect of cognition. The arguments presented here are
pragmatic in nature, and are contrasted with the kind of arguments
developed by Fodor and Pylyshyn in their recent, influential paper.

------------------------------

Subject: Re: Back Propagation for Training Every Input Pattern with Multiple Output
From: eurtrx!schrein@relay.EU.net (SKBS)
Date: Sun, 27 May 90 14:07:49 +0200


See:
Schreinemakers, J.F. and Touretzky, D.S. Interfacing a neural network with
a rule-based reasoner for diagnosing mastitis. In Maureen Caudill, editor,
Proceedings of the International Joint Conference on Neural Networks,
volume 2, pages 487-491, Hillsdale, NJ, January 1990. IEEE & INNS,
Lawrence Erlbaum Associates, Inc.


------------------------------

Subject: Re: Back-propagation/NN benchmarks
From: dank@moc.Jpl.Nasa.Gov (Dan Kegel)
Date: Mon, 28 May 90 09:03:23 -0700

For the stout of heart, there are three very difficult standard problem
sets that have been used to compare pattern recognition systems.

1. The Texas Instruments / NBS Spoken Digit Database
This is a database of hundreds of people reciting strings of numbers,
together with a machine readable list of what they said.
We used this to cut our teeth when I worked at a speech recognition
company. It is available from NTIS on VHS videocasette; the speech is
in PCM format, which is only readable by special audiophile equipment, and
the labels are in 1200-baud modem format.
I think a digitized copy in 'tar' format on 8mm videotape exists, and
is much easier to use than the VHS version, but I don't know if it
is publicly available.

2. The DARPA 1000-word continuous speech task
Don't even think about it. This is just barely possible to tackle
with state-of-the-art statistical pattern recognizers. It consists
of a hundred people reciting sentences from a 1000-word vocabulary
as if they were asking a computer for information on the status of
ships in a harbour. For more info, look up Kai-Fu Lee's "Sphinx"
speech recognition system papers in the 1988 & 1989 ICASSP proceedings.

3. The (NTIS?) handwritten English letter database
I haven't seen this one myself, but I think it consists of digitized
images of handwriting. Like the NBS spoken digit database, it is
a very large real-world task.

To tackle (1) or (2) with hidden Markov modelling, you should have at
least a 10-mips machine and a gigabyte of disk available to you. I have
no idea how much muscle you would need to tackle them with neural
networks, but the thought scares me.

Dan Kegel (dank@moc.jpl.nasa.gov)

------------------------------

Subject: Neural Nets and forecasting
From: ilo0005@discg1.UUCP (cherie homaee)
Organization: Defense Industrial Supply Center, Philadelphia, Pa
Date: 29 May 90 18:26:56 +0000


Has anyone used neural nets for forecasting? If so have you used any
other neural paradigm other than back-propagation?

------------------------------

Subject: Re: Neural Nets and forecasting
From: uflorida!beach.cis.ufl.edu!zt (tang)
Organization: UF CIS Department
Date: 30 May 90 22:12:56 +0000

>Has anyone used neural nets for forecasting? If so have you used any
>other neural paradigm other than back-propagation?

We have done some experiments with time series forecasting using back-
propagation. Our results show that neural nets perform well compared with
traditional methods, especially for long-term forecasting. Our initial
report will appear in the proceedings of the First Workshop on Neural
Networks, Auburn, 1990.

See also "Neural Networks as Forecasting Experts: An Empirical Test," by
Sharda and Patil, in the Proceedings of the IJCNN Meeting, Washington,
1990.
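
[[ Editor's sketch: the usual way to cast forecasting as a backprop task
is a sliding window, where each training pair maps the last w values of
the series onto the next one. A minimal Python version (the window width
and series name are placeholders): ]]

import numpy as np

def make_windows(series, width):
    """Build (last `width` values -> next value) pairs for a backprop net."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

# e.g. X, y = make_windows(monthly_data, 12); the pairs can then train a
# net like the one sketched under "No More Backprop Blues" above.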

------------------------------

Subject: Help needed on Counter-Propagation networks
From: Barry Kristian Ellingsen <ellingsen@cc.uib.no>
Date: 30 May 90 21:35:29 +0200

I have just started working on a thesis on neural networks. As I
haven't been able to find much information on counter-propagation
networks, I would greatly appreciate it if any of you could tell me
where to find some.

Send email to: ellingsen@cc.uib.no (Robert Heggdal)
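
[[ Editor's sketch: for other readers in the same position,
Hecht-Nielsen's forward-only counter-propagation network is a Kohonen
winner-take-all layer followed by a Grossberg outstar layer. A minimal
training step in Python, with illustrative learning rates: ]]

import numpy as np

def cpn_step(x, t, W_koh, W_out, a=0.1, b=0.1):
    j = np.argmin(((W_koh - x) ** 2).sum(axis=1))   # winning prototype
    W_koh[j] += a * (x - W_koh[j])   # Kohonen: move prototype toward input
    W_out[j] += b * (t - W_out[j])   # outstar: move output toward target
    return W_koh, W_out

def cpn_predict(x, W_koh, W_out):
    j = np.argmin(((W_koh - x) ** 2).sum(axis=1))
    return W_out[j]                  # table lookup through the winning unit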


------------------------------

Subject: TR - BP with Dynamic Topology
From: leff@DEPT.CSCI.UNT.EDU ("Dr. Laurence L. Leff")
Organization: University of North Texas in Denton
Date: 21 May 90 05:03:59 +0000



The technical report

"BACKPROAGATION with DYNAMIC TOPOLOGY and SIMPLE ACTIVATION FUNCTIONS"

describes an algorithm within the BP framework which uses very simple
activation functions and builds a tree-like topology as the net
learns.
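
[[ Editor's note: the report itself is the reference for the actual
algorithm; purely as a generic illustration of the "grow as you learn"
idea, a constructive training loop often looks like the sketch below,
where the net object, train_epoch, and add_unit are hypothetical: ]]

def train_with_growth(net, data, target=0.01, patience=50):
    best, stall = float("inf"), 0
    while True:
        err = net.train_epoch(data)     # one backprop pass (hypothetical API)
        if err < target:
            return net
        if err < best - 1e-6:           # still making progress
            best, stall = err, 0
        else:
            stall += 1
        if stall >= patience:           # stuck: extend the topology
            net.add_unit()              # hypothetical: grows the tree
            best, stall = err, 0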

It is available by anonymous ftp (see below), by emailing
guy@cs.flinders.oz.au, or by writing to:

Guy Smith,
Discipline of Computer Science,
Flinders University,
Adelaide 5042,
AUSTRALIA.

It's written in LaTeX. Email me if you'd like a LaTeX copy.

Guy Smith.

------------------- ftp session ----------------

unix% ftp flinders.cs.flinders.oz.au
Name (flinders:guy): anonymous
ftp> cd pub
ftp> binary
ftp> get GrowNet.dvi.Z
ftp> quit

unix% uncompress GrowNet.dvi.Z

unix% lpr -d GrowNet.dvi

------------------------------

Subject: IJCNN Reminder
From: Louis Steven Biafore <biafore%cs@ucsd.edu>
Date: Mon, 28 May 90 20:51:47 -0700


............................................................
International Joint Conference on Neural Networks
San Diego, CA. - June 17-21, 1990

The 1990 IJCNN is sponsored by the IEEE Council on Neural Networks and
the International Neural Network Society (INNS). The IJCNN will cover
the full spectrum of neural computing from theory such as neurodynamics
to applications such as machine vision. Meet leading experts and
practitioners during the largest conference in the field. For further
information contact Nomi Feldman, Meeting Management, 5665 Oberlin Dr.,
Suite 110, San Diego, CA 92121. Telephone (619) 453-6222.

Registration

The conference registration fee includes admission to all sessions,
exhibit area, Sunday Welcome Reception and Wednesday Party. TUTORIALS ARE
NOT INCLUDED. The registration fee is $280. Single day registration is
available for $110 (proceedings not included). Full-time students may
attend for $50, proceedings and Wednesday Party not included.

Schedule of Events

Sunday 17 June TUTORIALS (8 am - 6 pm)
RECEPTION (6 pm - 8 pm)
INDUSTRY PANEL (8 pm - 10 pm)

Monday 18 June TECHNICAL SESSIONS (8 am - 5 pm)
BIOENGINEERING PANEL (12 pm - 1:30 pm)
PLENARY SESSIONS (8 pm - 10 pm)

Tuesday 19 June TECHNICAL SESSIONS (8 am - 5 pm)
PLENARY SESSIONS (8 pm - 10 pm)

Wednesday 20 June TECHNICAL SESSIONS (8 am - 5 pm)
PARTY (6 pm - 8 pm)
GOVERNMENT PANEL (8 pm - 10 pm)

Thursday 21 June TECHNICAL SESSIONS (8 am - 5 pm)

Tutorials

Thirteen tutorials are planned for Sunday 17 June.

Adaptive Sensory-Motor Control - Stephen Grossberg
Associative Memory - Bart Kosko
Chaos for Engineers - Leon Chua
Dynamical Systems Review - Morris Hirsch
LMS Techniques in Neural Networks - Bernard Widrow
Neural Network Applications - Robert Hecht-Nielsen
Neurobiology I: Neurons and Simple Networks - Walter Freeman
Neurobiology II: Advanced Networks - Allen Selverston
Optical Neurocomputers - Demetri Psaltis
Reinforcement Learning - Andrew Barto
Self-Organizing Feature Maps - Teuvo Kohonen
Vision - John Daugman
VLSI Technology and Neural Network Chips - Lawrence Jackel

Exhibits

Exhibitors will present innovations in neural networks,
including neurocomputers, VLSI neural networks, implementations,
software systems and applications. IJCNN is the neural network
industry's largest tradeshow. Vendors may contact Richard Rea
at (619) 222-7447 for additional information.

Accommodations

IJCNN 90 will be held at the San Diego Marriott Hotel on San
Diego Bay (619) 234-1500.

............................................................

Please direct questions to the appropriate individual as specified above
(please don't send questions to me).

S. Biafore - UCSD

------------------------------

Subject: AI Workshop
From: dietrich@bingvaxu.cc.binghamton.edu (Eric Dietrich)
Organization: SUNY Binghamton, NY
Date: 22 May 90 20:22:52 +0000



Below is the program (as of May 22) for an upcoming workshop
on the scientific aspects of artificial intelligence. Interested parties
are invited to attend and contribute to the discussion. The workshop can
accommodate about 20 more participants. All of the paper slots are taken,
however.

---------------------------------------------------------------------
ARTIFICIAL INTELLIGENCE:
AN EMERGING SCIENCE OR DYING ART FORM?

June 21 - 23, 1990
Dept. of Philosophy
SUNY Binghamton
Binghamton, NY

Sponsored by AAAI,
The SUNY Research Foundation,
Taylor and Francis - Publishers of the
Journal of Experimental and Theoretical AI, and
IBM

Program

Thursday, June 21, 1990

8:00 - 8:30 Breakfast

8:30 - 8:45 Welcome and Introductory Remarks:
Eric Dietrich (Workshop Chairperson)

Session 1:

8:45 - 9:45 Bill Rapaport, Computer Science, SUNY - Buffalo
"What is Cognitive Science"

10:00 - 10:30 J. Terry Nutter, Computer Science, Virginia Tech
"AI, Science, and Intellectual Progress:
Preliminary remarks and arguments"


10:45 - 11:00 Coffee Break

11:00 - 11:30 Kevin Korb, Philosophy, Indiana University
"Searle's AI Program"

11:45 - 12:15 Richard Wyatt, West Chester University
"Understanding Machines"

12:30 - 1:30 Lunch


Session 2:

1:30 - 2:30 Clark Glymour, Philosophy, Carnegie Mellon Univ.
"Artificial Intelligence and Computation Theory"

2:45 - 3:15 Peter Turney, National Research Council, Canada
"The Gap between Abstract and Concrete Results
in Machine Learning"


3:30 - 3:45 Coffee Break

3:45 - 4:15 Peter Kugel, Computer Science, Boston College
"Is It Time to Replace Turing's Test?"

4:30 - 5:00 Tom Bylander, Computer Science, Ohio State Univ.
"Tractability and Artificial Intelligence"

6:30 Dinner


Friday, June 22, 1990

8:00 - 8:45 Breakfast

Session 3:

8:45 - 9:45 Jim Hendler, Computer Science, Univ. of Maryland
"Down with Solipsism: the Challenge to AI from
Connectionism"


10:00 - 10:30 Kihong Park, Computer Science, Univ. of South Carolina
"Some Consequences of Intelligent Systems Trying to
Design Intelligent Systems"


10:45 - 11:00 Coffee Break

11:00 - 11:30 Saul Traiger, Philosophy, Occidental College
"Solipsism, Individualism, and Cognitive Science"

11:45 - 12:15 Selmer Bringsjord, Philosophy, Rensselaer
"Is Connectionism a Challenge to Good, Old Fashioned
Artificial Intelligence?"


12:30 - 1:30 Lunch


Session 4:

1:30 - 2:30 John Sowa, IBM Systems Research
"Crystallizing Theories Out of Knowledge Soup"

2:45 - 3:15 Anthony Maida, Computer Science, Pennsylvania State
Univ.
"The Propositional Version of the Knowledge
Representation Hypothesis"


3:30 - 3:45 Coffee Break

3:45 - 4:15 Steve Downes, Philosophy, Virginia Tech
"Philosophy, Android Epistemology, and AI"

4:30 - 5:00 Jon Sticklen, Computer Science, Michigan State Univ.
"The Relationship between AI and Application Domains"

6:30 Dinner


Saturday, June 23, 1990

8:00 - 8:45 Breakfast

Session 5:

8:45 - 9:45 Susan Josephson, Philosophy, Columbus College
of Art and Design
"What Kind of Science is Artificial Intelligence?"

10:00 - 10:30 Danny Kopec, Computer Science, Univ. of Maine
"Artificial Intelligence: How to Regain Credibility
for our Discipline"


10:45 - 11:00 Break

11:00 - 11:30 Sylvia Candelaria de Ram, Computing Research Lab,
New Mexico State Univ.
"Real-world Sensors, Meaning, or Mentalese"

12:00 - 12:45 Lunch and Closing Remarks

------------------------------

End of Neuron Digest [Volume 6 Issue 36]
****************************************
