Neuron Digest   Wednesday, 23 May 1990                Volume 6 : Issue 34 

Today's Topics:
Re: Temporal Pulse Coding
no. hidden units
Re: no. hidden units
Re: no. hidden units
Re: no. hidden units
Re: no. hidden units
Re: no. hidden units
Re: no. hidden units
HICSS-Neural Nets Call for Papers
HICSS - Last Year's Titles
July 1990 Summer Workshop on Biological Neural Systems, Berkeley CA
IEE/OUG AI Colloquium ``Symbols versus Neurons?'' Advance Announcement


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Re: Temporal Pulse Coding
From: valtonen@isy.liu.se (Krister Valtonen)
Organization: Dept of EE, University of Linkoping
Date: 26 Apr 90 07:31:41 +0000

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) writes:

>Modulated pulse codes represent a very interesting way of implementing
>neural networks in silicon. At IJCNN '90 I saw a neural chip implemented
>using AND gates for multiplication and OR gates for adding signals,
>set up as random bit encodings of a given magnitude.

>Current-based methods of neural net implementation seem very promising
>right now, and current pulsing would represent a low duty-cycle
>method of implementation (which would reduce power problems).

Here at Linkoping University (Sweden) we have a project on "pulse coded"
neural networks. The "pulse coded" approach is interesting from several
points of view, such as hardware implementation, stochastic search
techniques, and possibly also from a biological point of view. We will
present an article at the IJCNN San Diego conference in June [1].

Alan Murray and colleagues have done some nice work on the subject which
is worth looking at. See, for example, [2].

[1] Valtonen, Kronander and Ingemarsson. Pulse-stream neural networks
and reinforcement learning. In "International Joint Conference on
Neural Networks, San Diego", June 1990.

[2] Murray, Brownlow, et al. Pulse-firing Neural Chips Implementing
Hundreds of Neurons. In David S. Touretzky, editor, "Advances in
Neural Information Processing Systems 2", 1990.
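
As a toy illustration of the AND/OR coding described above (a sketch
under the usual stochastic-computing reading, where a magnitude p in
[0,1] is encoded as a random bit stream firing with probability p; the
code is illustrative and not from either project):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000                     # stream length

    def encode(p):
        # magnitude p in [0,1] -> random bit stream with P(bit = 1) = p
        return rng.random(N) < p

    a, b = 0.3, 0.6
    sa, sb = encode(a), encode(b)

    # AND of two independent streams fires at rate a*b (multiplication);
    # OR fires at rate a + b - a*b (approximate addition when sparse).
    print((sa & sb).mean(), "~", a * b)
    print((sa | sb).mean(), "~", a + b - a * b)

This is why simple AND gates can stand in for multipliers on such chips.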
--
"When the shadow of a short man becomes long, the darkness is near"
- Swedish saying
Dept. of Electrical Engineering
University of Linkoping, Sweden valtonen@isy.liu.se

------------------------------

Subject: no. hidden units
From: korst@fwi.uva.nl (M.J. van der Korst (I85))
Organization: FWI, University of Amsterdam
Date: 04 May 90 15:08:54 +0000

I am looking for references on the effect of the number of hidden units
in a 3-layer BP network on learning. Especially the effect of 'too many'
hidden units on the number of local minima is of interest. Intuitively I
would say that the number of local minima increases with the number of
hidden units, but I need some backup from the literature.

Thanks

Michiel v.d. Korst
korst@fwi.uva.nl

------------------------------

Subject: Re: no. hidden units
From: eeoglesb@cybaswan.UUCP (j.oglesby eleceng pgrad)
Date: 09 May 90 13:52:27 +0000


I'm not convinced that adding extra nodes in the hidden layer either
increases the number of local minima or the likelihood of ending up
in one. Every time you add a node you increase the dimensionality of
the weight space by the fan-in of that node. This means MORE
constraints must be satisfied to form a minimum, and so minima MAY BE
less likely to occur!
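
To put a number on the counting argument (a back-of-the-envelope sketch
of my own, assuming a fully connected n_in - n_h - n_out net with bias
weights), each stationary point must satisfy one equation dE/dw = 0 per
weight:

    def n_stationarity_conditions(n_in, n_h, n_out):
        # one condition dE/dw = 0 per weight; bias weights included
        return n_h * (n_in + 1) + n_out * (n_h + 1)

    for n_h in (1, 2, 4, 8, 16):
        print(n_h, "hidden units ->", n_stationarity_conditions(2, n_h, 1))

For a 2-input, 1-output net, every extra hidden node adds four more
conditions that a candidate minimum has to meet simultaneously.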

Anybody else like to comment ?

-----------------------------------------------------------------------------
John Oglesby. UUCP : ...!ukc!pyr.swan.ac.uk!eeoglesb
Digital Signal Processing Group, JANET : eeoglesb@uk.ac.swan.pyr
Electrical Engineering Dept., Phone : +44 792 205678 Ex 4564
University of Wales, Fax : +44 792 295686
Swansea, SA2 8PP, U.K. Telex : 48358
-----------------------------------------------------------------------------

------------------------------

Subject: Re: no. hidden units
From: froncio@caip.rutgers.edu (Andy Froncioni)
Organization: Rutgers Univ., New Brunswick, N.J.
Date: 11 May 90 00:45:20 +0000


Well, I don't know too much about neural nets as such, but if they can
be viewed as a form of nonlinear interpolation, then neural nets may be
subject to some modified form of the so-called "compatibility
conditions". These arise in variational formulations (FEM) and, when
violated, lead to spurious solutions. As far as I know, the problem
sometimes gets better with the addition of nodes, but never goes away
entirely.

The conditions are described in Chapter 4 of:

Hughes, T.J.R. "The Finite Element Method: Linear Static and Dynamic
Finite Element Analysis."
Prentice-Hall, 1987.

Again, I don't know how well it extends to NN theory, though.


Andy Froncioni
froncio@caip.rutgers.edu


------------------------------

Subject: Re: no. hidden units
From: russell@minster.york.ac.uk
Organization: Department of Computer Science, University of York, England
Date: 11 May 90 16:34:32 +0000


Yeah, I'll shove in my tuppeny-worth!

Often local minima are due to an inadequate number of hidden nodes, so
that the problem cannot be represented accurately at that layer. The
solution in practical cases is to add another node or two, and the
problem goes away. So, does this mean that more nodes = fewer minima?
Maybe, maybe not. What I've just said I reckon is true, but it is in
effect a lower bound on the number of nodes required to solve the
problem satisfactorily. If we move to the region where an upper bound
may exist, i.e. loads of nodes, then the situation is more complex.

Assume we are in a domain where there are a large number of hidden
nodes (enough to `solve' the problem, based on past experience - now see
why the keywords are what they are!) but local minima are occurring.
Our nodes are doing a few things simultaneously, and minima occur when
different patterns of input trigger similar responses from the nodes.
Now, adding more nodes may reduce the number of things that each hidden
unit does at once, effectively orthogonalising their data processing and
therefore separating the different inputs. In this scenario, the network
has moved towards a 1:1 mapped associative memory.

However, adding more nodes could open up the possibility of producing
similar outputs for a wider range of inputs, leading to more local
minima...

I think that both situations can occur, but that the second scenario is
much less likely than the first, and so the overall effect is that more
nodes = fewer minima.

I think I agree with John Oglesby. Another way of viewing it is to say
that the hidden units act as feature detectors. The more features we
detect in our input, the less `similar' the inputs are, and so the less
likely it is that local minima will occur during learning.
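
A quick numerical probe of that `similar responses' idea (a toy of my
own, using random untrained weights rather than a trained net) is to
measure the average overlap between hidden activation vectors for
distinct inputs as the layer widens:

    import numpy as np

    rng = np.random.default_rng(1)

    def mean_overlap(n_h, n_in=16, n_patterns=20):
        # average |cosine similarity| between hidden responses to
        # distinct random +/-1 input patterns (weights untrained)
        X = rng.choice([-1.0, 1.0], size=(n_patterns, n_in))
        W = rng.normal(size=(n_in, n_h))
        H = np.tanh(X @ W)
        H /= np.linalg.norm(H, axis=1, keepdims=True)
        i, j = np.triu_indices(n_patterns, k=1)
        return np.abs((H @ H.T)[i, j]).mean()

    for n_h in (2, 4, 16, 64, 256):
        print(n_h, "hidden units:", round(mean_overlap(n_h), 3))

The overlap falls as the layer widens (down to a floor set by how
correlated the inputs themselves are), which is at least consistent with
the feature-detector picture, though it says nothing about trained nets.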

Perhaps some real work needs to be done.

Russell.

____________________________________________________________
Russell Beale, Advanced Computer Architecture Group,
Dept. of Computer Science, University of York, Heslington,
YORK. YO1 5DD. UK. Tel: [044] (0904) 432762

russell@uk.ac.york.minster JANET
russell%minster.york.ac.uk@nsfnet-relay.ac.uk ARPA
..!ukc!minster!russell UUCP
russell@minster.york.ac.uk eab mail
____________________________________________________________
these opinions may be mine, but read the keywords too!!

------------------------------

Subject: Re: no. hidden units
From: ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date: 14 May 90 21:07:23 +0000


I think that unless you are dealing with a very low dimensional weight
space (e.g. the simplest XOR problem with a 2-2-1 net), local minima are
not a significant problem. Long valleys and other weight space features
which may slow down the search, but not stop it, are a common
occurrence, however. But if you choose a more reasonable search strategy
(e.g. conjugate gradient methods), learning speeds will increase.

In general, my experience is that more hidden units allow learning to
go faster (in terms of epochs), and yield a less general result than
fewer hidden units (which take longer to teach, but arrive at a more
"general" [i.e. not so finely tuned around the test inputs] result).

-Thomas Edwards

------------------------------

Subject: Re: no. hidden units
From: mathew@jane.Jpl.Nasa.Gov (Mathew Yeates)
Organization: Image Analysis Systems Grp, JPL
Date: 15 May 90 17:02:05 +0000

The number of nodes has nothing to do with the number of constraints.
Actually, in backpropagation, there are no constraints in the usual use
of the word. All we have is a function (of the weights) to be minimized,
where the weights can take on any value.

If we use a more liberal interpretation of the word, then the patterns
themselves impose "soft constraints" (vs. "hard" constraints like Ax=b)
on the problem. Yes, you are right: the more of these constraints we
have, the less likely we are to end up in a local minimum. This idea is
exploited by Yu and Simmons in "Extra Output Biased Learning", to appear
in the Proceedings of the International Joint Conference on Neural
Networks, 1990. By enlarging the output layer and forcing the network to
learn a more difficult mapping, they avoid local minima. I haven't
actually tried this but it makes sense. Their paper is available through
anonymous ftp from cheops (?). This is the same TR mentioned earlier on
this board.
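
To show the shape of the idea only (the choice of extra targets below is
mine, purely illustrative, and not taken from Yu and Simmons' paper):
enlarge the target vectors so the net must learn a harder, more
constrained mapping, then ignore the extra units at test time.

    import numpy as np

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])   # original XOR targets

    extra = np.eye(4)                # one extra bit naming each pattern
    T_aug = np.hstack([T, extra])    # now train on 1 + 4 output units

    print(T_aug)
    # Train an ordinary BP net on (X, T_aug); at test time read only the
    # first output unit and discard the rest.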

-mathew

------------------------------

Subject: Re: no. hidden units
From: eeoglesb@cybaswan.UUCP (j.oglesby eleceng pgrad)
Date: 16 May 90 09:41:49 +0000

> In general, my experience is that more hidden units allow learning to
>go faster (in terms of epochs), and yield a less general result
>than fewer hidden units (which take longer to teach, but arrive at
>a more "general" [i.e. not so finely tuned around the test inputs] result).

I agree. Local minima are not a problem for *REAL WORLD* problems;
you can always escape in at least one direction. If you have a >LARGE<
number of hidden units then you are less likely to get a >GOOD< solution.

The next question is: how do you improve performance, if adding hidden
units is not the answer?

John Oglesby.

------------------------------

Subject: HICSS-Neural Nets Call for Papers
From: david@uhccux.uhcc.hawaii.edu (David Lassner)
Organization: University of Hawaii
Date: 10 May 90 00:54:33 +0000

Posted on behalf of Bill Remus (cbadwre@uhccvm.uhcc.hawaii.edu):



CALL FOR PAPERS
HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24


NEURAL NET APPLICATIONS IN BUSINESS II


KAILUA-KONA, HAWAII - JANUARY 9-11, 1991


The Emerging Technologies and Applications Track of HICSS-24 will
contain a special set of sessions focusing on a broad selection
of topics in the area of Neural Net Applications in Business.
The presentations will provide a forum to discuss new advances in
these applications.

Papers are invited that may be theoretical, conceptual, tutorial,
or descriptive in nature. Of special interest, however, are
papers detailing solutions to practical problems. Those papers
selected for presentation will appear in the Conference
Proceedings, which are published by the Computer Society of the
IEEE. HICSS-24 is sponsored by the University of Hawaii in
cooperation with the ACM, the IEEE Computer Society, and the
Pacific Research Institute for Information Systems and Management
(PRIISM). Submissions are solicited in the following areas:

(1) The application of neural nets to model business tasks
performed by people (e.g. Dutta and Shekhar's paper on applying
neural nets to rating bonds, ICNN, 1988, Vol. II, pp. 443-450)

(2) The development of neural nets to model human decision tasks
(e.g. Gluck and Bower, Journal of Experimental Psychology:
General, 117(3), 227-247)

(3) The application of neural nets to improving modeling tools
commonly used in business (e.g. neural networks to perform
regression-like modeling)

(4) The embedding of neural nets in commercial products (e.g.
OCR scanners)

Our order of preference is from (1) to (4) above. Papers
which detail actual usage of neural networks are preferred
to those which only propose uses.

INSTRUCTIONS FOR SUBMITTING PAPERS: Manuscripts should be
12-26 typewritten, double-spaced pages in length. Do not
send submissions that are significantly shorter or longer
than this. Each manuscript will be refereed. Manuscripts
should have a title page that includes the title of the
paper, full name(s) of its author(s), affiliation(s),
complete mailing and electronic address(es), telephone
number(s), and a 300-word abstract of the paper.


DEADLINES

A 300-word optional abstract may be submitted by April 30, 1990,
by email or mail. (If there is no reply to email within 7 days,
send by U.S. mail also.)

Feedback to author concerning abstract by May 31, 1990.

Six paper copies of the manuscript are due by June 26, 1990.

Notification of accepted papers by September 1, 1990.

Accepted manuscripts, camera-ready, are due by October 1, 1990.

SEND SUBMISSIONS AND QUESTIONS TO:

Prof. William Remus OR Prof. Lance Eliot
College of Business System Sciences Department
University of Hawaii University of Southern California
2404 Maile Way P.O. Box 30041
Honolulu, HI 96822 USA Long Beach, CA 90853 USA
Tel.: (808)948-7608 (213)439-7021
EMAIL: CBADWRE@UHCCVM.BITNET ELIOT@ECLA.USC.EDU
FAX: (808)942-1591


David Lassner, University of Hawaii Office of Information Technology
Internet: david@uhccux.uhcc.hawaii.edu Bitnet: david@uhccux
Voice: 808/948-5023 Fax: 808/948-5025

------------------------------

Subject: HICSS - Last Year's Titles
From: david@uhccux.uhcc.hawaii.edu (David Lassner)
Organization: University of Hawaii
Date: 10 May 90 00:56:37 +0000

Posted on behalf of Bill Remus (cbadwre@uhccvm.uhcc.hawaii.edu):

The Hawaii International Conference on System Sciences has many tracks
of computer-related papers, including tracks on artificial neural
networks. Since these proceedings are a usual place for ANN researchers
to look for ANN papers, a list of last year's ANN papers is included
below. Both tracks below will be offered again in 1991. For past papers,
see the proceedings (where the papers appear) published by IEEE Press.
For more information on this conference, email CBARBED@UHCCVM.BITNET.


NEURAL NET APPLICATIONS IN BUSINESS

"Forecasting Country Risk Ratings Using a Neural Network," by Jean-Claude
Cosset and Jean Roy

"Neural Network Pattern Learning for Classifying Administrators from
Examples,"
Alvin J. Surkan, Fred C. Wendel, and Sang M. Lee

"Neural Network Models of Managerial Judgement" by William Remus and Tim Hill

"Fault Tolerant Hashing and Information Retrieval Using Back Propagation" by
Kejitan Dontas, Jayshree Sarma, Padmini Srinivasan, and Harry Wechsler

"Imputation of the Algorithms for Certainty Factor Manipulation by Individuals
Using Neural Networks and Regression: A Comparison to Expert System Shells,"

by William Rybolt, David Kopcso, and Leo L. Pipino

"Neuronet-based Decision Making," by L.S. Hsu, H.H. Teh, S.C. Chan and K.F.
Loe


NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES

Minitrack Coordinators: O.K. Ersoy and H.H. Szu

"Speaker-Independent Vowel Recognition: Comparison of Backprogagation and
Trained Classification Trees"
by R.A. Cole, Y.K. Muthusamy, L. Atlas, T. Leen,
and M. Rudnick

"Model of Auto-associative Memory That Stores and Retrieves Data Regardless of
Thier Orthogonality, Randomness, or Size"
by D. Bairaktaris

"An Approach for Solving the Parameter Setting Problem" by K. Fleischer, J.
Platt, and A. Barr

"Parallel, Self-organizing Hierarchical Neural Networks" by O.K. Ersoy and D.
Hong

"Nonlinear Mapping with Minimal Supervised Learning" by V.V. Tolat and A.M.
Peterson

"Mapping of Neural Networks on Honeycomb Architectures: Area Analysis" by V.
Milutinovic, V. Upatising, and S.H. Zak

"The Self Organizing Neural Network Algorithm: Adapting Structure for Optimum
Supervised Learning"
by M.F. da M. Tenorio

"A Self-organization Architecture for Clustering Analysis" by F.Y. Shih and J.
Moh

"CAPS: A Connectionist Architecture for Production Systems" by A.S. Bhogal,
R.E. Seviora, and M.I. Elmarsry


David Lassner, University of Hawaii Office of Information Technology
Internet: david@uhccux.uhcc.hawaii.edu Bitnet: david@uhccux
Voice: 808/948-5023 Fax: 808/948-5025

------------------------------

Subject: July 1990 Summer Workshop on Biological Neural Systems, Berkeley CA
From: neural@bistro.berkeley.edu (Neural System Conference)
Organization: U.C. Berkeley, NASA Ames, Lawrence Livermore Labs
Date: 14 May 90 21:02:54 +0000

* * * Summer 1990 Workshop/Conference Announcement (Revised 5/14) * * *

Title: Analysis and Modeling of Neural Systems
Location: Clark Kerr Campus, Berkeley CA
Dates: July 25-27, 1990 (Wed-Fri, 3 full days)
Sponsors: Institute for Scientific Computing Research, LLNL;
NASA Ames Research Center; University of California at Berkeley

Poster abstract submission deadline extended to: June 15th, 1990

This workshop will focus on quantitative analyses of results from
recent neurophysiologic investigations into the structure and operation
of nerve cells and systems. Twenty-two invited speakers will provide
reviews of their respective fields and summaries of their own recent
research. There will be no parallel sessions and the workshop will be
structured to stimulate and facilitate the active involvement of all
attendees. Although oral presentations will be limited to the invited
speakers, contributions are solicited for poster sessions.
Presentations are welcome in areas including subcellular systems,
cellular systems, multi-cellular systems, and tools and techniques.

Meeting Coordinator:
Terry Contreras
Lawrence Livermore Natl. Lab.
P.O. Box 808, Mailstop L-426
Livermore, CA 94550 USA
(415) 422-7132 FAX (415) 423-4980
Electronic Mail:
Curt Deno
neural@robotics.berkeley.edu, ucbvax!robotics!neural, or
neural%robotics@ucbvax.bitnet

Organizing and Program Committee:

D. Curtis Deno U.C. Berkeley and Smith Kettlewell Eye Res. Inst.
Frank H. Eeckman Lawrence Livermore Natl. Laboratory
Edwin R. Lewis U.C. Berkeley
John P. Miller U.C. Berkeley
Muriel D. Ross NASA Ames Res. Center
Nora G. Smiriga Lawrence Livermore Natl. Laboratory

---------------------------------------------------------------------
WORKSHOP SCHEDULE Preliminary Program (revised 4/14/90)

Tuesday, July 24th, 1990

5:00 - 7:00 PM Registration Center Open
6:00 PM Opening Reception

Wednesday, July 25th, 1990

7:30 AM - 3:00 PM Registration Center Open
8:00 - 10:00 AM NEURAL CODING
speakers: G.Gerstein, M.Wilson, M.Meister
10:00 - 10:30 AM Coffee Break
10:30 - 12:30 PM CELLULAR AND DENDRITIC MODELING
speakers: W.Rall, J.Rinzel, E.Kairiss
12:30 - 2:00 PM Lunch
2:00 - 3:00 PM SUBCELLULAR SYSTEMS
speaker: P.Adams
3:00 - 5:00 PM POSTER SESSIONS

Thursday, July 26th, 1990

7:30 AM - 3:30 PM Registration Center Open
8:00 - 10:00 AM CENTRAL PATTERN GENERATORS
speakers: A.Cohen, T.Williams, F.Nagy
10:00 - 10:30 AM Coffee Break
10:30 - 12:30 PM MOTOR SYSTEMS
speakers: N.Hogan, S.Giszter, J.Houk
12:30 - 2:00 PM Lunch
2:00 - 3:00 PM OSCILLATIONS IN CORTICAL SYSTEMS
speaker: C.Gray
3:00 - 5:00 PM POSTER SESSIONS
5:30 PM Bus ride to the San Francisco Exploratorium
6:30 PM Dinner at the Exploratorium, access to exhibits
speaker: S.Dreyfus

Friday, July 27th, 1990

7:30 AM - 12:00 PM Registration Center Open
8:00 - 10:00 AM SYSTEMS ARCHITECTURE
speakers: D.VanEssen, P.Sterling, H.Orbach
10:00 - 10:30 AM Coffee break
10:30 - 12:30 PM NEURAL IMAGING
speakers: S.Shamma, C.Carr
12:30 - 2:00 PM Lunch
2:00 - 4:00 PM PSYCHOPHYSICS
speakers: C.Wehrhahn, N.Franceschini
4:00 - 6:00 PM POSTER SESSIONS

---------------------------------------------------------------------
REGISTRATION FORM
Mail or fax copy to Meeting Coordinator. Your Name and Affiliation
will appear on your badge, so please print clearly.

Name:

Title:

Organization:

Address:

City, State, Zip:

Country:

Telephone:

e-mail address:

fax number:

Are you submitting a poster?

Poster Title and Section (Neural Coding, etc.):

Fees: Regular $175
Full Time Student* $125
Extra Banquet Tickets $50
Optional Parking Permit, Clark Kerr Campus $10
Make payable to U.C. Regents (check or money order in U.S. $ only)

Total payment enclosed:


Please Return to:
Terry Contreras, Meeting Coordinator
P.O. Box 808, L-426
Lawrence Livermore National Laboratory
Livermore, CA 94550 USA
(415) 422-7132
or fax to (415) 423-4980

* Note: Students who submit a poster will be considered for a partial
reimbursement of fees.

All attendees are encouraged to submit a poster. The focus of this
workshop is on biological neural systems and network models. Topics
include: subcellular systems, cellular systems, multi-cellular systems,
and tools and techniques.

---------------------------------------------------------------------
Registration Fees Include:

admission to meeting workshops and poster sessions
evening reception on Tuesday night
coffee and donuts during the breaks
three group lunches at the meeting facilities
banquet at the San Francisco Exploratorium
round trip travel from the meeting facilities to the Exploratorium

Cancellation Policy:

The registration fees will be refunded upon receipt of a written
request postmarked before July 1, 1990. After this date NO refund will
be made. Registrants who do not attend and who do not cancel in
writing before July 1st, 1990, are liable for the full amount of the
registration fee. You must obtain a cancellation number from our
meeting coordinator to make the cancellation valid.

Registration:

Registration is limited to 300 attendees. Complete the registration
form and mail or fax to the Meeting Coordinator. Send payment by mail.
You are registered upon receipt of payment. To register on site (on a
space-available basis), come to the Kerr Center on Tuesday, July 24th,
1990, from 5-7 PM.

Housing and Hotels:

A list of nearby hotels is available from our meeting coordinator.
Single and double rooms on the Kerr Campus site are available for $34
(single) and $42 (double) per night.

Location:

The Clark Kerr Campus is located near the south-east corner of the U.C.
Berkeley main campus. It is within walking distance (5-10 minutes) of
the center of the UCB campus.

Transportation:

Berkeley can be reached via the Oakland International Airport (OAK,
about 25-35 minutes by car) and the San Francisco International Airport
(SFO, about 40-50 minutes by car). There is a mini-van shuttle from
SFO to the Berkeley Durant Hotel (about $15; the Durant Hotel is within
5 minutes walking distance from the Clark Kerr Campus). There is a
BART subway connection (AIR-BART) from OAK to downtown Berkeley (stop:
BERKELEY). The BART stop is within 20 minutes walking distance from
the Clark Kerr Campus. Several limo and door-to-door van services are
available at both airports. Both airports have extensive car rental
facilities.

Weather:

Bay Area weather in July is expected to be very pleasant, with daytime
temperatures of 70-85 F (20-30 C). No rain is expected. It is often
chilly at night and in the early morning hours. San Francisco tends to
be windier and cooler than Berkeley.

------------------------------

Subject: IEE/OUG AI Colloquium ``Symbols versus Neurons?'' Advance Announcement
From: phw@ukc.ac.uk (P.H.Welch)
Organization: Computing Lab, University of Kent at Canterbury, UK.
Date: 16 May 90 13:19:04 +0000




Symbols versus Neurons?
~~~~~~~~~~~~~~~~~~~~~~~

Joint IEE and OUG Colloquium


(1st October 1990; IEE, Savoy Place, London)



Advance Announcement
~~~~~~~~~~~~~~~~~~~~
This one-day meeting is being organised by the Institution of Electrical
Engineers (Professional Group Committee C4 -- Artificial Intelligence) in
collaboration with the Occam User Group (Artificial Intelligence Special
Interest Group). It will take place at the London headquarters of the IEE
(Savoy Place, London, WC2R 0BL) on 1st October 1990.

This colloquium will in fact be the 2nd International Conference of the OUG
Artificial Intelligence SIG.


Aims of the Colloquium
~~~~~~~~~~~~~~~~~~~~~~
In recent years, transputer-based parallel computers have gained in
significance as a platform for the development of AI applications and
tools. Sub-symbolic or neo-connectionist approaches now occupy a growing
position alongside classical symbolic approaches. This colloquium will
highlight this development with a comparison between symbolic and
connectionist approaches and their implementations. In posing the
question ``Symbols versus Neurons?'', it will provide a forum for
directly tackling the central conflict of the AI debate today.

Conference Programme
~~~~~~~~~~~~~~~~~~~~
KEYNOTE SPEAKERS

Professor Tom Addis (University of Reading, UK)
Knowledge and the Structure of Machines

Pau Bofill (Barcelona Polytechnic, Spain) and
Jose del Millan (EC, Ispra, Italy)
A Systolic Algorithm for Back Propagation:
Mapping onto a Transputer Network

INVITED SPEAKERS

Professor Kimmo Kaski (Oxford University, UK)
Simulating Neural Networks in Distributed Environments

Professor E. von Goldammer (Universitaet Luebeck, West Germany)
Neural Nets -- Applications in Medicine

Professor Andre Bakkers (Twente University, Enschede, The Netherlands)
Applications of Neural Control

Dr Jean Sallantin (CRIM, France)
Artificial Intelligence for Genomic Interpretation

Dr Lyubomir Stoychev (IMS, Sofia, Bulgaria)
Relational and Differential Logic for Knowledge Processing

Dr Terence C. Fogarty (Bristol Polytechnic, UK)
Using the Genetic Algorithm to Adapt Intelligent Systems

Reem Bahgat (Imperial College, London, UK)
Symbolic Constraint-Based Reasoning in PANDORA

Joachim Stender (Brainware GmbH, Berlin, West Germany)
Machine Learning Applications on Transputers

Zoltan Schreter (Universitaet Zuerich, Switzerland)
Connectionism -- A Link between Psychology and Neuroscience?

Steffen Schulze-Kremer (Freie Universitaet, Berlin, West Germany)
Inductive Protein Structure Analysis using Transputers

* * *

Further Information
~~~~~~~~~~~~~~~~~~~
The conference programme has been planned by Joachim Stender, Chairperson of
the OUG AI SIG. Further information on the content of this programme should
be addressed to :-

Joachim Stender/Eva Hillebrand
Brainware GmbH
Gustav-Meyer-Allee 25
D-1000 Berlin 65
West Germany
(Tel: +49 30 463 30 58)
(FAX: +49 30 469 46 49)

or to :-

Professor T Addis
Department of Computer Science
University of Reading
P O Box 220
Whiteknights
Reading
Berkshire, RG6 2AX
U.K.
(Tel: +44 734 875123)
(FAX: +44 734 751994)

Registration Details
~~~~~~~~~~~~~~~~~~~~
This meeting will be advertised by the IEE in the normal manner for its
colloquia series. All bookings will be handled by the IEE on their
registration forms -- these are not yet available. At the moment, it is
anticipated that the registration fee will be the normal one (i.e. L26.50 for
IEE members and L43.50 for non-members, pounds sterling). OUG members may
also qualify for the reduced fee.

In the meantime, delegates wishing to attend should send their names and
addresses to :-

Professor P H Welch
Computing Laboratory
University of Kent
Canterbury
Kent
CT2 7NF
U.K.
(Tel: +44 227 764000)
(FAX: +44 227 762811)
(email: phw@ukc.ac.uk)

------------------------------

End of Neuron Digest [Volume 6 Issue 34]
****************************************
