Neuron Digest   Friday, 17 Aug 1990                Volume 6 : Issue 49 

Today's Topics:
Re: universe and intelligence
Really smart systems
Re: Really smart systems
Re: Really smart systems
tasks for reinforcement learning
the dumb universe
Help for RTRL?
Re: Help for RTRL?
PYGMALION Overview
NN-definition Language
Re: NN-definition Language
Last Call for Papers for AGARD Conference


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Re: universe and intelligence
From: arti6!chen@relay.EU.net (Chung-Chih Chen)
Date: Wed, 15 Aug 90 12:40:22 +0100


Concerning the reply from Douglas Danforth:

>Intelligence is in the mind of the intelligent AND why do you consider
>intelligence significant?

What I really meant in my last message can be explained more clearly as
follows: According to the inflationary model of the universe (see, for
example, the article by A. Guth & P. Steinhardt in the book edited by P.
Davies), the universe could have been created from virtually nothing (so
the universe may be the ultimate free lunch!). If the seemingly very
complex behaviors of the current universe came from nothing, can we then
imagine building a universal model of our brain (or neural network)
which will produce intelligent behaviors from nothing?

Nolfi et al. showed that complex and apparently purposeful behavior can
arise from random variation in networks. In a way, they have shown that
intelligence can come from nothing (or from evolution). I hope this
clarifies my idea.

%E P. Davies
%B The New Physics
%I Cambridge University Press
%D 1989

%A S. Nolfi
%A J. L. Elman
%A D. Parisi
%T Learning and Evolution in Neural Networks
%J CRL Technical Report 9019
%I Center for Research in Language, University of California, San Diego
%D July 1990


------------------------------

Subject: Really smart systems
From: kingsley@hpwrce.HP.COM (Kingsley Morse)
Organization: Ye Olde Salt Mines
Date: 14 Aug 90 21:05:25 +0000

If we aspire to making truly intelligent machines, our algorithms must
scale up well to large training sets. In other words, the computational
complexity of our algorithms must accommodate many training patterns and
many dimensions. You may have heard of the "curse of dimensionality". If
the computational complexity of our algorithms grows slowly, then we can
train with more patterns and dimensions to get a smarter system. On the
other hand, if the computation required grows explosively as the number
of training patterns or dimensions is increased, then even astoundingly
fast hardware won't help.

So, our challenge is to find algorithms which scale up well to large
problems, so we can make really smart systems. I'll post some
computational-complexity figures for some common algorithms below;
please add to (or correct) them. The terminology that I propose is that
linear computational complexity is better than polynomial, which is
better than exponential. Furthermore, an algorithm's scalability must be
measured separately for learning and recall, with respect to both
patterns and dimensions.


Algorithm         Learning                   Recall
                  patterns     dimensions    patterns     dimensions
---------------------------------------------------------------------
Backprop          polynomial   ?             independent  linear
Nearest neighbor  linear       linear        linear       linear
CART              n log n      exponential   independent  ?

This leads me to believe that nearest-neighbor algorithms are better
suited to learning from large training sets, because they can be trained
faster.
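
To make the scaling claims concrete, here is a minimal 1-nearest-neighbor
sketch in C (an illustration only, assuming a squared-Euclidean distance;
it is not code from any of the systems above). "Learning" is just storing
each pattern, hence linear, and recall is a single O(Np*Ni) scan:

    #include <float.h>
    #include <stddef.h>

    /* Squared Euclidean distance between two Ni-dimensional vectors. */
    static double dist2(const double *a, const double *b, size_t ni)
    {
        double s = 0.0;
        for (size_t i = 0; i < ni; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return s;
    }

    /* Recall: return the index of the stored pattern closest to x.
       train holds Np rows of Ni features, copied verbatim at
       "learning" time, so training cost is linear in Np and Ni. */
    size_t nn_recall(const double *train, size_t np, size_t ni,
                     const double *x)
    {
        size_t best = 0;
        double bestd = DBL_MAX;
        for (size_t p = 0; p < np; p++) {   /* O(Np*Ni) recall scan */
            double d = dist2(train + p * ni, x, ni);
            if (d < bestd) { bestd = d; best = p; }
        }
        return best;
    }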

Any contributions to this list are welcome.

------------------------------

Subject: Re: Really smart systems
From: mcsun!ukc!reading!minster!russell@uunet.uu.net
Organization: Department of Computer Science, University of York, England
Date: 17 Aug 90 11:21:56 +0000


Well, if you want tuppenny-worths thrown in, here's mine.

Backprop has NO guaranteed convergence, therefore quoting a
computational complexity figure is misleading. It may be possible to
give an "average case" complexity figure, but this is so dependent on
the initial weight settings as to be of little use. Tesauro and Janssens
report empirical results relating learning time to the predicate order
(q) of the patterns. The net has q inputs, 2q nodes in the first layer,
fully connected, and 1 output node. The task set is the parity function
on the q bits. Learning times in b-p scale as approx. 4^q. The task set
is of size 2^q, so the training time per pattern is about 2^q. The
conclusion is that, empirically, learning time is of exponential order.
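
Spelled out, the arithmetic behind that conclusion is:

    \[
    T_{\mathrm{learn}}(q) \approx 4^{q}, \qquad
    |\text{task set}| = 2^{q}
    \quad\Longrightarrow\quad
    \frac{T_{\mathrm{learn}}(q)}{2^{q}} \approx \frac{4^{q}}{2^{q}}
    = 2^{q},
    \]

i.e. even the learning time per training pattern grows exponentially in q.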

The perceptron scales as exponential in input size (Hampson and Volper
1986).

(See Neural Network Design and the Complexity of Learning by Judd for
more..... I ain't read it all yet, but it's interesting so far.....)

I confess to not understanding quite what "CART" means in the above table -
I can make an educated guess, however.....

To add another algorithm to the list, the ADAM system, based on the
Willshaw net (i.e. a distributed associative memory) (Austin 1987) has
complexity as follows:

Algorithm   Learning                        Recall
            patterns   dimensions           patterns   dimensions
--------------------------------------------------------------------
ADAM        linear     linear (quadratic)   indep.     linear (n log n)

Brackets refer to an abnormal, but permissible, parameterisation
(Beale 1990).

Note that recall may take longer than teaching - but also note that the
realm of use of ADAM means that the multiplicative constants in front of
the order terms are extremely small.

Russell.


------------------------------

Subject: Re: Really smart systems
From: usenet@tut.cis.ohio-state.edu (usenet news poster)
Organization: National Library of Medicine, Bethesda, Md.
Date: 17 Aug 90 21:19:11 +0000

From a separate communication with km:

CART means "Classification and Regression Tree". It is similar to ID3,
and here's how it works: the training vectors are used to "grow" a
decision tree, and the decision tree can then be used to categorize new
inputs.
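
As a rough illustration of the recall side (a hypothetical sketch, not
code from CART itself), a binary decision tree can be walked from root to
leaf with one feature test per node, consistent with the roughly
logarithmic recall time in the table below:

    /* A binary decision-tree node: internal nodes test one feature
       against a threshold; leaves carry a class label. */
    struct tree_node {
        int    feature;      /* index of the feature tested here    */
        double threshold;    /* go left if x[feature] <= threshold  */
        int    label;        /* class label, meaningful at leaves   */
        struct tree_node *left, *right;  /* both NULL at a leaf     */
    };

    /* Recall: one root-to-leaf walk, so roughly log(Np) tests for a
       balanced tree grown from Np training vectors. */
    int tree_classify(const struct tree_node *t, const double *x)
    {
        while (t->left != NULL)
            t = (x[t->feature] <= t->threshold) ? t->left : t->right;
        return t->label;
    }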

OK, let me add a couple more cents to the pot. Classifications need to
consider both the computational complexity and the storage requirements.
Algorithms like nearest neighbor, CART(?) and ADAM(?) require that the
complete training set be stored for recall, while a perceptron needs
storage of order #inputs*#outputs, and a single-hidden-layer fully
connected neural net needs (#inputs+#outputs)*#hidden_nodes.

A second consideration is whether the algorithm will generalize, i.e.,
whether it will make use of information from more than one input pattern
to formulate an output.

So at the risk of grossly misquoting and being flamed horribly let me
reorder the classification in terms of Np=#patterns, Ni=#inputs,
No=#outputs, and Nh=#hidden nodes:

Algorithm                    Learning time     Recall time  Storage      Generalizes
-------------------------------------------------------------------------------------
Perceptron                   No*(x^Ni)         No*Ni        No*Ni        limited

Neural net (1 hidden layer)                    Nh*(Ni+No)   Nh*(Ni+No)   yes
  Backprop                   x^(Nh*(Ni+No))                              ?
  Conjugate gradient         (Nh*(Ni+No))^3                              locally
  Monte Carlo                x^(Nh*(Ni+No))                              ?

Nearest neighbor             (Ni+No)*Np        Ni*Np        Np*(Ni+No)   no

Class. & Reg. Tree (CART)    Ni*(Np log Np)    Ni*log Np ?  Np*(Ni+No)   no
                              + (Ni+No)*Np                   + Ni*log Np

Adaptive Memory (ADAM)       Np?               ?            Np*(Ni+No)?  no?

David States


------------------------------

Subject: tasks for reinforcement learning
From: finton@ai.cs.wisc.edu (David J. Finton)
Organization: U of Wisconsin CS Dept
Date: 15 Aug 90 17:28:08 +0000

I'm looking for good demonstration tasks for my reinforcement-learning
algorithm:

(1) Are there any good examples of real-world tasks which require
reinforcement learning -- where supervised techniques such as back-prop
would be unsuitable?

(2) Are there any studies which compare performance of reinforcement
learning systems with standard techniques (e.g., back-prop, ID3) on such
tasks?

(3) Are there available standard data sets for such tasks?

(4) Are there studies comparing reinforcement learning with back-prop
on standard back-prop tasks?

David Finton

------------------------------

Subject: the dumb universe
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Date: Wed, 15 Aug 90 17:38:37 -0700

There is no reason to assume that intelligence on the part of the
universe is a prerequisite to its giving rise to intelligent systems.
That is the whole point of THE BLIND WATCHMAKER. It is also the point of
THE SOCIETY OF MIND. Minsky's argument is that you can build an
intelligent system from lots of little components, each of which is far
too simple to be, in itself, intelligent. Neurons are an example of such
little components.

------------------------------

Subject: Help for RTRL?
From: coms2146@waikato.ac.nz (Alistair Veitch, University of Waikato, New Zealand)
Organization: University of Waikato, Hamilton, New Zealand
Date: 16 Aug 90 03:54:44 +0000

Has anybody out there worked with Williams and Zipser's "Real-time
recurrent learning algorithm"? [Connection Science, Vol. 1, No. 1].

We are currently trying to implement this algorithm, but have run into
some problems. We've got it to run successfully on the various XOR
problems described, the "ab" problem (recognise the first "b" after an
"a") and the oscillation problems. What we can't seem to achieve is
success for the Turing machine problem. As this is perhaps the major
result of the paper, it seems important to duplicate it to reassure
ourselves that everything is correct. Has anyone else had success/failure
with this problem? If success, would it be possible to post your source?
(We think we've got it right, but...)

Alistair Veitch
Phone: +64 71 562889 ext. 8768, +64 71 562388 (home)
Internet: coms2146@waikato.ac.nz
SNAIL: Computer Science Dept, University of Waikato, Hamilton, New Zealand

------------------------------

Subject: Re: Help for RTRL?
From: ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date: 16 Aug 90 19:38:09 +0000

In article <1243.26cac1c4@waikato.ac.nz> coms2146@waikato.ac.nz
(Alistair Veitch, University of Waikato, New Zealand) writes:
>Has anybody out there worked with Williams and Zipser's "Real-time
>recurrent learning algorithm"? [Connection Science, Vol. 1, No. 1].

I haven't actually implemented this algorithm, but I have heard that it
is important to use the "Teacher Forcing" method they discuss to learn
difficult problems.
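
For readers who have not met the term, the idea can be sketched in a few
lines of C (hypothetical names, not Williams & Zipser's published code):
after each forward step, the activations of the output units are
overwritten with the known targets before they are fed back as part of
the recurrent state.

    #include <stddef.h>

    /* Teacher forcing: replace output-unit activations y[k] with the
       target values d[k] before the next recurrent step, so errors do
       not compound through the fed-back state during training. */
    void step_teacher_forced(double *y,         /* activations, length n */
                             const double *d,   /* targets for outputs   */
                             const int *is_out, /* 1 if unit k is output */
                             size_t n)
    {
        for (size_t k = 0; k < n; k++)
            if (is_out[k])
                y[k] = d[k];
    }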

You might also want to look at J. Schmidhuber, "Making the World
Differentiable: On using supervised learning fully-recurrent networks
for dynamic reinforcement learning and planning in non-stationary
environments", FKI Report 125-90, Technische Universität München, 1990.
A pole-balancer is trained by reinforcement learning (i.e. pain is
applied when the pole is dropped).

And to explain why gradient-descent methods will probably not give you
reasonable temporal learning, see J. Schmidhuber, "Towards compositional
learning with dynamic neural networks", FKI Report 129-90, TUM, April
1990.

He explains that gradient-descent-only methods must take into account
what was learned during all past time steps when dealing with a new
problem. For "toy" temporal learning problems, this is not a big
impediment. For "serious" temporal learning problems, dynamic neural
systems must develop methods of breaking goals down into subgoals, most
of which have already been learned, some of which need to be developed by
gradient-descent. In this way, only small problems are trained by
gradient-descent, and they are used by the system combinatorially to
allow the network-of-networks to solve real problems by
"divide-and-conquer" methods. The research is very fresh into this area,
and I think in about a year there will be a move away from naive
implementations of gradient-descent learning in both stationary and
temporal learning and a move towards connectionist compositional learning
(Cascade-Correlation is a simple example of this).

-Thomas Edwards

------------------------------

Subject: PYGMALION Overview
From: M.Azema@cs.ucl.ac.uk
Date: Fri, 17 Aug 90 10:01:52 +0100

In response to requests about the PYGMALION environment, here (with some
delay) is an overview:

PYGMALION Overview:
-------------------

The ESPRIT II PYGMALION project is intended to provide a focus for neural
computing research within the European Community. PYGMALION aims to
promote the application of neural networks by European industry, and to
develop European "standard" computational tools for programming and
simulation of neural networks.

The design philosophy of the PYGMALION neural programming environment is
twofold. Firstly, to provide an "open" programming environment - a
rudimentary "platform" - that can be easily extended and interfaced to
other tools. For this reason the core of the environment is X-windows, C
and C++, running on a colour workstation. Secondly, to provide
"portable" neural network applications, so that trained and partially
trained networks can be easily moved from machine to machine. For this
reason the (partially) trained neural network applications are specified
in a subset of C - essentially a C data structure.
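
As a purely illustrative sketch (this is not the actual nC format, which
is not reproduced here), a trained network captured as a C data structure
might look something like:

    /* Hypothetical illustration: topology and trained weights held in
       plain C arrays, so a (partially) trained network can be moved
       between machines as ordinary C source. */
    typedef struct {
        int     n_units;     /* number of units in this layer          */
        double *activation;  /* current unit outputs                   */
        double *weight;      /* incoming weights: prev_units * n_units */
    } layer_t;

    typedef struct {
        int      n_layers;
        layer_t *layer;      /* layer[0] is the input layer            */
    } network_t;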

The environment comprises 5 major parts:

Graphic Monitor, the graphical software environment for controlling the
execution and monitoring of a neural network application simulation. This
includes a simulation command language for setting up a simulation,
monitoring its execution, interactively changing values, and saving a
trained network.

Algorithm Library, the parameterised library of common neural networks,
written in the high level language and providing the user with a number
of validated modules for constructing applications.

High Level Language N, the object-oriented programming language for
defining, in conjunction with the algorithm library, a neural network
algorithm and application, by describing the network topology and its
dynamics.

Intermediate Level Language nC-code, the low level machine independent
network specification language for representing the partially trained or
trained neural network applications, a format analogous to P-code for
PASCAL systems.

Compilers to the target UNIX-based workstations and parallel
Transputer-based machines.

Availability of the software:
-----------------------------
A preliminary version of the PYGMALION software is now available (free of
charge). If you would like more information please contact :

Mike Hewetson
Department of Computer Science
University College London
Gower Street
London
WC1E 6BT
Voice: +44 (0) 71 387 7050 ext 3708
Fax: +44 (0) 71 387 1397
Email: M.Hewetson@cs.ucl.ac.uk



------------------------------

Subject: NN-definition Language
From: ethz!neptune!brain!thalmann@uunet.uu.net (Laura Thalmann)
Organization: Department of Informatics, University of Zurich-Irchel
Date: 17 Aug 90 11:23:54 +0000

Hi neural-experts,

This is a presentation of (yet another) neural network implementation, a
NN-definition language:

Condela stands for CONnection DEfinition LAnguage; it is a high-level
programming language specifically designed for the development and
modeling of neural network applications. It is a procedural,
general-purpose language that allows parallelism via the concept of
selections, i.e. groups of units or connections to which actions can be
applied. Units and connections can be created dynamically at any point
in the program flow. The parallelism expressible in Condela-3 is
independent of the underlying hardware. Condela-3 is easy to teach, as
it has few language constructs, yet it allows the expression of
arbitrary network topologies and learning paradigms thanks to its
powerful statements and its two levels of abstraction. It is easily
portable to other operating systems, and its open design allows simple
interfacing to existing applications.

The following sample program demonstrates the classical XOR learning
problem using error back propagation.

TOPOLOGY
   xor = LAYER input  OF FIELD[2]; END;
         LAYER hidden OF FIELD[2]; END;
         LAYER output OF FIELD[1]; END;
VAR p : NETWORK OF xor;

PROCEDURE main();
VAR output_vec, input_vec : VECTOR;
    input_layer, hidden_layer, output_layer : USEL;

BEGIN
   CREATE p;
   input_layer  := { p.input[0..1] };
   hidden_layer := { p.hidden[0..1] };
   output_layer := { p.output[0] };
   CONNECT input_layer  TO hidden_layer INIT random();
   CONNECT hidden_layer TO output_layer INIT random();
   LOOP 1000000 TIMES
      get_input(input_vec, output_vec);
      input_layer : out := input_vec;
      APPLY feed_forward() TO hidden_layer;
      APPLY feed_forward() TO output_layer;
      APPLY back_propagate_out(output_vec) TO output_layer;
      APPLY back_propagate_hid() TO hidden_layer;
   END;
END;

Condela has a two-layered implementation: an "abstract" definition of a
neural network's topology and behavior, and a "concrete" implementation
in C. The compiler (implemented with lex and yacc) translates Condela
source to C and therefore allows simple interfacing to other existing
neural network simulation systems. I would appreciate any comments.

-Nick.
,----------------------------------------------------,
| Nikolaus Almassy almassy@ifi.unizh.ch /
| University of Zurich-Irchel Tel:+41-1-257 43 15 /
| Department of Informatik Fax:+41-1-257 43 43 /
| Winterthurerstr. 190 CH-8057 SWITZERLAND /
`-----------------------------------------------'


------------------------------

Subject: Re: NN-definition Language
From: van-bc!ubc-cs!kiwi!snider@ucbvax.Berkeley.EDU (Duane Snider)
Organization: Microtel Pacific Research Ltd., Burnaby, B.C., Canada
Date: 17 Aug 90 17:45:19 +0000


> [[CONDELA]] It is a procedural and
^^^^^^^^^^
>general purpose language, that allows parallelism via the concept of
^^^^^^^^^^^^^^^^^^^^^^^^

It appears CONDELA isn't doing anything more than a programming language
like C++ could handle.

Are you sure that another language is necessary in this field, yet?

Duane Snider
snider@mpr.ca


------------------------------

Subject: Last Call for Papers for AGARD Conference
From: nelsonde%avlab.dnet@wrdc.af.mil
Date: Mon, 13 Aug 90 10:10:04 -0400


We are extending the deadline for the abstracts for the papers to be
presented at the AGARD conference until 21 September 1990.

In case you have lost the Call for Papers, it is again attached to this
message.

Your consideration is greatly appreciated.

--Dale



AGARD
ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT
7 RUE ANCELLE - 92200 NEUILLY-SUR-SEINE - FRANCE
TELEPHONE: (1)47 38 5765 TELEX: 610176 AGARD
TELEFAX: (1)47 38 57 99
AVP/46 2 APRIL 1990




CALL FOR PAPERS

for the

SPRING, 1991 AVIONICS PANEL SYMPOSIUM

ON

MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

to be held in

LISBON, Portugal

13-16 May 1991


This meeting will be UNCLASSIFIED


Abstracts must be received not later than 31 August 1990.


Note: US & UK Authors must comply with National Clearance Procedures
requirements for Abstracts and Papers.

THEME

MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

A large amount of research is being conducted to develop and apply
Machine Intelligence (MI) technology to aerospace applications.
Machine Intelligence research covers the technical areas under the
headings of Artificial Intelligence, Expert Systems, Knowledge
Representation, Neural Networks and Machine Learning. This list is
not all inclusive. It has been suggested that this research will
dramatically alter the design of aerospace electronics systems
because MI technology enables automatic or semi-automatic operation
and control. Some of the application areas where MI is being
considered include sensor cueing, data and information fusion,
command/control/communications/intelligence, navigation and guidance,
pilot aiding, spacecraft and launch operations, and logistics support
for aerospace electronics. For many routine jobs, it appears that MI
systems would provide screened and processed data as well as
recommended courses of action to human operators. MI technology will
enable electronics systems or subsystems which adapt to or correct for
errors, and many of the paradigms have parallel implementations or use
intelligent algorithms to increase the speed of response to near real
time.

With all of the interest in MI research and the desire to expedite
transition of the technology, it is appropriate to organize a
symposium to present the results of efforts applying MI technology to
aerospace electronics applications. The symposium will focus on
applications research and development to determine the types of MI
paradigms which are best suited to the wide variety of aerospace
electronics applications. The symposium will be organized into
separate sessions for the various aerospace electronics application
areas. It is tentatively proposed that the sessions be organized as
follows:

SESSION 1 - Offensive System Electronics (fire control systems, sensor
cueing and control, signal/data/information fusion, machine
vision, etc.)

SESSION 2 - Defensive System electronics (electronic counter
measures, radar warning receivers, countermeasure
resource management, situation awareness, fusion, etc.)

SESSION 3 - Command/Control/Communications/Intelligence - C3I (sensor
control, signal/data/information fusion, etc.)

SESSION 4 - Navigation System Electronics (data filtering, sensor
cueing and control, etc.)

SESSION 5 - Space Operations (launch and orbital)

SESSION 6 - Logistic Systems to Support Aerospace Electronics (on and
off-board systems, embedded training, diagnostics and
prognostics, etc.)

GENERAL INFORMATION

This Meeting, supported by the Avionics Panel, will be held in Lisbon,
Portugal on 13-16 May 1991.

It is expected that 30 to 40 papers will be presented. Each author
will normally have 20 minutes for presentation and 10 minutes for
questions and discussions. Equipment will be available for
projection of viewgraph transparencies, 35 mm slides, and 16 mm
films.

The audience will include Members of the Avionics Panel and 150 to
200 invited experts from the NATO nations. Attendance at AGARD
Meetings is by invitation only from an AGARD National Delegate or
Panel Member.

Final manuscripts should be limited to no more than 16 pages
including figures. Presentations at the meeting should be an extract
of the final manuscript and not a reading of it. Complete
instructions will be sent to authors of papers selected by the
Technical Programme Committee.

Authors submitting abstracts should ensure that financial support for
attendance at the meeting will be available.

CLASSIFICATION

This meeting will be UNCLASSIFIED

LANGUAGES

Papers may be written and presented either in English or French.
Simultaneous interpretation will be provided between these two
languages at all sessions. A copy of your prepared remarks (Oral
Presentation) and visual aids should be provided to the AGARD staff
at least one month prior to the meeting date. This procedure will
ensure correct interpretation of your spoken words.

ABSTRACTS

Abstracts of papers offered for this Symposium are now invited and
should conform with the following instructions:

LENGTH:         200 to 500 words

CONTENT:        Scope of the Contribution & Relevance to the Meeting -
                your abstract should fully represent your contribution

SUBMITTAL:      To the Technical Programme Committee by all authors (US
                authors must comply with Attachment 1)

IDENTIFICATION: Author Information Form (Attachment 2) must be provided
                with your abstract

CLASSIFICATION: Abstracts must be unclassified

Your abstract and Attachment 2 should be mailed in time to reach all
members of the Technical Programme Committee and the Executive not
later than 31 AUGUST 1990 (note the exception for US Authors).
This date is important and must be met to ensure that your paper is
considered. Abstracts should be submitted in the format shown on the
reverse of this page.


TITLE OF PAPER

Name of Author
Organization or Company Affiliation
Address

Name of Co-Author
Organization or Company Affiliation
Address

The text of your ABSTRACT should start on this line.


PUBLICATIONS

The proceedings of this meeting will be published in a single volume
Conference Proceedings. The Conference Proceedings will include the
papers which are presented at the meeting, the questions/discussion
following each presentation, and a Technical Evaluation Report of the
meeting. It should be noted that AGARD reserves the right to print
in the Conference Proceedings any paper or material presented at the
Meeting. The Conference Proceedings will be sent to the printer on
or about July 1990. NOTE: Authors that fail to provide the required
Camera-Ready manuscript by this date may not be published.

QUESTIONS concerning the technical programme should be addressed to the
Technical Programme Committee. Administrative questions should be sent
directly to the Avionics Panel Executive.

GENERAL SCHEDULE

(Note: Exception for US Authors)
EVENT DEADLINE

SUBMIT AUTHOR INFORMATION FORM 31 AUG 90

SUBMIT ABSTRACT 31 AUG 90

PROGRAMME COMMITTEE SELECTION OF PAPERS 1 OCT 90

NOTIFICATION OF AUTHORS OCT 90

RETURN AUTHOR REPLY FORM TO AGARD IMMEDIATELY

START PUBLICATION/PRESENTATION CLEARANCE PROCEDURE UPON NOTIFICATION

AGARD INSTRUCTIONS WILL BE SENT TO CONTRIBUTORS OCT 90

MEETING ANNOUNCEMENT WILL BE PUBLISHED IN JAN 91

SUBMIT CAMERA-READY MANUSCRIPT AND PUBLICATION/
PRESENTATION CLEARANCE CERTIFICATE to arrive at AGARD by 15 MAR 91

SEND ORAL PRESENTATION AND COPIES OF VISUAL AIDS
TO THE AVIONICS PANEL EXECUTIVE to arrive at AGARD by 19 APR 91

ALL PAPERS TO BE PRESENTED 13-16 MAY 91

TECHNICAL PROGRAMME COMMITTEE

CHAIRMAN
Dr Charles H. KRUEGER Jr
Director, Systems Avionics Division
Wright Research and Development Center (AFSC), ATTN: AAA
Wright Patterson Air Force Base
Dayton, OH 45433, USA

Telephone: (513) 255-5218
Telefax: (513) 476-4020

Mr John J. BART
Technical Director, Directorate of Reliability & Compatibility
Rome Air Development Center (AFSC)
GRIFFISS AFB, NY 13441, USA

Prof Dr A. Nejat INCE
Burumcuk sokak 7/10, P.K. 8
06752 MALTEPE, ANKARA, Turkey

Mr J.M. BRICE
Directeur Technique, THOMSON TMS
B.P. 123, 38521 SAINT EGREVE CEDEX, France

Mr Edward M. LASSITER
Vice President, Space Flight Ops Program Group
P.O. Box 92957, LOS ANGELES, CA 90009-2957, USA

Mr L.L. DOPPING-HEPENSTAL
Head of Systems Development, BRITISH AEROSPACE PLC
Military Aircraft Limited, WARTON AERODROME
PRESTON, LANCS PR4 1AX, United Kingdom

Eng. Jose M.B.G. MASCARENHAS
C-924, C/O CINCIBERLANT HQ
2780 OEIRAS, Portugal

Mr J. DOREY
Directeur des Etudes & Syntheses, O.N.E.R.A.
29 Av. de la Division Leclerc
92320 CHATILLON CEDEX, France

Mr Dale NELSON
Wright R & D Center, ATTN: AAAT
Wright Patterson AFB, Dayton, OH 45433, USA

Mr David V. GAGGIN
Director, U.S. Army Avionics R&D Activity
ATTN: SAVAA-D
FT MONMOUTH, NJ 07703-5401, USA

Ir. H.A.T. TIMMERS
Head, Electronics Department
National Aerospace Laboratory
P.O. Box 90502, 1006 BM Amsterdam, Netherlands

AVIONICS PANEL EXECUTIVE
LTC James E. CLAY, US Army
Telephone: (33) (1) 47-38-57-65
Telex: 610176
Telefax: (33) (1) 47-38-57-99

MAILING ADDRESSES:

From Europe and Canada:
AGARD
ATTN: AVIONICS PANEL
7, rue Ancelle
92200 Neuilly-sur-Seine
France

From United States:
AGARD
ATTN: AVIONICS PANEL
APO NY 09777

ATTACHMENT 1

FOR US AUTHORS ONLY

1. Authors of US papers involving work performed or sponsored by a
US Government Agency must receive clearance from their sponsoring
agency. These authors should allow at least six weeks for
clearance from their sponsoring agency. Abstracts, notices of
clearance by sponsoring agencies, and Attachment 2 should be sent
to Mr GAGGIN to arrive not later than 15 AUGUST 1990.

2. All other US authors should forward abstracts and Attachment 2 to
Mr GAGGIN to arrive before 31 JULY 1990. These contributors
should include the following statements in the cover letter:

A. The work described was not performed under sponsorship of a US
Government Agency.

B. The abstract is technically correct.

C. The abstract is unclassified.

D. The abstract does not violate any proprietary rights.

3. US authors should send their abstracts to Mr GAGGIN and Dr
KRUEGER only. Abstracts should NOT be sent to non-US members of
the Technical Programme Committee or the Avionics Panel
Executive.

ABSTRACTS OF PAPERS FROM US AUTHORS CAN ONLY BE SENT TO:

Mr David V. GAGGIN
Director, Avionics Research & Dev Activity
ATTN: SAVAA-D
Ft Monmouth, NJ 07703-5401
Telephone: (201) 544-4851 or AUTOVON: 995-4851

Dr Charles H. KRUEGER Jr
Director, Avionics Systems Div
Wright Research & Dev Center
ATTN: WRDC/AAA
Wright Patterson AFB, Dayton, OH 45433
Telephone: (513) 255-5218

4. US authors should send the Author Information Form (Attachment 2)
to the Avionics Panel Executive, Mr GAGGIN, Dr KRUEGER, and each
Technical Programme Committee Member, to meet the above
deadlines.

5. Authors selected from the United States are reminded that their
full papers must be cleared by an authorized national clearance
office before they can be forwarded to AGARD. Clearance
procedures should be started at least 12 weeks before the paper
is to be mailed to AGARD. Mr GAGGIN will provide additional
information at the appropriate time.

AUTHOR INFORMATION FORM
FOR
AUTHORS SUBMITTING AN ABSTRACT FOR THE AVIONICS PANEL SYMPOSIUM
on
MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

INSTRUCTIONS

1. Authors should complete this form and send a copy to the Avionics
Panel Executive and all Technical Program Committee members by 31
AUGUST 1990.

2. Attach a copy of your abstract to these forms before they are
mailed. US Authors must comply with ATTACHMENT 1 requirements.

a. Probable Title Paper: __________________________________________
_____________________________________________________________________

b. Paper most appropriate for Session # ____________________________

c. Full Name of Author to be listed first on the Programme,
including Courtesy Title, First Name and/or Initials, Last
Name & Nationality.

d. Name of Organization or Activity: _______________________________
_____________________________________________________________________

e. Address for Return Correspondence: Telephone Number:

__________________________________ __________________

__________________________________ Telefax Number:

__________________________________ __________________

__________________________________ Telex Number:

__________________________________ __________________

f. Names of Co-Authors including Courtesy Titles, First Name and/or
Initials, Last Name, their Organization, and their nationality.

_________________________________________________________________

_________________________________________________________________

_________________________________________________________________

_________________________________________________________________

_________________________________________________________________

__________ ____________________
Date Signature
DUE NOT LATER THAN 21 SEPTEMBER 1990



------------------------------

End of Neuron Digest [Volume 6 Issue 49]
****************************************
