Neuron Digest   Tuesday, 28 Sep 1993                Volume 12 : Issue 4 

Today's Topics:
Adaptive Simulated Annealing (ASA) version 1.43
Journal of Computational Neuroscience
Proceedings of the 1993 NNSP Workshop
Chaos and NN's
Free simulation software
CMU Learning Benchmark Database Updated
postdoctoral fellowship opportunity for women and minorities
Job vacancy in evolutionary algorithms
'Fractal Extrapolation', is there such a thing?


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: Adaptive Simulated Annealing (ASA) version 1.43
From: Lester Ingber <ingber@alumni.cco.caltech.edu>
Date: Fri, 17 Sep 93 09:22:05 -0800

========================================================================
Adaptive Simulated Annealing (ASA) version 1.43

To get on or off the ASA email list, just send an email to
asa-request@alumni.caltech.edu with your request.
________________________________________________________________________
Significant CHANGES since 1.34

Remarks were added in the NOTES for HP support, and for Turbo C, Turbo
C++, and MS Quick C on PCs.

Another option was added, the ASA_PRINT_MORE Printing Option, to give
more intermediate printout, i.e., the values of the best parameters and
cost function at each new acceptance state.
________________________________________________________________________
Wall Street Journal

I have been told that the WSJ will mention the world-wide use of the
ASA code in an article to appear soon. I gave examples of some
projects using ASA, but I insisted that the relevant people be
contacted before being cited; see the related comment in General
Information below. Of course, the press has the last word on what
they will publish/interpret.
________________________________________________________________________
FORTRAN?

I regularly receive requests to be able to run ASA from FORTRAN. I
cannot maintain both a C and a FORTRAN version of the code, but there
does seem to be a genuine need to interface ASA with FORTRAN. The
NOTES offer a couple of suggestions: (1) Use f2c on FORTRAN programs; I have done
this and it works very well. (2) Try CFORTRAN to interface the C and
FORTRAN codes: (a) call FORTRAN from ASA, e.g., from cost_function(),
call a FORTRAN function that performs the actual calculation of the
cost function; (b) call ASA from FORTRAN, e.g., using the ADAPTING
section in the NOTES as a guide, call asa_main() from a FORTRAN
function. Can someone prepare templates for (a) and/or (b)? This
probably isn't easy to prepare for public release; about half a dozen
people started, but didn't complete such a project.
________________________________________________________________________
sa_pvt93.ps.Z

The new reference for this preprint in ftp.caltech.edu:pub/ingber is
%A L. Ingber
%T Simulated annealing: Practice versus theory
%J Mathl. Comput. Modelling
%V
%N
%D 1993
%P (to be published)
As announced previously, this is a much expanded version of the
original draft, e.g., including new ideas and calculations regarding
"quenching." In the acknowledgements, I give a sincere thanks to the
many users who read parts of previous drafts and who sent me their own
(p)reprints on simulated annealing. I'd be interested in hearing about
any systems that find the QUENCHing options as useful (lucky?) as the
ASA test problem did in this paper.
________________________________________________________________________
General Information

The latest Adaptive Simulated Annealing (ASA) code and some related
(p)reprints in compressed PostScript format can be retrieved via
anonymous ftp from ftp.caltech.edu [131.215.48.151] in the pub/ingber
directory.

Interactively: ftp ftp.caltech.edu, [Name:] anonymous, [Password:]
your_email_address, cd pub/ingber, binary, ls or dir, get
file_of_interest, quit. The latest version of ASA is asa-x.y.Z (x and
y are version numbers), linked to asa.Z. For the convenience of users
who do not have any uncompress utility, there is a file asa which is an
uncompressed copy of asa-x.y.Z/asa.Z; if you do not have sh or unshar,
you can still delete the first-column X's and separate the files at the
END_OF_FILE locations. There are patches asa-diff-x1.y1-x2.y2.Z up to
the present version; these may be concatenated as required before
applying. The INDEX file contains an index of the other files.

If you do not have ftp access, get information on the FTPmail service
by: mail ftpmail@decwrl.dec.com, and send only the word "help" in the
body of the message.

If any of the above are not possible, and if your mailer can handle
large files (please test this first), the code or papers you require
can be sent as uuencoded compressed files via electronic mail. If you
have gzip, resulting in smaller files, please state this.

Sorry, I cannot assume the task of mailing out hardcopies of code or
papers.

People willing to be contacted by others who might be interested in
their ASA projects could keep me informed on (1) the title and/or short
description of their project, and (2) whether I have permission to
release their names as well as the description of their projects.

Lester
========================================================================

|| Prof. Lester Ingber                                      ||
|| Lester Ingber Research                                   ||
|| P.O. Box 857        EMail: ingber@alumni.caltech.edu     ||
|| McLean, VA 22101    Archive: ftp.caltech.edu:/pub/ingber ||


------------------------------

Subject: Journal of Computational Neuroscience
From: Jim Bower <jbower@smaug.bbb.caltech.edu>
Date: Fri, 17 Sep 93 09:55:00 -0800


*******************************************************************

JOURNAL OF COMPUTATIONAL NEUROSCIENCE
*******************************************************************

From neurons to behavior: A Journal at the interface between
experimental and theoretical neuroscience.


MANAGING EDITORS:

James M. Bower, California Institute of Technology
Eve Marder, Brandeis University
John Miller, University of California, Berkeley
John Rinzel, National Institutes of Health
Idan Segev, Hebrew University
Charles Wilson, University of Tennessee, Memphis



ACTION EDITORS:

L. F. Abbott, Brandeis University
Richard Andersen, Massachusetts Inst. of Technology
Alexander Borst, Max-Planck Inst., Tubingen
Robert E. Burke, NINDS, NIH
Catherine Carr, Univ. of Maryland, College Park
Rodney Douglas, Oxford University
G. Bard Ermentrout, University of Pittsburgh
Apostolos Georgopoulos, VA Medical Center, MN
Charles Gray, University of California, Davis
Christof Koch, California Institute of Technology
Gilles Laurent, California Institute of Technology
David McCormick, Yale University
Ken Miller, University of California, San Francisco
Steve Redman, Australian National University
Barry Richmond, NIMH, NIH
Terry Sejnowski, Salk Institute
Shihab Shamma, Univ. of Maryland, College Park
Karen Sigvardt, University of California, Davis
David Tank, Bell Labs
Roger Traub, IBM TJ Watson Research Center
Thelma Williams, University of London


JOURNAL DESCRIPTION:

The JOURNAL OF COMPUTATIONAL NEUROSCIENCE is intended to provide a
forum for papers that fit the interface between computational and
experimental work in the neurosciences. The JOURNAL OF
COMPUTATIONAL NEUROSCIENCE will publish full length original papers
describing theoretical and experimental work relevant to
computations in the brain and nervous system. Papers that combine
theoretical and experimental work are especially encouraged.
Primarily theoretical papers should deal with issues of obvious
relevance to biological nervous systems. Experimental papers
should have implications for the computational function of the
nervous system, and may report results using any of a variety of
approaches including anatomy, electrophysiology, biophysics,
imaging, and molecular biology. Papers that report novel
technologies of interest to researchers in computational
neuroscience are also welcomed. It is anticipated that all levels
of analysis from cognitive to single neuron will be represented in
THE JOURNAL OF COMPUTATIONAL NEUROSCIENCE.

*****************************************************************
CALL FOR PAPERS
*****************************************************************

For Instructions to Authors, please contact:

Karen Cullen
Journal of Computational Neuroscience
Kluwer Academic Publishers
101 Philip Drive
Norwell, MA 02061

PH: 617 871 6300
FX: 617 878 0449
EM: Karen@world.std.com
*****************************************************************

*****************************************************************
ORDERING INFORMATION:

For complete ordering information and subscription rates,
please contact:

KLUWER ACADEMIC PUBLISHERS
PH: 617 871 6600
FX: 617 871 6528
EM: Kluwer@world.std.com

JOURNAL OF COMPUTATIONAL NEUROSCIENCE
ISSN: 0929-5313
*****************************************************************



------------------------------

Subject: Proceedings of the 1993 NNSP Workshop
From: Raymond L Watrous <watrous@learning.siemens.com>
Date: Mon, 20 Sep 93 15:06:34 -0500


The 1993 IEEE Workshop on Neural Networks for Signal Processing was
held September 6 - September 9, 1993, at the Maritime Institute of
Technology and Graduate Studies, Linthicum Heights, Maryland, USA.

Copies of the 593-page, hardbound Proceedings of the workshop may be
obtained for $50 (US, check or money order, please) postpaid from:

Raymond Watrous, Financial Chair

1993 IEEE Workshop on Neural Networks for Signal Processing
c/o Siemens Corporate Research
755 College Road East
Princeton, NJ 08540

(609) 734-6596
(609) 734-6565 (FAX)




------------------------------

Subject: Chaos and NN's
From: epperson@evolve.win.net (Mark Epperson)
Date: Tue, 21 Sep 93 04:18:10

I am looking for references/papers on using NN's and chaos theory
for filtering noisy signals.

Thanks in advance,
Mark Epperson

------------------------------

Subject: Free simulation software
From: Bard Ermentrout <bard@mthbard.math.pitt.edu>
Date: Thu, 23 Sep 93 10:41:48 -0500

F R E E S I M U L A T I O N S O F T W A R E

I thought perhaps that modellers and others might be interested to know
of the availability of free software for the analysis and simulation of
dynamical and probabilistic phenomena. xpp is available via anonymous
ftp. It solves integro-differential equations, delay equations, and
iterative equations, all of which can be combined with probabilistic
models. PostScript output is supported. A variety of numerical methods
are employed so that the user can generally be sure that the solutions
are accurate. Examples are connectionist-type neural nets, biophysical
models, models with memory, and models of cells with random inputs or
with random transitions. A graphical interface using X windows as well
as numerous plotting options are provided. The requirements are a C
compiler and an OS capable of running X11. The software has been
successfully compiled on DEC, HP, SUN, IBM, and NeXT workstations as
well as on a PC running Linux. Once it is compiled, no further
compilation is necessary, as the program can read algebraic expressions
and interpret them in order to solve them. The program has been used in
various guises for the last 5 years by a variety of mathematicians,
physicists, and biologists. To get it, follow the instructions below:

------------Installing XPP1.6--------------------------------

XPP is pretty simple to install, although you might have to add a line
here and there to the Makefile. You can get it from
mthsn4.math.pitt.edu (130.49.12.1). Here is how:

ftp 130.49.12.1
cd /pub
bin
get xpp1.6.tar.Z
quit
uncompress xpp1.6.tar.Z
tar xf xpp1.6.tar
make -k

If you get errors in the compilation it is likely to be one
of the following:
1) gcc not found, in which case you should edit the Makefile
so that it says CC= cc
2) Can't find X include files. Then edit the line that says
CFLAGS= ....
by adding
-I<pathname>
where <pathname> is where the include files are for X, e.g.,
-I/usr/X11/include
3) Can't find X libraries. Then add a line
LDFLAGS= -L<pathname>
right after the CFLAGS= line where <pathname> is where to find the X11
libraries
then change this line:
$(CC) -o xpp $(OBJECTS) $(LIBS)
to this line
$(CC) -o xpp $(OBJECTS) $(LDFLAGS) $(LIBS)

That should do it!!

If it still doesn't compile, then you should ask your sysadmin about
the proper paths.

Finally, some compilers have trouble with the GEAR algorithm when
optimization is enabled, so you should remove the optimization flags,
i.e., replace
CFLAGS= -O2 -I<your pathnames>
with
CFLAGS= -I<your pathnames>

delete all the .o files and recompile

Good luck!
Bard Ermentrout


Send comments and bug reports to
bard@mthbard.math.pitt.edu



------------------------------

Subject: CMU Learning Benchmark Database Updated
From: Matthew.White@cs.cmu.edu
Date: Fri, 24 Sep 93 03:15:48 -0500

The CMU Learning Benchmark Archive has been updated. As you may know, in
the past, all the data sets in this collection have been in varying
formats, requiring that code be written to parse each one. This was a
waste of everybody's time. These old data sets have been replaced with
data sets in a standardized format. Now, all benchmarks consist of a
file detailing the benchmark and another file that is either a data set
(.data) or a program to generate the appropriate data set (.c).

Data sets currently available are:
nettalk      Pronunciation of English words.
parity       N-input parity.
protein      Prediction of secondary structure of proteins.
sonar        Classification of sonar signals.
two-spirals  Distinction of a twin spiral pattern.
vowel        Speaker-independent recognition of vowels.
xor          Traditional xor.


To accompany the new data file format, there are a file describing the
format and a C library that parses it. In addition, the simulator (C
version) for Cascade-Correlation has been rewritten to use the new
file format. Both the parsing code and the Cascade-Correlation code
are distributed as compressed shell archives and should compile with
any ANSI/ISO-compatible C compiler.

Code currently available:
nevprop1.16.shar  A user-friendly version of quickprop.
cascor1a.shar     The re-engineered version of the Cascade-Correlation
                  algorithm.
parse1.shar       C code for parsing the new data set format.

Data sets and code are available via anonymous FTP. Instructions follow.

If you have difficulties with either the data sets or the programs,
please send mail to: neural-bench@cs.cmu.edu. Any comments or
suggestions should also be sent to that address. Let me urge you not
to hold back questions, as they are our single best way to spot places
where our methods can be improved.

If you would like to submit a data set to the CMU Learning Benchmark
Archive, send email to neural-bench@cs.cmu.edu. All data sets should be
in the CMU data file format. If you have difficulty converting your data
file, contact us for assistance.


Matt White
Maintainer, CMU Learning Benchmark Archive

--------------------------------------------------------------------

Directions for FTPing datasets:

For people whose systems support AFS, you can access the files directly
from directory "/afs/cs.cmu.edu/project/connect/bench".

For people accessing these files via FTP:

1. Create an FTP connection from wherever you are to machine
"ftp.cs.cmu.edu". The internet address of this machine is
128.2.206.173, for those who need it.

2. Log in as user "anonymous" with your own internet address as password.
You may see an error message that says "filenames may not have /.. in
them" or something like that. Just ignore it.

3. Change remote directory to "/afs/cs/project/connect/bench". NOTE: you
must do this in a single atomic operation. Some of the super
directories on this path are not accessible to outside users.

4. At this point the "dir" command in FTP should give you a listing of
files in this directory. Use get or mget to fetch the ones you want.
If you want to access a compressed file (with suffix .Z) be sure to
give the "binary" command before doing the "get". (Some versions of
FTP use different names for these operations -- consult your local
system maintainer if you have trouble with this.)

5. The directory "/afs/cs/project/connect/code" contains public-domain
programs implementing the Quickprop and Cascade-Correlation
algorithms, among other things. Access it in the same way.




------------------------------

Subject: postdoctoral fellowship opportunity for women and minorities
From: Ken Miller <ken@phy.ucsf.edu>
Date: Mon, 27 Sep 93 03:13:58 -0800

The University of California annually awards 20 or more postdoctoral
fellowships to women and minorities under the "President's
Postdoctoral Fellowship Program". Fellowships are awarded to work
with a faculty member at any of the nine UC campuses or at one of the
three national laboratories associated with UC (Lawrence Berkeley,
Lawrence Livermore, and Los Alamos). Fellowships pay $26-27,000/year,
plus health benefits and $4000/year for research and travel.
Applicants must be citizens or permanent residents of the United
States, and should anticipate completion of their Ph.D.'s by July 1,
1994. For this year's competition, DEADLINE FOR APPLICATION IS
DECEMBER 14, 1993.

There are many of us who work in computational neuroscience or
connectionism in the UC system or the national labs. I would
encourage anyone eligible to make use of this opportunity to obtain
funding to work with one of us. In particular, I encourage anyone
interested in computational neuroscience to contact me to further
discuss my own research program and the research opportunities in
computational and systems neuroscience at UCSF.

To receive a fellowship application and further information, contact:

President's Postdoctoral Fellowship Program
Office of the President
University of California
300 Lakeside Drive, 18th Floor
Oakland, CA 94612-3550
Phone: 510-987-9500 or 987-9503

Ken Miller

Kenneth D. Miller
Dept. of Physiology
University of California, San Francisco
513 Parnassus [Office: S-859]
San Francisco, CA 94143-0444
telephone: (415) 476-8217
fax: (415) 476-4929
internet: ken@phy.ucsf.edu


------------------------------

Subject: Job vacancy in evolutionary algorithms
From: S.FLOCKTON@rhbnc.ac.uk
Date: Mon, 27 Sep 93 12:41:37 +0000

ROYAL HOLLOWAY, UNIVERSITY OF LONDON

POST-DOCTORAL RESEARCH ASSISTANT

EVOLUTIONARY ALGORITHMS IN NON-LINEAR SIGNAL PROCESSING

Applications are invited for this SERC-funded post, tenable for
three years from 1 October 1993 or soon after, to carry out a
comparison of the effectiveness of evolution-based algorithms for
a number of signal processing problems. This comparison will be
done by studying example problems and by developing theoretical ideas
concerning the behaviour of these algorithms.
applicant will join a group investigating several different
aspects of genetic algorithms and neural networks. Royal Holloway,
one of the five multi-faculty Colleges of the University of
London, is situated in a campus environment approximately 20 miles
west of London, just outside the M25.

Applicants should hold a PhD in Electrical Engineering, Computer
Science, Physics, or a related field, preferably in digital signal
processing or genetic and/or other evolution-based algorithms.

Salary on the Research 1A Scale (UKpounds 14,962 - 17,320 pa,
inclusive of London Allowance).

Informal enquiries to Dr Stuart Flockton (Tel: 0784 443510 , Fax:
0784 472794, email: S.Flockton@rhbnc.ac.uk).
Further particulars from the Personnel Officer, Royal Holloway,
University of London, Egham, Surrey, TW20 0EX Tel: 0784 443030.

Closing date for applications: 15th October 1993


------------------------------

Subject: 'Fractal Extrapolation', is there such a thing?
From: pwm@csis.dit.csiro.au
Date: Mon, 27 Sep 93 13:53:29 -0500

Dear Digest Moderator,

I'd be grateful if you could include this in a future digest:

I'm wondering if there is such a thing as fractal extrapolation. I have
a time series with a probabilistic structural feature which operates at
many levels and allows me to predict, with roughly 70% accuracy at each
level, the direction of the next move by delta.

I'm interested in trying to understand the 'big picture' here, and I
got to wondering whether there is such a thing as 'fractal
extrapolation'.

I would be grateful for any pointers to work along these lines.

Cheers,
Peter (milne@csis.dit.csiro.au)


------------------------------

End of Neuron Digest [Volume 12 Issue 4]
****************************************
