Neuron Digest   Friday, 18 May 1990                Volume 6 : Issue 32 

Today's Topics:
Response on Defense Related Research
Response to "Response re Defense Related Research"
re: defense related research
"Press Agency For R+D"
Requesting Bibliography
NN Simulator for the CSPI SuperCard
NACSIS: NSF offers access to Japanese data bases
Ariel : a 100 gigaflops simulator for connectionism research
San Francisco Bay Area talk
NN for resource scheduling
Help request--Need the address to INS
NIPS NOTE
Another journal for neural networks
help with diploma thesis required
New FKI-Reports
Preprint announcement
call for papers -- psychologists
Optimality: BBS Call for Commentators


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Response on Defense Related Research
From: Bob Banks <rnb%computer-science.nottingham.ac.uk@NSFNET-RELAY.AC.UK>
Date: Tue, 15 May 90 11:21:08 +0100

[[ Editor's Note: To return to the nature of this Digest, the
following three messages are the last I will accept on the general subject
of the role of governmental defense departments in the world.
Submissions *directly* relevant to artificial or natural neural network
research are always welcome, but I will refer further political
discussions to appropriate fora. I made only an introductory comment
on this debate; I feel the debate is still a valid one, but not in a
Digest that has a primarily technical focus. -PM ]]

I too am deeply saddened that so much of my own and others' work in
Neural Networks looks likely to be applied to building weapons.

I'll restrict my reply to Dale Nelson to specific points on the funding
and application of NN research, although I have to say that to me, and
many others, his historical analysis seems amazingly naive. I'd suggest
he thinks a little more deeply about the issues, maybe starting with one
of Chomsky's excellent books? (Sorry - that's tongue in cheek - I'm sure
he has thought deeply, and holds his views sincerely - I'm just acting
the patronizing Brit!!!)

However, he has made one important factual error. University research,
and this e-mail network are NOT funded BY the DoD. They are funded by the
American taxpayer THROUGH the DoD. Hence it's illogical to point to
microwave ovens, VLSI chips, etc., as products of MILITARY research. If
the taxpayers' money had been channelled to research in some other way,
the most obvious assumption is that these things would have been
discovered earlier, because far less overhead would have gone to
weapons. This point is borne out by the Japanese experience and
a good deal of other research. (I could dig out the references if
anyone's interested.)

Exactly the same applies to Neural Nets - if military spending were cut,
there'd be MORE money AVAILABLE for basic research in them (other things
being equal).

One final point - I'd like to stay in applied Neural Nets research, but
because it looks as if the majority of applications in the next few years
will be military, I'm actively looking around for other areas of work. I
wonder how many others are in a similar situation? And can we do anything
about it, or are we completely helpless?

Bob Banks

------------------------------

Subject: Response to "Response re Defense Related Research"
From: YOUNG%RCSDY%gmr.com@RELAY.CS.NET
Date: Tue, 15 May 90 15:29:00 -0500

I believe Dale E. Nelson of the Air Force and J. P. Letellier of the Navy
offered unsound rebuttals to David Lovell's thoughtful considerations on
doing defense related research.

> Nelson: Many of the advances made for the military are actively being
> used in every day life... If it were not for the military, these
> things would probably not have been developed, or not developed this soon.

If military expenditures were instead used to produce new consumer
products DIRECTLY (rather than non-productive military goods), it is
undeniable that more, better, and cheaper consumer goods could be produced
faster than they could be from any military "spinoffs", so this argument
is completely fallacious.

> Nelson: Remember, the purpose of SDI, is to DEFEND not attack.

The so-called defensive capabilities of SDI are identical to, and can
easily be converted into, offensive capabilities. Letellier's own
statement concedes this: "The line in research between what is a true
defensive weapon and what is an offensive (pun included) weapon is truly
not clear."
The objective studies I have seen, by parties with no economic
self-interest, indicate that SDI is highly destabilizing and more likely
to lead to nuclear destruction than to prevent it.

> Letellier: And, before the obvious response that we should never
> put our forces in harm's way, consider the history of peoples who have
> refused to defend themselves. The bullies keep on coming until they
> are stopped.

If Letellier believes countries go to war because of being "bullies" and
not because of socio-political and economic forces, then he is dangerously
naive.

> Letellier: And, :-) packaged as making autonomous
> units (forts, ships, air bases) more self sustaining can make this into
> a "Defense" proposal! (-;

Again, money for the worthy goal of conservation would go a lot
further if placed directly into such efforts rather than "indirectly"
through making autonomous military units. In fact, the argument can be
made that military expenditures on weapons are inherently
anti-conservation, since their goal is destruction, not
conservation.

- R. Young

------------------------------

Subject: re: defense related research
From: Klaus Hahn <I3160901%DBSTU1.BITNET@CUNYVM.CUNY.EDU>
Organization: Dept. of Psychology, University of Braunschweig
Date: Fri, 18 May 90 13:58:01 -0500

[[ Editor's note: Coincidental and ironic that the last accepted
submission on this debate comes from Europe, now in the midst of their
own debate on the role of defense in their rapidly changing situation.
We may revisit this in the future. -PM ]]

I do hope this reply isn't starting some kind of net-war, BUT I think
there is too much wrong with Mr. Nelson's defense of his job. The points
that ARE of importance are easily enough summarized: no matter how much
money the DoD spends supporting students, universities, research
groups, brain trusts and the like, they IN FACT only give the money back
to where it came from. I just can't believe that Mr. Nelson earned the
mentioned $7M during his career. So there's no point in being thankful
to a group of people who grasp a lot of money in the first place and
spend it - and that's the second point - exactly on those projects they
need in order to prepare the killing of people, for that's what it's all
about, isn't it? They shouldn't tell everybody about microwave ovens, etc.,
because they didn't WANT to invent them, they just stumbled over things
like that.

There are a few documents in Europe in which people have tried to assess
the by-products of military-oriented research, and they conclude that
this argument is ridiculous. With about 10% of the money, spent directly
on research, these products would exist just as well, if not in better
form. What isn't so easy to see is that there must also be A LOT of
products which never show up at all, i.e. either they are not useful to
the public (but perhaps for warfare...) or they are just junk, silently
forgotten. And a private remark: they could keep their microwave ovens,
I wouldn't mind.

Klaus Hahn

Dept. of Psychology
University of Braunschweig
Spielmannstr. 19
D-3300 Braunschweig, West-Germany


------------------------------

Subject: "Press Agency For R+D"
From: Hans_Michalec_FACTUM_EST@eurokom.ie
Date: Sun, 13 May 90 11:02:00 +0700

[[ Editor's Note: I present this without knowing anything about the
organization or correcting any of the content of the message. Independent
comments would be welcome. My first questions: What is the language of
publication? What is the background and aim of the organization? What
would be similar publications (in the U.S., Scientific American and
Science News come to mind)? I guess I'd also like to see a sample issue
to better understand the format. -PM ]]

Dear Sir/Madam,

We presume that you agree with the thought that R & D should not live in
an ivory tower but should turn to the public. Our aim is to act as a link
between science and the public. "Factum est", the independent Austrian
Press Agency for Research & Development, was founded in 1988 and now has
about 20 million potential readers. Being successful on the Austrian media
market, we plan to go abroad. Therefore we want to widen the spectrum of
our reports too.

Dear Sir/Madam, convinced that your scientific work is of interest to
international readers, whether they are lay people, scientists or
entrepreneurs, we invite you to cooperate - at no cost to you. If
you are interested:

+ Please send us some literature, papers, abstracts or descriptions of
your current or recently finished projects.
+ Papers should include main issues, final goals, methods and, very
important, some potential applications. Please don't write a
particular article for us; just send already-published papers that we
can use as background information for our article.
+ Please enclose reproducible color pictures (negative, slide or photo of
good quality). You can be sure of getting your originals back soon.
+ Once we have your material, we will contact you if there are open
questions.
+ Please bear in mind: we report in the style you may find in, e.g.,
magazines like "Newsweek" or "Time Magazine"; we are not a scientific
journal. Our approach is to inform the interested layman who wants to
know what's going on in research laboratories - whether to widen his
horizon, to stay up to date, or to look for industrial applications of
scientific knowledge. This approach does not, of course, mean
inadmissible simplification of the results. Facts must stay facts.

Looking forward to hearing from you

Yours sincerely

Mag. Bernhard Emerschitz
FACTUM EST
2nd Editor in Chief

Please contact us:
Post address:
Factum est
Waehringer Str. 57-7
A-1090 Vienna-AUSTRIA
Phone: +222-487940, Mag. Bernhard Emerschitz
Telex: 75312371=fact a
Eurokom: Hans Michalec FACTUM EST
Int. E-Mail: Hans_Michalec_FACTUM_EST@eurokom.ie

------------------------------

Subject: Requesting Bibliography
From: <PITOURA%GRPATVX1.BITNET@CUNYVM.CUNY.EDU>
Date: 14 May 90 13:54:27 -0400

Pitoura Evangelia
Computer Technology Institute
Po Box 1122
26110 Patras
Greece

We are interested in the learnability of concepts (in Valiant's sense)
by Neural Networks. We are aware of the influential paper by Blumer et
al. [1986], the paper by Baum and Haussler presented at NIPS 1988, and the
special issue on NN in the Journal of Complexity [1988].
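
(For orientation only - this is not part of the original request: the
results cited above are usually quoted as sample-size bounds of roughly
the following form, given here in LaTeX notation; the exact constants and
conditions are in the papers.)

% Valiant's PAC criterion: with probability at least $1-\delta$ over the
% training sample, the learned hypothesis $h$ satisfies
\Pr_{x \sim D}\left[\, h(x) \neq c(x) \,\right] \;\le\; \epsilon .

% Blumer et al. (VC-dimension bound): a sample of size
m \;=\; O\!\left( \frac{1}{\epsilon}\left( \log\frac{1}{\delta}
        + d_{\mathrm{VC}} \log\frac{1}{\epsilon} \right) \right)
% suffices, where $d_{\mathrm{VC}}$ is the VC dimension of the concept class.

% Baum and Haussler (feedforward net with W weights and N units): roughly
m \;=\; O\!\left( \frac{W}{\epsilon} \log\frac{N}{\epsilon} \right)
% examples suffice for generalization error at most $\epsilon$, provided
% the training error is sufficiently small.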

We are asking for related bibliography in this context. Please send
e-mail to PITOURA@PATGRVX1.BITNET

[[ Editor's Note: As always, sharing the results with the readers of
Neuron Digest would be appreciated! -PM ]]

------------------------------

Subject: NN Simulator for the CSPI SuperCard
From: Lucia Sprotte-Kleiber <sprotte@ifi.unizh.ch>
Date: 15 May 90 12:01:00 +0200


SuperCard is a new board from CSPI based on Intel's i860 array
processor (up to 640 MFLOPS). This board is available for Sun
workstations (and other systems).

Does anyone know whether there already exists a neural network simulator
for the SuperCard on the Sun, or whether there is a project to build
such a simulator?

Many thanks,

Lucia Sprotte-Kleiber, Institut fuer Informatik, Universitaet Zurich

------------------------------

Subject: NACSIS: NSF offers access to Japanese data bases
From: Douglas McNeal <dmcneal@note.nsf.gov>
Date: Mon, 26 Mar 90 14:34:23 -0500

NSF now offers U.S. scientists and engineers free on-line
access to nine Japanese science data bases. The data bases, which are
compiled and updated by Japan's National Center for Science Information
System (NACSIS), index such topics as research projects sponsored by
Japan's Ministry of Education, Science, and Culture; papers presented at
conferences of electronics societies; and all doctoral theses.

U.S. researchers may request searches by surface mail to
Room 416-A, NSF, Washington, D.C. 20550, or by electronic mail.
Researchers may also contact the operator to schedule training at the NSF
offices in using the Japanese-language system in person.

For further information, request NSF publication 90-33,
NACSIS, from NSF's Publications Unit. To request a search, to reserve
time on the system, or to discuss research support opportunities, please
call the NACSIS operator at (202) 357-7278 between the hours of 1 and 4
p.m., EST, or send a message by electronic mail to

nacsis@nsf.gov (Internet) or
nacsis@NSF (BitNet)

------------------------------

Subject: Ariel : a 100 gigaflops simulator for connectionism research
From: gately@resbld.csc.ti.com (Michael T. Gately)
Date: Tue, 15 May 90 13:34:00 -0500

[[ Editor's Note: Long time readers will recognize this author as the
founder of Neuron Digest. Hi, Mike! -PM ]]

We at Texas Instruments' Central Research Laboratories have designed and
prototyped a computer system to perform connectionism research. The
hardware, called Ariel, uses only conventional integrated circuitry and
is highly scalable. Our ultimate goal is to provide a system capable of
simulating the behavior of neural networks composed of millions of
neurons and tens of billions of synapses at rates exceeding 100 billion
synapse operations per second. This level of performance approaches the
throughputs required to explore large network applications in machine
vision and artificial intelligence.

The hardware design consists of up to several thousand coarse-grained
processing modules that are organized into a hierarchical bus structure.
Each module contains several microprocessors, 128 Megabytes of fast RAM,
and a 140 Megabyte disk storage unit. A 1000-module version of Ariel
will allow 6.4 billion floating point synapses to reside in RAM, and will
carry out network calculations at peak rates above 30 billion connection
calculations per second. The entire system is controlled by one or more
workstation terminals. A small prototype (four modules) of the Ariel
architecture has been wirewrapped and is now undergoing evaluation, but
productization of this system has not been planned.
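
(A quick back-of-envelope check of the figures quoted above - this is our
own arithmetic, not TI's sizing, and the bytes-per-synapse assumption is
ours. A minimal Python sketch:)

# Sanity-check the Ariel figures quoted in the announcement above.
modules = 1000                        # modules in the full configuration
ram_per_module = 128 * 2**20          # 128 Megabytes of fast RAM each
total_ram = modules * ram_per_module  # = 125 GiB in the 1000-module system

# Assumption (ours): a bare 32-bit weight would need 4 bytes per synapse.
print("synapse capacity at 4 bytes each: %.1f billion" % (total_ram / 4 / 1e9))

# The announcement quotes 6.4 billion resident floating point synapses,
# which implies a budget of roughly 21 bytes of RAM per synapse
# (presumably weight plus indexing and state overhead):
print("bytes per synapse implied by 6.4e9: %.1f" % (total_ram / 6.4e9))

# 30 billion connection calculations per second over 6.4 billion
# connections means the whole network is swept about 4.7 times a second:
print("network sweeps per second: %.2f" % (30e9 / 6.4e9))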

Our question is: Does anyone have a current problem that is so complex as to
require a simulation tool with the throughput of Ariel? In short, who is
stifled? We would be interested in getting some ideas from potential
users as to their projected needs in super-gigaflop connectionism
research. Please reply to me at:
gfrazier@resbld.ti.com
and mention the type of problem, the expected network size, and perhaps
the type of algorithm you expect to use (e.g. vision, speech, robotics,
perceptron, RCE, hybrid, cellular automata, genetic, 1 million cells,
video rates, etc.). I will summarize any responses I get and put them
back on the network. A technical report can also be obtained by request
to the above address or by writing to:

Gary Frazier
Mail Station 154
Texas Instruments
P.O. Box 655936
Dallas, Texas 75265


------------------------------

Subject: San Francisco Bay Area talk
From: kingsley@hpwrc04.hp.com
Date: Tue, 15 May 90 18:11:59 -0700

**************************************************************

                 A I   F O R U M   M E E T I N G

   SPEAKER:  Arthur Gaffin
   TOPIC:    Neural Nets, vision

   WHEN:     7PM Tuesday 5/22/90
   WHERE:    Lockheed building 202, auditorium
             3251 Hanover Street
             Palo Alto, CA

   AI Forum meetings are free, open and monthly!
   Call (415) 594-1685 for more info

**************************************************************


------------------------------

Subject: NN for resource scheduling
From: RM5I%DLRVM.BITNET@CUNYVM.CUNY.EDU
Date: Wed, 16 May 90 10:49:04 -0500

Hi,

I am trying to find people who are interested in using NNs for resource
scheduling in different areas, such as mission planning in the aerospace
business.

If someone is interested, I would like to discuss this area and exchange
information and applications.

Best regards

Roland Luettgens
German Aerospace Research Establishment
8031 Wessling
West Germany
Phone: 8153 28 1261
Fax : 8153 2447

------------------------------

Subject: Help request--Need the address to INS
From: craigh%dollar.wr.tek.com@RELAY.CS.NET
Date: Wed, 16 May 90 11:02:34 -0700


Could you please e-mail the address and phone number of the International
Neural-Net Society (INS)? (This may not be the exact name of the society,
since I didn't write it down.)

A friend who no longer has net access is trying to track down what
happened to his application to the society. He especially wants access
to their journals.

Thanks,
Craig Hondo
Tektronix, Inc.
Beaverton, OR 97076
(503)629-1364
craigh@amadeus.WR.TEK.COM

------------------------------

Subject: NIPS NOTE
From: jose@learning.siemens.com (Steve Hanson)
Date: Thu, 17 May 90 07:06:40 -0500

[[ Editor's Note: Unfortunately, this Digest may come out too late for
the submission deadline. Oh well... -PM ]]

LAST MINUTE NOTE (RE: Cognitive Science/AI)

As you are taking care of your last-minute details prior to mailing,
remember there is a new submission category this year.
Anyone submitting summaries relevant to COGNITIVE SCIENCE or AI,
please indicate this on your summary/abstract.

Steve

------------------------------

Subject: Another journal for neural networks
From: "Edward K. Blum" <blum%pollux.usc.edu@usc.edu>
Date: Thu, 17 May 90 12:06:43 -0700

Please add the following to your recent listing of journals which publish
papers on neural networks: Journal of Computer and System Sciences,
Managing Editor: Prof. E. K. Blum, Math. Dept., U. So. Cal., L.A. 90089.
We have recently added editors to our board to cover computational aspects
of neural networks, mathematical modeling of neurobiological phenomena,
theoretical aspects of neural networks, and related topics. We consider
in-depth papers rather than short notes. Papers should be submitted to me.
Thank you. E. K. Blum

------------------------------

Subject: help with diploma thesis required
From: <PITOURA%GRPATVX1.BITNET@CUNYVM.CUNY.EDU>
Date: 17 May 90 18:11:00 -0400

Hi there! I am a student currently working on my diploma thesis on neural
networks and I have a few questions I'd like to ask. I started studying
the literature on neural nets some three months ago and from what I have
read thus far, it seems that the back propagation learning rule is the
rule most often used to train networks. I also know that several
variations on the original BP have been devised in an attempt to speed up
convergence and improve the properties of the trained net (e.g. enhance
generalization).
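
(As one concrete illustration of the kind of variation meant above -
chosen here only as an example, and not necessarily one of those the
poster has in mind - a widely used modification adds a momentum term to
the weight update:

\Delta w_{ij}(t) \;=\; -\,\eta\,\frac{\partial E}{\partial w_{ij}}
                       \;+\; \alpha\,\Delta w_{ij}(t-1),
\qquad 0 \le \alpha < 1,

where \eta is the learning rate and \alpha the momentum coefficient;
other variations adapt \eta per weight or per epoch.)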

I would like to carry out an experimental study of several of these
variations and see for example how the convergence rate of each of the
several variations is affected by the training set size or which
variation generalizes better. (The list of things I want to test the BP
variations on is too long to give here, but if anyone wants to make any
additions just write to me.)

The first of my questions is whether such a study has already been done
(I'm not talking about studies that compare just one or two variations to
the original BP, but studies that use a common testbed to compare several
variations).

Second, what are the most important enhancements/modifications to the
original BP? I already know a few, but I am certain there are several
more. Can anyone give me a list of references to papers and/or TR's that
discuss BP variations?

Finally, I know that the majority function and the parity problem have
been used in the past to test the properties of BP. An early thought of
mine is to use in my study the task of detecting and correcting errors in
binary words using the Hamming code (in this task the input vectors
contain both the data and check bits while the output vectors use only
the data bits). This problem is quite interesting since the network will
have to learn how to use the check bits and, as far as I know, it has not
been used before. As I said however, this is just an early thought and if
someone has something better to suggest, just write to me.
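
(A minimal sketch of how such a training set could be generated, assuming
the standard Hamming(7,4) code - the message above does not fix a
particular code, and the function and variable names here are ours:)

# Build a training set for the Hamming-code task described above:
# inputs are (possibly corrupted) 7-bit codewords, targets are the
# 4 original data bits.
from itertools import product

def encode_hamming74(d):
    """(d1, d2, d3, d4) -> 7-bit codeword, parity bits at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]

inputs, targets = [], []
for data in product([0, 1], repeat=4):
    codeword = encode_hamming74(data)
    for flip in range(-1, 7):          # -1 means "no error introduced"
        x = list(codeword)
        if flip >= 0:
            x[flip] ^= 1               # single-bit corruption
        inputs.append(x)
        targets.append(list(data))

print(len(inputs), "patterns")         # 16 codewords x 8 variants = 128

A net trained on these 128 input/target pairs must, in effect, learn to
use the check bits to locate and undo a single-bit error.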

I'd prefer it if responses to this message were sent directly to me at
PITOURA@GRPATVX1.BITNET.

Any information will be of great help to me.

PS. If you see other messages on this list with the same e-mail address
but asking for different things and you're wondering what is going on,
the explanation is simply that I share this e-mail address with other
people also working on NN.

[[ Editor's Note: Again, copies of responses or summaries might also be
sent to neuron-request@hplabs.hpl.hp.com for sharing with other
readers/researchers. -PM ]]

------------------------------

Subject: New FKI-Reports
From: Juergen Schmidhuber <schmidhu@tumult.informatik.tu-muenchen.de>
Date: Tue, 01 May 90 10:15:12 +0200

Two new reports on spatio-temporal credit assignment in neural networks
for adaptive control are available.

LEARNING TO GENERATE FOCUS TRAJECTORIES FOR ATTENTIVE VISION
FKI-REPORT 128-90
Juergen Schmidhuber and Rudolf Huber

One motivation of this paper is to replace the often unsuccessful and
inefficient purely static `neural' approaches to visual pattern
recognition by a more efficient sequential approach. The latter is
inspired by the observation that biological systems employ sequential
eye-movements for pattern recognition.

The other motivation is to demonstrate that there is at least one
principle which can lead to the LEARNING of dynamic selective spatial
attention.

A system consisting of an adaptive `model network' interacting with a
dynamic adaptive `control network' is described. The system LEARNS to
generate focus trajectories such that the final position of a moving
focus corresponds to a target to be detected in a visual scene. The
difficulty is that no teacher provides the desired activations of
`eye-muscles' at various times. The only goal information is the desired
final input corresponding to the target. Thus the task involves a complex
temporal credit assignment problem, as well as an attention shifting
problem.

It is demonstrated experimentally that the system is able to learn
correct sequences of focus movements involving translations and
rotations. The system also learns to track a moving target. Some
implications for attentive systems in general are discussed. For
instance, one can build a `mental focus' which operates on the set of
internal representations of a neural system. It is suggested that
self-referential systems which model the consequences of their own
`mental focus shifts' open the door for introspective learning in neural
networks.
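
(To make the temporal credit assignment problem mentioned above concrete,
here is a deliberately simplified sketch - NOT the report's algorithm: the
report's model network is itself learned, whereas below the "model" is a
fixed, known linear map and the controller is linear. It only illustrates
how an error defined on the FINAL state can be propagated back through a
differentiable model to train a controller when no teacher specifies the
intermediate actions.)

import numpy as np

rng = np.random.default_rng(0)
n, T, lr = 4, 10, 0.05
A = 0.9 * np.eye(n)              # assumed "world/model" dynamics
B = 0.1 * np.eye(n)              # assumed effect of the controller's actions
s0 = rng.normal(size=n)          # initial "focus" state
goal = np.zeros(n)               # desired final input (the target)
K = np.zeros((n, n))             # controller parameters, a_t = K s_t

for it in range(200):
    # forward pass: roll the controller out through the model
    states = [s0]
    for t in range(T):
        s = states[-1]
        states.append(A @ s + B @ (K @ s))
    # backward pass: only the final state carries an error signal
    lam = 2.0 * (states[-1] - goal)            # dL/ds_T
    gradK = np.zeros_like(K)
    for t in reversed(range(T)):
        gradK += np.outer(B.T @ lam, states[t])
        lam = (A + B @ K).T @ lam              # dL/ds_t
    K -= lr * gradK

print("final distance to target:", np.linalg.norm(states[-1] - goal))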


TOWARDS COMPOSITIONAL LEARNING IN NEURAL NETWORKS
FKI-REPORT 129-90
Juergen Schmidhuber

None of the existing learning algorithms for neural networks with
internal and/or external feedback addresses the problem of learning by
composing subprograms, of learning `to divide and conquer'. In this work
it is argued that algorithms based on pure gradient descent or on
temporal difference methods are not suitable for large scale dynamic
control problems, and that there is a need for algorithms that perform
`compositional learning'. Some problems associated with compositional
learning are identified, and a system is described which attacks at least
one of them. The system learns to generate sub-goals that help to
achieve its main goals. This is done with the help of `time-bridging'
adaptive models that predict the effects of the system's sub-programs. A
simple experiment is reported which demonstrates the feasibility of the
method.


To obtain copies of these reports, write to

Juergen Schmidhuber
Institut fuer Informatik,
Technische Universitaet Muenchen
Arcisstr. 21
8000 Muenchen 2
GERMANY

or send email to
schmidhu@lan.informatik.tu-muenchen.dbp.de

Only if this does not work for some reason, try
schmidhu@tumult.informatik.tu-muenchen.de

Please let your message look like this:

subject: FKI-Reports
FKI-128-90, FKI-129-90
Physical address (not more than 33 characters per line)

DO NOT USE REPLY!

------------------------------

Subject: Preprint announcement
From: Rich Sutton <rich@gte.com>
Date: Thu, 03 May 90 12:12:19 -0400

How could a connectionist network _plan_ a sequence of actions before
doing them? The following preprint describes one answer.



INTEGRATED ARCHITECTURES FOR LEARNING, PLANNING, AND REACTING
BASED ON APPROXIMATING DYNAMIC PROGRAMMING

Richard S. Sutton
GTE Labs

Abstract

This paper extends previous work with Dyna, a class of architectures for
intelligent systems based on approximating dynamic programming methods.
Dyna architectures integrate trial-and-error (reinforcement) learning and
execution-time planning into a single process operating alternately on
the world and on a learned model of the world. In this paper, I present
and show results for two Dyna architectures. The Dyna-PI architecture is
based on dynamic programming's policy iteration method and can be related
to existing AI ideas such as evaluation functions and universal plans
(reactive systems). Using a navigation task, results are shown for a
simple Dyna-PI system which simultaneously learns by trial and error,
learns a world model, and plans optimal routes using the evolving world
model. The Dyna-Q architecture is based on Watkins's Q-learning, a new
kind of reinforcement learning. Dyna-Q uses a less familiar set of data
structures than does Dyna-PI, but is arguably simpler to implement and
use. We show that Dyna-Q architectures are easy to adapt for use in
changing environments.
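
(A minimal tabular sketch of the Dyna-Q loop described above, on a toy
deterministic chain task of our own choosing - the environment, parameter
values and reward scheme here are not the paper's:)

import random

N_STATES, ACTIONS = 6, [0, 1]            # actions: 0 = left, 1 = right
GAMMA, ALPHA, EPS, PLAN_STEPS = 0.95, 0.1, 0.1, 10

def env_step(s, a):
    """Deterministic chain; reward 1 for stepping right off the last state."""
    if a == 1:
        if s == N_STATES - 1:
            return 0, 1.0                # goal reached, restart at state 0
        return s + 1, 0.0
    return max(s - 1, 0), 0.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                               # learned model: (s, a) -> (s', r)
s = 0
for step in range(2000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < EPS else \
        max(ACTIONS, key=lambda b: Q[(s, b)])
    s2, r = env_step(s, a)
    # direct reinforcement learning: one-step Q-learning on the real step
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    # model learning
    model[(s, a)] = (s2, r)
    # planning: apply the same update to transitions replayed from the model
    for _ in range(PLAN_STEPS):
        (ps, pa), (ps2, pr) = random.choice(list(model.items()))
        Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, b)] for b in ACTIONS)
                                - Q[(ps, pa)])
    s = s2

print({st: round(max(Q[(st, a)] for a in ACTIONS), 2) for st in range(N_STATES)})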

---------------

This paper will appear in the proceedings of the Seventh International
Conference on Machine Learning, to be held June, 1990.
For copies, send a request with your US MAIL address to: clc2@gte.com

------------------------------

Subject: call for papers -- psychologists
From: Noel Sharkey <noel%cs.exeter.ac.uk@NSFnet-Relay.AC.UK>
Date: Wed, 09 May 90 13:33:07 +0100

******************** CALL FOR PAPERS ******************

CONNECTION SCIENCE SPECIAL ISSUE


CONNECTIONIST MODELLING OF PSYCHOLOGICAL PROCESSES

EDITOR
Noel Sharkey

SPECIAL BOARD
Jim Anderson
Andy Barto
Thomas Bever
Glyn Humphries
Walter Kintsch
Dennis Norris
Ronan Reilly
Dave Rumelhart

The journal Connection Science would like to encourage submissions
from researchers modelling psychological data or conducting experiments
comparing models within the connectionist framework. Papers of this
nature may be submitted to our regular issues or to the special issue.

Authors wishing to submit papers to the special issue should mark
them SPECIAL PSYCHOLOGY ISSUE. Good quality papers not accepted
for the special issue may appear in later regular issues.


DEADLINE FOR SUBMISSION 12th October, 1990.


Notification of acceptance or rejection will be by the
end of December/beginning of January.


------------------------------

Subject: Optimality: BBS Call for Commentators
From: Stevan Harnad <harnad@clarity.Princeton.EDU>
Date: Wed, 09 May 90 16:06:51 -0400


Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international, interdisciplinary
journal providing Open Peer Commentary on important and controversial
current research in the biobehavioral and cognitive sciences. To be
considered as a commentator or to suggest other appropriate commentators,
please send email to:
harnad@clarity.princeton.edu or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

Please specify the aspect of the article that you are qualified and
interested to comment upon. If you are not a current BBS Associate,
please send your CV and/or the name of a current Associate who would be
prepared to nominate you.
____________________________________________________________________
The Quest for Optimality: A Positive Heuristic of Science?

Paul J. H. Schoemaker
Center for Decision Research
Graduate School of Business
University of Chicago
Chicago, IL 6063

Abstract

This paper examines the strengths and weaknesses of one of science's most
pervasive and flexible metaprinciples: Optimality is used to explain
utility maximization in economics, least effort principles in physics,
entropy in chemistry, and survival of the fittest in biology. Fermat's
principle of least time involves both teleological and causal
considerations, two distinct modes of explanation resting on poorly
understood psychological primitives. The rationality heuristic in
economics provides an example from social science of the potential biases
arising from the extreme flexibility of optimality considerations,
including selective search for confirming evidence, ex post
rationalization, and the confusion of prediction with explanation.
Commentators are asked to reflect on the extent to which optimality is
(1) an organizing principle of nature, (2) a set of relatively
unconnected techniques of science, (3) a normative principle for rational
choice and social organization, (4) a metaphysical way of looking at the
world, or (5) something else still.

Key Words:
Optimization, Variational Principles, Rationality,
Explanation, Evolution, Economics, Adaptation, Causality, Heuristics,
Biases, Sociobiology, Control Theory, Homeostasis, Entropy, Regulation.

------------------------------

End of Neuron Digest [Volume 6 Issue 32]
****************************************
