Alife Digest Number 050

ALIFE LIST: Artificial Life Research List Number 50 Friday, November 30th 1990

ARTIFICIAL LIFE RESEARCH ELECTRONIC MAILING LIST
Maintained by the Indiana University Artificial Life Research Group

Contents:
extra copies of alife digests
Artificial Life bibliography
TR on animats' adaptive behavior
New York Times Article On Massively Parallel Computing
Job at the National Library of Medicine
Artificial Life Digest, #49

----------------------------------------------------------------------

From: Elisabeth Freeman <bfreeman@copper.ucs.indiana.edu>
Subject: extra copies of alife digests

We are aware of the problem with duplicate (or triplicate) copies of
the digest arriving at some sites. Unfortunately, this problem is
caused by iuvax.cs.indiana.edu being rebooted, so there's not much
we can do about it at this time. When the machine is rebooted, old processes are
sometimes reinstated, which leads to extra mailings to some people
on the list. We apologize, and just have to hope that it doesn't
happen too often.

Elisabeth Freeman

------------------------------

Date: Sat, 24 Nov 90 23:11:46 PST
From: rbelew@UCSD.EDU (Rik Belew)
Subject: Artificial Life bibliography

As some of you have noticed, the annotated bibliography at the end of
Chris Langton's "Artificial Life" proceedings says this:

In order to compensate for the inadequacies of this bibliography,
it is being ported to an on-line bibliographic database
with an intelligent front end which will allow Internet access for
searching, downloading, and updating citations. Citations will be
able to be retrieved, added, or updated in either BibTeX or Refer
format. We expect that the bibliography will be available on-line
early in 1989. For information on accessing the bibliography via
Internet, contact Richard K. Belew ...

All true, except for the date! Due to the vagaries of funding and
other boring but unending problems, this project has fallen sadly
behind schedule. It continues to inch forward, however, and if
"1989" were "1991," Chris's statement would come close to being true.

But because the raw database is of great value to ALifers, and
because I can't live with the guilt any longer, I'm posting the raw
BibTeX database Chris gave to the ALife server (iuvax.cs.indiana.edu,
129.79.254.192) now. It can be found in:
iuvax.cs.indiana.edu:/pub/alife/papers/alife.bib
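
For those unfamiliar with the two formats mentioned above, the same citation
might look like this in each (the entry details here are illustrative only,
not copied from alife.bib):

    BibTeX:

    @book{langton89,
      editor    = {Christopher G. Langton},
      title     = {Artificial Life},
      publisher = {Addison-Wesley},
      year      = {1989}
    }

    Refer:

    %E Christopher G. Langton
    %T Artificial Life
    %I Addison-Wesley
    %D 1989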

BIBLIO, the system providing all the additional services Chris
mentioned, will include these citations and approx. 10,000 others on
artificial intelligence and general computer science. I will announce
this system here on Alife-List when it is ready.

Richard K. Belew

rik@cs.ucsd.edu

Assistant Professor
CSE Department (C-014)
UCSD
San Diego, CA 92093
619 / 534-2601 or 534-5288 (messages)



------------------------------

Date: Mon, 26 Nov 90 10:18:05 +0100
From: meyer%FRULM63.BITNET@CUNYVM.CUNY.EDU (Jean-Arcady MEYER)
Subject: TR on animats' adaptive behavior

The following technical report is now available:

Tech. Rep. BioInfo-90-1
Jean-Arcady Meyer and Agnes Guillot

Abstract: Following a general presentation of the numerous means whereby
animats - i.e., simulated animals or autonomous robots - are enabled to
display adaptive behaviors, various works making use of such means are
discussed.
This review is organized into three parts dealing respectively with
preprogrammed adaptive behaviors, with learned adaptive behaviors and
with the evolution of these behaviors.
A closing section addresses directions in which it would be desirable
to see future research oriented, so as to provide something other than
proofs of principle or ad hoc solutions to specific problems, however
interesting such proofs or solutions may be in their own right.

For a hardcopy of the above paper, please send a request for Tech. Rep.
BioInfo-90-1 to: meyer@frulm63.bitnet


------------------------------

From: Richard Rychtarik <rcr@Think.COM>
Date: Mon, 26 Nov 90 17:55:29 EST
Subject: New York Times Article On Massively Parallel Computing

This article appeared in Sunday's New York Times.

Richard

title: A TECHNOLOGY ONCE CONSIDERED DUBIOUS IS NOW THE WAVE OF THE FUTURE
author: JOHN MARKOFF

The long-running debate about how best to make computers
thousands of times more powerful than they are today appears to be
ending in a stunning consensus.
The nation's leading computer designers have agreed that a
technology considered the underdog as recently as two years ago has
the most potential for the next generation of the world's fastest
computers, called supercomputers.
The winning technology is known as massively parallel
processing, and it involves hundreds or thousands of independent
processors -- silicon chips that perform calculations -- teaming up
as one computer to solve problems far too imposing for even the
most powerful computers today.
Until now, the leading supercomputer makers -- including Cray
Research Inc., Convex Computer Corp. and IBM -- have built machines
that link only two to eight processors.
Among the problems the new technology could address are
simulating the reaction of the body to a new drug without the need
for human subjects; mapping the human genetic structure to better
understand inherited diseases; generating models of the world's
climates to study changes induced by air pollutants, and
recognizing spoken languages and images in order to improve the
versatility of factory robots.
A conference in New York earlier this month, sponsored by the
Institute of Electrical and Electronics Engineers, provided clear
signals of how much things have changed.
``We've moved from an era in which massively parallel computers
were thought of as toys to one where they've become something that
a scientist is willing to bet his career on,'' said Larry L. Smarr,
an astrophysicist who directs the National Center for Supercomputing
Applications in Champaign, Ill.
For example, leading computer designers at Cray and Convex said
last week that their companies are embarking on designs for
massively parallel computers.
And Intel Corp. announced at the conference that a consortium of
14 U.S. laboratories was buying a custom-made massively parallel
computer the company built around 512 of its most powerful
microprocessors.
Yet another indication of the consensus came when W. Daniel
Hillis, an advocate of massively parallel computing, asserted in
the keynote address to an audience of almost 1,000 researchers that
conventional supercomputers would be obsolete by the middle of the
decade.
No one disputed his claim. At past conferences, he had faced
loud dissent.
Conventional supercomputers line up billions of calculations,
then use one or several extremely complex and expensive processors
to race through them.
By contrast, massively parallel computers divide a computing
problem among many simple, inexpensive processors.
Like Lilliputians, these processors, each with the same amount
of power as today's personal computer, can in theory work together
as an army with enough power to overwhelm a Gulliver, like the Cray
Y-MP supercomputer.
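
As a rough sketch of the contrast (in illustrative modern-day code, not
anything from the article), the same sum-of-squares job can either be raced
through by one processor or divided into slices that many simple workers
handle independently:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each simple worker handles its own slice of the problem.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1000000))

        # "Vector" style: one fast processor races through every calculation.
        serial = sum(x * x for x in data)

        # Massively parallel style: divide the problem among many workers,
        # then combine their partial results.
        n_workers = 8
        chunks = [data[i::n_workers] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            parallel = sum(pool.map(partial_sum, chunks))

        assert serial == parallel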
But until recently, it was thought that software to coordinate
the work of so many separate processors was either still many years
in the future or not possible.
As a result, many computer designers stood by the competing
technology as the most realistic alternative.
They argued that the chaos resulting from inadequate attempts to
harness many processors would offset any significant gains from the
many extra chips.
But now advances in programming techniques have convinced many
skeptics that problems that were until now solvable only by
single-processor supercomputers, known as vector processors, can be
attacked at much faster speeds by massively parallel machines.

``The world is moving to massively parallel computers more
quickly than I would have thought even recently,'' said George E.
Lindamood, a researcher at the Gartner Group who follows the
industry producing the highest-performance computers.
For example, researchers at Lockheed Corp. recently used a
massively parallel computer to solve an extremely complex problem --
the simulation of radar on the surfaces of a secret military plane
to determine whether it could avoid detection -- that a conventional
supercomputer could not handle.
Some computer scientists had thought the parallel computer could
not solve the problem either, because the task could not easily be
broken up into equations to be farmed out to the many processors.
But Lockheed researchers were able to develop software that
presented the equations in such a way that each processor could be
allocated a different variable for which to find a correct value.
The trick then was to coordinate the processors so that each
took into account the calculations of the others.
The process is something like opening a lock with thousands of
separate cylinders by turning all the cylinders, at once, to the
right numbers in the combination.
The size of the Lockheed problem was staggering, Hillis said.
The equations consisted of 54,000 variables.
Just writing out the equations required the equivalent of 40
encyclopedias of information.
More than 1,000 trillion mathematical operations were required,
and solving the problem took a week.
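
The article does not say which numerical method Lockheed used, but a
Jacobi-style iteration is one classic scheme that fits the description:
each processor owns a single variable and, in synchronized rounds,
re-solves it using the values the others produced in the previous round.
A toy sketch, with a 3-variable system standing in for the 54,000-variable
one:

    def jacobi_step(A, b, x):
        # Every "processor" updates its own variable x[i], taking into
        # account the calculations of all the others from the last round.
        n = len(x)
        return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                / A[i][i] for i in range(n)]

    A = [[4.0, 1.0, 0.0],
         [1.0, 4.0, 1.0],
         [0.0, 1.0, 4.0]]
    b = [5.0, 6.0, 5.0]
    x = [0.0, 0.0, 0.0]
    for _ in range(50):
        x = jacobi_step(A, b, x)
    # x settles near the exact solution [1.0, 1.0, 1.0]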
The victory for parallel processing has implications not only
for the computing industry but for U.S. competitiveness.
In the industry, Cray and Convex are finding themselves behind
companies that moved quickly toward parallel designs in recent
years and that are now selling such products.
Among those companies are Hillis' company, Thinking Machines
Corp., of Cambridge, Mass.; NCube, of Beaverton, Ore.; the
scientific-computer division of Intel, also in Beaverton; Maspar
Computer Corp., of Sunnyvale, Calif.; Teradata, of Los Angeles, and
Supercomputer Systems Inc., a start-up company run by Steve Chen, a
former Cray designer, and financed partly by IBM.
The largest Japanese computer makers have been slow to adapt to
parallel technology and are still building vector machines that
imitate those of Cray and Convex.
Indeed, the shift to massively parallel designs appears to
consolidate the U.S. lead in supercomputer technology.
Analysts give U.S. companies a lead of 18 months to two years.
But the Japanese government, recognizing the importance of
parallel computing, is organizing a larger research effort to catch
up.

And Fujitsu Ltd. has already begun test-marketing a massively
parallel computer.
While U.S. companies have led the development of conventional
supercomputers, Japanese companies have been gaining in that area.
As the supercomputer industry has grown from $89 million in 1980
to an expected $1.1 billion this year, the leading company, Cray
Research, has seen its share of the market fall to 52 percent, from
90 percent in 1980.
During that period, Japan's market share has risen to 28
percent, from nothing.
In a report prepared for the Department of Energy and the
President's Office of Science and Technology Policy, Lindamood
estimates that sales of massively parallel computers will exceed
vector-based supercomputer sales as early as 1996.
Scheduled to be released next month, the report concludes that a
proposed government program to finance high-performance computing
research with about $1.9 billion over five years would have a
striking impact on the market for massively parallel computers.
The report forecasts that government financing will generate an
additional $10.4 billion in revenue for the supercomputer industry
by the year 2000.
The breakthrough for parallel designs can be attributed to
improvements in software.
Newly developed computer languages simplify the coordination of
the many processors by automatically farming out data and
instructions, a task that once required painstaking work by
programmers.
What is more, many companies have now developed parallel
versions of traditional computer languages that permit existing
applications to be translated to run on newer parallel systems with
little restructuring.
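
In outline (a hypothetical illustration, not one of the actual languages
the article refers to), the abstraction works like this: the programmer
writes a single map over the data, and the runtime, not the programmer,
farms the elements out to the processors:

    from concurrent.futures import ProcessPoolExecutor

    def kernel(x):
        # The per-element work the programmer actually cares about.
        return x * x + 1.0

    if __name__ == "__main__":
        data = list(range(10000))

        # Sequential semantics: apply the kernel one element at a time.
        serial = list(map(kernel, data))

        # Parallel semantics: same call shape, but the elements are
        # distributed across processors automatically.
        with ProcessPoolExecutor() as ex:
            parallel = list(ex.map(kernel, data))

        assert serial == parallel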
In other cases, scientists are finding new ways to apply
parallel designs.
For example, massively parallel supercomputers have turned out
to be ideal for searching huge volumes of text.
The promise of parallel computers, which sell for $70,000 to $20
million, has led more than a dozen new companies to enter the
market, and analysts said it was unlikely that all would survive.
Last month, Myrias, a U.S.-Canadian company, went out of
business after selling just 10 machines.
But others seem to be off to a promising start. NCube has sold
several hundred systems, many for business applications. Thinking
Machines expects its revenue to reach $60 million this year.
Although current supercomputer purchases lean toward scientific
and engineering applications, many of the new manufacturers are
counting on new business applications, like image recognition for
document processing and data base retrieval, to fuel their growth.
``In two years, most of our business will come from business
areas that don't exist today,'' said Jeffrey Kalb, president of
Maspar Computer.
Comments at the conference last week underscored the shift to
massively parallel computing.
``We take massively parallel computing very seriously,'' said
Stephen E. Nelson, Cray's vice president of technology.
In October, Cray moved Mr. Nelson, a designer of its
soon-to-be-introduced C-90 supercomputer, to a new position
exploring massively parallel computers.
Nelson said that the company is now within several months of
picking a new design strategy.
He said Cray had not ruled out the possibility that it would
develop a strategic alliance or buy an existing parallel computer
company.
Convex has also embarked on a new design project that will
exploit massive parallelism.
The project is being headed by Steven J. Wallach, the company's
vice president for technology, known for his earlier work in
designing a minicomputer.
``I don't see any alternative if we are to make a quantum leap
in speed,'' Wallach said.


------------------------------

Date: Thu, 29 Nov 90 11:51:04 EST
From: hunter@nlm.nih.gov (Larry Hunter)
Subject: Job at the National Library of Medicine

The National Library of Medicine is seeking qualified applicants for a
Computer Scientist at the GS-12 or GS-13 level for a position in the
Lister Hill National Center for Biomedical Communications. The
successful applicant will conduct basic research in the area of
machine learning, automated discovery and/or cognitive models of
learning and discovery, as applied to medical, bioscience or
biotechnology domains. Candidates should have a Ph.D. or equivalent
in computer science, substantial expertise in some aspect of machine
learning (e.g. case-based reasoning, genetic algorithms, inductive
inference, neural networks, or statistical induction) and preferably
some familiarity with biology or medicine. Researchers possessing
Ph.D.s in other fields and significant computer programming experience
will also be considered.

The National Library of Medicine has extensive, state-of-the-art
facilities including Sun, Silicon Graphics, IBM and Macintosh
workstations; access to IBM and Cray supercomputers and a TMC
Connection Machine; access to large databases of bibliographic,
macromolecular sequence, and other information; high speed, wide area
network links; and a variety of other audio, video and communications
technologies. The incumbent will collaborate with researchers in the
National Center for Biotechnology Information, The National Center for
Human Genome Research, and at other institutions, both nationally and
internationally.

Interested parties with relevant experience are invited to submit a
statement of interest and qualifications to Lawrence Hunter, National
Library of Medicine, Bldg. 38A, MS-54, Bethesda, MD 20894. Statements
may also be submitted by electronic mail to hunter@nlm.nih.gov
(internet) or by fax to (301)496-0673. Qualified applicants will then
be advised of formal application procedures.

The US Government is an equal opportunity employer; women, minorities,
veterans and the physically disabled are encouraged to apply.


------------------------------

Date: Fri, 30 Nov 90 02:14:01 est
From: "Peter Cariani" <peterc@chaos.cs.brandeis.edu>
Subject: Artificial Life Digest, #49

Addendum to the bibliography on emergence (Cariani)
-------------------------------------------------------------------------------

There are a few references that should be added to the list.

Salthe, Stanley S. 1985. Evolving Hierarchical Systems. New York: Columbia
University Press.
Minch, Eric. 1989. Representation of Hierarchical Structure in Evolving
Networks. Ph.D. dissertation, Systems Science Department, SUNY-Binghamton.

Both of these works deal with the origins and persistence of hierarchical
structures.

Meehl, P.E. & W. Sellars. 1956. "The Concept of Emergence." Minnesota Studies
in the Philosophy of Science, Vol. 1, H. Feigl & M. Scriven, eds., University
of Minnesota Press. (reference given to me by Bob Richardson)

In addition to Henri Bergson and Lloyd Morgan, George Herbert Mead was a major
advocate of the "emergent evolution" concept. (See The Philosophy of the Act
(1938), The Philosophy of the Present (1932), and Mind, Self & Society (1934),
all University of Chicago Press; also George Herbert Mead: Self, Language and
the World by David Miller (1973), University of Chicago Press.)

------------------------------------------------------------------------------


------------------------------
End of ALife Digest
********************************
