============================================================================
Artificial Life Digest Vol. 1 Num. 2
------------------------------------

Topics: Query about macro applications of AL (e.g., economic,
political, social).

AL vs AI discussion from Stephen Smoliar.

PD abstract from David Wine

Morality comment from Paul Robertson
============================================================================
Date: Tue 6 Mar 90 09:45:01-CST
From: Gary Knight <GARY@maximillion.cp.mcc.com>
Subject: Macro applications of AL

I am interested in learning about macro applications of AL
principles, by which I mean applications to large-scale human structures
such as economic, political, and social systems. I know there has been
some work done on economics at the Santa Fe Institute, and I was told that
there might be a panel on the subject at the 2nd Conference (which,
unfortunately, I was unable to attend). I would appreciate any references,
etc., on the subject.

Thanks,
Gary
--------------------------------------------------------------------------
Cc: ROSENBLO@vaxa.isi.edu
Subject: Re: Greetings, comments, and discussion...
Date: Tue, 06 Mar 90 10:19:37 PST
From: Stephen Smoliar <smoliar@vaxa.isi.edu>

Chris Langton has made some interesting observations on the relationship
between artificial life and artificial intelligence. For the most part,
I agree with these observations; but, as one whose primary concern is
artificial intelligence, I think there are some which bear further
discussion.

Let me start with one of Chris' punch lines:
>
> AI has largely ignored the possibility that there is MORE of
> relevance for intelligence in the organization of the brain than
> that it provides a capacity for universal computation.

I think I am willing to grant this point; but I would like to try to identify
where it came from, because I think it may illustrate another point relevant to
the bottom-up nature of the position Chris advocates. I think it is fair to
say that much of our thinking about artificial intelligence, particularly from
a standpoint of symbol manipulation, is based on contributions and observations
by Allen Newell. The reason I wish to make this point is that, in the field of
computer science, Newell has made a mark not only in artificial intelligence
but also in computer architecture; and since the beginning of the last decade,
it would appear that his concern for the latter has affected his thoughts about
the former.

For his 1980 Presidential Address to the American Association for Artificial
Intelligence, Newell delivered a paper which introduced his concept of the
"knowledge level." This concept grew out of his attempts to stratify the
design and implementation of computers into separable levels of concern.
At the lowest level were DEVICES, such as transistors and diodes. These
devices could then be assembled into CIRCUITS, the next level of Newell's
hierarchy. However, Newell proposed what amounted to two separate levels
of circuits, the higher level being the LOGIC CIRCUIT level. The elements
at this level were what we usually call "gates" which compute boolean functions
and provide the simplest memory in the form of flip-flops. Thus, Newell wished
to sort out the level which described how these gates were implemented from the
gates themselves.
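
As a concrete illustration (my own, in Python, and not part of Newell's
account), the elements of the logic circuit level can be sketched as
boolean functions, with the simplest memory arising from two
cross-coupled NOR gates:

def nor(a, b):
    # A NOR gate: true only when both inputs are false.
    return not (a or b)

def sr_latch(s, r, q, q_bar):
    # One settling step of an SR latch built from two NOR gates.
    # s = set, r = reset; q and q_bar feed back as inputs.
    return nor(r, q_bar), nor(s, q)

q, q_bar = False, True
for s, r in [(True, False), (False, False)]:   # pulse "set", then release
    for _ in range(3):                         # let the feedback settle
        q, q_bar = sr_latch(s, r, q, q_bar)
print(q)   # True: the latch remembers the set pulse after inputs drop

The point of the exercise is that nothing below this level--how the NOR
gates themselves are implemented out of devices--needs to be mentioned
at all.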

The logic circuit level essentially provides the building blocks from which
computers are made. Such structures are usually the primary concern of a
course in computer architecture. Flip-flops are grouped together to implement
larger units of memory called REGISTERS, and the heart of a central processing
unit may be expressed in terms of logical functions which dictate how register
contents are determined on the basis of other register contents. These
relationships constitute what Newell called the REGISTER-TRANSFER level.
This is the level which essentially implements what is commonly known as
the "machine code" of a computer. If one has a program stored in memory,
the register-transfer level should tell you all you need to know regarding
how that program will be executed.
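
A toy rendering of this level (again my own; the three-register machine
and its instruction names are invented for illustration) makes the claim
concrete: one step of execution is just a rule giving new register
contents in terms of old register and memory contents.

def step(regs):
    # One register-transfer step of a toy accumulator machine: fetch
    # the instruction addressed by PC, then compute the next ACC and
    # PC purely from current register and memory contents.
    op, arg = regs["mem"][regs["pc"]]
    acc = regs["acc"] + arg if op == "ADD" else arg   # else "LOAD"
    return {**regs, "acc": acc, "pc": regs["pc"] + 1}

# A two-instruction program stored in memory: LOAD 2; ADD 3.
state = {"pc": 0, "acc": 0, "mem": [("LOAD", 2), ("ADD", 3)]}
for _ in range(2):
    state = step(state)
print(state["acc"])   # 5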

Of course, it has been quite some time since any of us have thought about what
our computers do in terms of machine code represented as strings of binary
digits. Rather, we work with symbolic representations which may be translated
down to this level. Thus, at the top of his hierarchy, Newell proposed a
SYMBOL level, consisting of those symbol constructs which one manipulates
in order to get the computer to actually compute, so to speak. This, then,
is Newell's cosmology of computer architecture.

The "knowledge level" is a result of trying to extrapolate on this cosmology in
the interests of artificial intelligence. The point is that each of Newell's
architecture levels--device, circuit, logic circuit, register-transfer, and
symbol--involves some concrete set of OBJECTS and a corresponding set of RULES
for combining those objects together. The knowledge level was the product of
a hypothesis that we could view knowledge in the same light (quoting Newell):

    There exists a distinct computer systems level, lying immediately
    above the symbol level, which is characterized by knowledge as the
    medium and the principle of rationality as the law of behavior.

(The principle of rationality essentially embodies Newell's general approach to
problem solving: "Actions are selected to attain the agent's goals.")
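
A one-line caricature of that principle (mine, not Newell's): given a
set of candidate actions and a prediction of their outcomes, select an
action whose predicted outcome satisfies the goal.

def select_action(actions, outcome, goal):
    # Return the first action whose predicted outcome attains the goal.
    for action in actions:
        if goal(outcome(action)):
            return action
    return None   # no known action attains the goal

# Hypothetical agent whose goal is a room temperature of 20 degrees.
outcome = {"heat": 22, "cool": 18, "hold": 20}.get
print(select_action(["heat", "cool", "hold"], outcome,
                    lambda temp: temp == 20))   # -> hold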

Why have I bothered to tell this story? The reason is that I think it refines
Chris' initial assessment of what artificial intelligence has overlooked. In
particular, there are two critical assumptions embedded in the knowledge level
hypothesis:

1.  Whatever knowledge is, it involves some concrete set of
    objects, analogous to the objects of the other computer
    systems levels, with rules which affect how they may
    be combined.

2.  Those objects are formed through combinations of objects
    at the symbol level.

I believe this relates to the issue of universal computation, as Chris raised
it, because universal computation essentially confers a sort of "omnipotence"
on operations of symbol combination. However, it seems that the knowledge
level is an excellent summary of what separates artificial intelligence
(and the way in which it is generally approached) from artificial life. In
particular, as it pertains to emergent properties, artificial life research
encourages us to consider the possibility that knowledge may NOT be quite
the concrete set of objects which has concerned the tradition of artificial
intelligence.

Now I promised that I would try to tie this in with the bottom-up approach
which Chris advocates later in his discussion. I believe it was Chris who,
in his concluding conference remarks, tried to relate the bottom-up approach of
artificial life to J. G. Miller's "shred-out" hypothesis. To some extent,
Miller's hypothesis is similar to Newell's decomposition of computer
architecture into a hierarchy of levels. However, Miller wishes to
pursue the thesis that each level is, at some level of abstraction,
made of the same "stuff"--that is, there are isomorphisms which connect
the objects and rules of combination from one level to another. My guess
is that if one were to try to test Miller's hypothesis against the levels
of Newell's hierarchy, he might have great difficulty finding those
abstractions and isomorphisms; and if he were to extrapolate in the
OPPOSITE direction, towards what Chris calls the "fundamental physics"
of the objects at the device level, his task would probably be all the
more awesome.

The point I wish to make is that I do not really think we have an issue in
trying to debate bottom-up versus top-down thinking. I think that our major
concern should be to hold our attention to a single "level," as it were, for
any given problem. In other words, one may begin by assuming some set of
objects and rules of combination and then asking about the nature of the
behavior which ensues. This is how I view Chris' work with cellular automata.
What makes the work interesting is trying to deal with the question of HOW we
may describe that behavior in such a way that we may reason about it. Thus,
Chris has demonstrated how the terminology of phase transitions may serve our
need to describe the behavior of cellular automata. However, I'm not sure if
the suitability of this terminology has to do with any "fundamental physics"
or if it has to do with capturing an analogy between the entropy of
thermodynamics and the entropy of information.
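
To make the information side of that analogy concrete, here is a small
sketch (the rule number and block size are my choices, not Chris's): run
an elementary cellular automaton and measure the Shannon entropy of the
distribution of short blocks of cells in its current configuration.

from collections import Counter
from math import log2

def step(cells, rule=110):
    # One step of an elementary CA on a circular row of cells.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

def block_entropy(cells, k=3):
    # Shannon entropy (bits) of the distribution of k-cell blocks.
    blocks = Counter(tuple(cells[i:i + k])
                     for i in range(len(cells) - k + 1))
    total = sum(blocks.values())
    return -sum(c / total * log2(c / total) for c in blocks.values())

row = [0] * 40 + [1] + [0] * 40
for _ in range(50):
    row = step(row)
print(round(block_entropy(row), 2), "bits per 3-cell block")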

I introduce this cautionary note because we must be aware that if we change our
objects and rules, we may also have to change our descriptive terminology. For
example, suppose we now situate our cellular automata in an "environment."
Rather than a uniform grid, the automata reside in a space which has properties
which can be "sensed;" and such sensations are part of the input space within
which behavior must be defined. Alternatively, keep the grid uniform and
consider a population of different "genotypes" of automata, i.e. automata
with different behavior rules inhabiting the same space. I think it is
important to observe that when we make such changes to our objects of study,
we should be prepared (although not necessarily compelled) to anticipate the
possibility of major changes in our descriptive terminology. In general,
coming up with the right language to describe what you observe coming out
of a black box is more than half the problem of figuring out what makes that
box tick.
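
A sketch of the second variation (entirely my own construction): keep a
uniform ring of cells, but give each site its own rule number, so that
different "genotypes" of automata inhabit the same space. The update is
the same as in the entropy sketch above, except that the rule is now
looked up per site.

import random

random.seed(1)
N = 60
rules = [random.choice([90, 110, 184]) for _ in range(N)]  # genotypes
cells = [random.randint(0, 1) for _ in range(N)]

def step(cells, rules):
    # One synchronous step; each cell applies its own rule.
    n = len(cells)
    return [(rules[i] >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                          + cells[(i + 1) % n])) & 1 for i in range(n)]

for _ in range(20):
    cells = step(cells, rules)
print("".join("#" if c else "." for c in cells))

Whether the block-entropy vocabulary above still describes such a mixed
population well is exactly the kind of question raised here.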
--------------------------------------------------------------------------
From: David Wine <wine@cs.ucla.edu>
Subject: Prisoner's dilemma
Date: Tue, 06 Mar 90 12:44:21 -0800

> Date: 05 Mar 90 00:02:00 PST
> From: UK0053@applelink.apple.com
> Subject: Biomorphs

[. . .]

> I haven't anything very helpful for Mr Lugowski I'm afraid. I was aware of
> Axelrod's genetic algorithm work, but was never particularly clear about what
> it meant to say that the strategies that he bred were 'better' than Tit for
> Tat. There is also work by Boyd & Lorberbaum showing, as I recall, that Tit
> for Tat might actually be invaded, in the evolutionary sense, by a mixture of
> an arch-forgiving strategy (Tit for Two Tats) and a slightly nasty strategy
> (Suspicious Tit for Tat). That reference is
> R.Boyd & J.P.Lorberbaum (1987) No pure strategy is evolutionarily stable in
> the repeated Prisoner's Dilemma game. Nature 327, 58-9.
> But that is a theoretical result; they didn't breed anything.

[. . .]

> Best wishes
> Richard

As an addendum,

Author: Boyd R.
Address: Department of Anthropology, University of California, Los Angeles,
90024.
Title: Mistakes allow evolutionary stability in the repeated prisoner's
       dilemma game.
Journal: Journal of Theoretical Biology, 1989 Jan 9, 136(1):47-56.

Abstract: The repeated prisoner's dilemma game has been widely used in analyses
of the evolution of reciprocal altruism. Recently it was shown that no pure
strategy could be evolutionarily stable in the repeated prisoner's dilemma.
Here I show that if there is always some probability that individuals will
make a mistake, then a pure strategy can be evolutionarily stable provided
that it is "strong perfect equilibria" against itself. To be a strong
perfect equilibrium against itself, a strategy must be the best response to
itself after every possible sequence of behavior. I show that both
unconditional defection and a modified version of tit-for-tat have this
property.
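
A minimal sketch of the setup described in the abstract (the conventional
payoffs T=5, R=3, P=1, S=0 and a 5% mistake rate are my choices, not
taken from the paper): an iterated prisoner's dilemma in which every
intended move is flipped with some probability, played here by
tit-for-tat against itself.

import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first; thereafter copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=200, mistake=0.05):
    # Iterated PD with noisy execution of the intended moves.
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        if random.random() < mistake:         # a mistake flips a move
            a = "D" if a == "C" else "C"
        if random.random() < mistake:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

random.seed(0)
print(play(tit_for_tat, tit_for_tat))  # mistakes set off echo cycles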
--------------------------------------------------------------------------
Date: Tue, 6 Mar 90 16:30 EST
From: Paul Robertson <probertson@alderaan.scrc.symbolics.com>
Subject: Artificial morality and ethics

Date: Fri, 2 Mar 90 22:54:07 EST
From: RAY <ray@vax1.acs.udel.edu>

Chaos at the Edge of Intelligence

... [verbose dribble about evolutionary dominance deleted]

The danger then is not that we might create life forms more intelligent
than ourselves, but that we might create forms that are marginally intelligent
like ourselves, or that we might destroy ourselves before a less
self-centered world view becomes widespread in the population.

Perhaps the real problem is that we may get so caught up in silly
hypothetical arguments that we never make any progress in artificial
life at all. Let's wait until artificial life is rampant before we
consider destroying it in fear. But before we destroy it, we had better
debate the morality of turning off a computer that is supporting some
artificial life, and let's let our lawyers debate for us the potential
negligence suits that might prevail if a computer crashes through neglected
maintenance, causing lots of artificial life to die.


=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=---=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
= Artificial Life Distribution List =
= =
= All submissions for distribution to: alife@iuvax.cs.indiana.edu =
= All list subscriber additions, deletions, or administrative details to: =
= alife-request@iuvax.cs.indiana.edu =
= All software, tech reports to Alife depository through =
= anonymous ftp at iuvax.cs.indiana.edu in ~ftp/pub/alife =
= =
= List maintainers: Elisabeth Freeman, Eric Freeman, Marek Lugowski =
= Artificial Life Research Group, Indiana University =
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=---=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

