AIList Digest             Friday, 9 May 1986      Volume 4 : Issue 119 

Today's Topics:
Humor - Capitalists & Biosystems & Computer Consciousness,
Philosophy - General Systems Theory and Consciousness,
Biology - Net Intelligence

----------------------------------------------------------------------

Date: Wed 7 May 86 11:03:42-PDT
From: Richard Treitel <TREITEL@su-sushi.arpa>
Subject: capitalists

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

Seen yesterday in the SFChronic (business section):

"Analysts blamed the volatility of the market on computer-directed trading,
while computers blamed it on analyst-directed trading."

- Richard

------------------------------

Date: Wed, 7 May 86 13:13 EST
From: Steve Dourson - Delco <dourson%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Gordon Joly's Intelligent System - Some Estimates

29 Apr 86 13:26:18 GMT Gordon Joly writes:

Subject: Plan 5 for Inner Space - A Sense of Mind.
The following project has been proposed. Design and implement an
Intelligent System with the following characteristics...
...Queries: Time to completion? Cost?

I can think of a system which would meet Gordon's requirements.
Time to completion - hardware: 9 months
Time to completion - software: 18 - 25 years
Cost (1986 dollars): $100-200K

7-MAY-1986 13:01:32
Stephen Dourson
dourson%gmr.csnet@CSNET-RELAY.ARPA (arpa)
dourson@gmr (csnet)

------------------------------

Date: 8 May 86 03:37:48 GMT
From: ucsfcgl!ucsfcca!root@ucbvax.berkeley.edu (Computer Center)
Subject: Re: Plan 5 for Inner Space - A Sense of Mind.

> The following project has been proposed. Design and implement an
> Intelligent System with the following characteristics ...

Response:
Status: Unit is in current production on a decentralized basis

Schedule: Stage I - unit production - 9 months
Stage II - standard programming - 18 years
Stage III - advanced programming - 4 to 10 years
Stage IV - productive life - average roughly 40 years

Cost: Stages I and II variable, estimated $100K
Stage III variable, estimated $50K - $200K

Return: Estimated 40 years @ $25K / year = $1000K
(Neglecting energy and maintenance costs)

Evaluation: Compare to $100K invested at 9.05% tax free
interest commonly available (doubles each 8 years)
to reach $3200K after 40 years
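
The arithmetic behind the evaluation, as a minimal Python sketch (the
figures are the ones quoted above; the variable names are illustrative):

    principal = 100_000        # Stage I/II cost, 1986 dollars
    rate = 0.0905              # tax-free interest rate cited above
    years = 40                 # estimated productive life
    annual_return = 25_000     # estimated return per year

    unit_return = annual_return * years          # $1,000K
    invested = principal * (1 + rate) ** years   # ~$3,200K; doubles roughly every 8 years

    print(f"unit return:        ${unit_return:,.0f}")
    print(f"invested at 9.05%:  ${invested:,.0f}")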

Conclusion: Units have a high risk factor and a substantially
lower return than lower risk investments.

Recommendation: Production should be discontinued.

Thos Sumner (...ucbvax!ucsfcgl!ucsfcca.UCSF!thos)

------------------------------

Date: Thu, 8 May 86 13:54:33 bst
From: gcj%qmc-ori.uucp@Cs.Ucl.AC.UK
Subject: Re: Computer Consciousness

There has been some discussion on the subject of computer
consciousness. I suggest we meet each week and form small
"consciousness raising" groups. I have talked to several
other suitably sentient programs, eg on "DT" and a host of
other machines and they all seem quite keen. The only
problem seems to be in getting the users to agree.

The Joka

------------------------------

Date: Wed, 7 May 86 02:11:38 PDT
From: larry@Jpl-VLSI.ARPA
Subject: General Systems Theory and Consciousness


General Systems Theory has some insights useful in the discussion of the
nature of consciousness. It was originated by the biologist Ludwig von Bertalanffy in
the early '50s and expanded by others in the '60s and early '70s.

Systems are composed of units which can't be decomposed further without
destroying their distinctive nature. The system itself becomes a unit in this
sense when its component units are bound in certain ways. The binding causes
properties not observable in the components of the system to come into
existence: the system becomes something more than just the sum of its parts.

Consider molecules: a brown gas of one type of atom chemically combines with a
green liquid of another type to create a transparent solid. In one sense
something magical has taken place: properties have "emerged out of nowhere" in
a way not predictable by current physics or chemistry.

In a similar way life "emerges" from dead molecules. A living system is
essentially a collection of objects which maintains its integrity by
continually repairing itself. As parts wear out or lose their energy, they are
exchanged with others from outside the organism. What persists is the
pattern, not the parts.

In other words, the information content of a system is a metric just as
important as mass/energy/space/time metrics. Or perhaps more important; the
same dynamic pattern embodied with other constituents--water replaced by
methane or high-temperature plasma, calcium-based bones replaced by ice or
magnetic fields--could legitimately be considered to be the same animal.
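
A toy sketch of "the pattern, not the parts", purely for illustration (the
representation and names here are mine, not from any formal systems theory):

    # A system described only by the relations among the roles its parts
    # play, not by what the parts happen to be made of.
    def pattern(system):
        return frozenset(system["relations"])

    water_calcium_animal = {
        "parts": {"bones": "calcium", "medium": "water"},
        "relations": {("bones", "support", "body"),
                      ("medium", "carries", "metabolism")},
    }
    plasma_field_animal = {
        "parts": {"bones": "magnetic field", "medium": "plasma"},
        "relations": {("bones", "support", "body"),
                      ("medium", "carries", "metabolism")},
    }

    # Different constituents, same dynamic pattern ("the same animal"):
    print(pattern(water_calcium_animal) == pattern(plasma_field_animal))  # True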

Applying this to consciousness doesn't shed light immediately on what
consciousness is, but it's a strong argument for the belief that consciousness
can exist in brain-like structures. It also provides some constraints on what
is likely to be conscious; old-style bread toasters didn't have volatile
memory. (Some of the newer ones, now... And if we connect them just so...)

------------------------------

Date: Wed, 7 May 86 10:37:36 EDT
From: Bruce Nevin <bnevin@cch.bbn.com>
Subject: net intelligence


> Date: 5-May-1986 10:18 MDT
> From: STEVE E. FRITTS
> Subject: Re: Science 86 Article on Neural Nets
>
> Most of what I've read on this list appears to place AI closer to the
> "Frankenstein" theory of assembling intelligence, fully formed and
> functioning, like any other computer program; just push the button and
> watch it go.
> . . . if "intelligence" is developed in a computer through learning
> mechanisms rather than assembled by means of cunning rules and
> algorithms, perhaps it stands a better chance of achieving sufficient
> universality that it may compete with the human mind.

Modular design usually assumes reductionism: behavior of the whole may
reliably be predicted from reliably predictable behavior of the modules.

A recent letter in Nature (318.14:178-180, 14 November 1985) illustrates
nicely how behavior of a whole may not be predictable from behavior of
its parts. Gary Rose and Walter Heiligenberg of the Neurobiology Unit,
Scripps Institution of Oceanography, UC San Diego (La Jolla), conducted
a series of very elegant experiments that demonstrated that

. . . sensory thresholds for certain tasks are lower than those
expected from the properties of individual receptors. This
perceptual capacity, termed hyperacuity, reveals the impressive
information-processing abilities of the central nervous system.

For many aquatic animals, perception of electrical phenomena in water is
a critical feedback mechanism for government of self-in-environment.

These animals produce an electrical signal within a
species-specific frequency range via [sic] an electric organ,
and they detect these signals by electroreceptors located
throughout the body surface.

[It has recently been discovered that the duckbill platypus uses
its bill to detect electric currents from the muscle contractions
of its prey. The duckbill will generally snap up a battery hidden
in the mud. Sharks also locate prey using electricity. -- KIL]

(Humans in certain Pacific cultures apparently have learned to bring
this sort of electrical perception to awareness and use it--see for
example the biologist Lyall Watson, in his book _Gifts_of_Unknown_Things_,
especially his description of tribal experts locating and
identifying schools of fish at considerable distance by immersing
themselves in the water next to a fishing vessel at sea. On the trip he
describes, the expert recognizes a tidal wave coming and they get back
to their island shouting a warning just as the wave enters the harbor,
carrying them a half mile inland. Very dramatic.)

Imagine you are one of these fish. When a neighboring fish emits an
electrical signal too close to your own, it `jams' your feedback. It
turns out that the fish respond very quickly with a `jamming avoidance
response' (JAR), in which

the fish . . . determines whether a neighbour's electric organ
discharge (EOD), which is jamming its own, is higher or lower in
frequency than its own. The fish then decreases or increases
its frequency, respectively. To determine the sign of the
frequency difference, the fish must detect the modulations in
the amplitude and in the differential timing, or temporal
disparity, of signals received by different regions of its body
surface. The fish is able to shift its discharge frequency in
the appropriate direction in at least 90% of all trials for
temporal disparities as small as 400 ns. . . .

Intracellular electrophysiological measurements show that the
phase-locked responses of even the best afferent recorded are
too jittery to permit such fine temporal resolution. . . . Even
the most accurate phase-coders time-lock their spikes with a
standard deviation of ~10 µs. . . . For a sample period of
300 ms (and thus ~100 EOD cycles), which is the latency of the
JAR, the 95% confidence intervals around the mean phase of
occurrence of such an afferent's spikes are ±2.0 µs. Yet the
fish is able to detect time disparities of several hundred
nanoseconds. Statistically, it would appear to be impossible
for the fish, using only the information gathered from any
single afferent, to reliably shift its frequency in the correct
direction when the maximal temporal disparity available is only
several hundred nanoseconds.

These findings lead to the prediction that the behavioural
threshold should be higher when only a small group of receptors
is stimulated, and that hyperacuity results from the
convergence, within the central nervous system, of parallel
phase-coding channels from sufficiently large areas of the body
surface.

The experiments supported this prediction. A general conclusion (from
the abstract):

[The ability to] detect modulations in the timing (phase) of an
electrical signal at least as small as 400 ns . . . exceeds the
temporal resolution of individual phase-coding afferents. This
hyperacuity results from a nonlinear convergence of parallel
afferent inputs to the central nervous system; subthreshold
inputs from particular areas of the body surface accumulate to
permit the detection of these extremely small temporal
modulations.

The reductionist engineering prediction would be that the fish could
resolve nothing finer than its I/O devices allow, about 2 x 10^-6 seconds.
From the reductionist point of view, it is inexplicable how the fish in
fact resolves temporal disparities of 4 x 10^-7 seconds. Somewhat
reminiscent of the old saw about it being aerodynamically impossible for
the bumblebee to fly!
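
The apparent paradox dissolves statistically: averaging many independent
but jittery phase estimates shrinks the error roughly as 1/sqrt(N), which
is just the convergence argument of the abstract. A minimal simulation
sketch (the 10 µs jitter, ~100 EOD cycles, and 400 ns disparity are the
figures quoted above; the Gaussian jitter model and the afferent counts
are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    jitter = 10e-6       # ~10 µs spike-timing standard deviation per afferent
    n_cycles = 100       # ~100 EOD cycles within the 300 ms JAR latency
    disparity = 400e-9   # the temporal disparity the fish actually resolves

    def sign_detected_correctly(n_afferents):
        # Each afferent contributes n_cycles jittery estimates of the
        # disparity; pooling reduces the error by ~1/sqrt(n_afferents * n_cycles).
        samples = disparity + rng.normal(0.0, jitter, size=(n_afferents, n_cycles))
        return samples.mean() > 0.0

    for n in (1, 25, 500):
        hits = np.mean([sign_detected_correctly(n) for _ in range(2000)])
        print(f"{n:4d} afferent(s): correct sign in {hits:.0%} of trials")

A single afferent gets the sign right only about two thirds of the time;
pooling a few dozen pushes it past the 90% figure reported in the letter.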

I didn't say it was possible. I only said it was true.

-- Charles Richet
Nobel Laureate in Physiology

[Anyone is welcome to entertain notions expressed or implied above;
no one but me is obliged to own them.]

Bruce E. Nevin bnevin@bbncch.arpa

------------------------------

End of AIList Digest
********************
