AIList Digest Monday, 12 Nov 1984 Volume 2 : Issue 153
Today's Topics:
Linguistics - Language Degeneration,
Algorithms - Malgorithms,
Project Report - IU via Dialectical Image Processing,
Seminars - Spatial Representation in Rats &
Human Memory Capacity & Design of Computer Tools &
Artificial Intelligence and Real Life
----------------------------------------------------------------------
Date: Sun, 11 Nov 84 14:54:08 est
From: FRAWLEY <20568%vax1%udel-cc-relay.delaware@udel-relay.ARPA>
Subject: Language Degeneration
I'd like to make a few comments on Briggs' statements on language
degeneration and Sanskrit, English, etc. The idea that a language
degenerates stems from the 19th-century biological metaphor, which
has been refuted for at least 100 years. Language is not alive; people
are. We in linguistics know that "language death" is a metaphor and
has almost nothing to do with the language as a system; it has everything
to do with the sociocultural conditions on speaking and the propagation
of cultural groups.
How can it be reasonably said that a language degenerates if it is
abused? What do you mean by "abused"??? If "abused" means "speaking in
short sentences," then everyone "abuses" the language, even the most
ardent pedants. Language indeed changes, but it does not degrade.
Briggs says that violations of the prototype are degenerations. This
is true by definition only. And this definition can be accepted only
if one also adheres to a Platonic notion of language history, wherein
the pure metaphysical Ursprache is degraded by impure shadowy manifestations
in the real world. Maybe Briggs is a Platonist, but then he's not saying
anything about the real world.
Popular use does NOT imply a reversion or "reversal" of progress
in language change. There is no progress in language change: a change
in one part of the system over time which complicates the system
generally causes a simplification in another part of the system.
Hence Hoenigswald's observation that languages maintain about 50%
redundancy over time.
What is the "sophisticated machinery" Briggs talks about? I suspect
he means that languages which have a lot of morphology
and are synthetic are somehow "more sophisticated" than "our poor
unfortunate English," which is analytic and generally free of
morpho-syntactic markings. Honestly, the idea that a synthetic language
is "better" than a degraded analytic English is another remnant of
the 19th century (where neo-Platonism also reigned).
The supposed evolution of analytic languages from synthetic ones
(i.e., from pure to degraded) is not only charged with moral claims;
it is also wrong:
1. Finnish has retained its numerous case markings over time, as has
Hungarian.
2. Colloquial Russian has begun to add case markings (instrumental in
the predicate nominative).
3. English is losing overt marking of the subjunctive: are we therefore
less able to express subjunctive ideas? Is English becoming (GOOD GOD!)
non-subjunctive, non-hypothetical....
If Briggs is right, then he himself is contributing to the degradation
by his very speech to his friends. (I, of course, don't believe this.)
Finally, if Briggs is right about the characteristics of natural language,
then any natural language can be a bridge, not necessarily Sanskrit. And
this claim is tantamount to saying only that translation is possible.
Bill Frawley
Address: 20568.ccvax1@udel
------------------------------
Date: 11 Nov 1984 18:02:20 EST
From: MERDAN@USC-ISI.ARPA
Subject: Malgorithms
Here are a couple of malgorithms that I encountered on a single
microprocessor project. Neither of these malgorithms appeared the
slightest bit bad to its author, and one author was insulted when I
pointed out how bad his approach really was.
Malgorithm #1
Problem
Perform error correction for a (15,11) Hamming code on an 8-bit
micro (the Intel 8008).
Original solution
Implement a feedback shift register with error trapping logic as with
a BCH code. Approximately 600 bytes of tricky code was required.
Better solution
Use the classic error-detection matrix method. I believe about 100
bytes of obvious code was required.
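For readers who have not seen it, here is a minimal sketch of the
matrix (syndrome) method in C, assuming even parity, parity bits at
codeword positions 1, 2, 4, and 8, and at most a single-bit error.
The function name is made up for illustration; this is of course not
the original 8008 code.

    /* Syndrome decoding for a (15,11) Hamming code.  Codeword bit
       i-1 of cw holds position i; positions 1, 2, 4, 8 are parity. */
    #include <stdint.h>

    uint16_t hamming_15_11_correct(uint16_t cw)
    {
        unsigned syndrome = 0, p, pos, parity;

        for (p = 1; p <= 8; p <<= 1) {      /* the four parity checks */
            parity = 0;
            for (pos = 1; pos <= 15; pos++) /* p covers each position */
                if (pos & p)                /* whose number has bit p */
                    parity ^= (cw >> (pos - 1)) & 1;
            if (parity)                     /* a failed check adds its */
                syndrome |= p;              /* weight to the syndrome  */
        }
        if (syndrome)                       /* syndrome = bad position */
            cw ^= (uint16_t)1 << (syndrome - 1);
        return cw;
    }

A one-bit error at any position flips exactly the checks covering
that position, so the syndrome spells out the position directly.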
Malgorithm #2
Problem
Calculate horizontal and vertical parity for a sequence of 5-bit
characters and tack them onto the end of the sequence.
Original solution
Pick up each character and count the number of 1s by masking out each
bit with a separate mask, packing the resultant bit into a 5-bit word
on the fly. About 1500 bytes of very buggy code resulted.
Better solution
Treat the sequence in blocks of 5 characters. For each block, prestore
a pattern assuming that parity is even. Pick up each character, determine
its parity (the load instruction did this on the 8008), and clear the
pattern for that character. OR the patterns together, producing the
result. About 150 bytes of mostly straight-line code resulted.
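In C rather than 8008 assembly, the whole computation is only a few
lines. The sketch below computes the parities directly instead of
using the prestored-pattern trick, and its names and data layout
(5-bit characters right-justified in bytes, one 5-character block)
are illustrative assumptions.

    /* Horizontal and vertical parity for one block of five 5-bit
       characters.  vert is the columnwise XOR of the characters;
       horiz packs each character's parity, one bit per character. */
    #include <stdint.h>

    void block_parity(const uint8_t ch[5], uint8_t *vert, uint8_t *horiz)
    {
        uint8_t v = 0, h = 0, p;
        int i;

        for (i = 0; i < 5; i++) {
            v ^= ch[i] & 0x1F;      /* vertical: XOR down the columns */
            p = ch[i] & 0x1F;       /* horizontal: fold bits so that  */
            p ^= p >> 4;            /* the parity of all five bits    */
            p ^= p >> 2;            /* lands in bit 0                 */
            p ^= p >> 1;
            h |= (p & 1) << i;
        }
        *vert = v;                  /* these two characters get tacked */
        *horiz = h;                 /* onto the end of the sequence    */
    }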
Even better solution
Don't calculate parity in software but let the UART hardware generate
and check the parity.
Comment
In both cases I feel that the justification for the original solution
was that the programmer wanted to do some tricky coding just to prove
that he could, rather than understanding the problem first. This
tendency does not seem to be going away as fast as we all would like.
Thanks
Terry Arnold
------------------------------
Date: 13 Oct 1984 11:40-EDT
From: ISAACSON@USC-ISI.ARPA
Subject: Project Report - IU via Dialectical Image Processing (DIP)
[Forwarded from Vision-List by Laws@SRI-AI.]
I just read the summary of the DARPA IU effort, which I find very
interesting. By coincidence, we submitted to DARPA this week a
summary of our current efforts in "Dialectical Pattern Processing".
Although phrased in broader terms, much of this effort is also
directed toward IU. We enclose a copy of the report in case it is of
interest to the Vision-List readership. -- JDI
10/7/84
DARPA Research Summary Report
I M I Corporation, St. Louis, Missouri
Project Title: Dialectical Pattern Processing
Overview. Earlier work [1] has demonstrated unusual low-level
intelligence features in dialectical processing of string patterns.
This effort extends dialectical processing to 2-D arrays, with
applications in machine-vision. I M I Corporation is an innovator
in Dialectical Image Processing (DIP), a new subfield in very low-
level vision (VLLV) research. Dialectics is an elusive doctrine of
philosophy and (non-standard) logic that can be traced from Plato to
Hegel and beyond, but that has never lent itself to grounding in
precise formalisms or in computing machines. Certain influential
philosophies hold that the universe operates in accord with
dialectical processes, culminating in the activity of thought
processes. This effort builds on the fact that [1] discloses the
first and only machine implementation of dialectical processes.
Objectives. A broad long-term objective is to test a working
hypothesis: that dialectical processes are, along with certain
others, fundamental ingredients of autonomically emergent
intelligences. Intelligences that bootstrap themselves in a
bottom-up fashion fall into this category. More immediate
objectives are (1) to demonstrate the technical feasibility of
hosting a dialectical image processor on a small number of VLSI chips,
and (2) to evaluate the type of intelligence inherent in networks of
dialectical processors, with emphasis on learning.
Approach. A mix of activities includes software simulation of
dialectical networks for image processing; VLSI-based hardware
design for dialectical image processors; and assessment of the
learning capabilities inherent in the above-mentioned systems.
Current Status & Future Plans. Consideration of the possibility
of dialectical processing began in the early Sixties. By now,
theoretical foundations have been laid and dialectical processing
has been amply demonstrated in strings and in 2-D arrays (see Fig. 1
& Fig. 2 below) to the point where it appears to support a viable
new computer-vision technology. Feasibility studies in the design
of VLSI-based DIPs have shown that reasonably large DIPs (100x100
pixels) will fit on a single card and can be readily implemented,
at least for experimentation. Scant resources limit the scope of
some and preclude others of the activities listed below, which are
considered important to the advancement of this technology.
* Run software simulations of DIP on better equipment (e.g., Lisp
machine or BION workstation) and attempt to extend effort to 3-D.
* Implement in VLSI hardware a prototype of a moderate size DIP.
* Attempt to specialize other vast parallel networks (e.g., Hillis'
Connection Machine [2] or Fahlman's Boltzmann Machine) into
dialectical image processors.
* Specialize a network of dialectical processors to support
low-level machine learning by analogy and metaphor.
Fig. 1 - DIP Analysis of a Plane Silhouette
[Graphics will be sent by US Mail]
Fig. 2 - Selected Steps from DIP Analysis of a Tank Silhouette
[Graphics will be sent by US Mail]
Resources and Participants. Available resources are limited. The
list of participants includes: Joel D. Isaacson, PhD, Principal
Investigator; Eliezer Pasternak, MSEE, Project Engineer; Steve
Mueller, BS/CS, Programmer; Ashok Jain, MS/CS, Research Assistant
(SIU-E).
Products, Demonstrable Results, Contact Points. Certain products
and results are proprietary and included in patent applications in
progress. Software simulation of DIP can be readily demonstrated.
A version written in Pascal for the IBM PC/XT is available on
request. Point of contact: Dr. Joel D. Isaacson, I M I
Corporation, 20 Crestwood Drive, St. Louis, Missouri 63105, Phone:
(314) 727-2207, (ISAACSON@USC-ISI.ARPA).
References
[1] Isaacson, J. D., "Autonomic String-Manipulation System," U. S.
Patent No. 4,286,330, August 25, 1981.
[2] Hillis, W. D., "The Connection Machine," Report AIM-646, The
Artificial Intelligence Laboratory, MIT, Sept. 1981.
Acknowledgements
Supported by the Defense Advanced Research Projects Agency of the
Department of Defense under ONR Contract No. N00014-82-C-0303. The
P.I. gratefully acknowledges additional support and encouragement
received from the Department of Mathematics, Statistics, and
Computer Science, Southern Illinois University at Edwardsville.
------------------------------
Date: Thu, 8 Nov 84 13:23:13 pst
From: chertok@ucbcogsci (Paula Chertok)
Subject: Seminar - Spatial Representation in Rats
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1984
Cognitive Science Seminar -- IDS 237A
TIME: Tuesday, November 13, 11 - 12:30
PLACE: 240 Bechtel Engineering Center
DISCUSSION: 12:30 - 2 in 200 Building T-4
SPEAKER: C. R. Gallistel, Psychology Department,
University of Pennsylvania; Center for
Advanced Study in the Behavioral Sciences
TITLE: ``The rat's representation of navigational
space: Evidence for a purely geometric
module''
ABSTRACT: When the rat is shown the location of hidden
food and must subsequently find that loca-
tion, it relies strongly upon a spatial
representation that preserves the metric
properties of the enclosure (the large scale
shape of the environment) but not the
nongeometric characteristics (color, lumi-
nosity, texture, smell) of the surfaces that
define the space. As a result, the animal
makes many ``rotational'' errors in an
environment that has a rotational symmetry,
looking in the place where the food would be
if the environment were rotated into the
symmetrically interchangeable position. It
does this even when highly salient
nongeometric properties of the surfaces
should enable it to avoid these costly rota-
tional errors. Evidence is presented that
the rat notes and remembers these
nongeometric properties and can use them for
some purposes, but cannot directly use them
to establish positions in a remembered
space, even when it would be highly advanta-
geous to do so. Thus, the rat's position-
determining system appears to be an encapsu-
lated module in the Fodorian sense. Con-
siderations of possible computational rou-
tines used to align the currently perceived
environment with the animal's map (its
record of the previously experienced
environment) suggest reasons why this might
be so. Old evidence on the finding of hid-
den food by chimpanzees suggests that they
rely on a similar module. This leads to the
conjecture that the module is universal in
higher vertebrates.
------------------------------
Date: Thu, 8 Nov 84 22:51:11 pst
From: Misha Pavel <mis@SU-PSYCH>
Subject: Seminars - Human Memory Capacity & Design of Computer Tools
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
*************************************************************************
Two talks by T.K. Landauer:
*************************************************************************
Some attempts to estimate the functional capacity
of human long term memory.
T. K. Landauer
Bell Communications Research, N.J.
Time: Wednesday, November 14, 1984 at 3:45 pm
Place: Jordan Hall, Building 420, Room 050
How much useful (i.e., retrievable) information does a person's
memory contain? Far from a mere curiosity, even an approximate answer
would be useful in guiding theory about underlying mechanisms and
the design of artificial minds. By considering observed rates at
which knowledge is added to and lost from long-term memory, and
the information demands of adult cognition, several different es-
timates were obtained, most within a few orders of magnitude of
each other. Obtaining information measures from performance data
required some novel models of recognition and recall memory, which
will also be described.
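For a rough sense of the kind of estimate involved, here is an
illustrative back-of-the-envelope version; the figures are assumed
for illustration, not taken from the talk. If useful information
enters long-term memory at about 2 bits per waking second, then over
70 years:

    2 bits/s x 3600 s/hr x 16 hr/day x 365 day/yr x 70 yr =~ 3 x 10^9 bits

i.e., on the order of a gigabit, before allowing for forgetting.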
-------------------------------------------------------------------
PSYCHOLOGICAL INVENTION: some examples of cognitive research
applied to the design of new computer tools.
T. K. Landauer
Bell Communications Research, N.J.
Time: Friday, November 16, 1984 at 3:15 pm
Place: Jordan Hall, Building 420, Room 100
Computers offer the possibility of designing powerful tools to
aid people in cognitive tasks. When psychological research is
able to determine the factors that currently limit how well hu-
mans perform a particular cognition-based activity, the design
of effective new computer aids sometimes follows directly. Illus-
trative examples will be described in information retrieval and
text-editing applications. In the former, insights leading to in-
vention came from systematic observations of the actual linguis-
tic behavior of information-seekers, in the latter from correla-
tions of task performance with measured and observed differences
in individual characteristics.
------------------------------
Date: Fri, 09 Nov 84 16:25:11 EST
From: "Paul Levinson" <1303@NJIT-EIES.MAILNET>
Subject: Seminar - Artificial Intelligence and Real Life
"Artificial Intelligence and Real Life"
Abstract of talk to be given by Paul Levinson at the New School
for Social Research, November 12, 1984, 8 PM, 66 W. 12th St., NYC.
Part of the 1984-1985 Colloquium on Philosophy and Technology,
sponsored by the Polytechnic Institute of New York and the New School.
The talk begins by distinguishing two types of "AI": "auxiliary" or
"augmentative" intelligence (as in mainframes extending and
augmenting the social epistemological enterprise of science, and
micros extending and augmenting thinking and communication on the
individual level), and "autonomous" intelligence, or claims that
computers/robots can or will function as self-operating entities,
independent of humans after the initial human programming. The
difference between these two types of AI is akin to the difference
between eyeglasses and eyes.
Augmentative intelligence on the mainframe scientific level will
be assessed as reducing intractable immensities of data, or allowing
human cognition to process ever larger portions and systems of
information. Just as the telescope makes human vision more equal to
the vast distances of the universe, so computers on the cutting edge
of science make our mental capacities more equal to the vast
numerosity of data we encounter in the macro and micro universes.
The social and
psychological as well as cognitive consequences of micro computers and
the type of instant, intimate, intellectual and personal communication
they allow across distances will be compared to the Freudian
revolution at the turn of the century in its impact upon the human
psyche and the way we perceive ourselves. Critics of these two types
of computers such as Weizenbaum will be seen as part of a long line of
naive and failed media critics beginning at least as far back as
Socrates, who denounced writing as a "misbegotten image of the spoken
original," certain to be destructive of the intellect (Phaedrus).
"Expert systems" and "human meat machines" claims for autonomous
intelligence in machines will be examined and found wanting.
Alternative approaches such as Hofstadter's "bottom-up" ideas will be
discussed. A conception of the evolution of existence in the natural
cosmos as progressing in a subsumptive way from non-living to living
to intelligent material will be introduced, and this model along with
Hofstadter-type critiques will lead to the following conclusion: the
problem with current attempts at autonomous intelligence is that the
machines in which they're situated are not alive, or do not have
enough of the characteristics necessary for the sustenance of the
"living" label. Put otherwise, the conclusion will be: in order to
have artificial intelligence (the autonomous kind), we first must have
artificial life; or: when we indeed have created artificial
intelligences which everyone agrees are truly intelligent and
autonomous, we'll look at these "machines" and say: My God (or
whatever)! They're alive.
Practical and moral problems that may arise from the creation of
machines that are more than metaphorically autonomous of their human
producers will be examined. These machines will most likely be in the
form of robots, since robots can move in the world and interact with
environments in the direct ways characteristic of living organisms.
------------------------------
End of AIList Digest
********************