AIList Digest           Thursday, 30 Jan 1986      Volume 4 : Issue 18 

Today's Topics:
Machine Learning - Self Knowledge & Perceptrons,
Theory - Definition of Symbol

----------------------------------------------------------------------

Date: Thu 23 Jan 86 04:46:01-PST
From: Bill Park <PARK@SRI-AI.ARPA>
Subject: Re: Speech Learning

This stuff about Sejnowski's speaking reminds me eerily of the part
of Asimov's @i{I, Robot} that tells how Susan Calvin's career began:

>From "Robbie," where Susan is observing a little girl named Gloria
trying to get some help during a tour of the Museum of Science and
Industry ...

"The Talking Robot was a @i{tour de force}, a thoroughly
impractical device, possessing publicity value only. Once an
hour, an escorted group stood before it and asked questions
of the robot engineer in charge in careful whispers. Those
the engineer decided were suitable for the robot's circuits
were transmitted to the Talking Robot.

"
It was rather dull. It may be nice to know that the square
of fourteen is one hundred ninety-six, that the temperature
at the moment is 72 degrees Fahrenheit, and the air-pressure
30.02 inches of mercury, that the atomic weight of sodium is
23, but one doesn't really need a robot for that. One
especially does not need an unwieldy, totally immobile mass of
wires and coils spreading over twenty-five square yards." ...

... ``There was an oily whir of gears and a mechanically
timbred voice boomed out in words that lacked accent and
intonation, `I -- am -- the -- robot -- that -- talks.'

``Gloria stared at it ruefully. It did talk, but the
sound came from inside somewheres. There was no @i{face} to
talk to. She said, `Can you help me, Mr. Robot, sir?'

``The Talking Robot was designed to answer questions, and
only such questions as it could answer had ever been put to
it. It was quite confident of its ability, therefore. `I
-- can -- help -- you.'

```Thank you, Mr. Robot, sir. Have you seen Robbie?'

```Who -- is -- Robbie?'

```He's a robot, Mr. Robot, sir.' She stretched to
tip-toes. `He's about so high, Mr. Robot, sir, only higher,
and he's very nice. He's got a head, you know. I mean you
haven't, but he has, Mr. Robot, sir.'

``The Talking Robot had been left behind, `A -- robot?'

```Yes, Mr. Robot, sir. A robot just like you, except he
can't talk, of course, and -- looks like a real person.'

```A -- robot -- like -- me?'

```Yes, Mr. Robot, sir.'

``To which the Talking Robot's only response was an erratic
splutter and an occasional incoherent sound. The radical
generalization offered it, i.e., its existence, not as a
particular object, but as a member of a general group, was
too much for it. Loyally, it tried to encompass the concept
and half a dozen coils burnt out. Little warning signals
were buzzing.

``(The girl in her mid-teens left at that point. She had
enough for her Physics-1 paper on `Practical Aspects of
Robotics.' This paper was Susan Calvin's first of many on
the subject.)''

------------------------------

Date: 23-Jan-86 12:52:19-PST
From: jbn@FORD-WDL1
Subject: Perceptrons-historical note

Since Perceptron-type systems seem to be making a comeback, a
historical note may be useful.

The original Perceptron was developed in the late 1950s. It
was a weight-learning scheme using electromechanical storage, with
relay coils driving potentiometers through ratchets as the basic
learning mechanism. The original machine used to be on display at the
Smithsonian's Museum of History and Technology (now called the Museum of
American History); it was a sizable unit, about the size of a VAX 11/780.
But it is no longer on display; I've been checking with the Smithsonian,
and it has been moved out to their storage facility in Prince George's County,
Maryland. It's not gone forever; the collection is rotated through the
museum, and if there's sufficient interest, they may put it back on display
again.
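
For concreteness, here is a rough sketch of the weighted-threshold
learning rule such machines embodied, rendered in Lisp rather than in
relays and potentiometers (the function names and learning rate are
mine, for illustration only):

    ;; The weight vector plays the role of the potentiometer settings;
    ;; each update is one ratchet click up or down per input line.
    (defun classify (weights inputs)
      "Return 1 if the weighted sum of INPUTS is positive, else 0."
      (if (plusp (reduce #'+ (map 'list #'* weights inputs))) 1 0))

    (defun train-step (weights inputs target &optional (rate 1))
      "Nudge each weight toward the desired response."
      (let ((err (- target (classify weights inputs))))
        (map 'vector (lambda (w x) (+ w (* rate err x)))
             weights inputs)))

    ;; Example: (train-step #(0 0) #(1 1) 1)  =>  #(1 1)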

Another unit in the same collection is relevant to this digest:
parts of Reservisor, the first airline reservations system, built for American
Airlines around 1954, are still on display; they have a ticket agent's terminal
and the huge magnetic drum. Contrast this with Minsky's recent claims, seen
here, that airline reservation systems were invented by someone at the MIT AI
lab in the 1960s.

John Nagle

------------------------------

Date: 22 Jan 86 14:41:45 EST
From: Mark.Derthick@G.CS.CMU.EDU
Subject: Re: What is a Symbol?

This is a response to David Plaut's post (V4 #9) in which he maintains that
connectionist systems can exhibit intelligent behavior and don't use
symbols. He suggests that either he is wrong about one of these two points,
or that the Physical Symbol System Hypothesis is wrong, and seeks a good
definition of `symbol'.

First, taking the PSSH seriously as subject to empirical confirmation
requires that there be a precise definition of symbol. That is, symbol is
not an undefined primitive for Cognitive Science, as point is for geometry.
I claim no one has provided an adequate definition. Below is an admittedly
inadequate attempt, together with particular examples for which the
definition breaks down.

1) It seems that a symbol is foremost a formal entity. It is atomic, and owes
its meaning to formal relationships it bears to other symbols. Any internal
structure a [physical] symbol might possess is not relevant to its meaning.
The only structures a symbol processor processes are symbol structures.

2) The processing of symbols requires an interpreter. The link between the
physical symbols and their physical interrelationships on the one hand, and
their meaning on the other, is provided by the interpreter.

3) Typically, a symbol processor can store a symbol in many physically
distinct locations, and can make multiple copies of a symbol. For instance,
in a Lisp blocks world program, many symbols for blocks will have copies of
the symbol for table on their property lists. Many functionally identical
memory locations are being used to store the symbols, and each copy is
identical in the sense that it is physically the same bit pattern. I can't
pin down exactly what about the ability to copy symbols arbitrarily is
essential, but I think something important lurks here.
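
For concreteness, here is a minimal Lisp fragment illustrating point 3
(the property names are hypothetical):

    (setf (get 'block-a 'supported-by) 'table)
    (setf (get 'block-b 'supported-by) 'table)

    (eq (get 'block-a 'supported-by)
        (get 'block-b 'supported-by))    ; => T

Each property list stores its own copy of the reference to TABLE, and
each copy is physically the same bit pattern.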

Analog (or direct) representations, the alternative to symbolic
representations, do not lend themselves to copying so easily. For instance,
on a map, distance relations between cities are encoded as distances between
circles on paper. Many relations are represented, as in the blocks world
case, but you can't make a copy of the circle representing a city.
If it's not in the right place, it just won't represent that city.

4) Symbols are discrete. This point is where connectionist representations
seem to diverge most from prototypical symbols. For instance, in Dave
Touretzky's connectionist production system model (IJCAI 85), working memory
elements are represented by patterns of activity over units. A particular
element is judged to be present if a sufficiently large subset of the units
representing the pattern for that element are on. Although he uses this
thresholding technique to enable discrete answers to be given to the user,
what is going on inside the machine is a continuum. One can take the
pattern for (goal clear block1) and make a sequence of very fine grained
changes until it becomes the pattern for (goal held block2).
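
The thresholding step might be sketched as follows; this is my
rendering of the idea, not Touretzky's actual code:

    (defun element-present-p (pattern active &optional (threshold 0.8))
      "PATTERN and ACTIVE are bit vectors over the same pool of units."
      (>= (count 1 (bit-and pattern active))
          (* threshold (count 1 pattern))))

Flipping the bits of ACTIVE one at a time moves the system through the
continuum between two patterns; that is the fine-grained sequence of
changes described above.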

To show where my definition breaks down, consider numbers as represented in
Lisp. I don't think they are symbols, but I'm not sure. First, functions
such as ash and bit-test are highly representation dependent. Everybody
knows that computers use two's complement binary representation for
arithmetic. If they didn't, but used cons cells to build up numbers from
set theory, for instance, it would take all day to compute 3 ** 5. Computers
really do have special-purpose hardware to do arithmetic, and computer
programmers, at least sometimes, think in terms of ALUs, not number theory,
when they program. So the Lisp object 14 isn't atomic; sometimes
it's really 1110.
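
To make the representation dependence concrete, here is how it shows
through in Common Lisp (where the bit-test of older Lisps is spelled
logbitp):

    (ash 14 -1)           ; => 7    -- shift the pattern 1110 right
    (logbitp 1 14)        ; => T    -- bit 1 of 1110 is set
    (logbitp 0 14)        ; => NIL  -- bit 0 of 1110 is clear
    (format nil "~b" 14)  ; => "1110"

None of these operations makes sense unless 14 has internal structure.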

It's easy to see that the above argument is trying to expose numbers as
existing at a lower level than real Lisp symbols. At the digital logic
level, then, bits would be symbols, and the interpreter would be the adders
and gates that implement the semantics of arithmetic. Similarly, it may be
the case that connectionist systems use symbols, but that they do not
correspond to, e.g., working memory elements, but to some lower level object.
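
At that level the interpreter is just combinational logic. For
instance, a one-bit full adder, sketched in Lisp for concreteness:

    (defun full-adder (a b carry-in)
      "Return (values sum carry-out) for the bits A, B, and CARRY-IN."
      (values (logxor a b carry-in)
              (logior (logand a b)
                      (logand carry-in (logxor a b)))))

Here the bit-symbols 0 and 1 have no internal structure at all; their
meaning lives entirely in the gates that relate them.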

So a definition of "symbol" must be relative to a point of view. With this
in mind, it seems that confirmation of the Physical Symbol System Hypothesis
turns on whether an intelligent agent must be a symbol processor, viewed
from the knowledge level. If knowledge level concepts are represented as
structured objects, and only indirectly as symbols at some lower level, I
would take it as disconfirmation of the hypothesis.

I welcome refinements to the above definition, and comments on whether Lisp
numbers are symbols, or whether ALU bits are symbols.

Mark Derthick
mad@g.cs.cmu.edu

------------------------------

Date: 27 January 1986 1532-PST (Monday)
From: hestenes@nprdc.arpa (Eric Hestenes)
Subject: Re: What is a symbol?

Article 125 of net.ai:

In article <724@k.cs.cmu.edu>, dcp@k.cs.cmu.edu (David Plaut) writes:
> It seems there are three ways out of this dilemma:
>
> (1) deny that connectionist systems are capable, in
> principle, of "true" general intelligent action;
> (2) reject the Physical Symbol System Hypothesis; or
> (3) refine our notion of a symbol to encompass the operation
> and behavior of connectionist systems.
>
> (1) seems difficult (but I suppose not impossible) to argue for, and since I
> don't think AI is quite ready to agree to (2), I'm hoping for help with (3)
> - Any suggestions?
>
> David Plaut (dcp@k.cs.cmu.edu)


Symbol is unfortunately an abused word in AI. Symbol can be used in several
senses, and when you mix them things seem illogical, even though they are not.

Sense 1: A symbol is a token used to represent some aspect or element
of the real world.

Sense 2: A symbol is a chunk of knowledge / human memory that is of a certain
character (e.g., predicates, with whole-word or phrase-size units).

While PDP / connectionist models may not appear to involve symbolic processes,
meaning mental processes that operate on whole chunks of knowledge that
constitute symbols, they DO assign tokens as structures that represent some
aspect or element. For instance, if a vision program takes a set of
bits from a visual array as input, then at that point each of the bits is
assigned a symbol, and then a computation is performed upon the symbol.
Given that pdp networks do have this primitive characterization in every
situation, they fit Newell's definition of a Physical Symbol System
[paraphrased as] "a broad class of systems capable of having and manipulating
symbols, yet realizable in the physical world." The key is to realize
that while the information that is assigned to a token can vary quite
significantly, as in connectionist versus high level symbolic systems,
the fact that a token has been assigned a value remains, and the manipulation
of that newly created symbol is carried out in either kind of system.
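
A minimal sketch of that token-assignment step, with names invented
purely for illustration:

    (defun tokenize-array (bits)
      "Pair each bit of BITS with a symbol naming its position."
      (loop for b across bits
            for i from 0
            collect (cons (intern (format nil "PIXEL-~D" i)) b)))

    ;; (tokenize-array #*1011)
    ;; => ((PIXEL-0 . 1) (PIXEL-1 . 0) (PIXEL-2 . 1) (PIXEL-3 . 1))

Once the bits have been named, everything downstream is manipulation of
those tokens, which is all the definition requires.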

Many connectionists like to think of pdp systems as incorporating
"
microfeatures" or "sub-symbolic" knowledge. However, by this they do not mean
that their microfeatures are not symbols themselves. Rather they are actively
comparing themselves against traditional AI models that often insist on using
a single token for a whole schema ( word, idea, concept, production ) rather
than for the underlying mental structures that might characterize a word.
A classical example is the (now old) natural language approach to thinking
that parses phrases into trees of symbols. Not even the natural language
people would contend that the contents of memory resemble that tree of
symbols in terms of storage. In this case the knowledge that is significant to
the program is encoded as a whole word. The connectionist might create a
system that parses the very same sentences, with the only difference being
how symbols are assigned and manipulated. In spite of their different
approach, the connectionist version is still a physical symbol system in the
sense of Newell.

This point would be moot if one could create a connectionist machine that
computed exactly the same function as the high-level machine, including
manipulating high level symbols as a whole. While both kinds of system are
Turing equivalent, one has yet to see a system that can compile a high-level
programming language into a connectionist network. The problems with creating
such a machine are many; however, it is entirely possible, if not probable.
See the paper below for a Turing <--> Symbol System proof.


Reference: Newell, Allen. Physical Symbol Systems.
Cognitive Science 4, 135-183 (1980).

Copy me on replies.

Eric Hestenes
Institute for Cognitive Science, C-015
UC San Diego, La Jolla, CA 92093
arpanet: hestenes@nprdc.ARPA
other: ucbvax!sdcsvax!sdics!hestenes or hestenes@sdics.UUCP

------------------------------

End of AIList Digest
********************
