Neuron Digest   Thursday, 11 Jan 1990                Volume 6 : Issue 3 

Today's Topics:
Re: What is a Symbol System?
Re: What is a Symbol System?
Re: What is a Symbol System?
Re: What is a Symbol System?
Re: What is a Symbol System?
Alternative conceptions of symbol systems
Re: What is a Symbol System?
Re: What is a Symbol System?


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Re: What is a Symbol System?
From: cam@aipna.ed.ac.uk (Chris Malcolm)
Organization: Dept of AI, Edinburgh University, UK.
Date: 22 Nov 89 19:15:33 +0000

In your original posting you (Stevan Harnad) said:

So the mere fact that a behavior is "interpretable" as ruleful
does not mean that it is really governed by a symbolic rule.
Semantic interpretability must be coupled with explicit
representation (2), syntactic manipulability (4), and
systematicity (8) in order to be symbolic.

There is a can of worms lurking under that little word "coupled"! What I
take it to mean is that this symbolic rule must cause the behaviour which
we interpret as being governed by the rule we interpret the symbolic rule
as meaning. Unravelled, that may seem stupendously tautologous, but
meditation on the problems of symbol grounding can induce profound
uncertainty about the status of supposedly rule-governed AI systems. One
source of difficulty is the difference between the meaning of the symbolic
rule to the system (as defined by its use of the rule) and the meaning we
are tempted to ascribe to it because we recognise the meaning of the
variable names, the logical structure, etc.

Brian Smith's Knowledge Representation Hypothesis contains a nice
expression of this problem of "coupling" interpretation and causal effect,
in clauses a) and b) below.

Any mechanically embodied intelligent process will be
comprised of structural ingredients that a) we as external
observers naturally take to represent a propositional account of
the knowledge that the overall process exhibits, and b)
independent of such external semantical attribution, play a
formal but causal and essential role in engendering the
behaviour that manifests that knowledge.

[Brian C. Smith, Prologue to "Reflection and Semantics in a Procedural
Language"
in "Readings in Knowledge Representation" eds Brachman &
Levesque, Morgan Kaufmann, 1985.]

It is not at all clear to me that finding a piece of source code in the
controlling computer which reads IF STRING_PULLED THEN DROP_HAMMER is not
just a conjuring trick where I am misled into equating the English language
meaning of the rule with its function within the computer system [Drew
McDermott, Artificial Intelligence meets Natural Stupidity, ACM SIGART
Newsletter 57, April 1976]. In simple cases with a few rules and behaviour
which can easily be exhaustively itemised we can satisfy ourselves that our
interpretation of the rule does indeed equate with its causal role in the
system. Where there are many rules, and the rule interpreter is complex
(e.g. having a bucketful of ad-hoc conflict-resolution prioritising schemes
designed to avoid "silly" behaviour which would otherwise result from the
rules) then the equation is not so clear. The best we can say is that our
interpretation is _similar_ to the function of the rule in the system. How
reliably can we make this judgment of similarity? And how close must be the
similarity to justify our labelling an example as an instance of behaviour
governed by an explicit rule?

Why should we bother with being able to interpret the system's "rule" as a
rule meaningful to us? Perhaps we need a weaker category, where we identify
the whole caboodle as a rule-based system, but don't necessarily need to be
able to interpret the individual rules. But how can we do this weakening,
without letting in such disturbingly ambiguous exemplars as neural nets?

Chris Malcolm cam@uk.ac.ed.aipna 031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

------------------------------

Subject: Re: What is a Symbol System?
From: dhw@itivax.iti.org (David H. West)
Organization: Industrial Technology Institute
Date: 24 Nov 89 16:00:05 +0000

]>The question is: Given that they can both be
]>described as conforming to the rule "If the string is pulled, smash the
]>china," is this rule explicitly represented in both systems?

This is not an empirical question, but a question about how we wish
to use the words "described", "explicitly" and "rule".

]In your original posting you (Stevan Harnad) said:
]
] So the mere fact that a behavior is "interpretable" as ruleful
] does not mean that it is really governed by a symbolic rule.

There IS no "really": we can interpret our "observations" as we
wish, and take the consequences of our choice.

] Semantic interpretability must be coupled with explicit
] representation (2), syntactic manipulability (4), and
] systematicity (8) in order to be symbolic.
]
]There is a can of worms lurking under that little word "coupled"! What I
]take it to mean is that this symbolic rule must cause the behaviour
]which we interpret as being governed by the rule we interpret the
]symbolic rule as meaning.

Rather: if WE are to call the system "symbolic", the meaning WE
ascribe to the symbols should be consistent with (OUR interpretation
of) the behavior we wish to regard as being caused by the rule.

] Unravelled, that may seem stupendously
]tautologous, but meditation on the problems of symbol grounding can
]induce profound uncertainty about the status of supposedly rule-governed
]AI systems. One source of difficulty is the difference between the
]meaning of the symbolic rule to the system

We can have no epistemological access to "the meaning of the symbolic rule
to the system" except insofar as we construct for ourselves a consistent
model containing something that we interpret as such a meaning.
Symbol-grounding happens entirely within our mental models. Additionally,
many of us believe that the world is not conducive to the replication of
systems that react in certain ways (e.g. an object labelled "thermostat"
which turns a heater ON when the temperature is ABOVE a threshold is
unlikely to attract repeat orders), and this could be regarded as a
mechanism for ensuring symbol-function consistency, but the latter is still
all in our interpretation.

]Brian Smith's Knowledge Representation Hypothesis contains a nice
]expression of this problem of "coupling" interpretation and causal
]effect, in clauses a) and b) below.
]
] Any mechanically embodied intelligent process will be
] comprised of structural ingredients that a) we as external
] observers naturally take to represent a propositional account of
] the knowledge that the overall process exhibits, and b)
] independent of such external semantical attribution, play a
] formal but causal and essential role in engendering the
] behaviour that manifests that knowledge.

I agree.

]It is not at all clear to me that finding a piece of source code in the
]controlling computer which reads IF STRING_PULLED THEN DROP_HAMMER is
]not just a conjuring trick where I am misled into equating the English
]language meaning of the rule with its function within the computer
]system [Drew McDermott, Artificial Intelligence meets Natural Stupidity

As McDermott points out, the behavior of such a system is unaffected if all
the identifiers are systematically replaced by gensyms (IF G000237 THEN
(SETQ G93753 T)), which causes the apparently "natural" interpretation to
vanish.
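As a rough sketch of that point (in Python rather than the Lisp of the
original fragment, with purely illustrative names): the rule with readable
identifiers and the rule with gensyms compute exactly the same function, so
whatever the English names contribute, it is not causal.

    def controller_readable(string_pulled: bool) -> bool:
        """IF STRING_PULLED THEN DROP_HAMMER, written with suggestive names."""
        drop_hammer = False
        if string_pulled:
            drop_hammer = True
        return drop_hammer

    def controller_gensyms(g000237: bool) -> bool:
        """The same rule after every identifier is replaced by a gensym."""
        g93753 = False
        if g000237:
            g93753 = True
        return g93753

    # Extensionally identical: the "natural" interpretation vanishes with
    # the names, but nothing in the behaviour does.
    assert all(controller_readable(x) == controller_gensyms(x)
               for x in (True, False))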

]Why should we bother with being able to interpret the system's "rule" as
]a rule meaningful to us?

It may just be the way we are.

]But how
]can we do this weakening, without letting in such disturbingly ambiguous
]exemplars as neural nets?

If we are "disturbed", it's a sign of internal inconsistency in our
construction of a world-view. People used to be disturbed by the idea of
light being both wave and particle. Now we're not.

-David West dhw@itivax.iti.org

------------------------------

Subject: Re: What is a Symbol System?
From: harnad@phoenix.Princeton.EDU (S. R. Harnad)
Organization: Princeton University, NJ
Date: 24 Nov 89 18:46:57 +0000



Chris Malcolm cam@aipna.ed.ac.uk of Dept of AI, Edinburgh University, UK,
wrote:

> What I take [you] to mean is that [the] symbolic rule must cause the
> behaviour which we interpret as being governed by the rule we interpret
> the symbolic rule as meaning... [[ additional quote omitted ]]


I endorse this kind of scepticism -- which amounts to recognizing the
symbol grounding problem -- but it is getting ahead of the game. My
definition was only intended to define "symbol system," not to capture
cognition or meaning.

You are also using "behaviour" equivocally: It can mean the operations of
the system on the world or the operations of the system on its symbol
tokens. My definition of symbol system draws only on the latter (i.e.,
syntax); the former is the grounding problem.

It is important to note that the only thing my definition requires is that
symbols and symbol manipulations be AMENABLE to a systematic semantic
interpretation. It is premature (and as I said, another problem
altogether) to require that the interpretation be grounded in the system
and its relation to the world, rather than just mediated by our own minds,
in the way we interpret the symbols in a book. All we are trying to do is
define "symbol system" here; until we first commit ourselves on the
question of what is and is not one, we cannot start to speak coherently
about what its shortcomings might be!

(By the way, "meaning to us" is unproblematic, whereas "meaning to the
system"
is highly contentious, and again a manifestation of the symbol
grounding problem, which is certainly no definitional matter!)

[[additional quote omitted]]
> How reliably can we make this judgment of
> similarity? And how close must be the similarity to justify our
> labelling an example as an instance of behaviour governed by an
> explicit rule?

Again, you're letting your skepticism get ahead of you. First let's agree
on whether something's a symbol system at all, then let's worry about
whether or not its "meanings" are intrinsic. Systematic interpretability is
largely a formal matter; intrinsic meaning is not. It is not a "conjuring
trick"
to claim that Peano's system can be systematically interpreted as
meaning what WE mean by, say, numbers and addition. It's another question
altogether whether the system ITSELF "means" numbers, addition, or anything
at all: Do you see the distinction?

(No one has actually proposed the Peano system as a model of arithmetic
understanding, of course; but in claiming, with confidence, that it is
amenable to being systematically interpreted as what we mean by arithmetic,
we are not using any "conjuring tricks" either. It is important to keep
this distinction in mind. Number theorists need not be confused with
mind-modelers.)

But, as long as you ask, the criterion for "similarity" that I have argued
for in my own writings is the Total Turing Test (TTT), which, unlike the
conventional Turing Test (TT) (which is equivocal in calling only for
symbols in and symbols out) calls for our full robotic capacity in the
world. A system that can only pass the TT may have a symbol grounding
problem, but a system that passes the TTT (for a lifetime) is grounded in
the world, and although it is not GUARANTEED to have subjective meaning
(because of the other minds problem), it IS guaranteed to have intrinsic
meaning.

(The "Total" is also intended to rule out spurious extrapolations from toy
systems: These may be symbol systems, and even -- if robotic -- grounded
ones, but, because they fail the TTT, there are still strong grounds for
skepticism that they are sufficiently similar to us in the relevant respects.
Here I do agree that what is involved is, if not "conjuring," then
certainly wild and unwarranted extrapolation to a hypothetical "scaling
up,"
one that, in reality, would never be able to reach the TTT by simply
doing "more of the same.")

> Why should we bother with being able to interpret the system's "rule" as
> a rule meaningful to us?

Because that's part of how you tell whether you're even dealing with a
formal symbol system in the first place (on my definition).

Stevan Harnad

Stevan Harnad Department of Psychology Princeton University
harnad@confidence.princeton.edu srh@flash.bellcore.com
harnad@elbereth.rutgers.edu harnad@pucc.bitnet (609)-921-7771

------------------------------

Subject: Re: What is a Symbol System?
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Organization: Yale University Computer Science Dept, New Haven CT 06520-2158
Date: 29 Nov 89 03:05:17 +0000

[[Harnad wrote:]]
>I'd rather not define it any way I like. I'd rather pin people down on
>a definition that won't keep slipping away, reducing all disagreements
>about what symbol systems can and can't do to mere matters of
>interpretation.
> ...

Which "people" need to be pinned down? Fodor, I guess, who has a strong
hypothesis about a Representational Theory of Meaning.

But suppose someone believes "It's all algorithms," and not much more?
He's willing to believe that intelligence involves an FFT here, some
inverse dynamics there, a few mental models, maybe some neural nets,
perhaps a theorem prover or two,.... His view is not completely vacuous
(Searle thinks it's even false). It might be a trifle eclectic for some
philosophers, but so what?

I realize that there is an issue about what symbol systems "can and can't
do."
It might turn out that computation is just a ridiculous model for
what goes on in the brain. All the AI types and cognitive psychologists
could then find something else to do. But it's simply not possible that it
could be revealed that there was a task X such that symbol systems cannot
do X and some other computational system can. That's because I and all the
other computationalists would just incorporate that new sort of system in
our universe of possible models. We wouldn't even notice that it hadn't
been incorporated already. In spite of philosophers' ardent hopes, there
simply is no natural category of Physical Symbol Systems separate from
Computational Systems in General.

>So the only thing at issue is whether a symbol system is required to be
>semantically interpretable. Are you really saying that most AI programs
>are not? I.e., that if asked what this or that piece of code means
>or does, the programmer would reply: "Beats me! It's just crunching
>a bunch of meaningless and uninterpretable symbols."


Well, of course, no one's going to say, "My program is crunching meaningless
symbols." The word "meaningless" has all these negative connotations; it
sounds like it's next to "worthlessness," "pointlessness." So everyone
will cheerfully claim that their symbols are "meaningful." But if you
press them on exactly what they're committing to, you're usually going to
start hearing about "procedural semantics" or "holistic semantics" or some
such twiddle.

The fact is that most symbols are conceived of as calculational devices
rather than denotational devices; their function is to compute rather than
to mean. Try asking a rug merchant what the meaning is of the position of
the third bead on the second wire of his abacus. If he thinks hard, he
might come up with something like "It denotes the third ten in the count of
shekels paid for these rugs." But chances are it never crossed his mind
that the bead position required a denotation. After all, it's part of a
formal system. The meanings of the expressions of such a system can't
enter into its functioning. Why then is it so all-fired important that
every expression have a meaning?

-- Drew McDermott

------------------------------

Subject: Re: What is a Symbol System?
From: blenko-tom@CS.YALE.EDU (Tom Blenko)
Organization: Yale University Computer Science Dept, New Haven CT 06520-2158
Date: 29 Nov 89 04:06:58 +0000


I don't share Drew's disenchantment with semantic models, but I think there
is a more direct argument among his remarks: specifically, that it isn't a
particularly strong claim to say that an object of discussion has "a
semantics"
. In fact, if we can agree on what the object of discussion is,
I can almost immediately give you a semantic model -- or lots of semantic
models, some of which will be good for particular purposes and some of
which will not. And it doesn't make any difference whether we are talking
about axioms of FOPC, neural networks, or wetware.

Richard Feynman had an entertaining anecdote in his biography about a
fellow with an abacus who challenged him to a "computing" contest. He
quickly discovered that the fellow could compute simple arithmetic
expressions as fast as Feynman could write them down. So he chose some
problems whose underlying numerical structure he understood, but which it
turned out that the other fellow, who simply knew a rote set of procedures
for evaluating expressions, didn't.

Who had a semantic model in this instance? Both did, but different models
that were suited to different purposes. I suspect that Harnad had a
particular sort of semantics in mind, but he is going to have to work a lot
harder to come up with his strawman (I don't believe it exists).

Tom

------------------------------

Subject: Alternative conceptions of symbol systems
From: Peter Cariani <peterc@cs.brandeis.edu>
Date: Wed, 06 Dec 89 13:48:09 -0500

Comments on Harnad's definition of symbol system:

We all owe to Steve Harnad the initiation of this important discussion.
I believe that Harnad has taken the discourse of the symbol grounding
problem in the right direction, toward the grounding of symbols in their
interactions with the world at large. I think, however, that we could go
further in this direction, and in the process continue to re-examine some
of the fundamental assumptions that are still in force.

The perspective presented here is elaborated much more fully and
systematically in a doctoral dissertation that I completed in May of this
year:

Cariani, Peter (1989) On the Design of Devices With Emergent Semantic Functions
Ph.D. Dissertation, Department of Systems Science, SUNY-Binghamton,
University Microfilms, Ann Arbor Michigan.

My work is primarily based on that of theoretical biologists Howard
Pattee and Robert Rosen. Pattee has been elaborating on the evolutionary
origins of symbols in biological systems while Rosen has concentrated on
the modelling relations that biological organisms implement. The Hungarian
theoretical biologist George Kampis has also recently published work along
these lines. I would like to apologise for the length of this response, but
I come out of a field which is very small and virtually unknown to those
outside of it, so many of the basic concepts must be covered to avoid
misunderstandings.

Here are some suggestions for clarifying this murky discourse about
symbol systems:

1) Define the tokens in terms of observable properties.

The means of recognizing the tokens or states of the system must be
given explicitly, such that all members of a community of
observer-participants can reliably agree on what "state" the physical
system is in. Without this specification the definition is gratuitous
hand-waving. I stress this because there are a number of papers in the
literature which discuss "computations" in the physical world (e.g. "the
universe is a gigantic computation in progress") without the slightest
indication of what the symbol tokens are that are being manipulated, what the
relevant states of such systems might be, or how we would go about
determining, in concrete terms, whether a given physical system is to be
classified as a physical symbol system.

One has to be careful when one says "practically everything can be
interpreted as rule-governed." Of course we can easily wave our hands and
say, yes, those leaves fluttering in the breeze over there are
rule-governed, without having any idea what the specific rules are (or, for
that matter, what the states are), but to demonstrate that a phenomenon is
rule-governed, we should show how we would come to see it as such: we
should concretely show what measurements need to be made, we should make
them, and then articulate the rules which describe/govern the behavior.

If we say "a computer is a physical symbol system" we mean that if we
look at the computer through the appropriate observational frame, measuring
the appropriate voltages at the logic gates, then we can use this device to
consistently and reliably implement a deterministic input-output function.
For each initial distinguishable state of affairs, by operating the device
we always arrive at one and only one end state within some humanly-relevant
amount of time. This is a functionally-based physically-implemented concept
of a formal system, one which is related to Hilbert's idea of reliable
physical operations on concrete symbols leading to consistent results.
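As a toy sketch of what such an observational frame amounts to (Python, with
a hypothetical threshold and gate as the example): continuous voltages are
binned into discrete tokens, and the device counts as reliably implementing a
deterministic input-output function if every initial distinguishable state
leads to one and only one end state.

    def observe(voltage: float, threshold: float = 2.5) -> int:
        """Observational frame: map a continuous voltage to a discrete token."""
        return 1 if voltage >= threshold else 0

    def nand_gate(a_volts: float, b_volts: float) -> int:
        """Seen through the frame, the device implements a deterministic rule."""
        a, b = observe(a_volts), observe(b_volts)
        return 0 if (a == 1 and b == 1) else 1

    # Reliability: physically different (noisy) voltages that fall in the same
    # observational bins must always yield the same end state.
    trials = [(0.3, 4.8), (0.1, 5.0), (0.4, 4.6)]   # all read as (0, 1)
    assert len({nand_gate(a, b) for a, b in trials}) == 1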

Note that this definition is distinct from logicist/platonist
definitions which include nonconcrete objects (e.g. sets of sets) or
physically unrealizable objects (e.g. potential and actual infinities,
indefinitely extendible tapes).

2) Abandon the explicit-implicit rule distinction.

First, I'm not sure if Wittgenstein's distinction between "explicit" and
"implicit" rule-following is appropriate here, since we are taking the role
of external observers rather than participants in a language-game. If the
purpose of the definition is to give us criteria to decide whether we are
participating in a formal system, then we must know the rules to follow
them.

If the purpose is to identify "physical symbol systems" in nature and in
human artefacts, then this distinction is irrelevant. What does it mean for
a computer to explicitly or implicitly carry out a logical operation? If it
made a difference, then the device would cease to be wholly syntactic. If
it doesn't make a difference then we don't need it in our definition. Does
the brain implement a physical symbol system, and if so, does it follow
rules explicitly or implicitly? How would we decide?

3) Abandon semantic interpretability.

I'm not sure if I understand fully the motivation behind this criterion
of semantic interpretability. An external observer can assign whatever
meanings s/he chooses to the tokens of the formal device. This criterion
makes the definition very subjective, because it depends upon an arbitrary
assignment of meaning. I don't even see how this is restrictive, since the
observer can always come up with purely whimsical mappings, or simply
let the tokens stand for themselves.

Note that "semantic interpretability" does not confer upon the physical
symbol system its own semantics. The relations of the symbols manipulated
in computer programs to the world at large are completely parasitical on
human interpreters, unless the computer is part of a robot (i.e. possesses
its own sensors and effectors). Merely being semantically interpretable
doesn't ground the semantics in a definite way; when we say "X in my
program represents the number of speeding violations on the Mass Pike" we
stabilize the relation of the symbol X in the computer relative to
ourselves (assuming that we can be completely consistent in our
interpretation of the program). But each of us has a different tacit
interpretation of what "the number of speeding violations on the Mass Pike"
means. (Does a violator have to be caught for it to be a violation? Are
speeding police cars violations? How is speed measured?) In order for this
tacit interpretation to be made explicit we would need to calibrate our
perceptions and their classifications along with our use of language to
communicate them so that we as a community could reach agreement on our
interpretations.

The wrong turn that Carnap and many others made in the 1930's was to
assume that these interpretations could be completely formalized, that a
"logical semantics" was possible in which one could unambiguously determine
the "meaning of an expression" within the context of other expressions. The
only way to do this is to formalize completely the context, but in doing so
you transform a semantic relation into a syntactic one. The semantic
relation of the symbol to the nonsymbolic world at large gets reduced to a
syntactic rule-governed relation of the symbol to other symbols.
(Contingent truths become reduced to necessary truths.) What Carnap tried
to say was that as long as a proposition referred to an observation
statement (which refers to an act of observation), then that proposition
has a semantic content. This has led us astray to the point that many
people no longer believe that they need to materially connect the symbols
to the world through perception and action, that merely referring to a
potential connection is enough. This is perhaps the most serious failing of
symbolic AI, the failure to ground the symbols used by their programs in
materially implemented connections to the external world.

4) Abandon semantic theories based on reference.

A much better alternative to logical semantics involves replacing these
syntactic theories of reference with a pragmatist semiotic when we go to
analyze the roles of symbols in various kinds of devices. Pragmatist
semiotics (as developed within Charles Morris' framework) avoid the formal
reductionism and realist assumptions of referential theories of meaning by
replacing correspondences between symbols and "objective" referents with physically
implemented semantic operations (e.g. measurement, perception, control,
action). These ideas are developed more fully in my dissertation. What one
must do to semantically ground the symbols is to connect them to the world
via sensors and effectors. If they are to be useful to the device or
organism, they must be materially linked to the world in a nonarbitrary
way, rather than referentially connected in someone's mind or postulated as
unspecified logical ("causal") connections (as in "possible world"
semantics).

5) Abandon Newell and Pylyshyn's Symbol Level.

Upon close examination of both of their rationales for a separate symbol
level, one finds that it rests precariously upon a distinction between the
Turing machine's internal states and the state of affairs on the tape
(Pylyshyn, 1984, pp.68-74). Now the essential nature of this distinction is
maintained because one is potentially infinite and the other is finite
(else one could simply make a big finite-state-automaton and the
distinction would be an arbitrary labelling of the global machine states),
but physically realizable devices cannot be potentially infinite, so the
essential, nonarbitrary character of the distinction vanishes (Cariani,
1989, Appendix 2).

6) Purge the definition of nonphysical, platonic entities (or at least
recognize them as such and be aware of them).

For example, the definition of physical symbol systems is intimately
tied up with Turing's definition of computation, but, as von Neumann noted,
this is not a physical definition; it is a formal one. Now, physically
realizable automata cannot have indefinitely extendible tapes, so the
relevance of potentially-infinite computations to real world computations
is dubious. Everything we can physically compute can be described in terms
of finite-state automata (finite tape Turing machines). We run out of
memory space and processing time long before we ever encounter
computability limitations. Computational complexity matters, computability
doesn't. I'd be especially interested in thoughtful counter-arguments to
this point.
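To make the point concrete, here is a toy sketch (Python; the recognizer is a
hypothetical example): a device with a fixed, finite set of states recognizing
bit strings that contain "11". Nothing in it requires, or could use, an
indefinitely extendible tape.

    STATES = {"seen_none", "seen_one", "seen_two"}     # finite, fixed in advance
    TRANSITIONS = {
        ("seen_none", "0"): "seen_none",
        ("seen_none", "1"): "seen_one",
        ("seen_one",  "0"): "seen_none",
        ("seen_one",  "1"): "seen_two",
        ("seen_two",  "0"): "seen_two",
        ("seen_two",  "1"): "seen_two",
    }

    def accepts(bits: str) -> bool:
        """Finite-state recognizer for strings containing the substring '11'."""
        state = "seen_none"
        for b in bits:
            state = TRANSITIONS[(state, b)]            # bounded memory throughout
        return state == "seen_two"

    assert accepts("01011") and not accepts("01010")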

7) Alternative I: Adopt a physical theory of symbolic action.

Howard Pattee has been developing a physical theory of symbolic function
for 20 years--symbolic processes are those which can be described in terms
of nonholonomic constraints (in terms of the equations of motion) and basins
of attraction (in terms of trajectories) (see refs: Pattee; Cariani, Minch,
Rosen).

(Next summer there will be a workshop entitled "Symbols and Dynamics" at
the ISSS meeting in Portland, Ore., July 8-13, 1990. Contact: Gail
Fleischaker, 76 Porter St., Somerville, MA 02143 for more info.)

The only disadvantage of these approaches lies in their classical/realist
assumption of complete knowledge of the state space within which the
symbolic activity occurs. These premises are deeply embedded in the very
terms of the discourse, but nevertheless, this descriptive physical language
is exceedingly useful as long as the limitations of these assumptions are
constantly kept in mind.

To translate from the semiotic to the physical, syntactic relations are
those processes for which a nonholonomic rate-independent equation of
constraint can completely replace the rate-dependent laws of motion. For an
electronic computer, we can replace all of the microscopic electromagnetic
equations of motion describing the trajectories of electrons with
macroscopic state-transition rules describing gate voltages in terms of
binary states. These state transition rules are not rate-dependent, since
they depend upon successions of states rather than time; consequently time
need not enter explicitly when describing the behavior of a computer in
terms of binary states of gate voltages.

Semantic relations are those processes which can be described in terms
of rate-independent terms coupled to rate-dependent terms: one side of the
constraint equation is symbolic and rate-independent, the other half is
nonsymbolic and rate-dependent. Processes of measurement are semantic in
character: a rate-dependent, nonsymbolic interaction gives rise to a
rate-independent symbolic output.

Here pragmatic relations are those processes which change the structure
of the organism or device, which appear in the formalism as changes in the
nonholonomic constraints over time.

8) Alternative II: Adopt a phenomenally grounded systems-theoretic definition.

Part of my work has been to ground the definition of symbol in terms of
the observed behavior of a system. This is the only way we will arrive at
an unambiguous definition. We select a set of measuring devices which
implement distinctions on the world which become our observable "states."
We observe the behavior of the physical system through this observational
framework. This strategy is similar to the way W. Ross Ashby grounded his
theory of systems.

Either the state-transitions are deterministic--state A is always
followed by state B which is always followed by state G--or they are
nondeterministic--state D is sometimes followed by state F and sometimes
followed by state J. Here the relation between states A, B, and G appears
to be symbolic, because the behavior can be completely captured in terms of
rules, whereas the relation between the states D, F, and J appears
nonsymbolic, because the behavior depends upon aspects of the world which
are not captured by this observational frame. Syntactic, rule-governed,
symbol manipulations appear to an external observer as deterministic state
transitions (in Ashby's terms, "a state-determined system"). Semantic
processes appear to the observer as nondeterministic, contingent state
transitions leading to states which appear as symbolic. Pragmatic relations
appear as changes in the structure of the observed state-transitions.
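A small sketch of the observer's test (Python; the observation record is
hypothetical): given only a recorded succession of observed states, tabulate
which states are always followed by the same successor (state-determined,
"symbolic"-looking) and which are not.

    from collections import defaultdict

    def transition_table(observed: str) -> dict:
        """For each observed state, collect the set of states that follow it."""
        followers = defaultdict(set)
        for current, nxt in zip(observed, observed[1:]):
            followers[current].add(nxt)
        return followers

    record = "ABGDFABGDJABGDF"          # hypothetical observation record
    for state, nexts in sorted(transition_table(record).items()):
        kind = "deterministic" if len(nexts) == 1 else "nondeterministic"
        print(f"{state} -> {sorted(nexts)}  ({kind})")

    # A is always followed by B, and B by G (state-determined); D is followed
    # sometimes by F and sometimes by J (contingent on aspects of the world
    # this observational frame does not capture).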

9) Alternative III: Adopt a physical, mechanism-based definition of symbol
systems.

Symbolic and nonsymbolic can also be viewed in terms of "digital" and
"analog" in the sense of differentiated (discrete) and nondifferentiated
(continuous). Sensors implement semantic A-to-D operations. Logical
operations ("computations") implement syntactic, determinate D-to-D
transformations. Controls implement semantic D-to-A operations. One has to
be careful here, because there are many confusing uses of these words (e.g.
"analog computation"), and what appears to be "analog" or "digital" is a
function of how you look at the device. Given a particular observational
framework and a common usage of terms, however, these distinctions can be
made reliable. I would argue that von Neumann's major philosophical works
(General & Logical Theory of Automata, Self-Reproducing Automata, The
Computer and the Brain) all take this approach.
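As a hedged sketch of the three operation types (Python, using a toy
thermostat whose names and numbers are purely illustrative): a sensor performs
a semantic A-to-D operation, a logical rule a syntactic D-to-D operation, and
a control a semantic D-to-A operation back onto the world.

    def sense(temperature_c: float, setpoint_c: float = 20.0) -> int:
        """A-to-D (measurement): continuous temperature -> discrete token."""
        return 1 if temperature_c < setpoint_c else 0   # 1 = "too cold"

    def decide(too_cold: int) -> int:
        """D-to-D (computation): a purely syntactic rule over discrete tokens."""
        return too_cold                                  # heater_on := too_cold

    def actuate(heater_on: int) -> float:
        """D-to-A (control): discrete token -> continuous effect on the world."""
        return 500.0 if heater_on else 0.0               # watts delivered

    for t in (17.0, 23.0):
        print(t, "degC ->", actuate(decide(sense(t))), "W")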

10) Alternative IV: Adopt a semiotic-functionalist definition of symbol systems.

It can be argued that the basic functionalities needed in the modelling
relation are the ability to convert nonsymbolic interactions with the world
into symbols (measurement), the ability to manipulate symbols in a
definite, rule-governed way (computations), and the ability to use a symbol
to direct action on the nonsymbolic world (controls). I have argued that
these functionalities are irreducible: one cannot achieve measurements by
doing computations simply because measurement involves a contingent
state-transition where two or more possible observed outcomes are reduced
to one observed outcome, whereas computation involves a necessary
state-transition, where each state has but one observed outcome. These
assertions are similar to the epistemological positions adopted by Bohr,
von Neumann, Aristotle and many others.

In such a definition, a physical symbol system is defined in terms of
its use to us as observer-participants. Are we trying to gain information
about the external world by reducing the possible observed states of a
sensor to one (by performing a measurement)? Are we trying to manipulate
symbols in a consistent, reliable way so that we always arrive at the same
outcome given the same input strings and rules. If so, we are performing
computations. Are we trying to use symbols to change the nonsymbolic world
by acting on it. If so, we are employing symbolically-directed control
operations.

In summary, there are many worthwhile alternatives to the basic
assumptions that have been handed down to us through logical positivism,
model-theoretic semantics, artificial intelligence and cognitive
psychology. Steve Harnad has done us a great service in making many of
these assumptions visible to us and clarifying them in the process. There
are other conceptual frameworks which can be of great assistance to us as
we engage in this process: theoretical biology, semiotics/pragmatist
philosophy, cybernetics and systems theory. It is difficult to entertain
ideas which challenge cherished modes of thought, but such critical
questioning and debate are indispensable if we are to deepen our
understanding of the world around us.

References:
------------------------------------------------------------------------------
Cariani, Peter (1989) On the Design of Devices with Emergent Semantic
Functions. PhD Dissertation, Department of Systems Science, State University
of New York at Binghamton; University Microfilms, Ann Arbor, MI.
(1989) Adaptivity, emergence, and machine-environment dependencies. Proc
33rd Ann Mtg Intl Soc System Sciences (ISSS), July, Edinburgh, III:31-37.
Kampis, George (1988) Two approaches for defining "systems." Int. J. Gen.
Systems (IJGS), vol 15, pp.75-80.
(1988) On the modelling relation. Systems Research, vol 5, (2), pp. 131-44.
(1988) Some problems of system descriptions I: Function, II: Information.
Int. J. Gen. Systems 13:143-171.
Minch, Eric (1988) Representations of Hierarchical Structures in Evolving
Networks. PhD Dissertation, Dept. of Systems Science, SUNY-Binghamton.
Morris, Charles (1956) Foundations of the Theory of Signs. In: Foundations
in the Unity of Science, Vol.I, Neurath, Carnap, & Morris, eds, UChicago.
Pattee, Howard H. (1968) The physical basis of coding and reliability in
biological evolution. In: Towards a Theoretical Biology (TTB) Vol. 1
C.H. Waddington, ed., Aldine, Chicago.
(1969) How does a molecule become a message? Dev Biol Supp 3: 1-16.
(1972) Laws and constraints, symbols and languages. In: TTB, Vol. 4
(1973) Physical problems in the origin of natural controls. In: Biogenesis,
Evolution, Homeostasis. Alfred Locker, ed., Pergamon Press, New York.
(1973) Discrete and continuous processes in computers and brains. In: The
Physics & Mathematics of the Nervous System, Guttinger & Conrad, eds S-V.
(1977) Dynamic and linguistic modes of complex systems. IJGS 3:259-266.
(1979) The complementarity principle and the origin of macromolecular
information. Biosystems 11: 217-226.
(1982) Cell psychology: an evolutionary view of the symbol-matter problem.
Cognition & Brain Theory 5:325-341.
(1985) Universal principles of measurement and language functions in evol-
ving systems. In: Complexity, Language, and Life Casti & Karlqvist, S-V.
(1988) Instabilities and information in biological self-organization. In:
Self-Organizing Systems: The Emergence of Order. E Yates, ed. Plenum Press.
(1989) Simulations, realizations, and theories of life. Artificial Life,
C. Langton, ed., Addison-Wesley.
Rosen, Robert (1973) On the generation of metabolic novelties in evolution. In:
Biogenesis, Evolution, Homeostasis. A Locker, ed., Pergamon Press, New York.
(1974) Biological systems as organizational paradigms. IJGS 1:165-174
(1978) Fundamentals of Measurement and Representation of Natural Systems.
North Holland (N-H), New York.
(1985) Anticipatory Systems. Pergamon Press, New York.
(1986) Causal structures in brains and machines. IJGS 12: 107-126.
(1987) On the scope of syntactics in mathematics and science: the machine
metaphor. In: Real Brains Artificial Minds. Casti & Karlqvist, eds N-H.

------------------------------

Subject: Re: What is a Symbol System?
From: anwst@unix.cis.pitt.edu (Anders N. Weinstein)
Organization: Univ. of Pittsburgh, Comp & Info Services
Date: 06 Dec 89 22:25:43 +0000

"Explicit representation of the rules" is a big red herring.

At least two major articulations of the "symbolist" position are quite
clear: nothing requires a symbol system to be "rule-explicit" (governed by
representations of the rules) rather than merely "rule-implicit" (operating
in accordance with the rules). This point is enunciated in Fodor's
_Psychosemantics_ and also in Fodor + Pylyshyn's _Cognition_ critique of
connectionism. It is also true according to Haugeland's characterization of
"cognitivism" [Reprinted in his _Mind Design_]

The important thing is simply that a symbol system operates by manipulating
symbolic representations, as you've characterized them.

Many people seem to get needlessly hung up on this issue. My own
suggestion is that the distinction is of merely heuristic value anyway --
if you're clever enough, you can probably interpret any symbol system
either way -- and that nothing of philosophical interest ought to hinge on
it. I believe the philosopher Robert Cummins has also published arguments
to this effect, but I don't have the citations handy.

Anders Weinstein ARPA: anwst@unix.cis.pitt.edu
U. Pitt. Philosophy UUCP: {cadre,psuvax1}!pitt!cisunx!anwst
Pittsburgh, PA 15260 BITNET: anwst@pittvms

------------------------------

Subject: Re: What is a Symbol System?
From: harnad@phoenix.Princeton.EDU (Stevan Harnad)
Organization: Princeton University, NJ
Date: 14 Dec 89 06:51:47 +0000



anwst@unix.cis.pitt.edu (Anders N. Weinstein) of Univ. of Pittsburgh, Comp
& Info Services wrote:

> "Explicit representation of the rules" is a big red herring...

I'm not convinced. Whether a rule is explicit or implicit is not just a
matter of interpretation, because only explicit rules are systematically
decomposable. And to sustain a coherent interpretation, this
decomposability must systematically mirror every semantic distinction that
can be made in interpreting the system. Now it may be that not all features
of a symbol system need to be semantically interpretable, but that's a
different matter, since semantic interpretability (and its grounding) is
what's at issue here. I suspect that the role of such implicit,
uninterpretable "rules" would be just implementational.

Stevan Harnad Department of Psychology Princeton University
harnad@confidence.princeton.edu srh@flash.bellcore.com
harnad@elbereth.rutgers.edu harnad@pucc.bitnet (609)-921-7771

------------------------------

End of Neurons Digest
*********************
