NL-KR Digest (3/29/88 19:42:24) Volume 4 Number 33
Today's Topics:
Re: Language, Thought, and Culture
Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------
Date: Fri, 11 Mar 88 03:08 EST
From: Sarge Gerbode <sarge@thirdi.UUCP>
Subject: Re: Language, Thought, and Culture
In article <26415@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>I note that I do a lot of "thinking" in terms of verbal language.
>To me, this means that much of the processing is going on in
>Wernicke's Area of the left hemisphere. But I also note that
>my most profound insights seem to arise from a form of nonverbal
>processing which evidently occurs in the right hemisphere.
I think it is important to distinguish between *thinking* (an experiential
matter) and whatever neurological events may or may not be correlated with the
experience. Thus thinking is not what goes on in the brain. It goes on in
experience.
Experientially, though, I tend to agree with Barry.
--
"Absolute knowledge means never having to change your mind."
Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP: pyramid!thirdi!sarge
------------------------------
Date: Fri, 11 Mar 88 03:09 EST
From: Sarge Gerbode <sarge@thirdi.UUCP>
Subject: Re: language, thought, and culture
In article <900@bingvaxu.cc.binghamton.edu> vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>In article <326@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>>It seems to me, though, that when I am "groping for words", I can be
>>entertaining a very clear concept, before I have the words for it. And I'm not
>>sure that there need be *any* form of representation associated with it.
>I doubt that any other kind of theory than a representational one could
>possibly explain this phenomenon. I suspect there is a great difference
>between >semantic< representation (e.g. "deep structure") and
>"linguistic representation" (primarily aural).
I've seen you use the word "semantic" a good deal in recent postings, and I
think I came in rather late in the discussion. I thought it meant "having to
do with the meanings of words", so it isn't clear to me what "semantic
representation" might mean.
>In _Rules and Representations_ Chomsky asserts that the *primary*
>function of language is to >model the environment<. Communication is
>secondary. Certainly any theory relying primarily on communication must
>take a stand on private language, perhaps allowing intra-personal
>communication.
I suspect that the purpose of "autistic" or "private" languages, if such there
be, would still be one of communication -- to one's future self, i.e.,
to encode thoughts for mnemonic purposes. I once made up such a language, in
my post-graduate years, in order to facilitate note taking in philosophy. No
one else could read it, but I still found it useful. Except that I lost the
"key" to the symbols I was using and could no longer read it! So I have pages
and pages of notes that will probably forever remain unreadable. I wouldn't
say, though, that the language was meant to model the environment; rather, it
was meant to encode an already-existing *conceptual* schema.
>>Some thoughts are best communicated
>>verbally; others come across better in non-verbal media (a picture is worth a
>>thousand words, and all that).
>Understand that the >primary< distinction between linguistic and visual
>representations is that of digital and analog, or discrete and
>continuous, respectively. There is no reason why these distinct forms
>of representation cannot exist.
Of course. Which brings up a thought I had while reading Bateson, recently: I
think the distinction between analog and digital cannot be counted as absolute.
The elements of which a digital signal is composed must be comprehended
non-digitally. In the case of language, we have the words which appear as
analog images in our visual fields and must be interpreted in an analog way in
order to arrive at the elements used to construct a digital message.
Conversely, any analog message seems to be composed of digital elements (e.g.,
pixels, or quanta), which, themselves, are to be comprehended non-digitally,
and so forth, perhaps ad infinitum.
>>I think it *is* useful to distinguish thinking and "picturing" or
>>"representing" as two separate actions, rather than subsuming thinking as
>>verbal or aural picturing.
>I don't understand this.
I mean that thinking of something is different from picturing it. I can have
the concept of a horse without any specific picture of a horse (in my case, I
get sort of a fleeting impression of horsiness, but no specific picture). But
if someone asked me to "picture" a horse, I would have quite a different
experience from that of conceiving of a horse. I think that this is a useful
distinction. There are no specific pictures (aural or visual) that go along
with the concept of a horse. There are *associations* that accompany the
concept, and some of these are pictorial. But I don't think they are the
concept per se.
>>In other words, one
>>can quite consciously think about something without using words, I think. In
>>order to *express* the thoughts to someone else, or to record them, of course,
>>words are necessary.
>I suspect that any actor or artist would strongly disagree. When I draw
>a diagram for a software client I'm presenting a mixed
>continuous:discrete, visual:linguistic representation which works *much*
>better than one or the other strictly.
Interesting. I hope some artists will respond with their own experiences. My
own experience as a musician is that the music I am playing is entirely
non-verbal (unless I'm singing a song).
Also, the example you gave was one of *expression* of thoughts and pictures,
which certainly may involve words. I'm talking about the thoughts themselves,
apart from their recording or communication.
--
"Absolute knowledge means never having to change your mind."
Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP: pyramid!thirdi!sarge
------------------------------
Date: Fri, 11 Mar 88 10:06 EST
From: Cliff Joslyn <vu0112@bingvaxu.cc.binghamton.edu>
Subject: Re: language, thought, and culture
In article <334@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>In article <900@bingvaxu.cc.binghamton.edu> vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>>I doubt that any other kind of theory than a representational one could
>>possibly explain this phenomenon [groping for words]. I suspect there is a
>>great difference
>>between >semantic< representation (e.g. "deep structure") and
>>"linguistic representation" (primarily aural).
>
>I've seen you use the word "semantic" a good deal in recent postings, and I
>think I came in rather late in the discussion. I thought it meant "having to
>do with the meanings of words", so it isn't clear to me what "semantic
>representation" might mean.
I wish it were clearer to me as well! For the most part I identify a
theory of semantics with the field of semiotics, so I guess I'd have to
say that semantics has to do with the meaning of things in general, not
just words (e.g. icons, indices, etc. etc. etc.). Obviously, anybody
can take anything to mean anything at any time; that is, anything can be
"semanticized." I further believe that in linguistics semantics and
syntax are related so closely that they represent only directions on a
continuum. In any given situation, it may not be possible or useful to
make a qualitative distinction between them.
As far as what an actual "semantic representation" is, I'm unfortunately
unclear. I think that what I meant in the above is that our minds
maintain many complex levels of representation. Some are more
"surface," that is syntactic, while others are "deeper," more semantic.
Consider a person in a burning theater shouting the word "Fire!" That
person has a surface, highly "syntactic" representation of the motor
state necessary to utter that word, and also a deeper, more complex,
semantic representation involving sub-representations of heat, smoke,
panic, the dictionary, etc. My contention above is that when we grope
for words, our brains are operating on deep semantic representations
related to the *content*, *meaning* of the word being groped for.
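[A purely illustrative sketch of the "levels" idea above -- not anything
proposed in the posts, and with every name invented -- might look like this:]

    # Illustrative sketch only: one symbol carrying a shallow "surface"
    # (syntactic/motor) level and a deeper "semantic cloud" of
    # sub-representations.  All field names are invented.
    fire_shout = {
        "surface": {
            "word": "Fire!",
            "articulation": ["f", "ay", "r"],   # rough motor plan for uttering it
        },
        "semantic": {
            "heat": ...,       # sub-representations, left as placeholders
            "smoke": ...,
            "panic": ...,
            "lexicon": ...,    # "the dictionary"
        },
    }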
>I suspect that the purpose of "autistic" or "private" languages, if such there
>be, would be still one of communication -- to one's future self, i.e.,
>to encode thoughts for mnemonic purposes.
First, I don't mean "private language" in that limiting sense of a full-blown
"autistic" language. Rather, we go around making stuff up all the
time, novel phrases, words, as you note below, symbol systems of all
kinds, sometimes quite small ones. To the extent that this process is
kept private, it is private language. In relation to communication, it
is sometimes referred to as "intra-egoic" communication, similar to
"one's future self." Why not just "to oneself?" I talk to myself all the
time. I'm doing so now, as I type here on this black screen. For all I
know, you don't exist! :->
>I once made up such a language, in
>my post-graduate years, ..., but I
>wouldn't say that the language was to model the environment, but, rather, to
>encode an already-existing *conceptual* schema.
Yes, and the conceptual schema was also the model of the environment.
If your private language was a good one, it was a homomorphic
(hopefully, but not probably, isomorphic) model of that other model.
This homomorphism was much more useful (e.g. durable, extendable, etc.)
to you than the original (until you lost the code!).
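[For readers unfamiliar with the terms, a compact gloss, offered only as one
possible formalization of the remark above: a homomorphism is a
structure-preserving map
    $h : (C, \circ) \to (L, \ast)$ with $h(c_1 \circ c_2) = h(c_1) \ast h(c_2)$,
so the private language L mirrors how concepts in the schema C combine; it is
an isomorphism when h is also invertible, i.e. when nothing in the schema is
lost in translation.]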
>Of course. Which brings up a thought I had while reading Bateson, recently: I
>think the distinction between analog and digital cannot be counted as
>absolute.
Yes, certainly true. In fact, digital is a *special case* of the
general analog case, just as a discrete point (0D object) is a *special
case* of the continuous line (1D object). To be more clear (:->),
something can be more or less analog, but digital is an absolute concept
representing the finite limit of "less analog." The limit for "more
analog" is infinite. (Does that mean anything to anybody? Gosh I hope
so.)
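[One rough way to read "digital as the finite limit of less analog" -- a
sketch under the assumption that "more/less analog" just means more/fewer
quantization levels:]

    # Sketch: quantize a signal in [0, 1] onto n equally spaced levels.
    # n = 2 is the fully "digital" (binary) limit; as n grows without bound
    # the quantized signal approaches the continuous ("analog") original.
    import math

    def quantize(x, levels):
        step = 1.0 / (levels - 1)
        return round(x / step) * step

    signal = [0.5 * (1 + math.sin(t / 10.0)) for t in range(20)]
    binary = [quantize(x, 2) for x in signal]       # "less analog": 2 levels
    finer  = [quantize(x, 1000) for x in signal]    # "more analog": 1000 levels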
>The elements of which a digital signal is composed must be comprehended
>non-digitally. In the case of language, we have the words which appear as
>analog images in our visual fields and must be interpreted in an analog way in
>order to arrive at the elements used to construct a digital message.
>Conversely, any analog message seems to be composed of digital elements (e.g.,
>pixels, or quanta), which, themselves, are to be comprehended non-digitally,
>and so forth, perhaps ad infinitum.
Yes, certainly ad infinitum. There is a theory that in all large,
multi-leveled systems, the transition between each level is mediated by
a digital-analog conversion of some sort.
>I mean that thinking of something is different from picturing it.
What if picturing something is a way (a case) of thinking of it? Then
it would be both different and the same (that is, similar).
>I can have
>the concept of a horse without any specific picture of a horse (in my case, I
>get sort of a fleeting impression of horsiness, but no specific picture).
What about the concept of a horse as a "general" picture, a "prototype" or
"archetype" for horse?
>But
>if someone asked me to "picture" a horse, I would have quite a different
>experience from that of conceiving of a horse. I think that this is a useful
>distinction. There are no specific pictures (aural or visual) that go along
>with the concept of a horse. There are *associations* that accompany the
>concept, and some of these are pictorial. But I don't think they are the
>concept per se.
Gosh, this is not my phenomenological experience at all, just the
opposite! One of us is brain-damaged (:->)! Seriously, I doubt that it's
possible to define the concept of the horse per se without resorting to
some kind of representation of the horse. This representation may be
more or less explicit, specific, detailed, visual, analog, etc., but
it's *got* to be there. In other words, where you draw qualitative
distinctions between a concept and an image, I see a quantitative
gradation. If for you that quantitative difference is sufficient to
motivate the full distinction, that's fine w/me, as long as you
recognize that by doing so you are consciously making the distinction
where there is really a continuum.
O---------------------------------------------------------------------->
| Cliff Joslyn, Professional Cybernetician
| Systems Science Department, SUNY Binghamton, New York, but my opinions
| vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .
------------------------------
Date: Fri, 11 Mar 88 20:11 EST
From: Martin Taylor <mmt@dciem.UUCP>
Subject: Re: language, thought, and culture
Walter Rolandi writes:
>What I would like to ask you is what sorts of problems you can solve by
>means of non-verbal thinking? Perhaps you pictorially explore visual
>scenarios that suggest the consequences of problem solutions that you
>can "see". But does this sort of problem solving really deserve to be
>called thinking?
Well, both Euler and Einstein claimed that the hardest part of their
work was to put what they had imaged into a communicable form, namely
mathematics. I think most people would agree that their problem-solving
deserves to be called thinking. They both claimed to think in some
kind of imagery, in which they saw how things worked together, and
then their main problem was to put words and symbols onto what they
had understood. (As part of my disclaimer, I disclaim being Einstein
or Euler, but that's how I solve problems, too. Words and symbols
hardly ever take part in my thinking, and putting the results into
words is often the hardest part of solving a problem).
There seem to be two major classes of cognitive style: linguistic and
non-linguistic. They can be distinguished by various kinds of
experiment. There may be balanced people who use both interchangeably
or cooperatively, but it's not so clear whether such people actually
exist.
--
Martin Taylor
....uunet!{mnetor|utzoo}!dciem!mmt
mmt@zorac.arpa
Magic is just advanced technology ... so is intelligence. Before computers,
the ability to do arithmetic was proof of intelligence. What proves
intelligence now? Obviously, it is what we can do that computers can't.
------------------------------
Date: Sun, 13 Mar 88 02:19 EST
From: Sarge Gerbode <sarge@thirdi.UUCP>
Subject: Re: language, thought, and culture
In article <926@bingvaxu.cc.binghamton.edu> vu0112@bingvaxu.cc.binghamton.edu
(Cliff Joslyn) writes:
>[O]ur minds
>maintain many complex levels of representation. Some are more
>"surface," that is syntactic, while others are "deeper," more semantic.
>Consider a person in a burning theater shouting the word "Fire!" That
>person has a surface, highly "syntactic" representation of the motor
>state necessary to utter that word, and also a deeper, more complex,
>semantic representation involving sub-representations of heat, smoke,
>panic, the dictionary, etc. My contention above is that when we grope
>for words, our brains are operating on deep semantic representations
>related to the *content*, *meaning* of the word being groped for.
Very interesting view -- one that is worthy of consideration. Is it possible
that what you refer to as deeper semantic representations are what a
psychologist would refer to as "associations" to the word or symbol, and a
linguist might refer to as "connotations"? The "surface" representation,
presumably, would be the form of the symbol itself (i.e. a representation of
the letters, etc.), while the connotations would form the associations
connected to that symbol. In my schema, the concept itself would be the
*denotation* of the word.
>>I suspect that the purpose of "autistic" or "private" languages, if such there
>>be, would be still one of communication -- to one's future self, i.e.,
>>to encode thoughts for mnemonic purposes.
>In relation to communication, [private language]
>is sometimes referred to as "intra-egoic" communication, similar to
>"one's future self." Why not just "to oneself?" I talk to myself all the
>time. I'm doing so now, as I type here on this black screen. For all I
>know, you don't exist! :->
Yes I do, by golly!! But there is a problem with oneself communicating to
oneself in present time. Communication requires the movement of a symbol or
token of some kind across a distance from sender to receiver, with an accurate
receipt of the token by the receiver and an accurate understanding of the
meaning of the token by the receiver. In a normal communication context, if
any of these elements is absent, communication cannot take place. "Talking to
oneself", in present time, violates this idea of communication, because there
is no distance between the point from which the communication originates and
the point at which it arrives. Also, no information is conveyed that was not
already there, and so there is no net transfer of information. That is why I
prefer to think of such actions as communication to a future self. There, at
least, there is a distance in time (space too, if one moves), across which
information can be transferred. Since people like me have a delightful ability
to forget most things almost immediately, a communication to my future self
*is* a genuine transfer of new information, by the time it arrives! If one
never forgot anything, one would have no need to communicate to oneself, would
one?
Something you said got me thinking (and, believe it or not, I haven't been able
to formulate the words to express the thought I am getting -- but I also
haven't gotten the concept clearly either). Could it be that the "meaning" of
words, or the assignment of such meaning, is a Pavlovian process of association,
in which one element of experience serves as a reminder of another? In the case
of Pavlov, the reminder was unconscious, but in the case of language, the
reminder may be quite conscious. Words, then, would be valuable in that they
would serve to remind us of certain elements of experience, much as tying a
string around a finger might do so. By being associated with an element of
experience (such as a concept), they allow us to safely forget that element, in
the knowledge that we can look in a certain place and be reminded of it at
will. (Like dumping a file system would allow us to reclaim that memory space
for something else, in the knowledge that we could restore anything we needed.)
Sorry. That's as far as I have thought it through. And apologies for
posting a half-baked idea.
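[The file-system analogy can be made concrete with a small sketch; the names
below are invented for illustration:]

    # Sketch: a word as a retrievable key for an element of experience, so
    # the element itself can safely be "forgotten" and looked up on demand.
    reminders = {}   # stands in for the archive / the dumped file system

    def encode(word, experience):
        reminders[word] = experience

    def remind(word):
        return reminders.get(word, "key lost -- pages forever unreadable")

    encode("horse", "fleeting impression of horsiness")
    print(remind("horse"))    # the association brings the element back
    print(remind("glyph-7"))  # a lost key: the notes can no longer be read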
>There is a theory that in all large,
>multi-leveled systems, the transition between each level is mediated by
>a digital-analog conversion of some sort.
Oh yeah! That makes sense. I vaguely seem to recall seeing something like
that in Bateson also.
>I doubt that it's
>possible to define the concept of the horse per se without resorting to
>some kind of representation of the horse. This representation may be
>more or less explicit, specific, detailed, visual, analog, etc., but
>it's *got* to be there.
I guess that depends what you mean by "define". If you mean one could not have
a clear concept of a horse without picturing one, I'm not sure I would agree.
I'm relatively bad at getting visual pictures of things (it requires a
considerable effort for me -- so maybe I *am* brain damaged ;-|), but I feel I
have a very clear concept of a horse. Further, the general concept of a horse
would not be adequately expressed by any picture of any *particular* horse.
So, yes, I think there is a qualitative difference. It isn't just that I would
call a fuzzy picture a concept and a distinct one a picture. I can get lots of
fuzzy pictures without any clear concept associated with them, and if I get a
clear picture, then I would not call that a concept either, but a picture.
I don't like the word "representation" too well, by the way, because it is
ambiguous. It can mean "to stand for", as where a symbol represents a
concept, or it can mean what I call a "picture" -- an image of something. In
things like NLP, it generally seems to mean the latter. But I prefer not to
use the term at all and, rather, to speak of "conceptualizing" and "picturing"
-- as two separate activities.
--
"Absolute knowledge means never having to change your mind."
Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP: pyramid!thirdi!sarge
------------------------------
Date: Sun, 13 Mar 88 12:58 EST
From: Cliff Joslyn <vu0112@bingvaxu.cc.binghamton.edu>
Subject: Re: language, thought, and culture
In article <337@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>In article <926@bingvaxu.cc.binghamton.edu> vu0112@bingvaxu.cc.binghamton.edu
>(Cliff Joslyn) writes:
>
>>[O]ur minds
>>maintain many complex levels of representation. [etc]
>
>Very interesting view -- one that is worthy of consideration.
How very kind of you. . . (grrrr)
>Is it possible
>that what you refer to as deeper semantic representations are what a
>psychologist would refer to as "associations" to the word or symbol, and a
>linguist might refer to as "connotations"?
I wouldn't be surprised, but my training is not in psychology per se.
I suspect that these features form a kind of "semantic cloud"
surrounding a symbol.
>The "surface" representation,
>presumably, would be the form of the symbol itself (i.e. a representation of
>the letters, etc.), while the connotations would form the associations
>connected to that symbol. In my schema, the concept itself would be the
>*denotation* of the word.
Please understand that my position is that there is *only* a
quantitative difference between "deep" and "surface", "connotation" and
"denotation." In particular, I don't think that any denotations are held
in the mind, nor do I necessarily feel that denotations are forthcoming
in linguistic theory. Of course, the more *explicit*, clear, definite,
and external the "meaning" ("connotation", "semantic cloud") becomes,
the more we are justified in calling it denotation. Denotations, like
numbers, are theoretical entities of a very different sort from mental
representations.
>Yes I do, by golly!! But there is a problem with oneself communicating to
>oneself in present time. Communication requires the movement of a symbol or
>token of some time across a distance from sender to receiver, with an accurate
>receipt of the token by the receiver and an accurate understanding of the
>meaning of the token by the receiver. [etc. deleted]
Yes, there are a number of difficulties here. I suspect your direction
is the right one.
>Could it be that the "meaning" of
>words, or assignment of such meaning is a Pavlovian process of association, in
>which one element of experience serves as a reminder of another. [etc. deleted]
Yes, this is the way it would work. I've said elsewhere that genetics
is a semantic system. In this sense, the "meaning" of the gene is not
only the protein, but the protein in the context of the organism, its
environment, and the role of the protein for that organism. Similarly,
you can say that for the dog the "meaning" of the bell is not just food,
but food in the context of the laboratory, the trainer, etc.
Essentially, semantics is a very general many-one relation of "standing
for". It can be seen in many non-linguistic situations.
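[A minimal picture of that many-one relation, restating the examples already
given (the second laboratory sign is invented for illustration):]

    # Sketch: "standing for" as a many-one mapping from signs-in-context to
    # meanings.  Distinct signs may stand for the same meaning, but each
    # sign, in its context, stands for just one.
    stands_for = {
        ("bell", "Pavlov's laboratory"): "food, in the context of lab and trainer",
        ("trainer's footsteps", "Pavlov's laboratory"): "food, in the context of lab and trainer",
        ("gene", "organism in its environment"): "the protein, in its role for that organism",
    }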
>>I doubt that it's
>>possible to define the concept of the horse per se without resorting to
>>some kind of representation of the horse. This representation may be
>>more or less explicit, specific, detailed, visual, analog, etc., but
>>it's *got* to be there.
>
>I guess that depends what you mean by "define".
I'm sorry, I should have said "possible to construct a theory of
concepts without representations."
>If you mean one could not have
>a clear concept of a horse without picturing one, I'm not sure I would
>agree.
No, my assertion is that one could not have a clear concept of a horse
without some form of representation of the horse, not necessarily visual.
>I don't like the word "representation" too well, by the way, because it is
>ambiguous. It can mean "to stand for", as where a symbol represents a
>concept, or it can mean what I call a "picture" -- an image of something. In
>things like NLP, it generally seems to mean the latter.
The strength of ambiguity is generality. That is my goal, a general
semantic theory. In semiotic theory there is a difference between
digital (linguistic, your first case above) and analog (your "picture" or
"image") symbols. But they are both representations, both symbols. It
is fine to speak of the specific *types* of symbols, as long as you
grant that they are both symbols.
>But I prefer not to
>use the term at all and, rather, to speak of "conceptualizing" and "picturing"
>-- as two separate activities.
They may in fact be separate activities, but they are also both
activities of representation. This primary semantic activity is more
interesting to me than either of the specific activities.
The concept of *similarity* is so important. Things that are similar
are both the same and different.
>Sarge Gerbode
O---------------------------------------------------------------------->
| Cliff Joslyn, Professional Cybernetician
| Systems Science Department, SUNY Binghamton, New York, but my opinions
| vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .
------------------------------
End of NL-KR Digest
*******************