AIList Digest            Monday, 18 Jul 1988        Volume 8 : Issue 9 

Today's Topics:

Philosophy

Reproducing the brain in low-power analog CMOS (LONG!)
ASL notation
Smart money
AI (GOFAI) and cognitive psychology
Metaepistemology & Phil. of Science
Generality in Artificial Intelligence
Critique of Systems Theory

----------------------------------------------------------------------

Date: 2 Jul 88 15:03:55 GMT
From: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Reply-to: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Subject: Re: Reproducing the brain in low-power analog CMOS (LONG!)


In a previous article, jbn@GLACIER.STANFORD.EDU (John B. Nagle) writes:
>Date: Wed, 29 Jun 88 11:23 EDT
>From: John B. Nagle <jbn@glacier.stanford.edu>
>Subject: Reproducing the brain in low-power analog CMOS
>To: AILIST@ai.ai.mit.edu
>
>
> Forget Turing machines. The smart money is on reproducing the brain
>with low-power analog CMOS VLSI. Carver Mead is down at Caltech, reverse
>engineering the visual system of monkeys and building equivalent electronic
>circuits. Progress seems to be rapid. Very possibly, traditional AI will
>be bypassed by the VLSI people.
>
> John Nagle

There is one catch: you do need to know what the Brain (be it
human, monkey, or insect) is doing at a neuronal level. After two
courses in Neurobiology, I am convinced that humans are quite far away
from understanding even a fraction of what is going on.

Although I do not know much about the research happening at
Caltech, I would suspect that they are reverse engineering the visual
system from the retina and working their way back to the visual
cortex. From what I was taught, the first several stages in the
visual system consist of "preprocessing circuits" (in a Professor's
words), and serve to transform the signals from the rods and cones
into higher-order constructs (i.e. lines, motion, etc.). If this is
indeed what they are researching at Caltech, then it is a good choice,
although a rather low level one. I would bet that these stages lend
themselves well to being implemented in hardware, although I don't
know how many "deep" issues about the Brain and Mind the
implementation of these circuits will solve.
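
(For illustration only: the "preprocessing circuits" mentioned above are often modelled, in textbooks at least, as center-surround receptive fields. Below is a minimal, hypothetical sketch of that idea as a difference-of-Gaussians filter; it is not a description of the Caltech hardware, and all function names are my own.)

```python
# Hypothetical sketch: a difference-of-Gaussians "center-surround" filter,
# a common textbook model of how retinal circuits turn raw intensity
# into contrast/edge signals.  Not the Caltech circuitry.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def center_surround(image, sigma_center=1.0, sigma_surround=3.0, size=15):
    """Excitatory center minus inhibitory surround, applied to a 2-D image."""
    dog = gaussian_kernel(size, sigma_center) - gaussian_kernel(size, sigma_surround)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):          # naive but dependency-free convolution
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * dog)
    return out

# Usage: a vertical luminance edge produces a strong response along the edge.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
print(np.abs(center_surround(img)).max())
```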

One of my Professors pointed out that studying higher order visual
functions is hard because of all of the preprocessing stages that
occur before the visual cortex (where higher order functions are
thought to occur). Since we do not yet know exactly what kinds of
"data abstractions" are reaching the visual cortex, it is hard to come
up with any hardcore theories of the visual cortex, because no one
really knows what the input to that area looks like. Sure, we know
what enters the retina, and a fair bit about the workings of the rods
and cones, but how the signals are transformed as they pass through
the bipolar, horizontal, amacrine, and ganglion cells is not known with
any great certainty. [NOTE: a fair bit seems to be known about how
individual classes of neurons transform the signals, but the overall
*picture* is still missing.] Maybe this is what the folks at
Caltech are trying to do; I am not sure. It would help a great deal
in later studies of the visual system to know what kinds of inputs are
reaching the visual cortex.

However, there are other areas of research that do address higher
order functions in cortex. The particular area that I am thinking of
is the Olfactory System, specifically the Pyriform Cortex. This
cortex is only one synapse away from the olfactory sensory apparatus,
so the input into the Pyriform Cortex is *not* preprocessed to any
great degree; much less so than the > 4 levels of preprocessing that
occur in the visual system. Furthermore, the Pyriform Cortex seems to
be similar in structure to several Neural Net models, including some
proposed by Hopfield (who does much of his work in hardware).
Therefore, it is much easier to figure out what sort of inputs are
reaching the Pyriform Cortex, though even this has been elusive to a
large degree. The current research indicates that certain Neural Net
models effectively duplicate characteristics of the cortex to a good
degree. I have read several interesting articles on the storage and
retrieval of previous odors in the Pyriform Cortex; these papers use
Content-Addressable Memory (also called Associative Memory) as the
predominant model. There is even some direct modelling being done by
a couple of grad students on a Sun workstation down in California.
[If anyone wants these references, send me email; if there is enough
interest, I will post them].
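
(For illustration only: the snippet below is a minimal sketch of a Hopfield-style content-addressable memory, the general kind of model those papers draw on; the patterns, sizes, and update scheme here are my own choices, not taken from the papers.)

```python
# Hypothetical sketch: a Hopfield-style content-addressable (associative) memory.
# Patterns of +1/-1 units are stored with the Hebbian outer-product rule and
# recalled from a noisy cue by repeated thresholding.
import numpy as np

def store(patterns):
    """patterns: array of shape (P, N) with entries +1/-1."""
    _, n = patterns.shape
    weights = patterns.T @ patterns / n
    np.fill_diagonal(weights, 0.0)           # no self-connections
    return weights

def recall(weights, cue, steps=10):
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state

# Usage: store two 8-unit patterns, then recover one from a corrupted cue.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
weights = store(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1                               # flip one bit
print(recall(weights, noisy))                # matches patterns[0]
```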

My point is: do not write off AI (especially Neural Net) theories
so fast; there is much interesting work being done right now that has
as much potential to be important in the long run as the work being
done at Caltech. Just because they are implementing Brain *circuits*
and *architecture* in hardware does not mean they will get any closer
than AI researchers. I still believe that AI researchers are doing
much higher-order research than anything that has yet been
implemented in hardware.

Furthermore, the future of AI does not lie only in Artificial
Intelligence and Computer Science. You bring up a good point: look at
other disciplines as well; your example was hardware. But look even
further: Neurobiology is a fairly important area of research in this
area. So are the Computational sciences: Hopfield does quite a bit of
research here. Furthermore, Hopfield is (or was, at least) a
Physicist; in fact I saw him give a lecture here at the UW about his
networks where he related his work to AI, Neurobiology, Physics,
Computational Sciences, and more...and the talk took place in (and was
sponsored by) the Physics department. Research into AI and Brain
related fields is being performed in many disciplines, and each will
influence the others in the end.

Whew! Sorry to get up on my soapbox, but I had to let it out.
Remember, these are just my humble opinions, so don't take them too
seriously if you do not like them. I would be happy to discuss this
further over email, and I can give you references to some interesting
articles to read on ties between AI and Neurobiology (almost all
dealing with the Olfactory System, as that looks the most promising
from my point of view).

-Chris--
Christopher Lishka | lishka@uwslh.uucp
Wisconsin State Lab of Hygiene | lishka%uwslh.uucp@cs.wisc.edu
Immunology Section (608)262-1617 | ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."

- Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

------------------------------

Date: 3 Jul 88 18:47:04 GMT
From: doug@feedme.UUCP (Doug Salot)
Reply-to: doug@feedme.UUCP (Doug Salot)
Subject: ASL notation

hayes.pa@XEROX.COM writes:
>On dance notation:
>A quick suggestion for a similar but perhaps even thornier problem: a notation
>for the movements involved in deaf sign language.

I'm not sure how this would benefit the deaf community, as most signers
are quite capable of reading written language; however, work has been
done to "quantize" ASL. Look to the work of Poizner, Bellugi, et al.
at the Salk Institute for suggested mappings between several movement
attributes (planar locus, shape, iteration freq., etc) and language
attributes (case, number, etc.).
---
Doug Salot || doug@feedme.UUCP || ...{trwrb,hplabs}!felix!dhw68k!feedme!doug
"Thinking: The Thinking Man's Sport"

------------------------------

Date: Thu, 7 Jul 88 09:09:00 pdt
From: Ray Allis <ray@BOEING.COM>
Subject: Smart money

John Nagle says:

> Forget Turing machines. The smart money is on reproducing the brain
>with low-power analog CMOS VLSI. Carver Mead is down at Caltech, reverse
>engineering the visual system of monkeys and building equivalent electronic
>circuits. Progress seems to be rapid. Very possibly, traditional AI will
>be bypassed by the VLSI people.

Hear, hear! Smart money indeed! I don't know whether Carver prefers to work
low-profile, but this has to be the most undervalued AI work of the century.
This is the most likely area for the next breakthrough toward creating
intelligent artifacts. And the key concept is that these devices are
ANALOG.


Ray Allis
Boeing Computer Services-Aerospace Support
CSNET: ray@boeing.com

------------------------------

Date: Tue, 12 Jul 88 12:48:25 EDT
From: Ralf.Brown@B.GP.CS.CMU.EDU
Subject: Re: Generality in Artificial Intelligence
In-Reply-To: <19880712044954.9.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>


In a previous article, YLIKOSKI@FINFUN.BITNET writes:
}Thus, it would seem that in order for us to see true commonsense
}knowledge exhibited by a program we need:
}
} * a vast amount of knowledge involving the world of a person
} in virtual memory. The knowledge involves gardening,
} Buddhism, the emotions of an ordinary person and so forth -
} its amount might equal a good encyclopaedia.

Actually, it would probably put the combined text of several good encyclopedias
to shame. Even encyclopedias and dictionaries leave out a lot of "common-sense"
information.

The CYC project at MCC is a 10-year undertaking to build a large knowledge
base of real world facts, heuristics, and methods for reasoning over the
knowledge base.[1] The current phase of the project is to carefully represent
400 articles from a one-volume encyclopedia. They expect their system to
contain about 10,000 frames once they've encoded the 400 articles, about half
of them common-sense concepts.
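
(For illustration only: CYC's actual representation language is not described in this note, so the snippet below is just a toy picture of what "frames" generally look like: named slots plus inheritance from more general concepts. Everything in it is invented.)

```python
# Hypothetical sketch of a frame: a named concept with slots,
# inheriting unfilled slots from a more general parent frame.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, dict(slots)

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)      # fall back to the general concept
        return None

mammal = Frame("Mammal", legs=4, warm_blooded=True)
dog = Frame("Dog", parent=mammal, sound="bark")
print(dog.get("legs"), dog.get("sound"))      # -> 4 bark
```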


[1] CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge
Acquisition Bottlenecks, AI Magazine v6 #4.

--
UUCP: {ucbvax,harvard}!cs.cmu.edu!ralf -=-=-=- Voice: (412) 268-3053 (school)
ARPA: ralf@cs.cmu.edu BIT: ralf%cs.cmu.edu@CMUCCVMA FIDO: Ralf Brown 1:129/31
Disclaimer? I |Ducharm's Axiom: If you view your problem closely enough
claimed something?| you will recognize yourself as part of the problem.

------------------------------

Date: 13 Jul 88 04:48:34 GMT
From: pitt!cisunx!vangelde@cadre.dsl.pittsburgh.edu (Timothy J Van)
Subject: AI (GOFAI) and cognitive psychology

What with the connectionist bandwagon, everyone seems to be getting a lot
clearer about just what AI is and what sort of a picture of cognition
it embodies. The short story, of course, is that AI claims that thought
in general and intelligence in particular is the rule-governed manipulation
of symbols. So AI is committed to symbolic representations with a
combinatorial syntax and formal rules defined over them. The implementation
of those rules is computation.
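
(For illustration only: a toy picture, not drawn from any particular cognitive model, of what "rule-governed manipulation of symbols" can look like: a tiny production system rewriting a list of symbols with formal rules. All symbols and rules are invented.)

```python
# Hypothetical sketch: a minimal production system.  Each rule replaces a set
# of condition symbols with a set of result symbols.
rules = [
    (("hungry", "has_food"), ("eating",)),   # IF hungry AND has_food THEN eating
    (("eating",), ("satiated",)),
]

def step(symbols):
    """Apply the first rule whose conditions are all present in the symbol list."""
    for conditions, results in rules:
        if all(c in symbols for c in conditions):
            remaining = [s for s in symbols if s not in conditions]
            return remaining + list(results)
    return symbols

state = ["hungry", "has_food"]
state = step(step(state))
print(state)   # -> ['satiated']
```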

Supposedly, the standard or "classical" view in cognitive psychology is
committed to exactly the same picture in the case of human cognition, and
so goes around devising models and experiments based on these commitments.


My question is - how much of cognitive psychology literally fits this kind
of characterization? Some classics, for example the early Shepard and
Metzler experiments on image rotation, don't seem to fit the description
very closely at all. Others, such as the SOAR system, often seem to
remain pretty vague about exactly how much of their symbolic machinery
they are really attributing to the human cognizer.

So, to make my question a little more concrete - I'd be interested to know
what people's favorite examples of systems that REALLY DO FIT THE
DESCRIPTION are? (Or any other interesting comments, of course.)

Tim van Gelder

------------------------------

Date: 14 Jul 88 09:55:40 EDT
From: David Chess <CHESS@ibm.com>
Subject: Metaepistemology & Phil. of Science

>In this sense the reality is unknowable. We only have
>descriptions of the actual world.

This "only" seems to be the key to the force of the argument. If
it were "we have descriptions of the actual world", it would sound
considerably tamer. The "only" suggests that there is something
else (besides "descriptions") that we *might* have, but that we
do not. What might this something else be? What, besides
"descriptions", could we have of the actual world? I certainly
wouldn't want the actual world *itself* in my brain (wouldn't fit).

Can anyone complete the sentence "The actual world is unknowable to
us, because we have only descriptions/representations of it, and not..."
?

(I would tend to claim that "knowing" is just (roughly) "having
the right kind of descriptions/representations of", and that
there's no genuine "unknowability" here; but that's another
debate...)

Dave Chess
Watson Research

* Disclaimer: Who, me?

------------------------------

Date: Thu, 14 Jul 88 15:06:53 BST
From: mcvax!doc.ic.ac.uk!sme@uunet.UU.NET
Reply-to: sme@doc.ic.ac.uk (Steve M Easterbrook)
Subject: Re: Generality in Artificial Intelligence

In a previous article, YLIKOSKI@FINFUN.BITNET writes:
>> "In my opinion, getting a language for expressing general
>> commonsense knowledge for inclusion in a general database is the key
>> problem of generality in AI."

>...
>Here follows an example where commonsense knowledge plays its part. A
>human parses the sentence
>
>"Christine put the candle onto the wooden table, lit a match and lit
>it."

>
>The difficulty which humans overcome with commonsense knowledge but
>which is hard to a program is to determine whether the last word, the
>pronoun "it" refers to the candle or to the table. After all, you can
>burn a wooden table.
>
>Probably a human would reason, within less than a second, like this.
>
>"Assume Christine is sane. The event might have taken place at a
>party or during her rendezvous with her boyfriend. People who do
>things such as taking part in parties most often are sane.
>
>People who are sane are more likely to burn candles than tables.
>
>Therefore, Christine lit the candle, not the table."


Aren't you overcomplicating it a wee bit? My brain would simply tell me
that in my experience, candles are burnt much more often than tables.
QED.
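
(For illustration only: a toy sketch, with made-up tallies, of the experience-frequency shortcut just described; nothing here comes from McCarthy or the original poster.)

```python
# Hypothetical sketch: resolve "lit it" by asking which candidate referent
# has most often been seen burnt in past experience.  The counts are invented.
burn_counts = {"candle": 950, "table": 3}

def resolve_pronoun(candidates, counts):
    """Pick the candidate most frequently associated with the action."""
    return max(candidates, key=lambda noun: counts.get(noun, 0))

print(resolve_pronoun(["candle", "table"], burn_counts))   # -> candle
```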

This observation, to me, is very revealing. The concept of commonsense knowledge that
McCarthy talks of is simply a huge base of experience built up over a
lifetime. If a computer program was switched on for long enough, with a
set of sensors similar to those provided by the human body, and a basic
ability to go out and do things, to observe and experiment, and to
interact with people, it would be able to gather a similar set of
experiences to those possessed by humans. The question is then whether
the program can store and index those experiences, in their totality, in
some huge episodic memory, and whether it has the appropriate mechanisms
to fire useful episodic recalls at useful moments, and apply those
recalls to the present situation, whether by analogy or otherwise.
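
(For illustration only: a minimal, hypothetical sketch of the storage-and-recall mechanism described above, with episodes indexed by feature vectors and recalled by similarity to the current situation; the class, features, and episodes are all invented.)

```python
# Hypothetical sketch: episodic memory with associative (similarity-based) recall.
import numpy as np

class EpisodicMemory:
    def __init__(self):
        self.keys, self.episodes = [], []

    def store(self, features, episode):
        """Index an episode under a feature vector describing the situation."""
        self.keys.append(np.asarray(features, dtype=float))
        self.episodes.append(episode)

    def recall(self, cue, k=1):
        """Return the k stored episodes whose keys are most similar to the cue."""
        cue = np.asarray(cue, dtype=float)
        sims = [cue @ key / (np.linalg.norm(cue) * np.linalg.norm(key) + 1e-9)
                for key in self.keys]
        best = np.argsort(sims)[::-1][:k]
        return [self.episodes[i] for i in best]

# Usage: a partially matching cue still retrieves the closest past episode.
mem = EpisodicMemory()
mem.store([1, 0, 1], "lit a candle at dinner")
mem.store([0, 1, 0], "opened a tin of sardines")
print(mem.recall([1, 0, 0.9]))   # -> ['lit a candle at dinner']
```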

From the above, it seems to me that the most important task that AI can
address itself to at the present is the study of episodic memory: how it
can be organised, how it can be accessed, and how analogies with past
situations can be developed. This should lead to a theory of experience,
ready for when robotics and memory capacities are advanced enough for
the kind of experiment I described above. With all due respect to McCarthy
et al, attempts to hand code the wealth of experience of the real world
that adult human beings have accumulated are going to prove futile.
Human intelligence doesn't gather this commonsense by being explicitly
programmed with rules (formal OR informal), and neither will artificial
intelligences.

>It seems to me that the inferences are not so demanding but the
>inferencer utilizes a large amount of background knowledge and a good
>associative access mechanism.

Yes. Work on the associative access, and let the background knowledge
evolve itself.

>...
>What kind of formalism should we use for expressing the commonsense
>knowledge?

Try asking instead: what kind of formalism should we use for expressing episodic
memories? Later on you suggest natural language. Is this suitable?
Do people remember things by describing them in words to themselves?
Or do they just create private "symbols", or brain activation patterns,
which only need to be translated into words when being communicated to
others? Note: I am not saying people don't think in natural language,
only that they don't store memories as natural language accounts.

I don't think this kind of experience can be expressed in any formalism,
nor do I think it can be captured by natural language. It needs to
evolve as a private set of informal symbols which the brain (human
or computer) does not need to consciously realise are there. All it needs
to do is associate the right thought with the symbols when they are
retrieved, i.e. to interpret the memories. Again I think this kind of
ability evolves with experience: at first, symbols (brain activity
patterns) would be triggered which the brain would be unable to interpret.

If this is beginning to sound like an advocacy of neural
nets/connectionist learning, then so be it. I feel that a conventional
AI system coupled to a connectionist net for its episodic memory might
be a very sensible architecture. There are probably other ways of
achieving the same behaviour; I don't know.

One final note. Read the first chapter of the Dreyfus and Dreyfus book
"Mind over Machine", for a thought-provoking account of how experts
perform, using "intuition". Logic is only used in hindsight to support
an intuitive choice. Hence "heuristics is compiled hindsight". Being
arch-critics of AI, the Dreyfuses conclude that the intuition that experts
develop is intrinsically human and can never be reproduced by machine.
Being an AI enthusiast, I conclude that intuition is really the
unconscious application of experience, and all that's needed to
reproduce it is the necessary mechanisms for storing and retrieving
episodic memories by association.

>In my opinion, it is possible that an attempt to express commonsense
>knowledge with a formalism is analogous to an attempt to fit a whale
>into a tin sardine can. ...

I agree.
How did you generate this analogy? Is it because you had a vast
amount of common-sense knowledge about whales, sardines, and tins
(whether expressed in natural language (vast!) or some formal system)
through which you had to wade to realise that sardines will fit
in small tins but whales will not, eventually synthesising this particular
concept, or did you just try to recall (holistically) an image that
matched the concept of forcing a huge thing into a rigid container?

Steve Easterbrook.

------------------------------

Date: Fri, 15 Jul 88 13:55:04 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Philosophy: Critique of Systems Theory

Using Gilbert Cockton's references to works critical of systems theory, over
the last month I've spent a few afternoons in the CalTech and UCLA libraries
tracing down those and other criticisms. The works I was able to get and
study are at the end of this message. I also examined a number of
introductory texts on psychology and sociology from the last 15 years or so.

General Systems Theory was founded by biologist Ludwig von Bertalanffy in
the late '40s. It drew heavily on biology, borrowed from many areas, and
promised a grand unified theory of all the sciences. The ideas gained
momentum until, by the early '70s, it had reached fad proportions in the
"humanics" or "soft sciences". Bertalanffy was made an honorary psychoanalyst,
for instance, and a volume appeared containing articles by various prominent
analysts discussing the effects of GST on their field. After that peak,
interest died down. Indexes in social-sciences textbooks carried fewer
references to smaller discussions. The GST journal became thinner and went
from annual to biannual; the latest issue was in 1985. Interest still
exists, however, and in various bibliographies of English publications for
1987 I found a total of seven new books.

What seems to have happened is that the more optimistic promises of GST
failed and lost it the support of most of its followers. Its more
successful ideas were pre-empted by several fields. These include control theory
in engineering, taxonomy of organizations in management, and the origins of
psychosis in social psychology.

For me the main benefit of GST has been a personally satisfactory resolution
of the reduction paradox, which follows.

Because of the limits of human intelligence, we simplify and view the
universe as various levels of abstraction, each level fairly independent of
the levels "above" and "below" it. This gives rises to arguments, however,
about whether, for instance, physics or psychology is "truer." The simple
answer is that both are only approximations of an ultimately unknowable
reality and, since both views are too useful to give up, their incompatibility
is inevitable and we just have to live with it. This is what many
physicists have done with the conflict between quantum and wave views of energy.

The GST view is that each higher level of organization is built on a
previous one via systems, which are new kinds of units made from binding
together more elementary units. New kinds of systems exhibit synergy:
attributes and abilities not observed in any of their constituent elements.
But where do these attributes/abilities come from? I found no answer in the
writings on system theory that I read, so I had to make my own answer.

I finally settled on interaction effects. Two atoms bound together
chemically affect each other's electron shrouds, forming a new shroud around
them both. This gives the resulting molecule a color, solubility,
conductivity, and so on that neither solitary atom has.

Similarly, two people can cross a wall neither can alone by one standing on
the other's shoulders to reach the top, then pulling the bottom partner up.
A "living" machine can repair and reproduce itself. And consciousness can
arise from elements that don't have it -- memories, processors, sensors, and
effectors -- though no amount of study of individual elements will find life
or consciousness. They are the result of interaction effects.

_Systems Analysis in Public Policy: A Critique_, I. Hoos, 1972
_Central Problems in Social Theory_, Anthony Giddens, 1979
_Systems Thinking, Systems Practice_, P. B. Checkland, 1981
_On Systems Analysis_, David Berlinski, 1976
_The Rise of Systems Theory_, R. Lilienfeld, 1978

The last two are reviewed in _Futures_, 10/2, p. 159 and 11/2, p. 165.
Those reviews also contain criticisms of systems theory which, they complain,
Berlinski and Lilienfeld overlooked.

------------------------------

End of AIList Digest
********************
