AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 15 

Today's Topics:
Proofs - Fermat's Theorem & 4-Color Theorem,
Brain Theory - Parallelism
----------------------------------------------------------------------

Date: 04 Feb 84 0927 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Fermat and decidability

From the logical point of view, Fermat's last theorem is a Pi-1
statement. It follows that it is decidable. Whether it is valid
or not is another matter.
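
A Pi-1 statement here means a sentence of the form "for every natural
number ..., P(...)" where P can be checked by a finite computation.
Fermat's last theorem has that form; as a rough illustration in modern
notation:

    \forall x\,\forall y\,\forall z\,\forall n\;
      \bigl( x, y, z \ge 1 \wedge n \ge 3 \;\rightarrow\; x^n + y^n \neq z^n \bigr)

For any fixed x, y, z, n the inequality is a finite check, which is what
makes the whole sentence Pi-1.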

------------------------------

Date: Sat 4 Feb 84 13:13:14-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Spencer-Brown's Proof

I don't know anything about the current status of the computer proof of the
4-colour theorem, though the last I heard (five years ago) was that it was
"probably OK". That's why I use the word "theorem". However, I can shed
some light on Spencer-Brown's alleged proof -- I was present at a lecture in
Cambridge where he supposedly gave the outline of the proof, and I applauded
politely, but was later fairly authoritatively informed that it disintegrated
under closer scrutiny. This doesn't *necessarily* mean that the man is a
total flake, since other such proofs by highly reputable mathematicians have
done the same (we are told that one proof was believed for twelve whole years,
late in the 19th century, before its flaw was discovered).
- Richard

------------------------------

Date: Mon, 6 Feb 84 14:46:43 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Scientific Method

Isn't it interesting that most of what we think about proofs is belief!
I guess until one actually retraces the steps of a proof and their
justifications, one can only express one's belief in its truth or falseness.

--Charlie

------------------------------

Date: 3 Feb 84 8:48:01-PST (Fri)
From: harpo!eagle!allegra!alan @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: allegra.2254

I've been reading things like:

My own introspection seem to indicate that ...
I find, upon introspection, that ...
I find that most of what my brain does is ...
I also feel like ...
I agree that based on my own observations, my brain appears to
be ...

Is this what passes for scientific method in AI these days?

Alan S. Driscoll
AT&T Bell Laboratories

------------------------------

Date: 2 Feb 84 14:40:23-PST (Thu)
From: decvax!genrad!grkermit!masscomp!clyde!floyd!cmcl2!rocky2!cucard!
aecom!alex @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: aecom.358

If the brain were a serial processor, the limiting factor would be the
speed at which neurons conduct signals. Humans, however, do very
complex processing in real time! The other possibility is that the
data structures of the brain are HIGHLY optimized.
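
As a rough back-of-the-envelope version of this argument (both figures
below are illustrative assumptions, not measurements):

    # Serial-step budget if the brain handled one neuron-level event at a time.
    neuron_event_time = 1e-3   # assume ~1 ms per neuron firing/transmission
    recognition_time = 0.5     # assume ~500 ms to recognize a face or a word

    serial_steps = recognition_time / neuron_event_time
    print(f"Serial step budget: about {serial_steps:.0f} steps")
    # About 500 steps -- far too few for a purely serial program, which points
    # to massive parallelism or, as suggested above, extremely well-matched
    # data structures.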


Alex S. Fuss
{philabs, esquire, cucard}!aecom!alex

------------------------------

Date: Tue, 7 Feb 84 13:09:25 PST
From: Adolfo Di-Mare <v.dimare@UCLA-LOCUS>
Subject: I can think in parale||,

but most of the time I'm ---sequential. For example, a lot of * I can
talk with (:-{) and at the same time I can be thinking on s.m.t.i.g
else. I also do this when ai-list gets too boring: I keep browsing
until I find something interesting, and then I do read, with a better
level of under-standing. In the u-time, I can daydream...

However, if I really want to get s.m.t.i.g done, then I cannot think
about anything else! In these cases, I just have one main-stream idea in
my mind. When I'm looking for a solution, I seldom use depth-first
or breadth-first search. Most of the time I use a combination of all
these tricks I know to search, until one 'works'.

To + up, I think we @|-< can do lots of things in lots of ways. And
until we furnish computers with all these tools, they won't be able to
be as intelligent as us. Just parale|| is not the ?^-1.

Adolfo
///

------------------------------

Date: 7 Feb 1984 1433-PST
From: EISELT%UCI-20A@Rand-Relay
Subject: More on Philip Kahn's reply to Rene Bach

I recently asked Philip Kahn (via personal net mail) to elaborate on his
three-process cycle model of thought, which he described briefly in his reply
to Rene Bach's question. Here is my request, and his reply:

-------------------------

In your recent submission to AIList, you describe a three-process cycle
model of higher-level brain function. Your model has some similarities to
a model of text understanding we are working on here at UC Irvine. You say,
though, that there are "profuse psychophysical and psychological studies that
reinforce the ... model."
I haven't seen any of these studies and would
be very interested in reading them. Could you possibly send me references
to these studies? Thank you very much.

Kurt Eiselt
eiselt@uci-20a


------------------------

Kurt,

I said "profuse" because I have come across many psychological
and physiological studies that have reinforced my belief. Unfortunately,
I have very few specific references on this, but I'll tell you as much as
I can....

I claim there are three stages: associational, reasonability, and
context. I'll tell you what I've found to support each. Associational
nets, also called "computational" or "parameter" nets, have been getting
a lot of attention lately. Especially interesting are the papers coming out
of Rochester (in New York state). I suggest the paper by Feldman called
"Parameter Nets." Also, McCulloch in "Embodiments of Mind" introduced a
logical calculus that he proposes neural mechanisms use to form associational
networks. Since then, a considerable amount of work has been done on
logical calculus, and these works are directly applicable to the analysis
of associational networks. One definitive "associational network" found
in nature that has been exhaustively characterized by Ratliff is the lateral
inhibition that occurs in the linear image sensor of the Limulus crab.
Each element of the network inhibits its neighbors based upon its value,
and the result is the second spatial derivative of the image brightness.
Most of the works you will find to support associational nets are directly
culled from neurophysiological studies. Yet, classical conditioning
psychology defines the effects of association in its studies on forward and
backward conditioning. Personally, I feel the biological proof of
associational nets is more concrete.
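
As a rough illustration of the lateral-inhibition scheme just described
(the input values and the inhibition weight are assumptions chosen for
clarity, not Limulus measurements), a minimal sketch in Python:

    # Lateral inhibition on a 1-D brightness signal: each element's response
    # is its own value minus a fraction of its neighbors' values.  With
    # k = 0.5 the output equals -0.5 times the discrete second spatial
    # derivative, so the response concentrates at edges.
    def lateral_inhibition(signal, k=0.5):
        out = []
        for i, center in enumerate(signal):
            left = signal[i - 1] if i > 0 else center
            right = signal[i + 1] if i < len(signal) - 1 else center
            out.append(center - k * (left + right))
        return out

    brightness = [10, 10, 10, 10, 30, 30, 30, 30]   # a simple step edge
    print(lateral_inhibition(brightness))
    # [0.0, 0.0, 0.0, -10.0, 10.0, 0.0, 0.0, 0.0]  -- activity only at the edge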

The evidence for a "reasonability" level of processing is mostly
psychological, because it is generally a cognitive process.
For example, learning is facilitated by subject matter that is most
consistent with past knowledge; that is, learning proceeds most readily
when the subject is "reasonable" in light of past knowledge.
Some studies have shown, though I can't cite them, that the less
"reasonable" a learning task, the poorer the learned performance.
I remember having seen at least one paper (I believe it was by a natural
language processing researcher) that claimed that the facility of language
is a metaphorical process. By definition, a metaphor is the comparison
of alike traits in dissimilar things; it seems to me this is a very good
way to look at the question of reasonability. Again, though, no specific
references. In neurophysiology one finds "feedback loops" that
may be considered "reasonability" testers insofar as they take action
only when certain conditions are not met. You might want to look at work
done on the cerebellum to document this.
"Context" has been getting a lot of attention lately. Again,
psychology is the major source of supporting evidence, yet neurophysiology
has its examples also. Hormones are a prime example of "contextual"
determinants. Their presence or absence affects the processing that
occurs in the neurons that are exposed to them. But on a more AI level,
the importance of context has been repeatedly demonstrated by psychologists.
I believe that context is a learned phenomena. Children have no construct
of context, and thus, they are often able to make conclusions that may be
associationally feasible, yet clearly contrary to the context of presentation.
Context in developmental psychology has been approached from a more
motivational point of view. Maslowe's hierarchies and the extensive work
into "values" are all defining different levels of context. Whereas an
associational network may (at least in my book) involve excitatory
nodal influences, context involves inhibitory control over the nodes in
the associational network. In my view, associational networks only know
(always associated), (often associated), and (weak association).
(Never associated) dictates that no association exists by default. A
contextual network knows only that the following states can occur between
concepts: (never can occur) and (rarely occurs). These can be defined using
logical calculus and learning theory. The associational links are solely
determined by event pairing and is a more dynamic event. Contextual
networks are more stable and can be the result of learning as well as
by introspective analysis of the associational links.
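
A minimal sketch of the distinction drawn above, with the encoding and the
example concept pairs invented purely for illustration:

    # Associational links carry graded excitatory strengths learned from event
    # pairing; the contextual network carries only inhibitory constraints.
    ASSOC_STRENGTHS = {"always", "often", "weak"}   # no link = never associated
    CONTEXT_STATES = {"never", "rarely"}            # inhibitory states only

    associations = {                       # hypothetical example pairs
        ("smoke", "fire"): "often",
        ("thunder", "lightning"): "always",
    }
    contexts = {
        ("fish", "bicycle"): "never",
    }
    assert all(v in ASSOC_STRENGTHS for v in associations.values())
    assert all(v in CONTEXT_STATES for v in contexts.values())

    def association_strength(a, b):
        """Excitatory link strength; defaults to 'never' when no link exists."""
        return associations.get((a, b), associations.get((b, a), "never"))

    def admitted_by_context(a, b):
        """The contextual network can only veto a pairing, never excite it."""
        return contexts.get((a, b), contexts.get((b, a))) != "never"

    print(association_strength("smoke", "fire"))    # often
    print(admitted_by_context("fish", "bicycle"))   # False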

As you can see, I have few specific references on "context," and rely
upon my own theory of context. I hope I've been of some help, and I would
like to be kept apprised of your work. I suggest that if you want research
evidence for some of the above, you examine indices on the subjects I
mentioned. Again,

Good luck,
Philip Kahn

------------------------------

Date: 6 Feb 84 7:18:25-PST (Mon)
From: harpo!ulysses!mhuxl!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: hou5d.809

See the Feb. Scientific American for an article on typists and speed. There
is indeed evidence for a high degree of parallelism even in SIMILAR tasks.

Mark Terribile

------------------------------

Date: Wed, 8 Feb 84 18:19:09 CST
From: Doug Monk <bro@rice>
Subject: Re: AIList Digest V2 #11

Subject : Mike Brzustowicz's 'tip of the tongue' as parallel process

Rather than being an example of parallel processing, the 'tip of the
tongue' phenomenon is probably more an example of context switch, where
the attempt to recall the information displaces it temporarily, due to
too much pressure being brought to bear. ( Perhaps a form of performance
anxiety ? ) Later, when the pressure is off, and the processor has a spare
moment, a smaller recall routine can be used without displacing the
information. This model assumes that concentrating on the problem causes
more of the physical brain to be involved in the effort, thus perhaps
'overlaying' the data desired. Once a smaller recall routine is used,
the recall can actually be performed.

Doug Monk ( bro.rice@RAND-RELAY )

------------------------------

Date: 6 Feb 84 19:58:33-PST (Mon)
From: ihnp4!ihopa!dap @ Ucb-Vax
Subject: Re: parallel processing in the brain
Article-I.D.: ihopa.153

If you consider pattern recognition in humans when constrained to strictly
sequential processing, I think we are MUCH slower than computers.

In other words, how long do you think it would take a person to recognize
a letter if he could only inquire as to the grayness levels in different
pixels? Of course, he would not be allowed to "fill in" a grid and then
recognize the letter on the grid. Only a strictly algorithmic process
would be allowed.

The difference here, as I see it, is that the human mind DOES work in
parallel. If we were forced to think sequentially about each pixel in our
field of vision, we would become hopelessly bogged down. It seems to me
that the most likely way to simulate such a process is to have a HUGE
number of VERY dumb processors in a hierarchy of "meshes" such that some
small number of processors in common localities in a low-level mesh would
report their findings to a single processor in the next higher-level mesh.
This processor would do some very quick, very simple calculations and pass
its findings on to the next higher-level mesh. At the top level, the
accumulated information would serve to recognize the pattern. I'm really
speaking off the top of my head since I'm no AI expert. Does anybody know if
such a thing exists or am I way off?

Darrell Plank
BTL-IH
ihopa!dap

[Researchers at the University of Maryland and at the University of
Massachusetts, among others, have done considerable work on "pyramid"
and "processing cone" vision models. The multilayer approach was
also common in perceptron-based pattern recognition, although very
little could be proven about multilayer networks. -- KIL]
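
A minimal sketch of the kind of mesh hierarchy described above; the 2x2
averaging rule and the 8x8 input are assumptions chosen for brevity, not
any published pyramid or processing-cone model:

    # Each level summarizes 2x2 neighborhoods of the level below, so an
    # 8x8 mesh collapses through 4x4 and 2x2 to a single top-level value.
    def next_level(mesh):
        n = len(mesh)
        return [[(mesh[2*i][2*j] + mesh[2*i][2*j+1] +
                  mesh[2*i+1][2*j] + mesh[2*i+1][2*j+1]) / 4.0
                 for j in range(n // 2)]
                for i in range(n // 2)]

    def pyramid(mesh):
        levels = [mesh]
        while len(levels[-1]) > 1:
            levels.append(next_level(levels[-1]))
        return levels

    # An 8x8 "retina" with a bright 4x4 patch in the middle.
    image = [[1 if 2 <= r <= 5 and 2 <= c <= 5 else 0 for c in range(8)]
             for r in range(8)]
    levels = pyramid(image)
    print([len(level) for level in levels])      # [8, 4, 2, 1]
    print(levels[-1][0][0])                      # 0.25, the top-level summary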

------------------------------

End of AIList Digest
********************
