AIList Digest           Wednesday, 16 Nov 1988    Volume 8 : Issue 128 

Philosophy:

Lightbulbs and Related Thoughts
Attn set comments from a man without any
Artificial Intelligence and Intelligence

Notes on Neural Networks

----------------------------------------------------------------------

Date: 14 Nov 88 15:42:51 GMT
From: rochester!uhura.cc.rochester.edu!sunybcs!lammens@cu-arpa.cs.cornell.edu (Johan Lammens)
Subject: Re: Lightbulbs and Related Thoughts

In article <778@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>Don't forget to include iconic memory. These are the buffers, so to
>speak, of our sensory processes. I am sure that you have seen many
>aspects of this phenomenon by now. An example is staring at a flag of
>the United States for 30 seconds, then observing the complementary
>colors of the flag when you look at a blank wall (usually works best
>if the wall is dark). [...]

Perhaps this question is a witness to my ignorance, but isn't the phenomenon
you describe a result of the way the retina processes images, and if
so, do you mean to say that iconic memory is located in the retina?

------------------------------------------------------------------------------
Jo Lammens    Internet: lammens@cs.Buffalo.EDU
              uucp    : ..!{ames,boulder,decvax,rutgers}!sunybcs!lammens
              BITNET  : lammens@sunybcs.BITNET

------------------------------

Date: Mon, 14 Nov 88 15:14:20 CST
From: alk@ux.acss.UMN.EDU
Subject: Attn set comments from a man without any


The constraint of the attention set by prior knowledge that Tony
Stuart observed, i.e. that a known solution may inhibit the search for
an alternative even when the known solution does not have optimal
characteristics, goes far beyond the range of David Harvey's statement
that 'the only thing that can be said is that inconsistencies of data
with the rule base must allow for retraction of the rule and assertion
of new ones.' Stuart's observation, unless I misconstrue it [please
correct me], is not focused on the deduction of hypotheses, but
extends also to realms of problem-solving wherein the suitability of a
solution is (at the least) fuzzy-valued, if not outright qualitative.
In such a case the correctness of a solution is not so much at issue
as is its *suitability*. Of course this suggests fuzzy-valued
backward-chaining reasoning as a possible solution to the problem (the
problem raised by Tony Stuart, not the "problem" faced by the AI
entity), but I am unclear as to what semantic faculties would be
required to implement such a system. Perhaps the most sensible
solution is to allow resolution of all paths to continue in parallel
(subconscious work on the "problem") for some number of steps after a
solution has already been discovered. (David Harvey's discussion
prompts me to think in Prolog terms here.)

Why do I quote "problem"? Deconstruct! In this broader context,
a problem may consist of a situation faced by the AI entity, without
the benefit of a programmatic goal in the classical sense.
What do I mean by this? I'm not sure, but it's there, nagging me.
Of course goal-formulation must be driven, but at some point
the subgoal-goal path reaches an end. This is where attention set
and sensation (subliminal suggestion? or perhaps those continuing
resolution processes, reawakened by the satisfaction of current
goals--the latter being more practically useful to the human
audience of the AI entity) become of paramount importance.

Here I face the dilemma: are we building a practical, useful,
problem-solving system, or are we pursuing the more elevated (???)
goal of writing a program that's proud of us? Very different things!

Enough rambling. Any comments?

--alk@ux.acss.umn.edu, BITNET: alk@UMNACUX.
U of Mn ACSS <disclaimer>
"Quidquid cognoscitur, cognoscitur per modum cognoscentis"

------------------------------

Date: 15 Nov 88 02:29:12 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Artificial Intelligence and Intelligence

In article <484@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Definition of Intelligence:
>
>1. Know how to solve problems.
>2. Know which problems are unsolvable.
>3. Know #1, #2, and #3 defines intelligence.
>
>This is the correct definition of intelligence. If anyone disagrees, please
>state so and why.
>
(Gilbert Cockton is going to love me for this, I can tell...)
Intelligence is a social construct, an ascription of value to certain
characteristics and behaviours deemed to be mental. One child who has
memorized the periodic table of the elements will be deemed intelligent,
another child who has memorized baseball scores for the last N years
will be deemed sports-mad, even though they may have acquired comparable
bodies of information _by_comparable_means_. If we have three people in
a room: Subject, Experimenter, and Informant, if Subject does something,
and Informant says "that was intelligent", Experimenter is left wondering
"is that a fact about Subject's behaviour, or about Informant's culture?"
The answer, of course, is "yes it is".

Dijkstra's favourite dictionary entry is
"Intelligent, adj. ... able to perform the functions of a computer ..."
(Dijkstra doesn't think much of AI...)

In at least some respects, computers are already culturally defined as
intelligent.

>Human beings are not machines.
I agree with this.

>Human beings are capable of knowing which problems are unsolvable, while
>machines are not.
But I can't agree with this! There are infinitely many unsolvable
problems, and determining whether a particular problem is unsolvable
is itself unsolvable. This does _not_ mean that a machine cannot
determine that a particular problem _is_ solvable, only that there
cannot be a general procedure for classifying _all_ problems which is
guaranteed to terminate in finite time. Human beings are also capable
of giving up, and of making mistakes. Most of the unsolvable problems
I know about I was _told_; machines can be told!

Human beings are not machines, but they aren't transfinite gods either.

------------------------------

Date: Mon, 14 Nov 88 22:35:47 CST
From: David Kanecki <kanecki@vacs.uwp.wisc.edu>
Subject: Notes on Neural Networks


Notes on Neural Networks:


During the month of September, while trying various
experiments on neural networks, I made two observations:

1. Depending on how the data for the A and B matrices are set up,
   the learning equation

        W(n) = W(n-1) + eta * (t(n) - o(n)) * i^T(n)

   may take more presentations for the system to learn the A and B
   output. (A short Python sketch of this update rule follows these
   notes.)

2. Neural networks are self-correcting, in that if an incorrect
   W matrix is given, the presentation/update process will still
   arrive at a W matrix that gives the correct answers, though the
   values of its individual elements will differ from those of a
   correct W matrix.
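
As a concrete reading of the update rule in note 1, here is a minimal
sketch in Python/NumPy. The hard threshold at 0.5, the learning rate,
and the stopping test are my own assumptions (the notes do not specify
them), so the presentation counts and weight values reported below are
not guaranteed to be reproduced by this sketch.

    import numpy as np

    def step(x):
        # Hard-threshold activation; the 0.5 threshold is an assumption.
        return (x >= 0.5).astype(float)

    def train(A, B, W=None, eta=0.25, max_sweeps=100):
        # Update rule W(n) = W(n-1) + eta * (t(n) - o(n)) * i^T(n),
        # stored here as W += eta * outer(i, t - o) so that an input
        # row i maps to an output row i @ W. A and B hold one pattern
        # per row; eta = 0.25 is an arbitrary choice.
        if W is None:
            W = np.zeros((A.shape[1], B.shape[1]))
        for sweep in range(max_sweeps):
            if np.array_equal(step(A @ W), B):  # every pattern correct
                return W, sweep
            for i, t in zip(A, B):              # present each pattern once
                W += eta * np.outer(i, t - step(i @ W))
        return W, max_sweeps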


Case 1: Different A and B matrix setup

For example, in applying neural networks to the XOR problem
I used the following A and B matrices:

     A     H  |  H  B
    ----------+------
    0  0   0  |  0  0
    0  1   0  |  0  1
    1  0   0  |  0  1
    1  1   1  |  1  0

My neural network learning system took 12 presentations to
arrive at the correct B matrix when presented with the corresponding
A matrix. The W matrix was:


    W(12) = | -0.5   0.75 |
            | -0.5   0.75 |
            |  3.5  -1.25 |


For the second test I set up the A and B matrices as follows:

     A     H  |  B
    ----------+---
    0  0   0  |  0
    0  1   0  |  1
    1  0   0  |  1
    1  1   1  |  0

This setup took 8 presentations for my neural network learning
system to arrive at a correct B matrix when presented with the
corresponding A matrix. The final W matrix was:

W(8) = | -0.5 -0.5 2.0 |


Conclusion: These experiments indicate to me that a
system's learning rate can be increased by presenting as
little extraneous data as possible.
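
To make the comparison concrete, the two encodings can be run through
the sketch given after the opening notes (train and np are from that
sketch; since its eta, threshold, and initial W are assumed, the counts
it prints need not match the 12 and 8 reported above):

    # First encoding: inputs (A1, A2, H), two target columns (H, B).
    A = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0], [1, 1, 1]], dtype=float)
    B_wide = np.array([[0, 0], [0, 1], [0, 1], [1, 0]], dtype=float)

    # Second encoding: the same inputs, a single target column B.
    B_narrow = np.array([[0], [1], [1], [0]], dtype=float)

    W_wide, n_wide = train(A, B_wide)        # targets include the extra H column
    W_narrow, n_narrow = train(A, B_narrow)  # target is the B column only
    print(n_wide, n_narrow)                  # compare how quickly each converges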


--------------


Case 2: Self-Correction of Neural Networks

In this second experiment I found that neural networks
exhibit great flexibility. The experiment turned out to
be a happy accident. Before I had developed my neural network
learning system, I was doing neural network experiments by
spreadsheet and hand transcription. During the transcription,
three elements in the 6 x 5 W matrix had the wrong sign. The
resulting W matrix was:


           | 0.0   2.0   2.0   2.0   2.0 |
           |-2.0   0.0   4.0   0.0   0.0 |
    W(0) = | 0.0   2.0  -2.0   2.0  -2.0 |
           | 0.0   2.0   0.0  -2.0   2.0 |
           |-2.0   4.0   1.0   0.0   0.0 |
           | 2.0  -4.0   2.0   0.0   0.0 |




By applying the learning algorithm, it took 24 presentations for
the W matrix to give the correct B matrix when presented with the
corresponding A matrix:

    W(24) = | 0.0    2.0   2.0   2.0   2.0  |
            |-1.53   1.18  1.18 -0.25 -0.15 |
            | 0.64   0.12 -0.69  1.16 -0.50 |
            | 0.27  -0.26 -0.06 -0.53  0.80 |
            |-1.09   1.62  0.79 -0.43 -0.25 |
            | 1.53  -1.18 -0.68  0.25  0.15 |


But when the experiment was run on my neural network learning
system, I had a W(0) matrix of:

    W(0) = | 0.0   2.0   2.0   2.0   2.0 |
           |-2.0   0.0   4.0   0.0   0.0 |
           | 0.0   2.0  -2.0   2.0  -2.0 |
           | 0.0   2.0  -2.0  -2.0   2.0 |
           |-2.0   4.0   0.0   0.0   0.0 |
           | 2.0  -4.0   0.0   0.0   0.0 |


After 5 presentations the W(5) matrix came out to be:

    W(5) = | 0.0   2.0   2.0   2.0   2.0 |
           |-2.0   0.0   4.0   0.0   0.0 |
           | 0.0   2.0  -2.0   2.0  -2.0 |
           | 0.0   2.0  -2.0  -2.0   2.0 |
           | 2.0  -4.0   0.0   0.0   0.0 |

Conclusion: Neural networks are self-correcting, but the final
W matrix may have different values. Also, if a W matrix does
not have to go through the test/update procedure, it can be used
both ways, in that an A matrix generates the B matrix and a B
matrix generates the A matrix, as in the second example.
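
The self-correction of Case 2 can be imitated with the same sketch:
flip the sign of an entry in a working W and let the
presentation/update process repair it. (The corrupted entry below is
illustrative, not one of the three from the transcription accident;
A, B_narrow, W_narrow, train, step, and np come from the earlier
sketches.)

    # Corrupt a trained weight matrix, then retrain it.
    W_bad = W_narrow.copy()
    W_bad[2, 0] = -W_bad[2, 0]              # flip one sign (illustrative)
    W_fixed, n_fix = train(A, B_narrow, W=W_bad)

    # The repaired matrix gives the correct answers again, though its
    # elements need not match the original W element for element.
    assert np.array_equal(step(A @ W_fixed), B_narrow)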

----------------


I am interested in communicating and discussing various
aspects of neural networks. I can be contacted at:

kanecki@vacs.uwp.wisc.edu

or at:

David Kanecki
P.O. Box 93
Kenosha, WI 53140

------------------------------

End of AIList Digest
********************
