AIList Digest           Wednesday, 12 Sep 1984    Volume 2 : Issue 117 

Today's Topics:
AI Tools - Expert-Ease,
Expert Systems - Lenat Bibliography,
Pattern Recognition - Maximal Submatrix Sums,
Cognition - The Second Self & Dreams,
Seminars - Computational Theory of Higher Brain Function &
Distributed Knowledge
----------------------------------------------------------------------

Date: 10 Sep 1984 13:16:20-EDT
From: sde@Mitre-Bedford
Subject: expert-ease

I got a flyer from Expert Systems Inc. offering something called Expert Ease
which is supposed to facilitate producing expert systems. They want $125 for
a demo version, so I thought to inquire if anyone out there can comment on
the thing, especially since the full program is $2000. I'm not eager to buy
a lemon, but if it is a worthwhile product, it might be justifiable as an
experiment.
Thanx in advance,
David sde@mitre-bedford

------------------------------

Date: Tue, 11 Sep 84 19:16 BST
From: TONY HASEMER (on ALVEY at Teddington) <TONH%alvey@ucl-cs.arpa>
Subject: Lenat

Please, can anyone suggest any good references, articles, etc.,
concerning Lenat's heuristic inferencing machine? I'd be very grateful.

Tony.

[I can suggest the following:

D.B. Lenat, "BEINGS: Knowledge as Interacting Experts,"
Proc. 4th Int. Jnt. Conf. on Artificial Intelligence,
Tbilisi, Georgia, USSR, pp. 126-133, 1975.

D.B. Lenat, AM: An Artificial Intelligence Approach to Discovery
in Mathematics as Heuristic Search, Ph.D. Dissertation,
Computer Science Department Report STAN-CS-76-570,
Heuristic Programming Project Report HPP-76-8,
Artificial Intelligence Laboratory Report SAIL AIM-286,
Stanford University, Stanford, California, 1976.

D.B. Lenat, "Automated Theory Formation in Mathematics,"
5th Int. Jnt. Conf. on Artificial Intelligence, Cambridge, pp. 833-42, 1977.

D.B. Lenat and G. Harris, "Designing a Rule System That Searches for
Scientific Discoveries,"
in D.A. Waterman and F. Hayes-Roth (eds.),
Pattern-Directed Inference Systems, Academic Press, 1978.

D.B. Lenat, "The Ubiquity of Discovery," National Computer Conference,
pp. 241-256, 1978.

D.B. Lenat, "On Automated Scientific Theory Formation: A Case Study Using
the AM Program,"
in J. Hayes, D. Michie, and L.I. Mikulich (eds.),
Machine Intelligence 9, Halstead Press (a div. of John Wiley & Sons),
New York, pp. 251-283, 1979.

D.B. Lenat, W.R. Sutherland, and J. Gibbons, "Heuristic Search for
New Microcircuit Structures: An Application of Artificial Intelligence,"
The AI Magazine, Vol. 3, No. 3, pp. 17-33, Summer 1982.

D.B. Lenat, "The Nature of Heuristics," The AI Journal, Vol. 9, No. 2,
Fall 1982.

D.B. Lenat, "Learning by Discovery: Three Case Studies in Natural and
Artificial Learning Systems,"
in Michalski, Mitchell, and Carbonell (eds.),
Machine Learning, Tioga Press, 1982.

D. B. Lenat, Theory Formation by Heuristic Search,
Report HPP-82-25, Heuristic Programming Project, Dept. of
Computer Science and Medicine, Stanford University, Stanford,
California, October 1982. To appear in The AI Journal, March 1983.

D. B. Lenat, "EURISKO: A Program that Learns New Heuristics and Domain
Concepts,"
Journal of Artificial Intelligence, March 1983. Also available
as Report HPP-82-26, Heuristic Programming Project, Dept. of
Computer Science and Medicine, Stanford University, Stanford,
California, October 1982.

-- KIL]

------------------------------

Date: Wed 12 Sep 84 01:50:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition and Computational Complexity

I have a solution to Jon Bentley's Problem 7 in this month's
CACM Programming Pearls column (September 1984, pp. 865-871).
The problem is to find the maximal response for any rectangular
subwindow in an array of maximum-likelihood detector outputs.
The following algorithm is O(N^3) for an NxN array. It requires
working storage of just over half the original array size.

/*
** maxwdwsum
**
** Compute the maximum rectangular-window sum in a matrix.
** Return 0.0 if all array elements are negative.
**
** COMMENTS
**
** This algorithm scans the matrix, considering for each
** element all of the rectangular subwindows with that
** element as the lower-right corner.  The current best
** window will either be interior to the previously
** processed rows or will end on the current row.  The
** latter possibility is checked by considering the data
** on the current row added into the best window of each width
** for each lower-right corner element on the previous row.
**
** The memory array for tracking maximal window sums could
** be reduced to a triangular data structure.  An additional
** triple of values could be carried along with globalmax
** to record the location and width of the maximal window;
** saving or recovering the height of the window would be
** a little more difficult.
**
** HISTORY
**
** 11-Sep-84  Laws at SRI-AI
**      Wrote initial version.
*/

#include <stdio.h>

/* Sample problem.  (Answer is 6.0.) */
#define NROWS 4
#define NCOLS 4
float X[NROWS][NCOLS] = {{ 1.,-2., 3.,-1.}, { 2.,-5., 1.,-1.},
                         { 3., 1.,-2., 3.}, {-2., 1., 1., 0.}};

/* Macro to return the maximum of two expressions. */
#define MAX(exp1,exp2) (((exp1) > (exp2)) ? (exp1) : (exp2))


main()
{
    float globalmax;        /* Global maximum                       */
    float M[NCOLS][NCOLS];  /* Max window-sum memory,               */
                            /* (triangular, 1st >= 2nd)             */
    int   maxrow;           /* Upper row index                      */
    int   mincol,maxcol;    /* Column indices                       */
    float newrowsum;        /* Sum for new window row               */
    float newwdwsum;        /* Previous best plus new window row    */
    float newwdwmax;        /* New best for this width              */
    int   nowrow,nowcol;    /* Loop indices                         */


    /* Initialize the maxima registers. */
    globalmax = 0.0;
    for (nowrow = 0; nowrow < NCOLS; nowrow++)
        for (nowcol = 0; nowcol <= nowrow; nowcol++)
            M[nowrow][nowcol] = -1.0E20;

    /* Process each lower-right window corner. */
    for (maxrow = 0; maxrow < NROWS; maxrow++)
        for (maxcol = 0; maxcol < NCOLS; maxcol++) {

            /* Increase window width back toward leftmost column. */
            newrowsum = 0.0;
            for (mincol = maxcol; mincol >= 0; mincol--) {

                /* Cumulate the window-row sum. */
                newrowsum += X[maxrow][mincol];

                /* Compute the sum of the old window and new row. */
                newwdwsum = M[maxcol][mincol]+newrowsum;

                /* Update the maximum window sum for this width. */
                newwdwmax = MAX(newrowsum,newwdwsum);
                M[maxcol][mincol] = newwdwmax;

                /* Update the global maximum. */
                globalmax = MAX(globalmax,newwdwmax);
            }
        }

    /* Print the solution, or 0.0 for a negative array. */
    printf("Maximum window sum: %g\n",globalmax);
}
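
As a cross-check (this second program is not part of the algorithm
above), here is a brute-force version that enumerates every
rectangular window directly.  It runs in O(N^6) time but is easy to
verify by inspection; on the sample matrix it should also print 6.0.

/* Brute-force maximum rectangular-window sum (verification only). */
#include <stdio.h>

#define NROWS 4
#define NCOLS 4
float X[NROWS][NCOLS] = {{ 1.,-2., 3.,-1.}, { 2.,-5., 1.,-1.},
                         { 3., 1.,-2., 3.}, {-2., 1., 1., 0.}};

main()
{
    float best = 0.0;            /* 0.0 if every element is negative */
    float sum;
    int minrow,maxrow,mincol,maxcol,r,c;

    /* Enumerate all (top,bottom,left,right) window boundaries. */
    for (minrow = 0; minrow < NROWS; minrow++)
        for (maxrow = minrow; maxrow < NROWS; maxrow++)
            for (mincol = 0; mincol < NCOLS; mincol++)
                for (maxcol = mincol; maxcol < NCOLS; maxcol++) {

                    /* Sum the window and remember the best. */
                    sum = 0.0;
                    for (r = minrow; r <= maxrow; r++)
                        for (c = mincol; c <= maxcol; c++)
                            sum += X[r][c];
                    if (sum > best)
                        best = sum;
                }

    printf("Brute-force maximum window sum: %g\n",best);
}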

------------------------------

Date: Sat 8 Sep 84 11:14:04-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: The Second Self

The Second Self by Sherry Turkle is an interesting study of the
relationship between computers and people. In contrast to most
studies I've seen, this is not a collection of sensationalism from the
newspapers combined with the wilder statements from various
professionals. Rather, it is (as far as I know) the first thorough
and scientific study of the influence of computers on human thinking
(there's even a boring appendix on methodology, for those who are into
details).

The book starts out with analyses of young children's attitudes towards
intelligent games (Merlin, Speak'n'Spell and others). Apparently, the children
playing with these games spend a great deal of time discussing whether these
games are actually alive or not, whether they know how to cheat, and so forth.
The games manifest themselves as "psychological machines" rather than the
ordinary physical machines familiar to most children. As such, they prompt
children to think in terms of mental behavior rather than physical behavior,
which is said to be an important stage in early mental development (dunno myself
if psychologists hold this view generally).

The theme of computers as "psychological machines" is carried throughout the
book. Older children and adolescents exhibit more of a desire to master the
machine rather than just to interact with it, but interviews with them reveal
that they, too, are aware of the computer as something fundamentally different
from an automobile, in the way that it causes them to think. Computer
hobbyists of both the first (ca 1978) and later generations are interviewed,
and one of them characterizes the computer as "a tool to think with".

Perhaps the section of most interest to AIList readers is the one in which
Turkle interviews a number of workers in AI. Although the material has an
MIT slant (since that's where she did her research), and there's an excess
of quotes from Pam McCorduck's Machines Who Think, this is the first time
I've seen a psychological analysis of motives and attitudes behind the
research. Most interesting was a discussion of "egoless thought" - although
most psychologists (and some philosophers) believe that the existence
of self-consciousness and an ego is a prerequisite to thought and
understanding, there are many workers in AI who do not share this view.
The resolution of this question will have profound effects on many of
the current views in psychology. Along the same lines, Minsky gave a
list of concepts common in computer science which have no analogues in
psychology (such as the notions of "garbage collection" and "pure procedure").

I recommend this book as an interesting viewpoint on computer science in
general and AI in particular. The experimental results alone are worth
reading it for. Hopefully we'll see more studies along these lines in the
future.

stan shebs

------------------------------

Date: Wed 12 Sep 84 09:58:29-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Dreams

A letter by Donald A. Windsor in the new CACM (September, p. 859) suggests
that the purpose of dreams is to test our cognitive models of the people
around us by simulating their behavior and monitoring for bizarre
patterns. He claims that the "dream people" are AI programs that
we construct subconsciously.

-- Ken Laws

------------------------------

Date: 09/11/84 13:56:44
From: STORY
Subject: Seminar - Computational Theory of Higher Brain Function

[Forwarded from the MIT bboard by SASW@MIT-MC.]


TITLE: ``A Computational Theory of Higher Brain Function''

SPEAKER: Leslie M. Goldschlager, Visiting Computer Scientist, Stanford
University

DATE: Friday, September 14, 1984
TIME: Refreshments, 3:45pm
Lecture, 4:00pm
PLACE: NE43-512a

A new model of parallel computation is proposed. The fundamental
item of data in the model is called a "concept", and concepts may be
stored on a two-dimensional data structure called a "memory
surface"
. The nature of the storage mechanism and the mode of
communication which is required between storage locations renders the
model suitable for implementation in VLSI. An implementation is also
possible with neurons arranged in a two-dimensional sheet. It is
argued that the model is particularly worth studying, as it
captures some of the computational characteristics of the brain.

The memory surface consists of a vast number of processors which
are called "columns" and which operate asynchronously in parallel.
Each processor stores a small amount of information and can be thought
of as a simple finite-state transducer. Each processor is connected
only to those processors within a small radius, or neighbourhood. As
is usually found with parallel computation, the most important aspect
of the model is the method of communication between the processors.

It is shown in the talk how the function of the individual
processors and the communication between them supports the formation
and storage of associations between concepts. Thus the memory surface
is in effect an associative memory. This type of associative memory
reveals a number of interesting computational features, including the
ability to store and retrieve sequences of concepts and the ability to
form abstractions from simpler concepts.
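
The abstract does not give the columns' actual transition rules, so the
following C fragment is only a rough sketch of the kind of structure
described: a two-dimensional sheet of simple cells, each holding a
small amount of state and communicating only with the cells within a
small radius.  The particular update rule used here (copy the strongest
neighbouring activation, with decay) is invented for illustration and
is not taken from the talk.

/* Sketch of a "memory surface": a grid of simple cells updated
   synchronously from their immediate neighbourhoods. */
#include <stdio.h>

#define ROWS 8
#define COLS 8

float surface[ROWS][COLS];   /* activation held by each column cell  */
float buffer[ROWS][COLS];    /* scratch array for one update step    */

void step()
{
    int r, c, dr, dc, nr, nc;
    float best;

    for (r = 0; r < ROWS; r++)
        for (c = 0; c < COLS; c++) {

            /* Look only at the cells within radius 1. */
            best = surface[r][c];
            for (dr = -1; dr <= 1; dr++)
                for (dc = -1; dc <= 1; dc++) {
                    nr = r + dr;  nc = c + dc;
                    if (nr >= 0 && nr < ROWS && nc >= 0 && nc < COLS &&
                        surface[nr][nc] > best)
                        best = surface[nr][nc];
                }
            buffer[r][c] = 0.9 * best;    /* spread with decay */
        }

    for (r = 0; r < ROWS; r++)
        for (c = 0; c < COLS; c++)
            surface[r][c] = buffer[r][c];
}

main()
{
    int t;

    surface[3][3] = 1.0;                  /* plant one "concept" */
    for (t = 0; t < 5; t++)
        step();
    printf("Activation at (3,7) after 5 steps: %g\n", surface[3][7]);
}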

Certain capabilities taken from the realm of human activities are
shown to be explainable within the model of computation presented
here. These include creativity, self, consciousness and free will. A
theory of sleep is also presented which is consistent with the model.
In general it is argued that the computational model is appropriate
for describing and explaining the higher functions of the brain.
These are believed to occur in a region of the brain called the
cortex, and the known anatomy of the cortex appears to be consistent
with the memory surface model discussed in this talk.

HOST: Professor Gary Miller

------------------------------

Date: Mon, 10 Sep 84 17:38:55 PDT
From: Shel Finkelstein <SHEL%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - Distributed Knowledge

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

IBM San Jose Research Lab
5600 Cottle Road
San Jose, CA 95193

[...]

Thurs., Sept. 13   Computer Science Seminar
  3:00 P.M.        KNOWLEDGE AND COMMON KNOWLEDGE IN A DISTRIBUTED
  Front Aud.       ENVIRONMENT
By examining some puzzles and paradoxes, we argue
that the right way to understand distributed
protocols is by considering how messages change the
state of a system. We present a hierarchy of
knowledge states that a system may be in, and discuss
how communication can move the system's state of
knowledge up the hierarchy. Of special interest is
the notion of common knowledge. Common knowledge is
an essential state of knowledge for reaching
agreements and coordinating action. We show that in
practical distributed systems, common knowledge is
not attainable. We introduce various relaxations of
common knowledge that are attainable in many cases of
interest. We describe in what sense these notions
are appropriate, and discuss their relationship to
each other. We conclude with a discussion of the
role of knowledge in a distributed system.
J. Halpern, IBM San Jose Research Lab
Host: R. Fagin
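
The impossibility claim can be illustrated with a toy version of the
well-known "coordinated attack" (two generals) scenario, which is not
necessarily one of the puzzles the talk has in mind.  Each delivered
acknowledgement adds one level to the nesting "A knows that B knows
that ...", and since any run over an unreliable channel delivers only
finitely many messages, the nesting depth stays finite; common
knowledge would require every finite depth to hold at once.

/* Toy coordinated-attack run: count levels of nested knowledge
   attained before the first message loss. */
#include <stdio.h>
#include <stdlib.h>

main()
{
    int depth = 0;    /* levels of "knows that the other knows ..." */
    int sent;

    srand(1984);

    /* The two generals alternate acknowledgements; each message is
       lost in transit with probability 1/4, ending the run. */
    for (sent = 1; sent <= 100; sent++) {
        if (rand() % 4 == 0)
            break;
        depth++;
    }

    printf("Messages delivered: %d\n", depth);
    printf("Nested knowledge attained: %d levels, still not common knowledge.\n",
           depth);
}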


Please note change in directions due to completion of new Monterey
Road (82) exit replacing the Ford Road exit from 101. [...]

------------------------------

End of AIList Digest
********************
