AIList Digest            Tuesday, 31 Jan 1984      Volume 2 : Issue 11 

Today's Topics:
Techniques - Beam Search Request,
Expert Systems - Expert Debuggers,
Mathematics - Arnold Arnold Story,
Courses - PSU Spring AI Mailing Lists,
Awards - Fredkin Prize for Computer Math Discovery,
Brain Theory - Parallel Processing,
Intelligence - Psychological Definition,
Seminars - Self-Organizing Knowledge Base, Learning, Task Models
----------------------------------------------------------------------

Date: 26 Jan 1984 21:44:11-EST
From: Peng.Si.Ow@CMU-RI-ISL1
Subject: Beam Search

I would be most grateful for any information/references to studies and/or
applications of Beam Search, the search procedure used in HARPY.

Peng Si Ow
pso@CMU-RI-ISL1
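
[For context, beam search is a best-first search that expands candidates level
by level but keeps only a fixed number of the most promising partial solutions
(the "beam") at each step. The Python sketch below is a generic, editorial
illustration of the idea, not HARPY's implementation; the function names and
the toy example are invented.]

    import heapq

    def beam_search(start, expand, score, is_goal, beam_width=3, max_steps=100):
        """Generic beam search.

        expand(state) -> iterable of successor states
        score(state)  -> number, higher is better
        """
        beam = [start]
        for _ in range(max_steps):
            goals = [s for s in beam if is_goal(s)]
            if goals:
                return max(goals, key=score)
            successors = [s2 for s in beam for s2 in expand(s)]
            if not successors:
                return None
            # Pruning step: keep only the beam_width best partial solutions.
            beam = heapq.nlargest(beam_width, successors, key=score)
        return None

    # Toy usage: build the string "harpy" one letter at a time,
    # scoring partial strings by how many positions already match.
    target = "harpy"
    print(beam_search(
        start="",
        expand=lambda s: [s + c for c in "ahpry"] if len(s) < len(target) else [],
        score=lambda s: sum(a == b for a, b in zip(s, target)),
        is_goal=lambda s: s == target,
        beam_width=2))

In a HARPY-style recognizer the states would be partial phone/word sequences
scored against the acoustic input; pruning to a fixed-width beam bounds time
and memory at the cost of possibly discarding the globally best path.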

------------------------------

Date: 25 Jan 84 7:51:06-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!erh @ Ucb-Vax
Subject: Expert debuggers
Article-I.D.: uvacs.1148

See also "Sniffer: a system that understands bugs", Daniel G. Shapiro,
MIT AI Lab Memo AIM-638, June 1981
(The debugging knowledge of Sniffer is organized as a bunch of tiny
experts, each understanding a specific type of error. The program has an
in-depth understanding of a (very) limited class of errors. It consists of
a cliche-finder and a "time rover". Master's thesis.)
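
[The "tiny experts" organization described above amounts to running many
narrow error recognizers over a record of a program's behavior and collecting
whatever each one recognizes. The Python sketch below is a purely illustrative
rendering of that pattern; the expert checks, field names, and trace format
are invented, and none of this is Sniffer's actual cliche-finder or "time
rover" machinery.]

    def off_by_one_expert(trace):
        # Hypothetical check: the loop ran one more time than the data warranted.
        if trace.get("iterations") == trace.get("expected_iterations", 0) + 1:
            return "Loop appears to run one iteration too many (off-by-one)."
        return None

    def uninitialized_expert(trace):
        # Hypothetical check: a variable was read before it was ever assigned.
        if trace.get("read_before_write"):
            return "A variable is read before it is ever assigned."
        return None

    EXPERTS = [off_by_one_expert, uninitialized_expert]

    def diagnose(trace):
        """Run every tiny expert over an execution trace and collect their reports."""
        reports = []
        for expert in EXPERTS:
            report = expert(trace)
            if report:
                reports.append(report)
        return reports

    print(diagnose({"iterations": 11, "expected_iterations": 10}))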

------------------------------

Date: Thursday, 26-Jan-84 19:11:37-GMT
From: BILL (on ERCC DEC-10) <Clocksin%edxa@ucl-cs.arpa>
Reply-to: Clocksin <Clocksin%edxa@ucl-cs.arpa>
Subject: AIList entry

In reference to a previous AIList correspondent wishing to know more about
Arnold Arnold's "proof" of Fermat's Last Theorem, last week's issue of
New Scientist explains all. The "proof" is faulty, as expected.
Mr Arnold is a self-styled "cybernetician" who has a history of grabbing
headlines with announcements of revolutionary results which are later
proven faulty on trivial grounds. I suppose A.I. has to put up with
its share of circle squarers and angle trisectors.

------------------------------

Date: 28 Jan 84 18:23:09-PST (Sat)
From: ihnp4!houxm!hocda!hou3c!burl!clyde!akgua!sb1!sb6!bpa!burdvax!psuvax!bobgian@Ucb-Vax
Subject: PSU Spring AI mailing lists
Article-I.D.: psuvax.433

I will be using net.ai for occasionally reporting "interesting" items
relating to the PSU Spring AI course.

If anybody would also like "administrivia" mailings (which could get
humorous at times!), please let me know.

Also, if you want to be included on the "free-for-all" discussion list,
which will include flames and other assorted idiocies, let me know that
too. Otherwise you'll get only "important" items.

The "official Netwide course" (ie, net.ai.cse) will start up in a month
or so. Meanwhile, you are welcome to join the fun via mail!

Bob

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP: bobgian@psuvax.UUCP -or- allegra!psuvax!bobgian
Arpa: bobgian@PSUVAX1 -or- bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET CSnet: bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: 26 Jan 84 19:39:53 EST
From: AMAREL@RUTGERS.ARPA
Subject: Fredkin Prize for Computer Math Discovery

[Reprinted from the RUTGERS bboard.]

Fredkin Prize to be Awarded for Computer Math Discovery

LOUISVILLE, Ky.--The Fredkin Foundation will award a $100,000 prize for the
first computer to make a major mathematical discovery, it was announced today
(Jan. 26).

Carnegie-Mellon University has been named trustee of the "Fredkin Prize for
Computer Discovery in Mathematics", according to Raj Reddy, director of the
university's Robotics Institute, and a trustee of IJCAI (International Joint
Conferences on Artificial Intelligence) responsible for AI prizes. Reddy said the
prize will be awarded "for a mathematical work of distinction in which some of
the pivotal ideas have been found automatically by a computer program in which
they were not initially implicit."

"The criteria for awarding this prize will be widely publicized and reviewed by
the artificial intelligence and mathematics communities to determine their
adequacy," Reddy said.

Dr. Woody Bledsoe of the University of Texas at Austin will head a committee of
experts who will define the rules of the competition. Bledsoe is
president-elect of the American Association for Artificial Intelligence.

"It is hoped," said Bledsoe, "that this prize will stimulate the use of
computers in mathematical research and have a good long-range effect on all of
science."

The committee of mathematicians and computer scientists which will define the
rules of the competition includes: William Eaton of the University of Texas at
Austin, Daniel Gorenstein of Rutgers University, Paul Halmos of Indiana
University, Ken Kunen of the University of Wisconsin, Dan Mauldin of North
Texas State University and John McCarthy of Stanford University.

Also, Hugh Montgomery of the University of Michigan, Jack Schwartz of New York
University, Michael Starbird of the University of Texas at Austin, Ken
Stolarsky of the University of Illinois and Francois Treves of Rutgers
University.

The Fredkin Foundation has a similar prize for a world champion computer chess
system. Recently, $5,000 was awarded to Ken Thompson and Joseph Condon, Bell
Laboratories researchers who developed the first computer system to achieve a
Master rating in tournament chess.

------------------------------

Date: 26 Jan 84 15:34:50 PST (Thu)
From: Mike Brzustowicz <mab@aids-unix>
Subject: Re: Rene Bach's query on parallel processing in the brain

What happens when something is "on the tip of your tongue" but is beyond
recall? Often (for me at least) if the effort to recall is displaced
by some other cognitive activity, the searched-for information "pops up"
at a later time. To me, this suggests at least one background process.

-Mike (mab@AIDS-UNIX)

------------------------------

Date: Thu, 26 Jan 84 17:19:30 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: How my brain works

I find that most of what my brain does is pattern interpretation. I receive
sensory input in the form of various kinds of vibrations (i.e.
electromagnetic and acoustic) and my brain perceives patterns in this muck.
Then it attaches meanings to the patterns. Within limits, I can attach these
meanings at will. The process of logical deduction a la Socrates takes up
a negligible time-slice in the CPU.

--Charlie

------------------------------

Date: Fri, 27 Jan 84 15:35:21 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: How my brain works

I see what you mean about the question of whether the brain is a parallel
processor in conscious reasoning or not. I also feel like a little daemon that
sits and pays attention to different lines of thought at different times.

An interesting counterexample is the aha! phenomenon. The mathematician
Henri Poincare, among others, has written an essay about his experience of
being interrupted from his conscious attention somehow and becoming instantly
aware of the solution to a problem he had "given up" on some days before.
It was as though some part of his brain had been working on the problem all
along even though he had not been aware of it. When it had gotten the solution,
an interrupt occurred and his conscious mind was triggered into the awareness
of the solution.

--Charlie

------------------------------

Date: Mon 30 Jan 84 09:47:49-EST
From: Alexander Sen Yeh <AY@MIT-XX.ARPA>
Subject: Request for Information

I am getting started on a project which combines symbolic artificial
intelligence and image enhancement techniques. Any leads on past and
present attempts at doing this (or at combining symbolic a.i. with
signal processing or even numerical methods in general) would be
greatly appreciated. I will send a summary of replies to AILIST and
VISION LIST in the future. Thanks.

--Alex Yeh
--electronic mail: AY@MIT-XX.ARPA
--US mail: Rm. 222, 545 Technology Square, Cambridge, MA 02139

------------------------------

Date: 30 January 1984 1554-est
From: RTaylor.5581i27TK @ RADC-MULTICS
Subject: RE: brain, a parallel processor ?

I agree that based on my own observations, my brain appears to be
working more like a time-sharing unit...complete with slowdowns,
crashes, etc., due to overloading the inputs by fatigue, poor maintenance,
and numerous inputs coming too fast to be covered by the
time-sharing/switching mechanism!
Roz

------------------------------

Date: Monday, 30 Jan 84 14:33:07 EST
From: shrager (jeff shrager) @ cmu-psy-a
Subject: Psychological Definition of (human) Intelligence

Recommended reading for persons interested in a psychological view of
(human) intelligence:

Sternberg, R.J. (1984). "What should intelligence tests test? Implications
of a triarchic theory of intelligence for intelligence testing."
Educational Researcher, Jan 1984, Vol. 13, No. 1.

This easily read article (written for educational researchers) reviews
Sternberg's current view of what makes intelligent persons intelligent:

"The triarchic theory accounts for why IQ tests work as well as they do
and suggests ways in which they might be improved...."

Although the readership of this list is probably not interested in IQ tests
per se, Sternberg is the foremost cognitive psychologist concerned directly
with intelligence, so his view of "What is intelligence?" will be of interest.
This is reviewed quite nicely in the cited paper:

"The triachric theory of human intelligence comprises three subtheories. The
first relates intelligence to the internal world of the individual,
specifying the mental mechanisms that lead to more and less intelligent
behavior. This subtheory specifies three kinds of information processing
components that are instrumental in (a) learning how to do things, (b)
planning what to do and how to do them, and in (c) actually doing them. ...
The second subtheory specifies those points along the continuum of one's
experience with tasks or situations that most critically involve the use of
intelligence. In particular, the account emphasizes the roles of novelty
(...) and of automatization (...) in intelligence. The third subtheory
relates intelligence to the external world of the individual, specifying
three classes of acts -- environmental adaptation, selection, and shaping --
that characterize intelligent behavior in the everyday world."

There is more detail in the cited article.

(Robert J. Sternberg is professor of Psychology at Yale University. See
also, his paper in Behavioral and Brain Sciences (1980, 3, 573-584): "Sketch of
a componential subtheory of human intelligence." and his book (in press with
Cambridge Univ. Press): "Beyond IQ: A triarchic theory of human
intelligence.")

------------------------------

Date: Thu 26 Jan 84 14:11:55-CST
From: CS.BUCKLEY@UTEXAS-20.ARPA
Subject: Database Seminar

[Reprinted from the UTEXAS-20 bboard.]

4-5 Wed afternoon in Pai 5.60 [...]

Mail-From: CS.LEVINSON created at 23-Jan-84 15:47:25

I am developing a system which will serve as a self-organizing
knowledge base for an expert system. The knowledge base is currently
being developed to store and retrieve Organic Chemical reactions. As
the fundamental structures of the system are merely graphs and sets,
I am interested in finding other domains in which the system could be used.

Expert systems require a large amount of knowledge in order to perform
their tasks successfully. In order for knowledge to be useful for the
expert task it must be characterized accurately. Data characterization
is usually the responsibility of the system designer and the
consulting experts. It is my belief that the computer itself can be
used to help characterize and classify its knowledge. The system's
design is based on the assumption that the key to knowledge
characterization is pattern recognition.
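
[One concrete way to read "graphs and sets" plus retrieval by pattern: store
each entry as a set of labelled edges and rank stored entries by how much
structure they share with a query. The Python sketch below is an editorial
illustration only, with invented names and toy chemistry; it is not the system
described above.]

    def edge_set(edges):
        """edges: iterable of (node, relation, node) triples."""
        return frozenset(edges)

    class KnowledgeBase:
        def __init__(self):
            self.entries = {}  # name -> frozenset of labelled edges

        def add(self, name, edges):
            self.entries[name] = edge_set(edges)

        def retrieve(self, pattern, top_k=3):
            """Rank entries by the number of edges shared with the query pattern."""
            pattern = edge_set(pattern)
            scored = [(len(pattern & edges), name)
                      for name, edges in self.entries.items()]
            return [name for score, name in sorted(scored, reverse=True)[:top_k]
                    if score > 0]

    kb = KnowledgeBase()
    kb.add("esterification",
           [("acid", "reacts_with", "alcohol"), ("product", "is_a", "ester")])
    kb.add("hydrolysis",
           [("ester", "reacts_with", "water"), ("product", "is_a", "acid")])
    print(kb.retrieve([("acid", "reacts_with", "alcohol")]))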

------------------------------

Date: 28 Jan 84 21:25:17 EST
From: MSIMS@RUTGERS.ARPA
Subject: Machine Learning Seminar Talk by R. Banerji

[Reprinted from the RUTGERS bboard.]

MACHINE LEARNING SEMINAR

Speaker: Ranan Banerji
St. Joseph's University, Philadelphia, Pa. 19130

Subject: An explanation of 'The Induction of Theories from
Facts' and its relation to LEX and MARVIN


In his Yale thesis work, Ehud Shapiro presented a framework for
inductive inference in logic, called the incremental inductive
inference algorithm. His Model Inference System was able to infer
axiomatizations of concrete models from a small number of facts in a
practical amount of time. Dr. Banerji will relate Shapiro's work to
the kind of inductive work going on with the LEX project using the
version space concept of Tom Mitchell, and the positive focusing work
represented by Claude Sammut's MARVIN.

Date: Monday, January 30, 1984
Time: 2:00-3:30
Place: Hill 7th floor lounge (alcove)
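
[For readers unfamiliar with the version-space idea mentioned above: the
learner maintains a most-specific boundary S and a most-general boundary G
over a hypothesis space, tightening them as positive and negative examples
arrive. The Python sketch below is a simplified, editorial rendering of
Mitchell's candidate-elimination scheme for conjunctive attribute-value
hypotheses; it omits some boundary pruning steps and is not code from LEX,
MARVIN, or Shapiro's Model Inference System.]

    WILD = "?"  # matches any attribute value

    def covers(h, x):
        """True if hypothesis h matches example (or hypothesis) x everywhere."""
        return all(hi == WILD or hi == xi for hi, xi in zip(h, x))

    def generalize(h, x):
        """Minimal generalization of h so that it covers example x."""
        return tuple(hi if hi == xi else WILD for hi, xi in zip(h, x))

    def candidate_elimination(examples, domains):
        """examples: list of (attribute_tuple, is_positive); domains: per-attribute
        value sets.  Assumes at least one positive example."""
        first_positive = next(x for x, pos in examples if pos)
        S = [first_positive]                     # most specific boundary
        G = [tuple(WILD for _ in domains)]       # most general boundary
        for x, is_positive in examples:
            if is_positive:
                G = [g for g in G if covers(g, x)]
                S = [generalize(s, x) if not covers(s, x) else s for s in S]
            else:
                S = [s for s in S if not covers(s, x)]
                new_G = []
                for g in G:
                    if not covers(g, x):
                        new_G.append(g)
                        continue
                    # Specialize each wildcard just enough to exclude the negative.
                    for i, gi in enumerate(g):
                        if gi != WILD:
                            continue
                        for v in domains[i] - {x[i]}:
                            candidate = g[:i] + (v,) + g[i + 1:]
                            if any(covers(candidate, s) for s in S):
                                new_G.append(candidate)
                G = new_G
        return S, G

    # Toy usage: learn "any red object" from three labelled examples.
    domains = [{"red", "blue"}, {"small", "large"}]
    examples = [(("red", "small"), True),
                (("blue", "small"), False),
                (("red", "large"), True)]
    print(candidate_elimination(examples, domains))  # ([('red', '?')], [('red', '?')])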

------------------------------

Date: 30 Jan 84 1653 PST
From: Terry Winograd <TW@SU-AI>
Subject: Talkware seminar Mon Feb 6, Tom Moran (PARC)

[Reprinted from the SU-SCORE bboard.]

Talkware Seminar (CS 377)

Date: Feb 6
Speaker: Thomas P. Moran, Xerox PARC
Topic: Command Language Systems, Conceptual Models, and Tasks
Time: 2:15-4
Place: 200-205

Perhaps the most important property for the usability of command language
systems is consistency. This notion usually refers to the internal
(self-) consistency of the language. But I would like to reorient the
notion of consistency to focus on the task domain for which the system
is designed. I will introduce a task analysis technique, called
External-Internal Task (ETIT) analysis. It is based on the idea that
tasks in the external world must be reformulated into the internal
concepts of a computer system before the system can be used. The
analysis is in the form of a mapping between sets of external tasks and
internal tasks. The mapping can be either direct (in the form of rules)
or "mediated" by a conceptual model of how the system works. The direct
mapping shows how a user can appear to understand a system, yet have no
idea how it "really" works. Example analyses of several text editing
systems and, for contrast, copiers will be presented; and various
properties of the systems will be derived from the analysis. Further,
it is shown how this analysis can be used to assess the potential
transfer of knowledge from one system to another, i.e., how much knowing
one system helps with learning another. Exploration of this kind of
analysis is preliminary, and several issues will be raised for
discussion.
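
[As an editorial illustration of the mapping idea, one can write down an
ETIT-style mapping for a system as a table from external tasks to the internal
procedures that accomplish them, and then estimate transfer between two
systems from how much of the mapping agrees. The Python below uses invented
task names and editors; it is not Moran's notation.]

    editor_a = {  # hypothetical external-to-internal mapping for one editor
        "remove a word":   ("select word", "press delete"),
        "move a sentence": ("select sentence", "cut", "place cursor", "paste"),
    }
    editor_b = {  # hypothetical mapping for a second editor
        "remove a word":   ("select word", "press delete"),
        "move a sentence": ("delete sentence", "retype sentence"),
    }

    def transfer(source, target):
        """Fraction of shared external tasks whose internal procedure carries over."""
        shared = set(source) & set(target)
        if not shared:
            return 0.0
        return sum(source[t] == target[t] for t in shared) / len(shared)

    print(transfer(editor_a, editor_b))  # 0.5: knowing editor A helps with half the tasks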

------------------------------

End of AIList Digest
********************
