
AIList Digest Volume 3 Issue 080


AIList Digest            Friday, 21 Jun 1985       Volume 3 : Issue 80 

Today's Topics:
Seminars - A General Machine-Learning Mechanism (GM) &
Distributed Decision Procedures (IBM-SJ) &
Organisms' Internal Models (CSLI) &
Design Expert Systems (CMU) &
Unification of Logic, Function and Frames (MIT) &
Automatic Example Generation (UTexas) &
Qualitative Process Theory (CSLI) &
Architectures for Logic Programming (GE) &
Reasoning about Programs via Constraints (MIT) &
Planning as Debugging (SRI)

----------------------------------------------------------------------

Date: Wed, 19 Jun 85 15:47 EST
From: "S. Holland" <holland%gmr.csnet@csnet-relay.arpa>
Subject: Seminar - A General Machine-Learning Mechanism (GM)


Towards a General Machine-Learning Mechanism

Paul Rosenbloom
Stanford University

Thursday, June 27, 1985, 10:00 a.m.
General Motors Research Laboratories
Computer Science Department
Warren, Michigan

Machine learning is the process by which a computer can bring about
improvements in its own performance. A general machine-learning mechanism
is a single mechanism that can bring about a wide variety of performance
improvements (ultimately all required types). In this talk I will present
some recent progress in building such a mechanism. This work shows that the
combination of a simple learning mechanism (chunking) with a sophisticated
problem-solver (SOAR) can yield: (1) practice speed-ups, (2) transfer of
learning between related tasks, (3) strategy acquisition, (4) automatic
knowledge-acquisition, and (5) the learning of general macro-operators of
the type used by Korf (1983) to solve Rubik's cube. These types of learning
are demonstrated for traditional search-based tasks, such as tic-tac-toe and
the eight puzzle, and for R1-SOAR (a reformulation of a portion of the R1
expert system in SOAR).
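
[The chunking idea above can be loosely sketched as caching: once a
subgoal has been solved by search, the (state, goal) -> result pair is
stored so future encounters skip the search.  The sketch below is
illustrative only; the names and the toy task are not from SOAR.]

```python
# Loose sketch of chunking as caching of subgoal results.
# All names here are illustrative, not the SOAR implementation.

chunks = {}  # learned chunks: (state, goal) -> result

def solve(state, goal, search_fn):
    """Answer a subgoal, preferring a learned chunk over fresh search."""
    key = (state, goal)
    if key in chunks:
        return chunks[key]           # recognition: no search needed
    result = search_fn(state, goal)  # deliberate problem solving
    chunks[key] = result             # chunking: store the new association
    return result

# Toy use: "search" for the empty cell that completes a tic-tac-toe row.
def find_empty_cell(row):
    return next(i for i, c in enumerate(row) if c is None)

move = solve(("X", "X", None), "win-row", lambda s, g: find_empty_cell(s))
# A later identical subgoal is answered directly from the chunk store,
# which is the source of the practice speed-ups mentioned above.
```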

This work has been pursued in collaboration with John Laird (Xerox PARC) and
Allen Newell (Carnegie-Mellon University).

-Steve Holland

------------------------------

Date: Wed, 19 Jun 85 16:25:22 PDT
From: IBM San Jose Research Laboratory Calendar
<calendar%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR%ibm-sj.csnet@csnet-relay.arpa>
Subject: Seminar - Distributed Decision Procedures (IBM-SJ)

[Excerpted from the IBM-SJ Calendar by Laws@SRI-AI.]

IBM San Jose Research Lab
5600 Cottle Road
San Jose, CA 95193

Tues., June 25, 11:15 A.M., Aud. A
Computer Science Seminar: DECISION PROCEDURES

Distributed artificial intelligence is the study of how a group of
individual intelligent agents can combine to solve a difficult global
problem.  This talk discusses in very general terms the problems of
achieving this global goal by considering simpler, local subproblems;
we drop the usual requirement that the agents working on the
subproblems do not interact.  We are led to a single assumption, which
we call common rationality, that is provably optimal (in a formal
sense) and which enables us to characterize precisely the
communication needs of the participants in multi-agent interactions.
An example of a distributed computation using these ideas is presented.

M. Ginsberg, Stanford University
Host: J. Halpern (HALPERN@IBM-SJ)

------------------------------

Date: Wed 19 Jun 85 17:02:36-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Organisms' Internal Models (CSLI)

[Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


*NEXT* THURSDAY, June 27, 1985

2:15 p.m., Redwood Hall, Room G-19
CSLI Seminar: ``An Organism and Its Internal Model of the World''
Pentti Kanerva, CSLI
Discussion led by Alex Pentland

ABSTRACT OF NEXT WEEK'S SEMINAR
``An Organism and Its Internal Model of the World''

There is a glaring disparity in how children and computers learn
things. By and large, children are not instructed explicitly but
learn by observation, imitation, and trial and error. What kind of
computer architecture would allow a machine to learn the way children
do?
In the model I have been studying, an organism is coupled to the
world by its sensors and effectors. The organism's mind-ware consists
of a relatively small focus and a large memory. The sensors feed
information into the focus, the effectors are driven from the focus,
the memory is addressed by the contents of the focus, the contents of
the focus are stored in memory, and the memory feeds information into
the focus. The contents of the focus at a moment account for the
subjective experience of the organism at that moment.
The function of the memory is to store a model of the world for
later reference. The memory is sensitive to similarity in that
approximate retrieval cues can be used to retrieve exact information.
It is dynamic in that the present situation (its encoding) brings to
focus the consequences of similar past situations. The model sheds
light on the frame problem of robotics, and it appears that a robot
built according to this principle would learn by trial and error and
would be able to plan actions and to perform planned sequences of
actions.
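
[The similarity-sensitive retrieval described above -- approximate cues
retrieving exact stored information -- can be sketched minimally as a
nearest-neighbor lookup over bit patterns.  This is an illustration of
the property, not Kanerva's actual memory design.]

```python
# Minimal sketch of a similarity-sensitive memory: stored items are
# bit vectors, and a noisy cue retrieves the closest stored item exactly.

def hamming(a, b):
    """Number of positions at which two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

class SimilarityMemory:
    def __init__(self):
        self.items = []

    def store(self, pattern):
        self.items.append(pattern)

    def retrieve(self, cue):
        """Return the stored pattern nearest the (possibly noisy) cue."""
        return min(self.items, key=lambda p: hamming(p, cue))

mem = SimilarityMemory()
mem.store((1, 0, 1, 1, 0, 0, 1, 0))
mem.store((0, 1, 0, 0, 1, 1, 0, 1))
noisy_cue = (1, 0, 1, 0, 0, 0, 1, 0)  # first pattern with one bit flipped
recalled = mem.retrieve(noisy_cue)    # the exact stored pattern comes back
```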
Reading: ``Parallel Structures in Human and Computer Memory,''
available from Susi Parker at the Ventura Hall receptionist desk and
on line as <PKANERVA>COGNITIVA.PAPER at SU-CSLI.ARPA.
--Pentti Kanerva

------------------------------

Date: 20 Jun 85 11:16:01 EDT
From: Mary.Lou.Maher@CMU-RI-CIVE
Subject: Seminar - Design Expert Systems (CMU)


DESIGN RESEARCH CENTER BI-WEEKLY SEMINAR SERIES

COPS - A Concurrent Production System

BY
Luiz Alberto Villaca Leao
Department Of Electrical and Computer Engineering

Wednesday, June 26 at 1:30 pm in the Adamson Wing, Baker Hall

Existing tools for writing expert systems are most helpful when one wants to
emulate a single human expert working alone, without the aid of large
number-crunching programs. Few engineering problems fit this template; rather,
they tend to require multiple experts working concurrently, supported by a
number of CAD, CAM, and other tools. COPS has been designed with these
requirements in mind. It is an interpreter for a superset of the OPS5 language,
and it provides the means for implementing multiple blackboards that integrate
cooperating, concurrent expert systems running in a distributed network of
processors.
-------

Refreshments will be served at 1:15

------------------------------

Date: Thu 20 Jun 85 10:49:56-EDT
From: Monica M. Strauss <MONICA%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Unification of Logic, Function and Frames (MIT)

Date: Friday 21 June, 1985
Time: 11:00AM
Place: 8th Floor Playroom


The Uranus System -- Unification of Logic, Function and Frames

Hideyuki Nakashima
Electrotechnical Laboratory
Tsukuba, Japan


Abstract

Uranus is a knowledge representation system based on the concept of logic
programming. The basic computational mechanism is the same as that of the
famous (or infamous!) logic programming language, Prolog, with several
extensions.

One important extension is the introduction of a multiple world mechanism.
Uranus consists of several independent definition spaces called worlds.
Worlds are combined at execution time to form a context for predicate
definitions. Regarding a given world as a frame for a given concept, and
predicates as slots, you have a frame-like system in logic programming.

Another extension is along the lines of functional notations within the
semantics of logic. Uranus has only one semantics, that of logic
programming. At the same time, it has the expressive power, or
convenience, of functional programming. Lazy execution of functional forms
follows naturally, since portions are computed only when they are necessary
for unification.
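
[The multiple-world mechanism described above -- independent definition
spaces combined at execution time into a lookup context -- can be
sketched with chained dictionaries.  This is only an analogy to the
abstract's description, not the Uranus system; the concept names are
invented for illustration.]

```python
# Worlds as independent predicate-definition spaces, combined at call
# time to form a context.  Regarding a world as a frame for a concept
# and its predicates as slots gives frame-like inheritance.
from collections import ChainMap

vehicle_world = {"wheels": 4, "moves": True}  # generic concept
car_world     = {"fuel": "gasoline"}          # more specific concept

def query(predicate, *worlds):
    """Look a predicate up in a context formed from several worlds."""
    context = ChainMap(*worlds)  # earlier worlds shadow later ones
    return context[predicate]

fuel   = query("fuel", car_world, vehicle_world)    # found in car_world
wheels = query("wheels", car_world, vehicle_world)  # inherited from vehicle_world
```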

A brief demonstration of the system is scheduled following the talk.


Host: Gerald J. Sussman




REFRESHMENTS will be served.

------------------------------

Date: Thu, 20 Jun 85 15:11:31 cdt
From: briggs@ut-sally.ARPA (Ted Briggs)
Subject: Seminar - Automatic Example Generation (UTexas)


EGS: A Transformational Approach to Automatic Example Generation

by
Myung W. Kim


noon Friday June 28
PAI 5.60



In light of the important roles that examples play in AI, methods for
automatic example generation have been investigated. A system
(EGS) has been built which automatically generates examples given
a constraint specified in the Boyer-Moore logic. In EGS, examples
are generated by successively transforming the constraint formula
into the form of an example representation scheme. Several
strategies have been incorporated: testing stored examples,
solving equations, doing case analysis, and expanding
definitions. A global simplification step checks for inconsistency
and rewrites formulas into forms that are easier to handle.

EGS has been tested for the problems of controlling backward
chaining and conjecture checking in the Boyer-Moore theorem
prover. It has proven to be powerful -- its power is mainly due
to combining efficient procedural knowledge and general formal
reasoning capacity.

In this talk I will present the operational aspect of EGS and
some underlying principles of its design.
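
[EGS's transformational machinery is not reproduced here.  As a point
of contrast, the sketch below shows the naive alternative such systems
improve on: generating candidates over a small finite domain and
testing them against the constraint.  The constraint and domain are
invented for illustration.]

```python
# Naive generate-and-test baseline for example generation: enumerate
# candidate tuples and return the first one satisfying the constraint.
from itertools import product

def generate_example(constraint, domain, arity):
    """Return the first tuple in domain**arity satisfying the constraint."""
    for candidate in product(domain, repeat=arity):
        if constraint(*candidate):
            return candidate
    return None  # no example exists in this finite domain

# Constraint: x + y == 6 and x < y, over naturals up to 9.
example = generate_example(lambda x, y: x + y == 6 and x < y, range(10), 2)
# Enumeration is lexicographic, so the first satisfying pair is (0, 6).
```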

------------------------------

Date: Wed 19 Jun 85 17:02:36-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Qualitative Process Theory (CSLI)

[Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


*NEXT* THURSDAY, June 27, 1985

4:15 p.m., Redwood Hall, Room G-19
CSLI Colloquium: ``Qualitative Process Theory''
Ken Forbus, University of Illinois, Computer Science


``Qualitative Process Theory''

Things move, collide, flow, bend, stretch, break, cool down, heat up,
and boil. Intuitively we think of the things that cause changes in
physical situations as processes. Qualitative Process Theory defines
simple notions of quantity, function, and process that allow
interesting common-sense inferences to be drawn about dynamical
systems. This talk will describe the basics of the Qualitative
Process Theory, illustrate how it can be used to capture certain
aspects of different models of physical phenomena, and discuss the
claims it makes about causal reasoning. --Ken Forbus
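
[One small idea from qualitative reasoning of this kind can be
sketched as follows: quantities carry only signs, and the sign of a
combined influence is resolved by qualitative addition, which can be
ambiguous.  This is an illustrative fragment, not Forbus's actual
formalism.]

```python
# Qualitative addition of signed influences: '+', '-', '0', or '?'
# (ambiguous).  Opposing influences cannot be resolved without
# quantitative information.

def q_add(a, b):
    """Qualitative sum of two signed influences."""
    if a == b or b == "0":
        return a
    if a == "0":
        return b
    return "?"  # opposing influences: net sign is ambiguous

# Heat-flow example: flow into a pot increases its heat, evaporation
# decreases it; the net influence on heat is qualitatively ambiguous.
net = q_add("+", "-")  # '?'
```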

------------------------------

Date: Tue, 18 Jun 85 10:04:54 EDT
From: coopercc@GE-CRD
Subject: Seminar - Architectures for Logic Programming (GE)

Computer Science Seminar
General Electric R & D Center
Schenectady, N.Y.

Experimental Computer Architectures
for Logic Programming

Prof. John Oldfield
Syracuse University

Tuesday, June 25
10:30 AM, Conference Room 2, Bldg. K1
(Refreshments at 10:15)

ABSTRACT: Syracuse University is an established center
for research in logic programming languages and their
applications. In the last few years research has
commenced on ways of speeding up the execution of logic
programs by special-purpose computer architectures and
the incorporation of custom VLSI components.
The Syracuse Unification Machine (SUM) is a
coprocessor for a host computer executing LOGLISP.
Unification is a fundamental and common operation in the
execution of logic programs, and is highly recursive in
nature. SUM speeds up unification by the combination of
separate functional units operating concurrently,
high-speed pattern matching, and the use of
content-addressable memory (CAM) techniques. Unification
frequently requires a variable to be bound to something
else, such as an expression, a constant, or even another
variable. The Binding Agent of SUM holds the set of
current bindings in the form of a segmented stack, and
with the aid of a CAM made up of custom nMOS circuits
it is possible to check whether a variable is already
bound in under 150 ns. Binding is an operation which may
be carried out concurrently in most situations, and an
extra Binding Agent may be used to advantage. The
Analysis Agent is another custom nMOS component which
implements the pattern matching and case-by-case
analysis required. It is organized as a pipeline, and
uses a state machine implemented as a PLA.
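
[For readers unfamiliar with the operation SUM accelerates, here is
unification sketched in software.  Variables are strings beginning
with '?', terms are atoms or tuples; the binding environment plays the
role SUM's Binding Agent plays in hardware.  This is a textbook sketch
(without an occurs check), not SUM's algorithm.]

```python
# Minimal unification over atoms, variables ('?'-prefixed strings),
# and compound terms (tuples).  Returns extended bindings or None.

def walk(term, bindings):
    """Follow a variable's chain of bindings to its current value."""
    while isinstance(term, str) and term.startswith("?") and term in bindings:
        term = bindings[term]
    return term

def unify(a, b, bindings=None):
    bindings = dict(bindings or {})
    a, b = walk(a, bindings), walk(b, bindings)
    if a == b:
        return bindings
    if isinstance(a, str) and a.startswith("?"):
        bindings[a] = b
        return bindings
    if isinstance(b, str) and b.startswith("?"):
        bindings[b] = a
        return bindings
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            bindings = unify(x, y, bindings)
            if bindings is None:
                return None
        return bindings
    return None  # clash: distinct atoms or mismatched structures

result = unify(("likes", "?x", "mary"), ("likes", "john", "?y"))
# result == {'?x': 'john', '?y': 'mary'}
```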

(Note: the June issue of Byte Magazine contains an
informative article on this work by Phillip Robinson)

Notice to Non-GE attendees: It is necessary that we ask
you to notify Marion White ((518) 387-6138 or
WHITEMM@GE-CRD) at least two days in advance of the
seminar.

------------------------------

Date: Thu, 13 Jun 1985 11:58 EDT
From: DICK%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Reasoning via Constraints (MIT)

[Forwarded from the MIT bboard by SASW@MIT-MC.]


Tuesday, June 18
8th Floor Playroom
4:00PM


REASONING ABOUT PROGRAMS
VIA
CONSTRAINT PROPAGATION

Thomas G. Dietterich
Department of Computer Science
Oregon State University

This talk describes a program reasoning system (PRE) and its application to
problems of incremental program development in machine learning. PRE solves
the following problem: given a program (in a modified Programmer's Apprentice
notation) with tokens labeling some of the ports in the program, find the set
of possible "interpretations" of the program. That is, compute the set of
executions consistent with the given information. The characterization of
these executions should be succinct and their computation should be efficient.
To perform this task, PRE applies constraint propagation methods. The talk
will focus on (a) modifications made to the P.A. notation, (b) techniques
introduced to handle failure of local propagation, and (c) strategies for
resolving the frame problem. PRE is part of a larger system, EG, whose task is
to form (procedural) theories of the UNIX operating system through
experimentation and observation. EG's theories take the form of programs, and
PRE is applied to perform tasks of data interpretation and goal regression.
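
[Local constraint propagation, the general technique PRE applies, can
be sketched generically: each constraint fills in any port it can
deduce from its other ports, and propagation repeats until no cell
changes.  The adder constraint and cell names below are illustrative;
this is not PRE's notation.]

```python
# Generic local propagation: run constraint rules to a fixed point
# over a dictionary of cells (None marks an unknown port).

def propagate(cells, constraints):
    changed = True
    while changed:
        changed = False
        for rule in constraints:
            before = dict(cells)
            rule(cells)          # rule may fill in a deducible port
            if cells != before:
                changed = True
    return cells

# Constraint a + b = c: deduce whichever port is missing.
def adder(cells):
    a, b, c = cells["a"], cells["b"], cells["c"]
    if a is not None and b is not None and c is None:
        cells["c"] = a + b
    elif a is not None and c is not None and b is None:
        cells["b"] = c - a
    elif b is not None and c is not None and a is None:
        cells["a"] = c - b

solved = propagate({"a": 2, "b": None, "c": 5}, [adder])  # b deduced as 3
```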

Refreshments will be served.

Host: Richard C. Waters

------------------------------

Date: Tue 18 Jun 85 19:27:09-PDT
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Planning as Debugging (SRI)


PLANNING AS DEBUGGING

Reid Simmons -- MIT AI Lab / SPAR
11:00 am, Monday, June 24
Room EJ232, SRI International


We are currently building a domain independent planner which can
represent and reason about fairly complex domains. The first part of
the talk will focus on the representations used and the rationale for
choosing them. The planner uses explicit temporal representations,
based on time points and the notion of "histories". It also extends
the traditional precondition/postcondition representation of actions
to include quantification, conditionals and the ability to reason
about cumulative changes.
The second part of the talk will focus on techniques to organize
and control the search for a plan. We view planning as "debugging
a blank sheet of paper". We correct a bug (i.e., an unachieved goal)
by changing one of the underlying assumptions in the plan that are
responsible for the bug. This problem-solving approach combines
backtracking with traditional planning techniques, giving the planner
the potential for finding a solution with much less search. We also
present a simple but effective technique for choosing which plan
modification to pursue, based on maintaining a complete goal structure
of the plan.
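
[The debugging loop just described can be sketched schematically:
start from an empty plan, treat each unachieved goal as a bug, and
repair it by adding an action that achieves it, rechecking the goal
structure after each fix.  All names below are invented; the sketch
omits the backtracking and assumption revision that the real planner
performs.]

```python
# Schematic "planning as debugging": repeatedly find a bug (an
# unachieved goal) and fix it by appending an achieving action.

def plan_by_debugging(goals, state, actions):
    """actions: dict mapping an achievable fact -> the action name."""
    plan = []
    while True:
        bugs = [g for g in goals if g not in state]  # unachieved goals
        if not bugs:
            return plan                              # plan is debugged
        bug = bugs[0]
        if bug not in actions:
            return None                              # no repair available
        plan.append(actions[bug])                    # fix the bug
        state = state | {bug}                        # its effect now holds

result = plan_by_debugging(
    goals={"on(a,b)", "clear(c)"},
    state={"clear(c)"},
    actions={"on(a,b)": "stack(a,b)"},
)
# result == ["stack(a,b)"]
```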
This planner has been partially implemented and tested on
traditional blocks-world and register-transfer examples. It is
currently being applied to the problem of geologic interpretation and
to diagnosis of chip manufacturing problems.

-------

------------------------------

End of AIList Digest
********************
