AIList Digest            Tuesday, 1 Mar 1988       Volume 6 : Issue 42 

Today's Topics:
Seminars - Massively Parallel Object Recognition (BBN) &
A Theory of Prediction and Explanation (SRI) &
Cognition and Metaphor (BBN) &
Physically Based Modeling (CMU) &
Qualitative Probabilistic Networks (MIT) &
Automated Program Recognition (BBN)

----------------------------------------------------------------------

Date: Thu 25 Feb 88 09:21:08-EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: Seminar - Massively Parallel Object Recognition (BBN)

BBN Science Development Program
AI Seminar Series Lecture

OBJECT RECOGNITION USING MASSIVELY PARALLEL HYPOTHESIS TESTING

Lewis W. Tucker
Thinking Machines Corporation
Cambridge, MA
(TUCKER@THINK.COM)

BBN Labs
10 Moulton Street
2nd floor large conference room
10:30 am, Tuesday March 1


Problems in computer vision span several layers of data representation
and computational requirements. While it is easy to see how advances in
parallel machine architectures enhance our capability in "low-level"
image analysis to process the large quantities of data in typical
images, it is less obvious how parallelism can be exploited in the
"higher" levels of vision such as object recognition.

Traditional approaches to object recognition have relied on
constraint-based tree search techniques that are not necessarily
appropriate for parallel processing.

This talk will introduce a model-based object recognition system
designed at Thinking Machines Corporation and its implementation
on the Connection Machine. The goal of this system is to be able
to recognize a large number of partially occluded objects in 2-D
scenes of moderate complexity. In contrast to previous
approaches, the system described here utilizes a massively
parallel hypothesize-and-test paradigm that avoids serial search.
Perceptual grouping of features forms the basis for generating
hypotheses; parameter space clustering accumulates weak evidence;
template matching provides verification; and conflict resolution
ensures the consistency of scene interpretation.

Results from experiments with databases ranging from 10 to 100
objects show that recognition time is largely independent of both
the complexity of the scene and the number of objects in the
database.
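The hypothesize-and-test cycle described above can be illustrated with a small serial sketch (an illustrative approximation, not the Connection Machine implementation, and restricted to 2-D translation; the talk's system also handles rotation and occlusion): every model/scene feature pairing votes for a candidate pose, votes cluster in parameter space at the true pose, and template matching verifies the winner.

```python
from collections import Counter

def hypothesize(model, scene, quantum=1.0):
    """Each model/scene feature pairing votes for a translation (dx, dy);
    votes accumulate in a quantized parameter space."""
    votes = Counter()
    for mx, my in model:
        for sx, sy in scene:
            dx = round((sx - mx) / quantum) * quantum
            dy = round((sy - my) / quantum) * quantum
            votes[(dx, dy)] += 1
    return votes

def verify(model, scene, pose, tol=0.5):
    """Template-matching verification: fraction of model features found
    near their predicted scene position under the hypothesized pose."""
    dx, dy = pose
    grid = {(round(x / tol), round(y / tol)) for x, y in scene}
    hits = sum((round((mx + dx) / tol), round((my + dy) / tol)) in grid
               for mx, my in model)
    return hits / len(model)

model = [(0, 0), (1, 0), (0, 2)]
scene = [(5, 3), (6, 3), (5, 5), (9, 9)]   # model shifted by (5, 3) plus clutter
pose, count = hypothesize(model, scene).most_common(1)[0]
print(pose, verify(model, scene, pose))    # (5.0, 3.0) 1.0
```

On the Connection Machine the doubly nested voting loop is the natural place for massive parallelism: each feature pairing can vote independently.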

------------------------------

Date: Thu, 25 Feb 88 14:36:28 PST
From: Margaret Olender <olender@malibu.ai.sri.com>
Subject: Seminar - A Theory of Prediction and Explanation (SRI)


WHEN: FRIDAY, MARCH 4th
TIME: 10:30am
WHERE: EJ228
SPEAKER: LEORA MORGENSTERN / BROWN UNIVERSITY.



WHY THINGS GO WRONG:
A FORMAL THEORY OF PREDICTION AND EXPLANATION

Leora Morgenstern
Brown University

This talk presents a theory of Generalized Temporal Reasoning. We focus
on the related problems of:
(1) Temporal Projection - figuring out all the facts that are true
in some chronicle, given a partial description of that chronicle
and
(2) Explanation - figuring out what went wrong if an expected
outcome didn't occur.

Standard logics can't handle temporal projection because of problems
such as the frame problem and the qualification problem. Simplistic
applications of non-monotonic logics also won't do the trick, as the
Yale Shooting Problem demonstrates. During the past several years, a
number of solutions have been proposed to the Yale Shooting Problem,
which either use extensions of default logics (Shoham, Kautz), or which
circumscribe over predicates specific to a theory of action
(Lifschitz, Haugh). We show that these solutions - while perfectly
valid for the Yale Shooting Problem - cannot handle the general
temporal projection problem, because they all handle either forward or
backward projection improperly.
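The Yale Shooting ambiguity itself is easy to reproduce mechanically. The sketch below (a hypothetical mini-formalization for illustration, not the theory presented in the talk) enumerates all fluent histories for the load-wait-shoot chronicle and minimizes persistence violations; both the intended model and the anomalous "gun mysteriously unloads" model survive, which is exactly why naive abnormality minimization fails:

```python
from itertools import product

# Fluents: (loaded, alive).  Chronicle: load at t=0, wait at t=1, shoot at t=2.
ACTIONS = {0: "load", 1: "wait", 2: "shoot"}

def consistent(h):
    """Enforce direct action effects on a history h[0..3] of fluent pairs."""
    for t, a in ACTIONS.items():
        if a == "load" and not h[t + 1][0]:
            return False                    # loading makes the gun loaded
        if a == "shoot" and h[t][0] and h[t + 1][1]:
            return False                    # shooting a loaded gun kills
    return True

def violations(h):
    """Every fluent change violates a persistence default (an 'abnormality')."""
    return sum(h[t][i] != h[t + 1][i] for t in range(3) for i in range(2))

histories = [h for h in product(product([False, True], repeat=2), repeat=4)
             if h[0] == (False, True) and consistent(h)]  # initially: unloaded, alive
best = min(map(violations, histories))
minimal = [h for h in histories if violations(h) == best]
outcomes = {h[3][1] for h in minimal}       # is the victim alive at t=3?
print(outcomes)                             # minimization leaves both outcomes open
```

Both `True` and `False` survive among the minimal models: abnormality counting alone cannot distinguish the intended model from the anomalous one.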

We present a solution to the generalized temporal projection problem
based on the notion that actions only happen if they are *motivated*.
We handle the non-monotonicity using only preference criteria on
models, and avoid both modal operators and circumscription axioms. We
show that our theory handles both forward projection and backward
projection properly, and in particular solves the Yale Shooting
Problem and a set of benchmark problems which other theories can't
handle. An advantage of our approach is that it lends itself to an
intuitive model for the explanation task. We present such a model,
give several characterizations of explanation within that model, and
show that these characterizations are equivalent.

This talk reports on joint work done with Lynn Stein of Brown
University.

------------------------------

Date: Mon 29 Feb 88 08:41:01-EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Seminar - Cognition and Metaphor (BBN)


BBN Science Development Program
Language & Cognition Seminar Series


COGNITION AND METAPHOR

Professor Bipin Indurkhya
Computer Science Department
Boston University

BBN Laboratories Inc.
10 Moulton Street
Large Conference Room, 2nd Floor


10:30 a.m., Wednesday, March 9, 1988


Abstract: In recent years a view of cognition has been emerging in which
metaphors play a key role. However, a satisfactory explanation of the
mechanisms underlying metaphors, and of how they aid cognition, remains
far from complete.

In particular, earlier theories of metaphors have been unable to account
for how metaphors can "create" new, and sometimes contradictory, perspectives
on the target domain.

In this talk I will address some of the issues related to the role metaphors
play in cognition. I will first lay an algebraic framework for cognition,
and then in this context I will pose the problem of metaphor. Two mechanisms
will be proposed to explain the workings of metaphors. One of these
mechanisms gives rise to what we call "projective metaphors", and it is
shown how projective metaphors can "create" new perspectives and new
ontologies on the target domain. The talk will conclude with a brief
discussion of some further implications of the theory for "Direct Reference
vs. Descriptive Reference", "Is all knowledge metaphorical?", and
"Induction and Analogies", among other things.

------------------------------

Date: Sun, 28 Feb 88 17:05:41 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: Seminar - Physically Based Modeling (CMU)


TOPIC: Physically Based Modeling For Vision And Animation

SPEAKER: Andy Witkin, Purdue University

WHEN: Thursday, March 3, 1988, 3:30-4:30 p.m.

WHERE: Wean Hall 5409

ABSTRACT

Our approach to modeling for vision and graphics uses the machinery of
physics. We will describe two current foci of our research:

To create models of real-world objects we use simulated materials that
move and deform in response to applied forces. Constraints are imposed
on these active models by applying forces that coerce them into states
that satisfy the constraints. In visual analysis, the constraint forces
are derived from images. Additionally, the user may apply forces
interactively, guiding the models towards the desired solution.
Examples of the approach include simulated pieces of springy wire
attracted to edges, and symmetry-seeking elastic bodies used to recover
three-dimensional shapes from 2-D views.

To animate active character models we use a new method called
``spacetime constraints.'' The animator specifies what the character
has to do, for instance, ``jump from here to there, clearing a hurdle in
between;'' how the motion should be performed, for instance ``don't
waste energy,'' or ``come down hard enough to splatter whatever you land
on;'' the character's physical structure---the geometry, mass,
connectivity, etc. of the parts; and the physical resources available
to the character to accomplish the motion, for instance the character's
muscles, a floor to push off from, etc. The requirements contained in
this description, together with Newton's laws, constitute a constrained
optimization problem. The solution to this problem is a physically
valid motion satisfying the ``what'' constraints and optimizing the
``how'' criteria. We will present animation of a Luxo lamp performing a
variety of coordinated motions. These realistic motions conform to such
principles of traditional animation as anticipation, squash-and-stretch,
follow-through, and timing.
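A one-dimensional toy version of the spacetime-constraints formulation might look like the following (an illustrative sketch, not the system described in the talk): heights on a time grid are the unknowns, the "what" constraints pin take-off, hurdle clearance, and landing, and coordinate descent minimizes the total squared muscle force implied by Newton's second law.

```python
# Point mass jumping over a hurdle, discretized on N+1 time samples.
N, T, m, g, HURDLE = 20, 1.0, 1.0, 9.8, 1.0
h = T / N

def force(x, t):
    # Newton: muscle force = m * acceleration + gravity compensation
    return m * (x[t + 1] - 2 * x[t] + x[t - 1]) / h ** 2 + m * g

def objective(x):
    # "how" criterion: don't waste energy (total squared force)
    return sum(force(x, t) ** 2 for t in range(1, N))

x = [0.0] * (N + 1)
x[N // 2] = HURDLE                 # "what" constraints: start, apex, landing
free = [t for t in range(1, N) if t != N // 2]

def coord_step(x, t, d=1e-3):
    # The objective is quadratic in each coordinate, so a 3-point
    # parabola fit gives the exact 1-D minimizer.
    x[t] -= d; lo = objective(x)
    x[t] += d; mid = objective(x)
    x[t] += d; hi = objective(x)
    x[t] -= d
    denom = hi - 2 * mid + lo
    if denom > 0:
        x[t] -= d * (hi - lo) / (2 * denom)

before = objective(x)
for _ in range(300):
    for t in free:
        coord_step(x, t)
after = objective(x)
print(f"energy objective: {before:.0f} -> {after:.0f}")
```

The pinned samples play the role of the "what" constraints; everything between them is shaped purely by the physics and the energy criterion, which is the essence of the spacetime-constraints idea.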

We will conclude with a videotape presenting an overview of our recent
vision and animation work.

------------------------------

Date: Monday, 1 February 1988 11:31-EST
From: Paul Resnick <pr at ht.ai.mit.edu>
Subject: Seminar - Qualitative Probabilistic Networks (MIT)

[Excerpted from the IRList Digest.]

Thursday, 4 February 4:00pm  Room: NE43, 8th floor Playroom

The Artificial Intelligence Lab
Revolving Seminar Series

Qualitative Probabilistic Networks
Mike Wellman

Many knowledge representation schemes model the world as a collection of
variables connected by links that describe their interrelationships.
The representations differ widely in the nature of the fundamental
objects and in the precision and expressiveness of the relationship
links. Qualitative probabilistic networks occupy a region in
representation space where the variables are arbitrary and the
relationships are qualitative constraints on the joint probability
distribution among them.

Two basic types of qualitative relationship are supported by the
formalism. Qualitative influences describe the direction of the
relationship between two variables and qualitative synergies describe
interactions among influences. The probabilistic semantics of these
relationships justify sound and efficient inference procedures based on
graphical manipulations of the network. These procedures answer queries
about qualitative relationships among variables separated in the
network. An example from medical therapy planning illustrates the use
of QPNs to formulate tradeoffs by determining structural properties of
optimal assignments to decision variables.
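The sign algebra behind such queries is simple enough to sketch (a toy version of qualitative sign propagation, with a hypothetical therapy network for illustration): influences multiply along a chain, parallel paths combine by sign addition, and a '+' path meeting a '-' path yields the ambiguous sign '?' that flags a genuine tradeoff.

```python
# Qualitative signs: '+', '-', '0' (no influence), '?' (ambiguous).
MULT = {('+', '+'): '+', ('+', '-'): '-', ('-', '+'): '-', ('-', '-'): '+'}

def sign_mult(a, b):            # chaining influences along a path
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return MULT[(a, b)]

def sign_add(a, b):             # combining parallel paths
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'

def influence(edges, source):
    """Propagate the qualitative influence of `source` to a fixpoint."""
    sign = {source: '+'}
    changed = True
    while changed:
        changed = False
        for (u, v), s in edges.items():
            if u in sign:
                new = sign_add(sign.get(v, '0'), sign_mult(sign[u], s))
                if new != sign.get(v, '0'):
                    sign[v] = new
                    changed = True
    return sign

# Hypothetical therapy network: treatment shrinks the tumor (good for
# survival) but causes toxicity (bad for survival) - a classic tradeoff.
edges = {('treat', 'tumor'): '-', ('treat', 'toxicity'): '+',
         ('tumor', 'survival'): '-', ('toxicity', 'survival'): '-'}
print(influence(edges, 'treat'))
```

Here the query "how does treatment affect survival?" returns '?', exposing the tradeoff between the therapeutic path and the toxic path without any numeric probabilities.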

------------------------------

Date: Thursday, 4 February 1988 11:05-EST
From: Paul Resnick <pr at ht.ai.mit.edu>
Subject: Seminar - Automated Program Recognition (BBN)

[Excerpted from the IRList Digest.]


Thursday, 11 February 4:00pm  Room: NE43, 8th floor Playroom

The Artificial Intelligence Lab
Revolving Seminar Series

Automated Program Recognition
Linda Wills

By recognizing familiar algorithmic fragments and data structures in a
program, an experienced programmer can understand the program, based on
the known properties of the structures found. Automating this
recognition process will make it easier to perform many tasks which
require program understanding, e.g., maintenance, modification, and
debugging. This talk describes a recognition system which automatically
identifies occurrences of stereotyped computational structures in
programs. The system can recognize these standard structures, even
though they may be expressed in a wide range of syntactic forms or they
may be in the midst of unfamiliar code. It does so systematically by
using a parsing technique. Two important advances have made this
possible. The first is a language-independent graph representation for
programs, which canonicalizes many syntactic features of programs. The
second is an efficient graph parsing algorithm.
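As a rough analogy to that canonicalization (a toy sketch using Python's `ast` module, far cruder than the language-independent flow-graph representation described in the talk), two syntactically different summations can be reduced to the same coarse structural signature:

```python
import ast

def signature(src):
    """Crude structural signature: ignores surface syntax, keeps the
    'loop + additive accumulation' cliche."""
    sig = set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, (ast.For, ast.While)):
            sig.add('loop')
        if isinstance(node, ast.AugAssign) and isinstance(node.op, ast.Add):
            sig.add('accumulate-add')
    return frozenset(sig)

for_version = (
    "total = 0\n"
    "for x in xs:\n"
    "    total += x\n"
)
while_version = (
    "total = 0\n"
    "i = 0\n"
    "while i < len(xs):\n"
    "    total += xs[i]\n"
    "    i += 1\n"
)
print(signature(for_version) == signature(while_version))   # same cliche found
```

A real recognizer must of course also canonicalize dataflow and tolerate interleaved unfamiliar code, which is what motivates the graph representation and graph parsing algorithm mentioned above.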

------------------------------

End of AIList Digest
********************
