AIList Digest            Monday, 10 Mar 1986       Volume 4 : Issue 47 

Today's Topics:
Article/Seminar - The TI Compact LISP Machine (Dallas ACM),
Seminars - Tools Beyond Technique (UCB) &
Knowledge and Action in the Presence of Faults (SU) &
Adaptive Networks (GTE) &
Stochastic Complexity (IBM-SJ) &
Updating Databases with Incomplete Information (SU) &
Parallel Architectures for Knowledge Bases (SMU),
Conference - 1987 Linguistics Institute

----------------------------------------------------------------------

Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Article/Seminar - The TI Compact LISP Machine (Dallas ACM)

ACM Dallas Chapter Meeting Notice
Speaker: Alfred Riccomi
Senior Member, Technical Staff
Texas Instruments
Topic: The TI Compact LISP Machine

The February 17 issue of Aviation Week is devoted to military
applications of Artificial Intelligence. One article reports on the
development, at TI, of a military LISP machine. Mr. Riccomi will
describe the machine, its near-term applications, and likely spin-offs
into the commercial world, especially in the airline industry.

Place: INFOMART, 1950 N. Stemmons Freeway (at Oak Lawn), Room 7004

Date: Tuesday, March 11, 1986, 7:30 - 8:15

------------------------------

Date: 5 Mar 86 00:24:00 GMT
From: pur-ee!uiucdcs!marcel@ucbvax.berkeley.edu
Subject: Seminar - Tools Beyond Technique (UCB)

WHEN: 12:00 noon, Wednesday, March 5th
WHERE: Canterbury House,
University of Illinois at Urbana-Champaign
TOOLS BEYOND TECHNIQUE
Marcel Schoppers
Dept of Computer Science
U of Illinois at Urbana-Champaign
In this talk I will propose yet another way to characterize AI, but
one which I hope captures the intuitions of AI researchers: that AI is
the attempt to liberate tools/machines from absolute dependence on
human control. That done, I will suggest some achievements which should,
according to this characterization of AI, demonstrate the success of
AI work. Importantly, both the characterization and those crucial
achievements contain no comparison to human capabilities. I therefore
maintain that several contemporary arguments for and against the future
success of AI are at once fallacious and beside the point. Among others:
the AI community's claim that "brains are computers too" is hardly necessary
and certainly not scientific, while Weizenbaum's "maybe computers can think,
but they shouldn't" is self-defeating. On the issue of whether artificial
intelligence will ever be achieved I will not commit myself, but at least
my characterization provides a down-to-earth criterion.

A paper on this subject (in the socio-communications literature):
"A perspective on artificial intelligence in society," Communications 9:2
(December 1985).

------------------------------

Date: Thu 6 Mar 86 06:09:35-PST
From: Oren Patashnik <PATASHNIK@SU-SUSHI.ARPA>
Subject: Seminar - Knowledge and Action in the Presence of Faults (SU)

AFLB, 13-Mar-86 : Yoram Moses (MIT)
12:30 pm in MJ352 (Bldg. 460)

Knowledge, Common Knowledge, and Simultaneous Actions
in the Presence of Faults

We show that any protocol that guarantees to perform a particular
action simultaneously at all sites of a distributed system must
guarantee that the sites attain common knowledge of particular facts
when such an action is performed. We analyze what facts become common
knowledge at various points in the execution of protocols in a simple
model of a system in which processors are liable to crash. We obtain
a new protocol for Simultaneous Byzantine Agreement that is optimal in
all of its runs. That is, rather than achieving the worst case
behavior, every run of the protocol halts at the earliest possible
time, given the pattern in which failures occur. This may happen as
early as after two rounds. We characterize precisely what failure
patterns require the protocol to run for k rounds, where 1 < k < t+2,
generalizing and simplifying the lower bound proof for Byzantine
agreement. We also show a non-trivial simultaneous action for which
popular belief would suggest that t+1 rounds would be required in the
worst case, and use our analysis to design a protocol for it that
always halts in two rounds. This work sheds considerable light on many
heretofore mysterious aspects of the Byzantine Agreement problem. It
is one of the first examples of how reasoning about knowledge can be
used to obtain improved solutions to problems in distributed computing.

This is joint work with Cynthia Dwork of IBM Almaden.
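
[The dependence of halting time on the failure pattern can be made
concrete with a toy calculation. The Python sketch below is hypothetical
and is not the Dwork-Moses protocol itself; it applies the simple
early-stopping rule "halt one round after the first clean round (a round
revealing no new crash)," under which a failure-free run halts after two
rounds and no run takes more than t+1, matching the range quoted above.

    # Toy model of round-based agreement under crash failures.
    # NOT the protocol from the talk: a hypothetical sketch of the
    # "halt one round after the first clean round" early-stopping rule.

    def rounds_to_halt(t, crash_round):
        """crash_round maps a faulty process id to the round in which
        it crashes.  Returns the halting round: one round after the
        first clean round, capped at the worst-case bound t+1."""
        for r in range(1, t + 2):                 # at most t+1 rounds
            clean = all(cr != r for cr in crash_round.values())
            if clean:                             # no new crash observed
                return min(r + 1, t + 1)
        return t + 1

    print(rounds_to_halt(3, {}))                  # 2: failure-free run
    print(rounds_to_halt(3, {0: 1, 1: 2}))        # 4: crashes delay halting
-- Ed.]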

------------------------------

Date: Thu, 6 Mar 86 14:24:29 est
From: Rich Sutton <rich%gte-labs.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Adaptive Networks (GTE)


Self-Organization, Memorization,
and Associative Recall of Sensory Information
by Brain-Like Adaptive Networks

Teuvo Kohonen, Helsinki University of Technology

The main purpose of thinking is to forecast phenomena that take place
in the environment. To this end, humans and animals must refer to a
complicated knowledge base which is somewhat vaguely called memory.
One must distinguish two main problem areas in any discussion of memory:
(1) the memory mechanism itself, and (2) the internal representations of
sensory information in the brain networks.

Most of the experimental and theoretical work has concentrated on the
first problem. Although it has been extremely difficult to detect memory
traces experimentally, the storage mechanism is theoretically still the
easier part of the problem. By contrast, it has remained almost a
mystery how a physical system can automatically extract various kinds
of abstractions from the huge number of vague sensory signals. This talk
presents some novel views and results about the formation of such
internal representations in idealized neural networks, and their
memorization. It seems that both of the above functions, viz. formation
of the internal representations and their storage, can be implemented
simultaneously by an adaptive, self-organizing neural structure which
consists of a great number of neural units arranged into a
two-dimensional network. A number of computer simulations are presented
to illustrate both the self-organized formation of sensory feature maps
and the associative recall of activity patterns from the distributed
memory.

When: March 14, 1:00 pm
Where: GTE Labs 3-131
Contact: Rich Sutton, Rich@GTE-Labs.CSNet, (617)466-4133
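
[As a rough illustration of the adaptive structure the abstract
describes, here is a minimal self-organizing-map sketch in Python/NumPy.
It is a hypothetical reconstruction, not Kohonen's own formulation: a
two-dimensional grid of units whose weight vectors adapt toward the
inputs under a shrinking neighborhood, so that a topology-preserving
feature map emerges.

    # Minimal self-organizing map sketch (hypothetical; not Kohonen's
    # own code): a 2-D grid of units whose weight vectors adapt toward
    # the inputs, yielding a topology-preserving feature map.
    import numpy as np

    def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0,
                  seed=0):
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))  # one vector per unit
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                      indexing="ij"), axis=-1)
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                frac = step / n_steps
                lr = lr0 * (1 - frac)                # decaying learning rate
                sigma = sigma0 * (1 - frac) + 0.5    # shrinking neighborhood
                # best-matching unit: grid cell whose weights are nearest x
                bmu = np.unravel_index(
                    np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
                # Gaussian neighborhood pulls the BMU and its grid
                # neighbors toward the input
                g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                           / (2 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)
                step += 1
        return weights

    # Usage: map 3-D color vectors onto the 2-D sheet; after training,
    # nearby units hold similar colors (a self-organized feature map).
    colors = np.random.default_rng(1).random((500, 3))
    som = train_som(colors)
-- Ed.]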

------------------------------

Date: 6 Mar 86 14:52:51 PST
From: CALENDAR@IBM-SJ.ARPA
Subject: Seminar - Stochastic Complexity (IBM-SJ)


IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099

CALENDAR
March 10 - 14, 1986


Computer Science Colloquium
STOCHASTIC COMPLEXITY AND THE MDL AND PMDL PRINCIPLES
J. J. Rissanen, IBM Almaden Research Center
Thursday, March 13, 3:00 P.M., Rear Auditorium

There is no rational basis in traditional statistics for the comparison
of two models unless they have the same number of parameters. Hence,
for example, the important selection-of-variables problem has a dozen
or so solutions, none of which can be preferred over the others.
Recently, inspired by the algorithmic notion of complexity, we
introduced a new concept in statistics, the Stochastic Complexity of
the observed data, relative to a class of proposed probabilistic
models. In broad terms, it is defined as the least number of binary
digits with which the data can be encoded by use of the selected
models. The stochastic complexity also represents the smallest
prediction errors which result when the data are predicted by use of
the models. Accordingly, the associated optimal model represents all
the statistical information in the data that can be extracted with the
proposed models, and for this reason its computation, which we call the
MDL (Minimum Description Length) principle, may be taken to be the
fundamental problem in statistics. In this talk, we describe a special
form of the MDL principle, which amounts to the minimization of squared
"honest" prediction errors, and we apply it to two examples of
polynomial curve fitting as well as to contingency tables. In the
first example, which calls for the prediction of weight growth of mice,
the degree of the MDL polynomial agrees with the optimal degree,
determined in retrospect after the predicted weights were seen. The
associated predictions also far surpass those made with the best
traditional statistical techniques. A fundamental theorem is given,
which permits comparison of models in the spirit of the Cramer-Rao
inequality, except that the models need not have the same number of
parameters. It also settles the issue of how the selection-of-variables
problem is to be solved.

Host: R. Arps
(Refreshments at 2:45 P.M.)
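
[The minimization of squared "honest" prediction errors lends itself to
a short sketch. The Python below is an assumed illustration, not
Rissanen's exact PMDL procedure: each observation is predicted from a
polynomial fitted only to the observations before it, and the degree
with the smallest accumulated squared prediction error is selected.

    import numpy as np

    def predictive_sse(x, y, degree, start):
        """Sum of squared 'honest' prediction errors: each y[i] is
        predicted from a polynomial fitted to points 0..i-1 only."""
        sse = 0.0
        for i in range(start, len(x)):
            coeffs = np.polyfit(x[:i], y[:i], degree)  # fit on the past only
            sse += (np.polyval(coeffs, x[i]) - y[i]) ** 2
        return sse

    def select_degree(x, y, max_degree=6):
        """Pick the degree minimizing accumulated prediction error;
        all degrees predict the same points for a fair comparison."""
        start = max_degree + 2
        return min(range(max_degree + 1),
                   key=lambda d: predictive_sse(x, y, d, start))

    # Usage on synthetic data from a noisy cubic: the selected degree
    # should typically come out as 3 rather than the maximum allowed.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 40)
    y = 1 + 2*x - 5*x**2 + 4*x**3 + rng.normal(0, 0.05, x.size)
    print(select_degree(x, y))
-- Ed.]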
[...]

------------------------------

Date: Fri 7 Mar 86 17:33:40-PST
From: Marianne Winslett <WINSLETT@SU-SCORE.ARPA>
Subject: Seminar - Updating Databases with Incomplete Information (SU)


Updating Databases With Incomplete Information
--or--
Belief Revision is Harder Than You Thought


Marianne Winslett

PhD Oral
Area X Seminar
Margaret Jacks 352
Friday, March 14, 3:15 PM

Suppose one wishes to construct, use, and maintain a database of
knowledge about the real world, even though the facts about that world
are only partially known. In the database domain, this situation
arises when database users must coax information into and out of
databases in the face of missing values and uncertainty. In the AI
domain, this problem arises when an agent has a base set of beliefs
that reflect partial knowledge about the world, and then tries to
incorporate new, possibly contradictory knowledge into the old set of
beliefs. In the logic domain, one might choose to represent such a
database as a logical theory, and view the models of the theory as
possible states of the real world.

How can new information (i.e., updates) be incorporated into the
database? For example, given the new information that "b or c is
true," how can we get rid of all outdated information about b and c,
add the new information, and yet in the process not disturb any other
information in the database? The burden may be placed on the user or
other omniscient authority to determine exactly which changes in the
theory will bring about the desired set of models. But what's really
needed is a way to specify an update intensionally, by stating some
well-formed formula that the state of the world is now known to
satisfy and letting the database management system automatically
figure out how to accomplish that update.

This talk will explore a technique for updating databases containing
incomplete information. Our approach embeds the incomplete database
and the updates in the language of first-order logic, which we believe
has strong advantages over relational tables and traditional data
manipulation languages when information is incomplete. We present
semantics and algorithms for our update operators, and describe an
implementation of the algorithms. This talk should be accessible to
all who are comfortable with first-order logic and have a passing
acquaintance with the notion of database updates.
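
[A toy propositional version of the "b or c" example may help. The
sketch below assumes a minimal-change, per-model update semantics of the
kind the abstract describes; the names and helpers are illustrative, not
the speaker's actual operators. The database is a set of possible
worlds, and the update replaces each world by its nearest worlds
satisfying the new formula, leaving unrelated facts undisturbed.

    # Toy propositional sketch of minimal-change ("possible models")
    # update; names and helpers are illustrative, not the speaker's.
    from itertools import product

    def worlds(variables):
        """All truth assignments over the given variable names."""
        return [dict(zip(variables, bits))
                for bits in product([False, True], repeat=len(variables))]

    def diff(w1, w2):
        """Variables on which two worlds disagree."""
        return {v for v in w1 if w1[v] != w2[v]}

    def update(db_worlds, formula, variables):
        """Replace each world by its minimally changed worlds satisfying
        the update (minimality = set inclusion of changed variables)."""
        candidates = [w for w in worlds(variables) if formula(w)]
        result = []
        for w in db_worlds:
            diffs = [diff(w, c) for c in candidates]
            for c, d in zip(candidates, diffs):
                if not any(d2 < d for d2 in diffs) and c not in result:
                    result.append(c)
        return result

    # The database knows a is true but nothing about b or c.  Updating
    # with "b or c" fixes b/c minimally while leaving a undisturbed.
    vs = ["a", "b", "c"]
    db = [w for w in worlds(vs) if w["a"]]
    new_db = update(db, lambda w: w["b"] or w["c"], vs)
    assert all(w["a"] for w in new_db)
    assert all(w["b"] or w["c"] for w in new_db)
-- Ed.]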

------------------------------

Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Parallel Architectures for Knowledge Bases (SMU)

Toward Computer Architectures for Database and Knowledge Base Processing
Computer Science and Engineering Seminar, Friday, March 14, 1986
Speaker: Lubomir Bic
University of California at Irvine
Location: 315 SIC
Time: 3:00 PM

The importance of parallelism has been recognized in recent years, and
a number of multiprocessor architectures claimed to be suitable for
intelligent data and knowledge base processing have been proposed.
The success of these architectures has, in most cases, been rather
modest. The message of this talk is that, in order to build
highly parallel computer architectures, new models of computation
capable of exploiting the potential of large numbers of processing
elements and memory units must first be developed. To support this
claim, two such models -- one for processing queries in a
network-oriented database system and another for extracting
information from a logic-based knowledge representation system -- will
be outlined. Both models are based on the principles of asynchronous
data-driven computation, which eliminate the need for centralized
control and shared memory.
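
[The data-driven principle can be sketched generically. The Python
below is a hypothetical illustration, not the two models of the talk:
processing elements fire as soon as their operand tokens arrive,
communicate only by sending tokens, and need no centralized control or
shared memory.

    # Generic data-driven (dataflow) sketch: nodes fire when their
    # operand tokens arrive; no central controller, no shared memory.
    import queue
    import threading

    def node(op, inputs, output):
        """A processing element: block on inputs, compute, forward."""
        args = [q.get() for q in inputs]  # fires when all operands arrive
        output.put(op(*args))

    # Wire a tiny graph: two "scans" feed an intersection node, whose
    # token feeds a sort node.  (The operators are placeholders.)
    scan1, scan2, merged, result = (queue.Queue() for _ in range(4))
    threading.Thread(target=node,
                     args=(lambda a, b: a & b, [scan1, scan2], merged)).start()
    threading.Thread(target=node,
                     args=(sorted, [merged], result)).start()

    # Tokens flow in asynchronously; arrival alone drives computation.
    scan1.put({1, 2, 3, 5})
    scan2.put({2, 3, 4, 5})
    print(result.get())                   # [2, 3, 5]
-- Ed.]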

------------------------------

Date: Sat, 8 Mar 86 15:29:19 est
From: walker@mouton.ARPA (Don Walker at mouton.ARPA)
Subject: Conference - 1987 Linguistics Institute

1987 LINGUISTICS INSTITUTE
STANFORD UNIVERSITY


The 1987 Summer Institute of the Linguistic Society of America will be
hosted by the Linguistics Department of Stanford University, from June
29 to August 7, 1987. It is co-sponsored by the Association for
Computational Linguistics.

The theme of the Institute, "Contextual and Computational Dimensions
of Language," reflects the ever-growing interest in
integrating theories of linguistic structure with theories of language
processing and models of how language conveys information in context.
The aim is to provide a forum in which it is possible to integrate a
variety of linguistic traditions, particularly linguistic theory,
computational linguistics, discourse analysis, psycholinguistics,
sociolinguistics, and artificial intelligence.

Several different kinds of courses and activities will be offered
during the six-week period of the Institute:
(i) A series of overview classes in the main subareas of
linguistics (six weeks, 3 units)
(ii) A series of one-week intensive classes intended to provide
background for the four-week courses and seminars below (June 29-July
3, 1 unit)
(iii) Four-week classes on topics related directly to the theme
of the Institute (July 13-August 7, 2 units)
(iv) Several seminars associated with research workshops will
run throughout the last four weeks. These can be taken for credit, as
part of the Stanford "directed research" program (subject to prior
approval of the workshop leader) (up to three units)
(v) A series of Wednesday lectures (e.g., on the Synthesis of
Approaches to Discourse), involving Institute participants and invited
visitors
(vi) The Association for Computational Linguistics will hold its
annual meeting during the second week of the Institute (July 6-10).

1987 marks the first time in recent years that two consecutive
Institutes have been held with the same theme. This complementarity
of the 1986 Institute held at the City University of New York and the
1987 Institute reflects remarkable changes taking place today in the
field of linguistics. Taken together, the Institutes provide the
depth and diversity necessary to cover the newly emerging subfields
and to teach the range of interdisciplinary tools and knowledge so
fundamental to new theoretical approaches. The 1987 Institute at
Stanford differs from the 1986 Institute primarily in specific course
offerings and faculty and in its focus on providing a rich
interdisciplinary research as well as teaching environment. Many of
the instructors will also be participating in research groups; in
general they will teach only one course.

The Executive planning committee is: Ivan Sag (Director), Ellen
Prince (Associate Director), Marlys Macken, Peter Sells, and Elizabeth
Traugott. David Perlmutter will be the Sapir Professor, and Joseph
Greenberg the Collitz Professor of the 1987 Institute.

For more information, write 1987 LSA Institute, Department of
Linguistics, Stanford University, Stanford, California 94305.

Preliminary List of Institute Faculty:

Judith Aissen
Elaine Anderson
Stephen Anderson
Philip Baldi
Jon Barwise
Joan Bresnan
Gennaro Chierchia
Kenneth Church
Eve Clark
Herbert Clark
Nick Clements
Charles Clifton
Philip Cohen
Robin Cooper
William Croft
Penelope Eckert
Elisabet Engdahl
Charles Ferguson
Charles Fillmore
Joshua Fishman
Lyn Frazier
Victoria Fromkin
J. Mark Gawron
Gerald Gazdar
Joseph Greenberg
Barbara Grosz
Jorge Hankamer
Jerry Hobbs
Paul Hopper
Larry Horn
Philip Johnson-Laird
Ron Kaplan
Lauri Karttunen
Martin Kay
Paul Kay
Paul Kiparsky
William Ladusaw
William Leben
Steve Levinson
Mark Liberman
Marlys Macken
William Marslen-Wilson
John McCarthy
Nils Nilsson
Barbara Partee
Fernando Pereira
David Perlmutter
Ray Perrault
Stanley Peters
Carl Pollard
William Poser
Ellen Prince
Geoffrey Pullum
John Rickford
Luigi Rizzi
Ivan Sag
Deborah Schiffrin
Peter Sells
Stuart Shieber
Candace Sidner
Brian Smith
Donca Steriade
Susan Stucky
Michael Tanenhaus
Elizabeth Traugott
Peter Trudgill
Lorraine Tyler
Thomas Wasow
Terry Winograd
Annie Zaenen
Arnold Zwicky

------------------------------

End of AIList Digest
********************
