AIList Digest Monday, 23 Mar 1987 Volume 5 : Issue 85
Today's Topics:
Seminars - Connectionist Networks as Models of Human Learning (SRI) &
Signals to Symbols in Neural Networks (UCB),
Courses - Approaches to AI (SU) &
Problem Solving, Learning, and Hardware Design (SU),
Conference - AAAI-87 Workshop on Real-Time Processing
----------------------------------------------------------------------
Date: Wed, 18 Mar 87 16:19:08 PST
From: lansky@sri-venice.ARPA (Amy Lansky)
Subject: Seminar - Connectionist Networks as Models of Human Learning (SRI)
Anyone interested in giving a talk, please contact Amy Lansky --
LANSKY@SRI-AI.
EVALUATING "CONNECTIONIST" NETWORKS AS MODELS OF HUMAN LEARNING
Mark A. Gluck (GLUCK@SU-PSYCH)
Stanford University
11:00 AM, MONDAY, March 23
SRI International, Building E, Room EJ228
We used adaptive network (or "connectionist") theory to extend the
Rescorla-Wagner/LMS rule for associative learning to phenomena of
human learning and judgment. In three experiments, subjects learned
to categorize hypothetical patients with particular symptom patterns
as having certain diseases. When one disease is far more likely than
another, the model predicts that subjects will substantially
overestimate the diagnosticity of the more valid symptom for the Rare
disease. This illusory diagnosticity is a learned form of "base-rate
neglect" which has frequently been observed in studies of probability
judgments. The results of Experiments 1 and 2 provided support for
this prediction in contradistinction to predictions from probability
matching, exemplar retrieval, or simple prototype learning models.
Experiment 3 addressed representational issues in the design of the
network models. When patients always have four symptoms (chosen from
four opponent pairs) rather than the statistically equivalent
presence/absence of each of four symptoms, as in Experiment 1, the
network model predicts a pattern of results quite different from
Experiment 1. The results of Experiment 3 were again consistent with
the Rescorla-Wagner/LMS learning rule as embedded within a
connectionist network.
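The Rescorla-Wagner/LMS rule named in the abstract can be sketched in a few lines: on each trial, each present cue's associative strength is adjusted in proportion to the prediction error. The following toy simulation is illustrative only (the variable names, learning rate, and two-cue setup are not from the talk):

```python
# Toy sketch of the Rescorla-Wagner/LMS (delta) rule; names, sizes,
# and the learning rate are illustrative, not details from the talk.
import random

def lms_update(weights, cues, outcome, lr=0.1):
    """One learning trial: adjust each present cue's associative
    strength in proportion to the prediction error."""
    prediction = sum(w * c for w, c in zip(weights, cues))
    error = outcome - prediction  # the "delta" driving learning
    return [w + lr * error * c for w, c in zip(weights, cues)]

# Two symptom cues; cue 0 predicts the disease, cue 1 is noise.
random.seed(0)
w = [0.0, 0.0]
for _ in range(1000):
    s0, s1 = random.randint(0, 1), random.randint(0, 1)
    w = lms_update(w, [s0, s1], outcome=s0)

print([round(x, 2) for x in w])
```

After training, the strength of the predictive cue approaches 1 while the uncorrelated cue decays toward 0 -- the competition among cues for associative strength is what distinguishes this rule from simple probability matching.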
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: Fri, 20 Mar 87 10:05:47 PST
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science
Program)
Subject: Seminar - Signals to Symbols in Neural Networks (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237B
Tuesday, March 31, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
2515 Tolman Hall
``From Signals to Symbols in Neural Network Models''
Terrence J. Sejnowski
Division of Biology
California Institute of Technology
At the earliest stages of sensory processing and at the
final common motor pathways, neural computation is best
described as signal processing. Somewhere in the nervous system
these signals are used to form internal representations and to
make decisions that appear symbolic. A first step toward under-
standing the transition from signals to symbols can be made by
studying the development and internal structure of massively-
parallel nonlinear networks that learn to solve difficult signal
identification and categorization problems. The concept of
``feature detector'' is explored in a problem concerning sonar
target identification that appears to be solved by humans and
network models in similar ways. The concept of a ``semi-
distributed population code'' is illustrated by the problem of
pronouncing English text in which invariant internal codes
emerge not at the level of single processing units, but at the
level of cell assemblies.
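Networks of the kind the abstract describes were typically one-hidden-layer nonlinear nets trained by error backpropagation. The toy sketch below trains such a net on a small categorization problem (XOR); the task, layer sizes, and learning rate are all illustrative, not details from the talk:

```python
# Toy sketch of a one-hidden-layer nonlinear network trained by
# backpropagation on a small categorization problem (XOR).
# All sizes, rates, and the task itself are illustrative.
import math, random

random.seed(1)
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

n_in, n_hid = 2, 4
# Weight rows include a trailing bias weight.
W1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
W2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in W1]
    y = sigmoid(sum(w * v for w, v in zip(W2, h + [1.0])))
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

err_before = mse()
lr = 0.5
for _ in range(20000):
    x, t = random.choice(data)
    h, y = forward(x)
    dy = (y - t) * y * (1 - y)            # output-unit error signal
    dh = [dy * W2[j] * h[j] * (1 - h[j])  # errors backpropagated to hidden units
          for j in range(n_hid)]
    hb, xb = h + [1.0], x + [1.0]
    for j in range(n_hid + 1):
        W2[j] -= lr * dy * hb[j]
    for j in range(n_hid):
        for i in range(n_in + 1):
            W1[j][i] -= lr * dh[j] * xb[i]

err_after = mse()
print(round(err_before, 3), round(err_after, 3))
```

After training, the hidden units act as learned "feature detectors" for the input patterns, in the spirit of the analysis the talk applies to the sonar and text-to-speech networks.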
---------------------------------------------------------------
UPCOMING TALKS
Apr. 28: Eran Zaidel, Psychology Dept., Brain Research Insti-
tute, UCLA
---------------------------------------------------------------
ELSEWHERE ON CAMPUS
SESAME Colloquium: Robbie Case, Ontario Institute for Studies in
Education, Monday, March 30, at 4:00 p.m., 2515 Tolman.
------------------------------
Date: Fri 20 Mar 87 18:11:03-PST
From: Nils Nilsson <NILSSON@Score.Stanford.EDU>
Subject: Course - Approaches to AI (SU)
[Forwarded from the Stanford bboard by Laws@SRI-STRIPE.]
SEMINAR ANNOUNCEMENT
CS 520 ARTIFICIAL INTELLIGENCE RESEARCH SEMINAR
APPROACHES TO ARTIFICIAL INTELLIGENCE
Tuesdays 11:00 a.m. Terman Auditorium (Televised over SITN)
Spring Quarter 1987
Convener: Nils Nilsson
The student and/or researcher approaching artificial intelligence cannot
fail to note that research is guided by a number of different paradigms.
Among the most popular are: approaches based on one form or another of
symbolic logic; approaches stressing application-specific data
structures and programs for representing and manipulating knowledge;
approaches involving machine learning; and approaches based on
psychological models of human perception and cognition. There are many
variants and combinations of all of these, and each has contributed to
our broad understanding of how to build intelligent machines. During
this seminar series in 1987, leading exponents of these paradigms will
describe the main features of their approaches, what they have achieved
so far, how they differ from other approaches, and what can be expected
in the future.
TENTATIVE SCHEDULE
Mar 31: Nils Nilsson (Stanford), ``Overview of Approaches to AI''
Apr 7: Paul Rosenbloom (Stanford), ``AI Paradigms and Cognition''
Apr 14: Bruce Buchanan (Stanford), title to be announced
Apr 21: Vladimir Lifschitz (Stanford), ``The Logical Approach to AI''
Apr 28: Martin Fischler/Oscar Firschein (SRI), ``Representation and
Reasoning in Machine Vision''
May 5: Richard Fikes (Intellicorp), ``Reasoning in Frame-Based
Representation Systems''
May 12: Terry Winograd (Stanford), ``Is There a Standard AI Paradigm?''
May 19: Hubert Dreyfus (UC Berkeley), ``AI at the Crossroads''
May 26: David Rumelhart (UC San Diego), title to be announced
[Will deal with ``connectionism'']
June 2: Ed Feigenbaum (Stanford), ``AI as an Empirical Science''
June 9: Doug Lenat (MCC), ``The Experimentalist's Approach to AI:
from Learning to Common Sense''
------------------------------
Date: 20 Mar 1987 1446-PST (Friday)
From: Tanya Walker <tanya@mojave.stanford.edu>
Subject: Course - Problem Solving, Learning, and Hardware Design (SU)
[Forwarded from the Stanford bboard by Laws@SRI-STRIPE.]
ELECTRICAL ENGINEERING DEPARTMENT-EE392H
Problem Solving, Learning, and Hardware Design
Spring Quarter, 1987 (3 units)
Instructor: Professor Daniel Weise, CIS 207, 5-3711
Time: Tuesday and Thursday 4:15 to 5:30 pm
Place: ESMB 138
The aim of this course is to understand state-of-the-art AI techniques for
planning, problem solving, and learning. This course is the starting
point for investigating "self-configurable" systems capable of becoming
expert problem solvers in given domains. Our particular domain of
interest is hardware design. The global problem is automatically
creating expert hardware designers for different types of hardware.
Extant planners, such as Tweak, Molgen, and Soar will be studied first.
We will then look at truth maintenance systems. Then we will
investigate the learning and generalization methods of Strips, Soar,
Hacker, and similar systems. We will briefly discuss domain
exploration (a la Haase and Lenat) and reflection (a la Smith).
We will then investigate using general problem solving methods to solve
problems from integrated circuit design. Examples include channel
routing, leaf cell generation, logic design, and global routing.
We will study two expert systems: Joobbani's Weaver system for channel
routing, and Kowalski's DAA system for VLSI design. They will be used
as examples of expert systems which might be automatically generated.
This will be largely a reading and discussion course. Students will be
required to write a term paper. Familiarity with basic AI techniques
will be assumed. Enrollment is by consent of the instructor.
------------------------------
Date: 22 Mar 1987 18:21-EST
From: cross@afit-ab.arpa
Subject: Conference - AAAI-87 Workshop on Real-Time Processing
The AAAI-87 Workshop committee has approved a workshop to be held on
Tuesday, July 14, 1987 entitled "Real-Time Processing in Knowledge-Based
Systems." A call for participation follows.
Workshop on Real-Time Processing in Knowledge-Based Systems
AI techniques are maturing to the point where application
in knowledge intensive, but time constrained situations is
desired. Examples include monitoring large dynamic systems such
as nuclear power plants; providing timely advice based on time
varying data bases such as in stock market analysis; sensor
interpretation and management in hospital intensive care units,
or in military command and control environments; and diagnosis
of malfunctions in airborne aircraft. The goal of the workshop
is to gain a better understanding of the fundamental issues that
now preclude real-time processing and to provide a focus for
future research. Specific issues that will be discussed include:
Pragmatic Issues: What is real-time performance? What
metrics are available for evaluating performance?
Parallel Computation: How can parallel computation be
exploited to achieve real-time performance? What performance
improvements can be gained by maximizing and integrating the
inherent parallelism at all levels in a knowledge-based system
(e.g., from the application level through the hardware level)?
Knowledge Organization Issues: What novel approaches can be
used to maximize the efficiency of knowledge retrieval?
Meta-Level Problem Solving: How can intelligent problem
solving agents reason about and react to varying time-to-solution
resources? What general purpose or domain specific examples
exist of problem solving strategies employed under different
time-to-solution constraints? What are the tradeoffs in terms of
space, quality of solution, and completeness of solution?
Complexity Issues: How can an intelligent agent reason
about the inherent complexity of a problem?
Algorithm Issues: What novel problem solving methods can be
exploited? How can specialized hardware (for example,
content-addressable memories) be exploited?
To encourage vigorous interaction and exchange of ideas
between those attending, the workshop will be limited to
approximately 30 participants (and only two from any one
organization). The workshop is scheduled for July 14, 1987, as a
parallel activity during AAAI 87, and will last for a day.
All participants are required to submit an abstract (up to
500 words) and a proposed list of discussion questions. Five
copies should be submitted to the workshop chairman by May 1,
1987. The discussion questions will help the workshop
participants focus on the fundamental issues in real-time AI
processing.
Because time at the workshop is limited,
participants will be divided into several discussion groups. A
group chairman will present a 30-minute summary of his group's
abstracts during the first session. In addition, the committee
reserves the right to arrange for invited presentations. Each
group will be assigned several questions for discussion and will
provide a summary of its discussion. The intent of the workshop
is to promote creative discussion which will spawn some exciting
ideas for research.
Workshop Chairman:
Stephen E. Cross, AFWAL/AAX, Wright-Patterson AFB OH 45433-
6583, (513) 255-5800. arpanet: cross@afit-ab.arpa
Organizing Committee:
Dr. Northrup Fowler III, Rome Air Development Center
Dr. Barbara Hayes-Roth, Stanford University
Dr. Michael Fehling, Rockwell Palo Alto AI Research Lab
Ms. Ellen Waldrum, Computer Science Laboratory, Texas
Instruments
Dr. Paul Cohen, University of Massachusetts at Amherst
Invited Talks From:
Dr. Michael Fehling, Rockwell Palo Alto AI Research Lab
Dr. Barbara Hayes-Roth, Stanford University
*Dr. Vic Lesser, University of Massachusetts at Amherst
*tentative
------------------------------
End of AIList Digest
********************