IRList Digest Volume 3 Number 08

eZine's profile picture
Published in 
IRList Digest
 · 1 year ago

IRList Digest           Thursday, 16 April 1987      Volume 3 : Issue 8 

Today's Topics:
Seminar - Information Retrieval by Text Skimming
- Dissert. Prop.: User Models in Advisory Systems
- Relevant Precedents in Case-Based Reasoning
- Adaptive User Models for User Interfaces
- Experimental Systems for Information Retrieval
Call for Papers - COLING 88

News addresses are ARPANET: fox@vtopus.cs.vt.edu BITNET: foxea@vtvax3.bitnet
CSNET: fox@vt UUCPNET: seismo!vtisr1!irlistrq

----------------------------------------------------------------------

Date: 9 Mar 87 14:50:04 EST
From: Edward.Gibson@cad.cs.cmu.edu
Subject: Information retrieval by Text Skimming

[forwarded from NL-KR Digest (3/13/87 14:47:48) Volume 2 Number 14]

COMPUTATIONAL LINGUISTICS RESEARCH SEMINAR
Speaker: Michael Mauldin, CMU Computer Science
Date: Thursday, March 12
Time: 12:00 noon
Place: Porter Hall 125-C
Title: Information Retrieval by Text Skimming.

ABSTRACT

I will report on the progress I have made for my thesis entitled
``Information Retrieval by Text Skimming.''

Most information retrieval systems today are word based. But simple
word searches and frequency distributions do not provide these systems
with an understanding of their texts. Full natural language parsers
are capable of deep understanding within limited domains, but are too
brittle and slow for general information retrieval.

My dissertation is an attempt to bridge this gap by using a text
skimming parser as the basis for an information retrieval system that
partially understands the texts stored in it. The objective is to
develop a system capable of retrieving a significantly greater fraction
of relevant documents than is possible with a keyword based approach,
without retrieving a larger fraction of irrelevant documents. As part
of my dissertation, I am implementing a full-text information retrieval
system called FERRET (Flexible Expert Retrieval of Relevant English
Texts). FERRET will provide information retrieval for the UseNet News
system, a collection of 247 news groups covering a wide variety of
topics. Currently FERRET reads SCI.ASTRO, the Astronomy news group,
and part of my investigation will be to demonstrate the addition of new
domains with only minimal hand coding of domain knowledge. FERRET will
acquire the details of a domain automatically using a script learning
component.
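The word-based baseline the abstract contrasts with text skimming can be sketched in a few lines; the documents, index structure, and function names below are illustrative only, not part of FERRET.

```python
# A minimal keyword-based retriever of the kind the abstract describes
# as the baseline: simple word matching with no understanding of text.
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def keyword_search(index, query):
    """Return ids of documents containing every query word."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "the comet was visible near the horizon",
    2: "observers reported a bright object with a long tail",
}
index = build_index(docs)

# Word matching finds document 1 but misses document 2, even though
# both describe a comet -- the gap a skimming parser aims to close.
print(keyword_search(index, "comet"))  # → {1}
```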

------------------------------

Date: Tue, 10 Mar 87 10:58:32 EST
From: tim@linc.cis.upenn.edu (Tim Finin)
Subject: Acquisition of User Models

[forwarded from AI-ED Digest Friday, 13 Mar 1987 Volume 2 : Issue 11]

Dissertation Proposal Presentation
Computer and Information Science
University of Pennsylvania

IMPLICIT ACQUISITION OF USER MODELS
IN COOPERATIVE ADVISORY SYSTEMS

Robert Kass

User modelling systems to date have relied heavily on user models that
were hand-crafted for use in a particular situation. Recently,
attention has focused on the feasibility of General User Models,
models which can be transferred from one situation to another with
little or no modification. Such a general user model could be
implemented as a modular component which can then be easily integrated
into diverse systems. This paper addresses one class of general user
models, those which are general with respect to the underlying domain of
the application. In particular, a domain independent user modelling
module for cooperative advisory systems is discussed.

A major problem in building user models is the difficulty of acquiring
information about the user. Traditional approaches have relied heavily
on information which is pre-encoded by the system designer. For a user
model to be domain independent, acquisition of knowledge will have to be
done implicitly, i.e., knowledge about the user must be acquired
during his interaction with the system.

The research proposed in this paper focuses on domain independent
implicit user model acquisition techniques for cooperative advisory
systems. These techniques have been formalized as a set of model
acquisition rules which will serve as the basis for the implementation
of the model acquisition portion of a general user modelling module.
The acquisition rules have been developed by studying a large number
of conversations between advice-seekers and an expert. The rules
presented are capable of supporting most of the modelling requirements
of the expert in these conversations. Future work includes implementing
these acquisition rules in a general user modelling module to test their
effectiveness and domain independence.

10:00 am Wednesday, March 18
Moore 554

Committee: Tim Finin (advisor)
Aravind Joshi
Bonnie Webber
Elaine Rich (MCC)
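The implicit-acquisition idea in the abstract can be sketched as rules that each inspect a user utterance and, when they fire, record a belief about the user. The rules, jargon terms, and model attributes below are hypothetical illustrations, not Kass's actual rule set.

```python
# Hypothetical sketch of implicit user-model acquisition: knowledge
# about the user is inferred from the interaction itself, not
# pre-encoded by the system designer.
def rule_mentions_term(utterance, model):
    """If the user employs a technical term unprompted, assume familiarity."""
    jargon = {"inode", "symlink", "mutex"}  # assumed domain vocabulary
    for term in jargon:
        if term in utterance.lower():
            model.setdefault("knows", set()).add(term)

def rule_asks_definition(utterance, model):
    """A 'what is X?' question suggests the user does not know X."""
    text = utterance.lower()
    if text.startswith("what is"):
        term = text.removeprefix("what is").strip(" ?")
        model.setdefault("unknown", set()).add(term)

def acquire(utterances):
    """Run every acquisition rule over each utterance in turn."""
    model = {}
    for u in utterances:
        rule_mentions_term(u, model)
        rule_asks_definition(u, model)
    return model

model = acquire(["How do I remove a symlink?", "What is a race condition?"])
print(model)
```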

------------------------------

Date: Mon, 16 Mar 87 16:46:45 EST
From: tim@linc.cis.upenn.edu (Tim Finin)
Subject: Colloquium: Applying Precedents in a Case-Based Reasoner

[forwarded from NL-KR Digest (3/18/87 22:13:11) Volume 2 Number 17]

Colloquium
Computer and Information Science
University of Pennsylvania

"Applying Relevant Precedents in a Case-Based Reasoning System"
Kevin D. Ashley
Department of Computer and Information Science
University of Massachusetts at Amherst

The law is an excellent domain in which to study Case-Based Reasoning
("CBR") problems, since it espouses a doctrine of precedent in which prior
cases are the primary tools for justifying legal conclusions. The law
is also a paradigm for adversarial CBR; there are "no right answers",
only arguments pitting interpretations of cases and facts against each
other.

This talk will demonstrate techniques employed in the HYPO program for
representing and applying case precedents and hypothetical cases to
assist an attorney in evaluating and making arguments about a new fact
situation. HYPO performs case-based reasoning and, in particular,
models legal reasoning in the domain of trade secrets law. HYPO's key
elements include: (1) a structured case knowledge base ("CKB") of
actual legal cases; (2) an indexing scheme ("dimensions") for
retrieval of relevant precedents from the CKB; (3) techniques for
analyzing a current fact situation ("cfs"); (4) techniques for
"positioning" the cfs with respect to relevant precedent cases in the
CKB and finding the most on-point cases ("mopc"); (5) techniques for
manipulating cases (e.g., citing, distinguishing, hybridizing); (6)
techniques for perturbing the cfs to generate hypotheticals that test
the sensitivity of the cfs to changes, particularly with regard to
potentially adverse effects of new damaging facts coming to light and
existing favorable ones being discredited; and (7) the use of "3-ply"
argument snippets to dry-run and debug an argument.

An extended example of HYPO in action on a sample trade secrets case
will be presented. The example will demonstrate how HYPO uses
"dimensions", "case-analysis-record" and "claim lattice" mechanisms
to perform indexing and relevancy assessment of precedent cases
dynamically and how it compares and contrasts cases to come up with
the best precedents pro and con a decision.
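The dimension-based indexing and the "most on-point cases" ranking described above can be sketched as set operations; the case names and dimensions below are invented for illustration and are not HYPO's actual case base.

```python
# Sketch of HYPO-style "dimension" indexing: each precedent and the
# current fact situation (cfs) are described by the dimensions that
# apply to them, and the most on-point cases (mopc) are those sharing
# the largest set of applicable dimensions with the cfs.
cases = {
    "Alpha v. Beta":   {"disclosure-to-outsiders", "security-measures"},
    "Gamma v. Delta":  {"security-measures", "employee-switched-jobs"},
    "Epsilon v. Zeta": {"employee-switched-jobs"},
}
cfs = {"security-measures", "employee-switched-jobs"}

def most_on_point(cases, cfs):
    """Rank precedents by shared dimensions; return the best tier."""
    overlap = {name: dims & cfs for name, dims in cases.items()}
    best = max(len(d) for d in overlap.values())
    return sorted(name for name, d in overlap.items() if len(d) == best)

print(most_on_point(cases, cfs))  # → ['Gamma v. Delta']
```

A fuller model would also order cases into a claim lattice by set inclusion of their overlaps, so that more-on-point cases dominate less-on-point ones.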

------------------------------

Date: 17 Mar 87 10:37:44 EST
From: Jeffrey.Bonar@isl1.ri.cmu.edu
Subject: Ted Selker talk

[forwarded from AI-ED Digest Thursday, 19 Mar 1987 Volume 2 : Issue 12]

A joint Learning Research and Development Center / Computer Science
Seminar on Human-Computer Interaction will be given by Ted Selker
of IBM Watson Research Laboratories. The talk will be on Monday, March 23,
at 11:00am in the second-floor Auditorium of the Learning Research and
Development Center on the Pitt campus. The abstract appears below.

_________________________________________________________________________

Adaptive User Models: A New Paradigm for User Interfaces

Ted Selker

IBM T. J. Watson Research Laboratories
Yorktown NY, Selker@IBM.com


This talk will be about a paradigm for using Artificial
Intelligence (AI) learning technology for improving user
interfaces. AI research efforts have produced flexible and powerful
programming environments and knowledge representation languages that
are so complex that they take months to master. Current
Computer Aided Instruction (CAI) systems are not good at teaching
people how to use complex systems. CAI systems have a
tendency to concentrate on comprehensive teaching at the expense of
user exploration and discovery. We believe that interfaces for
complex systems can be built that offer users learning support
when needed, without interrupting their tasks.

Cognitive Adaptive Computer Help (COACH) is a system for programming in
LISP that embodies these ideas. The system creates a model of the user
in an attempt to understand a user's needs. COACH watches a
programmer's progress, providing them with syntax examples and insight
into the philosophical underpinnings of LISP as they need it. It
ceases to interfere in places where a user has demonstrated expertise.
COACH text reappears when a programmer once again needs help.
Whether giving help (user initiated explanation), or acting as an
advisor (COACH initiated explanation), COACH presents information
at the syntactic level at which a user has shown competence. COACH was
written in Common Lisp with Flavors to run on a Symbolics 3645.
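The adaptive behaviour described above, ceasing to interfere once expertise is demonstrated and reappearing after errors, can be sketched as a simple competence counter. The threshold, class, and construct names are assumptions for illustration, not COACH internals.

```python
# Illustrative sketch of COACH-style adaptive help: track how often the
# user succeeds with each construct, suppress help once competence is
# demonstrated, and re-enable it when the user stumbles again.
class HelpModel:
    COMPETENT_AFTER = 3  # successes before help is suppressed (assumed)

    def __init__(self):
        self.successes = {}

    def record(self, construct, ok):
        """Update the model after the user attempts a construct."""
        if ok:
            self.successes[construct] = self.successes.get(construct, 0) + 1
        else:
            # An error resets demonstrated expertise for that construct.
            self.successes[construct] = 0

    def should_offer_help(self, construct):
        return self.successes.get(construct, 0) < self.COMPETENT_AFTER

m = HelpModel()
for _ in range(3):
    m.record("defun", ok=True)
print(m.should_offer_help("defun"))  # → False (expertise demonstrated)
m.record("defun", ok=False)
print(m.should_offer_help("defun"))  # → True (help reappears)
```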

------------------------------

Date: Tue, 17-MAR-1987 08:32 EST
From: <FOXEA%VTVAX3.BITNET@wiscvm.wisc.edu>
Subject: talk at Univ. of Chicago Graduate Library School 4/9/87

Title --
Experimental Systems for Testing the Applicability of Advanced Retrieval
Methods: Lessons Learned from the SMART and CODER Systems

Speaker -- Edward A. Fox, Dept. of Computer Science, Virginia Tech

Abstract --
In recent years it has been repeatedly shown that
conventional information storage and retrieval methods,
generally used for searching bibliographic databases or online
public access catalogs (OPACs), are not very effective in finding
all and only those items that are relevant to a user's needs.
Building upon several different theories, experimental systems
have been developed that achieve a much higher level of performance.
The SMART system, which originated in the 1960's, was
redone in the early 1980's to operate in the UNIX environment
and to facilitate testing of theories based on probability and
statistics approaches. Originally developed at Cornell
University, SMART has been used at a number of locations.
Modifications have been made at Virginia Tech to support
searching of a large OPAC, so the relative effectiveness of
advanced methods can for the first time be compared in a large,
realistic test environment.
The CODER (COmposite Document Expert/extended/effective
Retrieval) project was launched at Virginia Tech in 1984 to
develop a more flexible testbed than SMART, one that would allow
exploration of the use of various artificial intelligence
methods for information analysis and retrieval. A few of the
present plans are to extend the system to use large collections
stored on CD ROM, and to compare the effect of different user
models on the ability of CODER to adapt to the varying interaction
styles and needs of end users.
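The advanced retrieval methods the talk refers to grew out of the vector-space model pioneered in SMART; a minimal textbook version is sketched below. The documents and weighting choices are illustrative, not SMART code.

```python
# A minimal vector-space retrieval sketch in the SMART tradition:
# documents and the query become tf-idf weighted vectors, and
# documents are ranked by cosine similarity to the query.
import math
from collections import Counter

docs = [
    "online catalog search of bibliographic records",
    "probability models for document retrieval",
    "library catalog access for end users",
]

def tf_idf_vectors(texts):
    """Build a tf-idf vector (dict of word -> weight) per document."""
    tokenized = [t.lower().split() for t in texts]
    n = len(tokenized)
    df = Counter(w for toks in tokenized for w in set(toks))
    idf = {w: math.log(n / df[w]) for w in df}
    vectors = [{w: c * idf[w] for w, c in Counter(toks).items()}
               for toks in tokenized]
    return vectors, idf

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vectors, idf = tf_idf_vectors(docs)

def search(query):
    """Return document indices ranked by similarity to the query."""
    q = {w: c * idf.get(w, 0.0)
         for w, c in Counter(query.lower().split()).items()}
    return sorted(range(len(docs)),
                  key=lambda i: cosine(q, vectors[i]), reverse=True)

print(search("catalog search"))  # catalog documents rank first
```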

------------------------------

Date: Tue, 17 Mar 87 07:59:44 est
From: vtcs1::in% <walker@flash.bellcore.com>
Subject: Announcement and Call for Papers for COLING-88

COLING-88
12th International Conference on Computational Linguistics
22-27 August 1988, Budapest, Hungary

ANNOUNCEMENT AND CALL FOR PAPERS


Papers are invited on all aspects of computational linguistics in a broad
sense, including but not limited to:

> theoretical issues of CL in its relations to linguistics, mathematics,
computer science and cognitive science
> computational models of (sub)systems of natural language and of human
communication (phonemics, morphemics, syntax, semantics, pragmatics,
parsing and generation, discourse, speech acts and planning)
> linguistic contributions to
- natural language dialog systems, intelligent and cooperative question
answering
- machine (aided) translation
- speech understanding and voice output procedures
- systems for text generation
- systems for use and preparation of dictionaries for humans and machines
- intelligent text editors
> knowledge representation and inferencing
- language comprehension
- automatic creation of knowledge bases from texts
> hardware and software support for language data processing
> computational tools for language learning and teaching

Papers should report on substantial, original and unpublished research and
should indicate clearly the position of the work described within the
context of the research in the given domain and emphasize what new results
have been achieved.

Authors should submit four (4) copies of an extended abstract not
exceeding seven (7) double-spaced pages plus a title page including the
name(s) of the author(s), complete address, a short (five-line)
summary, and a specification of the topic area.

Abstracts must be received not later than 10 December 1987 by the
Chairperson of the Program Committee:
Dr. Eva Hajicova (COLING-88)
Charles University
Faculty of Mathematics, Linguistics
Malostranske n. 25
CS-118 00 Praha 1, Czechoslovakia

Authors will be notified of acceptance by 28 February 1988. Camera-ready
copies of final papers must be received by 30 April 1988.

Inquiries about the conference, exhibitions, and demonstrations (live and
video) should be directed to:
COLING-88 Secretariat
c/o MTESZ Congress Bureau
Kossuth ter 6-8
H-1055 Budapest, Hungary
telex 225792 MTESZ H

COLING-88 is sponsored by the International Committee on Computational
Linguistics. It is organized by the John von Neumann Society for
Computing Sciences in cooperation with the Computer and Automation
Institute and the Institute for Linguistics, both of the Hungarian
Academy of Sciences.

COLING-88 will be preceded by tutorials and workshops, and immediately
followed by the 3rd EURALEX Congress on all aspects of lexicography,
also to be held in Budapest.

------------------------------

END OF IRList Digest
********************
