NL-KR Digest      Mon Jun  6 19:28:39 PDT 1994      Volume 13 No. 25 

Today's Topics:

Position: Res. Asst. in NLP Know. Acquis., University of Manchester
CFP: MLnet ML / Know. Acquis. summer school, Sep 94, Dourdan
Announcement: ICCS'94 Research Award, Aug 94, College Park
Announcement: Preprint available - Robust Reasoning by Ron Sun

Subscriptions, requests, policy: nl-kr-request@cs.rpi.edu
Submissions: nl-kr@cs.rpi.edu
Back issues are available from host ftp.cs.rpi.edu [128.213.3.254] in
the files nl-kr/Vxx/Nyy (e.g. nl-kr/V01/N01 for V1#1), or by gopher at
cs.rpi.edu, Port 70, choose RPI CSLab Anonymous FTP Server. Mail requests
will not be promptly satisfied. Starting with V9, there is a subject index
in the file INDEX. Back issues and automated index are also available from
ai.sunnyside.com:/pub/nl-kr via anonymous ftp and gopher, www, etc.
BITNET subscribers: please use the UNIX LISTSERVer for nl-kr as given above.
You may send submissions to NL-KR@cs.rpi.edu as above
and any listserv-style administrative requests to LISTSERV@AI.SUNNYSIDE.COM.
NL-KR is brought to you through the efforts of Chris Welty (weltyc@cs.rpi.edu)
and Al Whaley (al@sunnyside.com).

-----------------------------------------------------------------------

To: nl-kr@cs.rpi.edu
Subject: Position: Res. Asst. in NLP Know. Acquis., University of Manchester
Date: Thu, 26 May 94 17:36:47 +0000
From: Effie Ananiadou <effie@ccl.umist.ac.uk>



UMIST

Centre for Computational Linguistics
Post of Research Assistant --- Knowledge Acquisition from
Corpora and Generation.

We are seeking a suitable candidate to fill a research post
on a project funded by Matsushita Electric Industrial Co., Ltd.
(Panasonic).
The post, which begins as soon as possible, is initially for two
years with a possible extension of one year. The project deals with
the construction of tools for knowledge acquisition from textual
corpora and for text generation. We are dealing with technical
texts/sublanguages. The applicant should have a good first
degree and preferably postgraduate qualifications in
artificial intelligence/computing. Applicants should have a
good knowledge of C in a UNIX environment. Knowledge of
computational linguistics, especially generation, is important.
An interest in or familiarity with hypertext would be an
advantage. The applicant will join a team collaborating
with the Japan-based Matsushita Research Centre. Salary will be
between 12,828 and 18,855 pounds per annum, depending on age and
experience. Please quote reference MA/RA.

Applications, including full curriculum vitae and the names of
two referees, should be sent to:

Dr. S. Ananiadou
(re: Matsushita RA)
Centre for Computational Linguistics
University of Manchester Institute of Science and
Technology,
PO Box 88
Manchester, UK
M60 1QD

to arrive as soon as possible. Further information may be
obtained from
the above address or by phoning 061-200 3100.
An equal opportunity employer.






Sofia Ananiadou                          effie@ccl.umist.ac.uk
Centre for Computational Linguistics     effie@cclsun.uucp
UMIST
PO Box 88
Sackville Street
Manchester, UK                           tel: +44.61.200.3082 (direct)
M60 1QD                                  fax: +44.61.200.3099



-----------------------------------------------------------------------

To: comp-ai-nlang-know-rep@uunet.uu.net
From: Dolores.Canamero@lri.fr (Dolores Canamero)
Subject: CFP: MLnet ML / Know. Acquis. summer school, Sep 94, Dourdan
Date: 27 May 1994 15:42:25 GMT
Reply-To: Dolores.Canamero@lri.fr (Dolores Canamero)


=================================================================
MLnet Summer School on Machine Learning and Knowledge Acquisition

MLSS'94

September 5 - 10, 1994, Dourdan, France
=================================================================


ORGANIZATION

Organizer: Celine Rouveirol, LRI, University of Paris-South, France.

Local organization and secretariat:

Dolores Canamero
LRI, Bat. 490, Universite Paris-Sud
F-91405 Orsay Cedex, France
Tel.: +33-1-69.41.66.26
Fax: +33-1-69.41.65.86
E-mail: mlss94@lri.lri.fr


SPONSORS

CEC (Commission of the European Communities) through the MLnet
Network of Excellence (Project 7115); PRC - IA (Projet de Recherche Coordonne,
groupe Intelligence Artificielle); Universite Paris-Sud, Centre d'Orsay,
Division de la Recherche; Sun France; INRIA (Institut National de Recherche
en Informatique et en Automatique).


AIM OF THE SCHOOL

The aim of the school is to provide training in the latest
developments in Machine Learning and Knowledge Acquisition,
both to AI researchers and to industrial practitioners who are
investigating possible applications of these techniques. Invited
seminars and software demonstrations will be organized in the
evenings.

LECTURERS

Agnar AAMODT - University of Trondheim, Norway
Francesco BERGADANO - University of Catania, Italy
Gilles BISSON - LIFIA, Grenoble, France
Ivan BRATKO - Josef Stefan Institute, Ljubljana, Slovenia
Wray BUNTINE - RIACS/NASA Ames, Moffett Field, CA, USA
Leslie KAELBLING - Brown University, USA
Steve MUGGLETON - University of Oxford, UK
Celine ROUVEIROL - LRI, University of Paris-South, France
Lorenza SAITTA - University of Torino, Italy
Derek SLEEMAN - University of Aberdeen, UK
Maarten VAN SOMEREN - University of Amsterdam, NL
Bob WIELINGA - University of Amsterdam, NL
Stefan WROBEL - GMD, Bonn, Germany


PROGRAM
=======

Monday, September 5
-------------------

09h00 Case-Based Reasoning [3h]
Agnar Aamodt (University of Trondheim, Norway)

Case-based reasoning (CBR) is an integrated approach to
machine learning and problem solving, emphasizing the learning
and reuse of case-specific rather than generalized knowledge.
Starting with a characterization of case-based vs. other learning
methods, the lecture will present foundational issues of
case-based methods, a classification of method types illustrated
by existing systems, and a state-of-the-art description of the
CBR field. Specific problems related to case representation, i.e.
case structures, case contents, and indexing methods, will be
discussed. Different types of learning, such as adding new cases,
modifying existing cases, and learning of indexes, will be
described and related to how the learned knowledge is utilized
for case retrieval and reuse. Recent developments in integrated
methods are also included, i.e. methods that combine case-specific
and general domain knowledge, and case-based and
generalization-based learning.
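
As a minimal illustration of the retrieve/reuse/retain cycle described
above, the sketch below implements a toy attribute-value case memory in
Python. The feature names, weights, and cases are invented for the
example; it is not any particular CBR system from the lecture.

# Minimal sketch of case-based retrieve / reuse / retain
# (hypothetical toy example, not a specific CBR system).

def similarity(query, case, weights):
    """Weighted overlap between two attribute-value descriptions."""
    score = 0.0
    for attr, w in weights.items():
        if query.get(attr) == case["problem"].get(attr):
            score += w
    return score / sum(weights.values())

def retrieve(query, case_base, weights):
    """Return the stored case most similar to the query problem."""
    return max(case_base, key=lambda c: similarity(query, c, weights))

def solve(query, case_base, weights):
    best = retrieve(query, case_base, weights)
    # Reuse: copy the retrieved solution (a real system would adapt it).
    solution = best["solution"]
    # Retain: learn by adding the newly solved case to the case base.
    case_base.append({"problem": query, "solution": solution})
    return solution

# Toy case base: diagnosing a printer fault from symptoms (invented data).
cases = [
    {"problem": {"power": "on", "paper": "jammed", "lights": "blinking"},
     "solution": "clear paper path"},
    {"problem": {"power": "off", "paper": "ok", "lights": "off"},
     "solution": "check power cable"},
]
w = {"power": 2.0, "paper": 1.0, "lights": 1.0}
print(solve({"power": "on", "paper": "jammed", "lights": "off"}, cases, w))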

10h30 Coffee Break

11h00 Case-Based Reasoning (contd.)

12h30 Lunch

14h30 Learning and Probabilities [3h]
Wray Buntine (RIACS/NASA Ames, Moffett Field, CA, USA)

Probabilistic graphical models are being used widely in artificial
intelligence in, for instance, diagnosis and expert systems. This
tutorial explains how these models can be extended to machine
learning, neural networks, knowledge discovery, and knowledge
refinement. This provides a unified framework that combines
lessons learned from the connectionist, artificial intelligence, and
statistical communities, and provides a smooth transition between
the inference found in diagnostic and model-based systems and
that found in learning systems. This also offers a set of principles
for developing a learning tool-box, whereby a learning or
discovery system can be compiled from specifications. Many of
the popular families of learning algorithms can be compiled in this
way from graphical specifications.
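
A deliberately simplified illustration of reading a learner off a
graphical specification: a two-node model Class -> Symptom whose
parameters are estimated by counting and which is then queried
diagnostically with Bayes' rule. The variable names and data below are
invented for this sketch.

# Hypothetical sketch: a two-node graphical model Class -> Symptom.
# "Learning" is parameter estimation by counting; "diagnosis" is Bayes' rule.
from collections import Counter

data = [("flu", "fever"), ("flu", "fever"), ("flu", "no_fever"),
        ("cold", "no_fever"), ("cold", "fever"), ("cold", "no_fever")]

# Maximum-likelihood estimates read off the graphical structure.
p_class = Counter(c for c, _ in data)
p_symptom_given_class = Counter(data)
n = len(data)

def prior(c):
    return p_class[c] / n

def likelihood(s, c):
    return p_symptom_given_class[(c, s)] / p_class[c]

def posterior(c, s):
    """P(class | symptom) by Bayes' rule over the known classes."""
    num = likelihood(s, c) * prior(c)
    den = sum(likelihood(s, c2) * prior(c2) for c2 in p_class)
    return num / den

print(posterior("flu", "fever"))  # diagnostic inference from the learned model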

16h00 Coffee Break

16h30 Learning and Probabilities (contd.)


Tuesday, September 6
--------------------

09h00 Learning and Noise [3h]
Ivan Bratko (Josef Stefan Institute, Ljubljana, Slovenia)

In most real-world applications of Machine Learning, there is a
need to handle noisy data. We say that learning data are noisy
when they contain errors or are incomplete. Machine Learning
techniques that are not equipped with noise-handling mechanisms
are usually unsuccessful in practice. It is therefore important for a
learning algorithm to be able to filter out the effects of noise in
the learning data.

There are two main approaches to making a learning technique
robust with respect to noise: (1) pruning of decision trees or
truncation of rules; (2) exploiting redundancy. These approaches
will be discussed with respect to the following points: (a) How to
distinguish between reliable and unreliable rules or parts of
decision trees induced from data; (b) the problem of estimating
the accuracy of induced rules or trees; (c) criteria for deciding
whether to prune or not, and adjusting the degree of pruning; (d)
relation between the level of noise and degree of pruning; (e)
effects of pruning in practical applications; and (f) benefits of
redundancy in rule sets.
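
One of the noise-handling ideas above, pruning of decision trees, can be
sketched as reduced-error pruning against a held-out validation set: a
subtree is replaced by a majority-class leaf whenever that does not hurt
validation accuracy. The tree and data below are invented toy values;
real systems use more refined criteria such as those in points (b)-(d).

# Hypothetical sketch of reduced-error pruning: replace a subtree by a leaf
# whenever that does not hurt accuracy on the validation examples reaching it.

def classify(tree, example):
    while isinstance(tree, dict):
        tree = tree["branches"][example[tree["attr"]]]
    return tree  # a leaf is just a predicted class label

def accuracy(tree, data):
    return sum(classify(tree, x) == y for x, y in data) / len(data)

def prune(tree, data):
    """Bottom-up pruning, using only the validation data reaching each node."""
    if not isinstance(tree, dict) or not data:
        return tree
    branches = {}
    for value, subtree in tree["branches"].items():
        reaching = [(x, y) for x, y in data if x[tree["attr"]] == value]
        branches[value] = prune(subtree, reaching)
    tree = {"attr": tree["attr"], "branches": branches}
    labels = [y for _, y in data]
    leaf = max(set(labels), key=labels.count)   # majority-class leaf candidate
    return leaf if accuracy(leaf, data) >= accuracy(tree, data) else tree

# Invented toy tree: the test on "colour" only fits noise in the training data.
tree = {"attr": "size",
        "branches": {"big": "pos",
                     "small": {"attr": "colour",
                               "branches": {"red": "pos", "blue": "neg"}}}}
validation = [({"size": "big", "colour": "red"}, "pos"),
              ({"size": "small", "colour": "red"}, "neg"),
              ({"size": "small", "colour": "blue"}, "neg")]
print(prune(tree, validation))   # the noisy "colour" test is pruned away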

10h30 Coffee Break

11h00 Learning and Noise (contd.)

12h30 Lunch

14h30 Knowledge Acquisition [3h]
Bob Wielinga (University of Amsterdam, NL)

This lecture will discuss knowledge acquisition methods,
techniques and tools from a knowledge level modelling
perspective. Knowledge Acquisition is viewed as a modelling
activity aimed at the construction of a description of the
knowledge required to give a system expert-level performance.
The KADS approach to knowledge modelling will be described
and will be compared to other approaches to knowledge
modelling. Tools and techniques that support knowledge
modelling will be described. A recent trend in knowledge
engineering is concerned with "ontologies" that support the reuse
and sharing of knowledge. The notion of ontology will be
explained and its use will be illustrated by an example.
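
As a toy illustration of how an ontology supports sharing and reuse (this
is an invented sketch, not the KADS formalism itself), the code below
keeps a small is-a hierarchy with attribute definitions that several
knowledge bases could consult; the concept and attribute names are
invented.

# Toy illustration: an ontology as a shared is-a hierarchy plus attribute
# definitions, reusable across knowledge bases (hypothetical names).

ontology = {
    "component": {"is_a": None,        "attributes": ["status"]},
    "pump":      {"is_a": "component", "attributes": ["flow_rate"]},
    "valve":     {"is_a": "component", "attributes": ["position"]},
}

def ancestors(concept):
    """All superconcepts of a concept, following is-a links."""
    chain = []
    while ontology[concept]["is_a"] is not None:
        concept = ontology[concept]["is_a"]
        chain.append(concept)
    return chain

def allowed_attributes(concept):
    """Attributes a concept defines itself or inherits from superconcepts."""
    attrs = list(ontology[concept]["attributes"])
    for sup in ancestors(concept):
        attrs.extend(ontology[sup]["attributes"])
    return attrs

print(allowed_attributes("pump"))   # ['flow_rate', 'status']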

16h00 Coffee Break

16h15 Knowledge Acquisition (contd.)


Wednesday, September 7
----------------------

09h00 Integrated Architectures [3h]
Lorenza Saitta (University of Torino, Italy)

With the growing complexity of machine learning applications,
the need for integrated or multistrategy approaches becomes more
and more important, and an increasing amount of research effort
is devoted to this issue. However, there is still no common
agreement on what integration really means. In some cases,
integration is located at the approach level (for instance,
combining genetic or statistical and symbolic approaches); in
others, integration envisages the presence of different
methodologies within the same approach (for instance, inductive
and deductive methods in symbolic learning). Finally, integration
may be equated with the availability of a library of systems or
algorithms to a supervisor, which has to choose among them. The
ways in which integration is implemented also differ: in some
cases the various learning components (be they approaches,
systems, or methods) offer alternative solutions to the same task,
and the supervisor has to select among them a "most suitable"
one according to its current learning goal. In other cases all the
components are used (in different phases or at the same time) in
order to achieve a common goal that could not be reached by any
single component.

Starting from the above considerations, the talk will provide a
survey of typical integrated architectures at each of the mentioned
levels, trying to highlight their underlying principles and basic
strategies and to discuss their relative merits and drawbacks.
Practical results obtained by applying these integrated
architectures will also be reviewed. Finally, demonstrations of
two integrated learning systems will be given.

10h30 Coffee Break

11h00 Integrated Architectures (contd.)

12h30 Lunch

14h00 Knowledge Revision - I [2h]
Derek Sleeman (University of Aberdeen, UK)

The first lecture will review the several stages involved in
building Knowledge-Based systems, and will discuss how
Knowledge Base Refinement has been done "conventionally".
Some of the early aids will be reviewed, including the
TEIRESIAS system, and the system implemented by Suwa and
Shortliffe. I will then review several Learning Apprentice
Systems (LAS).

Secondly, I will discuss in greater detail two systems built at
Aberdeen, namely REFINER & KRUST, making the point that
REFINER attempts to detect inconsistencies in a KB (Knowledge
Base) before problem solving, whereas KRUST helps an expert
"repair" a KB which is found to be deficient during a problem-
solving session.

Finally, I will make some comparisons between Knowledge-Based
Refinement systems and Theory Revision systems.

16h00 Coffee Break

16h30 Knowledge Revision - II [2h]
Stefan Wrobel (GMD, Bonn, Germany)

With the growing complexity of applications being tackled by
Machine Learning and Inductive Logic Programming (ILP), it has
become increasingly clear that besides nonincremental approaches
for the initial acquisition of knowledge bases "from scratch" we
also need techniques for theory revision, i.e., techniques that can
use existing learned or human-supplied domain theories and can
modify them to improve their correctness, completeness,
efficiency or understandability. In this lecture, we will cover the
theoretical foundations, algorithmic methods, and practical
applications of theory revision with a particular focus on methods
that work with first-order theories (ILP). From the classical
incremental system MIS by Shapiro to the successors that are in
use today, students will get to know the state of the art in the
field.
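
A minimal propositional sketch of two basic revision operators discussed
in this area (this is an invented illustration, not the MIS algorithm or
any system named above): specialise a rule that wrongly covers a negative
example, and add a rule for a positive example the theory misses. The
predicates and examples are invented.

# Hypothetical sketch of two theory-revision operators on propositional
# rules: completeness repair (add a rule) and correctness repair
# (specialise a rule that covers a negative example).

def covers(rules, facts):
    return any(body <= facts for body in rules)

def revise(rules, example_facts, is_positive, positives):
    if is_positive and not covers(rules, example_facts):
        # Completeness repair: add a maximally specific new rule.
        rules.append(set(example_facts))
    elif not is_positive and covers(rules, example_facts):
        # Correctness repair: specialise each offending rule with a literal
        # that holds in some positive example but not in this negative one.
        for body in rules:
            if body <= example_facts:
                discriminating = set().union(*positives) - example_facts
                if discriminating:
                    body.add(sorted(discriminating)[0])
    return rules

# Toy target "fly": the initial theory over-generalises (penguins covered).
rules = [{"bird"}]                          # fly :- bird.
positives = [{"bird", "has_wings"}]         # a sparrow
negative = {"bird", "swims"}                # a penguin: should not be covered
rules = revise(rules, negative, False, positives)
print(rules)   # the rule becomes: fly :- bird, has_wings.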


Thursday, September 8
---------------------

09h00 Knowledge Acquisition and Machine Learning [3h]
Maarten van Someren (University of Amsterdam, NL)

The general theme is: can machine learning techniques and
systems be used to partially automate knowledge acquisition?
Building on the lecture on Knowledge Acquisition, we define the
major tasks in KA. Next I shall review some systems that use ML
for KA. These will include TEIRESIAS, AQUINAS, APT,
ACKnowledge KEW and MOBAL. These systems represent
different types of knowledge acquisition and different types of
applications of ML. Next we extract the techniques employed in
these systems. Here we shall focus not so much on technical
aspects of ML techniques that are addressed in the other lectures
but on the role of these systems in knowledge acquisition (e.g.,
interactive ML systems, integration of ML and problem solving,
the use of models of problem solving, different types of
background knowledge and bias, domain models and ontologies,
the use of revision, multiple representations and the core
knowledge base architecture, dialogue structures). This will also
show which KA tasks cannot (yet) be automated and thereby
defines current open research problems.

10h30 Coffee Break

11h00 Knowledge Acquisition and Machine Learning (contd.)

12h30 Lunch

14h30 Reinforcement Learning [3h]
Leslie P. Kaelbling (Brown University, USA)

Reinforcement learning, the problem of learning from trial and
error, has recently become a major focus of attention in the
machine learning community. In this tutorial, we will discuss the basic
formal background of reinforcement learning, then consider a
number of important technical questions and some proposed
solutions. These questions include: How should an agent explore
its environment? How can it learn to select appropriate actions
when their effects are only apparent in the future? When is it
useful for the agent to build a model of the dynamics of its world,
rather than simply to learn a reactive strategy? How can an agent
take advantage of the idea that similar situations will require
similar reactions? What happens if the agent is unable to
completely perceive the state of its environment? We will
conclude with a discussion of applications of reinforcement
learning and of the important currently open problems.
Prerequisites: undergraduate-level knowledge of probability
theory and mathematical notation, and some familiarity with the
concepts and methods of machine learning, are assumed. No
previous knowledge of reinforcement learning or Markov models
is necessary.
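
A minimal tabular Q-learning sketch on an invented five-state corridor
illustrates several of the questions above: epsilon-greedy choice plus
optimistic initial values for exploration, and one-step temporal-difference
backups for delayed reward. It is an invented example, not a specific
system from the tutorial.

# Hypothetical sketch: tabular Q-learning on a 5-state corridor where
# reward arrives only at the right end, so credit must be propagated back.
import random

N_STATES, ACTIONS = 5, ("left", "right")
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Toy environment: move along the corridor; the last state is the goal."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

Q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}  # optimistic start

for episode in range(200):
    state = 0
    for _ in range(100):
        if random.random() < EPSILON:            # exploration
            action = random.choice(ACTIONS)
        else:                                    # exploitation
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # One-step temporal-difference backup handles the delayed reward.
        target = reward + (0.0 if done else GAMMA * max(Q[(nxt, a)] for a in ACTIONS))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break

# Greedy policy after learning: "right" in every non-goal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})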

16h00 Coffee Break

16h30 Reinforcement Learning (contd.)


Friday, September 9
-------------------

09h00 Inductive Logic Programming - I [3h]
Steve Muggleton (University of Oxford, UK)

This is a lecture course on Inductive Logic Programming. The
purpose of this course is to explore the problem of inductive
inference in the framework of first-order predicate calculus and
the probability calculus. Students are expected to learn the theory
of Inductive Logic Programming in step with hands-on exercises
using the ILP system "Progol". Progol contains a Prolog
interpreter for writing and testing logic programs; in addition,
Prolog programs can be generated automatically from examples
using the inductive algorithm built into Progol.

10h30 Coffee Break

11h00 Inductive Logic Programming - I (contd.)

12h30 Lunch

14h00 Inductive Logic Programming - II [2h]
Celine Rouveirol (LRI, University of Paris-South, France)

A part of Inductive Logic Programming aims at providing a
logic-based formalization of empirical learning operations
(generalization, specialization, predicate invention). In this
lecture, we will emphasize why formalizations developed within
ILP are a good basis for making explicit the constraints (also
called bias) that guide the learning process: constraints that come
from the application domain of the ILP system, and constraints
imposed by the complexity of the ILP algorithm used. Eliciting all
these constraints is a prerequisite for the optimal use and
adaptability of ILP systems. This point of view will be illustrated
by recent work on Declarative Bias in ILP.

16h00 Coffee Break

16h30 Inductive Logic Programming - III [2h]
Francesco Bergadano (University of Catania, Italy)

Inductive Logic Programming is now an established area of AI
and Machine Learning with a well-founded theory and a number
of important application domains. We clarify the goals and the
motivations of ILP in a simplified problem setting, with an
analysis of possible variants and difficulties. Classical top-down
methods for learning Horn clauses from examples are described
in detail, but in simplified form. We then survey the software
engineering applications for which ILP is most appropriate and
has been used recently. The plan of the course is as follows:

1. Fundamentals. We define a simplified "Inductive Logic
Programming" problem as follows. We are given: (a) Positive
and Negative examples described as ground facts A; (b)
background knowledge B (a set of Horn clauses). We need to
find a logic program P such that: (a) all positive examples are
derived from P and B; (b) no negative example is derived from P
and B. Incremental and Probabilistic variants of this problem
setting are analyzed, together with the difficulties of such goals
(a small coverage-check sketch is given after this outline).

2. Top-down Algorithms: Computationally effective methods for
addressing the ILP problem. We carefully introduce a simplified
version of Shapiro's MIS system and of Quinlan's Foil system.
Some drawbacks of top-down methods based on what will be
defined as an "extensionality" principle are investigated, together
with some possible solutions.

3. Software engineering applications including program synthesis
and testing.
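
The coverage-check sketch referred to under point 1 follows. It tests a
single candidate clause for daughter/2 extensionally (in the sense of
point 2) against ground background relations. The family facts, the
candidate clause, and the representation are invented toy choices; this
is not Progol, MIS, or Foil.

# Hypothetical sketch of the simplified ILP acceptance condition: a candidate
# clause is accepted only if, with the ground background relations, it derives
# every positive example and no negative one.
from itertools import product

background = {
    "parent": {("ann", "mary"), ("ann", "tom"), ("tom", "eve")},
    "female": {("mary",), ("ann",), ("eve",)},
}
positives = {("mary", "ann"), ("eve", "tom")}   # daughter(X, Y)
negatives = {("tom", "ann"), ("eve", "ann")}

def derived(clause_body, constants):
    """Ground atoms daughter(X, Y) derivable from one clause plus background."""
    facts = set()
    for x, y in product(constants, repeat=2):
        bindings = {"X": x, "Y": y}
        if all(tuple(bindings[v] for v in args) in background[pred]
               for pred, args in clause_body):
            facts.add((x, y))
    return facts

constants = {c for rel in background.values() for t in rel for c in t}
# Candidate hypothesis: daughter(X, Y) :- female(X), parent(Y, X).
body = [("female", ("X",)), ("parent", ("Y", "X"))]
covered = derived(body, constants)
print(positives <= covered and not (negatives & covered))   # True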


Saturday, September 10
----------------------

09h00 Conceptual Clustering [3h]
Gilles Bisson (LIFIA, Grenoble, France)

The problem of turning a set of unorganized observations into a
taxonomy of concepts is one of the major issues in scientific
research. This problem has long been addressed in Data Analysis,
where many numerical clustering methods have been developed.
Psychology is also concerned with this problem, with the aim of
building models that explain how people categorize information.
More recently, a part of machine learning known as Conceptual
Clustering has also dealt with this problem. In some sense this
domain lies between Data Analysis and Psychology, since it
focuses at the same time on the tractability of the problems and
on the explainability of the concepts found by the computer.

This lecture aims at presenting the major approaches developed in
Conceptual Clustering (CC for short) and at describing the
interactions between Data Analysis, Psychology, and this domain.
The systems developed in CC vary in many respects: some of them
are based on a top-down approach, others on a bottom-up one;
some of them are incremental, while others are not. Finally, the
criteria used to cluster can be numerical or symbolic. To illustrate
these points, we will detail and compare three systems - CLUSTER
(Michalski, Stepp), COBWEB (Fisher), and KBG (Bisson) - which
closely represent this domain.
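
For concreteness, the sketch below computes category utility, the
partition-evaluation measure underlying Fisher's COBWEB, for two
candidate clusterings of invented nominal-attribute observations.

# Sketch of category utility (the COBWEB clustering criterion) over toy data.
from collections import Counter

def attr_value_probs(items):
    """P(attribute = value) over a list of attribute-value dictionaries."""
    counts = Counter((a, v) for item in items for a, v in item.items())
    return {av: c / len(items) for av, c in counts.items()}

def category_utility(partition):
    """Average gain in expected correct attribute-value guesses per class."""
    all_items = [item for cluster in partition for item in cluster]
    base = sum(p * p for p in attr_value_probs(all_items).values())
    score = 0.0
    for cluster in partition:
        within = sum(p * p for p in attr_value_probs(cluster).values())
        score += (len(cluster) / len(all_items)) * (within - base)
    return score / len(partition)

observations = [{"shape": "round", "colour": "red"},
                {"shape": "round", "colour": "red"},
                {"shape": "square", "colour": "blue"},
                {"shape": "square", "colour": "blue"}]

good = [observations[:2], observations[2:]]            # separates the two kinds
poor = [[observations[0], observations[2]], [observations[1], observations[3]]]
print(category_utility(good), category_utility(poor))  # good partition scores higher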

10h30 Coffee Break

11h00 Conceptual Clustering (contd.)

12h30 Lunch


GENERAL INFORMATION
===================

Location
--------

The school will be held at the VVF "Le Normont", in Dourdan, some 50 km
south of Paris.

VVF Le Normont
La Croix-Saint-Jacques
91410 Dourdan, France
Tel.: +33-1-64.59.78.54
Fax: +33-1-64.59.39.47

You can reach Dourdan from Paris:

* By train (R.E.R.): Take Line C of the R.E.R., direction "Dourdan". You
can access Line C:

- From Austerlitz railway station;
- From Line B of the R.E.R., at "Saint-Michel" station. Line B connects
Roissy-Charles De Gaulle airport with Paris (shuttles "ROISSY RAIL"
arrive from the airport to the R.E.R. station "Roissy-Aeroport CDG");
- From Orly airport, take a shuttle "ORLY RAIL" to the station "Pont de
Rungis-Aeroport d'Orly". Take a train to "Choisy-le-Roi" station,
where you will have to change trains, direction "Dourdan";
- From various lines of the subway network.

* By car: Take highway A6-A10 at "Porte d'Orleans" or at "Porte d'Italie",
direction "Chartres-Orleans". Exit at "Dourdan". Once in Dourdan, follow
the direction "La Croix-Saint-Jacques" and then "VVF" (Villages de
Vacances Familiales).

Shuttles from Dourdan R.E.R. station to the VVF "Le Normont" will be
scheduled on Sunday evening, and from the VVF to Dourdan station on
Saturday afternoon.


Registration fee
----------------

Regular: 2,500 FF; Student(*): 2,000 FF;

Late registration (after August 1): add 500 FF to fee.

(*) A proof of student status must accompany a student registration.


Accommodation
-------------

Accommodation during the school will be provided at VVF "Le Normont".
Prices:

A) Full board accommodation for the whole school (includes all meals and
coffee breaks), from Sunday 4 in the evening (rooms are available from
18h; dinner included) to Saturday 10 (lunch will be the last meal):

Single room: 2700 FF Double room: 2280 FF (per person)


B) Prices per day:

Full board: Single room: 450 FF (day/person); Double room: 380 FF (day/person)

Bed + breakfast: Single room: 230 FF (night/person); Double room: 160 FF
(night/person)

Additional meals (per meal): Lunch: 95 FF Dinner: 95 FF



Registration and accommodation forms are available from the school's
secretariat (see above).


Payment
-------

Please note that two different payments are required:

a) Registration for the Summer School. Methods of payment:

- Money transfer in FF, free of charges for beneficiary, to bank account
10206740000 (bank ID No. 30788, office ID No. 00100), BANQUE DE
NEUFLIZE, SCHLUMBERGER, MALLET ET CIE., 3 avenue Hoche,
75008 Paris, France. Please send us a copy of the bank debit note.
- Certified bank cheque in FF payable to CEPHYTEN.
- Personal French cheque in FF payable to CEPHYTEN.

b) Accommodation. Methods of payment:

- Certified bank cheque in FF payable to VVF Le Normont.
- Personal French cheque in FF payable to VVF Le Normont.
- Credit cards: Master Card, Eurocard, Visa.

Registration and hotel booking will be confirmed on receipt of registration
form and payments.


Grants
------

Some full and partial grants for travel, registration, and accommodation
can be awarded to European students and researchers, and to members of
PRC-IA. Applicants must send a letter stating their motivation and a CV
before June 26. Decisions will be communicated within the first week of
July.


-----------------------------------------------------------------------

To: comp-ai-nlang-know-rep@uunet.uu.net
From: cecilia@umiacs.UMD.EDU (Cecilia Kullman)
Subject: Announcement: ICCS'94 Research Award, Aug 94, College Park
Date: 1 Jun 1994 17:17:47 -0400

Second International Conference on
CONCEPTUAL STRUCTURES
ICCS'94

August 16-20, 1994
University of Maryland
College Park, MD
USA

ANNOUNCEMENT


In accordance with the theme of our 10th Anniversary meeting,
which focuses on future aspirations, we are offering an award of
$500 for the best submitted proposal describing a substantive
research project to be carried out using conceptual graphs (CGs).
Only bona fide students, that is, graduates and undergraduates
currently registered at recognized educational institutions, may
make eligible submissions.

Proposals may deal with advances in theory, application, or
implementation. They should be substantive and thoughtful,
including enough detail to adequately describe the scope of the
proposed project, to a maximum of ten pages, preferably fewer.
The planned research is expected to produce significant results
that can be measured or otherwise rigorously evaluated.

Five copies of each proposal must be sent to:

ICCS'94
UMIACS
University of Maryland
College Park, MD 20742
USA
phone: 301-405-6722
FAX: 301-314-9658

by AUGUST 1, 1994, along with proof of current student status, for example,
a letter affirming student status, from an institutional advisor or officer.

All eligible proposals will be published along with the shorter papers
presented at the Conference. The name of the winner will be
announced at the Reception on the last night of the Conference.
Ineligible proposals will be returned to submitters without review.

If you have further questions, please contact the General Chair of
ICCS'94, Judy Dick at dick@glue.umd.edu, or telephone 301-405-2048.

A small amount of grant money is available for aid in student travel.
Please send requests as soon as possible, giving proof of student status.


-----------------------------------------------------------------------

Date: Sat, 4 Jun 1994 11:38:06 -0500
From: rsun@cs.ua.edu (Ron Sun)
To: neuron-request@cattell.psych.upenn.edu,
Subject: Announcement: Preprint available - Robust Reasoning by Ron Sun



Preprint available:
--------------------------------------------
title:
Robust Reasoning: Integrating Rule-Based and Similarity-Based Reasoning

Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487
rsun@cs.ua.edu

--------------------------------------------
to appear in: Artificial Intelligence (AIJ), Spring 1995
---------------------------------------------

The paper attempts to account for common patterns in commonsense reasoning
by integrating rule-based reasoning and similarity-based reasoning
as embodied in connectionist models. Reasoning examples are analyzed and
a diverse range of patterns is identified. A principled synthesis based on
simple rules and similarities is performed, which unifies these patterns,
each of which was previously difficult to account for without a
specialized mechanism of its own. A two-level connectionist architecture
with dual representations is proposed as a computational mechanism for
carrying out the theory. It is shown in detail how the common patterns can
be generated by this mechanism. Finally, it is argued that the brittleness
problem of rule-based models can be remedied in a principled way with the
theory proposed here. This work demonstrates that combining rules and
similarities can result in more robust reasoning models, and that many
seemingly disparate patterns of commonsense reasoning are actually
different manifestations of the same underlying process; they can be
generated using the integrated architecture, which captures that
underlying process to a large extent.
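
The following schematic sketch only illustrates the general idea of
combining rule-based and similarity-based evidence; it is an invented
example, not the two-level connectionist architecture proposed in the
paper. A conclusion is accepted if it either matches an explicit rule or
is sufficiently similar to a known case.

# Invented schematic illustration of combining rule-based and similarity-based
# evidence for a conclusion; NOT the paper's connectionist architecture.

def rule_support(features, rules):
    """1.0 if some rule's conditions are all present, else 0.0 (brittle)."""
    return 1.0 if any(cond <= features for cond in rules) else 0.0

def similarity_support(features, known_cases):
    """Graded support: best feature overlap with a previously seen case."""
    return max(len(features & case) / len(features | case) for case in known_cases)

def conclude(features, rules, known_cases, threshold=0.5):
    support = max(rule_support(features, rules),
                  similarity_support(features, known_cases))
    return support >= threshold, support

rules = [{"has_fever", "has_rash"}]                 # explicit rule for the conclusion
known_cases = [{"has_fever", "has_rash", "tired"}]  # a previously seen instance
# A case the rule misses (no rash reported) is still handled by similarity.
print(conclude({"has_fever", "tired"}, rules, known_cases))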

----------------------------------------------------------------
* It is FTPable from aramis.cs.ua.edu
in: /pub/tech-reports
* No hardcopy available.

* FTP procedure:
unix> ftp aramis.cs.ua.edu
Name: anonymous
Password: (email-address)
ftp> cd pub/tech-reports
ftp> binary
ftp> get sun.aij.ps.Z
ftp> quit
unix> uncompress sun.aij.ps.Z
unix> lpr sun.aij.ps (or however you print postscript)
-----------------------------------------------------------------

(A number of other publications are also available for
FTP under pub/tech-reports)


================================================================
Dr. Ron Sun
Department of Computer Science phone: (205) 348-6363
The University of Alabama fax: (205) 348-0219
Tuscaloosa, AL 35487 rsun@athos.cs.ua.edu
================================================================


End of NL-KR Digest
*******************
