AIList Digest            Thursday, 8 Nov 1984     Volume 2 : Issue 150 

Today's Topics:
AI Tools - Prolog Availability,
AI Education - Getting Started in AI & CAI Authoring,
Linguistics - Interlinguae,
Seminars - Knowledge Representation and Problem Solving & Vision &
Knowledge Editing & Automatic Program Debugging,
Conferences - Software Maintenance & Security And Privacy
----------------------------------------------------------------------

Date: Mon, 5 Nov 84 21:22:53 mst
From: "Arthur I. Karshmer" <arthur%nmsu.csnet@csnet-relay.arpa>
Subject: Prolog Availability


Our VAX-11/750 running UNIX 4.2 is newly installed, and we would very much
like to locate PROLOG for it. We would appreciate any help in finding
a version of PROLOG for our system. Further, we are using a number of
DEC Pro-350 systems under Venix/11. The version of PROLOG we currently
have for these systems is badly brain-damaged -- is there any help
available in this area?

------------------------------

Date: 6 Nov 84 09:32:25 PST (Tuesday)
From: cherry.es@XEROX.ARPA
Subject: Getting started in AI

I am looking for any pointers that may help me get started in LISP.
Utility programs, application programs, etc. would be helpful so that I
can analyze the source to better understand what I am trying to
accomplish. Most of the literature I have read on the topic of AI makes
the assumption that the reader is quite proficient in the LISP
environment. While I'm not new to programming, the LISP environment is
new to me.

I intend to use AI as an engineering aid for product yield improvement.
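
[For readers in a similar position, here is a minimal sketch of the
recursive style that pervades most LISP source. It is a standard
textbook exercise in Common Lisp, not drawn from any particular utility
program:

  (defun count-atoms (x)
    "Count the atoms in an arbitrarily nested list."
    (cond ((null x) 0)                  ; the empty list contributes nothing
          ((atom x) 1)                  ; a non-list object is one atom
          (t (+ (count-atoms (car x))   ; otherwise recurse into both
                (count-atoms (cdr x)))))) ; halves of the cons cell

  ;; (count-atoms '(a (b c) (d (e)))) => 5

-- Ed.]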

Cherry.es@Xerox.Arpa

------------------------------

Date: 5 November 1984 1311-PST (Monday)
From: psotka@nprdc
Reply-to: psotka@NPRDC
Subject: CAI Authoring

I too would like to hear about good CAI authoring systems. Several
commercial systems that run on VAXen, CYBERs, and other stuff are really
good for their purpose -- linear CAI. The real question, it seems to me,
is how to use the marvelous computational power of personal Lisp
machines to do CAI authoring. What kinds of facilities would one want?
Natural language interpreters; graphic simulation systems for rapid
prototyping; expert systems for explaining; complex knowledge
representation; etc. Could such a system be designed now to produce
instruction as effective as one-on-one tutoring by an expert? Would the
author (the person using the system to develop instruction) have to be
an expert in the area being taught (and an expert teacher, too)?


[For one viewpoint on nonlinear CAI, see Jacques Hebenstreit's article,
Computers in Education: The French Experience (1970--1984), in the
Fall issue of Abacus. -- KIL]

------------------------------

Date: Sat, 3 Nov 84 15:49:16 pst
From: Bill Poser <poser@SU-Russell>
Subject: Interlinguae

I think that it is Rick Briggs who should read his own
writing more carefully. The relevant portion of Briggs' comment
runs as follows:

"Current Linguistics has begun to actually aid this entropy by paying
special attention to slang and casual usage (descriptive vs. prescriptive).
Without some negentropy from the linguists, I fear that English will
degenerate further."

The use of the inchoative "has begun" in the first sentence clearly
presupposes that Linguistics has hitherto been prescriptive. (I.e.,
linguists have only just begun to pay special attention to slang
and casual speech; they have just begun to engage in descriptive, as
opposed to prescriptive, linguistics.) So although it is quite true
that Briggs recognizes that there is now a descriptive element to
Linguistics, he is claiming (whether he intended to or not) that
Linguistics has been prescriptive and still is predominantly
prescriptive, and that it would be appropriate for linguists to be
more prescriptive. My point, which I believe still stands, was that
what we call Linguistics is not at all prescriptive and has not been
in the past. Modern Linguistics (by which I mean Linguistics since
the mid-nineteenth century) is by definition not prescriptive.
Moreover, the traditions of prescriptive grammar and Linguistics have
been essentially independent for a very long time.

Polemic aside, there is a real issue here. Briggs is
claiming that there is such a thing as degeneration of languages.
Now it is certainly true that some people use language more effectively
than others, whether we measure effectiveness in terms of aesthetics or
clarity or what. And it may be that the mean effectiveness of language use
over a population varies with time, e.g. as literacy rises and falls,
although I know of no objective demonstration of such a claim. But
that does not mean that the *language* degenerates--only that its use
degenerates. The issue is whether historical change in language results
in degeneration of the language. This is certainly an empirical issue,
but I am not aware of any evidence that such degeneration takes place.
Features of one generation's casual style often become features of a
subsequent generation's formal style. There is just no evidence that
any historical stage of a language is less useful or more ambiguous
or whatever than any other. Different languages (and different social
and geographic dialects and historical stages of the same language)
differ in what information they present obligatorily or briefly,
but there is no evidence that there are statements that can be made
in one language that cannot be translated into another language, although
the expression of a given piece of information in one language may be
more or less cumbersome than in the other. In sum, while it is very
common for people to believe that their language is deteriorating and
look back to some golden age in which the language was just right,
the notion that there is such a thing as degeneration of a language
(short of the special case of "language death" that sometimes
occurs when a language has only a few speakers left) is one that
has never been substantiated.

Finally, to return to my challenge to Briggs to show that
Shastric Sanskrit is a natural language, he argues that the
existence of dialogues written in it demonstrates that it was spoken,
suggesting that raising the issue of whether this demonstrates that
it was actually spoken is equivalent to raising the issue of whether
the Platonic dialogues were actually spoken.
It is quite possible to write dialogues that never took place, and
moreover to write them in a style that would never have been used
in actual speech, so the existence of written dialogues in and of
itself is not compelling. In fact, if I am not mistaken, the Platonic
dialogues are not believed to be actual transcripts of spoken
dialogues. In the case of Greek we have lots of other evidence that
the language was spoken, and the language of the dialogues is not so
different from other forms of the language, so I would not argue that the
Platonic dialogues could not have been spoken. But Shastric Sanskrit
differs sufficiently from other forms of Sanskrit that one must consider
seriously the possibility that the dialogues written in it were never
actually spoken. The existence of dialogues in the language certainly shows that
it had a broader semantics than, say, the language of mathematical discourse,
but it doesn't show that Shastric Sanskrit was actually a spoken language.

But let's go one step further. Suppose that Briggs is right and
some people actually spoke Shastric Sanskrit, perhaps even all the time.
The mere fact that it could be spoken wouldn't mean that it wasn't artificial.
People speak Esperanto too. I reiterate: a language is artificial if it
was consciously designed by human beings. The use to which an artificial
language is put says nothing about its artificiality. (I'll back down
just a bit here. We should probably be willing to give a language status
as a natural language (in one sense) if, although it is the result
of conscious design, it is subsequently learned as a native language
by human children. This learnability would presumably show that the
language's properties are those of a natural language, although
it happens that it did not evolve naturally.)

I still think that Shastric Sanskrit is an artificial derivative
of Sanskrit used for specialized scientific purposes, not a natural language.
Briggs asks whether I would deny the language of scientific discourse
the status of natural language. As I indicated in my very first message
on this topic, yes I would, at least the language of mathematics. The
language of mathematics is a specialized derivative of normal language
that contains special constructions that in some cases violate strong
syntactic constraints of the natural base. Consider, for example, the
"such that" construction of mathematical English ("a number x such that
x divides y"), a relative construction with no exact counterpart in
ordinary English syntax.

I suspect that it is pointless to quibble endlessly about whether a
given form of specialized language is natural -- we'll just end up
worrying about the point at which the specialized language departs
sufficiently from its source to differentiate the two. But the real
point, and the one that
I have been trying to make from the outset, is simple and, I
think, untouched. It is possible to create specialized languages based
on natural languages that are more precise, less ambiguous, etc., conceivably
even perfect in these respects, and therefore better candidates for
machine translation interlinguae, but there is no known natural language
which in its ordinary form has these properties.

------------------------------

Date: Fri 2 Nov 84 11:57:10-PST
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Seminars - Knowledge Representation, Problem Solving, Vision

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

A couple of researchers from IBM Yorktown will be at HPP next Thursday
(11/8/84). They will present two short 20-minute talks starting at
10 am on distributed computing (AI and systems) research at their
research facility. Anyone who is interested in listening to their
talks and/or talking to them should show up at that time. Details are
given below:

Time: 10 am
Day: Thursday (11/8/84)
Place: Welch Road conference room (HPP, 701 Welch Rd., Bldg C)
Speakers: Sanjaya Addanki and Danny Sabbah
Abstracts:

*Abstract1*

Knowledge Representation and Parallel Problem Solving:

While there has been much research on "naive sciences" and
"expert systems" for problem solving in complex domains,
there is a large class of problem-solving tasks that is not
covered by these efforts. These tasks (e.g. intelligent design
in complex domains) require systems to go beyond their
high-level rules into deeper levels of knowledge, down to the
"first principles" of the field. For example, new designs
often hinge on modifying existing assumptions about the
world. These modifications cause changes in the high-level
rules about the world. Clearly, the processes of identifying
the modifications to be made and deducing the changes to the
rules require deeper levels of knowledge.

We propose a hierarchical, prototype-based scheme for the
representation and interpretation of the different levels of
knowledge required by an intelligent design system that
functions in a world of complex devices. We choose design as
the target task because it requires both the analysis and
synthesis of solutions and thus covers much of problem
solving. This work is part of a larger effort to develop a
parallel approach to complex problem solving.
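
[The abstract gives no implementation details. Purely as a hypothetical
sketch of what a "hierarchical, prototype-based" representation with
slot inheritance can look like, in Common Lisp:

  (defparameter *prototypes* (make-hash-table)
    "Maps a prototype name to (parent slot-plist).")

  (defun defproto (name parent &rest slots)
    "Define a prototype with an optional PARENT and local SLOTS."
    (setf (gethash name *prototypes*) (list parent slots)))

  (defun proto-slot (name slot)
    "Look SLOT up on NAME, falling back through the parent chain."
    (let ((p (gethash name *prototypes*)))
      (when p
        (or (getf (second p) slot)
            (proto-slot (first p) slot)))))  ; inherit from parent

  (defproto 'device    nil     :obeys 'conservation-of-energy)
  (defproto 'amplifier 'device :gain 10)
  ;; (proto-slot 'amplifier :obeys) => CONSERVATION-OF-ENERGY

A specific device thus inherits "first principles" from the prototypes
above it while overriding slots locally. -- Ed.]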

*Abstract2*

Vision:

In this short overview of current interest in Computer Vision at Yorktown,
we will be discussing issues in:

a) Incorporation of complex shape representation (e.g. Extended Gaussian
Images) into parallel visual recognition systems.
b) Improvement of recognition behavior through the incorporation of
multiple sources of information (e.g. contour, motion, texture).
c) A possible mechanism for focus of attention in highly parallel,
connectionist vision systems (an approach to indexing into a large data
base of objects in such vision systems).

Detailed solutions will be sparse, as the work is just beginning and is
only now through the proposal stage. The issues, however, are relevant
to any visual recognition system.

------------------------------

Date: 5 Nov 1984 13:04 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Knowledge Editing

[Forwarded from the MIT bboard by SASW@MIT-MC.]


Wednesday, Nov 7, 4:00pm, 8th floor playroom

CREF: A Cross-Referenced Editing Facility
for the Knowledge Engineer's Assistant


Kent M. Pitman


I will present a critical analysis of a tool I call CREF (Cross-Referenced
Editing Facility), which I developed this summer at the Human Cognition
Research Laboratory of the Open University in Milton Keynes, England. CREF
was originally designed to fill a very specific purpose in the KEA
(Knowledge Engineer's Assistant) project, but it appears to be of much
more general utility than I had originally intended, and I am currently
investigating its status as a ``next generation'' general-purpose text
editor.

CREF might be described as a cross between Zmacs, Zmail, and the Emacs
INFO subsystem. Its capabilities for cross referencing, summarization,
and linearized presentation of non-linear text put it in the same family
as systems such as NLS, Hypertext, and Textnet.
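
[A hypothetical sketch, not CREF's actual implementation, of the kind of
structure behind "linearized presentation of non-linear text": named
segments carry cross-reference links, and a linear view is a depth-first
walk of the graph that visits each segment once:

  (defstruct segment
    name          ; symbol identifying the segment
    text          ; the segment's contents, a string
    successors)   ; list of cross-referenced SEGMENTs

  (defun linearize (seg &optional (seen (make-hash-table)))
    "Flatten the cross-reference graph rooted at SEG into a list of strings."
    (unless (gethash (segment-name seg) seen)
      (setf (gethash (segment-name seg) seen) t)
      (cons (segment-text seg)
            (mapcan (lambda (s) (linearize s seen))
                    (segment-successors seg)))))

  ;; (let* ((b (make-segment :name 'b :text "details"))
  ;;        (a (make-segment :name 'a :text "summary" :successors (list b))))
  ;;   (linearize a)) => ("summary" "details")

-- Ed.]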

------------------------------

Date: Mon, 5 Nov 84 10:20:54 cst
From: briggs@ut-sally.ARPA (Ted Briggs)
Subject: Seminar - Automatic Program Debugging

[Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]


Heuristic and Formal Methods in Automatic Program Debugging
by
William R. Murray

noon Friday Nov. 9
PAI 3.38

I will discuss the implementation of an automatic debugging system for
pure LISP functions written to solve small but nontrivial tasks. It is
intended to be the expert module of an intelligent tutoring system to
teach LISP. The debugger uses both heuristic and formal methods to find
and correct bugs in student programs. Proofs of correctness of the
debugged definitions are generated for verification by the Boyer-Moore
Theorem Prover.

Heuristic methods are used in algorithm identification, the mapping
of stored functions to student functions, the generation of verification
conditions, and in the localization of bugs. Formal methods are used
in a case analysis which detects bugs, in symbolic evaluation of
functions, and in the verification of results. One of the main roles of
the theorem prover is to represent intensionally an infinite database of
all possible rewrite rules.
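
[As a hypothetical illustration, not an example from Murray's system, of
the class of bug such a debugger must localize: a student's FLATTEN that
drops atoms at the leaves, shown against a stored reference definition.
Mapping the student function onto the reference and doing case analysis
on the ATOM branch pinpoints the fault:

  (defun student-flatten (x)        ; buggy student definition
    (cond ((null x) nil)
          ((atom x) nil)            ; bug: should be (list x)
          (t (append (student-flatten (car x))
                     (student-flatten (cdr x))))))

  (defun reference-flatten (x)      ; stored correct definition
    (cond ((null x) nil)
          ((atom x) (list x))
          (t (append (reference-flatten (car x))
                     (reference-flatten (cdr x))))))

  ;; (student-flatten   '(a (b) c)) => NIL
  ;; (reference-flatten '(a (b) c)) => (A B C)

-- Ed.]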

- Regards,
Bill

------------------------------

Date: 3-Nov-84 21:33 PST
From: William Daul - Augmentation Systems - McDnD
<WBD.TYM@OFFICE-2.ARPA>
Subject: CALL FOR PAPERS - CONFERENCE ON SOFTWARE MAINTENANCE -- 1985

Conference On Software Maintenance -- 1985

Washington, D.C., Nov. 11-13

The conference will be sponsored by the Association for Women in Computing,
the Data Processing Management Association, the Institute of Electrical and
Electronics Engineers, Inc., the National Bureau of Standards, and the
Special Interest Group on Software Maintenance, in cooperation with the
Special Interest Group on Software Engineering.

Papers are being solicited in the following areas:

controlling software maintenance
software maintenance careers and education
case studies -- successes and failures
configuration management
maintenance of distributed, embedded, hybrid and real-time systems
debugging code
developing maintenance documentation and environments
end-user maintenance
software maintenance error distribution
software evolution
software maintenance metrics
software retirement/conversion
technology transfer
understanding the software maintainer

The submission deadline is Feb. 4, and five double-spaced copies are
required. Papers should range from 1,000 to 5,000 words in length.

The first page must include the title and a maximum 250-word abstract; all the
authors' names, affiliations, mailing addresses and telephone numbers; and a
statement of commitment that one of the authors will present the paper at the
conference if it is accepted.

Submit papers and panel session proposals to: Roger Martin (CMS-85), National
Bureau of Standards, Building 225, Room B266, Gaithersburg, Md. 20899

------------------------------

Date: 3-Nov-84 21:33 PST
From: William Daul - Augmentation Systems - McDnD
<WBD.TYM@OFFICE-2.ARPA>
Subject: CALL FOR PAPERS -- 1985 Symposium On Security And Privacy

1985 Symposium On Security And Privacy

Oakland, Ca., April 21-24

The meeting is being sponsored by the Technical Committee on Security and
Privacy and the Institute of Electrical and Electronics Engineers, Inc.

Papers and panel session proposals are being solicited in the following areas:

security testing and evaluation
applications security
network security
formal security models
formal verification
authentication
data encryption
data base security
operating system security
privacy issues
cryptography protocols

Send three copies of the paper, an extended abstract of 2,000 words, or a
panel proposal by Dec. 14 to:

J.K. Millen
Mitre Corp.
P.O. Box 208
Bedford, Mass. 01730

Final papers will be due by Feb. 25 in order to be included in the proceedings.

------------------------------

End of AIList Digest
********************
