AIList Digest            Monday, 26 Oct 1987      Volume 5 : Issue 247 

Today's Topics:
Clarification - Knowledge Soup,
Neuromorphic Systems - Historical Terminology,
Bibliographies - Design Automation/Assistance & Computational Linguistics

----------------------------------------------------------------------

Date: 24 October 1987, 20:03:38 EDT
From: John Sowa <SOWA@ibm.com>
Subject: Knowledge Soup

An abstract of a recent talk I gave found its way to the AIList, V5 #241.
But along the way, the first five sentences were lost. Those sentences
made a distinction that was at least as important as the rest of the
abstract:

Much of the knowledge in people's heads is inconsistent. Some of it
may be represented in symbolic or propositional form, but a lot of it,
or perhaps even most of it, is stored in image-like forms. And some
knowledge is stored in vague "gut feel" or intuitive forms that are
almost never verbalized. The term "knowledge base" sounds too precise
and organized to reflect the enormous complexity of what people have
in their heads. A better term is "knowledge soup."

Whoever truncated the abstract also changed the title "Crystallizing
theories out of Knowledge Soup" by adding "(knowledge base)". That
parenthetical addition blurred the distinction between the informal,
disorganized knowledge in the head and the formalized knowledge bases
that are required by AI systems. Some of the most active research in
AI today is directed towards handling that soup and managing it within
the confines of digital systems: fuzzy logic, various forms of default
and nonmonotonic reasoning, truth maintenance systems, connectionism
and various statistical approaches, and Hewitt's due-process reasoning
between competing agents with different points of view.
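
As a minimal illustration of the first of those approaches, a fuzzy
predicate assigns a degree of truth between 0 and 1 instead of a strict
true or false. The sketch below is only illustrative and does not come
from any of the systems mentioned:

def tall(height_cm):
    # Degree, between 0 and 1, to which a given height counts as "tall".
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0     # linear ramp between the extremes

def f_and(a, b):                        # Zadeh's fuzzy conjunction
    return min(a, b)

def f_not(a):                           # fuzzy negation
    return 1.0 - a

# A statement and its negation can both hold to degree 0.5; the "soup"
# tolerates a conflict that a crisp knowledge base would have to resolve.
print(tall(175))                            # 0.5
print(f_and(tall(175), f_not(tall(175))))   # 0.5 rather than 0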

Winograd and Flores' flight into phenomenology and hermeneutics is based
on a recognition of the complexity of the knowledge soup. But instead of
looking for ways of dealing with it in AI terms, they gave up. Although
I sympathize with their suggestion that we use computers to help people
communicate better with each other, I believe that variations of current
AI techniques can support semi-automated tools for knowledge acquisition
from the soup. More invention may be needed for fully automated systems
that can extract theories without human guidance. But there is no clear
evidence that the task is impossible.

------------------------------

Date: Sat, 24 Oct 1987 14:44 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList V5 #244 Neuromorphic Terminology


Terms like "neural networks" were in general use in the 1940's. To
see its various forms I suggest looking through the Bulletin of
Mathematical Biophysics in those years. For example, there is a 1943
paper by Landahl, McCulloch and Pitts called "A statistical
consequence of the logical calculus of Nervous Nets" and a 1945 paper
by McCulloch and Pitts called "A heterarchy of values determined by
the topology of Nervous Nets". It is true that Papert and I confused
this with the title of another McCulloch-Pitts 1943 paper, which used
the term "nervous activity" instead. Both papers were published
together in the same journal issue. In any case, "neural networks"
and "nervous nets" were already in the current jargon.

In the original of my 1954 thesis, I called them "Neural-Analog
Networks", evidently being a little cautious. But in the same year I
retitled it for publication (for University Microfilms) as "Neural
Nets and the Brain Model Problem". My own copy has "Neural Networks
and the ..." printed on its cover. My recollection is that we all
called them, simply, "neural nets". A paper of Leo Verbeek has
"neuronal nets" in its title; a paper of Grey Walter used "Networks of
Neurons"; Ashby had a 1950 paper about "randomly assembled nerve
networks". Farley and Clark wrote about "networks of neuron-like
elements". S. C. Kleene's great 1956 paper on regular expressions was
entitled "Representation of events in Nerve Nets and Finite Automata".

Should we continue to use the term? As Korzybski said, the map is not
the world. When a neurologist invents a theory of how brains learn,
and calls THAT a neural network, and complains that other theories are
not entitled to use that word, well, there is a problem. For even a
"correct" theory would apply only to some certain type of neural
network. Probably we shall eventually find that there are many
different kinds of biological neurons. Some of them, no doubt, will
behave functionally very much like AND gates and OR gates; others will
behave like McCulloch-Pitts linear threshold units; yet others will
work very much like Rosenblatt's simplest perceptrons; others will
participate in various other forms of back-propagated reinforcement,
e.g., Hebb synapses; and so forth. In any case we need a generic term
for all this. One might prefer one like "connectionist network" that
does not appear to assert that we know the final truth about neurons.
But I don't see that as an emergency, and "connectionist" seems too
cumbersome. (Incidentally, we used to call them "connectionistic" -
and that has condensed to "connectionist" for short.)
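
For concreteness, here is a small sketch of a linear threshold unit
(illustrative only, in a modern programming notation; none of the people
named above wrote it this way). Ordinary AND and OR gates are particular
settings of its weights and threshold, and Rosenblatt's simplest
perceptron is the same unit with the weights adjusted by an
error-correction rule:

def threshold_unit(inputs, weights, threshold):
    # Fires (outputs 1) when the weighted sum of the inputs reaches
    # the threshold; otherwise stays silent (outputs 0).
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND and OR gates as special cases of the same unit.
AND = lambda x, y: threshold_unit([x, y], [1, 1], 2)
OR  = lambda x, y: threshold_unit([x, y], [1, 1], 1)

def train_perceptron(examples, n_inputs, epochs=20, rate=1.0):
    # Simplest perceptron: adjust weights and threshold by the error
    # after each example (error-correction rule).
    weights, threshold = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            out = threshold_unit(inputs, weights, threshold)
            error = target - out
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            threshold -= rate * error
    return weights, threshold

# Learning OR from examples instead of wiring it by hand:
or_examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_perceptron(or_examples, 2))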

------------------------------

Date: Sat, 24 Oct 1987 05:46 EDT
From: frodo%research.att.com@RELAY.CS.NET
Subject: Re: AI & Design Automation, Design Assistance

A few references on AI and VLSI CAD:

%author Kowalski, T. J.
%title An Artificial Intelligence Approach to VLSI Design
%publisher Kluwer
%address Boston, MA
%date 1985
%keyword DAA

%author Wolf, W. H.
%author Kowalski, T. J.
%author McFarland, M. C.
%title Knowledge Engineering Issues in VLSI Synthesis
%journal Proceedings of the National Conference on Artificial Intelligence
%pages 866-871
%date 1986

%author Kowalski, T. J.
%author Geiger, D. J.
%author Wolf, W. H.
%author Fichtner, W.
%title The VLSI Design Automation Assistant: From Algorithms To Silicon
%journal Design and Test of Computers
%volume 2
%number 4
%pages 33-43
%date August, 1985

%author McFarland, M. C. S.J.
%author Kowalski, T. J.
%title Assisting DAA: The Use of Global Analysis in an Expert System
%journal Proceedings of the IEEE International Conference on Computer Design
%pages 482-485
%publisher IEEE
%address New York, NY
%date October 6, 1986
%keyword DAA BUD

------------------------------

Date: 21 Oct 87 20:12:35 GMT
From: russell!goldberg@labrea.stanford.edu (Jeffrey Goldberg)
Subject: Computational Linguistics Bibliography by E-Mail (CLBIB)

It is possible to do a keyword search on a > 1700 entry
bibliography of work in computational linguistics published in
the 1980's. Here is how:


Computational Linguistics & Natural Language Processing Bibliography by Mail

There is a large (> 1700 items) bibliography of 1980s natural
language processing and computational linguistics sitting on a
Sun called Russell at CSLI. Anyone with a computer account can
now search this bibliography and get a listing of the result by
using electronic mail.


INSTRUCTIONS

The keywords used for the lookup are to be given in the subject line of
your mail message addressed to clbib@russell.stanford.edu (36.9.0.9).
The body of your message will be thrown away.

Here is an example:

% mail clbib@Russell.Stanford.EDU
Subject: Woods ATN 1980
.
EOT
Null message body; hope that's okay
%

Or more compactly:

% Mail -s "woods atn 1980" clbib@Russell.Stanford.EDU < /dev/null

And here is what you would receive in return:

>>> Date: Wed, 11 Jul 87 12:03:35 PST
>>> To: yourname
>>> Subject: CLBIB search: Woods ATN ...

%A T.P. Kehler
%A R.C. Woods
%T ATN grammar modeling in applied linguistics
%D 1980
%P 123-126
%J ACL Proceedings, 18th Annual Meeting

%A William A. Woods
%T Cascaded ATN grammars
%D 1980
%V 6
%N 1
%P 1-12
%J American Journal of Computational Linguistics

These examples show mailing from a Unix machine, but you can
mail CLBIB from any machine and get a result, provided you
remember to put your search keys in the "Subject:" field of the
message.

The entries you get are in standard Unix 'refer' format (see the man page).
You may put between one and eight keywords in the mail "Subject: "
field, and each keyword can be any string of characters (name,
date, topic, etc.) that you think likely to be found in the items
of interest (case is ignored). The list of keywords is interpreted
conjunctively: "Woods" gets you everything published by anyone
called "Woods" in the 1980s, whereas "Woods 1983" narrows that down
to just the 1983 papers (or papers whose first or last page number
is "1983") by persons named "Woods" (or whose title refers to "woods"),
and, of course, there may be no such items (so the reply would contain
nothing). Only the first six characters in a keyword are significant,
so "generation" is indistinguishable from "generalized", and "Anderson"
is indistinguishable from "Andersson". You should bear this in mind
when you consider the relevance of what you receive to your intended
request.
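
The sketch below is a rough reconstruction of that matching rule in a
modern programming notation (the real scripts are shell and awk, so this
is not their actual code): keywords are combined conjunctively, case is
ignored, and only the first six characters of each word are significant.

def significant(word):
    # Case is ignored and only the first six characters matter.
    return word.lower()[:6]

def entry_matches(entry_text, keywords):
    # Conjunctive match: every keyword must occur somewhere in the entry.
    entry_words = {significant(w) for w in entry_text.split()}
    return all(significant(k) in entry_words for k in keywords)

def search(bibliography, subject_line):
    # bibliography: list of refer-format entries (each a block of %-lines).
    keywords = subject_line.split()[:8]        # between one and eight keys
    return [e for e in bibliography if entry_matches(e, keywords)]

sample_entry = """%A William A. Woods
%T Cascaded ATN grammars
%D 1980
%J American Journal of Computational Linguistics"""

print(search([sample_entry], "woods atn 1980"))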

To take up less CPU at this end, please use as your first keyword the
one that will narrow selections down the most. The first key may not be
a year.

If the first key is "help", you will be sent this file.


BUGS

The system is no better than the mail connections.

This system is worse than the mail connections.

The return address is determined only from information in the "From" field.
"Reply-To:" should be checked but it is not.

The return parsing is stupid and doesn't know all there is to know about
RFC822 mail headers.

The "From" and "Subject" fields must have exactly the "F" and the "S"
in uppercase.

It is impossible to search for only the item "help". (You get this file if
the first key on a subject line is "help".)

It is impossible to get all of the entries for one year. [This is not a
bug. If you want the entire list you can follow the instructions about
such things below.]

The mail handling scripts were written by linguists, not by
programmers. The scripts are fragile and the system may be
taken down without notice at any time.



THE BIBLIOGRAPHY

Some sense of the scope of the bibliography can be gathered from
the following summary information. Here are the authors who find
themselves with a dozen or more of their 1980s publications included:

25 Aravind K. Joshi
19 Bonnie Lynn Webber
18 Robert C. Berwick
18 Jaime G. Carbonell
17 David D. McDonald
15 Philip J. Hayes
15 Wendy G. Lehnert
15 Fernando C.N. Pereira
14 Kathleen R. McKeown
14 Karen Sparck-Jones
13 Eugene Charniak
13 Barbara J. Grosz
13 Jerry R. Hobbs
13 Martin Kay
13 Stuart M. Shieber
12 Douglas E. Appelt
12 Philip R. Cohen
12 C. Raymond Perrault
12 Graeme D. Ritchie
12 Ralph M. Weischedel
12 Yorick A. Wilks

And the papers included distribute across the years like this:

1980: 207
1981: 138
1982: 211
1983: 240
1984: 219
1985: 247
1986: 353
1987: 117

The 1987 figure includes the contents of this year's ACL Proceedings,
and the relevant papers in AAAI-87, but not those from the upcoming
IJCAI meeting in August nor the as-yet-unpublished 1987 European ACL
Proceedings.

Machine-readable copies of
the entire bibliography are available on standard MS-DOS 360K DS/DD disks.
Write to Ms Sheila Lee, CSRP Series, School of Cognitive Sciences,
University of Sussex, BRIGHTON BN1 9QN, UK, asking for a copy of
the CL-NLP8X.BIB bibliography disk, and enclose a check for $16.00 to
cover media, handling, packing and postage costs.


A hardcopy version
of the entire bibliography with a permuted index of titles and an index to
nonprimary authors is to be published by CSLI/Chicago University Press
in November 1987 - details below:

%A Gerald Gazdar
%A Alex Franz
%A Karen Osborne
%A Roger Evans
%D 1987 - in press
%T Natural Language Processing in the 1980's - A Bibliography
%C Stanford
%S CSLI Lecture Notes
%I Chicago University Press

If there is a problem with this program please send a note to:

clbib-request@Russell.stanford.edu

But only questions about the mailing system can be dealt with. Problems
with the content of the bibliography (typos, omissions, etc.) are not
something that we can deal with here.

SEE ALSO

refer(1) Mail(1) tib(local)

AUTHORS & ACKNOWLEDGEMENTS

The bibliography was compiled at the University of Sussex under the
direction of Gerald Gazdar by Gerald Gazdar, Alex Franz, Karen Osborne,
and Roger Evans. Initial c-shell scripts were written by Evans and
Gazdar at Sussex. They were overhauled by Jeff Goldberg at CSLI.

In addition to more standard Unix tools (awk(1), sed(1), Mail(1), etc),
refer(1) (available on most Unix distributions) and Tib (available on the
Unix TeX distribution) are employed.

Unix is a trade mark of AT&T.

SUMMARY

To search the bibliography, mail clbib@Russell.stanford.edu with the keywords
for the search as your Subject line.

To get a help file, send to clbib@Russell.stanford.edu with "help" as the first
keyword in your subject line.

To get in touch with real people, send to clbib-request@Russell.stanford.edu

Information about getting a hardcopy of the bibliography with indices will
be forthcoming any day now.
--
Jeff Goldberg
ARPA goldberg@russell.stanford.edu
UUCP ...!ucbvax!russell.stanford.edu!goldberg

------------------------------

End of AIList Digest
********************
