AIList Digest            Saturday, 20 Apr 1985      Volume 3 : Issue 48
Today's Topics:
Requests - AI in Agriculture & GCLisp on a TI,
Reports - ES Tools Paper,
Bindings - Walter Reitman,
Program - Expert Legal Systems,
Discussion - Knowledge and Information,
Linguistics - Hangul and Cherokee,
Opinion - Policy & Humor & Emotions & Duplicating Humans
----------------------------------------------------------------------
Date: Wed 17 Apr 85 11:57:58-PST
From: Peter Friedland <FRIEDLAND@SUMEX-AIM.ARPA>
Subject: AI in agriculture
I have a friend who has just been hired by the US Dept. of Agriculture to
explore advanced computer applications, including AI, to problems in
agriculture. If anybody has any information about existing systems or research
in this area, please send it to FRIEDLAND@SUMEX (or publish on the AILIST).
Thanks,
Peter
------------------------------
Date: 18 Apr 85 9 32 CST
From: Douglas young <young@uofm-uts.cdn>
Subject: GCLisp
Can anyone please tell me whether there is a way to run
Golden Common Lisp on a TI Professional (equipped with 512K)?
Thanks.
Douglas Young ( University of Manitoba )
young%uofm-uts.cdn%ubc.csnet@csnet-relay.arpa
------------------------------
Date: Fri 19 Apr 85 15:53:14-PST
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: es tools paper
I have gotten an incredible number of responses to the ES paper I have
offered. I didn't realize that that many people even read AIList. As a
result I will have it made into a Stanford Knowledge Systems Lab
(KSL) paper. I am saving all the addresses and will have copies
mailed out asap. If you are in a rush to get this information let me
know ... if the 'emergency' cases are small in number I can send out
a photocopy of the Macintosh output right away. I will also try to
make an announcement about the mass mailing so you shouldn't have to worry
about whether you got missed.
mark
------------------------------
Date: 17 Apr 1985 14:22-EST
From: BATES@BBNG.ARPA
Subject: address for Walter Reitman
In the latest issue of the AIList newsletter, someone mentioned that
Walter Reitman's address was unknown. Here it is, in case you'd like
to publish it:
Dr. Walter Reitman
Palladian Software Inc.
41 Munroe Street
Cambridge, MA 02142
------------------------------
Date: Thu, 18 Apr 85 08:14 EST
From: Carole D Hafner <hafner%northeastern.csnet@csnet-relay.arpa>
Subject: Expert Legal Systems
Responding to John Kastner's request in AILIST V3 #46:
Northeastern University in Boston is in the process of developing a new
program in Law and Computer Science. Although we are not ruling out work
on traditional "computer law" issues (software protection, liability of
computer system vendors, privacy, etc.), our primary interest is the use
of computers to model and/or enhance the process of "legal reasoning" -
whatever that is!! Thus, legal expert systems, natural language processing
and intelligent legal retrieval systems are the areas we want to develop.
We are very interested in hearing from potential graduate students
who want to work on AI and Law - and also potential faculty!
Don Berman                          Carole Hafner
School of Law                       College of Computer Science
berman%northeastern@csnet-relay     hafner%northeastern@csnet-relay

Northeastern University
Boston MA 02115
------------------------------
Date: 18 Apr 85 10:12:47 PST (Thu)
From: Jeff Peck <peck@sri-spam>
Subject: RE: Knowledge Exploration (some suggestions)
A good overview of the problem of defining "information" is in "Theories
of Information" [edited] by Machlup and Mansfield (John Wiley 1984).
It's a thick volume, with articles by a large number of researchers in
information science/computer science/cognitive science, but reading just
the prologue and epilogue will give you a good overview of the current
thinking about Information and Knowledge. In particular, it will warn
you about trying to extend from the Shannon definition of the word,
which generally tends to confuse the user (and the English language).
Machlup is biased toward the position that Information is
what is told from one human to another; all other uses are derived
metaphorically from that usage.
Personally, I would suggest that Information and Knowledge are related
just as Communication and Memory; Information is Knowledge in transit,
or conversely, Knowledge is stored Information. Knowledge is what you
"Know", Information is what you don't know; it's what you want to get
so that you may Know something. In this case, we can agree in principle
with Shannon, that the amount (or value) of information in a signal/symbol
is related to the amount of NEW knowledge we receive.
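To make that Shannon sense concrete, here is a minimal sketch (the two-symbol
source and its probabilities are invented for illustration, not taken from
Shannon or Machlup): the self-information of a symbol that arrives with
probability p is -log2(p) bits, so the less expected the symbol, the more
"news" it delivers.

    import math

    def self_information(p):
        # Bits of "surprise" in a symbol that arrives with probability p.
        return -math.log2(p)

    # A hypothetical source that almost always answers "yes":
    dist = {"yes": 0.9, "no": 0.1}
    print(self_information(dist["yes"]))  # ~0.15 bits: expected, little new knowledge
    print(self_information(dist["no"]))   # ~3.32 bits: surprising, much new knowledge
    # The weighted average, 0.9*0.15 + 0.1*3.32, is about 0.47 bits per
    # symbol: the entropy of the source.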
"Datum", as Machlup points out, is latin for "given";
So, in any context in which a particular object is not the result of
immediate inference or derivation, that information or knowledge may
be considered as data. Inference and analysis may then proceed from
data to derive new knowledge, new "knowns" from the "givens".
These "knowns", when transmitted to a new process, then become the
"givens" for the next round of analysis. So, data is analysed to
create knowledge, which is transmitted as information, and may then again
become data and knowledge.
This explains the relationships, based on function, context and point-of-view,
but leaves open the question of the type or kind of object information is,
or the means of representation for these objects.
The bottom line, I think, is that all of these (info, knowledge, data)
must be seen in the context of their use: the purpose of intelligence is
to make (good) decisions. The memories, beliefs, inferences,
predictions, expectations, etc., that are used to make an "intelligent"
choice, or an "informed decision", are knowledge. Whether something is
knowledge can only be judged by its relationship to decisions that must
be made. I suggest that any theory of knowledge must include a theory
of decisions and utility.
* For those philosophers who still believe in "true, justified belief"
as the definition of knowledge, I submit that "true, justified belief"
is just a special case of "things that are useful in making good decisions".
I suggest that the latter is the more fundamental and more useful concept.
peck@sri-spam
------------------------------
Date: Fri, 5 Apr 85 8:58:13 EST
From: Bruce Nevin <bnevin@bbncch>
Subject: Hangul and Cherokee
Sorry, Jan Steinman, at least one elegant writing system is more recent
than that of Hangul: in the early 1800s, Sequoya (aka George Guess)
invented a very sophisticated syllabary for the Cherokee language.
Each of its 85 characters represents one of the possible syllables of
the language.
This writing system enabled the Cherokees to write down their elaborate
system of laws and to publish newspapers in their language. They still
do publish Cherokee newspapers, though they disbanded as a tribe in 1906.
Their legal tomes still await scholarly study, many in the possession of the
American Philosophical Society in Philadelphia.
A bit of history: the Cherokees were quite successful farmers and
businesspeople, a nation with a republican form of government (not to be
confused with the present GOP!) under a written constitution. Then
gold was discovered in their territory. A treaty obtained from a small
group in the tribe was claimed to be binding on the whole tribe. Their
autonomy as a nation was upheld by the US Supreme Court, and the tribe
overwhelmingly repudiated the treaty, but the State of Georgia used
military force and President Andrew Jackson refused to intervene, hence
the `Trail of Tears' from the Carolinas, Georgia, and Alabama to
Oklahoma. (Occurs to me that `Improper Mathematics' could well be
rewritten to this tune!)
Devanagari, used to write Sanskrit and its descendants, and many other
languages influenced by the spread of Buddhism, is also a syllable-
oriented script, but does have distinct marks for vowels. I should be
surprised if it did not influence King Sejong's scholars.
Bruce Nevin
bbncch.arpa
------------------------------
Date: Thu, 4 Apr 85 23:57 EST
From: Paul Fishwick <Fishwick%upenn.csnet@csnet-relay.arpa>
Subject: A final note on humor...
In response to the "Special Issue on Humor" of the AI bboard, I would
like to make some comments concerning the somewhat involved treatises
presented therein:
> I simply suggest that it be left at the level of:
>
> 1. If something offends you tell the offender and, if
> appropriate, the net audience. [Ok, agreed -pf]
> 2. Remember the individual involved. Next time something
> comes up about that person you will know their
> character is probably suspect, or at the very least
> their sense of judgement.
>
> I for one would feel severely punished if someone so seriously
> suspected my character. An entire audience like this would be
> crushing.
This sounds like something from a CIA primer. Can we not voice
our opinions freely without mass condemnation? I for one would
feel "severely punished" if I could not voice what I felt without
constantly being concerned about being "crushed."
> So, should it be published? Maybe, but not as humor; it should be
> quoted (mention vs use) as an example of vicious pseudo-humor. If
> it must always be presented in the context of its reality, its
> disguise must be removed.
Then the author presents metaphors relating to viruses,
pathogens, antigens, and DNA sequences, which I find quite
entertaining but highly romantic.
The part about removing disguises reminds me of one of the earlier
comments made by someone: namely, that we should remove disguises
associated with cartoons since they often contain violent acts.
And, heaven forbid that we should watch slap-stick.
I found "Freud's Theory of Jokes and Censors" interesting. It might
suggest an inquiry into the meaning of 'funny'. What exactly does
'funny' mean? I would be interested if someone would give me a
definition of 'funny' (no Webster's interpretations, please).
Amazingly enough, these heated discussions about humor might
have some relevance to AI after all... a computational model of
humor: I can see it now - if only I could attach a voice box to
my PC and come up with an algorithm (I would stick it in the
corner of my living room with a microphone so that it could
listen in at parties...)
[I believe that both McCarthy and Minsky have published
papers on humor. -- KIL]
The point is that many people found Polly Nomial hilarious and many
people found it disgusting. The question is: can we look across
the fence and appreciate someone else's point of view (not necessarily
changing our own view)? There is nothing wrong with either view. Now,
let's get back to some AI, shall we?
-paul
------------------------------
Date: Fri, 5 Apr 1985 09:47 EST
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Midnight Theorizer
  From: MINSKY

    Perhaps this, too, explains the prolonged, mourning-like
    depression that follows sexual or other forms of personal
    assault. No matter that the unwelcome intimacy of violence
    may be brief; it nonetheless affects one's attachment
    machinery, however much against one's wish.
So the suggestion is that the rape-victim feels bad because she has
formed an attachment-bond to her attacker? The same "mechanism" is
involved as in the formation of her attachment-bonds to other people?
So she feels bad not because she has been raped, but because her
rapist has then left her? Is there a shred of evidence that any rape
victim has ever felt this way? Is this theory somehow suggesting that
there really isn't much of a difference between rape and seduction and
falling in love? Is it being assumed that fear, pain and loss of
self-esteem are not enough to "explain the prolonged depression" that
follows sexual assault?
[Surely the phrase "affects one's attachment machinery" should be
interpreted as "damaging" or "adversely affecting" the mechanism.
-- KIL]
------------------------------
Date: Monday, 15-Apr-85 16:50:04-BST
From: GORDON JOLY (on ERCC DEC-10) <GCJ%edxa@ucl-cs.arpa>
Subject: Street Speak
The current debate over censorship and jokes in the AI Digest leads me
to think that there is something fundamentally wrong. If you are trying
to mimic the human mind, you have to do both sides of the brain. But if you
are using a computer, you can only duplicate the logical thought process
and not the emotional thought process.
Gordon Joly
aka
The Joka.
------------------------------
Date: Tuesday, 16-Apr-85 10:06:40-BST
From: GORDON JOLY (on ERCC DEC-10) <GCJ%edxa@ucl-cs.arpa>
Subject: Man as Machine
Hardware = Brain
Firmware = Instinct
Software = Intelligence.
"If man is any sort of a machine he is a learning machine"
Jacob Bronowski on The Ascent of Man.
Gordon Joly
------------------------------
End of AIList Digest
********************