AIList Digest            Sunday, 29 Apr 1984       Volume 2 : Issue 53 

Today's Topics:
References - AI and Legal Systems,
Linguistics - "Unless" & "And" & Metaphors,
Jobs - Noncompetition Clauses,
Seminars - System Identification & Chunking and R1-SOAR
----------------------------------------------------------------------

Date: Wed, 25 Apr 84 19:34 MST
From: Kip Cole <KCole@HIS-PHOENIX-MULTICS.ARPA>
Subject: Pointers to AI and Legal Systems

Some time ago there was a request for pointers to references on Legal
Information Systems and AI. I have the following which I can recommend:

1. Deontic Logic, Computational Linguistics & Legal Info. Systems.
Martino ed., published by North Holland.

2. AI and Legal Information Systems. Campi ed. published by North
Holland.

Both books collect papers presented at a conference in Italy on these topics.

Kip Cole, Honeywell Australia.

------------------------------

Date: Wed 25 Apr 84 16:51:30-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Meanings of "Unless"

Multiple meanings for English connectives:

I once read a paper in which it was seriously alleged that the word "unless"
has in excess of 4,000 (that's four thousand) potential logically distinct
meanings when used in writing an English law. Sorry, I don't have the
reference, nor can I remember very many of the meanings.
- Richard

------------------------------

Date: 20 Apr 84 12:52:40-PST (Fri)
From: harpo!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Customers in Ohio and Indiana...
Article-I.D.: eosp1.808

One point of view seems to have been neglected in this discussion.
Suppose we build programs smart enough to "realize" that 'Ohio and Indiana'
really means 'Ohio <inclusive or> Indiana'. Then what happens to the poor
user who really means 'Ohio AND Indiana'?? Suppose the original poor user
in this story had been trying to weed out duplicate accounts?

It seems to me that the best you can do is:
(a) Make a semantic decision based upon a much larger context of
what the user is doing, or:
(b) Catch the ambiguity and ask the user to clarify (a toy sketch of
this option follows below). We can deduce from the original story that
many users will become furious if asked to clarify, by the way.
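
A toy sketch of option (b) in Python. The US_STATES value set and the
interpret() function are illustrative assumptions, not part of any system
from the original story:

    US_STATES = {"Ohio", "Indiana", "Texas"}      # toy value set for one field

    def interpret(query):
        """Return 'union' or 'intersection', asking when both readings fit."""
        words = query.replace(",", " ").split()
        values = [w for w in words if w in US_STATES]
        if "and" in (w.lower() for w in words) and len(values) >= 2:
            # A single customer record has only one state, so 'and' usually
            # means union -- but a duplicate-account hunt wants intersection.
            answer = input("Did you mean customers in ANY of %s, or customers "
                           "appearing in ALL of them? [any/all] " % values)
            return "intersection" if answer.strip().lower() == "all" else "union"
        return "union"

    if __name__ == "__main__":
        print(interpret("customers in Ohio and Indiana"))
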
- Toby Robison (not Robinson!)
allegra!eosp1!robison
decvax!ittvax!eosp1!robison
princeton!eosp1!robison

------------------------------

Date: 19 Apr 84 6:53:00-PST (Thu)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: uicsl.15500032

Here are some sources for metaphor:

1. A book edited by Andrew Ortony. The title is
Metaphor and Thought. There are several good articles in this book, and I
recommend it as a good place to start, but not as the last word.

2. The Psychology Dept. at Adelphi University has been sporadically
putting out a mimeo entitled: The Metaphor Research Newsletter. The latest
edition (which arrived today) indicates that as of January 1985 it will
become a full fledged journal called Metaphor, published by Erlbaum.

3. Dedre Gentner (of BBN) has been doing assorted work on metaphor.

4. Metaphors We Live By, by George Lakoff and Mark Johnson, is fun to read.
They have some good ideas, but they do tend to make too big a deal out of
them. I think it's worth reading.


As for your claim that "man is a BIC pen" is a "bad" metaphor, I
tend to shy away from such a coarse-grained term. For me, metaphors
may be more and less apt, more and less informative, more and less
novel, more and less easily understood, etc. In this particular
example human beings are so complex that there is almost no object that
they cannot be compared to -- however strained the interpretation may
be. BIC pens are well known (thanks to simple construction and a good
advertising agency) for their reliability and their ability to withstand
unreasonable punishment (like being strapped to the heel of an Olympic
figure skater). Similarly, humankind throughout the ages has
successfully held up under all kinds of evolutionary torture, yet we
continue (as a species) to function. Now this interpretation may seem
a little bizarre to you, but to me it seemed to come almost
instantaneously and quite naturally. Can you truly say it is "bad?"

Even an example as silly sounding (at first) as "telephones are like
grapefruits" yields to the great creative power of the human mind. Despite
their simple outer appearance, they both conceal a more complex inner
structure (which, as a youngster, I delighted to dissect). Both are
"machines" for reproducing something -- the telephone reproduces sounds,
while the grapefruit reproduces grapefruits (this one admittedly took a few
seconds more to think of). So what's a "bad" metaphor?


I would love to continue this discussion with interested parties privately,
so as not to take up space in the notesfile. USENET mail can reach me at
...!uiucdcs!uicsl!dinitz

-Rick Dinitz

------------------------------

Date: 21 Apr 84 12:37:35-PST (Sat)
From: hplabs!hao!cires!nbires!opus!rcd @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: opus.386

A question on non-competition clauses of the sort in which you agree, if
you leave a particular job, not to work in that field (or geographical
area), etc.: I once heard that they were essentially not enforceable, the
given reason being that there are certain legal rights you can't give up
(called "unconscionable clauses" in a contract), and that somehow "giving
up the right to make a living (in your profession)" fell into this class.
I don't know if this is actually true - I'd like to hear a qualified
opinion from someone who understands the law or who has been through a case
of this sort.

In any event, I think it's pretty shoddy for an employer to make such
requests of an employee - this is going a long way beyond assigning patent
rights while you're employed and not disclosing company secrets. If the
employer mistrusts you that much, can you trust him? I also think it's
foolish to agree in writing to something that you don't accept, on the
basis that you don't think they'll use it or that it isn't enforceable.
Don't bet against yourself!

...Are you making this up as you go along? Dick Dunn
{hao,ucbvax,allegra}!nbires!rcd (303) 444-5710 x3086

------------------------------

Date: 22 Apr 84 3:09:20-PST (Sun)
From: decvax!cca!ima!inmet!andrew @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: inmet.1313

A few years back, I made the mistake of working for Computervision. They
tried to force me to sign an agreement that I a) would not work for any
competitors for 18 months, and b) would not "entice" other employees into
leaving for an equal length of time. They didn't say a word about continuing
my salary for that time, either!

The incompetents in Personnel (I'd call them "morons", but true morons are
considerably more conscientious workers) didn't notice that I never signed
or returned the above agreement, though!

Andrew W. Rogers, Intermetrics ...{harpo|ihnp4|ima|esquire}!inmet!andrew

------------------------------

Date: 23 Apr 84 7:49:14-PST (Mon)
From: hplabs!hao!seismo!ut-sally!ut-ngp!werner @ Ucb-Vax
Subject: Re: Non-competition clauses. The devils advocate speaks
Article-I.D.: ut-ngp.531

My personal opinion aside, I do have sympathy with the company that
reveals their "
secrets" to an employee, only to have him turn into
competition without having to pay the research costs. Remember also,
that for every one who leaves, there are five guys who stay, and more
likely than not the result of their years of work get 'abducted' also,
as, in a decent research effort, the work is done in a team rather than
by solo-artists.

So, after your flamers burn out, let's hear some ideas which take care
of the interests of all parties, because remember, one day it may be
YOU who stays behind, or YOU who may be the founder of a 3-man think-tank.

werner @ ut-ngp "
every coin has, AT LEAST, 3 sides"

------------------------------

Date: 23 Apr 84 9:35:44-PST (Mon)
From: harpo!ulysses!gamma!epsilon!mb2c!mpr @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: mb2c.242

Non-competition clauses may or may not be enforceable. It depends on
the skill of the party involved or any special knowledge that person
might have. For example, it is not a shoddy practice or expectation
for Coca-Cola to expect its personnel not to work for Pepsi-Cola,
especially (and only) if they have knowledge of the secret formula.

------------------------------

Date: 24 Apr 84 12:27:53-PST (Tue)
From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxa!wetcw @ Ucb-Vax
Subject: Re: Non-competition clauses (The Doctor)
Article-I.D.: pyuxa.710

In reference to the Doctor in Boulder.

The Doctor had joined a practicing group in a clinic. He did
indeed sign a contract which contained a clause which said that
if he left the clinic, he would be unable to practice in the
county in which Boulder is located.

It seems that after several years, the administrator of the
clinic (not a Doctor) decided that Doctor X was not bringing
in enough cash to the group. The Doctor was warned that he
would have to increase his patient load to bring his revenues
up to what they thought they should be. The Doctor refused to
compromise his patients' care by giving them less time. After
a standoff, the administrator and the other Doctors told Doctor
X he would have to leave the clinic.

The crux of the problem was that he did not leave on his own, but
was asked to leave, and therefore believed the non-comp
clause to be invalid. He opened an office in Boulder. Many of his
former patients followed him, much to the displeasure of the
clinic crowd. The clinic then decided to go to court. They
won in court so that Doctor X had to move his practice out of the
county. The patients still followed him.

I think that this case is working its way up to the Supreme
Court. The whole affair was aired last year on [60 Minutes]. The
clinic crew and their administrative lackey came off in a
very bad light. They were arrogant and seemed self-serving
to the nth degree. I hope Doc X wins in the final analysis.
In the meantime, there is a time-limit clause in the contract
which lapses sometime soon.
T. C. Wheeler

------------------------------

Date: 24 Apr 84 20:51:48 PST (Tuesday)
From: Bruce Hamilton <Hamilton.ES@XEROX.ARPA>
Reply-to: Hamilton.ES@XEROX.ARPA
Subject: Seminar - Learning About Systems That Contain State Variables

The research described below sounds closer to what I had in mind when I
raised this issue a couple of weeks ago than the automata-theoretic
responses I tended to get. --Bruce

[For more leads on learning "systems containing state variables", readers
should look into that branch of control theory known as system identification.
Be prepared to deal with some hairy mathematical notation. -- KIL]


Date: 24 Apr 84 11:39 PST
From: mittal.pa
Subject: Reminder: CSDG Today

The CSDG today will be given by Tom Dietterich, Stanford University,
based on his thesis research work.
Time etc: Tuesday, Apr. 24, 4pm, Twin Conf. Rm (1500)

Learning About Systems That Contain State Variables

It is difficult to learn about systems that contain state variables when
those variables are not directly observable. This talk will present an
analysis of this learning problem and describe a method, called the
ITERATIVE EXTENSION METHOD, for solving it. In the iterative extension
method, the learner gradually constructs a partial theory of the
state-containing system. At each stage, the learner applies this
partial theory to interpret the I/O behavior of the system and obtain
additional constraints on the structure and values of its state
variables. These constraints trigger heuristics that hypothesize
additional internal state variables. The improved theory can then be
applied to interpret more complex I/O behavior. This process continues
until a theory of the entire system is obtained. Several conditions
sufficient to guarantee the success of the method will be presented.
The method is being implemented and applied to the problem of learning
UNIX file system commands by observing a tutorial interaction with UNIX.
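
A very loose toy of the loop described above, in Python. The "system" here
is a trivial accumulator hiding one constant, and every name below is
invented for illustration to echo the shape of the method (interpret with a
partial theory, find unexplained I/O behavior, hypothesize a hidden state
value, repeat); it is not Dietterich's implementation.

    def interpret(theory, inp, out):
        """True when the partial theory explains one I/O pair."""
        return sum(theory) + inp == out       # theory = hypothesized hidden constants

    def iterative_extension(io_pairs):
        theory = []                           # start with an empty partial theory
        while True:
            # Apply the current partial theory to the observed I/O behavior.
            unexplained = [(i, o) for i, o in io_pairs
                           if not interpret(theory, i, o)]
            if not unexplained:
                return theory                 # theory now covers the whole system
            # "Heuristic": hypothesize one hidden state value from a residual.
            inp, out = unexplained[0]
            theory = theory + [out - inp - sum(theory)]

    # The hidden system adds a constant 7 to every input.
    print(iterative_extension([(1, 8), (2, 9), (5, 12)]))   # -> [7]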

------------------------------

Date: 19 Apr 1984 1326-EST
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Chunking and R1-SOAR

[Forwarded from the CMU-AI bboard by Laws@SRI-AI.]


"
RECENT PROGRESS IN SOAR: CHUNKING AND R1-SOAR"
by John Laird & Paul Rosenbloom

AI Seminar, Tuesday April 24, 4.00pm, Room 5409

In this talk we present recent progress in the development of the Soar
problem-solving architecture as a general cognitive architecture. This work
consists of first steps toward: (1) an architecture that can learn about all
aspects of its own behavior (by extending chunking to be a general learning
mechanism for Soar); and (2) demonstrating that Soar is (more than)
adequate as a basis for knowledge-intensive (expert systems) programs.

Until now chunking has been a mechanism that could speed up simple
psychological tasks, providing a model of how people improve their
performance via practice. By combining chunking with Soar, we show how
chunking can do the same for AI tasks such as the Eight Puzzle, Tic-Tac-Toe,
and a portion of an expert system. More importantly, we present partial
demonstrations: (1) that chunking can lead to more complex forms of
learning, such as the transfer of learned behavior (that is, the learning of
generalized information), and strategy acquisition; and (2) that it is
possible to build a general problem solver that can learn about all aspects
of its own behavior.
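
A minimal sketch, in Python, of the caching idea behind chunking as
described above: once a subgoal has been solved by search, store a rule
mapping the triggering situation to its result, so later occurrences skip
the search. The names below are invented for illustration; this shows only
the cache-a-rule notion, not the Soar mechanism itself.

    chunks = {}                               # learned rules: situation -> result

    def solve_by_search(state):
        """Stand-in for an expensive problem-space search."""
        return sorted(state)

    def solve(state):
        key = tuple(state)
        if key in chunks:                     # a chunk fires: no search needed
            return chunks[key]
        result = solve_by_search(state)       # search within the subgoal
        chunks[key] = result                  # build a chunk from this experience
        return result

    print(solve([3, 1, 2]))                   # solved by search, then chunked
    print(solve([3, 1, 2]))                   # answered directly by the chunk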

Knowledge-intensive programs are built in Soar by representing basic task
knowledge as problem spaces, with expertise showing up as rules that guide
complex problem-space searches and substitute for expensive problem-space
operators. Implementing a knowledge-intensive system within Soar begins
to show how: (1) a general problem-solving architecture can work at the
knowledge intensive (expert system) end of the problem solving spectrum; (2)
it can integrate basic reasoning and expertise, using both search and
knowledge when relevant; and (3) it can perform knowledge acquisition by
transforming computationally intensive problem solving into efficient
expertise-level rules (via chunking). This approach is demonstrated on a
portion of the expert system R1, which configures computers.

------------------------------

End of AIList Digest
********************
