AIList Digest            Friday, 18 Nov 1988      Volume 8 : Issue 129 

Queries and Responses:

Evaluating Expert Systems (1 response)
College Advisor
Social Impact of A.I.
Program for orifice-plate sizing
"Iterative Deepening" reference wanted (1 response)
AI & the DSM-III
Project Planning: Request for Bibliography
Questionnaire (Neural Nets)
Neural Nets and Search
Learning arbitrary transfer functions (5 responses)

----------------------------------------------------------------------

Date: 15 Nov 88 17:06:44 GMT
From: alejandr@june.cs.washington.edu (Alejandra Sanchez-Frank)
Subject: Evaluating Expert Systems


Do you know of any research, papers, or publications on expert system
evaluation? I'm trying to define what a "good" expert system should be
(what characteristics it should have, and what criteria one should use
to evaluate it).

I would appreciate any comments and/or references you may have.

Thanks,

Alejandra

Alejandra Sanchez-Frank (alejandr@june.cs.washington.edu)
Computer Science Department
University of Washington FR-35
Seattle, Washington 98105

------------------------------

Date: 16 Nov 88 13:46:42 GMT
From: osu-cis!dsacg1!ntm1169@ohio-state.arpa (Mott Given)
Subject: Re: Evaluating Expert Systems


> Do you know of any research, paper or publication on Expert System
> Evaluation?
> I'm trying to define what a "good" expert system should be (What
> characteristics it should have, and what criteria one should use
> to evaluate it).

I would recommend the book "Expert Systems: A Non-programmer's Guide
to Development and Applications" by Paul Siegel, published by TAB
Professional and Reference Books (Blue Ridge Summit, PA). Chapter 10
has information on evaluating the knowledge base.

I would also recommend Paul Harmon's books. One of the books is called
Expert Systems: Artificial Intelligence in Business, published by John
Wiley.

Finally, I would recommend a recently published book called Expert
Systems for Experts. I don't have the author or publisher for it.

--
Mott Given @ Defense Logistics Agency, DSAC-TMP, P.O. Box 1605,
Systems Automation Center, Columbus, OH 43216-5002
UUCP: mgiven%dsacg1.uucp@daitc.arpa  (I speak for myself)
Phone: 614-238-9431

------------------------------

Date: 15 Nov 88 11:52:06 PST
From: oxy!chico@csvax.caltech.edu (Gary Patrick Simonian)
Subject: College Advisor

I am an Occidental student and a senior in the Cognitive
Science major. As a requirement, we are to present a paper
or project for evaluation. I have decided to program an
expert system which performs the duties of a college
counsellor: given a student's interests, the "counsellor"
will advise the student (user) as to which major would be
best to pursue. I would therefore appreciate suggestions
on how to model my program, and on which language would
best suit my purpose.
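One simple way to model the core of such a counsellor is as a set of
weighted interest-to-major rules whose scores are summed and ranked.
The sketch below (in Python; the interests, majors, and weights are
purely illustrative assumptions, not a recommendation for the final
design) shows the idea:

    # Minimal rule-based "college counsellor" sketch.
    # Each rule maps a reported interest to a (major, weight) pair;
    # majors are ranked by the total weight of matching rules.
    RULES = [
        ("mathematics", "Computer Science",  2),
        ("programming", "Computer Science",  3),
        ("psychology",  "Cognitive Science", 3),
        ("language",    "Cognitive Science", 2),
        ("writing",     "English",           3),
    ]

    def advise(interests):
        scores = {}
        for interest, major, weight in RULES:
            if interest in interests:
                scores[major] = scores.get(major, 0) + weight
        # Highest-scoring major first.
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(advise({"programming", "psychology", "language"}))
    # -> [('Cognitive Science', 5), ('Computer Science', 3)]

A real expert system would add certainty factors and an explanation
facility, and any language with good symbolic support (Prolog, Lisp,
or an expert-system shell) would serve equally well.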

------------------------------

Date: 16 Nov 88 03:04:18 GMT
From: mitel!sce!karam@uunet.uu.net (Gerald Karam)
Subject: Social Impact of A.I.

I'm looking for articles or books on the social impact of A.I.,
recent literature preferred. Please reply directly so the net
doesn't get clogged. If there is a sufficient response, I'll post
a summary.

Thanks, Gerald Karam

karam@sce.carleton.ca
karam@sce.uucp

------------------------------

Date: 16 Nov 88 04:34:46 GMT
From: cunyvm!ndsuvm1!ndsuvax!ncsingh@psuvm.bitnet (arun singh)
Subject: program for orifice-plate sizing


I am writing an expert system for flowmeter selection, and I am looking
for an algorithmic program for orifice-plate sizing calculations.
Code for the above can be in C, Pascal, or Fortran.
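For reference, the underlying calculation can be sketched from the
textbook square-edged orifice equation. The Python below is only an
illustration: it assumes a constant discharge coefficient C = 0.61 and
ignores the expansibility factor, where a real sizing program would use
the full ISO 5167 correlations:

    # Orifice-plate sizing sketch (SI units).  Solves the incompressible
    # orifice equation for the bore diameter d:
    #     q_m = C / sqrt(1 - beta^4) * (pi/4) * d^2 * sqrt(2 * dp * rho)
    # where beta = d / D.  C is held at an assumed constant 0.61.
    from math import pi, sqrt

    def size_orifice(q_m, D, dp, rho, C=0.61):
        d = 0.5 * D                    # initial guess for the bore
        for _ in range(50):            # fixed-point iteration on d
            beta = d / D
            d = sqrt(q_m * sqrt(1 - beta ** 4)
                     / (C * (pi / 4) * sqrt(2 * dp * rho)))
        return d

    # Example: water (1000 kg/m^3), 100 mm pipe, 25 kPa differential,
    # 10 kg/s mass flow -> bore of roughly 53 mm.
    print(size_orifice(q_m=10.0, D=0.100, dp=25e3, rho=1000.0))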

Please send me email.
Thanks for your help in advance.

Arun
--
Arun Singh, Department of Computer Science,
300 Minard Hall, N.D. State University, Fargo, ND 58105
BITNET: ncsingh@plains.nodak.edu.bitnet
ARPANET, CSNET: ncsingh%plains.nodak.edu.bitnet@cunyvm.cuny.edu

------------------------------

Date: 16 Nov 88 10:55:21 GMT
From: quintus!ok@unix.sri.com
Subject: "Iterative Deepening" reference wanted

In my Prolog tutorial, I described a search method intermediate between
depth-first and breadth-first, called "iterative deepening" or
"consecutively bounded depth-first search".

Does anyone know who invented these terms, and can you give me
references to readily available books or journal articles describing
them? (I know what iterative deepening is; I'd just like to put proper
references into the next draft of the tutorial.)
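For readers who have not seen the method, the idea is simply to run a
depth-bounded depth-first search repeatedly, raising the bound each
round until a solution is found; memory stays linear in the depth, as
in depth-first search, while a shallowest solution is still found, as
in breadth-first search. A minimal sketch in Python (the
adjacency-dict graph representation is just an illustrative
assumption):

    # Depth-first iterative deepening: repeated depth-limited DFS with
    # an increasing depth bound.
    def depth_limited(node, goal, succ, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for child in succ(node):
            found = depth_limited(child, goal, succ, limit - 1, path + [child])
            if found is not None:
                return found
        return None

    def iterative_deepening(start, goal, succ, max_depth=50):
        for limit in range(max_depth + 1):
            found = depth_limited(start, goal, succ, limit, [start])
            if found is not None:
                return found
        return None

    # Toy graph as an adjacency dict (illustrative assumption):
    g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(iterative_deepening("a", "d", lambda n: g.get(n, [])))
    # -> ['a', 'b', 'd']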

------------------------------

Date: 16 Nov 88 16:15:51 GMT
From: mailrus!eecae!netnews.upenn.edu!linc.cis.upenn.edu!hannan@husc6.harvard.edu (John Hannan)
Subject: Re: "Iterative Deepening" reference wanted

In article <688@quintus.UUCP> ok@quintus () writes:
>In my Prolog tutorial, I described a search method intermediate between
>depth first and breadth first, called
> Iterative Deepening
>or Consecutively Bounded Depth-First Search
>
>Does anyone know who invented these terms,

Check out "Depth-First Iterative Deepening: An Optimal Admissible Tree
Search," by R. Korf in Artificial Intelligence 27(1):97-109, and also a
related article by Korf in the IJCAI-85 proceedings. In the intro to
his journal paper, Korf briefly describes the origin of the algorithm,
but he seems to have been the first person to use this term.

------------------------------

Date: Wed, 16 Nov 88 14:07 CST
From: ANDERSJ%ccm.UManitoba.CA@MITVMA.MIT.EDU
Subject: AI & the DSM-III

Hi again. I have a colleague who is attempting to write a paper on
the use of AI techniques in psychiatric diagnosis in general, and
more specifically using the DSM-III. He tells me he's having a
great deal of trouble finding any material on this, and since he is
between computer accounts at the moment, I told him I would post
something for him. If anybody has any references, material, or other
info, it would be greatly appreciated. His address is:

Ron Mondor
Dept. of Computer Science,
University of Manitoba,
Winnipeg, Manitoba, Canada, R3T 2N2

E-mail may be forwarded through me: <ANDERSJ@UOFMCC.BITNET> (John Anderson)
With great thanks,
J. Anderson

------------------------------

Date: 17 Nov 88 14:47:03 GMT
From: paul.rutgers.edu!cars.rutgers.edu!byerly@rutgers.edu (Boyce Byerly)
Subject: Project Planning: Request for Bibliography


I am currently working on a project in Object-Oriented programming.
The domain I chose was Office Automation; namely, how to represent all
the information-processing needs of a business as a series of discrete
"objects" which get their work done by passing messages among
themselves.
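
As a deliberately toy illustration of that message-passing view (all
class and message names below are invented for the example), the core
mechanism might look like this in Python:

    # Toy office objects that interact only by sending messages through
    # a shared directory; each object handles the messages it knows.
    class OfficeObject:
        def __init__(self, name, office):
            self.name, self.office = name, office
            office[name] = self

        def send(self, to, msg, **data):
            return self.office[to].receive(self.name, msg, data)

        def receive(self, sender, msg, data):
            raise NotImplementedError

    class Payroll(OfficeObject):
        def receive(self, sender, msg, data):
            if msg == "timesheet":
                return {"pay": data["hours"] * data["rate"]}

    office = {}
    Payroll("payroll", office)
    clerk = OfficeObject("clerk", office)
    print(clerk.send("payroll", "timesheet", hours=40, rate=12.5))
    # -> {'pay': 500.0}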

My attention is now mainly focused on project planning and
management. I am interested in how the different aspects of a project
(financial, personnel, management, capital items) interrelate when
working towards some goal (which might be defined in a slightly
"fuzzy" fashion).

It's my impression that a lot of software has been written along the
lines of project planning. I would be interested in both academic
systems (built to demonstrate some theory) and real-world systems
(built to get the job done). An "AI orientation" in these projects
isn't required for me to be interested in them.

Can anyone help me out with references to papers, books, projects, or
periodicals which give more information?

Thanks in advance,

Boyce

------------------------------

Date: 15 Nov 88 18:14:44 GMT
From: sdcc6!sdcc18!cs162faw@ucsd.edu (Phaedrus)
Subject: Questionnaire


About two weeks ago, I posted a request for Axon/Netset information;
I'm afraid my scope was much too small, considering I only received
two responses. I'm sorry to muck up the newsgroup, but I really do
need this information, and my posting disappeared after a week or so.
If you've ever used a neural network simulator, or if you have
opinions regarding representations, please respond to the
questionnaire on neural networking/PDP provided below.

Information from this questionnaire will be used to design a user
interface for an industrial neural network program which
may perform any of the traditional PDP computations (e.g.,
back prop, counter prop, constraint satisfaction, etc.). The program
can handle connections set up in any fashion (e.g., fully
connected, feedback connected, whatever), and it can also
toggle between synchronous and asynchronous modes.
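
For concreteness, the synchronous/asynchronous distinction amounts to
the following (a minimal sketch using threshold-unit dynamics, which
is an illustrative assumption and not a description of the actual
program):

    # Synchronous vs. asynchronous updates for threshold units with
    # states +/-1.  W is the weight matrix, s the state vector.
    import random

    def unit_update(W, s, i):
        net = sum(W[i][j] * s[j] for j in range(len(s)))
        return 1 if net >= 0 else -1

    def synchronous_step(W, s):
        # All units compute their new state from the OLD state vector.
        return [unit_update(W, s, i) for i in range(len(s))]

    def asynchronous_step(W, s):
        # Units update one at a time, each seeing earlier updates.
        s = list(s)
        for i in random.sample(range(len(s)), len(s)):
            s[i] = unit_update(W, s, i)
        return s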

What we're really interested in is what you feel is "hard"
or "easy" about neural net representations.

1. What type of research have you done?

2. What type of research are you likely to do in the future?

3. What is your programming background?

4. What simulators have you used before?
What did you like about their interfaces?

5. Have you used graphical interfaces before?
Did you like them?
Do you think that you could use them for research-oriented problems?
Why or why not?

6. Do you prefer to work with numerical representations of
networks? Weight matrices? Connection matrices?
Why or why not?

7. Would you like to use a graphical PDP interface if it could
craft complicated networks easily? Why or why not?

8. Do you foresee any difficulties you might have with graphical
interfaces?

Any other comments along the same vein will be appreciated.

Your opinion is REALLY wanted, so please take 5 minutes and hit 'r'!!!
Thank you,
James Shaw

------------------------------

Date: 15 Nov 88 23:44:51 GMT
From: ai!zeiden@speedy.wisc.edu (Matthew Zeidenberg)
Subject: Neural Nets and Search

Does anyone know of any papers on neural network solutions to
problems involving heuristic search? I do not mean optimization problems
such as Traveling Salesman, although these are obviously related.

Please reply by e-mail and I will summarize for the net.

------------------------------

Date: 15 Nov 88 14:14:08 GMT
From: efrethei@afit-ab.arpa (Erik J. Fretheim)
Subject: Re: Learning arbitrary transfer functions

In article <378@itivax.UUCP> dhw@itivax.UUCP (David H. West) quotes
aam9n@uvaee.ee.virginia.EDU (Ali Minai) in article
<399@uvaee.ee.virginia.EDU>:
>
>I am looking for any references that might deal with the following
>problem:
>
>y = f(x); f(x) is nonlinear in x
>
>Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}
>
>Can the network now produce ym given xm, even if it has never seen the
>pair before?
>
>That is, given a set of input/output pairs for a nonlinear function, can a
>multi-layer neural network be trained to induce the transfer function
>by being shown the data?
I don't know about non-linear functions in general, but I did try to
train a net (back prop) to learn to compute sine(X) given X. I trained
it for two weeks straight (as virtually the sole user) on an ELXSI.
The result was that, carrying the solution to 5 significant decimal
places, I got a correct solution 40% of the time. Although this is
somewhat better than random chance, it is not good enough to be
useful. I will also note that the solution did not improve
dramatically in the last week of training, so I feel I can safely
assume that the error rate would not decrease. I also tried the same
problem using a two's complement input/output representation and got
about the same results with about the same amount of training, though
the binary representation needed a few more nodes. I was not able to
spot any significant or meaningful patterns in the errors the net was
making, and I do not feel that reducing the number of significant
decimal places would help (even if it were meaningful), as the errors
were not consistently in the last couple of digits but rather were
spread throughout the number (in both binary and decimal
representations). Based on these observations, I don't think a net can
be expected to produce an arbitrary meaningful function. Sure, it can
do 1 + 1 and other simple things, but it trips when it hits something
that cannot easily be exhaustively (or nearly exhaustively) trained.

Just my opinion, but ...
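
For anyone who wants to try a small-scale version of this experiment,
here is a minimal back-prop sketch that fits sine on [0, 2*pi]. All
sizes and rates are arbitrary illustrative choices, and nothing here
reproduces the ELXSI setup described above:

    # Tiny 1-hidden-layer backprop net trained on y = sin(x).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 2 * np.pi, (256, 1))
    Y = np.sin(X)

    H, lr = 20, 0.01
    W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
    W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

    for epoch in range(20000):
        h = np.tanh(X @ W1 + b1)           # hidden activations
        out = h @ W2 + b2                  # linear output unit
        err = out - Y                      # d(squared error)/d(out)
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    print("RMS error:", float(np.sqrt((err ** 2).mean())))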

------------------------------

Date: 15 Nov 88 20:59:46 GMT
From: mailrus!uflorida!novavax!proxftl!tomh@ohio-state.arpa (Tom Holroyd)
Subject: Re: Learning arbitrary transfer functions

Another paper is "Learning with Localized Receptive Fields," by John Moody
and Christian Darken, Yale Computer Science, PO Box 2158, New Haven, CT 06520,
available in the Proceedings of the 1988 Connectionist Models Summer School,
published by Morgan Kaufmann.

They use a population of self-organizing local receptive fields that
cover the input domain, where each receptive field learns the output
value for the region of the input space covered by that field. K-means
clustering is used to find the receptive field centers, and
interpolation is done via a weighted average of nearby fields. They
report convergence about 1000 times faster than back-prop with
conjugate gradient.
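
In outline, the scheme can be sketched as below. This is a rough
Python approximation with made-up details (one-dimensional inputs, a
Gaussian kernel with an arbitrary width); see the paper for the actual
algorithm:

    # Localized receptive fields: k-means picks field centers, each
    # field stores the mean target over its region, and prediction is
    # a kernel-weighted average of nearby fields.
    import numpy as np

    def fit(X, y, k=10, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):                 # plain 1-D k-means
            labels = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean()
        values = np.array([y[labels == j].mean() if np.any(labels == j)
                           else 0.0 for j in range(k)])
        return centers, values

    def predict(x, centers, values, width=0.3):
        w = np.exp(-((x - centers) / width) ** 2)  # nearby fields dominate
        return (w * values).sum() / w.sum()

    X = np.linspace(0, 2 * np.pi, 200)
    c, v = fit(X, np.sin(X))
    print(predict(1.0, c, v))   # roughly sin(1.0) ~ 0.84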

Tom Holroyd
UUCP: {uflorida,uunet}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 15 Nov 88 22:08:20 GMT
From: phoenix!taiwan!hwang@princeton.edu (Jenq-Neng Hwang)
Subject: Re: Learning arbitrary transfer functions


A. Lapedes and R. Farber of Los Alamos National Lab have a technical
report, LA-UR-87-2662, entitled "Nonlinear Signal Processing Using
Neural Networks: Prediction and System Modelling," which tried to
solve the problem mentioned. They also have a paper entitled "How
Neural Nets Work," pp. 442-456, in Proc. IEEE Conf. on Neural
Information Processing Systems -- Natural and Synthetic, Denver,
November 1987.

------------------------------

Date: 16 Nov 88 21:04:58 GMT
From: sword!gamma!pyuxp!nvuxj!nvuxl!nvuxh!hall@faline.bellcore.com (Michael R Hall)
Subject: Re: Learning arbitrary transfer functions

In a previous article, Ali Minai writes:
>I am looking for any references that might deal with the following
>problem:
>
>y = f(x); f(x) is nonlinear in x
>
>Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}
>
>Can the network now produce ym given xm, even if it has never seen the
>pair before?
>
>That is, given a set of input/output pairs for a nonlinear function, can a
>multi-layer neural network be trained to induce the transfer function
>by being shown the data? What are the network requirements? What
>are the limitations, if any? Are there theoretical bounds on
>the order, degree or complexity of learnable functions for networks
>of a given type?
>
>Note that I am speaking here of *continuous* functions, not discrete-valued
>ones, so there is no immediate congruence with classification.

The problem you raise is not just a neural net problem. Your
function-learning problem has been termed "concept learning" by
some researchers (e.g., Larry Rendell); a concept is a function.
There are many non-neural learning algorithms (e.g., PLS1) that are
designed to learn concepts. My opinion is that concept learning
algorithms generally learn concepts more accurately, more easily,
and faster than neural nets do. (Anybody willing to pit their neural
net against my implementation of PLS to learn a concept from natural
data?) Neural nets are more general than concept learning
algorithms, so it is only natural that they should not learn
concepts as quickly (in terms of exposures) or as well (in terms of
accuracy after a given number of exposures).

Valiant and friends have come up with theories of the sort you
desire, but only for boolean concepts (binary y's in your notation)
and for learning algorithms in general, not neural nets in particular.
"Graded concepts" are continuous. To my knowledge, no work has
addressed the theoretical learnability of graded concepts. Before
trying to come up with theoretical learnability results for neural
networks, one should probably address the graded concept learning
problem in general. The Valiant approach of a Probably Approximately
Correct (PAC) learning criterion should be applicable to graded
concepts.
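
For reference, the standard result in that framework for a finite
hypothesis class H is that a consistent learner needs only
m >= (1/epsilon)(ln|H| + ln(1/delta)) examples to be, with probability
at least 1 - delta, within error epsilon of the target concept; one
way to pose the graded-concept question is to ask what replaces the
ln|H| term when the outputs are continuous.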
--
Michael R. Hall                                | Bell Communications Research
"I'm just a symptom of the moral decay that's  | hall%nvuxh.UUCP@bellcore.COM
gnawing at the heart of the country" -The The  | bellcore!nvuxh!hall

------------------------------

Date: Wed, 16 Nov 88 17:45:54 EST
From: Raul.Valdes-Perez@B.GP.CS.CMU.EDU
Reply-to: valdes@cs.cmu.edu
Subject: learning transfer functions

System identification treats the induction of a mathematical
characterization of a dynamical system from behavioral data.
A nice tutorial on system identification is the following article:

K.J. Åström and P. Eykhoff, "System Identification - A Survey,"
Automatica, Vol. 7, 1971, pp. 123-162.

I recall that it includes a discussion on learning transfer functions.
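
As a toy illustration of the black-box end of system identification
(not a method from the survey; the first-order model and all numbers
below are illustrative assumptions), a discrete-time transfer function
can be induced from input/output records by least squares:

    # Fit y[t] = a*y[t-1] + b*u[t-1] (first-order ARX model) to
    # recorded input u and output y by ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.normal(size=500)                 # excitation input
    y = np.zeros(500)
    for t in range(1, 500):                  # simulated "true" system
        y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]

    Phi = np.column_stack([y[:-1], u[:-1]])  # regressors y[t-1], u[t-1]
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print(theta)                             # approximately [0.9, 0.5]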

Raul Valdes-Perez

------------------------------

End of AIList Digest
********************
