NL-KR Digest      (Tue Jun 26 11:16:56 1990)      Volume 7 No. 11 

Today's Topics:

a machine more intelligent than human beings?
NEW CSLI VISITOR
References (ftp) needed for knowledge rep/aq
Re: Classification and Regression (CART)
grammatical inference
NLP in Real Time with Limited STM (Unisys Seminar)

Submissions: nl-kr@cs.rpi.edu
Requests, policy: nl-kr-request@cs.rpi.edu
Back issues are available from host archive.cs.rpi.edu [128.213.5.17] in
the files nl-kr/Vxx/Nyy (i.e., nl-kr/V01/N01 for V1#1); mail requests will
not be promptly satisfied. If you can't reach `cs.rpi.edu' you may want
to use `turing.cs.rpi.edu' instead.
BITNET subscribers: we now have a LISTSERVer for nl-kr.
You may send submissions to NL-KR@RPIECS
and any listserv-style administrative requests to LISTSERV@RPIECS.
========================================================================
[ This issue represents the end of the backlog, and even a few new
articles. At this point, nl-kr will go off the air for about a
week while I sort through the vast administrative backlog - CW ]

-----------------------------------------------------------------

To: nl-kr@cs.rpi.edu
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.nlang-know-rep
From: deng@shire.cs.psu.edu (Mingqi Deng)
Subject: a machine more intelligent than human beings?
Date: Mon, 16 Apr 90 01:18:12 GMT

(This was posted with a messed-up subject/newsgroup line. I am
reposting it. Sorry.)

One thought occurred to me a few days ago, and I think it would be
interesting to have some discussion on it.

Suppose that Ms. Wonder (disclaimer: she is not a relative of
Wonder Woman) built a chess machine (perhaps a computer program)
called MisChess. MisChess exhibits master-level chess skill and
chess-learning ability. Ms. Wonder has kept how MisChess works
a secret. Then Ms. Wonder mysteriously vanished. Every
top-ranked scientist in the world has been consulted since, and
yet nobody has been able to unveil the secret.

Question 1: Is MisChess intelligent as far as chess is concerned?

Question 2: Mr. Artistein managed to obtain the design of MisChess.
It turned out to be a simple architecture: a set of units and
connections with variable connection strengths, and a small set
of rules governing the change of connection strengths. However,
there are 10 million units and 10 billion connections.
Mr. Artistein comes to Mr. Compstein for help, since he has
access to the world's fastest computer. Mr. Compstein's
computer prints out a message after one second: "the following
environment variables are undefined", followed by a list of the
variables (the list stopped at the 10-millionth entry when
Mr. Compstein ran out of printer paper). So they consulted the
human-development expert Mr. Environstein. He pointed out that
they need to duplicate exactly the environment MisChess
experienced in order to explain how MisChess reached its
particular layout of connections. Mr. Artistein has since worked
hard with his 10,000,000 students on duplicating the environment,
which fortunately is hinted at in a diary left by Ms. Wonder.
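
The post never specifies MisChess's rule set, but for concreteness,
a minimal sketch of the kind of architecture described (units,
weighted connections, and a small rule governing connection-strength
change) might look like the following. The Hebbian update and all
names here are assumptions, not Ms. Wonder's secret:

import random

class MisChessNet:
    """Illustrative sketch only: units, weighted connections, and one
    simple rule changing connection strengths with experience. The
    Hebbian update is an assumed stand-in for the unknown rule set."""

    def __init__(self, n_units, learning_rate=0.01, seed=0):
        rng = random.Random(seed)
        self.n = n_units
        # Connection strengths, initially small and random.
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(n_units)]
                  for _ in range(n_units)]
        self.lr = learning_rate

    def activate(self, inputs):
        # One synchronous step: weighted sums through a hard threshold.
        return [1.0 if sum(self.w[i][j] * inputs[j]
                           for j in range(self.n)) > 0.0 else 0.0
                for i in range(self.n)]

    def experience(self, inputs):
        # Hebbian rule: strengthen connections between co-active units.
        outputs = self.activate(inputs)
        for i in range(self.n):
            for j in range(self.n):
                self.w[i][j] += self.lr * outputs[i] * inputs[j]

net = MisChessNet(n_units=8)              # the story's net has 10 million
net.experience([1, 0, 1, 0, 0, 1, 0, 0])  # one encoded "experience"

Even with the whole rule set in hand, the trained strengths are
opaque without replaying the training environment, which is exactly
the predicament Mr. Environstein describes.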

While they are still working on the project, scientists,
lawyers, and human- and machine-rights groups all over the
world are engaged in heated discussion as to whether MisChess
can be considered as intelligent as we are.

So, what is your opinion?

Mingqi
deng@shire.cs.psu.edu
deng@psuvaxs.BITNET
deng@psuvax1.UUCP

------------------------------

To: nl-kr@cs.rpi.edu
Date: Fri, 15 Jun 90 14:51:44 PDT
From: ingrid@russell.Stanford.EDU (Ingrid Deiwiks)
Subject: NEW CSLI VISITOR

NEW CSLI/XEROX PARC VISITOR

Prashant Parikh, Computer Science Group, Tata Institute of Fundamental
Research, India. During his stay at CSLI and Xerox PARC, Prashant
will continue his work on situated communication within the frameworks
of situation theory and game theory. In particular, he will work on a
book based on his dissertation. He will also explore other related
topics involving situation theory, situated agency, and applications
to natural-language semantics. Prashant's email address is
parikh@csli, and he will visit from June through September 1990.

------------------------------

To: nl-kr@cs.rpi.edu
From: higginbotham@darwin.ntu.edu.au
Newsgroups: comp.ai.nlang-know-rep,comp.ai
Subject: References (ftp) needed for knowledge rep/aq
Date: 23 Jun 90 05:11:04 GMT

Can anyone direct me to a source from which I can obtain a
bibliography on knowledge acquisition, preferably annotated, via
ftp or e-mail? Also, any full-length papers that can be obtained
in the same way would be welcome. Free, if possible.

If there is sufficient response to this query, I will post the
results.

Thanks,

Tom
-----------------------------------------------------------------
T. F. Higginbotham               Northern Territory University
higginbotham@darwin.ntu.edu.au   School of Engineering, Mathematics,
Telephone 61 - 89 - 46 - 6546      Physics and Computer Science
Fax 61 - 89 - 26 - 0612          P. O. Box 40146, Building 16
                                 Casuarina, Northern Territory 0811
                                 AUSTRALIA

------------------------------

To: nl-kr@cs.rpi.edu
Date: Fri, 22 Jun 90 22:53:44 PDT
From: ajayshah%aludra.usc.edu@usc.edu (Ajay Shah)
Subject: Re: Classification and Regression (CART)
Newsgroups: comp.ai.nlang-know-rep

I'm desperately looking for source code connected with the CART book;
I wrote email to Friedman but no luck :-(

Do you have any source or know places to look?

Thankx,
-me.

PS: I plan to churn on a 386/387 at my apartment and on a
Sparcstation at work. My languages are Pascal > C >
Fortran. I'm in economics, and will be using economics data.
-
_______________________________________________________________________________
Ajay Shah, (213)747-9991, ajayshah@usc.edu
The more things change, the more they stay insane.
_______________________________________________________________________________

------------------------------

To: nl-kr@cs.rpi.edu
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.nlang-know-rep
From: dong@wam.umd.edu (D C)
Subject: grammatical inference
Keywords: Thesis topic
Reply-To: dong@wam.umd.edu (D C)
Date: Sun, 24 Jun 90 22:23:57 GMT

Hello, world:
I am a graduate student doing research on neural networks.
Now I need to choose a Ph.D. thesis topic related to NN grammatical
inference.
Currently, we can construct an automaton from numerous presentations of
examples from a toy regular grammar or context-free grammar.
Here is the question:
Is there anywhere this kind of technique can be used, such as
natural-language inference? And how useful are context-free grammars
in practical applications?
What are the common approaches to grammar inference now?
Any other suggestions for a good Ph.D. project are appreciated.
Also, I would appreciate some good references if possible.
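
For concreteness, one standard starting point for inferring a regular
grammar from positive examples is the prefix-tree acceptor, which
state-merging algorithms such as RPNI then generalize. Below is a
minimal sketch; the toy language and all names are illustrative:

def build_prefix_tree_acceptor(positive_examples):
    # Build a DFA that accepts exactly the positive example strings;
    # state-merging inference algorithms generalize from this start.
    transitions = {}   # (state, symbol) -> state
    accepting = set()
    next_state = 1     # state 0 is the start state
    for word in positive_examples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, word):
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

# Toy sample from the regular language a b* a
trans, acc = build_prefix_tree_acceptor(["aa", "aba", "abba"])
print(accepts(trans, acc, "aba"))    # True: in the sample
print(accepts(trans, acc, "abbba"))  # False: unseen until states merge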

-Dong

------------------------------

To: nl-kr@cs.rpi.edu
Date: Mon, 25 Jun 90 16:29:45 -0400
From: finin@PRC.Unisys.COM
Subject: NLP in Real Time with Limited STM (Unisys Seminar)


Unisys AI Seminar
Center for Advanced Information Technology*

How to Process Natural Language
in Real Time and with Limited STM


Glenn David Blank
CSEE Department, Lehigh University
GDB0%lehigh.bitnet@ibm1.cc.lehigh.edu



People process natural language in real time and with very limited
short-term memories. I will describe a natural language processor
that imitates these desirable attributes. Register Vector Grammar
(RVG) is equivalent to finite state automata (FSA), but implied in
the name are two innovations that allow RVG to be far more compact
than simple FSA when it comes to natural languages. The *vectors*
of simple ternary values provide a way to represent and propagate
discontinuous constraints on ordering. Since simple FSA (and many
more complex formalisms) are good at serial constraints but poor at
non-serial ones, the vector schema leads to an enormous reduction
in grammar size. The *registers* keep track of alternative states.
The processor permits reanalysis of structural ambiguities by back-
tracking to system states in a fixed array of registers. The number
of registers never grows; instead, RVG *reuses* them. Like a human
language processor, RVG forgets many possible ambiguities, and runs
in linear time.
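
The abstract above is the authority on how RVG actually works; purely
as an illustration of the ternary-vector idea, one can imagine each
word category carrying a condition vector and an action vector over
the values '+', '-', and '0' (don't care). The three features and the
tiny lexicon below are invented, not Blank's:

# Toy ternary constraint vectors (not actual RVG).
def matches(condition, state):
    # A condition matches when every non-'0' position agrees.
    return all(c == '0' or c == s for c, s in zip(condition, state))

def apply_action(action, state):
    # Non-'0' positions of the action overwrite the state.
    return ''.join(s if a == '0' else a for a, s in zip(action, state))

# Invented features: [subject seen, verb seen, object slot open]
LEXICON = {
    'she': ('-00', '+00'),  # subject: requires no subject yet
    'saw': ('+-0', '0++'),  # verb: requires a subject, opens object slot
    'him': ('+++', '00-'),  # object: requires subject, verb, open slot
}

def parse(words, state='---'):
    for w in words:
        condition, action = LEXICON[w]
        if not matches(condition, state):
            return None      # ordering constraint violated
        state = apply_action(action, state)
    return state

print(parse(['she', 'saw', 'him']))  # '++-': parse succeeds
print(parse(['saw', 'she', 'him']))  # None: verb before any subject

A single vector position can thus enforce an ordering constraint
between non-adjacent words without multiplying FSA states; the
registers and backtracking of RVG proper are not modeled here.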

Work in progress extends this baseline architecture. Subcategories
enable lexical entries to impose discontinuous constraints, calling
for particular complement structures. Conjunctions resume some of
the information held in boundary registers for backtracking. Affix
agreement and semantic interpretation exploit closed categories as
opportunities for fixed finite structures and constant-time access.
Rather than unifying unbounded graphs, why not represent affixes as
enumerable sets, implemented as bit vectors? Rather than searching
unbounded mazes of syntax trees or history lists, why not let each
grammatical role and personal pronoun directly index its potential
referents? RVG's real time performance can help speech processing;
its computational simplicity could facilitate language acquisition.
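
As a gloss on the closing suggestion, agreement over a closed set of
person/number values can indeed be checked with one bitwise AND when
each value gets a bit. The feature names and lexicon below are
invented for illustration:

# Closed category of person/number values, one bit each.
SG1, SG2, SG3, PL1, PL2, PL3 = (1 << i for i in range(6))
ALL = SG1 | SG2 | SG3 | PL1 | PL2 | PL3

# Each form carries the set of values it is compatible with.
SUBJECTS = {'I': SG1, 'she': SG3, 'they': PL3}
VERBS = {'walks': SG3, 'walk': ALL & ~SG3}  # 'walk': all but 3rd singular

def agrees(subject, verb):
    # Agreement holds iff the compatibility sets intersect:
    # one constant-time AND, no graph unification.
    return (SUBJECTS[subject] & VERBS[verb]) != 0

print(agrees('she', 'walks'))   # True
print(agrees('they', 'walks'))  # False
print(agrees('I', 'walk'))      # True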

11:00 am Wednesday, July 11, 1990
CAIT Conference Room
Unisys Center for Advanced Information Technology
Great Valley Laboratories #1
70 East Swedesford Road
Paoli PA 19301

* Formerly Unisys Paoli Research Center. Non-Unisys visitors who want to
attend should send email to finin@prc.unisys.com or call 215-648-2480.

------------------------------
End of NL-KR Digest
*******************

