AIList Digest Friday, 23 Sep 1983 Volume 1 : Issue 61
Today's Topics:
AI Applications - Music,
AI at Edinburgh - Request,
Games - Prolog Puzzle Solution,
Seminars - Talkware & Hofstadter,
Architectures - Parallelism,
Technical Reports - Rutgers
----------------------------------------------------------------------
Date: 20 Sep 1983 2120-PDT
From: FC01@USC-ECL
Subject: Re: Music in AI
Music in AI - look up Art Wink, formerly of the U. of Pgh. Dept. of
Info. Sci. He had a real nice program to imitate Debussy (experts
could not tell its compositions from the originals).
------------------------------
Date: 18 Sep 83 12:01:27-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: U of Edinburgh, Scotland Inquiry
Article-I.D.: dartvax.224
Who knows anything about the current status of the Artificial
Intelligence school at the University of Edinburgh? I've heard
they've been through hard times in recent years, what with the
Lighthill report and British funding shakeups, but what has been going
on within the past year or so? I'd appreciate any gossip/rumors/facts
and, if anyone knows that they're on the net, their address.
--decvax!dartvax!dartlib!lorien
Lorien Y. Pratt
------------------------------
Date: Mon 19 Sep 83 02:25:41-PDT
From: Motoi Suwa <Suwa@Sumex-AIM>
Subject: Puzzle Solution
[Reprinted from the Prolog Digest.]
Date: 14 Sep. 1983
From: K.Handa ETL Japan
Subject: Another Puzzle Solution
This is a solution to Alan's puzzle introduced on 24 Aug.
?-go(10).
will display the ten-digit number as follows:
-->6210001000
and
?-go(4).
will:
-->1210
-->2020
I found the following numbers:
6210001000
521001000
42101000
3211000
21200
1210
2020
The following is the complete program (DEC10 Prolog Ver.3):
/*** initial assertion ***/
init(D):- ass_xn(D),assert(rest(D)),!.
ass_xn(0):- !.
ass_xn(D):- D1 is D-1,asserta(x(D1,_)),asserta(n(D1)),ass_xn(D1).
/*** main program ***/
go(D):- init(D),guess(D,0).
go(_):- abolish(x,2),abolish(n,1),abolish(rest,1).
/* guess 'N'th digit */
guess(D,D):- result,!,fail.
guess(D,N):- x(N,X),var(X),!,n(Y),N=<Y,N*Y=<D,ass(N,Y),set(D,N,Y),
N1 is N+1,guess(D,N1).
guess(D,N):- x(N,X),set(D,N,X),N1 is N+1,guess(D,N1).
/* let 'N'th digit be 'X' */
ass(N,X):- only(retract(x(N,_))),asserta(x(N,X)),only(update(1)).
ass(N,_):- retract(x(N,_)),asserta(x(N,_)),update(-1),!,fail.
only(X):- X,!.
/* 'X' 'N's appear in the sequence of digit */
set(D,N,X):- count(N,Y),rest(Z),!,Y=<X,X=<Y+Z,X1 is X-Y,set1(D,N,X1,0).
set1(_,N,0,_):- !.
set1(D,N,X,P):- n(M),P=<M,x(M,Y),var(Y),M*N=<D,ass(M,N),set(D,M,N),
X1 is X-1,P1 is M,set1(D,N,X1,P1).
/* 'X' is the number of digits which value is 'N' */
count(N,X):- bagof(M,M^(x(M,Z),nonvar(Z),Z=N),L),length(L,X).
count(_,0).
/* update the number of digits which value is not yet assigned */
update(Z):- only(retract(rest(X))),Z1 is X-Z,assert(rest(Z1)).
update(Z):- retract(rest(X)),Z1 is X+Z,assert(rest(Z1)),!,fail.
/* display the result */
result:- print(-->),n(N),x(N,M),print(M),fail.
result:- nl.
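For readers without a DEC10 Prolog handy, the listed solutions can be
cross-checked with a short brute-force sketch in Python. This is my own
illustration, not part of the original program, and it assumes the puzzle
is the familiar self-describing sequence: the digit at position N gives
the number of times N occurs in the whole number.

```python
# Check whether a digit string is self-describing: the digit at
# position i equals the number of times digit i occurs in the string.
def is_self_describing(s):
    return all(int(s[i]) == s.count(str(i)) for i in range(len(s)))

# Brute-force search over all D-digit numbers (feasible for small D;
# leading zeros are impossible, since a leading 0 would claim the
# string contains no zeros while containing one).
def solve(d):
    return [str(k) for k in range(10 ** (d - 1), 10 ** d)
            if is_self_describing(str(k))]

print(solve(4))                           # ['1210', '2020']
print(is_self_describing("6210001000"))   # True
```

The exhaustive search reproduces both 4-digit answers shown above, and
the single 10-digit answer checks out directly.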
------------------------------
Date: 21 Sep 83 1539 PDT
From: David Wilkins <DEW@SU-AI>
Subject: Talkware Seminars
[Reprinted from the SU-SCORE bboard.]
1127 TW Talkware seminar Weds. 2:15
I will be organizing a weekly seminar this fall on a new area I am
currently developing as a research topic: the theory of "talkware".
This area deals with the design and analysis of languages that are
used in computing, but are not programming languages. These include
specification languages, representation languages, command languages,
protocols, hardware description languages, data base query languages,
etc. There is currently a lot of ad hoc but sophisticated practice
for which a more coherent and general framework needs to be developed.
The situation is analogous to the development of principles of
programming languages from the diversity of "coding" languages and
methods that existed in the early fifties.
The seminar will include outside speakers and student presentations of
relevant literature, emphasizing how the technical issues dealt with
in current projects fit into the development of talkware theory. It will
meet at 2:15 every Wednesday in Jacks 301. The first meeting will be
Wed. Sept. 28. For a more extensive description, see
{SCORE}<WINOGRAD>TALKWARE or {SAIL}TALKWA[1,TW].
------------------------------
Date: Thu 22 Sep 00:23
From: Jeff Shrager
Subject: Hofstadter seminar at MIT
[Reprinted from the CMU-AI bboard.]
Douglas Hofstadter is giving a course this semester at MIT. I thought
that the abstract would interest some of you. The first session takes
place today.
------
"Perception, Semanticity, and Statistically Emergent Mentality"
A seminar to be given fall semester by Douglas Hofstadter
In this seminar, I will present my viewpoint about the nature
of mind and the goals of AI. I will try to explain (and thereby
develop) my vision of how we perceive the essence of things, filtering
out the details and getting at their conceptual core. I call this
"deep perception", or "recognition".
We will review some earlier projects that attacked some
related problems, but primarily we will be focussing on my own
research projects, specifically: Seek-Whence (perception of sequential
patterns), Letter Spirit (perception of the style of letters), Jumbo
(reshuffling of parts to make "well-chunked" wholes), and Deep Sea
(analogical perception). These tightly related projects share a
central philosophy: that cognition (mentality) cannot be programmed
explicitly but must emerge "epiphenomenally", i.e., as a consequence
of the nondeterministic interaction of many independent "subcognitive"
pieces. Thus the overall "mentality" of such a system is not directly
programmed; rather, it EMERGES as an observable (but unprogrammed)
phenomenon -- a statistical consequence of many tiny semi-cooperating
(and of course programmed) pieces. My projects all involve certain
notions under development, such as:
-- "activation level": a measure of the estimated relevance of a given
Platonic concept at a given time;
-- "happiness": a measure of how easy it is to accomodate a structure
and its currently accepted Platonic class to each other;
-- "nondeterministic terraced scan": a method of homing in to the best
category to which to assign something;
-- "semanticity": the measure of how abstractly rooted (intensional) a
perception is;
-- "slippability": the ease of mutability of intensional
representational structures into "semantically close" structures;
-- "system temprature": a number measuring how chaotically active the
whole system is.
This strategy for AI is permeated by probabilistic or
statistical ideas. The main idea is that things need not happen in
any fixed order; in fact, that chaos is often the best path to follow
in building up order. One puts faith in the reliability of
statistics: a sensible, coherent total behavior will emerge when there
are enough small independent events being influenced by high-level
parameters such as temperature, activation levels, happinesses. A
challenge is to develop ways such a system can watch its own
activities and use those observations to evaluate its own progress, to
detect and pull itself out of ruts it chances to fall into, and to
guide itself toward a satisfying outcome.
... Prerequisites: an ability to program well, preferably in
Lisp, and an interest in philosophy of mind and artificial
intelligence.
------------------------------
Date: 18 Sep 83 22:48:56-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Parallelism et al.
Article-I.D.: dartvax.229
The Parallelism and AI projects at the University of Maryland sound
very interesting. I agree with an article posted a few days back that
parallel hardware won't necessarily produce any significantly new
methods of computing, as we've been running parallel virtual machines
all along. Parallel hardware is another milestone along the road to
"thinking in parallel", however, getting away from the purely Von
Neumann thinking that's done in the DP world these days. It's always
seemed silly to me that our computers are so serial when our brains,
the primary analogy we have for "thinking machines", are so obviously
parallel mechanisms. Finally we have the technology (software AND
hardware) to build into our machine architectures the cognitive
concepts that evolution has already found most powerful.
I feel that the sector of the Artificial Intelligence community that
pays close attention to psychology and the workings of the human brain
deserves more attention these days, as we move from writing AI
programs that "work" (and don't get me wrong, they work very well!) to
those that have a generalizable theoretical basis. One of these years,
and better sooner than later, we'll make a quantum leap in AI research
and articulate some of the fundamental structures and methods that are
used for thinking. These may or may not be isomorphic to human
thinking, but in either case we'll do well to look to the human brain
for inspiration.
I'd like to hear more about the work at the University of Maryland; in
particular the Prolog and the parallel-vision projects.
What do you think of the debate between what I'll call the Hofstadter
viewpoint: that we should think long term about the future of
artificial intelligence, and the Feigenbaum credo: that we should stop
philosophizing and build something that works? (Apologies to you both
if I've misquoted)
--Lorien Y. Pratt
decvax!dartvax!lorien
(Dartmouth College)
------------------------------
Date: 18 Sep 83 23:30:54-PDT (Sun)
From: pur-ee!uiucdcs!uiuccsb!cytron @ Ucb-Vax
Subject: AI and architectures - (nf)
Article-I.D.: uiucdcs.2883
Forwarded at the request of the poster: /***** uiuccsb:net.arch /
umcp-cs!speaker / 12:20 am Sep 17, 1983 */
The fact remains that if we don't have the algorithms for
doing something with current hardware, we still won't be
able to do it with faster or more powerful hardware.
The fact remains that if we don't have any algorithms to start with
then we shouldn't even be talking implementation. This sounds like a
software engineer's solution anyway, "design the software and then
find a CPU to run it on."
New architectures, while not providing a direct solution to a lot of
AI problems, provide the test-bed necessary for advanced AI research.
That's why everyone wants to build these "amazingly massive" parallel
architectures. Without them, AI research could grind to a standstill.
To some extent these efforts change our way of thinking
about problems, but for the most part they only speed up
what we knew how to do already.
Parallel computation is more than just "speeding things up." Some
problems are better solved concurrently.
My own belief is that the "missing link" to AI is a lot of
deep thought and hard work, followed by VLSI implementation
of algorithms that have (probably) been tested using
conventional software running on conventional architectures.
Gad...that's really provincial: "deep thought, hard work, followed by
VLSI implementation." Are you willing to wait a millennium or two while
your VAX grinds through the development and testing of a truly
high-velocity AI system?
If we can master knowledge representation and learning, we
can begin to get away from programming by full analysis of
every part of every algorithm needed for every task in a
domain. That would speed up our progress more than new
architectures.
I agree. I also agree with you that hardware is not in itself a
solution and that we need more thought put to the problems of building
intelligent systems. What I am trying to point out, however, is that
we need integrated hardware/software solutions. Highly parallel
computer systems will become a necessity, not only for research but
for implementation.
- Speaker
-- Full-Name: Speaker-To-Animals
Csnet: speaker@umcp-cs
Arpa: speaker.umcp-cs@UDel-Relay
This must be hell...all I can see are flames... towering flames!
------------------------------
Date: 19 Sep 83 9:36:35-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: AI and Architecture
Article-I.D.: ncsu.2338
Sheesh. Everyone seems so excited about whether a parallel machine
is or will lead to fundamentally new things. I agree with someone's
comment that time-sharing and multi-programming have been
conceptually quite parallel "virtual" machines for some time.
Just more and cheaper of the same. Perhaps the added availability
will lead someone to have a good idea or two about how to do
something better -- in that sense it seems certain that something
good will come of proliferation and popularization of parallelism.
But for my money, there is nothing really, fundamentally different.
Unless it is non-determinism. Parallel systems tend to be less
deterministic than their simplex brethren, though vast efforts are
usually expended in an effort to stamp out this property. Take me
for example: I am VERY non-deterministic (just ask my wife) and yet I
am also smarter than a lot of AI programs. The breakthrough in AI/Arch
will, in my non-determined opinion, come when people stop trying to
squeeze parallel systems into the more restricted modes of simplex
systems, and develop new paradigms for how to let such a system spread
its wings in a dimension OTHER THAN performance. From a pragmatic
view, I think this will not happen until people take error recovery
and exception processing more seriously, since there is a fine line
between an error and a new thought ....
----GaryFostel----
------------------------------
Date: 20 Sep 83 18:12:15 PDT (Tuesday)
From: Bruce Hamilton <Hamilton.ES@PARC-MAXC.ARPA>
Reply-to: Hamilton.ES@PARC-MAXC.ARPA
Subject: Rutgers technical reports
This is probably of general interest. --Bruce
From: PETTY@RUTGERS.ARPA
Subject: 1983 abstract mailing
Below is a list of our newest technical reports.
The abstracts for these are available for access via FTP with user
account <anonymous> with any password. The file name is:
<library>tecrpts-online.doc
If you wish to order copies of any of these reports please send mail
via the ARPANET to LOUNGO@RUTGERS or PETTY@RUTGERS. Thank you!!
CBM-TR-128 EVOLUTION OF A PLAN GENERATION SYSTEM, N.S. Sridharan,
J.L. Bresina and C.F. Schmidt.
CBM-TR-133 KNOWLEDGE STRUCTURES FOR A MODULAR PLANNING SYSTEM,
N.S. Sridharan and J.L. Bresina.
CBM-TR-134 A MECHANISM FOR THE MANAGEMENT OF PARTIAL AND
INDEFINITE DESCRIPTIONS, N.S. Sridharan and J.L. Bresina.
DCS-TR-126 HEURISTICS FOR FINDING A MAXIMUM NUMBER OF DISJOINT
BOUNDED PATHS, D. Ronen and Y. Perl.
DCS-TR-127 THE BALANCED SORTING NETWORK, M. Dowd, Y. Perl, L.
Rudolph and M. Saks.
DCS-TR-128 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT
SATISFACTION) PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES,
B. Nudel.
DCS-TR-129 FOURIER METHODS IN COMPUTATIONAL FLUID AND FIELD
DYNAMICS, R. Vichnevetsky.
DCS-TR-130 DESIGN AND ANALYSIS OF PROTECTION SCHEMES BASED ON THE
SEND-RECEIVE TRANSPORT MECHANISM, (Thesis) R.S. Sandhu. (If you wish
to order this thesis, a pre-payment of $15.00 is required.)
DCS-TR-131 INCREMENTAL DATA FLOW ANALYSIS ALGORITHMS, M.C. Paull
and B.G. Ryder.
DCS-TR-132 HIGH ORDER NUMERICAL SOMMERFELD BOUNDARY CONDITIONS:
THEORY AND EXPERIMENTS, R. Vichnevetsky and E.C. Pariser.
LCSR-TR-43 NUMERICAL METHODS FOR BASIC SOLUTIONS OF GENERALIZED
FLOW NETWORKS, M. Grigoriadis and T. Hsu.
LCSR-TR-44 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT
RECOGNITION, R. Keller.
LCSR-TR-45 LEARNING AND PROBLEM SOLVING, T.M. Mitchell.
LRP-TR-15 CONCEPT LEARNING BY BUILDING AND APPLYING
TRANSFORMATIONS BETWEEN OBJECT DESCRIPTIONS, D. Nagel.
------------------------------
End of AIList Digest
********************