AIList Digest Friday, 16 Aug 1985 Volume 3 : Issue 111
Today's Topics:
Symbolic Computing - Definition
----------------------------------------------------------------------
Date: Sun, 11 Aug 1985 21:55 PDT
From: DAVIES@SUMEX-AIM.ARPA
Subject: What is symbolic computing?
PARSYM is the new netwide mailing list for parallel symbolic
computing. One of the first messages to PARSYM asked just what is
symbolic computing anyway. I thought the readers of AIList might be
interested to see what PARSYM came up with.
-- Byron Davies (PARSYM-Request@SUMEX-AIM.ARPA)
------------------------------
Date: Mon, 8 Jul 85 09:15:00 pdt
From: coraki!pratt@Navajo (Vaughan Pratt)
Subject: What is symbolic computing?
I suppose that with a quarter of a century of Lisp experience we should
understand by now what symbolic computing is. Indeed everyone appears
to be quite happy to talk about symbolic computing as though the
concept had the unanimous blessing of the last three centuries of
mathematics.
Yet I have been unable to find in the literature a definition of the
concept that matches the facts.
Discussion based on the premise that the topic is well-defined when it
isn't is apt to run at cross-purposes.
I therefore challenge this list to agree on a definition of symbolic
computing.
Even if this challenge cannot be met (as I expect will be the case), at
least people will have been made aware of the variety of definitions
and will know to make allowance for this variety.
-v
------------------------------
Date: Mon, 15 Jul 85 15:34:33 EST
From: munnari!dmtadel.oz!crw@seismo.CSS.GOV (charles watson)
Subject: Applicative Symbolic Programming
Firstly, I consider symbolic computing to be abstract and non-numeric, dealing
with structured objects that match human concepts and images.
Secondly, I'd like to argue the case for side-effect free programming.
I feel that the side-effects of much real-world software are the
result of poor programming practice. John McCarthy's (1959) LISP is
sufficient. All the features added in the name of efficiency have
encumbered their compilers and interpreters. Except for debugging,
side effects have been tolerable in the past. For highly concurrent
systems of the future, side-effects would be catastrophic. I accept
that the cost of re-writing software is a strong reason for downward
compatibility with existing non-applicative code, but redesign in a
sound paradigm would take less effort than conversion. The machine
architecture I'm working on will allow setq to avoid re-evaluating
S-exprs but not for explicit register assignment; it will allow
replac* to append non-circular lists, but not to point back to any
object in the ancestral tree. There are those who still believe that
the shortest distance between two pieces of software is a GOTO
statement. In my experience real-world software is easier to develop
and maintain in the structured paradigm. The last project I was
involved in used an applicative hierarchical specification of IC
designs to drive a silicon compiler. The problem would have otherwise
been intractable. Also, processors can be custom designed for
symbolic processing and replicated for the cost of about $10 each
(ignoring the development cost). A system with thousands of these
processors could cost the same as a Symbolics 3600. Arguments such
as "this side-effect riddled feature gives a 20% performance
improvement" would be irrelevant to the cost-effectiveness of such a
system.
------------------------------
Date: Tue, 16 Jul 85 07:46:47 pdt
From: coraki!pratt@Navajo (Vaughan Pratt)
Subject: Definitional Issues
From: munnari!dmtadel.oz!crw@seismo.CSS.GOV (charles watson)
Firstly, I consider symbolic computing to be abstract and non-numeric,
dealing with structured objects that match human concepts and images.
Excellent! A first attempt (on PARSYM) to define symbolic computing.
Let's try it out.
1. What does "abstract" mean? One definition I like a lot is
"authorized." Computation is abstract when it depends only on what
is authorized by the documentation. Is that what you meant? If so,
why is that in the definition of "symbolic" computing, as opposed to
other kinds of computing? If not, then what do you mean?
2. Lisp has numbers. Does this rule out Lisp as a symbolic language?
3. In PROLOG without equality an inefficient but convenient way to
represent natural numbers is symbolically: 3 = (S (S (S 0))) and so
on. How do you reconcile "symbolic" and "nonnumeric" for such a case?
4. What is an example of a structured object *not* matching human
concepts and images? I find it hard to conceive of or imagine such a
thing. This requirement is surely more appropriate for a definition of
AI computing than symbolic computing.
In short, I see neither the rationale for nor the application
of any part of this definition.
Secondly, I'd like to argue the case for side-effect free programming.
... For highly concurrent systems
of the future, side-effects would be catastrophic.
Speaking as one highly concurrent system trying to side effect another,
I hope I haven't thereby caused a catastrophe. And on behalf of human
society, another highly concurrent system, it would certainly be
interesting, and surely catastrophic in some sense, if we ceased to
have side effects on each other.
Whether you have, or for that matter can identify, side effects is very
dependent on your particular computational paradigm, e.g. the
distinction between functional and imperative programming. As soon as
one starts to explore other paradigms appropriate for concurrency,
e.g. dataflow (of particular interest to me), the concept of
side-effect free programming becomes either irrelevant or meaningless.
About the only sense one could make of it in dataflow would be if one
introduced ESP for processes, i.e. communication by unseen channels.
Sorry not to be more constructive here. For more constructive remarks
in the above spirit see my POPL-83 paper "Five Paradigm Shifts in
Programming Language Design and their Application to Viron, a Dataflow
Programming Language." After taking a couple of years off to help get
workstations out to people, I am just now returning to
academia to continue the design and implementation of Viron, or
something resembling it (in addition to continuing my work on fonts, a
side-effect (hak coff) of my working at Sun). If people would be
interested I'd be happy to make occasional short contributions to this
column expressing the general philosophy behind Viron, which is very
much a parallel and abstract programming language. Since Viron
processes don't have a notion of internal state (how do you define "the
state" of a process consisting of an ocean of ships each loaded with
microprocessors with cycle times measured in nanoseconds, where
"simultaneous" is both physically and practically undefined?) one has
to define "side effect" in a way that does not depend on the notion of
state - in this sense Viron is free of any state-based notion of side
effect. Whether Viron could be called "symbolic" depends on whether we
ever find a workable definition of "symbolic," but it should pass
almost any plausible definition that does not rule out numeric
computation and that does not specify implementation or representation
details.
-v
------------------------------
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Date: 16 Jul 1985 1736-PDT (Tuesday)
Subject: Re: PARSYM Digest V1 #1: RE: What is symbolic computing?
> From: coraki!pratt@Navajo (Vaughan Pratt)
> Indeed everyone appears
> to be quite happy to talk about symbolic computing as though the
> concept had the unanimous blessing of the last three centuries of
> mathematics.
I work with people who crunch numbers, so permit me to play devil's advocate
(I do support research on symbolic processing): the only people
who have given their blessing are the LISP community. Let's not
close our eyes to that fact.
> I therefore challenge this list to agree on a definition of symbolic
> computing.
I realize this is rehashing hallway arguments we have all had:
please define that which is "not" symbolic computing and why we should
make a distinction between these two types of parallel computing. After
all, isn't crunching a number the same as manipulating a symbol, and aren't
we possibly creating artificial distinctions of computing types (symbolic
and numeric)?
I like the idea of academic discussions of this nature. Work can get too
serious at times. I also would like to point out that I just returned from SU
and I have seen copies of Dr. Pratt's TRs on new thoughts for concurrency
models.
--eugene miya
NASA
------------------------------
Date: Fri 19 Jul 85 21:25:35-CDT
From: Mayank Prakash <AI.Mayank@MCC.ARPA>
Subject: Re: What is symbolic computing?
Here's another attempt at it. First of all, I think that the terms
symbolic computing and numerical computing refer to different modes of
computing rather than mutually exclusive taxonomical categories.
Then, numerical computing is the mode when the major data elements are
numerical and one is interested in changing the numerical values of
these data elements. That is, both the input to and the output from
the program are mainly numerical. In symbolic computing, one is
interested mainly in manipulating structures. That is, both the input
to and the output from the program are structures. Note that in this
definition one mode of computing does not exclude the other. In fact,
most programs do some of each. It is the predominant activity of the
program that determines its mode.
One could look at it from a lower level as well. The memory cells in
the computer's data memory (as opposed to the instruction memory)
contain binary values. If they are mostly interpreted as representing
numbers, and the majority of operations that are carried out on them
are numerical, e.g., add, subtract, multiply, or shift their values,
then the program is a numerical mode program. If they are mostly
pointers to other cells in memory, and the operations on them mainly
follow the pointers along or modify them to point to some other
cells, then the program is a symbolic mode program.
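
To make the low-level distinction concrete, here is a small Common Lisp
sketch (illustrative code and names only, not drawn from any system under
discussion). The first function spends its time doing arithmetic on the
values held in cells; the second spends its time following pointers and
redirecting them to other cells.

  (defun sum-of-squares (vec)
    ;; Numerical mode: the memory cells hold numbers, and the
    ;; operations performed on them are arithmetic.
    (let ((sum 0))
      (dotimes (i (length vec) sum)
        (incf sum (* (aref vec i) (aref vec i))))))

  (defun destructive-reverse (list)
    ;; Symbolic mode: the cells hold pointers, and the operations
    ;; follow them and redirect them to other cells.
    (let ((prev nil))
      (loop while list
            do (let ((next (cdr list)))
                 (setf (cdr list) prev)   ; redirect the pointer
                 (setq prev list
                       list next)))
      prev))
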
A characteristic that generally distinguishes the languages for the
two kinds of programs is memory allocation. The languages developed
for numerical processing have mostly static memory allocation schemes.
By that I mean that the data memory is allocated to a procedure (or a
function, or block, whatever you want to call it) upon entry, and
released upon exit. Generally, though not always, the procedure does
not (and in most cases, can not) change its data memory. In contrast,
symbolic processing languages have dynamic memory allocation with
attendant garbage collection. This is necessary for symbolic
computing since in this case one is dealing with structures, which are
essentially pointers pointing at each other in various ways, and since
the main activity here is manipulating these structures, i.e.,
releasing and allocating pointers.
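
As a small illustration (a sketch in ordinary Common Lisp, with a made-up
function name), the routine below never releases storage explicitly: every
cell of its result comes fresh from CONS, and whatever becomes unreachable
is left to the garbage collector.

  (defun subst-symbol (new old tree)
    ;; Return a copy of TREE with OLD replaced by NEW. Every cell of
    ;; the result is freshly allocated by CONS; nothing is freed
    ;; explicitly, and any structure that becomes unreachable is
    ;; reclaimed later by the garbage collector.
    (cond ((eq tree old) new)
          ((atom tree) tree)
          (t (cons (subst-symbol new old (car tree))
                   (subst-symbol new old (cdr tree))))))
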
Admittedly these are somewhat vague definitions, but I hope that this
posting will at least spur a discussion on the subject.
- mayank.
------------------------------
Date: Thu, 25 Jul 85 22:43:43 edt
From: Tom Blenko <blenko@rochester.arpa>
Subject: What is symbolic computation?
Vaughan's question is an interesting one. My proposal is that
numerical computation is performed over a flat domain, while symbolic
computing permits computation over terms which are partially bound or
instantiated. Binding is used in a broad sense here, and subsumes
type declaration and generic instantiation, as well as the
conventional notion of variable binding.
According to this scheme, FORTRAN is almost purely numerical because it
allows variables to be bound only to constants (although a form of
compile-time co-referential binding is possible using EQUIVALENCE).
FORTRAN is not purely numerical because variables are typed. Type
declaration is a form of binding under the notion of binding described
above, although an especially weak one because all type declarations
must be made at compile time. The class of nearly-numerical languages
(those with flat domains plus some support for typed variables) can be
expanded somewhat by including languages with slightly more powerful
type mechanisms, i.e., those which support discriminated unions or
procedure name overloading. For the purposes of discussion, I'd be
willing to refer to all of these as languages which support numerical
computation exclusively. This seems like a reasonable approximation
because their competence as symbolic languages is both weak and well
known.
Languages which permit various forms of abstract data type or generic
procedure definition would be classified as partially-symbolic because
they each support some form of run-time partial binding of variables.
Representative examples in this class are CLU, ADA, and SMALLTALK-80.
This is the class currently of greatest interest to the imperative
language people, and certainly there needs to be more work on what
abstraction and type mechanisms are useful and can be implemented
efficiently.
My two choices to represent (nearly) fully-symbolic languages are
PROLOG and LISP. In PROLOG, the binding mechanism, unification, can
also be viewed as a type restriction mechanism, so that a variable
becomes bound to a grounded term by successive application of type
restrictions to (variable) subterms of intermediate bindings as the
computation proceeds (reference for related work is Hassan Ait-Kaci's
thesis, A New Model of Computation based on a Calculus of Type
Subsumption). This identification of variable typing with more
traditional notions of variable binding is precisely what I propose
permits one to view symbolic computation in a coherent way.
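
To make the notion of partial binding concrete, here is a toy unifier in
Lisp (an illustration only, with invented names and none of PROLOG's
refinements such as the occurs check). Variables are symbols beginning
with ?, and each successful unification extends a substitution, so a
variable can pass from unbound, to bound to a partially instantiated
term, to fully ground.

  (defun variable-p (x)
    (and (symbolp x)
         (char= (char (symbol-name x) 0) #\?)))

  (defun unify (x y &optional (subst '()))
    ;; Return a substitution (an alist) unifying X and Y, or :FAIL.
    (cond ((eq subst :fail) :fail)
          ((and (variable-p x) (assoc x subst))
           (unify (cdr (assoc x subst)) y subst))
          ((and (variable-p y) (assoc y subst))
           (unify x (cdr (assoc y subst)) subst))
          ((variable-p x) (acons x y subst))
          ((variable-p y) (acons y x subst))
          ((and (consp x) (consp y))
           (unify (cdr x) (cdr y) (unify (car x) (car y) subst)))
          ((eql x y) subst)
          (t :fail)))

  ;; (unify '(f ?x (g ?y)) '(f a (g b)))  =>  ((?Y . B) (?X . A))
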
LISP is well-known, and it would be quite a task to persuade some that
it is anything except the ultimate symbolic language. Clearly its
binding mechanism allows the kind of partial binding which occurs
naturally in PROLOG (although, of course, this could be said to be
only one consequence of its excessive permissiveness). Let me mention
two ways in which it differs from PROLOG, however, and might be viewed
as a more powerful symbolic language.
First, it allows variables to be bound as pointers to other variables.
This is undeniably a powerful mechanism, although it makes for more
complicated language semantics. It is also not a particularly good
substitute for what is (arguably) the corresponding mechanism in
PROLOG, specifically the co-referential binding resulting from the
unification of variables. I say the two mechanisms correspond because
the only way to do equivalencing in LISP is for variables A and B to
both point to C, and for C to store the equivalenced value of A and B,
which may be accessed by a dereference followed by an evaluation --
which leads to the second point.
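
(As an aside, here is a minimal Common Lisp rendering of that
equivalencing trick, purely for illustration; CAR plays the role of the
dereference, and the evaluation step is elided.)

  (let* ((c (list 'no-value-yet))  ; the shared cell C
         (a c)                     ; A and B are "equivalenced" by
         (b c))                    ; both pointing at C
    (setf (car c) 42)              ; side-effect the shared cell
    (list (car a) (car b)))        ; => (42 42): both see the new value
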
Many LISPs implement what might be termed a first-class recursive call
to the interpreter. INTERLISP's evala, for example, allows any
procedure to call the interpreter recursively on a LISP data structure
with the binding environment of the computation completely specified
in an argument to evala. Intuitively, this is a powerful mechanism
for symbolic computation, and is moreover necessary for the rather
awkward implementation of LISP equivalencing described above, since
dereferencing is indistinguishable from evaluation (although Brian
Smith has succeeded in separating the two in his definition of
3-LISP). The (second-class) recursive call implemented in most
PROLOGs is more restricted because the environment of the calling
procedure is unconditionally inherited by the called procedure
(although proposals for more general approaches have been made). Many
partially- and non-symbolic languages do not provide any recursive
call.
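
For concreteness, here is a toy version of the idea (this is not
INTERLISP's EVALA, just a sketch with the same shape; the name and the
tiny repertoire it handles are inventions of the sketch). The caller
hands the evaluator a LISP data structure together with, as an explicit
argument, the association list that is to serve as the binding
environment.

  (defun toy-evala (form alist)
    ;; Evaluate FORM using ALIST as the complete binding environment.
    (cond ((numberp form) form)
          ((symbolp form)
           (let ((binding (assoc form alist)))
             (if binding
                 (cdr binding)
                 (error "Unbound symbol: ~S" form))))
          ((eq (car form) 'quote) (cadr form))
          (t (apply (car form)
                    (mapcar (lambda (arg) (toy-evala arg alist))
                            (cdr form))))))

  ;; The environment is whatever the caller says it is:
  ;; (toy-evala '(cons x (cdr y)) '((x . 1) (y 2 3)))  =>  (1 3)
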
These are admittedly incomplete thoughts, and I'd be interested in any
responses. I have not specifically addressed binding of parameters
across procedure calls -- call-by-value and call-by-reference can be
understood rather easily once the notion of variable creation is
included, I think. I haven't thought about the symbolic power of
exotica such as thunks, and I suspect that macro expansion doesn't add
much in the way of symbolic power.
I'd be particularly interested in a coherent exposition of the
relationships between what I've been proposing as the primary
characteristic of symbolic computation (partial binding) and
mechanisms such as pointer creation and dereferencing, and lazy or
eager evaluation. For example, one might interpret PROLOG unification
of variables as a lazy assignment of the source variable to the target
variable, with evaluation of the source variable binding delayed
indefinitely (this is correct because PROLOG variables are
write-once). The obvious way to do dereferencing in PROLOG is through
a "procedure call" or recursive call to the interpreter, except that
the PROLOG interpreter treats bound and unbound variables differently,
so that unbound variables "evaluate" to themselves both during
unification and when used as parameters to recursive function calls
(under what interpretation is this eager evaluation?).
Another question is why one is normally tempted to categorize a
language like C as non-symbolic, since it allows liberal pointer
creation/dereferencing and thereby allows the same binding of
variables to variables as LISP to be performed in a fairly direct
fashion. (Note, however, that no recursive call to the interpreter is
permitted).
I'd be interested in any thoughts or comments the list might have.
Tom
BLENKO@ROCHESTER
----------------------------------------------------------------------
Date: Fri, 26 Jul 85 10:04 EDT
From: Seth Steinberg <sas@BBN-VAX.ARPA>
Subject: Symbolic vs Numerical Computing
We had better be careful here. Programs, not languages, are either
symbolic or numerical. If I write a numerical matrix inverter in LISP
it is still a numerical program while if I write a MACSYMA-like
symbolic algebra matrix inverter in Fortran it is a symbolic program.
(Never mind why I would want to do the latter).
Numerical programs make the vast majority of their decisions based on
terminal values, which are stored in relatively
homogeneous data structures. Symbolic programs make significantly more
decisions based on examination of the structure of the data which may
vary more freely. The exact implementation of runtime typed data may
be part of the programming language (LISP or SMALLTALK types), assisted
by the programming language (PASCAL records used with case or C struct
unions), or it can be implemented in spite of the programming language
(FORTRAN, possibly with a package like SLIP or ASSEMBLY language using
the high bits for type and the rest for pointer). [Obviously, some
languages make symbolic programming easier than others].
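
As a tiny example of a program at the symbolic end of this spectrum (a
sketch only, not production code), almost every decision the
differentiator below makes is a test on the shape of the expression
rather than on the magnitude of any number.

  (defun deriv (expr var)
    ;; Differentiate EXPR (an s-expression such as (+ (* x x) 3))
    ;; with respect to VAR, dispatching on the structure of EXPR.
    (cond ((numberp expr) 0)
          ((eq expr var) 1)
          ((symbolp expr) 0)
          ((eq (car expr) '+)
           (list '+ (deriv (second expr) var)
                    (deriv (third expr) var)))
          ((eq (car expr) '*)
           (list '+ (list '* (second expr) (deriv (third expr) var))
                    (list '* (deriv (second expr) var) (third expr))))
          (t (error "Don't know how to differentiate ~S" expr))))

  ;; (deriv '(+ (* x x) 3) 'x)  =>  (+ (+ (* X 1) (* 1 X)) 0)
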
This definition is far from perfect so I'll propose a test. Try
ranking a list of programs on the numerical<->symbolic scale. For
example (probably not in the right order):
Fast Fourier Transform
Sparse Matrix Multiply
Graph Coloring Algorithm
Simple Expression Compiler
ELIZA
LISP Interpreter
Simple Theorem Prover
Algebraic Integrator
Noun Phrase Parser
Try out your own ordering. Where would you put in things like:
FTP support for the Symbolics 3600
UNIX (kernel or cshell)
MacPaint vs. MacDraw
A Turing Machine
I'd be interested to see how these would be ranked or whether it is
meaningless to perform such rankings.
Seth Steinberg
SAS @ BBN-VAX
------------------------------
Date: Sat, 27 Jul 85 14:14:44 pdt
From: coraki!pratt@Navajo (Vaughan Pratt)
Subject: What is symbolic computing?
From: Mayank Prakash <AI.Mayank@MCC.ARPA>
Then, numerical computing is the mode when the major data elements are
numerical and one is interested in changing the numerical values of
these data elements. ...
I do a lot of computing with:
* complex numbers
* polynomials over various fields, including the reals
* vector spaces of various dimensions
* linear transformations
* survey maps, involving bearings, lot boundaries (expressed as lists
of line segments), areas, etc.
* outline fonts based on conic splines
Now if these aren't examples of structured data I'm a monkey's uncle.
Yet most of the time spent manipulating these structures goes into
floating point operations. On the one hand this is certainly
consistent with Mayank's observation that "most programs do some of
each." On the other hand I don't see how to apply the test "predominant
activity." Is this determined by the proportion of time spent on
floating point operations? If so then does plugging in a floating point
accelerator convert my program from a numerical to a symbolic one?
Or is it determined by the number of calls to floating point routines
in my programs? Most of my calls are to things like dot and matrix
products.
If [the memory cells] are mostly interpreted as representing
numbers, and the majority of operations that are carried out on them
are numerical, e.g., add, subtract, multiply, or shift their values,
then the program is a numerical mode program. ...
At this level the definition is vulnerable to the compiler. Given a
vector of reals a powerful optimizing compiler may see fit to implement
it either as a linked list (if the optimizer detects operations on the
vector that amount to expanding or contracting the vector) or an array
of contiguous locations (if there is much random access to the array).
How does one classify a program that leaves such decisions to the
compiler?
... symbolic processing languages have dynamic memory allocation
with attendant garbage collection.
When memory is allocated and released in LIFO order it is very efficient
to allocate it off a stack. When the release order is unpredictable
one resorts to a heap. How does this have anything to do with whether
numeric data types are involved?
-v
------------------------------
Date: Wed, 31 Jul 85 11:59 EDT
From: Guy Steele <gls@THINK-AQUINAS.ARPA>
Subject: Symbolic and numerical computing
Symbolic programs:
* laugh at themselves.
* philosophize.
* till the soil.
* are featherless bipeds.
Here's a more serious attempt: ALL computing applications are symbolic.
All applications rely on processing data organized according to some
structural discipline. This discipline may be trivial or exceedingly
complex. There are typically certain invariants or axioms of the
structure, and the operations on the structure, on which the processing
relies: for example, that a tree is binary and balanced, and that
insertion and deletion maintain these invariants.
A particularly large and important class of applications relies heavily
on the axioms for rings and fields, particularly the ring of integers
and the fields of real and complex numbers, and for such applications
much of the computation is done with data structures and operations
that are organized so as to obey these axioms (more or less, given the
usual finiteness of the representations). Because these applications
are so important, and the theory is well-understood and agreed-upon,
special hardware accelerators for certain very complicated operations
(such as multiplication) are the norm rather than the exception; but
the presence or absence of such hardware has, to my mind, little
bearing on the numericalness of the application.
So I propose that an application be considered numerical to the extent
that it relies on data structures obeying the axioms for rings or
fields, however these data structures may be represented as bits. I
would regard a LISP program operating on lists of NIL's as numerical if
it were so organized as to treat these lists primarily as unary
encodings of numbers, using routines to concatenate the lists (addition)
and repeatedly self-concatenate (double, multiply), and so on. (Indeed,
the SCHEME chips that I and others designed to directly interpret LISP
code had no on-chip ALU to speak of, and the chips were tested on
numerical applications using numbers encoded in exactly this way.)
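
For concreteness, here is a sketch of such an encoding in Lisp
(illustrative definitions with made-up names, not the code actually run
on the SCHEME chips): a number n is a list of n NILs, addition is
concatenation, and doubling and multiplication are repeated
concatenation.

  (defun unary (n) (make-list n))                 ; n  =>  n NILs
  (defun unary-add (a b) (append a b))            ; addition
  (defun unary-double (a) (append a a))           ; doubling
  (defun unary-mul (a b)                          ; multiplication
    ;; One copy of B concatenated for every element of A.
    (if (null a)
        '()
        (append b (unary-mul (cdr a) b))))
  (defun unary-value (u) (length u))              ; decode for checking

  ;; (unary-value (unary-mul (unary 3) (unary 4)))  =>  12
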
Is a program that relies on a group structure numerical? What if there
is hardware to compute a*b quickly, where * is the group operator?
Suppose the group were SU(16) or GF(16) instead of Z[2^16]? By the
proposed criterion, any such program might be somewhat numerical,
but less so than one using a ring or field.
--Guy Steele
------------------------------
Date: Fri, 2 Aug 85 22:18:42 pdt
From: coraki!pratt@Navajo (Vaughan Pratt)
Subject: Re: PARSYM Digest V1 #8
From: Guy Steele <gls@THINK-AQUINAS.ARPA>
ALL computing applications are symbolic.
My position exactly. I particularly appreciated GLS's algebraic
examples, which I thought were right on. Subject closed (as far
as I'm concerned).
-v
------------------------------
Date: Tuesday, 6 August 1985, 5:15 pm PDT
From: Byron Davies <PARSYM-Request@SUMEX-AIM.ARPA>
Subject: What is symbolic computing?
The cat is out of the bag. Symbolic computing is indeed all
computing. PARSYM was designated the netwide mailing list for
*symbolic* computing in order to broaden the domain of discourse
rather than to restrict it to any particular branch of computing. I
hope it won't be long before someone asks the analogous question
about parallel computing -- and gets the *same* answer.
-- Byron
------------------------------
End of AIList Digest
********************