AIList Digest             Monday, 1 Feb 1988       Volume 6 : Issue 22 

Today's Topics:
Theory - Self-Conscious Code and the Chinese Room,
AI Tools - Compilation to Assembler in Poplog

----------------------------------------------------------------------

Date: Sun, 31 Jan 88 20:02:48 CST
From: ted reichardt <rei3@sphinx.uchicago.edu>
Reply-to: rei3@sphinx.uchicago.edu.UUCP (ted reichardt)
Subject: Self-conscious code and the Chinese room

From: Jorn Barger, using this account.
Please send any mail replies c/o rei3@sphinx.uchicago.edu



I'm usually turned off by any kind of
philosophical speculation,
so I've been ignoring the Chinese Room melodrama
from day one.
But I came across a precis of it the other day
and it struck me
that a programming trick I've been working out
might offer a partial solution to the paradox.

Searle's poser is this:
when you ask a question of a computer program,
even if it gives a reasonable answer
can it really be said to exhibit "intelligence,"
or does it only _simulate_ intelligent behavior?
Searle argues that the current technology
for question-answering software
assumes a database of rules
that are applied by a generalized rule-applying algorithm.
If one imagines a _human_ operator
(female, if we want to be non-sexist)
in place of that algorithm,
she could still apply the rules
and answer questions
even though they are posed
in a language she doesn't _understand_--
say, Chinese.
So, Searle says,
the ability to apply rules
falls critically short of our natural sense
of the word "intelligence."

Searle's paradigm for the program
is drawn from the work of Roger Schank
on story-understanding and scripts.
Each domain of knowledge
about which questions can be asked
must be spelled out as an explicit script,
and the rule-applying mechanism
should deduce from clues (such as the vocabulary used)
which domain a question refers to.
Once it has identified the domain,
it can extract an answer from the rules of that domain.

Again, these rules can be applied
by the rule-applying algorithm
to the symbols in the question
without reference to the _meaning_ of the symbols,
and so, for Searle, intelligence is not present.

But suppose now that one domain we can ask about
is the domain of "question-answering behavior in machines"?
So among the scripts the program may access
must be a question-answering script.
We might ask the program,
"If a question includes mathematical symbols,
what domains need to be considered?"

The question-answering script will include rules
like "If MATH-SYMBOL then try DOMAIN (arithmetic)"

But the sum of all these
rules of question-answering
will be logically identical to
the question-answering algorithm itself.
In Lisp, the script (data) and the program (code)
could even be exactly the same set of Lisp expressions.
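
To make this concrete,
here is a hypothetical Common Lisp sketch
(all names invented for illustration)
in which the same s-expressions serve
as script (data) and as program (code):

    ;; Hypothetical sketch: the question-answering rules are plain
    ;; Lisp data, so the program can both execute them and be asked
    ;; questions about them.
    (defparameter *qa-rules*
      '((math-symbol  . arithmetic)   ; "If MATH-SYMBOL then try
        (story-word   . story)        ;  DOMAIN (arithmetic)"
        (machine-word . question-answering)))

    ;; Executing the script: find the domain a clue points to.
    (defun domain-for (clue)
      (cdr (assoc clue *qa-rules*)))

    ;; Inspecting the script: answer a question about the program's
    ;; own question-answering behavior.
    (defun rules-for-domain (domain)
      (remove-if-not (lambda (rule) (eq (cdr rule) domain))
                     *qa-rules*))

    ;; (domain-for 'math-symbol)       => ARITHMETIC
    ;; (rules-for-domain 'arithmetic)  => ((MATH-SYMBOL . ARITHMETIC))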

Now, Searle will say, even so,
the program is still answering these questions
without any knowledge of the meanings of the symbols used.
A human operator could similarly
answer questions about answering questions
without knowing what the topic is.
In this case, for the human operator,
the script she examines will be pure data,
not executing code.
Her own internal algorithms,
as they execute,
will not be open to such mechanical inspection.

Yet if we ask the program to modify one of its scripts,
as we have every right to do,
and the script we ask it to modify is one that also executes,
_its_ behavior may change
while the human operator's never will.
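
In the hypothetical sketch above,
a request to modify the script
is a request to modify the program,
and behavior changes at once:

    ;; Continuing the sketch: changing the script changes the
    ;; program, because they are the same data structure.
    (defun modify-script (clue new-domain)
      (setf (cdr (assoc clue *qa-rules*)) new-domain))

    ;; (domain-for 'math-symbol)               => ARITHMETIC
    ;; (modify-script 'math-symbol 'algebra)
    ;; (domain-for 'math-symbol)               => ALGEBRA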

And in a sense we might see evidence here
that the program _does_ understand Chinese,
for if we ask a human to change her behavior
and she subsequently does
we would have little doubt that understanding took place.
To explain away such a change as blind rule-following
we would have to picture her as
changing her own brain structures
with microtomes and fiber optics.
(But the cybernetic equivalent of this ought to be
fiber optics and soldering irons...)

Self-modifying code
has long been a skeleton key in the programmer's toolbox,
and a skeleton in his closet.
It alters itself blindly, dangerously,
inattentive to context and consequences.
But if we strengthen our self-modifying code
with _self-conscious_ code,
as Lisp and Prolog easily can,
we get something very _agentlike_.
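
Continuing the hypothetical sketch,
the difference might be as simple as
inspecting the rules before installing a change:

    ;; Blind self-modification would patch *qa-rules* without
    ;; looking; "self-conscious" modification inspects first.
    ;; (A toy policy: refuse to orphan the arithmetic domain.)
    (defun conscious-modify (clue new-domain)
      (when (and (eq (domain-for clue) 'arithmetic)
                 (null (rest (rules-for-domain 'arithmetic))))
        (error "Refusing: this is the last rule for ARITHMETIC"))
      (modify-script clue new-domain))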

Admittedly, self-consciousness about question-answering behavior
is pretty much a triviality.
But extend the self-conscious domain
to include problem-solving behavior,
goal-seeking behavior,
planning behavior,
and you have the kernel of something more profound.

Let natural selection build on such a kernel
for a few million, or hundreds of millions of years,
and you might end up with something pretty intelligent.

The self-reference of Lisp and Prolog
takes place on the surface of a high-level language.
Self-referent _machine code_ would be more interesting,
but I wonder if the real quantum leap
might not arrive when we figure out how to program
self-conscious _microcode_!

------------------------------

Date: Sat, 30 Jan 88 23:18:17 GMT
From: Aaron Sloman <aarons%cvaxa.sussex.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Compilation to Assembler in Poplog

This is a response to a discussion in comp.compilers, but as it is
potentially of wider interest I'm offering it to all of you for your
bulletin boards. There does not seem to be anything comparable for
Lisp, so I suppose I just have to post it direct to comp.lang.lisp for
people interested in Lisp implementations? Or should I assume any such
people will read comp.compilers?

I hope it is of some interest, and I apologise for its length.
Although Sussex University has a commercial interest in Poplog I have
tried to avoid raising any commercial issues.
---------------------

COMPILING TO ASSEMBLY LANGUAGE IN POPLOG

There have been discussions on the network about the merits of
compiling to assembly language. Readers may be interested in the
methods used for implementing and porting Poplog, a multi-language
software development system containing incremental compilers for
Common Lisp, Prolog, ML and POP-11, a Lisp-like language with a more
readable Pascal-like syntax. Before I explain how assembly language is
used as output from the compiler during porting and system building, I
need to explain how the running system works. The mechanisms described
below were designed and implemented by John Gibson, at Sussex
University.

All the languages in Poplog compile to a common virtual machine, the
Poplog VM which is then compiled to native machine code. First an
over-simplified description:

The Poplog system allows different languages to share a common store
manager, and common data-types, so that a program in one language can
call another and share data-structures. Like most AI environments it
also allows incremental compilation: individual procedures can be
compiled and re-compiled and are immediately automatically linked in
to the rest of the system, old versions being garbage collected if no
longer pointed to. Moreover, commands to run procedures or interrogate
data-structures can be typed in interactively, using exactly the same
high level language as the programs are written in. The difference
between this and most AI systems is that ALL the languages are
compiled in the same way. E.g. Prolog is not interpreted by a POP-11
or Lisp program: they all compile (incrementally) to machine code.
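
(For readers who have not used such a system, a standard Common Lisp
session shows the same effect; Poplog behaves analogously, in its own
syntax.)

    ;; Incremental compilation at an ordinary Common Lisp prompt:
    (defun area (r) (* pi r r))     ; define
    (compile 'area)                 ; now native machine code
    (area 2.0d0)                    ; => 12.566...
    (defun area (r) (* 4 pi r r))   ; redefine: callers see the new
    (area 2.0d0)                    ; => 50.265...; the old version
                                    ; can be garbage collected once
                                    ; nothing points to it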

The languages are all implemented using a set of tools for adding new
incremental compilers. These tools include procedures for breaking up
a text stream into items, and tools for planting VM instructions when
procedures are compiled. They are used by the Poplog developers to
implement the four Poplog languages but are also available for users
to implement new languages suited to particular applications. (E.g.
one user claims he implemented a complete Scheme in Poplog in about
three weeks, in his spare time, getting a portable compiler and
development environment for free once he had built the Scheme
front-end compiler in Poplog.)
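
Poplog's actual tool interface is not shown here, but the shape of
such a front end can be sketched in Common Lisp. Every name below is
invented for illustration: ITEMISE breaks a text stream into items,
and PLANT records VM instructions as the compiler recognises
constructs.

    ;; Toy front end (hypothetical names, not Poplog's interface).
    (defun itemise (text)
      ;; break a text stream into items (here Lisp's READ does it)
      (with-input-from-string (s text)
        (loop for item = (read s nil nil)
              while item
              collect item)))

    (defvar *vm-code* '())               ; planted VM instructions
    (defun plant (instr) (push instr *vm-code*))

    ;; plant stack-VM instructions for a prefix expression
    (defun compile-expr (e)
      (if (atom e)
          (plant (list 'PUSH e))
          (progn
            (mapc #'compile-expr (rest e))
            (plant (list 'CALL (first e) (length (rest e)))))))

    ;; (compile-expr (first (itemise "(+ 1 (* 2 3))")))
    ;; (reverse *vm-code*) => ((PUSH 1) (PUSH 2) (PUSH 3)
    ;;                         (CALL * 2) (CALL + 2))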

All this makes it possible to build a range of portable incremental
compilers for different sorts of programming languages. This is how
POP-11, PROLOG, COMMON LISP and ML are implemented. They all compile
to a common internal representation, and share machine-specific
run-time code generators. Thus several different machine-independent
"front ends" for different languages can share a machine-specific
"back end" which compiles to native machine code, which runs far more
quickly than if the new language had been interpreted.

The actual story is more complicated: there are two Poplog virtual
machines, a high level and a low level one, both of which are language
independent and machine independent. The high level VM has powerful
instructions, which makes it convenient as a target language for
compilers for high level languages. It includes special facilities
to support Prolog operations, dynamic and lexical scoping of
variables, procedure definitions, procedure calls, suspending and
resuming processes, and so on. Because these are quite sophisticated
operations, the mapping from the Poplog VM to native machine code is
still fairly complex.

So there is a machine independent and language independent
intermediate compiler which compiles from the high level VM to a
low level VM, doing a considerable amount of optimisation on the way.
A machine-specific back-end then translates the low-level VM to native
machine code, except when porting or re-building the system. In those
cases the final stage is translation to assembly language. (See
diagram below.)

The bulk of the core Poplog system is written in an extended dialect
of POP-11, with provision for C-like addressing modes, for efficiency.
We call it SYSPOP. The system sources, written in SYSPOP, are also
compiled to the high-level VM, and then to the low level VM. But
instead of then being translated to machine code, the low level
instructions are automatically translated to assembly language files
for the target machine. This is much easier than producing object
files, because there is a fairly straightforward mapping from the low
level VM to assembly language, and the programs that do the
translation don't have to worry about formats for object files: we
leave that to the assembler and linker supplied by the manufacturer.
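
That final stage can be caricatured in a few lines of Common Lisp.
The VM instructions and the VAX-flavoured mnemonics below are
illustrative only, not Poplog's actual instruction set:

    ;; Illustrative only: print assembler text for low-level VM
    ;; instructions, leaving object-file formats to the system
    ;; assembler and linker.
    (defun emit-assembler (vm-code stream)
      (dolist (instr vm-code)
        (ecase (first instr)
          (PUSH (format stream "    pushl   ~(~a~)~%" (second instr)))
          (CALL (format stream "    calls   $~a, ~(~a~)~%"
                        (third instr) (second instr))))))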

In fact, the system sources need facilities not available to users, so
the two intermediate virtual machines are slightly enhanced for
SYSPOP. The following diagram summarises the situation.

    {POP-11, COMMON LISP, PROLOG, ML, SYSPOP}
                       |
                  Compile to
                       |
                       V
               [High level VM]
           (extended for SYSPOP)
                       |
           Optimise & compile to
                       |
                       V
               [Low level VM]
           (modified for SYSPOP)
                       |
          Compile (translate) to
                       |
                       V
      [Native machine instructions]
      [or assembler - for SYSPOP]

So for ordinary users compiling or re-compiling their procedures in
the system, the machine code generator is used and compilation is very
fast, with no linking required. For rebuilding the whole system we go
via assembly language for maximum flexibility and it is indeed a slow
process. But it does not need to be done very often, and not (yet) by
ordinary users. Later (1989) they will have the option to use the
system building route in order to configure the version of Poplog they
want. So we sit on both sides of the argument about speed raised in
comp.compilers.

All the compilers and translators are implemented in Poplog (mostly in
POP-11). Only the last stage is machine specific. The low level VM is
at a level that makes it possible on the VAX, for example, to generate
approximately one machine instruction per low level VM instruction. So
writing the code generator for something like a VAX or M68020 was
relatively easy. For a RISC machine like the Clipper the task is a little
more complicated.

Porting to a new computer requires the run-time "back end", i.e. the
low level VM compiler, to be changed and also the system-building
tools which output assembly language programs for the target machine.
There are also a few hand-coded assembly files which have to be
re-written for each machine. Thereafter all the high level languages
have incremental compilers for the new machine. (The
machine-independent system building tools perform rather complex
tasks, such as creating a dictionary of procedure names and system
variables that have to be accessible to users at run time. So besides
translating system source files, the tools create additional assembler
files and also check for consistency between the different system
source files.)

I believe most other interactive systems provide at most an
incremental compiler for one language, and any other language has to
be interpreted. If everything is interpreted, then porting is much
easier, but execution is much slower. The advantage of the Poplog
approach is that it is not necessary to port different incremental
compilers to each new machine.

This makes it relatively easy for the language designer to implement
complex languages, since the Poplog VM provides a varied, extendable
set of data-types and operations thereon, including facilities for
logic programming, list, record and array processing, 'number
crunching', sophisticated control structures (e.g. co-routines),
'active variables' and 'exit actions', that is instructions executed
whenever a procedure exits, whether normally or abnormally. Indefinite
precision arithmetic, ratios and complex numbers are accessible to all
the languages that need them. Both dynamic and lexical scoping of
variables are provided. A tree-structured "section" mechanism (partly
like packages) gives further support for modular design. External
modules (e.g. programs in C or Fortran) can be dynamically linked in
and unlinked. A set of facilities for accessing the operating system
is also provided. Poplog allows functions to be treated as "first
class" objects, and this is used to great advantage in POP-11 and ML.
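
(Two of these facilities have rough Common Lisp analogues, for
readers who know Lisp better than POP-11. ACQUIRE-RESOURCE and
RELEASE-RESOURCE below are hypothetical helpers.)

    ;; An "exit action": the cleanup runs whether the procedure
    ;; exits normally or abnormally.
    (defun with-resource (f)
      (let ((r (acquire-resource)))     ; hypothetical helper
        (unwind-protect (funcall f r)
          (release-resource r))))       ; the exit action

    ;; Functions as first-class objects:
    (mapcar (lambda (n) (* n n)) '(1 2 3))   ; => (1 4 9)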

The VM facilities are relatively easy to port to a range of computers
and operating systems because the core system is mostly implemented in
SYSPOP, and is largely machine independent. Only the machine-dependent
portions mentioned above (e.g. run-time code generator, and translator
from low level VM to assembler), plus a small number of assembler
files need be changed for a new machine (unless the operating system
is also new). Since the translators are all written in a high level AI
language, altering them is relatively easy.

Porting requires compiling all the SYSPOP system sources, to generate
the corresponding new assembler files, then moving them and the
hand-made assembler files to the new machine, where they are assembled
then linked. The same process is used to rebuild the system on an
existing machine when new features are added deep in the system. Much
of the system is in source libraries compiled as needed by users, and
modifying those components does not require re-building.

Using this mechanism an experienced programmer with no prior knowledge
of Poplog or the target processor was able to port Poplog to a RISC
machine in about 7 months. But for the usual crop of bugs in the
operating system, assembler, and other software of the new machine, the
actual porting time would have been shorter. In general, extra time is
required for user testing, producing system specific documentation,
tidying up loose ends etc.

Thus 7 to 12 months' work ports incremental compilers for four
sophisticated languages, a screen editor, and a host of utilities. Any
other languages implemented by users using the compiler-building tools
should also run immediately. So in principle this mechanism allows a
fixed amount of work to port an indefinitely large number of
incremental compilers. Additional work will be required if the
operating system is different from Unix or VMS, or if a machine
specific window manager has to be provided. This should not be
necessary for workstations supporting X-windows.

The use of assembler output considerably simplifies the porting task,
and also aids testing and debugging, since the output is far more
intelligible to the programmer than if object files were generated.

Comments welcome.

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
JANET aarons@cvaxa.sussex.ac.uk
BITNET: aarons%uk.ac.sussex.cvaxa@uk.ac

As a last resort
UUCP: ...mcvax!ukc!cvaxa!aarons
or aarons@cvaxa.uucp

Phone: University +(44)-(0)273-678294 (Direct line. Diverts to secretary)

------------------------------

End of AIList Digest
********************
