AIList Digest           Wednesday, 13 May 1987    Volume 5 : Issue 120 

Today's Topics:
Reports - NMSU Computer and Cognitive Science Abstracts (1 of 2)

----------------------------------------------------------------------

Date: Sun, 10 May 87 16:07:45 MDT
From: yorick%nmsu.csnet@RELAY.CS.NET
Subject: Computer and Cognitive Science Abstracts (1 of 2)


ABSTRACTS OF
MEMORANDA IN COMPUTER AND COGNITIVE SCIENCE

Computing Research Laboratory
New Mexico State University
Box 30001
Las Cruces, NM 88003.


Kamat, S.J. (1985), Value Function Approach to Multiple Sensor
Integration, MCCS-85-16.

A value function approach is being tried for integrating multiple sensors
in a robot environment with known objects. The state of the environment is
characterized by some key parameters which affect the performance of the
sensors. Initially, only a handful of discrete environmental states will be
used. The value of a sensor or a group of sensors is defined as a function
of the number of possible object contenders under consideration and the
number of contenders that can be rejected after using the sensor information.
Each possible environmental state will have its effect on the function, and
the function could be redefined to indicate changes in the sampling frequency
and/or resolution for the sensors. A theorem prover will be applied to the
sensor information available to reject any contenders. The rules used by the
theorem prover may be different for each sensor, and the integration is
provided by the common decision domain. The values for the different sensor
groups will be stored in a database. The order of use of the sensor groups
will be according to the values, and can be stored as the best search path.
The information in the database can be adaptively updated to provide a
training methodology for this approach.
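As an illustration of the value function described above, one minimal form is the fraction of object contenders a sensor group can reject; ordering groups by this value yields the stored search path. The functional form, the group names, and the counts below are assumptions for illustration, not taken from the memorandum:

```python
def sensor_value(num_contenders, num_rejected):
    """Value of a sensor group: fraction of object contenders it can
    reject, given the current environmental state."""
    if num_contenders == 0:
        return 0.0
    return num_rejected / num_contenders

# Hypothetical sensor groups: (contenders considered, contenders rejected).
groups = {"camera": (10, 7), "tactile": (10, 4), "camera+tactile": (10, 9)}
values = {g: sensor_value(*counts) for g, counts in groups.items()}

# Best search path: try the highest-valued group first.
search_path = sorted(values, key=values.get, reverse=True)
print(search_path)  # -> ['camera+tactile', 'camera', 'tactile']
```

Adaptive updating would then amount to revising the stored counts as task experience accumulates, which reorders the search path.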


Cohen, M. (1985), Design of a New Medium for Volume Holographic
Information Processing, MCCS-85-17.

An optical analog of the neural networks involved in sensory
processing consists of a dispersive medium with gain in a narrow
band of wavenumbers, cubic saturation, and a memory nonlinearity
that may imprint multiplexed volume holographic gratings. Coupled
mode equations are derived for the time evolution of a wave
scattered off these gratings; eigenmodes of the coupling
matrix $\kappa$ saturate preferentially, implementing stable
reconstruction of a stored memory from partial input and
associative reconstruction of a set of stored memories. Multiple
scattering in the volume reconstructs cycles of associations that
compete for saturation. Input of a new pattern switches all
the energy into the cycle containing a representative of that
pattern; the system thus acts as an abstract categorizer with
multiple basins of stability. The advantages that an imprintable
medium with gain biased near the critical point has over either
the holographic or the adaptive matrix associative paradigms
are (1) images may be input as non-coherent distributions which
nucleate long range critical modes within the medium, and (2) the
interaction matrix $\kappa$ of critical modes is full, thus implementing
the sort of `full connectivity' needed for associative reconstruction
in a physical medium that is only locally connected, such as a
nonlinear crystal.
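The ingredients listed (linear gain, coupling through the imprinted gratings, cubic saturation) are consistent with a generic coupled-mode template of the following form. This is an illustrative sketch of such dynamics, not the memorandum's actual equations:

```latex
\frac{da_m}{dt} = g\,a_m + \sum_n \kappa_{mn}\,a_n
                  - \gamma \Big( \sum_n |a_n|^2 \Big) a_m
```

Under dynamics of this kind the eigenmode of $\kappa$ with the largest eigenvalue grows fastest and, through the cubic term, suppresses its competitors, which is the preferential saturation the abstract describes.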


Uhr, L. (1985), Massively Parallel Multi-Computer
Hardware/Software Structures for Learning, MCCS-85-19.

Suggestions are made concerning the building and use of appropriately
structured hardware/software multi-computers for exploring ways that
intelligent systems can evolve, learn and grow. Several issues are addressed
such as: what computers are, the great variety of topologies that can be used
to join large numbers of computers together into massively parallel
multi-computer networks, and the great sizes that the micro-electronic VLSI
(``very large scale integration'') technologies of today and tomorrow make
feasible. Finally, several multi-computer structures that appear
especially appropriate as the substrate for systems that evolve, learn and
grow are described, and a sketch of a system of this sort is begun.
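One standard topology for joining large numbers of computers is the Boolean n-cube, in which each of the 2^n machines links to the n machines whose addresses differ in one bit. The hypercube is my example of one such topology, not one singled out by the memorandum:

```python
def hypercube_neighbors(node, dim):
    """Neighbors of `node` in a dim-dimensional hypercube network:
    flip each of the dim address bits in turn."""
    return [node ^ (1 << b) for b in range(dim)]

# Node 0 of a 3-cube (8 machines) links to nodes 1, 2, and 4.
print(hypercube_neighbors(0, 3))  # -> [1, 2, 4]
```

Each node has only n = log2(N) links, which is what makes such networks feasible to fabricate at the sizes VLSI permits.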



Partridge, D. (1985), Input-Expectation Discrepancy Reduction:
A Ubiquitous Mechanism, MCCS-85-24.

The various manifestations of input-expectation discrepancy that occur in a
broad spectrum of research on intelligent behavior are examined. The point
is made that each of the different research activities highlights different
aspects of an input-expectation reduction mechanism and neglects others.

A comprehensive view of this mechanism has been constructed and applied in
the design of a cognitive industrial robot. The mechanism is explained as
both a key for machine learning strategies, and a guide for the selection of
appropriate memory structures to support intelligent behavior.


Ortony, A., Clore, G. & Foss, M. A. (1985), Conditions of Mind,
MCCS-85-27.

A set of approximately 500 words taken from the literature on emotion was
examined. The overall goal was to develop a comprehensive taxonomy of the
affective lexicon, with special attention being devoted to the isolation of
terms that refer to emotions. Within the taxonomy we propose, the best
examples of emotion terms appear to be those that (a) refer to internal,
mental conditions as opposed to physical or external ones, (b) are clear
cases of states, and (c) have affect as opposed to behavior or
cognition as their predominant referential focus. Relaxing one or another of
these constraints yields poorer examples or nonexamples of emotions; however,
this gradedness is not taken as evidence that emotions necessarily defy
classical definition.
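The graded taxonomy can be read as scoring each term against the three conditions (a)-(c): the more conditions a term satisfies, the better an example of an emotion word it is. The scoring scheme and the feature assignments below are illustrative assumptions, not data from the study:

```python
def emotion_score(internal_mental, is_state, affect_focus):
    """Count how many of the three conditions (a)-(c) a term satisfies;
    terms satisfying all three are the best examples of emotion words."""
    return sum([internal_mental, is_state, affect_focus])

# Hypothetical feature assignments, for illustration only:
terms = {
    "angry":      (True,  True,  True),   # internal, a state, affect-focused
    "abandoned":  (False, True,  False),  # describes an external condition
    "thoughtful": (True,  True,  False),  # focus is cognition, not affect
}
ranked = sorted(terms, key=lambda t: emotion_score(*terms[t]), reverse=True)
print(ranked[0])  # -> 'angry'
```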

Wilks, Y. (1985), Machine Translation and Artificial Intelligence:
Issues and their Histories, MCCS-85-29.

The paper reviews the historical relations, and future prospects for
relationships, between artificial intelligence and machine translation. The
argument of the paper is that machine translation is much more tightly bound
into the history of artificial intelligence than many realize (the MT origin
of Prolog is only the most striking example of that), and that it remains,
not a peripheral, but a crucial task on the AI agenda.


Coombs, M.J. (1986), Artificial Intelligence Foundations
for a Cognitive Technology: Towards The Co-operative Control of Machines,
MCCS-85-45.

The value of knowledge-based expert systems for aiding the control of
physical and mechanical processes is now firmly established. However,
with experience, serious weaknesses have become evident which require
a new approach to system architecture to resolve.

The approach proposed in this paper is based on the direct manipulation of
models in the control domain. This contrasts with the formal syntactic
reasoning methods more conventionally employed. Following from work on the
simulation of qualitative human reasoning, this method has potential for
implementing truly co-operative human/computer interaction.

Coombs, M.J., Hartley, R. & Stell, J.F. (1986), Debugging
User Conceptions of Interpretation Processes, MCCS-85-46.

The use of high level declarative languages has been advocated since they allow
problems to be expressed in terms of their domain facts, leaving details of
execution to the language interpreter. While this is a significant advantage,
it is frequently difficult to learn the procedural constraints imposed by
the interpreter. Thus, declarative failures may arise from misunderstanding
the implicit procedural content of a program. This paper argues for a
``constructive'' approach to identifying poor understanding of procedural
interpretation, and presents a prototype diagnostic system for Prolog.

Error modelling is based on the notion of a modular interpreter, misconceptions
being seen as modifications of correct procedures. A trace language,
based on conceptual analysis of a novice view of Prolog, is used by
both the user to describe his conception of execution, and the system to
display the actual execution process. A comparison between traces enables
the correct interpreter to be modified in a manner which progressively
corresponds to the user's mental interpreter.
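The trace-comparison step can be sketched as locating the first point at which the user's conceived execution departs from the interpreter's actual one; that point localizes the misconception to be debugged. The trace vocabulary below is a stand-in, not the memorandum's trace language:

```python
def first_divergence(user_trace, actual_trace):
    """Index of the first step where the user's conception of execution
    departs from the actual trace, or None if they agree throughout."""
    for i, (u, a) in enumerate(zip(user_trace, actual_trace)):
        if u != a:
            return i
    if len(user_trace) != len(actual_trace):
        return min(len(user_trace), len(actual_trace))
    return None

# The user expects goal q to succeed; the interpreter actually fails it.
user   = ["call p", "call q", "exit q", "exit p"]
actual = ["call p", "call q", "fail q", "redo q"]
print(first_divergence(user, actual))  # -> 2
```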

Dorfman, S.B. & Wilks, Y. (1986), SHAGRIN: A Natural
Language Graphics Package Interface, MCCS-85-48.

It is a standard problem in applied AI to construct a front-end to some
formal data base with the user's input as near English as possible. SHAGRIN
is a natural language interface to a computer graphics package. In
constructing SHAGRIN, we have chosen some non-standard goals: (1) SHAGRIN
is just one of a range of front-ends that we are fitting to the same formal
back-end. (2) We have chosen not a data base in the standard sense, but a
graphics package language, a command language for controlling the production
of graphs on a screen. Parser output is used to generate graphics world
commands which then produce graphics PACKAGE commands. A four-component
context mechanism incorporates pragmatics into the graphics system as well
as actively aids in the maintenance of the state of the graph world.


Manthey, M.J. (1986), Hierarchy in Sequential and
Concurrent Systems or What's in a Reply, MCCS-85-51.

The notion of hierarchy as a tool for controlling conceptual
complexity is justifiably well entrenched in computing in general,
but our collective experience is almost entirely in the realm of
sequential programs. In this paper we focus on exactly what the
hierarchy-defining relation should be to be useful in the realm of
concurrent programming. We find traditional functional dependency
hierarchies to be wanting in this context, and propose an alternative
based on shared resources. Finally we discuss some historical and
philosophical parallels which seem to have gone largely unnoticed in
the computing literature.

Huang, X-M (1986), A Bidirectional Chinese Grammar
in A Machine Translation System, MCCS-85-52.

The paper describes a Chinese grammar which can be run bidirectionally, i.e.,
both as a parser and as a generator of Chinese sentences. When used as a
parser, the input to the grammar is single Chinese sentences, and the output
would be tree structures for the sentences; when used as a generator, tree
structures are the input, and Chinese sentences, the output. The main body
of the grammar, the way bidirectionality is achieved, and the performance of
the system with some example sentences are given in the paper.
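The parse/generate symmetry can be illustrated with a toy reversible grammar in which the same rule tables drive both directions. The rules, the two-word lexicon, and the romanized example words are invented for illustration; the memorandum's grammar of Chinese is far richer:

```python
# One phrase-structure rule and a tiny lexicon, shared by both directions.
RULES = {"S": ["NP", "VP"]}
LEXICON = {"NP": {"wo"}, "VP": {"lai"}}   # "wo lai" ~ "I come"

def parse(words, cat="S"):
    """Return a tree for `words` under category `cat`, or None."""
    if cat in LEXICON:
        if len(words) == 1 and words[0] in LEXICON[cat]:
            return (cat, words[0])
        return None
    left, right = RULES[cat]
    for i in range(1, len(words)):
        l, r = parse(words[:i], left), parse(words[i:], right)
        if l and r:
            return (cat, l, r)
    return None

def generate(tree):
    """Flatten a tree back into a word list: the inverse of parse."""
    if len(tree) == 2 and isinstance(tree[1], str):
        return [tree[1]]
    return generate(tree[1]) + generate(tree[2])

tree = parse(["wo", "lai"])
print(tree)            # -> ('S', ('NP', 'wo'), ('VP', 'lai'))
print(generate(tree))  # -> ['wo', 'lai']
```

Running generate over the output of parse recovers the input sentence, which is the round-trip property bidirectionality requires.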

Partridge, D. & Wilks, Y. (1986), Does AI have a methodology different
from Software Engineering?, MCCS-85-53.

The paper argues that the conventional methodology of software
engineering is inappropriate to AI, but that the failure of many
in AI to see this is producing a Kuhnian paradigm ``crisis''. The
key point is that classic software engineering methodology (which
we call SPIV: Specify-Prove-Implement-Verify) requires that the
problem be circumscribable or surveyable in a way that it is not
for areas of AI like natural language processing. In addition, it
also requires that a program be open to formal proof of
correctness. We contrast this methodology with a weaker form SAT
(complete Specification And Testability - where the last term is
used in a strong sense: every execution of the program gives
decidably correct/incorrect results) which captures both the
essence of SPIV and the key assumptions in practical software
engineering. We argue that failure to recognize the
inapplicability of the SAT methodology to areas of AI has
prevented development of a disciplined methodology (unique to AI,
which we call RUDE: Run-Understand-Debug-Edit) that will
accommodate the peculiarities of AI and also yield robust,
reliable, comprehensible, and hence maintainable AI software.

Slator, B.M., Conley, W. & Anderson, M.P. (1986), Towards an Adaptive
Front-end, MCCS-85-54.

An adaptive natural language interface to a graphics package has
been implemented. A mechanism for modelling user behavior
operating over a script-like decision matrix capturing
co-occurrence of commands is used to direct the interface, which
uses a semantic parser, when ambiguous utterances are
encountered. This is an adaptive mechanism that forms a model of
a user's tendencies by observing the user in action. This
mechanism provides a method for operating under conditions of
uncertainty, and it adds power to the interface - but, being a
probabilistic control scheme, it also adds a corresponding
element of nondeterminism.

A hidden operator experiment was conducted to collect utterance files
for a user-derived interface development process. These empirical
data were used to design the interface; and a second set, collected
later, was used as test data.
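The co-occurrence mechanism might be sketched as a minimal model that counts which command tends to follow which, and is consulted to resolve ambiguous utterances. The class and method names below are assumptions; the memorandum's script-like decision matrix is surely richer:

```python
from collections import defaultdict

class CommandModel:
    """Counts how often command b follows command a in a user's sessions,
    and predicts the most likely next command from that history."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, prev, cmd):
        self.counts[prev][cmd] += 1

    def predict(self, prev):
        followers = self.counts[prev]
        return max(followers, key=followers.get) if followers else None

model = CommandModel()
for prev, cmd in [("plot", "label"), ("plot", "label"), ("plot", "save")]:
    model.observe(prev, cmd)
print(model.predict("plot"))  # -> 'label'
```

Because the prediction is only probabilistic, it adds exactly the element of nondeterminism the abstract notes.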


Lopez, P., Johnston, V. & Partridge, D. (1986), Automatic Calibration
of the Geometric Workspace of an Intelligent Robot, MCCS-85-55.

An intelligent robot consisting of an arm, a single camera, and a computer,
functioning in an industrial environment, is described. A variety of
software algorithms that compute and maintain, at task-execution time,
the mappings between robot arm, work environment (the robot's world),
and camera coordinate systems, are presented.

These mappings are derived through a sequence of arm movements
and subsequent image ``snapshots'', from which arm motion is
detected. With the aid of world self-knowledge (i.e., knowledge of the
length of the robot arm and the height of the arm to the base
pivot), the robot then uses its ``eye'' to calculate a
pixel-to-millimeter ratio in two known planes. By ``looking''
at its arm at two different heights, it geometrically computes the
distance of the camera from the arm, hence deriving the mapping from
the camera to the work environment. Similarly, the calculation of
the intersection of two arm positions (where wrist location
and hypothetical base location form a line) gives a base pivot
position. With the aid of a perspective projection, now possible
since the camera position is known, the position of the base and
its planar angle of rotation in the work environment (hence the world
to arm mapping) is determined. Once the mappings are known,
the robot may begin its task,
updating the approximate camera and base pivot positions with
appropriate data obtained from task-object manipulations. These
world model parameters are likely to remain static
throughout the execution of a task, and as time passes, the
old information receives more weight than new information when
updating is performed. In this manner, the robot first
calibrates the geometry of its workspace with sufficient accuracy
to allow operation using perspective projection, with performance
``fine-tuned'' to the nuances of a particular work environment
through adaptive control algorithms.
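The two-height measurement can be sketched with a pinhole-camera model: if a known arm segment appears p_i pixels long when held at height h_i, then p_i = f*L/(D - h_i), and the unknown product f*L cancels between the two views, leaving the camera height D. This is an illustrative reconstruction under an assumed pinhole model, not the memorandum's actual algorithm:

```python
def camera_height(p1, h1, p2, h2):
    """Recover the camera height D above the work plane from the apparent
    pixel lengths p1, p2 of a known arm segment seen at heights h1, h2.
    Pinhole model: p_i = f*L / (D - h_i); solving the pair of equations
    eliminates the unknown focal-length/segment-length product f*L."""
    return (p2 * h2 - p1 * h1) / (p2 - p1)

# Hypothetical numbers: the segment appears 70/9 px long at 100 mm
# and 10 px long at 300 mm.
D = camera_height(70 / 9, 100.0, 10.0, 300.0)
print(D)  # camera ~1000 mm above the table
```

The pixel-to-millimeter ratio in each known plane then follows directly from p_i and the known segment length.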


Fass, D. (1986), Collative Semantics: An Approach to Coherence,
MCCS-85-56.

Collative Semantics (CS) is a domain-independent semantics for
natural language processing that focusses on the problem of
coherence. Coherence is the synergism of knowledge (synergism is the
interaction of two or more discrete agencies to achieve an effect of
which none is individually capable) and plays a substantial role in
cognition. The representation of coherence is distinguished from
the representation of knowledge and some theoretical connections are
established between them. A type of coherence representation has
been developed in CS called the semantic vector. Semantic vectors
represent the synergistic interaction of knowledge from diverse
sources (including the context) that comprise semantic relations.
Six types of semantic relation are discriminated and represented:
literal, metaphorical, anomalous, novel, inconsistent and redundant.
The knowledge description scheme in CS is the senseframe, which
represents lexical ambiguity. The semantic primitives in senseframes
are word-senses which are a subset of the word-senses in natural
language. Because these primitives are from natural language, the
semantic markerese problem is avoided and large numbers of primitives
are provided for the differentiated description of concepts required
by semantic vectors. A natural language program called meta5 uses
CS; detailed examples of its operation are given.


McDonald, D.R. & Bourne, L.E. Jr. (1986), Conditional Rule Testing in
the Wason Card Selection Task, MCCS-85-57.

We used the Wason card selection task, with variations, to study
conditional reasoning. Disagreement exists in the literature, as to
whether performance on this task improves when the problem is
expressed concretely and when instructions are properly phrased. In
order to resolve some inconsistencies in previous studies, we examined
the following variables: (1) task instructions, (2) problem format,
and (3) the thematic compatibility of solution choices with formal
logic and with pre-existing schemas. In Experiment 1, performance
was best in an 8-card, rather than a 4-card or a hierarchical
decision-tree format. It was found in Experiment 2 that instructions
directing subjects to make selections based on ``violation'' of the
rule, rather than assessing its truth or falsity, resulted in more
correct responses. Response patterns were predictable in part from
formal logical considerations, but primarily from mental models, or
schemas, based on (assumed) common prior experience and knowledge.
Several explanations for the findings were considered.
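In the classic form of the task, each card shows a letter on one side and a number on the other, and the rule is ``if a card has a vowel on one side, it has an even number on the other'' (P implies Q). The logically correct selections are the P card and the not-Q card, since only those could falsify the rule; a short sketch makes this concrete (the card set is the standard textbook example, not one from the memorandum):

```python
def must_turn(card, is_p, is_not_q):
    """A card must be turned iff it could falsify P -> Q: it shows P
    (its back might be not-Q) or it shows not-Q (its back might be P)."""
    return is_p(card) or is_not_q(card)

cards = ["E", "K", "4", "7"]  # rule: vowel -> even number
is_p = lambda c: c in "AEIOU"                          # shows a vowel
is_not_q = lambda c: c.isdigit() and int(c) % 2 == 1   # shows an odd number

selected = [c for c in cards if must_turn(c, is_p, is_not_q)]
print(selected)  # -> ['E', '7']
```

Subjects typically turn ``E'' and ``4'' instead; the ``violation'' instructions described in Experiment 2 push responses toward the correct ``E'' and ``7''.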

Partridge, D., McDonald, J., Johnston, V. & Paap, K. (1986),
AI Programs and Cognitive Models: Models of Perceptual Processes,
MCCS-85-60.

We examine and compare two independently developed computer models of
human perceptual processes: the recognition of objects in a scene and
of words. The first model was developed to support intelligent
reasoning in a cognitive industrial robot - an AI system. The second
model was developed to account for a collection of empirical data and
known problems with earlier models - a cognitive science model. We
use these two models, together with the results of empirical studies
of human behaviour, to generate a generalised model of human visual
processing, and to further our claim that AI modelers should be more
cognizant of empirical data. A study of the associated human
phenomena provides an essential basis for understanding complex
models as well as valuable constraints in complex and otherwise
largely unconstrained domains.

------------------------------

End of AIList Digest
********************
