AIList Digest             Friday, 2 Jan 1987        Volume 5 : Issue 2 

Today's Topics:
Review - Spang Robinson Report, December 1986,
Philosophy - Connectionism & Consciousness,
Seminar - The Qualitative Process Engine (BBN)

----------------------------------------------------------------------

Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Spang Robinson Report, December 1986

Volume 2 Number 12, December 1986
Summary of Spang Robinson Report

AI at Microelectronics and Computer Technology Corporation (MCC)

Summary of organization and some of the projects of MCC.

In his attempt to encode "common sense," Douglas Lenat plans to develop
20 megabytes of data, searchable by conventional technology, over 200
man-years.

MCC is working on Proteus, a shell with specific truth-maintenance capabilities.

__________________________________________________________________________
Maxell, a manufacturer of microcomputer disk products, is providing two
software products free of charge in their boxes of disks. The first is
a rule-based expert system that handles 400 rules but supports neither math
in rules nor confidence factors. The other is a free-form
text searcher from Thunderstone called Logic-Line 1 with an integrated
thesaurus. [Thunderstone advertised several products including this
one in the March 1986 Byte among others. Logic-Line 1 was advertised
as "a major breakthrough in sub-cognitive mathematics" which "distills
the DNA/RNA like analog to any writer's thought processes." q. v. LEFF]

__________________________________________________________________________
New products:

Fuji Xerox - CSRL, a Battelle Memorial Lab program running on the Xerox 1100
Fuji Xerox - a system for Smalltalk-80
Japan IBM - a Stock Portfolio Selection system for PC's
Mitsubishi - Intelligent Dialog System
Mitsubishi - Meltran J/E English-Japanese translation system
Toshiba - English-Japanese system
Okidata - PENSEE, an English-Japanese system
Toshiba - Image Processor System
Sharp - Prolog interpreter/compiler for IX-5 and IX-7
__________________________________________________________________________
Other short notes

Intelligent Technology Incorporated has been selling Carnegie Group's
Knowledge Craft and Language Craft in Japan and the Far East, as well
as training other companies' people to be knowledge engineers.

Nisshou Iwai, Marubeni and Nomura computers are investing in this
company.

...

AIR has been selling GCLISP for the PC 9801 and will be selling
it for 286-based computers. ECC sells it for the FM16B. MuLISP is also
popular in Japan.

Japan Univac is selling CAI systems for nuclear power operators and
for teaching people LISP.

Nihon LAD is working on an application system called LPS (logic program
synthesis).

Computer Applications Corp is working on a software maintenance expert system
and one for estimating system size.

There are one hundred fielded systems based on Teknowledge's products
alone. Teknowledge has at least 300 more in advanced stages of development.

MONY and Harvard Tax and Investment Planners are using Plan Power, a
financial planning expert system, to save 50 percent of their time.

Inference and American Cimflex will be developing AI based computer-integrated
manufacturing products.

Votan is developing voice recognition systems that can handle 100-decibel
background noise, as found in manufacturing environments.

CL Publications has purchased AI Expert.

Boole and Babbage is selling an expert system to work with its DASD
RESPONSE Manager.


__________________________________________________________________________
Review of Machinery of the Mind: Inside the New Science of Artificial
Intelligence, a book aimed at the nonspecialist. It includes historical
background and information on the people who pioneered the field.

------------------------------

Date: 22 Dec 86 23:55:48 GMT
From: ihnp4!alberta!ubc-vision!ubc-cs!andrews@ucbvax.Berkeley.EDU
(Jamie Andrews)
Subject: Re: Challenge to Connectionists

In article <425@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>... meeting one or the other of the
>following criteria will be necessary:
> (i) Prove formally that not only is C not subject to perceptron-like
> constraints, but that it does have the power to generate
> mental capacity.
> (ii) Demonstrate C's power to generate mental capacity empirically...

Minsky and Papert's analysis of perceptrons was based on a very
precisely defined and restricted type of machine. It seems to me that the
emphasis in the discussion about connectionism should be on proving
that the connectionist approach cannot work (possibly *using*
_Perceptrons_-like arguments), rather than that _Perceptrons_-like
proofs *cannot* be applied to connectionism.
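
[Editorial sketch, not part of the original posting: the "perceptron-like
constraints" at issue are the kind Minsky and Papert proved for single-layer
perceptrons. A minimal illustration, with purely illustrative names, is the
XOR function: a single-layer perceptron learns the linearly separable AND
function easily but can never classify XOR correctly, however long it is
trained.]

    # Minimal perceptron sketch (illustrative only): learns AND, fails on XOR.
    def train_perceptron(samples, epochs=100, lr=0.1):
        """Perceptron learning rule; returns weights, bias, and whether
        every training sample ends up classified correctly."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        ok = all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
                 for (x1, x2), t in samples)
        return w, b, ok

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    print("AND learned:", train_perceptron(AND)[2])  # True: linearly separable
    print("XOR learned:", train_perceptron(XOR)[2])  # False: not linearly separable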

I think both connectionists and anti-connectionists should be
involved in this proof process, however. I wouldn't want the
discussion to turn into yet another classic AI political battle.

>To summarize, my challenge to connectionists is that they either
>provide (1) formal proof or (ii) empirical evidence for their claims
>about the present or future capacity of C to model human performance
>or its underlying function.

If you mean by this that we should not study connectionism
until connectionists have done one of these things, then (as you
point out) we might as well write off the rest of AI too. The
main thing should be to try to learn as much from the connectionist
model as possible, and to accept any proofs of uselessness if
someone should come up with them. We can't expect to turn all
connectionist researchers into Minskys in order to prove theorems
about it that must needs be very complex.

--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"Good heavens, Miss Sakamoto, you're beautiful"
This probably does not represent the views of the UBC
Computer Science Department, or anyone else, for that matter.

------------------------------

Date: Sun, 21 Dec 86 02:06:56 EST
From: "Keith F. Lynch" <KFL@AI.AI.MIT.EDU>
Subject: Consciousness

From: mcvax!ukc!rjf@seismo.CSS.GOV

    If someone had lived for several years with a supposed-person who turned
    out to be a robot, they would be severely shocked, when they discovered
    that fact, and would *not* say 'Well, you certainly had me fooled. I guess
    you robots must be conscious after all.'

That is what *I* would say. What WOULD be sufficient evidence for
consciousness? If only self experience is sufficient, does that mean
you don't think the rest of us are conscious?
What if YOU turned out to be a robot, much to your own surprise?
Would you then doubt your own consciousness? Or would you then say
"well, maybe robots ARE conscious, and humans AREN'T"?

    The problem is not just about what would deserve the attribution of
    consciousness, but about what we feel about making that attribution.

Huh? Does reality depend on feelings?

    And such feelings go much deeper than mere prejudice. I think they go as
    deep as love and sex, and are equally valid and valuable. I often turn
    machines on, but they don't do the same for me - they're not good enough,
    because they're not folks. And never will be.

What about aliens from another planet? They might give ample
evidence that they are intelligent (books, starships, computers,
robots, network discussion groups, etc) but might appear quite
physically repulsive to a human being. Would you believe them
to be conscious? Why or why not?
...Keith

------------------------------

Date: Thu, 18 Dec 86 18:45:46 n
From: DAVIS%EMBL.BITNET@WISCVM.WISC.EDU
Subject: unlikely submission to the ai-list...


*********rambling around consciousness******************************************


There appear to me to be utterly different, though related, meanings of
the phrase `consciousness', especially when used in the ai-domain. The
first refers to an individual's sense of its own `consciousness', whilst
the second refers to that which we ascribe to other apparently sentient
objects, mostly other humans. There tends to be an automatic assumption
that the two are necessarily related, and in some guises, of course, this
is connected with the problem of `other minds'. However, the distinction
runs to the core of ai, particularly in connection with the infamous
Turing test. I would like to illustrate that this is so, and point to at
least one possible consequence for ai as a `nuts-and-bolts' discipline.

Let us ignore (perhaps forever!) the origin of the internal sensation of
consciousness, and concentrate upon our ascription of this capacity to
other objects. This ascription is dependent upon our observation of
some object's behaviour, and it could be argued, arises from our need to
rationalize and order the world as perceived. The ascription rests conditionally
upon an object exhibiting behaviour which is seen to either demand, or at
least be commensurate with, our own feeling of `consciousness'. This in turn
requires a whole subset of properties such as intentionality and intelligence.
As we note from everyday life, most humans fulfill these demands - their
behaviour appears purposeful, intelligent, self-conscious, etc.

However, turn now to an example which few would defend as being a case
of a sentient being: the ubiquitous and often excellent chess machine. Despite
our intellectual position being one of knowing that "this thing ain't nuthin'
but a blob of silicon", the reactions to, and more importantly, strategies
of play against such machines rarely fit what one might (naively) expect
in the case of a complicated circuit. Instead, the machine is (publicly,
or privately) acknowledged to be `trying to win'. It is `smart'. It doesn't
like to lose. It `fouls up' or comes up with a `brilliant move'.

Of course, all this chat from computer chess players is meaningless - nobody
*really* believes in the will of the machine. Yet, it is very instructive
in the following sense: in order to formulate sensible strategies with a
well designed machine, we ascribe it intentionality (I owe this argument
to Daniel Dennett). That is to say, we use the fact that the machine behaves
*AS IF* it had intent, despite the fact that we know it has no such capacity.

A similar, though riskier, argument may be put forward for the reactions
of owners to their pets. I say riskier since the true status of sentience
in dogs, cats, etc. is arguable.

This ascription of intentionality is not, I believe, a mistake, simply
because intentionality does not really exist: it is an explanatory
construct which creates an arbitrary class (`intentional objects') but
has no real existence in the world (either as an emergent or concrete
property). What the ascription does is to provide a powerful way of dealing
with the world - it lets us make successful predictions about well designed
objects (such as human beings). We cannot pretend that we really know
anything about why the somewhat loosely defined object called John invited
a similarly fluid Mary over for a meal, but we can make a lot of correct
prior judgements if we ascribe intent to John......

So, back to nuts-and-bolts ai. As technicians sit in their nuts-and-bolts
laboratories, seeking the Josephson concurrent 5th generation hypercube
that will stroll through the Turing test, and into your lounge, workplace and
maybe even elsewhere, perhaps they should reflect upon their design
strategy. The accolade of appearing as `almost human' is a function of
the describer (aka: beauty is in the ......). Humans get special points
because they are exceedingly well designed, and hence our ascriptions
of intelligence, intentionality and consciousness do a very good job of
helping us to understand and interact with other people (they also seem
to work quite well with dogs.....). But this is ONLY because we do what we do
exceedingly well, and what we do covers a very wide range of activities.

No computer that just tells the weather, just builds other computers,
or even just chats through a Turing interface will ever be regarded as we
regard other humans. Instead, it will get little more than the low-level
ascription of intentionality that chess machines demand in order to beat
them. The assignments of consciousness, intelligence, and intentionality
are all just higher points on this scale, however.

To sum up - you can't build a 'conscious' or an intelligent computer because
`consciousness' and `intelligence' are conceptual categories of description,
and not genuine properties. Current computers are not said to be `conscious'
because we are able to understand and predict their behaviour without
invoking such a category. Build us a computer as bewildering as a certain
leading US politician, and then maybe, just maybe, we may have to turn round
and say "hell, this thing really has a mind of its own...". But then again...

paul davis

bitnet/earn/netnorth: davis@embl
on the relay interchat: 'redcoat' (central european daytime)
by mail: european molecular biology laboratory
postfach 10.2209
meyerhofstrasse 1
6900 heidelberg
west germany/bundesrepublic deutschland

------------------------------

Date: Sun, 21 Dec 86 05:20:44 EST
From: "Steven A. Swernofsky" <SASW%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - The Qualitative Process Engine (BBN)

Date: 25 Nov 1986 09:59-EST
From: Brad Goodman <BGOODMAN at BBNG.ARPA>

BBN Laboratories
Science Development Program
AI/Education Seminar

Speaker: Professor Kenneth D. Forbus
Qualitative Reasoning Group
University of Illinois
(forbus@a.cs.uiuc.edu)

Title: The Qualitative Process Engine

Date: 10:30a.m., Monday, December 1st

Location: 2nd floor large conference room,
BBN Laboratories Inc., 10 Moulton St., Cambridge


This talk describes how to use an assumption-based truth maintenance
system (ATMS) to build efficient qualitative physics systems. In
particular, I will describe the Qualitative Process Engine (QPE), a new
implementation of Qualitative Process theory that is significantly simpler
and faster (by a factor of roughly 95) than the previous implementation.
After a short review of Qualitative Process theory, several organizing
abstractions for using an ATMS in problem solving will be identified. How
these abstractions can be applied to algorithms for qualitative physics
will then be described in detail. The performance of QPE will then be compared
with a previous implementation, and the advantages and drawbacks of
ATMS technology will be discussed.
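
[Editorial sketch, not part of the announcement: for readers unfamiliar with
assumption-based truth maintenance, the toy code below illustrates only the
core ATMS idea -- each datum carries a label, the set of minimal consistent
assumption sets (environments) under which it holds, so a problem solver can
ask which assumptions support a conclusion without chronological
backtracking. It is far simpler than a real ATMS or QPE, and all names are
illustrative.]

    from itertools import product

    class ATMS:
        """Toy assumption-based truth maintenance system."""

        def __init__(self):
            self.labels = {}          # node -> set of frozensets (environments)
            self.justifications = []  # (antecedent nodes, consequent node)
            self.nogoods = set()      # environments known to be inconsistent

        def assume(self, node):
            # An assumption holds under the environment containing just itself.
            self.labels.setdefault(node, set()).add(frozenset([node]))
            self._propagate()

        def justify(self, antecedents, consequent):
            # Record that the antecedents jointly justify the consequent.
            self.justifications.append((tuple(antecedents), consequent))
            self._propagate()

        def _consistent(self, env):
            return not any(nogood <= env for nogood in self.nogoods)

        def _propagate(self):
            # Push environments through justifications until nothing changes.
            changed = True
            while changed:
                changed = False
                for antecedents, consequent in self.justifications:
                    ant_labels = [self.labels.get(a, set()) for a in antecedents]
                    if not all(ant_labels):
                        continue  # some antecedent not yet supported
                    for combo in product(*ant_labels):
                        env = frozenset().union(*combo)
                        if not self._consistent(env):
                            continue
                        label = self.labels.setdefault(consequent, set())
                        if any(existing <= env for existing in label):
                            continue  # already supported by a smaller environment
                        label.difference_update({e for e in label if env <= e})
                        label.add(env)
                        changed = True

    # Example: "flow" holds under either of two independent assumptions.
    atms = ATMS()
    atms.assume("valve-open")
    atms.assume("pump-on")
    atms.justify(["valve-open"], "flow")
    atms.justify(["pump-on"], "flow")
    print(atms.labels["flow"])   # the two singleton environments, in some order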

------------------------------

End of AIList Digest
********************
