AIList Digest           Wednesday, 13 Apr 1988     Volume 6 : Issue 66 

Today's Topics:
Theory - Probability,
Opinion - Simulated Intelligence & Ethics of AI & The Future of AI

----------------------------------------------------------------------

Date: Thu, 7 Apr 88 00:13:57 HAE
From: Spencer Star <STAR@LAVALVM1>
Reply-to: <AIList@Stripe.SRI.Com>
Subject: Probability: is it appropriate, necessary, or practical

> a uniquely probabilistic approach to uncertainty may be
> inappropriate, unnecessary and impractical. D. J. Spiegelhalter

For the record, it was this quotation of Spiegelhalter by Paul Creelman
that prompted me to question its accuracy. I have since been able to
track down the source. A few words were dropped from what Spiegelhalter
actually wrote: "Deviations from the methodological rigor and coherence
of probability theory have been criticized, but in return it has been
argued that a uniquely probabilistic approach to uncertainty may be
inappropriate, unnecessary, and impractical."
Spiegelhalter goes on to discuss these criticisms and then replies to
them point by point. Roses to Paul for sending me the source and
brickbats to him for the liberties he took with it.

------------------------------

Date: 5 Apr 88 01:26:17 GMT
From: mind!eliot@princeton.edu (Eliot Handelman)
Subject: Simulated Intelligence

Intelligence draws upon the resources of what Dostoevsky, in the "Notes
from Underground", called the "advantageous advantage" of the individual
who found his life circumscribed by "logarithms", or some form of
computational determinism: the ability to veto reason. My present
inclination is to believe that AI, in the long run, may only be a test
for the underlying mechanical constraints of theories of intelligence,
and therefore inapplicable to the simulation of human intelligence.


I'd like to hear this argued for or against.

Best wishes to all,

Eliot Handelman

------------------------------

Date: Mon, 4 Apr 88 12:18 EST
From: Jeffy <JCOGGSHALL%HAMPVMS.BITNET@MITVMA.MIT.EDU>
Subject: submission: Ethics of AI

From: <ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu (Rick Wojcik)>
(I have deleted much text from between the following lines.)
>But the payoff can be tremendous.
>In the development stage, AI is expensive,
>but in the long term it is cost effective.
>the demand for AI is so great that we have no choice but to
>push on.

I would question the doctrine that what is most _cost-effective_ (in the
long term, of course) is best. I think that, as Caroline Knight said,
"Whatever the far future uses of AI are, we can try to make the current
uses as humane and as ethical as possible."

I mean, what are we developing it for anyway? It often seems that
AI is being developed for a specific purpose, but nobody seems to want to
be explicit about what it is. Technology is not neutral. If you develop AI
mainly as a war technology, then you will have a science that is most
easily suited for war (as far as I know, DARPA is _the_ main funder for AI
projects).
Here is a quote from a book by Marcus Raskin and Herbert Bernstein (they
are talking about the Einstein-Bohr debate here, and how the results of
quantum mechanics show us an observer-created universe):
"Bohr's position puts man, or at least his machines, at the center
of scientific inquiry. If he is correct, science's style and purpose has to
change. The problem has been that the physicists have not wanted to make
any critical evaluation of their scientific work, an evaluation which their
research cried out for just because of their belief that human beings
remain at the center of inquiry, and man cannot know fundamental laws of
nature. They rejected Einstins's conception of a Kantian reality and
without saying it, his view of scientific purpose. Even though no
fundamental laws can be found independent of man's beliefs and machines he
constructs, scientists abjure making moral judgements as part of their
work, even though they know - and knew - that the very character of science
had changed.
Standards for rational inquiry demand that moral judgements should
be added as an integral part of any paricular experiment. Unless shown
otherwise, I do not see how transformations of social systems, or the
generation of a new consciousness can occur if we hold on to narrow
conceptions of rational inquiry. Inquiry must now focus on relationsips.
The reason is that rational inquiry is not, cannot, and should not be
sealed from everyday life, institutional setting or the struggles which are
carried on throughout the world. How rational inquiry is carried on, who we
do it for, what we think about and _what we choose to see_ is never
insulated or antiseptic. Once we communticate it through the medium of
language, the symbols of mathematics, the metaphors and clishes of everyday
like, we call forth in the mids of readers of fellow analysts other issues
and considerations that may be outside of the four cornedrs of the
experiment or inquiry. What they bring to what they see, read, or replicate
is related to their purpose or agenda or their unconscious interpretations"

- from: "NEW WAYS OF KNOWING", page 115
In any case,

>But the payoff can be tremendous.
>In the development stage, AI is expensive,
>but in the long term it is cost effective.
>the demand for AI is so great that we have no choice but to
>push on.

is the wrong way to view what one is doing when one is doing AI.

- Jeff Coggshall
(JCOGGSHALL@HAMPVMS.BITNET)


(This article is also in response to:
From: ARMAND%BCVMS.BITNET@MITVMA.MIT.EDU
Subject: STATUS OF EXPERT SYSTEMS?)

------------------------------

Date: 31 Mar 88 01:26:50 GMT
From: jsnyder@june.cs.washington.edu (John Snyder)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>... But the demand for AI is so great that we have no choice but to
>push on.

We always have the choice not to develop a technology; what may be lacking
are reasons or will.

jsnyder@june.cs.washington.edu John R. Snyder
{ihnp4,decvax,ucbvax}!uw-beaver!jsnyder Dept. of Computer Science, FR-35
University of Washington
206/543-7798 Seattle, WA 98195

------------------------------

Date: 1 Apr 88 22:26:06 GMT
From: arti@vax1.acs.udel.edu (Arti Nigam)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]


In article <4565@june.cs.washington.edu> you write:
>
>We always have the choice not to develop a technology; what may be lacking
>are reasons or will.

I heard this from one of the greats of computer hardware evolution, only
I don't remember his name. What he said, and I say, is essentially this:
if you are part of an effort towards progress, in whatever field or
domain, you should have some understanding of WHERE you are going and
WHY you want to get there.

Arti Nigam

------------------------------

Date: 31 Mar 88 15:10:30 GMT
From: ulysses!sfmag!sfsup!saal@ucbvax.Berkeley.EDU (S.Saal)
Subject: Re: The future of AI - my opinion

I think the pessimism about AI is a bit more subtle. Whenever
something is still only vaguely understood, it is considered a
part of AI. Once we start understanding the `what,' `how,' and
(sometimes) `why,' we no longer consider it a part of AI. For
example, all robotics used to be part of AI. Now robotics is a
field unto itself, and only the more difficult aspects (certain
manipulations, object recognition, etc.) are within AI anymore.
Similarly for expert systems. It used to be that expert systems
were entirely within the purview of AI. That was when the AI folks
had no real idea how to build them and were trying all sorts of
methods. Now they understand them, and two things have happened:
expert systems are an independent branch of computer science, and
people have found that they no longer need to rely on the
(advanced) AI-type languages (Lisp, etc.) to get the job done.

Ironically, this makes AI a field that must make itself obsolete.
As more areas become understood, they will break off and become
their own fields. If not for finding new areas, AI would run out
of things to address.

Does this mean it isn't worthwhile to study AI? Certainly not,
if for no other reason than that AI is the think tank, the problem
_finder_, of computer science. So what if no problem in AI itself
is ever solved? Many problems that used to be in AI have been,
or are well on their way to being, solved. Yes, the costs are
high, and it may not look as though much is actually coming out
of AI research except for more questions, but asking the
questions and looking for the answers in the way that AI does
is a valid and useful approach.
--
Sam Saal ..!attunix!saal
Vayiphtach HaShem et Peah HaAtone

------------------------------

Date: 3 Apr 88 11:03:30 GMT
From: bloom-beacon.mit.edu!boris@bloom-beacon.mit.edu (Boris N
Goldowsky)
Subject: Re: The future of AI - my opinion


In article <2979@sfsup.UUCP> saal@sfsup.UUCP (S.Saal) writes:

>Ironically, this makes AI a field that must make itself obsolete.
>As more areas become understood, they will break off and become
>their own fields. If not for finding new areas, AI would run out
>of things to address.

Isn't that true of all sciences, though? If something is understood,
then you don't need to study it anymore.

I realize this is oversimplifying your point, so let me be more
precise. If you are doing some research and come up with results that
are useful, people will start using those results for their own
purposes. If the results are central to your field, you will also
keep expanding on them and so forth. But if they are not really of
central interest, the only people who will keep them alive are these
others... and if, as in the case of robotics, they are really useful
results they will be very visibly and profitably kept alive. But I
think this can really happen in any field, and in no way makes AI
"obsolete."

Isn't finding new areas what science is all about?

Bng


--
Boris Goldowsky boris@athena.mit.edu or @adam.pika.mit.edu
%athena@eddie.UUCP
@69 Chestnut St.Cambridge.MA.02139
@6983.492.(617)

------------------------------

Date: 4 Apr 88 16:25:49 GMT
From: hubcap!mrspock@gatech.edu (Steve Benz)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

>From article <1134@its63b.ed.ac.uk>, by gvw@its63b.ed.ac.uk (G Wilson):
> In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>> Moreover, your opinion that conventional techniques can
>>replace AI is ludicrous. Consider the area of natural language. What
>>conventional techniques that you know of can extract information from
>>natural language text or translate a passage from English to French?
>
> Errmmm...show me *any* program which can do these things? To date,
> AI has been successful in these areas only when used in toy domains.

In a real-world project here (real-world at least as far as real money
will carry you...), we developed a nearly-natural-language system that
deals with the "toy domain" of reading mail, querying databases, and
some other stuff.

It may be a toy, but some folks were willing to lay out a significant
number of dollars to get it. These applications are based on a
lazy-evaluation functional language (I wouldn't call that a
"conventional technique").
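
For those who haven't run into the technique, here is a minimal sketch
of the lazy-evaluation idea, done with Python generators. It only
illustrates the general style, not the language or system described
above, and every name in it is invented for the example.

    # Minimal sketch of lazy evaluation via Python generators
    # (illustrative only; not the functional language described above).
    from itertools import islice

    def matching_messages(mailbox, predicate):
        """Lazily yield messages satisfying predicate; no message is
        examined until a caller actually demands the next value."""
        for msg in mailbox:
            if predicate(msg):
                yield msg

    # Invented usage: the mailbox is effectively unbounded, but only
    # enough of it is scanned to produce the first ten matches.
    mailbox = ({"id": i, "subject": "Re: AI"} for i in range(10**9))
    query = matching_messages(mailbox, lambda m: m["id"] % 7 == 0)
    print(list(islice(query, 10)))

The appeal of this demand-driven style for user interfaces is that the
query machinery never does work the display hasn't asked for.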

But the best part about the whole thing (as far as our contract monitor is
concerned) is that it really wasn't all that expensive to do--less than
20 man-months went into the development of the language and fitting out
the old menu-driven software with the new technique. Overall, it was a
highly successful venture, allowing us to create high-quality user-interfaces
very quickly, and develop them semi-independently of the application itself.
None of the "conventional techniques" we had used before allowed us this.

So you see, AI has applications. I think the problem is that AI
techniques like expert systems and functional/logic programming simply
haven't filtered out of the university in sufficient quantity to make an
impact on the marketplace. The average BS-in-CS graduate probably has
had very limited exposure to these techniques, hence he/she will be
afraid of the unknown and will prefer to stick with "conventional
techniques."

To say that AI will never catch on is like saying that high-level
languages should never have caught on. At one point it looked unlikely
that HLLs would gain wide acceptance; better equipment and better
understanding by the programming community made them practical.

- Steve
mrspock@hubcap.clemson.edu
...!gatech!hubcap!mrspock

------------------------------

Date: 3 Apr 88 08:51:59 GMT
From: mcvax!ukc!its63b!gvw@uunet.uu.net (G Wilson)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <4640@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
> Moreover, your opinion that conventional techniques can
>replace AI is ludicrous. Consider the area of natural language. What
>conventional techniques that you know of can extract information from
>natural language text or translate a passage from English to French?

Errmmm...show me *any* program which can do these things? To date,
AI has been successful in these areas only when used in toy domains.

>The future of AI is going to be full of unrealistic hype and disappointing
>failures.

Just like its past, and present. Does anyone think AI would be as prominent
as it is today without (a) the unrealistic expectations of Star Wars,
and (b) America's initial nervousness about the Japanese Fifth Generation
project?

> But the demand for AI is so great that we have no choice but to
>push on.

Manifest destiny?? A century ago, one could have justified
continued research in phrenology by its popularity. Judge science
by its results, not its fashionability.

I think AI can be summed up by Terry Winograd's defection. His
SHRDLU program is still quoted in *every* AI textbook (at least all
the ones I've seen), but he is no longer a believer in the AI
research programme (see "Understanding Computers and Cognition",
by Winograd and Flores).

Greg Wilson

------------------------------

Date: 31 Mar 88 20:02:22 GMT
From: necntc!linus!philabs!ttidca!hollombe@AMES.ARC.NASA.GOV (The
Polymath)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?

My employers just sponsored a week-long in-house series of seminars,
films, vendor presentations, and demonstrations of expert systems
technology. I attended all of it, so I think I can reasonably respond to
this.

Apparently, the expert systems/knowledge engineering branch of so-called
AI (of which, more later) has made great strides in the last few years.
There are many expert-system-based commercial applications (some vendors
claim thousands) running in large and small corporations all over the
country.

In the past week we saw presentations by Gold Hill Computers (GOLDWORKS),
Aion Corp. (ADS), Texas Instruments (Personal Consultant Plus) and Neuron
Data (Nexpert Object). The presentations were impressive, even taking
into account their sales nature. None of the vendors is in any financial
trouble, to say the least. All claimed many delivered, working systems.

A speaker from DEC explained that their VAX configurator system couldn't
have been developed without an expert system (they tried and failed) and
that it is now one of the oldest and most famous expert systems running.

It was pointed out by some of the speakers that companies using expert
systems tend to keep a low profile about it. They consider their systems
company secrets: proprietary information that gives them an edge in
their market.

Personal Impressions:

The single greatest advantage of expert systems seems to be their rapid
prototyping capability. They can produce a working system in days or
weeks that would require months or years, if it could be done at all,
with conventional languages. That system can subsequently be modified
very easily and rapidly to meet changing conditions or to include new
rules as they're discovered. Once a given algorithm has stabilized over
time, it can be rewritten in a more conventional language but still
accessed by the expert system. The point is that the algorithm might
never have been determined at all but for the adaptable rapid
prototyping environment. (The DEC VAX configurator, mentioned above, is
an example of this. Much of it, but not all, has been converted to
conventional languages.)
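
To make the rule-based style concrete, here is a minimal sketch in
Python of the forward-chaining inference such shells perform. The rules
and facts are invented for illustration; a real shell adds conflict
resolution, certainty factors, an explanation facility, and so on.

    # Minimal forward-chaining rule engine (illustrative sketch only).
    # Facts are strings; each rule pairs a set of required facts with
    # one fact to conclude when they all hold.
    facts = {"printer_power_on", "no_output"}

    rules = [
        ({"printer_power_on", "no_output"}, "suspect_cable"),
        ({"suspect_cable", "cable_loose"},  "advise_reseat_cable"),
    ]

    changed = True
    while changed:                      # keep firing until quiescent
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # assert the derived fact
                changed = True

    print(facts)    # now includes the derived fact "suspect_cable"

The rapid-prototyping advantage the speakers described falls out of
this shape: adding or revising a rule is a one-line change to the rule
base, with no control flow to rewrite.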

As for expense, prices of systems vary widely, but are coming down. TI
offers a board with a LISP mainframe-on-a-chip (their term) that will
turn a Mac II into a LISP machine for as little as $7500. Other systems
went as high as an order of magnitude over that. I personally think
these won't really take off 'til the price drops another order of
magnitude to put them in the hands of the average home hacker.

Over all, I'd have to say that expert systems, at least, are alive and
well with a bright future ahead of them.

About Artificial Intelligence:

I maintain this is a contradiction in terms, and likely to be so for the
foreseeable future. If we take "intelligence" to mean more than expert
knowledge of a very narrow domain, there's nothing in existence that can
equal the performance of any mammal, let alone a human being. We're just
beginning to explore the types of machine architectures whose great^n-
grandchildren might, someday, be able to support something approaching
true AI. I'll be quite amazed to see it in my lifetime (but the world
has amazed me before (-: ).

--
The Polymath (aka: Jerry Hollombe, hollombe@TTI.COM) Illegitimati Nil
Citicorp(+)TTI Carborundum
3100 Ocean Park Blvd. (213) 452-9191, x2483
Santa Monica, CA 90405 {csun|philabs|psivax|trwrb}!ttidca!hollombe

------------------------------

End of AIList Digest
********************
