AIList Digest            Friday, 28 Aug 1987      Volume 5 : Issue 205 

Today's Topics:
Philosophy - Wittgenstein and Natural Kinds & Fear of Philosophy &
Should AI be Scientific? & Philosophy of Science, AI Paradigms

----------------------------------------------------------------------

Date: 24 August 1987, 23:09:52 EDT
From: John Sowa <SOWA@ibm.com>
Subject: Wittgenstein and natural kinds

Wittgenstein's basic point is that the most important concepts
of ordinary language cannot be defined by a set of necessary and
sufficient conditions. No matter whether you try to give structural
definitions or functional definitions, you cannot state a precise set
of conditions that will admit all relevant instances while ruling out
all irrelevant ones.

In my book, Conceptual Structures (Addison-Wesley, 1984), I made the
distinction between natural types (or kinds) and role types. Something
can be recognized as belonging to a natural type by its own properties.
Examples include MAN, WOMAN, CAT, DOG, NUMBER, or NAIL. A role type
can be recognized only by relationships to something outside of itself:
FATHER, LAWYER, PET, WATCHDOG, QUOTIENT, or FASTENER. The number 4,
for example, can be recognized as a number in isolation, but as a
sum, divisor, quotient, product, etc., only in relation to something
else. A tee shirt had the slogan "Food is the only edible thing in the
universe." That is true by definition, since FOOD is a role type,
defined by its role of being considered edible.
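
In programming terms the distinction might be sketched as follows (a
hypothetical illustration with invented names, not anything from the
book): a natural type is testable by a one-place predicate over an
entity's own properties, while a role type needs a further argument
carrying the external relationship.

    # Hypothetical sketch: natural types vs. role types.
    # Natural type: a one-place test on the entity itself.
    def is_number(x):
        return isinstance(x, (int, float, complex))

    # Role type: the test needs something outside the entity.
    def is_quotient(x, dividend, divisor):
        return divisor != 0 and dividend / divisor == x

    print(is_number(4))          # True: 4 is a number in isolation
    print(is_quotient(4, 8, 2))  # True only relative to 8 and 2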

Yet that distinction does not solve Wittgenstein's problem. Every
culture has its own standards of what is considered edible. In
Scandinavia, there is a rotten fish delicacy that requires a mound of
raw onions and garlic to prepare the taste buds and liberal quantities
of aquavit to wash it down. Even for a particular individual, the degree
of hunger shifts the boundary between the roles of FOOD and GARBAGE.

Even mathematical concepts have shifting definitions. Consider what
happened to the concept of number as rational, irrational, complex,
and transfinite numbers were introduced. If you try
to give a precise definition today, somebody tomorrow is sure to invent
some kind of hyper-quaternary-irresolute number that will violate your
definition, yet be so similar to what mathematicians like to call a
number that they would not want to exclude it.

To handle Wittgenstein's notion of meaning as use, I introduced
schematic clusters (in Section 4.1 of Conceptual Structures) as an
open-ended collection of schemata (or frames) associated with a
concept type. Each schema would represent one pattern of use (or
perspective) for a type, but it would not exhaust the complete meaning
of that type. There would always be the possibility of some new
experience that would add new schemata to the cluster. Consider the
concept ADD: one schema would show its use in arithmetic. But if
someone wants to talk about adding a line to a file, another schema
could be added to the cluster for that use. And then one should add
a new schema for adding schemata to clusters. Every schema in a
cluster represents one valid use of the concept type. The meaning
is determined not by any definition, but by the collection of all
the permissible uses, which can grow and change with time.
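
One might model a schematic cluster directly as an open-ended table
from concept types to their known patterns of use (a minimal sketch,
assuming nothing beyond the description above; the names are invented):

    # Minimal sketch of schematic clusters: meaning as a growing
    # collection of permissible uses, not a fixed definition.
    from collections import defaultdict

    clusters = defaultdict(list)   # concept type -> list of schemata

    def add_schema(concept, schema):
        # Extend a concept's meaning with one more pattern of use.
        clusters[concept].append(schema)

    add_schema("ADD", "combine two numbers arithmetically")
    add_schema("ADD", "append a line to a file")
    add_schema("ADD", "attach a new schema to a cluster")  # self-applying

    print(clusters["ADD"])   # the meaning so far: all permissible uses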

Does that solve the problem? Maybe, but we still need criteria
for determining what kinds of uses can legitimately be added to a
cluster. Could I say "To add something means to eat it with garlic
and onions"? What are the criteria for accepting or rejecting a
proposed extension to a concept's meaning?

John Sowa

------------------------------

Date: 25 Aug 87 08:33:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: Fear of philosophy


> From: mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET
> Subject: Should AI be scientific? If yes, how?
>
> ....Besides science, the other significant field with aspirations toward
> understanding reality is philosophy, which has even evolved a specialized
> subfield, ontology, devoted to the question. Now I haven't studied
> ontology, not because the question is unimportant, but because I
> think philosophical methodology is fatally flawed, and incapable of
> convincing me of the substance of any conclusions that it might
> obtain. ...I think philosophers' methodology has kept them from
> being as productive of useful understanding as they could have been.

It's worth noting that the rest of McKee's message consists of nothing
but philosophizing, with particular emphasis on epistemology and the
philosophy of science. Just for instance, his claim, in passing, that
it is verifiability that distinguishes the scientific method, simply
echoes the logical positivist school of thought.

No one doubts that there is a lot of silly philosophy out there, but
simply to ask questions like "Why should we believe the results
of a scientific inquiry more than those of an inquiry using non-scientific
methods?" or, more fundamentally, "What constitutes the scientific method?"
is already to begin to philosophize. These are, then, not properly
scientific questions (under what microscope will you find and verify
their answers?), but philosophic questions about science.

The distinction, then, is not between a) wooly-headed philosophers and
b) hard-headed scientists, but rather between a) self-conscious
philosophizing, which attempts to learn about and profit from 2000+
years of related efforts, and b) "naive" philosophizing, which
disdains previous experience and usually winds up inventing positions
originally propounded and discussed anywhere from 20 to 2000 years ago.

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: Tue, 25 Aug 87 10:56:51 MDT
From: shebs@cs.utah.edu (Stanley Shebs)
Reply-to: cs.utah.edu!shebs@cs.utah.edu (Stanley Shebs)
Subject: Re: Should AI be scientific? If yes, how?

In article <8708240436.AA19024@ucbvax.Berkeley.EDU>
mckee@CORWIN.CCS.NORTHEASTERN.EDU writes:
>
>[...] If AI researchers
>call themselves Computer Scientists (as many of them do), they're implicitly
>also claiming to be scientists.

Not necessarily. "Computer Science" is an unfortunate term that should be
phased out. I wasn't there when it got popular, but the timing is right for
the term to have been inspired by the plethora of "sciences" that got named
when the govt started handing out lots of money for science in the 60s.
I prefer the term "informatics" as the best of a bad lot of alternatives.
("Datology" sounds like a subfield of history; the study of dates :-) )

>[... tutorial on scientific method omitted ...]
> In AI, one can trace the operation of a theory that's been instantiated
>as a program, as long as there's sharing of source code and the hardware is
>the same. This gives you operational confirmation as well as implicational
>confirmation, since you can watch the computer's "mind" at work, pausing
>to examine the data, or single-step the inference engine.

Goedel's and Turing's ghosts are looking over our shoulders. We can't do
conventional science because, unlike the physical universe, the computational
universe is wide open, and anything can compute anything. Minute examination
of a particular program in execution tells one little more than what the
programmer was thinking about when writing the program.

>The points of
>divergence between multiple theories of the same phenomenon can thus be
>precisely determined. But theories summarize data, and where does the
>data come from? In academia, it's probably been typed in by a grad student;
>in industry, I guess this is one of the jobs of the knowledge engineer.
>In either case there's little or no standard way to tell if the data that
>are used represent a reliable sample from the population of possible data
>that could have been used. In other sciences the curriculum usually includes
>at least one course in statistics to give researchers a feel for sampling
>theory, among other topics. Statistical ignorance means that when an AI
>program makes an unexpected statement, you have only blind intuition and
>"common sense" to help decide whether the statement is an artifact of sampling
>error or a substantial claim.

I took a course in statistics, but you don't need a course to know that
sampling from a population is not meaningful, if you don't know what the
population is in the first place! In the case of AI, the population is
"intelligent behavior". Who among us can define *that* population precisely?
If the population is more restricted, say "where native-speaking Germans
place their verbs", then you're back in the Turing tarpit. A program that
just says "at the end" (:-) is behaviorally as valid as something that
does some complex inferences to arrive at the same conclusion. Worse,
Occam's razor makes us want to prefer the simpler program, even though
it won't generalize to other natural languages. When we generalize the
program, the population to sample gets ill-defined again, and we're back
where we started.
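
The point can be made concrete with a contrived sketch (the mini-corpus
and both "theories" are invented for illustration): on a sample
containing only subordinate clauses, the one-line program and a
rule-based one are behaviorally indistinguishable.

    # Contrived sketch: two theories of German verb placement that
    # agree on every sampled subordinate clause.
    corpus = ["weil er das Buch liest",
              "dass sie nach Hause geht"]

    def trivial_theory(clause):
        return "at the end"            # the one-liner joked about above

    def complex_theory(clause):
        # Pretend inference: subordinating conjunctions force
        # verb-final order; otherwise the verb is in second position.
        first = clause.split()[0]
        return "at the end" if first in ("weil", "dass") else "second position"

    # Indistinguishable on this sample, so Occam's razor favors the
    # trivial theory -- which will not generalize.
    print(all(trivial_theory(c) == complex_theory(c) for c in corpus))  # True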

>[...] It's not easy to think of statements about the content
>of AI (as opposed to its practice or techniques) that *could* be validated
>this way, much less hypotheses that actually *have* been experimentally
>validated. Hopefully, it's my ignorance of the field that leads me to
>say this. The best I can think of at the moment is "all intelligent systems
>that interact with the physical world maintain multiple representations
>for much of their knowledge."

This could only be a testable hypothesis if we agreed on the definition
of "intelligent system". Are gorillas intelligent because they use sign
language? Are birds intelligent because they use sticks? Are thermostats
intelligent? I don't believe the above hypothesis is testable. Almost the
only agreement you'd get is that humans are intelligent (ah, the hubris of
our species), but then you'd have to build a synthetic human, which isn't
going to be possible anytime soon. Even if you did build a synthetic human,
you'd get a lot of disagreement about whether it was correctly built, since
the Turing Test is too slow for total verification.

> - George McKee
> College of Computer Science [sic]
> Northeastern University, Boston 02115

AI people are generally wary of succumbing to "physics envy" and studying only
that which is easily quantifiable. It's like the drunk searching under the
street light because that's where it's easy to see. AI will most likely
continue to be an eclectic mixture of philosophy, mathematics, informatics,
and psychology. Perhaps the only problem is the name of the funding source -
any chance of an "NAIF"? :-)

stan shebs
shebs@cs.utah.edu

------------------------------

Date: 24 Aug 1987 14:50-EDT
From: Spencer.Star@h.gp.cs.cmu.edu
Subject: Re: AIList V5 #201 - Philosophy of Science, AI Paradigms

In V5 #201 Andrew Jenning suggests that AI is empirical research when a
programmer writes a program, because we have a definite criterion:
either the program works or it does not. Unfortunately, this view is
rather widespread. Also, it is wrong. Empirical research seeks to
make general statements of a quantitative nature. For example, the
measurement of the speed of light gives us a value that is applicable
in general, not just on Tuesday, July 15th, in Joe's lab. A psychologist who
measures the reaction time of a person before and after drinking
alcohol is making an empirical statement that should hold in other labs
under other similar experimental conditions. The central idea of
empirical research is that results be publicly repeatable and lead
to some generalizations. If it happens that the results confirm or
disconfirm some theoretical predictions, so much the better. A
programmer who gets a program to work says nothing more scientific than
a plumber who has cleared a drain or a dentist who has filled a tooth.
In most cases there was no theory being tested, there is no
generalization that can be made, the work is handcrafted and cannot be
repeated in another lab based on the public description of what was
done, and we cannot even be sure that the program works on anything
more than the specific examples used in the demonstration. At best
such a program is an example of craftsmanship and programming skills.
It has nothing to do with scientific research.
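
A toy illustration of the contrast (all numbers invented for the
sketch): a repeatable empirical claim is one where a second lab,
following the public description, finds roughly the same effect.

    # Two "labs" measure reaction times (seconds) before and after
    # alcohol; a generalizable result shows up in both.
    def mean(xs):
        return sum(xs) / len(xs)

    lab_a = {"before": [0.21, 0.23, 0.20, 0.22],
             "after":  [0.31, 0.29, 0.33, 0.30]}
    lab_b = {"before": [0.22, 0.20, 0.24, 0.21],
             "after":  [0.30, 0.32, 0.28, 0.31]}

    for name, lab in (("lab A", lab_a), ("lab B", lab_b)):
        effect = mean(lab["after"]) - mean(lab["before"])
        print(name, "slowdown: %.3f s" % effect)  # similar in both labs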

Spencer Star
(star@h.cs.cmu.edu)

------------------------------

End of AIList Digest
********************
