AIList Digest             Monday, 7 Nov 1983       Volume 1 : Issue 92 

Today's Topics:
Halting Problem,
Metaphysics,
Intelligence
----------------------------------------------------------------------

Date: 31 Oct 83 19:13:28-PST (Mon)
From: harpo!floyd!clyde!akgua!psuvax!simon @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Discussion
Article-I.D.: psuvax.335

About halting:
it is unclear what is meant precisely by "can a program of length n
decide whether programs of length <= n will halt". First, the input
to the smaller programs is not specified in the question. Assuming
that each program gets a unique input known a priori (for
example, the index of the program), then the answer is obviously YES
under the following relaxation: the deciding program has size 2**n and
decides halting for the smaller programs (a few additive constants are
neglected as well). There are fewer than 2*2**n programs of length <= n.
For each, represent halting on the specific input the test is to apply to
by 1, and looping by 0. The resulting bit string is essentially the program
needed - it clearly exists. Getting hold of it is another matter - it
is also obvious that this cannot be done in a uniform manner for every
n, because of the halting problem. At the cost of more sophisticated
coding, and a tremendous expenditure of time, a similar construction can
be made to work with a deciding program of length O(n).
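
The counting argument above can be sketched in a few lines (a hypothetical illustration, not anything from the original post; `num_programs_up_to` and `table_decider` are names invented here):

```python
# Sketch of the counting argument: there are fewer than 2 * 2**n binary
# programs of length <= n, so a finite lookup table of halting bits
# "decides" halting for all of them. The table exists mathematically;
# computing it uniformly in n is impossible, which is the post's point.

def num_programs_up_to(n):
    # binary strings of length 1..n: 2 + 4 + ... + 2**n = 2**(n+1) - 2
    return sum(2 ** k for k in range(1, n + 1))

def table_decider(halting_bits):
    # halting_bits maps each program (a bit string) to 1 (halts on its
    # designated input) or 0 (loops); the "decider" is mere lookup.
    return lambda program: halting_bits[program] == 1

n = 3
assert num_programs_up_to(n) == 2 ** (n + 1) - 2   # 14 programs
assert num_programs_up_to(n) < 2 * 2 ** n          # fewer than 16

decide = table_decider({"0": 1, "1": 0})
assert decide("0") is True and decide("1") is False
```

The lookup table is the whole trick: existence of the decider is a counting fact, while constructing it would solve the halting problem.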


If the input is not fixed, the question is obviously hopeless - there are
very small universal programs.

As a practical matter it is not the halting problem that is relevant, but
its subrecursive analogues.
janos simon

------------------------------

Date: 3 Nov 83 13:03:22-PST (Thu)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxss!aaw @ Ucb-Vax
Subject: Re: Halting Problem Discussion
Article-I.D.: pyuxss.195

A point missing in this discussion is that the halting problem is
equivalent to the question:
Can a method be formulated, for ANY problem, that can determine
whether it is not getting closer to the solution?
So the meta-halters (not the clothing) can't be more than disguised
time limits, etc., for the general problem, since they CANNOT MAKE
INFERENCES ABOUT THE PROCESS they are to halt.
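
A minimal sketch of the point, assuming a toy step-function model of computation (all names here are invented for illustration): a monitor that makes no inference about the process can only spend a budget and give up.

```python
# A "meta-halter" with no model of the process it watches is just a
# disguised resource limit: it can confirm halting, but it can never
# distinguish "loops forever" from "hasn't halted yet".

def bounded_halts(step_fn, state, budget):
    # step_fn advances the computation one step; returns None on halt.
    for _ in range(budget):
        state = step_fn(state)
        if state is None:
            return True       # observed a halt
    return None               # budget exhausted: no inference possible

# toy processes: counting down halts; counting up never does
countdown = lambda s: None if s == 0 else s - 1
countup = lambda s: s + 1

assert bounded_halts(countdown, 5, 100) is True
assert bounded_halts(countup, 0, 100) is None   # "unknown", not "False"
```

Note the return type: the monitor never answers "does not halt", which is exactly the disguised-time-limit objection.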
Aaron Werman pyuxi!pyuxss!aaw

------------------------------

Date: 9 Nov 83 21:05:28-EST (Wed)
From: pur-ee!uiucdcs!uokvax!andree @ Ucb-Vax
Subject: Re: re: awareness - (nf)
Article-I.D.: uiucdcs.3586


Robert -

If I understand correctly, your reasons for preferring dualism (or
physicalism) to functionalism are:

1) It seems more intuitively obvious.
2) You are worried about legal/ethical implications of functionalism.

I find that somewhat amusing, as those are EXACTLY my reasons for
preferring functionalism to either dualism or physicalism. The legal
implications of differentiating between groups by arbitrarily denying
`souls' to one of them are well known; it usually leads to slavery.

<mike

------------------------------

Date: Saturday, 5 November 1983, 03:03-EST
From: JCMA@MIT-AI
Subject: Inscrutable Intelligence

From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>

Trying to find the ultimate definition for field-naming terms is a
wonderful, stimulating philosophical enterprise.

I think you missed the point altogether. The idea is that *OPERATIONAL
DEFINITIONS* are known to be useful and are found in all mature disciplines
(e.g., physics). The fact that AI doesn't have an operational definition of
intelligence simply points up the fact that the field of inquiry is not yet a
discipline. It is a proto-discipline precisely because key issues remain
vague and undefined and because there is no paradigm (in the Kuhnian sense of
the term, not popular vulgarizations).

That means that it is not possible to specify criteria for certification in
the field, not to mention the requisite curriculum for the field. This all
means that there is lots of work to be done before AI can enter the normal
science phase.

However, one can make an empirical argument that this activity has little
impact on technical progress.

Let's see your empirical argument. I haven't noticed any intelligent machines
running around the AI lab lately. I certainly haven't noticed any that can
carry on any sort of reasonable conversation. Have you? So, where is all
this technical progress regarding understanding intelligence?

Make sure you don't fall into the trap of thinking that intelligent machines
are here today (Douglas Hofstadter debunks this position in his "Artificial
Intelligence: Subcognition as Computation," CS Dept., Indiana U., Nov. 1982).

------------------------------

Date: 5 November 1983 15:38 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life

Have you ever gotten one of those phone calls from people who are trying
to sell you a magazine subscription? Those people sound *awfully* like
computers! They have a canned speech, with canned places to wait for
human (customer) response, and they seem to have a canned answer to
anything you say. They are also *boring*!

I know the entity at the other end of the line is not a computer
(because they recognize my voice -- someone correct me if this is not a
good test) but we might ask: how good would a computer program have to
be to fool someone into thinking that it is human, in this limited case?
I suspect you wouldn't have to do much, since the customer doesn't
expect much from the salescreature who phones. Perhaps there is a
lesson here.
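
How little the program would have to do can be sketched as a canned script with slots for yes/no replies (everything here is hypothetical; the classifier is bare keyword matching):

```python
# Toy canned-script caller: a fixed pitch, with customer replies
# classified only as affirmative / negative / other. The point is how
# far this alone might get in the limited telemarketing setting.

AFFIRM = {"yes", "sure", "ok", "okay", "yeah"}
NEGATE = {"no", "nope", "not"}

def classify(reply):
    words = set(reply.lower().split())
    if words & NEGATE:
        return "no"
    if words & AFFIRM:
        return "yes"
    return "other"

SCRIPT = {
    "prompt": "Hi! May I tell you about our magazine offer?",
    "yes": "Great! It's only a few dollars a month.",
    "no": "It will just take a moment, I promise.",
    "other": "I'll take that as a maybe!",
}

def run_turn(script, reply):
    # deliver the canned prompt, then pick the canned follow-up
    return script[classify(reply)]

assert run_turn(SCRIPT, "no thanks") == "It will just take a moment, I promise."
assert run_turn(SCRIPT, "yeah sure") == "Great! It's only a few dollars a month."
```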

-- Steve

[There is a system, in use, that can recognize affirmative and negative
replies to its questions. It also stores a recording of your responses
and can play the recording back to you before ending the conversation.
The system is used for selling (e.g., record albums) and for dunning,
and is effective partly because it is perceived as "mechanical". People
listen to it because of the novelty; it can be programmed to make negative
responses very difficult, and the playback of your own replies is very
effective. -- KIL]

------------------------------

Date: 1 Nov 83 13:41:53-PST (Tue)
From: hplabs!hao!seismo!uwvax!reid @ Ucb-Vax
Subject: Slow Intelligence
Article-I.D.: uwvax.1129

When people's intelligence is evaluated, at least subjectively, it is common
to hear such things as "He is brilliant but never applies himself," or "She
is very intelligent, but can never seem to get anything accomplished due to
her short attention span." This seems to imply to me that intelligence is
sort of like voltage--it is potential. Another analogy might be a
weight-lifter, in the sense that no one doubts her
ability to do amazing physical things, based on her appearance, but she needn't
prove it on a regular basis.... I'm not at all sure that people's working
definition of intelligence has anything at all to do with either time or
survival.



Glenn Reid
..seismo!uwvax!reid (reid@uwisc.ARPA)

------------------------------

Date: 2 Nov 83 8:08:19-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: intelligence and adaptability
Article-I.D.: ecsvax.1466

Just two quick remarks from a philosopher:

1. It ain't just what you do; it's how you do it.
Chameleons *adapt* to changing environments very quickly--in a way
that furthers their goal of eating lots of flies. But what they're doing
isn't manifesting *intelligence*.

2. There's adapting and adapting. I would have thought that
one of the best evidences of *our* intelligence is not our ability to
adapt to new environments, but rather our ability to adapt new
environments to *us*. We don't change when our environment changes.
We build little portable environments which suit *us* (houses,
spaceships), and take them along.

------------------------------

Date: 3 Nov 83 7:51:42-PST (Thu)
From: decvax!tektronix!ucbcad!notes @ Ucb-Vax
Subject: What about physical identity? - (nf)
Article-I.D.: ucbcad.645


It's surprising to me that people are still speaking in terms of
machine intelligence unconnected with a notion of a physical host that
must interact with the real world. This is treated as a trivial problem
at most (I think Ken Laws said that one could attach any kind of sensing
device, and hence (??) set any kind of goal for a machine). So why does
Hubert Dreyfus treat this problem as one whose solution is a *necessary*,
though not sufficient, condition for machine intelligence?

But is it a solved problem? I don't think so--nowhere near, from
what I can tell. Nor is it getting the attention it requires for solution.
How many robots have been built that can infer their own physical limits
and capabilities?

My favorite example is the oft-quoted SHRDLU conversation; the
following exchange has passed for years without comment:

-> Put the block on top of the pyramid
-> I can't.
-> Why not?
-> I don't know.

(That's not verbatim.) Note that in human babies, fear of falling seems to
be hardwired. A baby will still attempt, when old enough, to do things like
put a block on top of a pyramid--but it certainly doesn't seem to need an
explanation for why it should not bother after the first few tries. (And
at that age, it couldn't understand the explanation anyway!)

SHRDLU would have to be taken down, and given another "rule".
SHRDLU had no sense of what it is to fall down. It had an arm, and an
eye, but only a rather contrived "sense" of its own physical identity.
It is this sense that Dreyfus sees as necessary.
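
The gap can be made concrete with a toy blocks-world rule table (purely illustrative; SHRDLU did not work this way in detail, and all names are invented): the system can refuse an action, but stores no justification to draw on.

```python
# Toy illustration of the SHRDLU exchange: a rule forbids the action,
# but nothing models *why* (e.g. that things fall off pyramids), so the
# only honest answers are "I can't." and "I don't know."

RULES = {
    # (action, target shape) -> allowed?
    ("put_on", "pyramid"): False,
    ("put_on", "block"): True,
}

def try_action(action, target_shape):
    if RULES.get((action, target_shape), False):
        return "OK."
    return "I can't."

def why_not(action, target_shape):
    # the rule table carries no justification, so:
    return "I don't know."

assert try_action("put_on", "pyramid") == "I can't."
assert why_not("put_on", "pyramid") == "I don't know."
assert try_action("put_on", "block") == "OK."
```

Adding another rule only moves the problem: without some model of its own physical situation, the system can never ground the rule.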
---
Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: 4 Nov 83 5:57:48-PST (Fri)
From: ihnp4!ihuxn!ruffwork @ Ucb-Vax
Subject: RE:intelligence and adaptability
Article-I.D.: ihuxn.400

I would tend to agree that it's not how a being adapts to its
environment, but how it changes the local environment to better
suit itself.

Also, I would have to say that adapting the environment
would only aid in ranking the intelligence of a being if that
action was a voluntary decision. There are many instances
of creatures that alter their surroundings (water spiders come
to mind), but could they decide not to ??? I doubt it.

...!iham1!ruffwork

------------------------------

Date: 4 Nov 83 15:36:33-PST (Fri)
From: harpo!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: RE:intelligence and adaptability
Article-I.D.: hou5d.732

Man is the toolmaker and the principal tool-user of all the living things
that we know of. What does this mean?

Consider driving a car or skating. When I do this, I have managed to
incorporate an external system into my own control system with its myriad
of pathways both forward and backward.

This takes place at a level below that which usually is considered to
constitute intelligent thought. On the other hand, we can adopt external
things into our thought-model of the world in a way which no other creature
seems to be capable of.

Is there any causal relationship here?

Mark Terribile
DOdN

------------------------------

Date: 6 Nov 1983 20:54-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest V1 #90

Irwin Marin's course in AI started out by asking us to define
the term 'Natural Stupidity'. I guess artificial intelligence must be
anything both unnatural and unstupid. We had a few naturally stupid
examples to work with, so we got a definition quite quickly. Naturally
stupid types were unable to adapt, unable to find new representations,
and made of flesh and bone. Artificially intelligent types were
machines designed to adapt their responses and seek out more accurate
representations of their environment and themselves. Perhaps this would
be a good 'working' definition. At any rate, definitions are only
'working' if you work with them. If you can work with this one I
suggest you go to it and stop playing with definitions.
FC

------------------------------

End of AIList Digest
********************
