AIList Digest Tuesday, 16 Sep 1986 Volume 4 : Issue 180
Today's Topics:
Administrivia - Resumption of Service,
AI Tools - C,
Expert Systems - Matching,
Philosophy - Argumentation Style & Sports Analogy,
Physiology - Rate of Tissue Replacement
----------------------------------------------------------------------
Date: Tue 16 Sep 86 01:28:28-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Resumption of Service
I'm back from vacation and almost have the mail streams under control
again. This issue clears out "old business" messages relating to the
discussions in early August. I'll follow with digests flushing the
accumulated queries, Usenet replies, news items, conference and
seminar abstracts, and bibliographic citations -- spread out a bit so
that I'm not deluged with mailer bounce messages from readers who have
dropped without notification. Incidentally, about 30 people signed up
for direct distribution this month despite the inactivity of the list.
(Most of the additions for the last year have been on BITNET, often in
clusters as new universities join the net or become aware of the
Arpanet digests. Most Arpanet and CSNet sites are now using bboards
and redistribution lists or are making use of the Usenet mod.ai/net.ai
distribution.)
I plan to pass along only an abbreviated announcement for conferences
that have already been announced in the NL-KR, IRList, or Prolog lists
-- you can contact the message author if you need the full text.
(Note that this may reduce the yield of keyword searches through the
AIList archive; future historians will have to search the other lists
to get a full picture of AI activity. Anyone building an intelligent
mail-screening system should also incorporate cross-list linkages.
Any such screening system that can understand and coordinate these
message streams deserves a Turing award.)
-- Ken Laws
------------------------------
Date: Wed, 20 Aug 86 10:07:49 edt
From: cdx39!jc%rclex.UUCP@harvard.HARVARD.EDU
Subject: Re: Reimplementing in C
> I've been hearing and seeing something for the past couple years,
> something that seems to be becoming a folk theorem. The theorem goes
> like this:
> Many expert systems are being reimplemented in C.
> I'm curious what the facts are.
[I program in C, and have reached the conclusion that most AI
programming could be done in that language as easily as in LISP
if libraries of list-oriented subroutines were available. (They
needn't be consed lists -- I use dynamically allocated arrays.)
You do have to worry about storage deallocation, but that buys you
considerable run-time efficiency. You also lose the powerful
LISP debugging environment, so fill your code with lots of
argument checks and ASSERTs. Tail recursion isn't optimized,
so C code should use iteration rather than recursion for most
array-based list traversals. Data-driven and object-oriented
coding are easy enough, but you can't easily build run-time
"active objects" (i.e., procedures to be applied to message
arguments); compiled subroutines have to do the work, and dynamic
linking is not generally worth the effort. I haven't tried much
parsing or hierarchy traversal, but programs such as LEX, YACC,
and MAKE show that it can be done. -- KIL]
Well, now, I don't know about re-implementing in C, but I myself
have been doing a fair amount of what might be called "expert
systems" programming in C, and pretty much out of necessity.
This is because I've been working in the up-and-coming world
of networks and "intelligent" communication devices. These
show much promise for the future; unfortunately they also
add a very "interesting" aspect to the job of an application
(much less a system) programmer.
The basic problem is that such comm devices act like black
boxes with a very large number of internal states; the states
aren't completely documented; those that are documented are
invariably misunderstood by anyone but the people who built
the boxes; and worst of all, there is usually no reliable
way to get the box into a known initial state.
As a result, there is usually no way to write a simple,
straightforward routine to deal with such gadgets. Rather,
you are forced to write code that tries to determine 1)
what states a given box can have; 2) what state it appears
to be in now; and 3) what sort of command will get it from
state X to state Y. The debugging process involves noting
unusual responses of the box to a command, discussing the
"new" behavior with the experts (the designers if they are
available, or others with experience with the box), and
adding new cases to your code to handle the behavior when
it shows up again.
One of the simplest examples is an "intelligent ACU", which
we used to call a "dial-out modem". These now contain their
own processor, plus sufficient ROM and RAM to amount
to small computer systems of their own. Where such boxes
used to have little more than a status line to indicate the
state of a line (connected/disconnected), they now have an
impressive repertoire of commands, with a truly astonishing
list of responses, most of which you hope never to see. But
your code will indeed see them. When your code first talks
to the ACU, the responses may include any of:
1. Nothing at all.
2. Echo of the prompt.
3. Command prompt (different for each ACU).
4. Diagnostic (any of a large set).
Or the ACU may have been in a "connected" state, in which
case your message will be transmitted down the line, to be
interpreted by whatever the ACU was connected to by the most
recent user. (This recursive case is really fun!:-)
The last point is crucial: In many cases, you don't know
who is responding to your message. You are dealing with
chains of boxes, each of which may respond to your message
and/or pass it on to the next box. Each box has a different
behaviour repertoire, and even worse, each has a different
syntax. Furthermore, at any time, for whatever reason
(such as power glitches or commands from other sources),
any box may reset its internal state to any other state.
You can be talking to the 3rd box in a chain, and suddenly
the 2nd breaks in and responds to a message not intended
for it.
The best way of handling such complexity is via an explicit
state table that says what was last sent down the line, what
the response was, what sort of box we seem to be talking to,
and what its internal state seems to be. The code to use such
info to elicit a desired behavior rapidly develops into a real
piece of "expert-systems" code.
So far, there's no real need for C; this is all well within the
powers of Lisp or Smalltalk or Prolog. So why C? Well, when
you're writing comm code, you have one extra goodie. It's very
important that you have precise control over every bit of every
character. The higher-level languages always seen to want to
"help" by tokenizing the input and putting the output into some
standard format of their own. This is unacceptable.
For instance, the messages transmitted often don't have any
well-defined terminators. Or, rather, each box has its own
terminator(s), but you don't know beforehand which box will
respond to a given message. They often require nulls. It's
often very important whether you use CR or LF (or both, in
a particular order). And you have to timeout various inputs,
else your code just hangs forever. Such things are very awkward,
if not impossible, to express in the typical AI languages.
This isn't to say that C is the world's best AI language; quite
the contrary. I'd love to get a chance to work on a better one.
(Hint, hint....) But given the languages available, it seems
to be the best of a bad lot, so I use it.
If you think doing it in C is weird, just wait 'til
you see it in Ada....
------------------------------
Date: 2 Sep 86 08:31:00 EST
From: "CLSTR1::BECK" <beck@clstr1.decnet>
Reply-to: "CLSTR1::BECK" <beck@clstr1.decnet>
Subject: matching
Mr. Rosa is correct in saying that "the obstacles to implementation are not
technological," since this procedure is currently being implemented. See
"Matches Hit Civil Servants Hardest" in the August 15, 1986 GOVERNMENT COMPUTER
NEWS. "Computer matching" is defined there as "searching the available data for
addresses, financial information, specific personal identifiers, and various
irregularities." The congressional Office of Technology Assessment has recently
issued a report, "Electronic Record Systems and Individual Privacy," that
discusses matching.
My concern is how the conflicting rules of society will be reconciled
to treat the individual fairly. Maybe the cash society and anonymous logins will
become prevalent. Do you think that the falling cost of data will force data
keepers to do more searches to justify their existence? Has there been any
discussion of this topic?
peter beck <beck@ardec-lcss.arpa>
------------------------------
Date: Tue, 12 Aug 86 13:20:20 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: philosophy articles
> >Out of curiosity I hunted up [Jackson, "What Mary Didn't Know," _J.
> >of Philosophy_ 83(1986) 291-295] on the way back from lunch.
> >It's aggressive and condescending; any sympathy I might have felt for
> >the author's argument was repulsed by his sophomoric writing. I hope it's
> >not typical of the writing in philosophy journals.
>
> I don't quite understand what "aggressive and condescending" or
> "sophomoric writing" have to do with philosophical argumentation.
> One thing that philosophers try not to do is give ad hominem arguments.
A philosophical argument stands or falls on its logical merits, not its
> rhetoric.
That's an automatic reaction, and I think it's unsound. Since we're
not in net.philosophy, I'll be brief.
Philosophers argue about logic, terminology, and their experience of
reality. There isn't really much to argue about where logic is
concerned: we all know the principles of formal logic, and we're
all writing sincerely about reality, which has no contradictions in
itself. What we're really interested in is the nature of our
existence; the logic of how we describe it doesn't matter.
One reason that Jackson's article irritated me is that he uses formal
logic, of the sort "Either A or B, but not A, therefore B." This kind
of argument insults the reader's intelligence. Jackson ought to know
that nobody is going to question the soundness of such logic, but that
all his opponents will question his premises and his definitions.
Moreover, he appears to regard his premises and definitions as unassailable.
I call that stupid philosophizing.
Ad-hominem attacks may well help to discover the truth. When the man
with jaundice announces that everything is fundamentally yellow, you
must attack the man, not the logic. So long as he's got the disease,
he's right!
------------------------------
Date: Tue, 12 Aug 86 12:53:07 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: talk to the medium (from Risks Digest)
> Whether he was talking about the broadcast or the computer industry, he
> got the analogy wrong.
Of course--that's what makes the analogy "stick."
> If the subject is broadcasting, the sports analogy to a "programmer"
> is the guy that makes the play schedules.
Not exactly. McLuhan's "programmer" is the man who selects the content
of the medium, not what computer people call a programmer.
> ... But still, in computing,
> a programmer bears at least partial responsibility for the computer's
> (mis)behaviour.
I agree. But McLuhan is writing not about responsibility but about responsiveness.
Last Saturday I visited an apartment where a group of men and kids were
shouting at the TV set during a football game. It's a natural response,
and it would have been effective if TV were an interactive medium.
If you dislike this posting, will you complain to the moderator? To
the people who programmed netnews? To the editor of the New York
_Times?_ Of course not; you must talk to the medium, not to the
programmer!
------------------------------
Date: Wed, 20 Aug 86 10:06:56 edt
From: cdx39!jc%rclex.UUCP@harvard.HARVARD.EDU
Subject: Re: "Proper" Study of Science, Conservation of Info
[The following hasn't any obvious AI, but it's interesting enough
to pass along. Commonsense reasoning at work. -- KIL]
> The ability to quantify and measure ... has profound implications ...
>
> ... A decade from now it's likely that none of our bodies
> will contain EVEN A SINGLE ATOM now in them. Even bones are fluid in
> biological organisms; ...
OK, let's do some BOTE (Back Of The Envelope) calculations.
According to several bio and med texts I've read over the
years, a good estimate of the half-life residency of an atom
in the soft portions of a mammal's body is 1/2 year; in the
bones it is around 2 years. The qualifications are quite
obvious and irrelevant here; we are going for order-of-magnitude
figures.
For those not familiar with the term, "half-life residency"
means the time to replace half the original atoms. This
doesn't mean that you replace half your soft tissues in
6 months, and the other half in the next six months. What
happens is exponential: in one year, 1/4 of the original
are left; in 18 months, 1/8 are left, and so on.
Ten years is about 5 half-lives for the bones, and 20 for the
soft tissues. A human body masses about 50 Kg, give or take
a factor of 2. The soft tissues are primarily water (75%)
and COH2; we can treat it all as water for estimating the
number of atoms. This is about (50 Kg) * (1000 g/Kg) / (18
g/mole) = 3000 moles; times 6*10^23 molecules per mole, times
3 atoms per water molecule, gives us about 5*10^27 atoms.
The bones are a bit denser (with fewer atoms per
gram); the rest is a bit less dense (with more atoms per
gram), but it's about right. For order-of-magnitude estimates,
we would have roughly 10^27 atoms in each kind of tissue.
In 5 half-lives, we would divide this by 2^5 = 32 to get the
number of original atoms, giving us about 3*10^25 atoms of the
bones left. For the soft tissues, we divide by 2^20, which is
about 10^6, giving us about 10^21 of the original atoms.
Of course, although these are big numbers, they don't amount to
much mass, especially for the soft tissues. But they are a lot
more than a single atom, even if they are off by an order of
magnitude.
Does anyone see any serious errors in these calculations? Remember
that these are order-of-magnitude estimates; quibbling with anything
other than the first significant digit and the exponent is beside
the point. The only likely source of error is in the half-life
estimate, but the replacement would have to be much faster than a
half-year to stand a chance of eliminating every atom in a year.
In fact, with the exponential decay at work here, it is easy
to see that it would take about 90 half-lives (5*10^27 is roughly 2^92)
to replace the last atom with better than 50% probability.
For 10 years, this would mean a half-life residency of about
6 weeks, which may be true for a mouse or a sparrow, but I've
never seen any hint that human bodies might replace themselves
nearly this fast.
In fact, we can get a good upper bound on how fast our atoms
could be replaced, as well as a good cross-check on the above
rough calculations, by considering how much we eat. A normal
human diet is roughly a single Kg of food a day. (The air
breathed isn't relevant; very little of the oxygen ends up
incorporated into tissues.) In 6 weeks, this would add up to
about 50 Kg. So it would require using very nearly all the
atoms in our food as replacement atoms to do the job required.
This is clearly not feasible; it is almost exactly the upper
bound, and the actual figure has to be lower. A factor of 4
lower would give us the above estimate for the soft tissues,
which seems feasible.
There's one more qualification, but it works in the other
direction. The above calculations are based on the assumption
that incoming atoms are all 'new'. For people in most urban
settings, this is close enough to be treated as true. But
consider someone whose sewage goes into a septic tank and
whose garbage goes into a compost pile, and whose diet is
based on produce of their garden, hen-house, etc. The diet
of such people will contain many atoms that have been part
of their bodies in previous cycles, especially the C and N
atoms, but also many of the O and H atoms. Such people could
retain a significantly larger fraction of original atoms
after a decade.
Please don't take this as a personal attack. I just couldn't
resist the combination of the quoted lines, which seemed to
be a clear invitation to do some numeric calculations. In
fact, if someone has figures good to more places, I'd like
to see them.
------------------------------
End of AIList Digest
********************