AIList Digest Thursday, 6 Nov 1986 Volume 4 : Issue 250
Today's Topics:
Philosophy - The Analog/Digital Distinction & Information
----------------------------------------------------------------------
Date: 29 Oct 86 16:29:10 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: The A/D Distinction: 5 More Replies
[This message actually fits into the middle of the sequence I sent
yesterday. Sorry for the reversal. -- KIL]
Here are 5 more replies I've received on the A/D distinction. I'll
respond in a later module. [Meantime, could someone post this to
sci.electronics, to which I have no access, please?]
------
(1)
Mon, 27 Oct 86 11:22:10 -0500
ken@rochester.arpa
U of Rochester, CS Dept, Rochester, NY 14627
Message-Id: <8610271622.11564@ur-seneca.arpa>
In-Reply-To: <13@mind.UUCP>
I think the distinction is simply this: digital deals with a finite set
of discrete {voltage, current, whatever} levels, while analog deals
with a *potentially* infinite set of levels. Now I know you are going
to say that analog is discrete at the electron noise level, but the
circuits are built on the assumption that the spectrum is continuous.
This leads to different mathematical analyses.
It's rather like infinite-memory Turing machines: we don't have them,
but we program computers as if they had infinite memory, and in practice,
as long as we don't run out, it's OK. So as long as we don't notice the
noise in analog, it serves.
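[A minimal Python sketch of Ken's point; the six levels and the
sample value are invented for illustration:]

  # Digital: a continuous measurement is collapsed onto one of
  # finitely many levels; analog treats the quantity as ranging
  # over a (potentially) infinite set of values.
  def digitize(voltage, levels=(0.0, 1.0, 2.0, 3.0, 4.0, 5.0)):
      """Snap a continuous voltage to the nearest of a finite set."""
      return min(levels, key=lambda v: abs(v - voltage))

  analog_reading = 3.1415926                    # any value in range
  digital_reading = digitize(analog_reading)    # one of six values
  print(analog_reading, "->", digital_reading)  # 3.1415926 -> 3.0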
--------
(2)
Tue, 28 Oct 86 20:56:36 est
cuuxb!mwm
AT&T-IS, Software Support, Lisle IL
In article <7@mind.UUCP> you write:
>The ground-rules are these: Try to propose a clear and
>objective definition of the analog/digital distinction that is not
>arbitrary, relative, a matter of degree, or loses in the limit the
>intuitive distinction it was intended to capture.
>
>One prima facie non-starter: "continuous" vs. "discrete" physical
>processes.
>
>Stevan Harnad (princeton!mind!harnad)
Analog and digital are two ways of *representing* information. A
computer can be said to be analog or digital (or both!) depending
upon how the information is represented within the machine, and
particularly, how the information is represented when actual
computation takes place.
Digital is essentially a subset of analog, where the range of
properties used to represent information is grouped into a
finite set of values. For example, the classic TTL digital
model uses electrical voltage to represent values, grouped
into the following ranges:

    above +5 volts        -- not used
    +2..+5 volts (approx) -- a binary 1
    0..+2 volts (approx)  -- a binary 0
    negative voltage      -- not used
Important to distinguish here is the grouping of the essentially
infinite possibilities of voltage into a finite set of values.
A system that used 4 voltage ranges to represent a base-4 number
system would still be digital. Note that this means it takes
several voltages to represent an arbitrarily precise number.
Analog, on the other hand, refers to using a property to directly
represent an infinite range of values with a different infinite
range of values: for example, representing the number 15 with
15 volts, and the number 100 with 100 volts. Note that this means
it takes one voltage to represent an arbitrarily precise number.
This is my pot-shot at defining analog/digital, how they relate,
and how they are used in most systems I am familiar with. I think
these definitions make reasonably clear what it is that "analog to
digital" converters (and "digital to analog") do.
This is why slide rules are considered analog: you are using distance
rather than voltage, but you can interpret a distance as precisely
as you want. An abacus, on the other hand, also uses distance, but
each disk means either one thing or another, and it takes
lots of disks to represent a number. An abacus, then, is digital.
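[A minimal Python sketch of the two schemes Marc describes; the TTL
thresholds are his approximations, the sample voltages invented:]

  # Digital: infinitely many possible voltages are grouped into a
  # finite set of symbols (the approximate TTL ranges above).
  def ttl_bit(volts):
      if 2.0 <= volts <= 5.0:
          return 1                # a binary 1
      if 0.0 <= volts < 2.0:
          return 0                # a binary 0
      raise ValueError("voltage outside the used ranges")

  # Analog: one quantity directly represents one number, so a single
  # voltage carries arbitrary precision (15 volts "means" 15).
  def analog_value(volts):
      return volts

  # It takes several digital voltages to encode a precise number:
  bits = [ttl_bit(v) for v in (0.4, 3.3, 3.1, 0.2)]   # -> [0, 1, 1, 0]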
Marc Mengel
...!ihnp4!cuuxb!mwm
--------
(3)
<bcsaic!ray>
Thu, 23 Oct 86 13:10:47 pdt
Message-Id: <8610232010.AA18462@bcsaic.LOCAL>
Try this:
(An) analog is a (partial) DUPLICATE (or abstraction)
of some material thing or some process, which contains
(it is hoped) the significant characteristics and properties
of the original. An analog is driven by situations and events
outside itself, and its usefulness is that the analog may be
observed and, via induction, the original understood.
A digital device or method operates on symbols, rather than
physical (or other) reality. Analog computers may operate on
(real) voltages and electron flow, while digital computers
operate on symbols and their logical interrelationships.
Digital operations are formal; that is, they treat form rather
than content, and are therefore always deductive, while the
behavior of real things and their analogs is not. (Heresy follows.)
It is one of my (unpopular) assertions that the central nervous
system of living organisms (including myself) is best understood
as an analog of "reality"; that most interesting behavior
such as induction and the detection of similarity (analogy and
metaphor) cannot be accomplished with only symbolic, and
therefore deductive, methods.
--------
(4)
Mon, 27 Oct 86 16:04:36 mst
lanl!a.LANL.ARPA!crs (Charlie Sorsby)
Message-Id: <8610272304.AA25429@a.ARPA>
References: <7@mind.UUCP> <45900003@orstcs.UUCP>, <13@mind.UUCP>
Stevan,
I've been more or less following your query and the resulting articles.
It seems to me that the terms as they are *usually* used today are rather
bastardized. Don't you think that when the two terms originated they
referred to two ways of "computing" and *not* to kinds of circuits at all?
The analog simulator (or, more popularly, analog computer) "computed" by
analogy. And, old timers may recall, they weren't all electronic or even
electrical. I vaguely recall reading about an analog simultaneous
linear-equation solver that comprised plates (rectangular, I think), cables
and pulleys.
Digital computers (truly so) on the other hand computed with *digits* (i.e.
numbers). Of course there was (is) analogy involved here too but that was
a "higher-order term" in the view and was conveniently ignored as higher
order terms often are.
In the course of time, the term analog came to be used for those
electronic circuits *like* those used in analog simulators (i.e. circuits
that work with continuous quantities). And, of course, digital came to
refer to those circuits *like* those used in digital computers (i.e. those
which work with discrete or quantized quantities).
Whether a quantity is continuous or discrete depends on such things as
the attribute considered, to say nothing of the person doing the
considering; hence the vagueness of definition and usage of the terms.
This vagueness seems to have worsened with the passage of time.
Best regards,
Charlie Sorsby
...!{cmcl2,ihnp4,...}!lanl!crs
crs@lanl.arpa
--------
(5)
Message-Id: <8610280022.AA16966@mitre.ARPA>
Organization: The MITRE Corp., Washington, D.C.
sundt@mitre.ARPA
Date: Mon, 27 Oct 86 19:22:21 -0500
Having read your messages for the last few months, I
couldn't help but take a stab at this issue.
Coming from a heavily theoretical undergraduate physics background,
I find it obvious that the ONLY distinction between the analog and
digital representations is the enumerability of the relationships
under the given representation.
First of all, the form of digital representation must be split into
two categories, that of a finite representation, and that of a
countably infinite representation. Turing machines assume a countably
infinite representation, whereas any physically realizable digital computer
must inherently assume a finite digital representation (be it ever so large).
Thus, we have three distinctions to make:
1) Analog / Finite Digital
2) Countably-Infinite Digital / Finite Digital
3) Analog / Countably-Infinite Digital
Second, there must be some predicate O(a,b) defined over all the a and b
in the representation such that the predicate O(a,b) yields only one of
a finite set of symbols, S(i) (e.g. "True/False"). If such a predicate does
not exist, then the representation is arguably ambiguous and the symbols are
"meaningless".
An example of an O(a,b) is the equality predicate over the reals, integers,
etc.
Looking at all the (a,b) pairs that map the O(a,b) predicate into the
individual S(i), note that the following is true:
ANALOG REPRESENTATION: the (a,b) pairs cannot be enumerated for ALL
S(i).
COUNTABLY-INFINITE DIGITAL REPRESENTATION: the (a,b) pairs cannot be
enumerated for ALL S(i).
FINITE DIGITAL REPRESENTATION: all the (a,b) pairs for all the S(i)
CAN be enumerated.
This distinguishes the finite digital representation from the other two
representations. I believe this is the distinction you were asking about.
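[A concrete Python rendering of the finite case; the 3-bit range and
the choice of equality for O(a,b) are illustrative:]

  from itertools import product

  symbols = range(8)              # a finite digital representation
  def O(a, b):                    # the predicate; S = {True, False}
      return a == b

  # Every (a,b) pair for each outcome CAN be enumerated:
  pairs = {s: [(a, b) for a, b in product(symbols, repeat=2)
               if O(a, b) == s]
           for s in (True, False)}
  print(len(pairs[True]), len(pairs[False]))   # 8 and 56, both finite

  # Over the reals (analog) or all the integers (countably-infinite
  # digital), no such exhaustive listing terminates.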
The distinction between the analog representation and the countably-infinite
digital representation is harder to identify. I sense it would require
the definition of a mapping M(a,b) onto the representation itself, and
the study of how this mapping relates to the O(a,b) predicate.
That is, is there some relationship between O(?,?), M(?,?) and the (a,b)
that is analogous to divisibility in Z and R? How this would be formulated
escapes me.
On your other-minds problem:
[see "Searle, Turing, Categories, Symbols"]
I think the issue here is related to the above classification. In particular,
I think the point to be made is that we can characterize when something is
NOT intelligent, but are unable to define when it is.
A less controversial issue would be to "Define chaos". Any attempt to do so
would give it a fixed structure, and therefore order. Thus, we can only
define chaos in terms of what it isn't, i.e. "Chaos is anything that cannot
be categorized."
Thus, chaos is the quality that is lost when a signal is digitized to
either a finite or a countably-infinite digital representation.
Analog representations would not suffer this loss of chaos.
Carrying this thought back to "intelligence," intelligence is the quality that
is lost when the behavior is categorized among a set of values. Thus, to
detect intelligence, you must use analog representations (and
meta-representations). And I am forced to conclude that the Turing test must
always be inadequate in assessing intelligence, and that you need to be an
intelligent being to *know* an intelligent being when you see one!!!
Of course, there is much error in categorizations like this, so in the *real*
world, a countably-infinite digital representation might be *O.K.*.
I wholly agree with your argument for basing symbols on observables,
and would also argue that semantic content is purely a result of a rich
syntactic structure with only a few primitive predicates, such as set
relations, ordering relations, etc.
Thinking about it further, I would argue, in view of what I just said, that
people are by construction only "faking" intelligence, and that we have
achieved a complexity whereby we can perceive *some* of the chaos left
by our crude categorizations (perhaps through multiple categorizations of
the same phenomena), and that this perception itself gives us the appearance
of intelligence. Our perceptions reveal only the tip of the chaotic iceberg,
however, by definition. To have true intelligence would require the perception
of *ALL* the chaos.
I hope you found this entertaining, and am anxious to hear your response.
Mitch Sundt The MITRE Corp. sundt@mitre.arpa
------------------------------
Date: 3 Nov 86 23:40:28 GMT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: Re: The Analog/Digital Distinction
(weinstein quoting goodman)
> > A scheme is syntactically dense if it provides for infinitely many
> > characters so ordered that between each two there is a third.
(harnad)
> I'm no mathematician, but it seems to me that this is not strong
> enough for the continuity of the real number line. The rational
> numbers are "syntactically dense" according to this definition.
Correct. There is no first-order way of defining the
real number line without introducing something like countably
infinite sequences and limits as primitives.
Moreover, if this is done in a countable language, you are
guaranteed that there is a countable model (if the definition
isn't contradictory). Since the real line isn't countable,
the definition cannot ensure you get the REAL reals.
Weinstein wants to identify *analog* with *syntactically dense*
plus some other conditions. Harnad observes that the rationals
fit the notion of syntactic density.
The rationals are, up to isomorphism, the only countable, dense,
linear order without endpoints. So any syntactically dense scheme
fitting this description is (isomorphic to) the rationals,
or a subinterval of the rationals (if left-closed, right-closed,
or both-closed at the ends).
One consequence is that one could define such an *analog* system
from a *digital* one by the following method:
Use the well-known way of defining the rationals from the
integers: rationals are pairs (a,b) of integers with b nonzero,
and (a,b) is *equivalent* to (c,d) iff a.d = b.c.
The *equivalence* classes are just the rationals, and
they are syntactically dense under the ordering
    (a,b) < (c,d) iff there is (f,g) such that f and g have
    the same sign and (a,b) + (f,g) = (c,d),
where (a,b) + (c,d) = (a.d + b.c, b.d), and the + is factored
through the equivalence.
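[The construction can be carried out mechanically; a minimal Python
sketch, taking b and d nonzero, with a quantifier-free ordering
test via cross-multiplication:]

  def equivalent(ab, cd):         # (a,b) ~ (c,d)  iff  a.d = b.c
      (a, b), (c, d) = ab, cd
      return a * d == b * c

  def add(ab, cd):                # (a,b) + (c,d) = (a.d + b.c, b.d)
      (a, b), (c, d) = ab, cd
      return (a * d + b * c, b * d)

  def less(ab, cd):               # a/b < c/d  iff  a.b.d.d < c.d.b.b
      (a, b), (c, d) = ab, cd     # (multiply both sides by (b.d)^2 > 0)
      return a * b * d * d < c * d * b * b

  # Density: between any two there is a third (half their sum).
  def midpoint(ab, cd):
      s = add(ab, cd)
      return (s[0], 2 * s[1])

  half, two_thirds = (1, 2), (2, 3)
  m = midpoint(half, two_thirds)                 # (7, 12), i.e. 7/12
  assert less(half, m) and less(m, two_thirds)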
We may be committed to this kind of phenomenon, since every
plausible suggested definition must have a countable model,
unless we include principles about non-countable sets that
are independent of set theory. And I conjecture that every
suggestion with a countable model is going to be straightforwardly
obtainable from the integers, as the above example was.
Peter Ladkin
ladkin@kestrel.arpa
------------------------------
Date: 3 Nov 86 23:47:34 GMT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: Re: The Analog/Digital Distinction
In article <1701@Diamond.BBN.COM>, aweinste@Diamond.BBN.COM
(Anders Weinstein) writes:
> The upshot of Goodman's requirement is that if a symbol system is to count as
> "digital" (or as "notational"), there must be some finite sized "gaps",
> however minute, between the distinct elements that need to be distinguished.
I'm not sure you want this definition of the distinction.
There are *finite-sized gaps, however minute* between rational
numbers, and if we use the pairs-of-integers representation to
represent the syntactically dense scheme (which must be
isomorphic to some subrange of the rationals, if countable),
we may use the integers and their gaps to distinguish the gaps
in the syntactically dense scheme in a quantifier-free manner.
Thus syntactically dense schemes would count as *digital*, too.
Peter Ladkin
ladkin@kestrel.arpa
------------------------------
Date: 4 Nov 86 19:03:09 GMT
From: nsc!amdahl!apple!turk@hplabs.hp.com (Ken "Turk" Turkowski)
Subject: Re: Analog/Digital Distinction
In article <116@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>(2) winnie!brad (Brad Garton) writes:
>> ... When I consider the digitized versions of analog
>> signals we deal with over here <computer music>, it seems that we
>> approximate more and more closely the analog signal with the
>> digital one as we increase the sampling rate.
There is a difference between sampled signals and digital signals. A
digital signal is not only sampled but also quantized. One can have an
analog sampled signal, as with CCD filters.
As a practical consideration, all analog signals are band-limited. By the
Sampling Theorem, there is a sampling rate at which a bandlimited signal can
be perfectly reconstructed. *Increasing the sampling rate beyond this
"Nyquist rate" cannot result in higher fidelity*.
What can affect the fidelity, however, is the quantization of the samples:
the more bits used to represent each sample, the more accurately the signal
is represented.
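[A minimal NumPy sketch separating the two steps; the tone, rate,
and bit depth are invented for illustration:]

  import numpy as np

  # Sampling alone: take the signal's values at discrete instants.
  # The samples are still analog in amplitude (as in a CCD).
  f, rate, dur = 440.0, 8000, 0.01          # 440 Hz tone, 8 kHz rate
  t = np.arange(0.0, dur, 1.0 / rate)
  sampled = np.sin(2 * np.pi * f * t)

  # Quantization: round each sample to one of finitely many levels;
  # only now is the signal digital in the sense above.
  half_levels = 2 ** 8 // 2 - 1             # 8-bit quantizer: 127
  quantized = np.round(sampled * half_levels) / half_levels

  # More bits shrink the error; raising the rate beyond the Nyquist
  # rate (here 880 Hz) cannot add fidelity by itself.
  print(np.max(np.abs(sampled - quantized)))    # at most half a step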
This brings us to the subject of Signal Theory. A particular class of
signal that is both time- and band-limited (all real-world signals) can
be represented by a linear combination of a finite number of basis
functions. This is related to the dimensionality of the signal, which is
approximately 2WT, where W is the bandwidth of the signal and T is the
duration of the signal.
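[Worked example: a hypothetical signal with bandwidth W = 4000 Hz and
duration T = 1 s has dimensionality 2WT = 2 x 4000 x 1 = 8000, matching
Nyquist-rate sampling at 2W = 8000 samples/second for one second.]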
>> ... This process reminds
>> me of Mandelbrot's original "How Long is the Coastline of Britain"
>> article dealing with fractals. Perhaps "analog" could be thought
>> of as the outer limit of some fractal set, with various "digital"
>> representations being inner cutoffs.
Fractals have a 1/f frequency distribution, and hence are not band-limited.
>> In article <105@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>> I'm not convinced. Common ways of transmitting analog signals all
>> *do* lose at least some of the signal, irretrievably...
Let's not forget noise. It is impossible to keep noise out of analog channels
and signal processing, but it can be removed in digital channels and can be
controlled (roundoff errors) in digital signal processing.
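[A minimal Python sketch of why digital channels can shed noise: so
long as the noise stays under half the gap between levels,
re-thresholding recovers the transmitted levels exactly. The levels
and the noise bound are invented:]

  import random

  def regenerate(volts):          # decision threshold at 2.5 V
      return 5.0 if volts >= 2.5 else 0.0

  sent = [0.0, 5.0, 5.0, 0.0]                                # clean levels
  received = [v + random.uniform(-1.0, 1.0) for v in sent]   # add noise
  restored = [regenerate(v) for v in received]
  assert restored == sent         # exact recovery despite the noise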
>> ... Losses of information in processing analog signals tend to
>> be worse, and for an analog transformation to be exactly invertible, it
>> *must* preserve all the information in its input.
Including the exclusion of noise. Once noise is introduced, the signal cannot
be exactly inverted.
--
Ken Turkowski @ Apple Computer, Inc., Cupertino, CA
UUCP: {sun,nsc}!apple!turk
CSNET: turk@Apple.CSNET
ARPA: turk%Apple@csnet-relay.ARPA
------------------------------
Date: Wed 5 Nov 86 21:03:42-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Information in Signals
From: nsc!amdahl!apple!turk@hplabs.hp.com (Ken "Turk" Turkowski)
Message-Id: <267@apple.UUCP>
> *Increasing the sampling rate beyond this
> "Nyquist rate" cannot result in higher fidelity*.

>> ... Losses of information in processing analog signals tend to
>> be worse, and for an analog transformation to be exactly invertible, it
>> *must* preserve all the information in its input.

> Including the exclusion of noise. Once noise is introduced, the signal
> cannot be exactly inverted.
To pick a couple of nits:
Sampling at the Nyquist rate preserves information, but only if the proper
interpolation function is used to reconstruct the continuous signal. Often
this function is nonphysical in the sense that it extends infinitely far
in each temporal direction and contains negative coefficients that are
difficult to implement in some types of analog hardware (e.g., incoherent
optics). One of the reasons for going to digital processing is that
[approximate] sinc or Bessel functions are easier to deal with in the digital
domain. If a sampled signal is simply run through the handiest speaker
system or other nonoptimal reconstruction, sampling at a higher rate
may indeed increase fidelity.
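[A minimal NumPy sketch of the ideal (sinc) interpolator: the kernel
extends in both time directions and takes negative values. The tone
and rate are invented, and the truncated sum is only approximate:]

  import numpy as np

  rate = 8.0                                 # samples per second
  n = np.arange(-32, 33)                     # sample indices
  samples = np.cos(2 * np.pi * 3.0 * n / rate)   # 3 Hz tone, below Nyquist

  def reconstruct(t):
      """Whittaker-Shannon: samples weighted by sinc kernels."""
      return np.sum(samples * np.sinc(rate * t - n))

  # Close to the true continuous value between sample instants:
  print(reconstruct(0.1), np.cos(2 * np.pi * 3.0 * 0.1))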
The other two quotes are talking about two different things. No transformation
(analog or digital) is invertible if it loses information, but adding noise
to a signal may or may not degrade its information content. An analog signal
can be just as redundant as any coded digital signal -- in fact, most digital
"signals" are actually continuous encodings of discrete sequences. To talk
about invertibility one must define the information in a signal -- which,
unfortunately, depends on the observer's knowledge as much as it does on the
degrees of freedom or joint probability distribution of the signal elements.
Even "degree of freedom" and "probability" are not well defined, so that
our theories are ultimately grounded in faith and custom. Fortunately the
real world is kind: our theories tend to be useful and even robust despite
the lack of firm foundations. Philosophers may demonstrate that engineers
are building houses of cards on shifting sands, but the engineers will build
as long as their houses continue to stand.
-- Ken Laws
------------------------------
Date: Wed, 5 Nov 1986 16:00 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest V4 #248
With all due respect, I wonder if the digital-analog discussion could
be tabled soon. I myself do not consider it useful to catalog the
dispositions of many different persons' use of a word; in any case the
thing has simply gone past the bounds of 1200 baud communication.
Please. On to some substance.
------------------------------
End of AIList Digest
********************