AIList Digest             Monday, 8 Aug 1988       Volume 8 : Issue 41 

Queries:

Sigmoid transfer function
(3 responses)
refs. for stochastic relaxation
(1 response)

----------------------------------------------------------------------

Date: 4 Aug 88 20:28:13 GMT
From: amdahl!pacbell!hoptoad!dasys1!cucard!aecom!krishna@ames.arpa
(Krishna Ambati)
Subject: Sigmoid transfer function


I am looking for a "black box" circuit whose transfer function is:

Output voltage = 0.5 * (1 + tanh(Input voltage / "Gain"))
               = 1 / (1 + exp(-2 * Input voltage / "Gain"))

When plotted, this function looks like an elongated S.

When IV = - infinity, OV = 0
When IV = + infinity, OV = 1
When IV = 0 , OV = 0.5
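
(To check the algebra: the two forms above agree because
tanh(x) = (1 - exp(-2x)) / (1 + exp(-2x)). A quick numerical sketch,
in Python purely for convenience; "gain" is just the poster's scale
parameter:)

    import math

    def sigmoid_tanh(v, gain=1.0):
        # First form above: 0.5 * (1 + tanh(v / gain))
        return 0.5 * (1.0 + math.tanh(v / gain))

    def sigmoid_logistic(v, gain=1.0):
        # Second form above: a logistic function with slope 2/gain at v = 0
        return 1.0 / (1.0 + math.exp(-2.0 * v / gain))

    # The two forms agree to machine precision at arbitrary points.
    for v in (-5.0, -1.0, 0.0, 0.5, 3.0):
        assert abs(sigmoid_tanh(v) - sigmoid_logistic(v)) < 1e-12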

By the way, this question arose in connection with a neural network
problem.

Thanks.

Krishna Ambati
krishna@aecom.uucp

------------------------------

Date: 6 Aug 88 16:58:05 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Sigmoid transfer function


Recognize that the transfer function in a neural network threshold unit
doesn't really have to be a sigmoid function. It just has to look roughly
like one. The behavior of the net is not all that sensitive to the
exact form of that function. It has to be continuous and monotonic,
reasonably smooth, and rise rapidly in the middle of the working range.
The hyperbolic-tangent form of the transfer function is really just a notational
convenience.

It would be a worthwhile exercise to come up with some other forms
of transfer function with roughly the same graph, but better matched to
hardware implementation. How do real neurons do it?
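
(To make that concrete, here is a small Python sketch, purely
illustrative, comparing the logistic sigmoid with two cheaper
lookalikes -- a clamped linear ramp and a rational function -- all of
which satisfy the conditions above:)

    import math

    def logistic(x):
        # The "textbook" sigmoid: smooth, monotonic, saturates at 0 and 1.
        return 1.0 / (1.0 + math.exp(-x))

    def hard_sigmoid(x):
        # Piecewise-linear approximation: just a clamp, trivial in
        # hardware, yet roughly the same S-shaped graph.
        return min(1.0, max(0.0, 0.25 * x + 0.5))

    def rational_sigmoid(x):
        # x / (1 + |x|), rescaled to (0, 1); no exponential required.
        return 0.5 * (1.0 + x / (1.0 + abs(x)))

    # All three are continuous, monotonic, cross 0.5 at x = 0, and
    # approach 0 and 1 at the extremes.
    for g in (logistic, hard_sigmoid, rational_sigmoid):
        assert g(0.0) == 0.5 and g(10.0) > 0.9 and g(-10.0) < 0.1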

John Nagle

------------------------------

Date: 7 Aug 88 00:04:26 GMT
From: ankleand@athena.mit.edu (Andy Karanicolas)
Subject: Re: Sigmoid transfer function

In article <1945@aecom.YU.EDU> krishna@aecom.YU.EDU (Krishna Ambati) writes:
>
>I am looking for a "black box" circuit whose transfer function is:
>
>Output voltage = 0.5 * (1 + tanh(Input voltage / "Gain"))
>               = 1 / (1 + exp(-2 * Input voltage / "Gain"))
>
>When plotted, this function looks like an elongated S.
>
>When IV = - infinity, OV = 0
>When IV = + infinity, OV = 1
>When IV = 0 , OV = 0.5
>
>By the way, this question arose in connection with a neural network
>problem.
>
>Thanks.
>
>Krishna Ambati
>krishna@aecom.uucp

The function you are looking for is not too difficult to synthesize from
a basic analog circuit building block; namely, a differential amplifier.
The accuracy of the circuit will depend on the matching of components, among
other things. The differential amplifier is discussed in many textbooks
concerned with analog circuits (analog integrated circuits especially).

You can try:

Electronic Principles, Gray and Searle, Wiley, 1969.
Bipolar and MOS Analog Integrated Circuit Design, Grebene, Wiley, 1984.
Analysis and Design of Analog Integrated Circuits, Gray and Meyer, Wiley, 1984.

Unfortunately, drawing circuits in a text editor is a pain; I'll
attempt it if these or other books are not available or helpful.
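
(In lieu of ASCII schematics, here is the standard large-signal result
for a bipolar differential pair, as derived in the Gray and Meyer text
above, sketched numerically in Python; the tail current and thermal
voltage values are illustrative assumptions:)

    import math

    VT = 0.026    # thermal voltage at room temperature (~26 mV)
    IEE = 1e-3    # tail current; 1 mA is an illustrative choice

    def collector_current(vd):
        # Large-signal model of a bipolar differential pair: one
        # collector carries Ic1 = IEE / (1 + exp(-vd / VT)), which is
        # exactly the requested function with "Gain" = 2 * VT.
        return IEE / (1.0 + math.exp(-vd / VT))

    # Normalizing by the tail current recovers 0.5 * (1 + tanh(vd / (2*VT))).
    for vd in (-0.1, -0.01, 0.0, 0.01, 0.1):
        lhs = collector_current(vd) / IEE
        rhs = 0.5 * (1.0 + math.tanh(vd / (2.0 * VT)))
        assert abs(lhs - rhs) < 1e-12

(A load resistor then converts the collector current into an output
voltage, and emitter degeneration widens the linear region, i.e. raises
the effective "Gain".)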


Andy Karanicolas
Microsystems Technology Laboratory

ankleand@caf.mit.edu

------------------------------

Date: 7 Aug 88 19:55:49 GMT
From: icsia!munro@ucbvax.berkeley.edu (Paul Munro)
Subject: Re: Sigmoid transfer function

In article <17615@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
[JN]Recognize that the transfer function in a neural network threshold unit
[JN]doesn't really have to be a sigmoid function. It just has to look roughly
[JN]like one. The behavior of the net is not all that sensitive to the
[JN]exact form of that function. It has to be continuous and monotonic,
[JN]reasonably smooth, and rise rapidly in the middle of the working range.
[JN]The hyperbolic-tangent form of the transfer function is really just a notational
[JN]convenience.
[JN]
[JN] It would be a worthwhile exercise to come up with some other forms
[JN]of transfer function with roughly the same graph, but better matched to
[JN]hardware implementation. How do real neurons do it?
[JN]
[JN] John Nagle


Try this one: f(x) = x / (1 + |x|)

It is continuous and differentiable:

f'(x) = 1 / (1 + |x|)**2 = (1 - |f(x)|)**2
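
(A quick finite-difference check of that derivative, in Python for
convenience. Note that f maps onto (-1, 1), so the rescaled
0.5 * (1 + f(x)) matches the 0-to-1 range requested above:)

    def f(x):
        return x / (1.0 + abs(x))

    def fprime(x):
        # Munro's closed form: 1 / (1 + |x|)**2 == (1 - |f(x)|)**2
        return 1.0 / (1.0 + abs(x)) ** 2

    # Central-difference check at a few points, including x = 0,
    # where the left and right derivatives both equal 1.
    h = 1e-6
    for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
        numeric = (f(x + h) - f(x - h)) / (2.0 * h)
        assert abs(numeric - fprime(x)) < 1e-5
        assert abs(fprime(x) - (1.0 - abs(f(x))) ** 2) < 1e-12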

- Paul Munro

------------------------------

Date: 5 Aug 88 18:38:42 GMT
From: tness7!tness1!nuchat!moray!uhnix1!sun2.cs.uh.edu!rmdubash@bellcore.bellcore.com
Subject: refs. for stochastic relaxation

I am currently working on stochastic relaxation and relaxation algorithms for
fine-grained parallel architectures. In particular, I am studying their
implementation on neural and connectionist models, with emphasis on the
inherent fault-tolerance properties of such implementations.

I would be grateful if any of you could provide me with pointers, references,
etc. on this or related topics.

Thanks.




_______________________________________________________________________________
Rumi Dubash, Computer Science, Univ. of Houston,
Internet : rmdubash@sun2.cs.uh.edu
U.S.Mail : R.M.Dubash, Computer Science Dept., Univ. of Houston,

------------------------------

Date: 6 Aug 88 18:02:44 GMT
From: brand.usc.edu!manj@oberon.usc.edu (B. S. Manjunath)
Subject: Re: refs. for stochastic relaxation

In article <824@uhnix1.uh.edu> rmdubash@sun2.cs.uh.edu () writes:
>I am currently working on stochastic relaxation and relaxation algorithms for
>fine-grained parallel architectures. In particular, I am studying their
>implementation on neural and connectionist models, with emphasis on the
>inherent fault-tolerance properties of such implementations.
>
>I would be grateful if any of you could provide me with pointers, references,
>etc. on this or related topics.

>Rumi Dubash, Computer Science, Univ. of Houston,

Geman and Geman (1984) is an excellent paper to start with. It also
contains a lot of references. The paper mainly deals with Markov random
fields and applications to image processing.

S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and
the Bayesian restoration of images," IEEE Transactions on Pattern Analysis
and Machine Intelligence, PAMI-6, Nov. 1984, pp. 721-741.

Another reference that I feel might be useful is J. L. Marroquin,
"Probabilistic Solution of Inverse Problems," Ph.D. thesis, M.I.T., 1985.
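
(To give a feel for what Geman and Geman's stochastic relaxation does,
here is a minimal toy sketch in Python -- a Gibbs sampler with a slowly
decreasing temperature on a binary Markov random field. This is my own
illustration rather than the paper's exact model; beta, lam, and the
annealing schedule are illustrative assumptions:)

    import math, random

    # Toy stochastic relaxation: Gibbs sampling with annealing on a binary
    # (+1/-1) Markov random field.  x is the current label grid, y the
    # noisy observed grid; beta weights neighbor agreement, lam weights
    # agreement with the data.

    def gibbs_sweep(x, y, n, beta, lam, T):
        for i in range(n):
            for j in range(n):
                nbrs = sum(x[a][b]
                           for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                           if 0 <= a < n and 0 <= b < n)
                # Energy difference between setting x[i][j] = +1 vs -1:
                dE = -2.0 * (beta * nbrs + lam * y[i][j])
                # Resample the site from its conditional Gibbs distribution.
                p_plus = 1.0 / (1.0 + math.exp(dE / T))
                x[i][j] = 1 if random.random() < p_plus else -1

    def relax(y, beta=1.0, lam=2.0, sweeps=50):
        n = len(y)
        x = [row[:] for row in y]            # start from the noisy data
        for k in range(sweeps):
            T = 3.0 / math.log(2.0 + k)      # slowly decreasing temperature
            gibbs_sweep(x, y, n, beta, lam, T)
        return x

(Each site is resampled from its conditional distribution given its
neighbors and the data; as the temperature falls, the field settles
toward a low-energy, data-consistent configuration, which is the
paper's route to MAP restoration.)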

bs manjunath.

------------------------------

End of AIList Digest
********************
