**Sigmoid functions in neural networks**

Sigmoid functions are often used in neural networks to introduce nonlinearity into the model and/or to clamp signals to a specified range. A popular neural net element computes a linear combination of its input signals and applies a bounded sigmoid function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron.

One reason for its popularity in neural networks is that the logistic sigmoid, sig(t) = 1 / (1 + e^(−t)), satisfies the differential equation

**y' = y(1 − y)**

The right-hand side is a low-order polynomial whose factors, y and (1 − y), are both simple to compute. Given y = sig(t) at a particular t, the derivative of the sigmoid at that t is obtained just by multiplying the two factors together; there is no need to evaluate the exponential again. This relationship simplifies the implementation of artificial neural networks, since training algorithms such as backpropagation need the activation function's derivative at every neuron.
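The trick can be sketched as follows, assuming `sigmoid` is the logistic function above; the derivative computed from the stored output y is checked here against a finite-difference estimate.

```python
import math

def sigmoid(t):
    # Logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-t))

def sigmoid_derivative_from_output(y):
    # Given y = sigmoid(t), the derivative at t is y * (1 - y).
    # Only a multiply and a subtract; the exponential is not re-evaluated.
    return y * (1.0 - y)

t = 0.0
y = sigmoid(t)                            # 0.5
dy = sigmoid_derivative_from_output(y)    # 0.5 * (1 - 0.5) = 0.25

# Sanity check via a central finite difference:
h = 1e-6
dy_numeric = (sigmoid(t + h) - sigmoid(t - h)) / (2.0 * h)
```

In a network, each neuron's forward pass already produces y, so the backward pass gets the derivative essentially for free.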
