## Overview#

The Activation Function of an Artificial Neuron defines the output of that neuron given an input or set of inputs. An Activation Function is essentially a decision-making function that determines the presence of a particular feature: an output of zero means the Artificial Neuron says the feature is **not** present, and an output of one means it says the feature is present.

A standard computer chip circuit can be seen as a digital network of Activation Functions that can be "ON" (1) or "OFF" (0), depending on input.

This is similar to the behavior of the linear perceptron in neural networks.
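As a quick illustration, this ON/OFF behavior can be written as a binary step function (a minimal sketch; the helper name `binary_step` and the threshold at 0 are illustrative assumptions):

```python
import numpy as np

def binary_step(z):
    # Outputs 1 where the input reaches the threshold (0 here), else 0
    return np.where(z >= 0, 1, 0)

print(binary_step(np.array([-1.0, 0.0, 2.5])))  # [0 1 1]
```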

### Nonlinear Activation Functions#

Nonlinear Activation Functions allow such networks to compute nontrivial problems using only a small number of nodes. In Artificial Neural networks this function may also be referred to as the transfer function. In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell. In its simplest form, this function is binary: either the neuron is firing or it is not. The function looks like

\phi(v_{i}) = U(v_{i})

where U is the Heaviside step function.

A line of positive slope may be used to reflect the increase in firing rate that occurs as input current increases. Such a function would be of the form \phi(v_{i}) = \mu v_{i}, where \mu is the slope. This activation function is linear, and therefore has the same problems as the binary function. In addition, networks constructed using this model have unstable convergence because neuron inputs along favored paths tend to increase without bound, as this function is not normalizable.

All problems mentioned above can be handled by using a normalizable sigmoid activation function. One realistic model stays at zero until input current is received, at which point the firing frequency increases quickly at first, but gradually approaches an asymptote at 100% firing rate. Mathematically, this looks like

\phi(v_{i}) = U(v_{i})\tanh(v_{i})

The final model, then, that is used in multilayer perceptrons is a sigmoidal activation function in the form of a hyperbolic tangent. Two forms of this function are commonly used:

\phi(v_{i}) = \tanh(v_{i})

\phi(v_{i}) = (1 + \exp(-v_{i}))^{-1}
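The two forms are closely related: the hyperbolic tangent is a shifted and rescaled logistic function, \tanh(v) = 2(1 + \exp(-2v))^{-1} - 1. A quick numeric check of this relationship (a sketch using NumPy):

```python
import numpy as np

v = np.linspace(-3.0, 3.0, 13)

# Logistic form: phi(v) = (1 + exp(-v))^{-1}, bounded in (0, 1)
logistic = (1.0 + np.exp(-v)) ** -1
assert np.all((logistic > 0) & (logistic < 1))

# tanh is a shifted, rescaled logistic: tanh(v) = 2 * logistic(2v) - 1
assert np.allclose(np.tanh(v), 2.0 * (1.0 + np.exp(-2.0 * v)) ** -1 - 1.0)
```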

Typically, Forward propagation (computing the weighted sum of the inputs plus a Bias) is performed before the Activation Function is applied.
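A sketch of this two-step pattern, assuming a single layer with weights `w`, inputs `X`, bias `b`, and some activation `g` (all names and shapes are illustrative):

```python
import numpy as np

g = np.tanh                # any Activation Function
w = np.random.randn(3, 1)  # weights: (n_features, 1)
X = np.random.randn(3, 5)  # inputs: (n_features, n_examples)
b = 0.1                    # bias

Z = np.dot(w.T, X) + b     # forward propagation: weighted sum plus bias
A = g(Z)                   # the Activation Function is applied afterwards
```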

### Artificial Neural networks#

"G" is often used to represent the Activation Function.Activation Function are applied to the Hidden layers and to the Output layer.

#### Sigmoid Activation Function in Python#

Here we show the common Sigmoid Activation Function applied to a layer's linear output: `A = sigmoid(np.dot(w.T,X)+b)`

Use it **only** in a binary Output layer.
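A runnable version of the line above, assuming `sigmoid` is defined with NumPy and the shapes of `w`, `X`, and `b` follow the forward-propagation sketch (all illustrative):

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

w = np.random.randn(3, 1)
X = np.random.randn(3, 5)
b = 0.0

# Each entry of A lies in (0, 1) and can be read as a probability,
# which is why sigmoid suits a binary Output layer
A = sigmoid(np.dot(w.T, X) + b)
print(A.shape)  # (1, 5)
```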

#### Hyperbolic tangent#

\tanh x = -i\tan(ix), or equivalently

a = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}
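A quick numeric check of the exponential definition against NumPy's built-in `np.tanh` (a sketch):

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 9)

# tanh from its exponential definition
a = (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

assert np.allclose(a, np.tanh(z))
assert np.all((a > -1) & (a < 1))  # outputs lie in (-1, 1)
```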

#### Rectified Linear Unit (ReLU)#

Instead of the Sigmoid function, most recent Deep Learning networks use Rectified Linear Units (ReLUs) for the Hidden layers. A Rectified Linear Unit has output:

- 0 if the input is less than 0
- the raw input otherwise

Rectified Linear Unit Activation Functions are the simplest non-linear Activation Functions you can use. When the input is positive, the derivative is just 1, so there is none of the squeezing effect on backpropagated errors that you get with the Sigmoid function.
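A minimal NumPy sketch of ReLU and its derivative (the function names are illustrative; the derivative at exactly 0 is conventionally set to 0 here):

```python
import numpy as np

def relu(z):
    # max(0, z) element-wise: 0 for negative inputs, the raw input otherwise
    return np.maximum(0.0, z)

def relu_derivative(z):
    # 1 where the input is positive, 0 elsewhere: no squeezing of
    # backpropagated errors for positive inputs
    return (z > 0).astype(z.dtype)

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z))             # [0.  0.  0.  1.5 3. ]
print(relu_derivative(z))  # [0. 0. 0. 1. 1.]
```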

Research has shown that Rectified Linear Units result in much faster training for large networks. Most frameworks, like TensorFlow and TFLearn, make it simple to use Rectified Linear Units on the Hidden layers, so typically you won't need to implement them yourself.
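For example, with TensorFlow's Keras API you can select the activation per layer by name (a sketch; the layer sizes are illustrative assumptions):

```python
import tensorflow as tf

# ReLU on the Hidden layers, sigmoid on a binary Output layer
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```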

### Category#

Artificial Intelligence

### More Information#

There might be more information for this subject on one of the following:

- Artificial Neural network
- Artificial Neuron
- Bias
- Forward propagation
- Machine Learning Taxonomy
- Rectified Linear Unit
- Sigmoid function
