Activation Functions
The main purpose of an activation function is to convert the input signal of a node in an ANN to an output signal by applying a transformation. That output signal is then used as an input to the next layer in the stack.
Types of Activation Functions
| Function | Description |
| --- | --- |
| Linear Activation Function | Inefficient; mainly used in the output layer for regression |
| Sigmoid Function | Good for the output layer in binary classification problems |
| Tanh Function | Usually better than sigmoid for hidden layers |
| ReLU Function | Default choice for hidden layers |
| Leaky ReLU Function | Slightly better than ReLU in some cases, but ReLU is more popular |
Linear Activation Function (Identity Function)
Formula: $a = g(z) = z$
Graph:
It can be used in the output layer of a regression problem
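A minimal numpy sketch of the identity activation in a regression output layer; the weights, bias, and input below are made-up placeholders.

```python
import numpy as np

def linear(z):
    """Identity activation: a = g(z) = z."""
    return z

# Hypothetical output layer of a regression network (placeholder values)
W = np.array([[0.5, -1.2, 2.0]])   # 1 output unit, 3 inputs
b = np.array([0.1])
x = np.array([1.0, 0.3, -0.7])

z = W @ x + b
y_hat = linear(z)                  # unbounded real-valued prediction
print(y_hat)
```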
Sigmoid Function
Formula: $a = \sigma(z) = \frac{1}{1 + e^{-z}}$
Graph:
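A small numpy sketch of sigmoid and its derivative, $\sigma'(z) = \sigma(z)(1 - \sigma(z))$, which is the slope referred to later when discussing gradient descent:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: squashes z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    """Derivative of sigmoid: sigma(z) * (1 - sigma(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

z = np.array([-5.0, 0.0, 5.0])
print(sigmoid(z))             # ~[0.007, 0.5, 0.993]
print(sigmoid_derivative(z))  # ~[0.007, 0.25, 0.007]
```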
Hyperbolic Tangent Function (Tanh)
Almost always strictly better than the sigmoid function for hidden layers
Formula: $a = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}$
A shifted and rescaled version of the sigmoid function ($\tanh(z) = 2\sigma(2z) - 1$), so its outputs lie in $(-1, 1)$ and are centered around 0
Graph:
Activation functions can be different for different layers; for example, we may use tanh for a hidden layer and sigmoid for the output layer, as in the sketch below.
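A minimal forward-pass sketch of that idea, with tanh in the hidden layer and sigmoid in the output layer; the network size, weights, and input are made-up placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder parameters for a tiny 2-layer network: 3 inputs -> 4 hidden -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros((4, 1))
W2, b2 = rng.standard_normal((1, 4)), np.zeros((1, 1))

x = rng.standard_normal((3, 1))    # placeholder input column vector

a1 = np.tanh(W1 @ x + b1)          # hidden layer uses tanh
a2 = sigmoid(W2 @ a1 + b2)         # output layer uses sigmoid (probability of class 1)
print(a2)
```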
Downsides of Tanh and Sigmoid
If z is very large or very small, then the derivative (or slope) of these functions becomes very small (ends up close to 0), which can slow down gradient descent
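A quick numeric illustration: even for moderately large |z|, the slopes of sigmoid and tanh are already close to 0.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [0.0, 2.0, 5.0, 10.0]:
    sig_slope = sigmoid(z) * (1 - sigmoid(z))   # derivative of sigmoid
    tanh_slope = 1 - np.tanh(z) ** 2            # derivative of tanh
    print(f"z={z:5.1f}  sigmoid'={sig_slope:.6f}  tanh'={tanh_slope:.6f}")
```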
Rectified Linear Unit (ReLU)
Another very popular choice
Formula: $a = \max(0, z)$
Graph:
So the derivative is 1 when z is positive and 0 when z is negative
Disadvantage: the derivative is 0 when z is negative
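A numpy sketch of ReLU and its slope; the zero derivative on the negative side is the disadvantage mentioned above, though in practice enough hidden units usually have z > 0 for learning to proceed.

```python
import numpy as np

def relu(z):
    """ReLU activation: a = max(0, z)."""
    return np.maximum(0, z)

def relu_derivative(z):
    """Slope of ReLU: 1 for z > 0, 0 for z < 0 (conventionally 0 at z = 0)."""
    return (z > 0).astype(float)

z = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(relu(z))             # [0.  0.  0.  0.5 3. ]
print(relu_derivative(z))  # [0. 0. 0. 1. 1.]
```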
Leaky ReLU
Formula: $a = \max(0.01z, z)$
Graph:
Or, written piecewise: $a = z$ if $z \ge 0$, and $a = 0.01z$ if $z < 0$
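A matching numpy sketch of leaky ReLU; the small 0.01 slope keeps the derivative from being exactly 0 for negative z.

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    """Leaky ReLU: a = max(slope * z, z), with a small positive slope for z < 0."""
    return np.maximum(slope * z, z)

z = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(leaky_relu(z))   # [-0.03  -0.005  0.     0.5    3.   ]
```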
Advantages of ReLUs
For a large part of the space of z, the derivative of the activation function is far from 0
So the NN will learn much faster than when using tanh or sigmoid
Why Do NNs Need Non-linear Activation Functions?
Well, if we use a linear activation function, then the NN just outputs a linear function of the input, so no matter how many layers our NN has, all it is doing is computing a linear function
Remember that the composition of two linear functions is itself a linear function
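A short numpy check of that fact: two stacked linear layers give exactly the same output as one linear layer whose weights are the product of the two weight matrices (all matrices here are random placeholders).

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal((4, 1))
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 1))
x = rng.standard_normal((3, 1))

# Two "layers" with linear (identity) activations
two_layer_out = W2 @ (W1 @ x + b1) + b2

# Equivalent single linear layer: W = W2 W1, b = W2 b1 + b2
single_layer_out = (W2 @ W1) @ x + (W2 @ b1 + b2)

print(np.allclose(two_layer_out, single_layer_out))  # True
```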
Rules for Choosing an Activation Function
If the output is 0 or 1 (binary classification) → sigmoid is a good choice for the output layer
For all other units → ReLU
We can say that ReLU is the default choice of activation function
Note:
If you are not sure which of these functions works best, try them all, evaluate each on a validation set, see which one works better, and go with that
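A minimal sketch of that idea, assuming TensorFlow/Keras is available; the dataset below is random placeholder data, so in practice you would substitute your own training and validation splits.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 200 training / 50 validation examples with 10 features, binary labels
rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((200, 10)), rng.integers(0, 2, 200).astype("float32")
X_val, y_val = rng.standard_normal((50, 10)), rng.integers(0, 2, 50).astype("float32")

results = {}
for act in ["relu", "tanh", "sigmoid"]:                   # candidate hidden-layer activations
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(16, activation=act),        # hidden layer with the candidate activation
        tf.keras.layers.Dense(1, activation="sigmoid"),   # sigmoid output for binary classification
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                        epochs=10, verbose=0)
    results[act] = history.history["val_accuracy"][-1]

print(results)  # pick the activation with the best validation accuracy
```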