Activation Functions in Deep Learning with Python Code
In a neural network, the activation function is responsible for transforming the summed weighted input of a node into that node's activation, i.e. its output for that input, as sketched in the short example after the list below.
Activation functions:
- Step
- Sigmoid
- tanh
- ReLU
- Leaky ReLU
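As a minimal sketch of that idea (the inputs, weights, and bias below are made-up values, and the sigmoid used here is defined later in this post), a single node first computes its summed weighted input and then passes it through an activation function:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# made-up inputs, weights, and bias for one node
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.7, -0.2])
b = 0.1

z = np.dot(w, x) + b   # summed weighted input
a = sigmoid(z)         # activation (output) of the node
print(z, a)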
1. Step Activation Function
The step function gives only 0 or 1 as output for any given input: if the input value is less than 0 it returns 0, otherwise it returns 1.
Python Code:
def step(x):
    return 0 if x < 0 else 1
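A quick check of the behaviour described above (the test values are arbitrary):

print(step(-2))  # 0
print(step(3))   # 1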
2. Sigmoid Activation Function
It is one of the most popular and widely used activation functions for the output layer of a neural network.
The sigmoid function gives only values between 0 and 1 as output, using the formula sigmoid(x) = 1 / (1 + e^(-x)).
Python Code:
import numpy as np
def sigmoid(x):
    return 1/(1+np.exp(-x))
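As a quick sanity check (the inputs are arbitrary), sigmoid(0) is exactly 0.5, and large positive or negative inputs saturate towards 1 and 0:

print(sigmoid(0))    # 0.5
print(sigmoid(10))   # ~0.99995
print(sigmoid(-10))  # ~0.0000454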
3. tanh Activation Function
The tanh function gives only values between -1 and 1 as output, using the formula tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)).
Python Code:
import numpy as np
def tanh(x):
    return (np.exp(x)-np.exp(-x))/(np.exp(x)+np.exp(-x))
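NumPy also provides a built-in np.tanh, so the hand-written version above can be checked against it (the test value is arbitrary):

print(tanh(0.5), np.tanh(0.5))  # both ~0.4621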
4. ReLU Activation Function
It is one of the most widely used activation functions in the hidden layers of a neural network.
The ReLU activation function simply returns the maximum of 0 and the given input value.
Python Code:
def reLU(x):
    return max(0,x)
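The built-in max(0, x) works for a single number but not element-wise on a NumPy array; a vectorized sketch (assuming NumPy inputs, with relu_vec as a made-up name) could use np.maximum instead:

import numpy as np

def relu_vec(x):
    # element-wise maximum of 0 and x, works on scalars and arrays
    return np.maximum(0, x)

print(relu_vec(np.array([-2.0, 0.0, 3.5])))  # [0.  0.  3.5]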
5. Leaky ReLU Activation Function
Leaky ReLU is one attempt to fix the “dying ReLU” problem. Instead of being zero when x < 0, a leaky ReLU has a small slope (of 0.01 or so) in the negative region.
Python Code:
def leaky(x):
    return x*0.01 if x < 0 else x
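As with ReLU, a vectorized sketch (assuming NumPy arrays and the same 0.01 slope, with leaky_vec as a made-up name) could use np.where:

import numpy as np

def leaky_vec(x, slope=0.01):
    # keep positive values, scale negative values by the small slope
    return np.where(x < 0, slope * x, x)

print(leaky_vec(np.array([-2.0, 0.0, 3.5])))  # [-0.02  0.    3.5 ]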
The following graph shows the difference between the most commonly used activation functions in artificial neural networks.
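Since the graph is an image, here is a small sketch of how a similar comparison plot could be generated with matplotlib (the input range of -5 to 5 is an arbitrary choice):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 500)

plt.plot(x, np.where(x < 0, 0, 1), label="step")
plt.plot(x, 1 / (1 + np.exp(-x)), label="sigmoid")
plt.plot(x, np.tanh(x), label="tanh")
plt.plot(x, np.maximum(0, x), label="ReLU")
plt.plot(x, np.where(x < 0, 0.01 * x, x), label="leaky ReLU")
plt.legend()
plt.title("Common activation functions")
plt.show()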