Python Code Snippets
General Code Snippets in ML
Sigmoid Function
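A minimal NumPy sketch (the function name and vectorized form below are our assumptions, not necessarily the original snippet):

```python
import numpy as np

def sigmoid(x):
    """Compute the sigmoid of x element-wise: 1 / (1 + e^(-x))."""
    return 1 / (1 + np.exp(-x))
```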
Sigmoid Gradient
A function that computes the gradient of the sigmoid with respect to its input, used when optimizing loss functions with backpropagation.
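One way to write it, assuming the gradient is evaluated at the pre-activation input x:

```python
import numpy as np

def sigmoid_derivative(x):
    """Gradient of the sigmoid: s * (1 - s), where s = sigmoid(x)."""
    s = 1 / (1 + np.exp(-x))
    return s * (1 - s)
```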
Reshaping Arrays (or images)
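A common example is flattening an image into a column vector; the (height, width, channels) layout assumed below is ours:

```python
import numpy as np

def image2vector(image):
    """Reshape a (height, width, channels) image into a column vector."""
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
```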
Normalizing Rows
Dividing each row vector of x by its norm.
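A sketch using the row-wise L2 norm (the axis and broadcasting choices below are assumptions):

```python
import numpy as np

def normalize_rows(x):
    """Divide each row of x by its L2 norm so every row has unit length."""
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    return x / x_norm
```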
Softmax Function
A normalizing function that maps a vector of scores to a probability distribution; used when the algorithm needs to classify two or more classes.
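A row-wise sketch, assuming x holds one example per row:

```python
import numpy as np

def softmax(x):
    """Row-wise softmax: exponentiate, then normalize each row to sum to 1."""
    x_exp = np.exp(x)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    return x_exp / x_sum
```

In practice, subtracting each row's maximum before exponentiating, as in np.exp(x - np.max(x, axis=1, keepdims=True)), avoids overflow for large scores.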
L1 Loss Function
The loss is used to evaluate the performance of the model: the bigger the loss, the more the predictions (ŷ) differ from the true values (y). L1 sums the absolute differences between ŷ and y. In deep learning, we use optimization algorithms like Gradient Descent to train the model and minimize the cost.
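A sketch, assuming yhat and y are NumPy arrays of the same shape:

```python
import numpy as np

def l1_loss(yhat, y):
    """L1 loss: the sum of absolute differences between predictions and labels."""
    return np.sum(np.abs(y - yhat))
```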
L2 Loss Function
Like L1, the L2 loss evaluates the performance of the model, but it sums the squared differences between the predictions (ŷ) and the true values (y); we likewise minimize it with optimization algorithms like Gradient Descent.
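The same sketch with squared differences:

```python
import numpy as np

def l2_loss(yhat, y):
    """L2 loss: the sum of squared differences between predictions and labels."""
    return np.sum((y - yhat) ** 2)
```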
Propagation Function
Doing the "forward" and "backward" propagation steps for learning the parameters.
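A sketch for the logistic-regression case; the shapes and the cross-entropy cost below are assumptions:

```python
import numpy as np

def propagate(w, b, X, Y):
    """One forward/backward pass for logistic regression.

    w: weights (n, 1); b: bias scalar; X: data (n, m); Y: labels (1, m).
    """
    m = X.shape[1]
    # Forward pass: sigmoid activations and cross-entropy cost
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    # Backward pass: gradients of the cost w.r.t. w and b
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m
    return {"dw": dw, "db": db}, cost
```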
Gradient Descent (Optimization)
The goal is to learn ω and b by minimizing the cost function J. For a parameter ω, the update rule is ω = ω - α·dω, where α is the learning rate and dω = ∂J/∂ω.
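A sketch of the optimization loop, reusing the propagate() sketch above:

```python
def optimize(w, b, X, Y, num_iterations, learning_rate):
    """Repeat the update w := w - alpha * dw (and likewise for b)."""
    for _ in range(num_iterations):
        grads, _ = propagate(w, b, X, Y)
        w = w - learning_rate * grads["dw"]
        b = b - learning_rate * grads["db"]
    return w, b
```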
Basic Code Snippets for a Simple NN
Functions of a 2-layer NN
Input layer, one hidden layer, and output layer
Parameter Initialization
Initializing Ws and bs: Ws must be initialized randomly in order to break symmetry, while we can use zero initialization for bs.
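A sketch for the 2-layer case; the layer-size arguments and the 0.01 scaling are assumptions:

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    """Small random Ws to break symmetry; zero bs.

    n_x: input size, n_h: hidden layer size, n_y: output size.
    """
    return {
        "W1": np.random.randn(n_h, n_x) * 0.01,
        "b1": np.zeros((n_h, 1)),
        "W2": np.random.randn(n_y, n_h) * 0.01,
        "b2": np.zeros((n_y, 1)),
    }
```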
Forward Propagation
Each layer accepts the input data, processes it according to its activation function, and passes the result to the next layer.
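A sketch assuming a tanh hidden layer and a sigmoid output layer:

```python
import numpy as np

def forward_propagation(X, parameters):
    """Forward pass through the 2-layer network; caches intermediate values."""
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)                # hidden layer activation
    Z2 = np.dot(W2, A1) + b2
    A2 = 1 / (1 + np.exp(-Z2))      # output layer activation (sigmoid)
    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache
```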
Cost Function
The average of the losses over the entire training set, computed from the output layer's activations (A2 in our example).
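A cross-entropy sketch, assuming binary labels Y of shape (1, m):

```python
import numpy as np

def compute_cost(A2, Y):
    """Average cross-entropy loss over the m training examples."""
    m = Y.shape[1]
    logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
    return -np.sum(logprobs) / m
```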
Back Propagation
Computing the gradients of the cost with respect to each parameter, from the output layer back to the input; proper tuning of the weights lowers error rates and makes the model reliable by improving its generalization.
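A sketch matching the forward pass above (the tanh hidden layer is assumed):

```python
import numpy as np

def backward_propagation(parameters, cache, X, Y):
    """Gradients of the cost w.r.t. W1, b1, W2, b2."""
    m = X.shape[1]
    W2 = parameters["W2"]
    A1, A2 = cache["A1"], cache["A2"]
    dZ2 = A2 - Y                                # sigmoid output + cross-entropy
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = np.dot(W2.T, dZ2) * (1 - A1 ** 2)     # tanh'(Z1) = 1 - A1^2
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
```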
Updating Parameters
Updating the parameters using the learning rate to complete a gradient descent step.
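A sketch tying the pieces together (the default learning rate value is an assumption):

```python
def update_parameters(parameters, grads, learning_rate=1.2):
    """One gradient descent step: param := param - learning_rate * gradient."""
    for key in ("W1", "b1", "W2", "b2"):
        parameters[key] = parameters[key] - learning_rate * grads["d" + key]
    return parameters
```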