πŸ‘©β€πŸ’» Python Code Snippets

πŸ“š General Code Snippets in ML

πŸ’₯ Sigmoid Function

$$sigmoid(x) = \frac{1}{1 + \exp(-x)}$$
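
A minimal NumPy sketch of this formula; the function name and use of NumPy are illustrative choices:

    import numpy as np

    def sigmoid(x):
        """Compute the sigmoid of x, where x is a scalar or a numpy array."""
        return 1 / (1 + np.exp(-x))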

πŸš€ Sigmoid Gradient

A function that computes the gradient of the sigmoid with respect to its input, used to optimize loss functions with backpropagation
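
One common way to compute it, sketched with NumPy and using the identity sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)); the names are illustrative:

    import numpy as np

    def sigmoid_derivative(x):
        """Gradient of the sigmoid with respect to its input: s * (1 - s)."""
        s = 1 / (1 + np.exp(-x))   # sigmoid of x
        return s * (1 - s)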

πŸ‘©β€πŸ”§ Reshaping Arrays (or images)

    def arr2vec(image):
        """
        Argument:
        image -- a numpy array of shape (length, height, depth)

        Returns:
        v -- a vector of shape (length*height*depth, 1)
        """
        v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)

        return v

πŸ’₯ Normalizing Rows

Dividing each row vector of x by its norm.
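
A possible NumPy sketch, dividing each row by its L2 norm (the helper name is illustrative):

    import numpy as np

    def normalize_rows(x):
        """Divide each row of the 2-D array x by its own L2 norm."""
        x_norm = np.linalg.norm(x, axis=1, keepdims=True)   # one norm per row
        return x / x_norm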

🎨 Softmax Function

A normalizing function that turns raw scores into a probability distribution, used when the algorithm needs to classify two or more classes
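
A row-wise NumPy sketch (names are illustrative; a numerically stable variant would subtract the row maximum before exponentiating):

    import numpy as np

    def softmax(x):
        """Row-wise softmax of a 2-D numpy array."""
        x_exp = np.exp(x)
        x_sum = np.sum(x_exp, axis=1, keepdims=True)   # normalizing constant per row
        return x_exp / x_sum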

πŸ€Έβ€β™€οΈ L1 Loss Function

The loss is used to evaluate the performance of the model: the bigger the loss, the more the predictions ( yΜ‚ ) differ from the true values ( y ). L1 sums the absolute differences between yΜ‚ and y. In deep learning, we use optimization algorithms like Gradient Descent to train the model and minimize the cost.
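
A NumPy sketch of the L1 loss as the sum of absolute differences (the helper name is illustrative):

    import numpy as np

    def l1_loss(yhat, y):
        """L1 loss: sum of absolute differences between predictions and labels."""
        return np.sum(np.abs(y - yhat))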

πŸ€Έβ€β™‚οΈ L2 Loss Function

Like L1, it evaluates the performance of the model, but L2 sums the squared differences between the predictions ( yΜ‚ ) and the true values ( y ), so it penalizes large errors more heavily.
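
A matching NumPy sketch of the L2 loss as the sum of squared differences (the helper name is illustrative):

    import numpy as np

    def l2_loss(yhat, y):
        """L2 loss: sum of squared differences between predictions and labels."""
        return np.sum((y - yhat) ** 2)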

πŸƒβ€β™€οΈ Propagation Function

Performing the "forward" and "backward" propagation steps needed to learn the parameters: the forward pass computes the activations and the cost, and the backward pass computes the gradients.
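
A hedged sketch for logistic regression; w, b, X, and Y stand for weights, bias, inputs, and labels, and the shapes are assumptions for illustration:

    import numpy as np

    def propagate(w, b, X, Y):
        """One forward/backward pass for logistic regression.

        Assumed shapes: w -- (n_x, 1), b -- scalar, X -- (n_x, m), Y -- (1, m).
        """
        m = X.shape[1]
        A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))            # forward: sigmoid activations
        cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
        dw = np.dot(X, (A - Y).T) / m                          # backward: gradient w.r.t. w
        db = np.sum(A - Y) / m                                 # backward: gradient w.r.t. b
        return {"dw": dw, "db": db}, cost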

πŸ’« Gradient Descent (Optimization)

The goal is to learn Ο‰ and b by minimizing the cost function J. For a parameter Ο‰, the update rule is Ο‰ := Ο‰ βˆ’ Ξ± dΟ‰, where Ξ± is the learning rate.
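
A minimal gradient-descent loop, assuming the propagate sketch above; the names and the plain update rule are illustrative:

    def optimize(w, b, X, Y, num_iterations, learning_rate):
        """Plain gradient descent using the propagate sketch defined earlier."""
        for _ in range(num_iterations):
            grads, cost = propagate(w, b, X, Y)    # gradients for the current parameters
            w = w - learning_rate * grads["dw"]    # w := w - alpha * dw
            b = b - learning_rate * grads["db"]    # b := b - alpha * db
        return w, b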

πŸ•Έ Basic Code Snippets for Simple NN

Functions of a 2-layer NN

An input layer, one hidden layer, and an output layer

πŸš€ Parameter Initialization

Initializing the Ws and bs. The Ws must be initialized randomly to break symmetry, while the bs can safely be initialized to zeros.
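
A sketch for the 2-layer network described here; the layer sizes n_x, n_h, n_y and the 0.01 scaling are illustrative choices:

    import numpy as np

    def initialize_parameters(n_x, n_h, n_y):
        """Small random weights (symmetry breaking) and zero biases for a 2-layer NN."""
        W1 = np.random.randn(n_h, n_x) * 0.01   # random init breaks symmetry
        b1 = np.zeros((n_h, 1))                 # zeros are fine for biases
        W2 = np.random.randn(n_y, n_h) * 0.01
        b2 = np.zeros((n_y, 1))
        return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}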

⏩ Forward Propagation

Each layer accepts the input data, processes it with its activation function, and passes the result to the next layer
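
A sketch assuming a tanh hidden layer and a sigmoid output, with the parameter dictionary from the initialization sketch above:

    import numpy as np

    def forward_propagation(X, parameters):
        """Forward pass: tanh hidden layer followed by a sigmoid output layer."""
        W1, b1 = parameters["W1"], parameters["b1"]
        W2, b2 = parameters["W2"], parameters["b2"]
        Z1 = np.dot(W1, X) + b1
        A1 = np.tanh(Z1)                   # hidden layer activations
        Z2 = np.dot(W2, A1) + b2
        A2 = 1 / (1 + np.exp(-Z2))         # output layer activations (sigmoid)
        cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
        return A2, cache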

🚩 Cost Function

The average of the losses over the entire training set, computed from the output layer's activations (A2 in our example)
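
A cross-entropy cost sketch, assuming A2 holds the sigmoid outputs and Y the labels, both of shape (1, m):

    import numpy as np

    def compute_cost(A2, Y):
        """Average cross-entropy loss over the m training examples."""
        m = Y.shape[1]
        logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
        return -np.sum(logprobs) / m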

βͺ Back Propagation

Computing the gradients of the cost with respect to each parameter by applying the chain rule backwards through the layers. Proper tuning of the weights with these gradients lowers the error rate and improves the model's generalization.
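
A sketch for the same tanh/sigmoid 2-layer setup, reusing the parameter and cache names from the earlier sketches:

    import numpy as np

    def backward_propagation(parameters, cache, X, Y):
        """Chain-rule gradients of the cost with respect to W1, b1, W2, b2."""
        m = X.shape[1]
        W2 = parameters["W2"]
        A1, A2 = cache["A1"], cache["A2"]
        dZ2 = A2 - Y                                   # error at the output layer
        dW2 = np.dot(dZ2, A1.T) / m
        db2 = np.sum(dZ2, axis=1, keepdims=True) / m
        dZ1 = np.dot(W2.T, dZ2) * (1 - A1 ** 2)        # tanh'(Z1) = 1 - A1**2
        dW1 = np.dot(dZ1, X.T) / m
        db1 = np.sum(dZ1, axis=1, keepdims=True) / m
        return {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}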

πŸ”ƒ Updating Parameters

Updating the parameters with their gradients, scaled by the learning rate, to complete one step of gradient descent
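
A sketch of the update step, assuming the parameter and gradient dictionaries from the sketches above:

    def update_parameters(parameters, grads, learning_rate):
        """Gradient descent update: theta := theta - learning_rate * d(theta)."""
        parameters["W1"] = parameters["W1"] - learning_rate * grads["dW1"]
        parameters["b1"] = parameters["b1"] - learning_rate * grads["db1"]
        parameters["W2"] = parameters["W2"] - learning_rate * grads["dW2"]
        parameters["b2"] = parameters["b2"] - learning_rate * grads["db2"]
        return parameters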
