πŸ”Ž The Problem in General

Given a dataset like:

$[(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(m)}, y^{(m)})]$

We want:

$\hat{y}^{(i)} \approx y^{(i)}$

πŸ“š Basic Concepts and Notations

| Concept | Description |
| --- | --- |
| m | Number of examples in the dataset |
| $x^{(i)}$ | The i-th example in the dataset |
| Ε· | Predicted output |
| Loss Function 𝓛(Ε·, y) | A function that computes the error for a single training example |
| Cost Function 𝙹(w, b) | The average of the loss function over the entire training set |
| Convex Function | A function with a single local optimum, which is also its global optimum |
| Non-Convex Function | A function with many different local optima |
| Gradient Descent | An iterative optimization method used to converge to the global optimum of the Cost Function |

In other words: the Cost Function measures how well our parameters w and b are doing on the training set, so the best w and b are the values that minimize 𝙹(w, b).
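To make the relationship between the loss and the cost concrete, here is a minimal Python sketch. The squared-error loss below is just one common choice of 𝓛 (an assumption for illustration); the point is only that 𝙹 is the average of 𝓛 over the m training examples.

```python
import numpy as np

def loss(y_hat, y):
    """Loss L(yΜ‚, y): error for a single training example (squared error as one common choice)."""
    return (y_hat - y) ** 2

def cost(y_hat, y):
    """Cost J: the average of the per-example losses over the whole training set."""
    return np.mean([loss(yh, yt) for yh, yt in zip(y_hat, y)])

# Toy data: m = 3 examples
y     = np.array([1.0, 2.0, 3.0])   # true labels y^(i)
y_hat = np.array([1.1, 1.9, 3.2])   # predictions yΜ‚^(i)
print(cost(y_hat, y))               # average error over the training set
```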

πŸ“‰ Gradient Descent

General Formula:

$w := w - \alpha \frac{dJ(w,b)}{dw}$

$b := b - \alpha \frac{dJ(w,b)}{db}$

Ξ± (alpha) is the Learning Rate
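For concreteness, here is a minimal sketch of these update rules in Python, assuming a single-feature linear model Ε· = wΒ·x + b with a squared-error cost (the model and cost are illustrative assumptions; the update pattern w := w βˆ’ Ξ±Β·dJ/dw, b := b βˆ’ Ξ±Β·dJ/db is the same for any differentiable cost).

```python
import numpy as np

def gradient_descent_step(w, b, x, y, alpha):
    """One iteration of gradient descent for yΜ‚ = w*x + b with a squared-error cost."""
    y_hat = w * x + b
    m = len(x)
    # Derivatives of J(w, b) with respect to w and b
    dJ_dw = (2 / m) * np.sum((y_hat - y) * x)
    dJ_db = (2 / m) * np.sum(y_hat - y)
    # Update rules: w := w - alpha * dJ/dw, b := b - alpha * dJ/db
    w = w - alpha * dJ_dw
    b = b - alpha * dJ_db
    return w, b

# Toy dataset where the true relationship is y = 2x
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w, b = 0.0, 0.0
for _ in range(1000):
    w, b = gradient_descent_step(w, b, x, y, alpha=0.1)
print(w, b)   # w approaches 2, b approaches 0
```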

πŸ₯½ Learning Rate

It is a positive scalar that determines the step size taken at each iteration of gradient descent, based on the estimated error each time the model weights are updated. In other words, it controls how quickly or slowly a neural network model learns a problem.

πŸŽ€ Good Learning Rate

A well-chosen Ξ± makes the cost decrease steadily at every iteration and converge to the minimum in a reasonable number of steps.

πŸ’’ Bad Learning Rate

If Ξ± is too small, gradient descent converges very slowly; if it is too large, the updates can overshoot the minimum, oscillate, or even diverge, as the sketch below illustrates.
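The following sketch reruns the toy linear-regression setup from the gradient descent example above with two different learning rates, one reasonable and one overly large, and prints the resulting cost (the specific values 0.1 and 1.0 are arbitrary choices for this toy data, not recommendations).

```python
import numpy as np

def cost(w, b, x, y):
    """Squared-error cost J(w, b) for yΜ‚ = w*x + b."""
    return np.mean((w * x + b - y) ** 2)

def run_gradient_descent(alpha, iterations=20):
    """Run gradient descent with a given learning rate and return the final cost."""
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 4.0, 6.0])
    w, b = 0.0, 0.0
    m = len(x)
    for _ in range(iterations):
        y_hat = w * x + b
        w -= alpha * (2 / m) * np.sum((y_hat - y) * x)
        b -= alpha * (2 / m) * np.sum(y_hat - y)
    return cost(w, b, x, y)

print("alpha = 0.1 :", run_gradient_descent(0.1))  # cost shrinks toward 0 (good choice)
print("alpha = 1.0 :", run_gradient_descent(1.0))  # cost blows up (too large, diverges)
```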
