Transfer Learning
Applying knowledge to separate tasks
In short: learning from one task and applying that knowledge to separate tasks
Transfer learning is a machine learning technique where a model trained on one task is repurposed on a second, related task.
In addition, it is an optimization method that allows rapid progress or improved performance when modeling the second task.
Transfer learning only works in deep learning if the model features learned from the first task are general.
Long story short: rather than training a neural network from scratch, we can download an open-source model that someone else has already trained on a huge dataset (possibly for weeks) and use those parameters as a starting point, training the model just a little bit more on the smaller dataset that we have.
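As a rough illustration, here is a minimal PyTorch sketch of that idea (assuming torchvision is installed; the 10-class target task is hypothetical): we download a ResNet-18 pretrained on ImageNet, freeze its weights, and train only a new output layer on our small dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Download a ResNet-18 that someone else already trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained parameters: we reuse them as a starting point.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for our smaller task
# (num_classes = 10 is a hypothetical target task).
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer's parameters are updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

If the new dataset is somewhat larger, one could instead unfreeze the last few blocks and fine-tune them with a small learning rate.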
Layers in a neural network can sometimes end up with similar weights and possibly influence each other, leading to overfitting. With a big, complex model this is a real risk, and the dense layers can end up looking like the figure below.
We can randomly drop out some neurons during training so that neighbors with similar weights stop co-adapting, which reduces overfitting (see the sketch after the figures).
An NN before and after dropout
Accuracy before and after dropout
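A minimal PyTorch sketch of dropout between dense layers (the 784-input, 10-class network is an assumed example):

```python
import torch.nn as nn

# A small dense network with dropout between the layers. During training,
# each activation is zeroed with probability p=0.5, so neighboring neurons
# cannot rely on each other and co-adapt.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # active only in training mode
    nn.Linear(256, 10),
)

model.train()  # dropout applied; surviving activations scaled by 1/(1-p)
model.eval()   # dropout becomes a no-op at inference time
```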
Transfer learning is practical when we have a lot of data for the problem we are transferring from and usually relatively little data for the problem we are transferring to.
More accurately, it is sensible to do transfer learning from task A to task B when:
- Task A and task B have the same input x
- We have a lot more data for task A than for task B
- Low-level features from task A could be helpful for learning task B

For example, a network trained on millions of everyday images (task A) can transfer its low-level edge and shape detectors to a radiology diagnosis task (task B) that has far fewer images.