Transfer Learning
Applying knowledge to separate tasks
In short: Learning from one task and applying the knowledge to separate tasks
What is Transfer Learning?
Transfer learning is a machine learning technique where a model trained on one task is re-purposed on a second related task.
In addition, it is an optimization method that allows rapid progress or improved performance when modeling the second task.
Transfer learning only works in deep learning if the model features learned from the first task are general.
Long story short: Rather than training a neural network from scratch, we can download an open-source model that someone else has already trained on a huge dataset (perhaps for weeks) and use its parameters as a starting point, then train our model just a little bit more on the smaller dataset that we have.
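A minimal sketch of this idea, assuming a TensorFlow/Keras setup with an ImageNet-pretrained MobileNetV2 as the open-source model; the input shape, the number of classes, and `small_train_ds` are placeholders, not part of the original text:

```python
import tensorflow as tf

# Download an open-source model pre-trained on ImageNet (a huge dataset),
# without its original classification head.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights="imagenet",
)

# Freeze the pre-trained parameters so they serve as a fixed starting point.
base_model.trainable = False

# Add a small new head for our own task (here: 5 hypothetical classes).
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Train "just a little bit more" on our smaller dataset.
# `small_train_ds` is a placeholder tf.data.Dataset of (image, label) pairs.
# model.fit(small_train_ds, epochs=5)
```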
Traditional ML vs Transfer Learning
Problem
Layers in a neural network can sometimes end up having similar weights and possibly influence each other, leading to over-fitting. With a big, complex model this is a real risk, so you can imagine the dense layers looking a little bit like this.
We can drop out some neurons that have similar weights to their neighbors, so that over-fitting is reduced.
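As a minimal sketch of dropout in a dense network (assuming Keras; the layer sizes and the 0.5 dropout rate are illustrative choices, not from the original text):

```python
import tensorflow as tf

# A small dense network with dropout between the hidden layers.
# During training, Dropout randomly zeroes a fraction of the activations,
# so neighboring neurons cannot rely on each other and end up learning
# redundant (similar) weights.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # drop half of the activations each training step
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# At inference time dropout is disabled automatically, so the full
# network is used for predictions.
```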
Comparison
An NN before and after dropout
Accuracy before and after dropout
When is it practical?
It is practical when we have a lot of data for the problem we are transferring from and usually relatively little data for the problem we are transferring to.
More accurately: for task A and task B, it is sensible to do transfer learning from A to B when:
Task A and task B have the same input x
We have a lot more data for task A than for task B
Low level features from task A could be helpful for learning task B (see the sketch below)
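A hedged sketch of that last point, reusing the `base_model` and `model` from the earlier Keras example: the early layers (the low level features learned on task A) stay frozen, and only the top of the pre-trained network is un-frozen for fine-tuning on task B; the layer count and learning rate here are illustrative assumptions:

```python
import tensorflow as tf

# Reuse low level features from task A: keep the early layers frozen and
# fine-tune only the last few layers of the pre-trained base on task B.
base_model.trainable = True
for layer in base_model.layers[:-20]:  # freeze everything except the top 20 layers
    layer.trainable = False

# Re-compile with a small learning rate so the pre-trained weights are
# only nudged, not overwritten.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(small_train_ds, epochs=5)  # `small_train_ds` is still a placeholder
```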