Other Strategies
Other Strategies of Deep Learning
In short: in multi-task learning we have one NN do several tasks at the same time, and each of these tasks helps all of the other tasks.
In other words: say we want to build a detector for 4 classes of objects. Instead of building 4 separate NNs, one per class, we can build a single NN that detects all four classes (the output layer has 4 units).
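A minimal sketch of that 4-class setup, assuming PyTorch (the layer sizes, feature dimension, and dummy batch below are illustrative, not from the notes): one shared network ends in a 4-unit output layer, and each unit gets its own binary loss so several objects can be present in the same example.

```python
# Multi-task sketch: one NN with a 4-unit output layer (assumed PyTorch;
# sizes and data are made up for illustration).
import torch
import torch.nn as nn

NUM_TASKS = 4  # e.g. four object classes detected by the same network

model = nn.Sequential(
    nn.Linear(256, 128),       # shared lower-level features
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_TASKS),  # one logit per class
)

# Each output unit is an independent "is this object present?" prediction,
# so we use a per-unit sigmoid + binary cross-entropy rather than softmax.
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 256)                          # dummy feature batch
y = torch.randint(0, 2, (8, NUM_TASKS)).float()  # multi-hot labels

loss = loss_fn(model(x), y)   # averages the per-task binary losses
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```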
Multi-task learning makes sense when:
Training on a set of tasks that could benefit from having shared lower-level features
The amount of data we have for each task is quite similar (sometimes)
We can train a big enough NN to do well on all the tasks (instead of building a separate network for each task)
Multi-task learning is used much less than transfer learning.
Briefly: some data processing or learning systems require multiple stages of processing.
End-to-end learning can take all of these stages and replace them with just a single NN.
Long story short: all of the smaller sub-tasks are handled inside the same single NN, which learns the mapping from input to output directly.
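As a toy contrast, here is a sketch under assumed names (the speech-recognition-style stage names and every stub body below are illustrative placeholders, not a real system): the pipeline version chains separately designed stages, while the end-to-end version is a single NN meant to be trained directly on (x, y) pairs.

```python
# Pipeline vs. end-to-end sketch (assumed PyTorch; all stage bodies are toys).
import torch
import torch.nn as nn

# --- Multi-stage pipeline: each stage is designed/trained separately -------
def extract_features(audio):
    # stand-in for hand-crafted features: chop audio into frames of 40 samples
    return audio.unfold(0, 40, 40)

def detect_phonemes(features):
    # stand-in for a separately trained phoneme classifier
    return features.argmax(dim=-1)

def assemble_words(phonemes):
    # stand-in for a separate decoder / language model
    return " ".join(str(int(p)) for p in phonemes)

def pipeline_transcribe(audio):
    return assemble_words(detect_phonemes(extract_features(audio)))

# --- End-to-end: one NN maps raw input x directly to output y --------------
end_to_end = nn.Sequential(
    nn.Linear(400, 256),
    nn.ReLU(),
    nn.Linear(256, 50),   # would be trained on (input, output) pairs
)

audio = torch.randn(400)             # fake raw input clip
print(pipeline_transcribe(audio))    # hand-designed intermediate stages
print(end_to_end(audio).shape)       # no hand-designed stages at all
```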
Pros of end-to-end learning:
Shows the power of the data (lets the data speak)
Less hand-designing of components is needed
Cons:
May need a large amount of data
Excludes potentially useful hand-designed components
Key question: do you have sufficient data to learn a function of the complexity needed to map x to y?