In this episode, we discuss the bane of many machine learning algorithms: overfitting. We also explain why it is an undesirable way to learn and how to combat it with dropout.
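If you would like to see the idea in code, here is a minimal sketch (not from the video) of how dropout is typically added to a small network, assuming PyTorch; the layer sizes and the dropout probability of 0.5 are illustrative only.

    # Minimal dropout sketch (illustrative, assuming PyTorch)
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Dropout(p=0.5),   # randomly zero 50% of activations during training
        nn.Linear(256, 10),
    )

    x = torch.randn(8, 784)  # a batch of 8 hypothetical input vectors

    model.train()            # dropout active: hidden units are dropped at random
    train_out = model(x)

    model.eval()             # dropout disabled at test time; outputs are scaled accordingly
    test_out = model(x)

During training, each hidden unit is kept only with a certain probability, which discourages units from co-adapting; at test time the full network is used.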
_____________________
The paper “Dropout: A Simple Way to Prevent Neural Networks from Overfitting” is available here:
Andrej Karpathy’s autoencoder is available here:
Recommended for you:
Overfitting and Regularization For Deep Learning –
Decision Trees and Boosting, XGBoost –
A full playlist with machine learning and deep learning-related Two Minute Papers videos –
WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
Sunil Kim, Vinay S.
Subscribe if you would like to see more of these! –
The thumbnail image background was created by Norma (CC BY 2.0) –
Splash screen/thumbnail design: Felícia Fehér –
Károly Zsolnai-Fehér’s links:
Facebook →
Twitter →
Web →