Browsing: Yannic Kilcher
The goal of hierarchical reinforcement learning is to divide a task into different levels of coarseness with the top-level agent…
Standard neural networks suffer from problems such as non-smooth classification boundaries and overconfidence. Manifold Mixup is a simple regularization technique…
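The core idea behind Manifold Mixup is to apply mixup-style convex interpolation not to raw inputs but to hidden-layer activations and their labels. A minimal sketch of that interpolation step (function name and NumPy stand-in for the hidden activations are illustrative, not from the video or paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def manifold_mixup(h, y, alpha=2.0):
    # h: (batch, features) hidden activations; y: (batch, classes) one-hot labels.
    # Sample the interpolation coefficient from Beta(alpha, alpha), as in mixup.
    lam = rng.beta(alpha, alpha)
    # Pair each example with a randomly permuted partner from the same batch
    # and take the same convex combination of activations and labels.
    perm = rng.permutation(len(h))
    h_mixed = lam * h + (1 - lam) * h[perm]
    y_mixed = lam * y + (1 - lam) * y[perm]
    return h_mixed, y_mixed
```

In the full method, the mixing layer is chosen at random per batch and the network is trained on the mixed activations with the mixed (soft) labels, which encourages smoother decision boundaries between classes.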
Current CNNs have to downsample large images before processing them, which can discard a lot of fine detail. This paper…
Ever wanted to do a convolution on a Klein Bottle? This paper defines CNNs over manifolds such that they are…
This paper claims that there is a radicalization pipeline on YouTube pushing people towards the Alt-Right, backing up its claims…
This paper shows that the original BERT model, if trained correctly, can outperform all of the improvements that have been…
Geoff Hinton’s next big idea! Capsule Networks are an alternative way of implementing neural networks by dividing each layer into…
The wait is finally over! Antonio and I discuss the best, funniest and dankest memes of the machine learning world.…
What if you could reduce the time your network trains by only training on the hard examples? This paper proposes…
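One common way to realize this idea is online hard example mining: compute per-example losses on a batch, then backpropagate only through the highest-loss fraction. A minimal sketch of the selection step (the helper name and the fixed keep fraction are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def select_hard_examples(losses, keep_frac=0.5):
    # Keep only the examples with the highest loss (the "hard" ones),
    # so the next gradient step is computed on them alone.
    k = max(1, int(len(losses) * keep_frac))
    idx = np.argsort(losses)[-k:]  # indices of the k largest losses
    return idx
```

The easy examples are skipped for that step, which is where the training-time savings come from; the trade-off is the extra forward pass needed to score the batch before selecting.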
Popular ML YouTuber Siraj Raval is in the middle of not just one, but two controversies: First, a lot of…