This time around, Google DeepMind embarked on a journey to write an algorithm that plays Go. Go is an ancient Chinese board game in which the opposing players try to surround territory and capture each other's stones. Behind the veil of this deceptively simple ruleset lies an enormous amount of depth and complexity. As scientists like to say, the search space of this problem is significantly larger than that of chess. It is so large that one often has to rely on human intuition to find a suitable next move, so it is not surprising that playing Go at a high level was widely believed to be intractable for machines. The result of this journey is Google DeepMind's AlphaGo, the deep learning-based system that defeated the professional player and world champion Lee Sedol.
What is also important to note is that the techniques used in this algorithm are general and can be applied to a large number of different tasks. By this, I mean not AlphaGo as a whole, but its building blocks: Monte Carlo Tree Search, the value network, and deep neural networks.
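To make the generality of Monte Carlo Tree Search a bit more concrete, here is a minimal sketch of the generic four-step MCTS loop (selection, expansion, simulation, backpropagation) in Python. It is applied to a toy Nim game rather than Go, and the `NimState` class and all names are my own illustrative choices, not anything from the paper; AlphaGo's actual search additionally guides these steps with its policy and value networks.

```python
import math
import random


class NimState:
    """Toy stand-in for a game state (illustrative, not Go): players
    alternately remove 1-3 stones; whoever takes the last stone wins."""

    def __init__(self, stones=15, player_just_moved=2):
        self.stones = stones
        self.player_just_moved = player_just_moved

    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))

    def play(self, move):
        return NimState(self.stones - move, 3 - self.player_just_moved)


class Node:
    def __init__(self, state, move=None, parent=None):
        self.state, self.move, self.parent = state, move, parent
        self.children = []
        self.untried = state.legal_moves()
        self.visits = 0
        self.wins = 0.0

    def best_child(self, c=1.4):
        # UCB1 selection: trade off observed win rate vs. an exploration bonus.
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))


def mcts(root_state, iterations=10000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = node.best_child()
        # 2. Expansion: add one previously untried move as a new child.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.play(move), move, node))
            node = node.children[-1]
        # 3. Simulation: play random moves to the end of the game.
        # (AlphaGo guides this step with its policy and value networks.)
        state = node.state
        while state.legal_moves():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.player_just_moved  # the player who took the last stone
        # 4. Backpropagation: update win/visit statistics along the path.
        while node is not None:
            node.visits += 1
            if winner == node.state.player_just_moved:
                node.wins += 1
            node = node.parent
    # Recommend the most explored move from the root.
    return max(root.children, key=lambda ch: ch.visits).move


print(mcts(NimState(stones=15)))  # optimal play removes 3 stones (15 % 4)
```

The same loop works for any game that exposes `legal_moves` and `play`; only the state class and the simulation policy change, which is exactly what makes the technique so reusable.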
______________________
The paper “Mastering the Game of Go with Deep Neural Networks and Tree Search” is available here:
A great Go analysis video by Brady Daniels. Make sure to check it out and subscribe if you like what you see there!
The mentioned post on the Go subreddit:
Some clarification on what part of the algorithm is specific to Go and how:
Go board image credits (all CC BY 2.0):
Renato Ganoza –
Jaro Larnos –
Luis de Bethencourt –
WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
Sunil Kim, Vinay S.
Subscribe if you would like to see more of these! –
The background of the thumbnail image is the property of Google DeepMind.
Splash screen/thumbnail design: Felícia Fehér –
Károly Zsolnai-Fehér’s links:
Patreon →
Facebook →
Twitter →
Web →