Yesterday, an artificial intelligence programme called AlphaGo beat World Champion Lee Sedol at the ancient game of Go.
Like Deep Blue’s 1997 win over chess grandmaster Garry Kasparov, this marks another major milestone in the development of intelligent computers.
This win is so important because of the complexity of the game of Go. Whereas in chess there are approximately 20 moves that can be played at any one time, in Go there are more like 200. This means that mathematically there are probably more configurations of a Go board than there are atoms in the universe. It is also not usually clear which of two players is winning at any point, with experts saying they often rely on the “feel” of the board to tell. All this makes Go potentially the most complex game which humans play (well, apart from the game of love).
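The scale of that claim is easy to sanity-check with back-of-the-envelope arithmetic. The figures below (around 35 moves per turn over roughly 80 moves for chess, around 250 moves per turn over roughly 150 moves for Go, and about 10^80 atoms in the observable universe) are common ballpark estimates, not exact values:

```python
# Rough estimates of the size of each game tree:
chess_tree = 35 ** 80        # ~35 legal moves per turn, ~80-move game
go_tree = 250 ** 150         # ~250 legal moves per turn, ~150-move game
atoms_in_universe = 10 ** 80 # common estimate for the observable universe

print(go_tree > atoms_in_universe)  # True: the Go tree dwarfs the atom count
print(go_tree > chess_tree)         # True: and it dwarfs chess too
```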
But while Deep Blue won its chess match through what is called “brute force” computing, using its supercomputer power to search through vast numbers of possible moves and outcomes to calculate the best one, AlphaGo needed a different kind of computation. One that is much more like how humans think.
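To see what “brute force” means in practice, here is a toy sketch of exhaustive game-tree search on a deliberately tiny game (a simple Nim variant: take 1 or 2 stones, and whoever takes the last stone wins). This is only an illustration of the idea, not Deep Blue’s actual program; chess needed a supercomputer precisely because its tree is astronomically larger than this one:

```python
def best_outcome(stones):
    """Return +1 if the player to move can force a win, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone: we lost
    # Brute force: try every legal move; a move wins if it leaves the
    # opponent in a losing position.
    return max(-best_outcome(stones - take)
               for take in (1, 2) if take <= stones)

def brute_force_move(stones):
    """Pick the move whose resulting position is worst for the opponent."""
    moves = [take for take in (1, 2) if take <= stones]
    return max(moves, key=lambda take: -best_outcome(stones - take))

print(brute_force_move(4))  # 1: leave 3 stones, a losing position
```

From any pile that is a multiple of 3, the player to move is lost; the exhaustive search rediscovers this without being told.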
AlphaGo was programmed by DeepMind, an artificial intelligence lab acquired by Google, using systems known as deep neural networks. This is the same technique used by Google, Facebook, Microsoft and other tech companies to teach their systems to recognise things in data the way a human would (such as recognising faces in uploaded pictures, or cats in YouTube videos). If you feed enough examples of something into a deep learning system, it will begin to be able to find the same thing in new data.
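The core idea of learning from labelled examples can be sketched with a single artificial neuron (a perceptron), the simplest building block of the deep networks described above. This is a toy illustration, not AlphaGo’s network: real deep networks stack many layers of such units and train on millions of examples:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from labelled examples: ((x1, x2), label) pairs."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred           # how wrong were we?
            w1 += lr * err * x1          # nudge weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Feed in labelled examples of a simple "either input on" pattern...
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = train_perceptron(data)
# ...and the trained model now classifies inputs it was shown correctly.
print(predict(weights, 1, 0))  # 1
```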
DeepMind started by feeding millions of examples of games by human Go players into AlphaGo, allowing it to analyse how their moves progressed and affected the likelihood of a win.
But what this system did next was new: it was programmed to start playing against itself. Subtle variations of the program were made to play against each other, enabling the system to play millions of games that no person had ever experienced. This technique, called reinforcement learning, promises to be a major breakthrough in AI for the years to come.
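The self-play idea can be sketched on the same kind of tiny game: the program plays thousands of games against itself, and after each game it rewards the winning side’s moves and punishes the losing side’s. This is a heavily simplified toy, not AlphaGo’s actual algorithm; the game, hyperparameters and update rule here are all illustrative choices:

```python
import random

def train_self_play(pile_size=10, games=20000, lr=0.5, eps=0.2, seed=0):
    """Learn to play 'take 1 or 2 stones, last stone wins' via self-play."""
    rng = random.Random(seed)
    q = {}  # (stones_left, take) -> learned value for the player to move
    for _ in range(games):
        stones = pile_size
        history = []  # (state, action) for each move, players alternating
        while stones > 0:
            moves = [m for m in (1, 2) if m <= stones]
            if rng.random() < eps:
                take = rng.choice(moves)      # explore a random move
            else:                             # exploit the best known move
                take = max(moves, key=lambda m: q.get((stones, m), 0.0))
            history.append((stones, take))
            stones -= take
        # The player who took the last stone won. Walk back through the
        # game, rewarding that player's moves and punishing the loser's.
        reward = 1.0
        for state, action in reversed(history):
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + lr * (reward - old)
            reward = -reward  # alternate between winner and loser
    return q

def best_move(q, stones):
    moves = [m for m in (1, 2) if m <= stones]
    return max(moves, key=lambda m: q.get((stones, m), 0.0))

q = train_self_play()
print(best_move(q, 4))  # 1: learned to leave the opponent 3 stones
```

No human games are fed in at all; the winning strategy emerges purely from the system playing itself, which is what made AlphaGo’s later moves so alien to human commentators.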
In fact, in what is likely to become a much-discussed moment in the history of AI research, during game 2 (of the five-game match, which AlphaGo eventually won 4-1) the system played a move which none of the human commentators had seen before, because it was apparently something no human would think of doing. But it ended up winning the game. One of the commentators kept referring to it as “beautiful”. It was Move 37 (as it will be remembered), and it is one of the best examples ever seen of true Artificial Creativity.
In the coming years, this technology and the deep neural networks which power it will evolve into many systems we can’t even imagine right now. But for the moment, remember March 15th 2016, when a computer played Move 37.
Is there any game that a computer still can’t beat a human at, and is it just a matter of time? Let me know in the comments below (I read all comments).