Friday, January 05, 2018

A Bittersweet Triumph

I've been fascinated with the game of Go since I was young. As a kid I played a fair bit of chess, never seriously, but I could usually beat my peers. That was balanced by regularly getting thrashed by a 90-year-old in my neighborhood whom I was conscripted to play with. After I discovered Go I pretty much gave up on chess. My business partner and I used to play a lot of Go in the early days of our company. It was a good way to occupy our time on business trips. We were never very good, and almost never played with anyone else, but it was a fun mental challenge. I still play regularly on my iPad against SmartGo Kifu and Crazy Stone (which claims to use deep learning), but only small 9x9 games. (And I'm still not any good!)

One of the things that attracted me to Go was that it wasn't amenable to the same techniques that were used in software to conquer chess. Of course, I don't believe there is any magic in human brains, so presumably there was a way for computers to handle it. Until AlphaGo, most people believed it would be a long time, perhaps a decade, before computers could beat the best human players. So it was a bit of a shock to everyone when AlphaGo recently beat the world's best players.

I highly recommend the full-length (90 min) documentary about AlphaGo - a Go-playing computer program developed by DeepMind, a company owned by Google. (Don't be scared off by "AI". The documentary isn't technical; it's more about the people.) It's available on Netflix, iTunes, Google Play, and Amazon.



For more of the technical details, see the Nature articles. They're behind a paywall on the Nature web site, but they seem to be available on Scribd: "Mastering the game of Go with deep neural networks and tree search" and "Mastering the game of Go without human knowledge".

I've never been involved in AI, but it's always fascinated me. It was one of the original reasons I got into computers. In the early days there was a lot of hype about neural networks. But they didn't live up to the hype, and the resulting backlash meant they were downplayed for many years. Recently they've seen a huge comeback and are now living up to many of those original hopes. Partly that's because of advances in hardware like GPUs.

The first AlphaGo was trained partly on human games. Its successor, AlphaGo Zero, learned completely on its own, by playing itself. It surpassed the earlier version in a matter of days. Not long after, the AlphaGo project was ended and the team was disbanded. I guess it's a done deal.
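Out of curiosity about what "learning by playing itself" means in practice, here's a minimal sketch of the self-play idea, scaled way down to tic-tac-toe with a simple tabular value function. It's purely illustrative and is not DeepMind's algorithm (AlphaGo Zero combines a deep neural network with Monte Carlo tree search); all of the names and numbers below are my own choices.

# Illustrative only: tabular self-play learning on tic-tac-toe,
# not AlphaGo Zero's deep-network + tree-search algorithm.
import random
from collections import defaultdict

values = defaultdict(float)   # board string -> value for the player who just moved
ALPHA, EPSILON = 0.3, 0.1     # learning rate, exploration rate

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == '.']

def after(board, move, player):
    b = board[:]
    b[move] = player
    return ''.join(b)

def play_one_game():
    # Play one game against itself, then push the final outcome back
    # through every position that was visited.
    board, player, history = list('.' * 9), 'X', []
    while not winner(board) and moves(board):
        legal = moves(board)
        if random.random() < EPSILON:          # explore occasionally
            move = random.choice(legal)
        else:                                  # otherwise pick the best-looking move
            move = max(legal, key=lambda m: values[after(board, m, player)])
        board[move] = player
        history.append(''.join(board))
        player = 'O' if player == 'X' else 'X'
    result = 0.0 if winner(board) is None else 1.0
    for i, state in enumerate(reversed(history)):
        # Alternate the sign: a win for the side that just moved is a
        # loss for the other side one move earlier.
        target = result if i % 2 == 0 else -result
        values[state] += ALPHA * (target - values[state])

for _ in range(20000):
    play_one_game()
print(f"learned values for {len(values)} positions")

Even this toy version captures the part that surprised me: the program never sees a human game; it improves purely from the outcomes of games it plays against itself.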
