Google's AI Has Secretly Been Giving Online "Go" Players An Ass-Kicking


Tom Hale

Tom is a writer in London with a Master's degree in Journalism whose editorial work covers anything from health and the environment to technology and archaeology.

Senior Journalist

Chill Chillz/Shutterstock

Over the past few weeks, an unknown player called “Master” has been thrashing people in online matches of the ancient strategy game "Go."

The spate of ass-whoopings caused quite a stir within online Go circles, with some enthusiasts speculating about whether the mysterious player was even human at all, or actually a machine.


Yesterday, the rumors about the player's identity were finally put to rest. Demis Hassabis, chief of Google's affiliate DeepMind, posted on Twitter that "Master" was actually the work of their artificial intelligence system AlphaGo. While playing on the Tygem and FoxGo servers, AlphaGo won over 50 games, lost none, and tied just one. According to many reports, that single draw occurred only because the opponent's network connection timed out.

You might remember AlphaGo from last year, when it managed to beat Go world champion Lee Se-dol 4-1. Lee, who plays on the Tygem server himself, is rated as a "9 dan professional", the highest rank a Go player can hold. His match with AlphaGo was the first time a machine had beaten a 9 dan player, and it was widely heralded as a landmark achievement for artificial intelligence.


As for the game of Go itself, it's a profoundly complex board game thought to have first been played over 2,500 years ago in China. The basic premise is placing black or white stones on a gridded board, with the eventual aim of capturing your opponent's stones and occupying territory. It might sound straightforward, but there are more possible board positions in the game than there are atoms in the observable universe.
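If you want a feel for where that claim comes from, a few lines of back-of-the-envelope arithmetic are enough (this is a simple upper bound; the exact count of *legal* Go positions is smaller, but still vastly exceeds the commonly cited estimate of atoms in the observable universe):

```python
# Rough arithmetic behind the "more positions than atoms" claim.
# This counts every way to mark each intersection, which over-counts
# illegal positions, but the point stands either way.
board_points = 19 * 19                  # a standard Go board has 361 intersections
upper_bound = 3 ** board_points         # each point is empty, black, or white
atoms_in_universe = 10 ** 80            # commonly cited rough estimate

print(upper_bound > atoms_in_universe)  # True
print(len(str(upper_bound)))            # the bound runs to 173 digits
```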

This ludicrous level of complexity means that players often stress how intuition and "feel" play a part in the game, something computers aren't traditionally good at.


AlphaGo's neural networks effectively learn to mirror how humans play Go by "watching" and understanding the moves of real players. DeepMind initially did this by feeding in over 30 million moves from games played by human experts. These unofficial online games appear to be further test runs to see how the system's latest improvements are performing.
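The core idea of that first training stage is imitation: given a board position, predict the move a human expert would make. The toy sketch below illustrates the principle only, with a single softmax layer on a made-up 3x3 board and random stand-in data; the real AlphaGo used deep convolutional networks trained on those 30 million expert positions:

```python
import numpy as np

# Illustrative sketch only: a toy "policy" trained, like AlphaGo's
# supervised stage, to predict an expert's next move from a position.
# All data here is random and the board is a tiny 3x3 grid.
rng = np.random.default_rng(0)

BOARD_CELLS = 9      # toy 3x3 board, flattened
N_EXAMPLES = 500     # stand-in for the 30 million expert positions

# Each position: -1 (white), 0 (empty), +1 (black) per cell.
X = rng.integers(-1, 2, size=(N_EXAMPLES, BOARD_CELLS)).astype(float)
# The "expert move" recorded for each position (a cell index).
y = rng.integers(0, BOARD_CELLS, size=N_EXAMPLES)

W = np.zeros((BOARD_CELLS, BOARD_CELLS))  # weights: position -> move logits

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, targets):
    return -np.log(probs[np.arange(len(targets)), targets] + 1e-12).mean()

losses = []
for step in range(200):  # plain gradient descent on the imitation loss
    probs = softmax(X @ W)
    losses.append(cross_entropy(probs, y))
    grad_logits = probs
    grad_logits[np.arange(N_EXAMPLES), y] -= 1.0
    W -= 0.1 * (X.T @ grad_logits) / N_EXAMPLES

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The training loss falls as the model learns to echo the "expert" moves it was shown; scaled up to deep networks and millions of real games, this is how a system can pick up the human-like "feel" described above.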

Why they chose to do it covertly against unwitting players isn't clear. It could be a publicity stunt, it could simply be convenient, or perhaps it's because humans might adjust the way they play if they know their opponent is a machine. Whatever the reason, it's another dizzyingly impressive feat for DeepMind and a further boost to the recent boom in artificial intelligence.


  • google
  • artificial intelligence
  • computer
  • AI
  • deep learning
  • game
  • DeepMind
  • Go
  • computer technology