AlphaGo
Zero, a board-game-playing program based on AI, devised by Google's AI division
DeepMind, took just three days to master Go, an ancient Chinese board game, without
any human intervention. All it had to begin with were the rules of the game and a blank Go
board. It then played itself over and over again until it mastered
this complex game. This final version of the Go-playing program from DeepMind
is the most powerful Go AI so far. This year in May, Ke Jie, the world's
number one player of the complex strategy game, lost to AlphaGo three out
of three games. Last year an earlier version of the same program defeated Lee
Sedol, the South Korean grandmaster, four games to one; AlphaGo Zero has since
beaten that earlier version 100 games to 0.
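The self-play idea described above can be sketched in miniature. The example below is a hypothetical illustration, not DeepMind's actual method (AlphaGo Zero combines deep neural networks with tree search): a simple tabular Q-learning agent that starts with only the rules of Nim, a far simpler game than Go, and learns good play purely by playing against a copy of itself.

```python
# Illustrative sketch only: tabular self-play Q-learning on Nim,
# a toy stand-in for AlphaGo Zero's far more sophisticated approach.
import random

PILE, ACTIONS = 10, (1, 2, 3)   # take 1-3 stones; taking the last stone wins
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in ACTIONS if a <= s}
alpha, epsilon = 0.1, 0.3       # learning rate, exploration rate

def legal(s):
    return [a for a in ACTIONS if a <= s]

def best(s):
    return max(legal(s), key=lambda a: Q[(s, a)])

random.seed(0)
for episode in range(20000):
    s = PILE
    while s > 0:
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        a = random.choice(legal(s)) if random.random() < epsilon else best(s)
        s2 = s - a
        # negamax target: winning now is +1; otherwise the opponent moves
        # from s2, so our value is the negation of their best value there
        target = 1.0 if s2 == 0 else -max(Q[(s2, b)] for b in legal(s2))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2  # the other copy of the agent moves next

# After self-play, the agent leaves its opponent a multiple of 4:
# the optimal move from a pile of 10 is to take 2
print(best(PILE))
```

No human games, no human heuristics: the agent discovers the "leave a multiple of four" strategy entirely from the reward signal of its own games, which is the essence of what the paragraph above describes, scaled down by many orders of magnitude.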
The history
of Go dates back around 3,000 years in China. The game is played with black and
white pieces, and players try to win by surrounding their opponent's
pieces with their own. The rules of the game are simpler than those of chess,
but a Go player has about 200 choices on each turn compared to
just 20 in chess. It is also very difficult to tell, at a given stage of
the game, who is winning; the top players rely on instinct. Go maintains
fluidity and dynamism much longer than other comparable board games. Draws are
rare. There is no defined procedure for victory, only continued good play. The
game rewards patience and balance over aggression and greed (remember that
scene from "A Beautiful Mind"?). An early mistake can be made up, used to
advantage, or even reversed as the game progresses. To its devotees, Go is more
than just a game. To them it can be an analogy for life, an intense meditation,
a mirror of one's personality, a mental workout, or, when played well, a delicate
balance of black and white pieces dancing around the board.
Demis
Hassabis, the CEO and co-founder of DeepMind, leads a team that includes
a Dutch Physics Olympiad winner and the recipient of the year's top maths PhD
in France. The man leading them all, David Silver, has contributed
to more research papers (16 by now) than any of the other team members.

AlphaGo
isn't the first program to learn from self-play. Elon Musk's non-profit OpenAI
has used similar techniques, but AlphaGo Zero's capabilities show that it's the most
powerful so far. "By not using the human data, by not using human features
or human expertise in any fashion, we've actually removed the constraints of
human knowledge," said David Silver. "It's able to create knowledge for itself."

That is exactly what it did. It not only rediscovered thousands of years of human knowledge, such as some of the most common and best moves that humans play, but also surpassed it by playing its own variants that humans have not yet discovered. Remember, all of this in just three days.

But AlphaGo was never about winning board games. It is a step closer to general-purpose learning machines. Imagine if, instead of discovering new Go moves, the algorithm could learn the interactions between proteins in the human body to further scientific research, or use the laws of physics to create new building materials. "Quantum chemistry, material design, maybe there's a room-temperature superconductor out and about there," said Hassabis.

In game
four of his match, Lee Sedol, the human, played a move that no machine would ever expect, and
it was beautiful, as beautiful as anything from the Google machine itself. It looked as
if Lee Sedol grew as a player while playing against the machine. He himself
admitted this, telling Hassabis that the match had opened his eyes. Proponents of
AI say that the story is not AI vs. humans but AI and humans. We will grow with
the help of our own creations. There are countless possibilities: analyzing
stocks, managing energy use, discovering new drugs. The message is to be awed,
not afraid, because no matter how intelligent computers get, we will
always be more creative. After all, we build the machines.