---
created_at: '2016-03-20T19:16:31.000Z'
title: To Test a Powerful Computer, Play an Ancient Game (1997)
url: http://www.nytimes.com/1997/07/29/science/to-test-a-powerful-computer-play-an-ancient-game.html
author: luso_brazilian
points: 50
story_text:
comment_text:
num_comments: 26
story_id:
story_title:
story_url:
parent_id:
created_at_i: 1458501391
_tags:
- story
- author_luso_brazilian
- story_11324202
objectID: '11324202'
year: 1997
---
Deep Blue defeated the world chess champion by leveraging a moderate
amount of chess knowledge with a huge amount of blind, high-speed
searching power.
But this roughshod approach is powerless against the intricacies of Go,
leaving computers at a distinct disadvantage. ''Brute-force searching is
completely and utterly worthless for Go,'' said David Fotland, a
computer engineer for Hewlett-Packard who is the author of one of the
strongest programs, called The Many Faces of Go. ''You have to make a
program play smart like a person.''
To play a decent game of Go, a computer must be endowed with the ability
to recognize subtle, complex patterns and to draw on the kind of
intuitive knowledge that is the hallmark of human intelligence.
''It may be a hundred years before a computer beats humans at Go --
maybe even longer,'' said Dr. Piet Hut, an astrophysicist at the
Institute for Advanced Study in Princeton, N.J., and a fan of the game.
''If a reasonably intelligent person learned to play Go, in a few months
he could beat all existing computer programs. You don't have to be a
Kasparov.''
When or if a computer defeats a human Go champion, it will be a sign
that artificial intelligence is truly beginning to become as good as the
real thing.
''Go is the highest intellectual game,'' said Dr. Chen Zhixing, a
retired chemistry professor at Zhongshan University, in Guangzhou,
China.
Dr. Chen has spent the last six years perfecting Handtalk, the winner
of several recent international competitions. In Go, he said, the mind
is dazzled by the beauty of the patterns unfolding on the board, and a
sequence of moves can be as mesmerizing as a melody. The trick is to get
a computer to compose and understand this visual music.
On its surface, Go seems simple compared with chess. A chess match
begins with two facing armies of 16 pieces, ranking from pawn to king,
on a 64-square board. Each of the six kinds of pieces is allowed to move
only in certain ways -- bishops diagonally; knights in L-shaped paths.
In Go there are few such complications. All of a player's stones are
identical. A game begins with a blank 19-by-19 grid (sometimes smaller
ones are used), and the two contestants take turns placing their stones
(black for one side, white for the other) on any of the unoccupied
intersections.
A player can capture a group of an opponent's stones by surrounding it
and then removing the cluster from the board. The object of the game is
to build complex fence-like structures enclosing as much territory as
possible.
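In program terms, the capture rule comes down to counting ''liberties,'' the empty points adjacent to a connected cluster of stones: a cluster with no liberties left comes off the board. The sketch below is only an illustrative toy under that reading of the rules -- the board representation and helper names are mine, not any particular program's.
```python
# Toy Go board: a dict mapping (row, col) -> 'B' or 'W'; empty points are absent.
# find_group flood-fills the stones connected to a point; liberties() collects the
# empty points touching that group; a group with no liberties has been captured.
BOARD_SIZE = 19

def neighbors(point):
    r, c = point
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < BOARD_SIZE and 0 <= nc < BOARD_SIZE:
            yield (nr, nc)

def find_group(board, point):
    """All stones connected to `point` through same-color neighbors."""
    color = board[point]
    group, frontier = {point}, [point]
    while frontier:
        for n in neighbors(frontier.pop()):
            if board.get(n) == color and n not in group:
                group.add(n)
                frontier.append(n)
    return group

def liberties(board, group):
    """Empty points orthogonally adjacent to any stone in the group."""
    return {n for p in group for n in neighbors(p) if n not in board}

def remove_if_captured(board, point):
    """After a move, take a surrounded group off the board; return stones removed."""
    group = find_group(board, point)
    if liberties(board, group):
        return 0
    for p in group:
        del board[p]
    return len(group)

# A lone white stone on the edge, hemmed in by three black stones:
board = {(0, 1): 'W', (0, 0): 'B', (0, 2): 'B', (1, 1): 'B'}
print(remove_if_captured(board, (0, 1)))   # -> 1: the white stone is captured
```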
''In chess you start with everything you have on the board,'' said Tim
Klinger, a graduate student in computer science at New York University
who is studying computer Go. ''In Go you start from nothing and build.''
Stone by stone, you try to construct enclaves, engulfing those of your
opponent, who is all the time trying to engulf your own. Adding to the
complications, there are usually several skirmishes going on
simultaneously in different corners of the board. If chess is like a
medieval battle, it is sometimes said, Go is more like a world war. And
it can be maddeningly difficult to determine who is ahead.
''In chess, if a player loses even a single pawn at world champion
level, it can decide the game maybe 99 percent of the time,'' said Dr.
Hans Berliner, a computer scientist at Carnegie-Mellon University in
Pittsburgh who is an expert on computer chess. ''In Go, you keep hearing
people say that you can lose a life-and-death battle along the edge of
the board, but that is far from deciding the outcome. You can go on to
other battles. It's a very different kind of game.''
From the point of view of a computer, the difference could not be more
profound. Because of the tight constraints in how chess pieces can be
moved, a player is faced with an average of only about 35 legal moves to
consider with each turn. Computer programs like Deep Blue analyze these
moves, considering the opponent's possible countermoves, and then the
countermoves to the countermoves. In computer chess terminology, each
individual move by one player is called a ply. The fastest chess programs look
ahead 14 or 16 plies -- seven or eight moves by each player -- into the game.
The result is a densely proliferating tree of possibilities with the
branches and twigs representing all the different ways the game could
unfold. Looking ahead just 14 plies (seven moves by each player) requires
examining 35^14 (more than a billion trillion) leaves representing all the
various outcomes.
As the computer tries to look deeper, the number of possibilities
explodes. Programmers have learned clever ways to ''prune'' the trees,
so that all but a fraction of the paths can be discarded without
plumbing them all the way to the bottom. Even so, a chess-playing
computer looking ahead 14 plies might consider as many as 50 or 60
billion scenarios each time its turn comes around.
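The pruning the article alludes to is essentially the alpha-beta idea: once one reply is found that refutes a move, the remaining replies to that move need not be examined. A minimal, game-agnostic sketch follows; the callbacks legal_moves, apply_move, and evaluate are hypothetical placeholders, and this is not Deep Blue's actual search code.
```python
# Generic depth-limited alpha-beta search.  The callbacks legal_moves, apply_move
# and evaluate are placeholders for whatever game is being searched; the pruning
# happens at `alpha >= beta`, where a whole subtree can be skipped.
def alphabeta(position, depth, alpha, beta, maximizing,
              legal_moves, apply_move, evaluate):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)        # leaf: fall back on the evaluation function
    if maximizing:
        best = float('-inf')
        for move in moves:
            best = max(best, alphabeta(apply_move(position, move), depth - 1,
                                       alpha, beta, False,
                                       legal_moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:            # opponent already has a refutation:
                break                    # prune the remaining branches
        return best
    best = float('inf')
    for move in moves:
        best = min(best, alphabeta(apply_move(position, move), depth - 1,
                                   alpha, beta, True,
                                   legal_moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```
In the best case, alpha-beta examines roughly the square root of the full tree, which is how a billion-trillion-leaf search collapses to the tens of billions of positions mentioned above.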
As bad as that sounds, in Go the situation is drastically worse. The
tree of possible moves is so broad and dense that not even the fastest
computer can negotiate it. The first player can put a stone in any of
361 places; the opponent can respond by placing a stone on any of 360
places, and so on. As the game continues, there are steadily fewer
possible places to play. But, on average, a player is faced with about
200 possible moves, compared with just 35 in chess.
As a computer scientist would put it, the branching factor is much
higher for Go than for chess. In chess the approximate number of
possible board positions after only four moves is typically 35 x 35 x 35 x 35 =
1,500,625. For Go, the number is 200 x 200 x 200 x 200 = 1,600,000,000 -- and
far more toward the beginning of a game. Search two moves deeper and the
numbers rapidly diverge: about 1.8 billion possible outcomes for chess
and 64 trillion for Go.
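Those figures are back-of-envelope arithmetic, and the snippet below simply reproduces them, assuming the article's average branching factors of 35 for chess and 200 for Go.
```python
# Reproduce the article's arithmetic with average branching factors of 35 and 200.
for moves in (4, 6):
    print(f"after {moves} moves: chess ~{35 ** moves:,}, go ~{200 ** moves:,}")
# after 4 moves: chess ~1,500,625, go ~1,600,000,000
# after 6 moves: chess ~1,838,265,625, go ~64,000,000,000,000
```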
Looking ahead 14 plies in Go creates a search tree not with a mere 35^14
leaves, as for chess, but with more than 200^14 leaves.
Pruning techniques cut this down to about ten thousand trillion
possibilities to consider. Still, a Go computer as fast as Deep Blue
(which analyzed some 200 million chess positions per second) would take
a year and a half to mull over a single move.
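The year-and-a-half figure follows from the same arithmetic: take the article's roughly ten thousand trillion post-pruning positions and divide by Deep Blue's reported 200 million positions per second. The numbers below are the article's estimates, not measurements.
```python
# Rough estimate of how long a Deep Blue-speed machine would need for one Go move.
SECONDS_PER_YEAR = 365 * 24 * 3600
pruned_positions = 10_000_000_000_000_000   # "about ten thousand trillion" after pruning
positions_per_second = 200_000_000          # Deep Blue's reported chess search speed
print(pruned_positions / positions_per_second / SECONDS_PER_YEAR)   # ~1.6 years
```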
Even worse, performing so laborious a search would give the computer no
significant advantage over its human opponent. After sifting through the
myriad possibilities, a chess-playing computer tries to choose the move
that will leave it in the strongest position. It determines this by
using fairly simple formulas called evaluation functions. Each piece can
be assigned a number indicating its rank (pawns are worth 1, knights and
bishops 3, rooks 5, queens 9). This figure can be multiplied by another
number indicating the strength of the piece's position on the board.
Other formulas quantify concepts like ''king safety,'' or how
well protected that piece is. These rules, called heuristics, are hardly
infallible, but they give the computer a rough sense of the state of the
game and a basis on which to make its decisions.
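A toy version of such an evaluation function, using the piece values quoted above, might look like the sketch below; the mobility term and its weight are invented stand-ins for heuristics like king safety, not anything taken from Deep Blue.
```python
# Toy chess evaluation: material count from the article's piece values, plus a
# made-up mobility bonus standing in for positional heuristics.
PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def evaluate(my_pieces, their_pieces, my_mobility, their_mobility):
    """Positive scores favor the side to move.

    my_pieces / their_pieces: piece letters still on the board for each side.
    my_mobility / their_mobility: number of legal moves available, a crude
    stand-in for heuristics like king safety or piece activity.
    """
    material = (sum(PIECE_VALUE[p] for p in my_pieces)
                - sum(PIECE_VALUE[p] for p in their_pieces))
    mobility = 0.1 * (my_mobility - their_mobility)   # weight chosen arbitrarily
    return material + mobility

# Up a pawn, with slightly more active pieces:
print(evaluate(['K', 'Q', 'R', 'P', 'P'], ['K', 'Q', 'R', 'P'], 30, 25))   # 1.5
```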
Go does not succumb to such simple analysis. There is no single piece,
like a king, whose loss decides the game. Even counting the amount of
territory each player has captured is not very revealing. With the
placement of a single stone, a seeming underdog might surround the grand
structure his opponent has been assiduously building and turn it --
jujitsu-like -- into his own. ''You're stringing all these stones
together, and if you don't watch out the whole collection becomes dinner
for your opponent,'' Mr. Klinger said.
Expert Go players evaluate the state of the board by using their skills
at pattern recognition, and these are very hard to capture in an
algorithm. After years of experience, they can look at a complex
configuration and sense whether it is ''alive,'' meaning that it is
constructed in such a way that it cannot be captured, or ''dead,'' so
that no amount of reinforcement can save it. Learning to sense life and
death is crucial. A player does not want to waste stones attacking a
group that is invulnerable, or defending one that is doomed. Sometimes
there are fairly obvious clues: if a group of stones contains two
configurations called eyes, it can fend off any attempt to capture it.
But often the difference between life and death is difficult to
perceive, hinging on a single stone.
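The two-eyes rule of thumb is at least easy to approximate in code: an empty point whose neighbors all belong to one group is an eye candidate, and a group holding two of them cannot be captured. The sketch below is a deliberately crude approximation of my own -- real life-and-death analysis must also handle false eyes, shared liberties, and long capturing races.
```python
# Crude "two eyes" test: an empty point whose orthogonal neighbors all belong to
# the same group counts as an eye candidate; a group holding two such points
# cannot be captured.  (Real analysis must also check diagonals for false eyes.)
BOARD_SIZE = 19

def neighbors(point):
    r, c = point
    return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            if 0 <= nr < BOARD_SIZE and 0 <= nc < BOARD_SIZE]

def eye_candidates(board, group):
    """Empty points fenced in entirely by stones of `group` (edges act as walls)."""
    empties = {n for p in group for n in neighbors(p) if n not in board}
    return [e for e in empties if all(n in group for n in neighbors(e))]

def looks_alive(board, group):
    return len(eye_candidates(board, group)) >= 2

# A black corner group enclosing two one-point eyes at (0, 0) and (0, 2):
group = {(0, 1), (1, 0), (1, 1), (1, 2), (0, 3), (1, 3)}
board = {p: 'B' for p in group}
print(looks_alive(board, group))   # True -- two eyes, so the group cannot die
```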
Go masters can also sense whether several unconnected stones might be
slowly joined to form a group, or whether two smaller groups might be
combined into a larger, stronger whole.
To get a computer to do this kind of analysis, programmers must confront
fundamental problems in artificial intelligence. Mr. Fotland armed his
program, The Many Faces of Go, with basic concepts like territory and
connectivity (whether several stones are in adjacent positions). It can
also recognize some 1,100 different patterns, each of which sets off a
sequence of suggested moves, and it has access to about 200 higher-level
strategic notions like ''attack a weak group'' or ''expand into a
potential territory'' or ''if behind, make unreasonable invasions that
you don't expect to work.'' Like Deep Blue, the program draws on a
library of standard openings and other commonly used plays. Drawing on
this knowledge, it will consider only about 5 or 10 of the approximately
200 possible moves available to it in a typical turn.
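The general shape of such a program can be illustrated, though only loosely: scan the board for local patterns, let each match suggest a move, and hand the short list of candidates to deeper analysis. Everything in the sketch below -- the pattern, its encoding, and the scores -- is invented for illustration and bears no relation to the actual pattern library in The Many Faces of Go.
```python
# Toy pattern-based move generator: scan the board for 3x3 neighborhoods that
# match hand-written patterns, and let each match suggest a candidate move.
# '.' = empty, 'B'/'W' = stones, '?' = don't care; the single pattern below is
# invented purely for illustration.
PATTERNS = [
    {
        "name": "hane against a lone white stone",
        "grid": ["?W?",
                 "?B.",
                 "???"],
        "suggestion": (0, 2),     # play at the top-right of the 3x3 window
        "score": 10,
    },
]

def matches(board, top, left, grid):
    for dr, row in enumerate(grid):
        for dc, want in enumerate(row):
            have = board.get((top + dr, left + dc), '.')
            if want != '?' and want != have:
                return False
    return True

def candidate_moves(board, size=19):
    """Return (score, point) candidates -- a handful, not all ~200 legal moves."""
    found = []
    for top in range(size - 2):
        for left in range(size - 2):
            for pat in PATTERNS:
                if matches(board, top, left, pat["grid"]):
                    dr, dc = pat["suggestion"]
                    point = (top + dr, left + dc)
                    if point not in board:
                        found.append((pat["score"], point))
    return sorted(found, reverse=True)

board = {(3, 3): 'B', (2, 3): 'W'}   # a black stone with a white stone above it
print(candidate_moves(board))        # [(10, (2, 4))] -- one suggested reply
```
A real program would weigh thousands of such suggestions against its higher-level strategic notions before settling on the handful of moves worth reading out.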
But programming this kind of knowledge is extremely difficult. ''People
are so good at dealing with fuzzy concepts,'' said David Mechner, a
doctoral student in neural science at New York University who is a
top-ranked amateur Go player. But how do you tell a computer that
several stones might end up being connected, but not necessarily? Mr.
Mechner and Mr. Klinger are studying these kinds of problems and
fine-tuning an algorithm for recognizing life and death. They hope to
soon join the handful of programmers competing to make the best Go
program.
The winner of the FOST Cup, sponsored by the Japanese Fusion of Science
and Technology organization and held in Nagoya next month as part of the
International Joint Conference on Artificial Intelligence, will get
about $17,000. The contest for the $7,000 Ing Cup, sponsored by the Ing
Chang-ki Wei-Chi Educational Foundation in Taipei, will be held in
November in the San Francisco Bay Area. (The winner will have the
opportunity to challenge three young Go players for additional prizes.)
But winning the $1.4 million prize promised by the Ing foundation to a
program that beats a human champion may be an impossible dream. The
offer expires in the year 2000. Go programmers are hoping it will be
extended for another century or two.
***Correction:** August 11, 1997, Monday An article in Science Times on
July 29 about computers that play the game of Go included an incorrect
definition for the term ''ply,'' as used in computer chess. It is an
individual move by one player, not a move and its response.*