This is basically Kurt's lecture on this topic.
W W W
start - - -
B B B
/
/
/
/
/
/
- W W
white moves W - -
B B B
and then we (playing black) applied the minimax procedure to
find our best move from the board that our opponent had left
for us:
W W W
start - - -
B B B
/
/
/
/
/
/
- W W
white moves W - -
B B B
/ | \
/ | \
/ | \
our / | \
best / | \
move / | \
/ | \
- W W - W W - W W
0 B - - -10 W B - -10 W - B
B - B B - B B B -
/ | \ / \ / | \
/ | \ / \ / | \
/ | \ / \ / | \
/ | \ / \ / | \
- - W - - W - W - - W - - W - - W W - - W - - W
W - - B W - B - W W B W W W - - - B W W B W - W
B - B B - B B - B B - B B - B B W - B B - B B -
0 1 1 -10 -1 -10 0 -1
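
An aside, before we make our move: to make the sketches later
in these notes concrete, here is one way all of this could
look in code. None of this is from Kurt's lecture. The
representation (a board as a tuple of nine cells, top row
first) and the names moves() and apply_move() are my own
choices, and the rules encoded are just the usual hexapawn
ones: a pawn steps straight ahead into an empty square, or
captures one square diagonally ahead.

    # A board is a tuple of 9 cells, top row first: 'W', 'B', or '-'.
    # White pawns move toward row 2 (down the page), Black pawns
    # toward row 0 (up the page).

    START = ('W', 'W', 'W',
             '-', '-', '-',
             'B', 'B', 'B')

    def moves(board, player):
        """Yield every board that `player` can reach in one move."""
        step = 3 if player == 'W' else -3        # one row forward
        enemy = 'B' if player == 'W' else 'W'
        for i, piece in enumerate(board):
            if piece != player:
                continue
            ahead = i + step
            if 0 <= ahead < 9 and board[ahead] == '-':
                yield apply_move(board, i, ahead)     # straight step
            for j in (ahead - 1, ahead + 1):          # forward diagonals
                # stay on the board and don't wrap around an edge column
                if 0 <= j < 9 and abs(j % 3 - i % 3) == 1 and board[j] == enemy:
                    yield apply_move(board, i, j)     # diagonal capture

    def apply_move(board, src, dst):
        """Return a new board with the piece moved from src to dst."""
        cells = list(board)
        cells[dst], cells[src] = cells[src], '-'
        return tuple(cells)

From the start position, list(moves(START, 'W')) yields
exactly White's three opening moves, one of which is the move
shown above.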
So we make our move, and then white counters with a move:
W W W
start - - -
B B B
/
/
/
/
/
/
- W W
white moves W - -
B B B
/
/
/
/
/
/
/
we - W W
move B - -
B - B
|
|
|
|
white - - W
moves B W -
B - B
Note that white didn't make the move that we predicted would
be the best move for white. That happens a lot, but we don't
care. Regardless of the move that white made, we'd have to
go through the whole minimax thing again to decide our next
best move. So let's go through it one more time just to make
sure we follow how this all works. We start with the board
that was left after white's last move:
- - W
B W -
B - B
Then we generate the state space that results from all the
moves we could make followed by all the moves that our
opponent could make in response. (Remember that we've
arbitrarily set our search limit at two moves ahead...we
could have set that limit higher if we wanted to expend the
resources.)
- - W
B W -
B - B
/ ^ \
/ / \ \
/ / \ \
/ / \ \
/ / \ \
/ / \ \
B - W - - W - - W - - W
- W - B B - B B - B W B
B - B - - B B - - B - -
/ \ / \ / \
/ \ / \ / \
/ \ / \ / \
/ \ / \ / \
- - - - - - - - - - - - - - W - - W
B W - B B W B W - B B W B - B B - B
- - B - - B B - - B - - W - - B W -
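
In code, and using the moves() helper sketched earlier,
generating this two-ply state space explicitly might look
like the following. The function name and the shape of the
result (each of our moves paired with the list of White's
replies) are assumptions of mine, not the lecture's.

    def two_ply_tree(board):
        """Build the two-ply state space the way the diagram does:
        every move of ours (Black), paired with every reply White
        could make.  A move that wins outright (a Black pawn on the
        top row) gets no replies, matching the childless board on
        the left of the tree."""
        return [(mine, [] if 'B' in mine[:3] else list(moves(mine, 'W')))
                for mine in moves(board, 'B')]

On the board above this returns four entries: the winning
move with no replies, and the other three moves with two
replies each.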
Then we use our static board evaluation function to determine
the goodness of the "terminal boards":
- - W
B W -
B - B
/ ^ \
/ / \ \
/ / \ \
/ / \ \
/ / \ \
/ / \ \
B - W - - W - - W - - W
- W - B B - B B - B W B
B - B - - B B - - B - -
/ \ / \ / \
10 / \ / \ / \
/ \ / \ / \
/ \ / \ / \
- - - - - - - - - - - - - - W - - W
B W - B B W B W - B B W B - B B - B
- - B - - B B - - B - - W - - B W -
2 4 1 3 -10 -10
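
Kurt never spells out what this static board evaluation
function actually computes, so here is a hypothetical
stand-in that reuses the moves() helper from earlier. The
decided-game values of +10 and -10 match the diagrams
(including the boards where the player to move is simply
stuck, which hexapawn counts as a loss); the in-between
material count is my own guess. It happens to reproduce all
eight leaf values in the wider tree near the top of these
notes, but not the 2s, 3s and 4s here, which evidently come
from a richer heuristic; the propagated decision comes out
the same either way.

    def evaluate(board):
        """Score a board from our (Black's) point of view, assuming
        (as in the diagrams) that it is our turn: +10 or -10 for a
        decided game, otherwise a crude material count."""
        if 'B' in board[0:3]:        # we reached the far row: a win
            return 10
        if 'W' in board[6:9] or not any(moves(board, 'B')):
            return -10               # White got home, or we're stuck
        return board.count('B') - board.count('W')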
Then we propagate the minimums up from the result of white's
move (this is the "minimizing level"):
- - W
B W -
B - B
/ ^ \
/ / \ \
/ / \ \
/ / \ \
/ / \ \
/ / \ \
B - W - - W - - W - - W
- W - 2 B B - 1 B B - -10 B W B
B - B - - B B - - B - -
/ \ / \ / \
10 / \ / \ / \
/ \ / \ / \
/ \ / \ / \
- - - - - - - - - - - - - - W - - W
B W - B B W B W - B B W B - B B - B
- - B - - B B - - B - - W - - B W -
2 4 1 3 -10 -10
And then we would propagate the maximum value up and select
the best move to make. In this case, that move would be the
one on the left, with the board value of 10, which indicates
a win for us. Yippee!!
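
Written as code, the whole procedure we just walked through,
generate down to the search limit, evaluate the terminal
boards, take minimums at White's levels and maximums at ours,
collapses into one recursive function. This is just a sketch
of the standard algorithm built on the moves() and evaluate()
stubs above, not code from the lecture.

    def minimax(board, player, depth):
        """Minimax value of `board` with `player` to move, searching
        `depth` plies before falling back on evaluate().  We (Black)
        maximize; White minimizes."""
        kids = list(moves(board, player))
        if depth == 0 or not kids or abs(evaluate(board)) == 10:
            return evaluate(board)      # search limit hit, or game over
        other = 'W' if player == 'B' else 'B'
        values = [minimax(kid, other, depth - 1) for kid in kids]
        return max(values) if player == 'B' else min(values)

    def best_move(board, depth=2):
        """Pick our move whose minimax value is largest."""
        return max(moves(board, 'B'),
                   key=lambda kid: minimax(kid, 'W', depth - 1))

Called on the board above, best_move() picks the winning
left-hand move, just as the hand propagation did.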
Let's take a look at a simple abstract example of how this
might work. Say we start with some board:
start
board
And from that starting board, I have two possible moves. But
instead of generating all my moves in a sort of breadth-first
fashion, I'm going to fall back on my old depth-first search
technique and generate just one of my moves, and explore all
of my opponent's moves in response to my move before I go and
look at my other move:
start
board
/
/
/
/
/
my
move
Now, again following my depth-first approach, and remembering
that I'm still cutting off my search at two moves ahead, I
look at one of my opponent's moves and apply the static board
evaluation function:
start
board
/
/
/
/
/
my
move
/
/
/
/
/
opp's
move
2
Let's say that my opponent has two possible moves after
either of my moves. We've just looked at one of the
opponent's possible moves; now we'll explore the other:
start
board
/
/
/
/
/
my
move
/ \
/ \
/ \
/ \
/ \
opp's opp's
move move
2 7
I then propagate the minimum value up from that level, and
begin to explore the possible outcomes of my other move:
start
board
/ \
/ \
/ \
/ \
/ \
2 my my
move move
/ \ /
/ \ /
/ \ /
/ \ /
opp's opp's opp's
move move move
2 7 1
The question now is "do I get any useful information from
exploring my opponent's remaining possible move?" And the
answer is "no". Why? Let's look at what could possibly
happen here. If I generate that last remaining board and
apply the board evaluation function to it, the value of that
board is either going to be greater than or equal to 1, or
it's going to be less than 1. In the former case, the value
that will be propagated up from this level is 1, a value that
I already knew. In the latter case, the value less than 1
would be propagated up, and I didn't know about that value
already. But, and this is the important but, either of those
values will be less than 2, which is the minimum value that
was propagated up from the other side of the tree. So based
on what I know from only exploring three of my opponent's
four possible moves, I can determine that the fourth possible
move will have no bearing on my decision about what move I
should make. I know I'm going to choose the move to the
left---the one where the worst my opponent can do to me is
leave me with a board with a value of 2. I know that I'm not
going to choose the move to the right, because my incomplete
exploration of the state space has already convinced me that
the best I can do if I go that direction is end up with a
board that has a value of 1. Oh, sure, maybe there's a
possibility that my opponent would do something stupid if I
took that move to the right and leave me with a +10 board and
I'd win, but I can't count on that. I have to assume that my
opponent is playing smart and playing to win. If I didn't
assume that, I wouldn't have to go through all this stuff in
the first place.
That's an informal description of what's called "minimax with
alpha-beta pruning". It's called alpha-beta because,
traditionally, procedures that use this technique have a
parameter called alpha, which holds the largest of the
maximum values found so far, and a parameter called beta,
which holds the smallest of the minimum values found so far.
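
In code, the pruning drops into the minimax sketch from
earlier with only a few extra lines. Again, this is the
textbook formulation rather than anything from Kurt; it
reuses the moves() and evaluate() stubs, and the infinities
are just sentinels meaning "worse than any real board value".

    def alphabeta(board, player, depth,
                  alpha=float('-inf'), beta=float('inf')):
        """Like minimax(), but skip any branch that provably cannot
        change the final decision.  alpha is the best (largest)
        value we, the maximizer, are guaranteed so far; beta is the
        best (smallest) value the minimizer is guaranteed."""
        kids = list(moves(board, player))
        if depth == 0 or not kids or abs(evaluate(board)) == 10:
            return evaluate(board)
        other = 'W' if player == 'B' else 'B'
        if player == 'B':                          # maximizing level
            for kid in kids:
                alpha = max(alpha,
                            alphabeta(kid, other, depth - 1, alpha, beta))
                if alpha >= beta:   # White above would never allow this
                    break           # ...so skip the remaining kids
            return alpha
        else:                                      # minimizing level
            for kid in kids:
                beta = min(beta,
                           alphabeta(kid, other, depth - 1, alpha, beta))
                if beta <= alpha:   # we already have better elsewhere
                    break
            return beta

On the abstract example above, the left subtree raises alpha
to 2; in the right subtree the first reply drives beta down
to 1, the test 1 <= 2 triggers the break, and the remaining
reply is never searched or evaluated, exactly the cutoff just
described.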
The usefulness of alpha-beta pruning depends on the order in
which you generate and search the possible moves. In the
worst case, the branches of the tree are ordered so that
alpha-beta provides no help at all. (What if the two subtrees
in the above example were explored in the reverse order?) In
more common cases, alpha-beta pruning cuts down the amount of
search that has to be done, but it does not prevent the
exponential blowup: as the depth of the search grows, the
work required still increases exponentially, just at a
reduced rate. In the best case, with the moves ideally
ordered, alpha-beta examines only about the square root of
the number of boards plain minimax would, which buys roughly
twice the search depth for the same effort.
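
One cheap way to steer toward the good orderings is to sort
each batch of moves by their static evaluations before
searching them, so the move that looks best shallowly gets
searched first. This is a common heuristic, not something
from the lecture; a sketch, using the evaluate() stub from
earlier:

    def ordered(kids, player):
        """Search the most promising children first: highest static
        value first at our (maximizing) levels, lowest first at
        White's (minimizing) levels.  The better the first child,
        the sooner alpha and beta cross and cut off the rest."""
        return sorted(kids, key=evaluate, reverse=(player == 'B'))

Iterating over ordered(kids, player) in the loops of
alphabeta() costs a few extra evaluations per level, but it
tends to trigger the cutoffs much earlier.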
Let's take one more look at our real hexapawn game in this context:
- W W
W - -
B B B
/ | \
/ | \
/ | \
/ | \
/ | \
/ | \
/ | \
- W W - W W - W W
0 B - - -10 W B - -10 W - B
B - B B - B B B -
/ | \ / \ / | \
/ | \ / \ / | \
/ | \ / \ / | \
/ | \ / \ / | \
- - W - - W - W - - W - - W - - W W - - W - - W
W - - B W - B - W W B W W W - - - B W W B W - W
B - B B - B B - B B - B B - B B W - B B - B B -
0 1 1 -10 -1 -10 0 -1
Could alpha-beta pruning have saved us some work in deciding
which move to make here? Sure. There are three moves we
didn't have to look at. They are marked with an X below:
X X X
- - W - - W - W - - W - - W - - W W - - W - - W
W - - B W - B - W W B W W W - - - B W W B W - W
B - B B - B B - B B - B B - B B W - B B - B B -
0 1 1 -10 -1 -10 0 -1
If you figured out that these were the moves that alpha-beta
pruning would have discarded without looking at them, and you
can explain why, then you know everything you need to know
about alpha-beta pruning.