Stockfish vs AlphaZero
I've always been very interested in chess programming and, more broadly, game engine design. One of the most interesting topics is how you approach programming a computer to play a game. Broadly speaking, you can build an engine that's narrowly targeted at a single game, or you can focus on how to play games more generally, as in the approaches found in general game playing.
One of the most interesting events in the chess programming world occurred in December 2018, when a challenge match of 100 games between AlphaZero and Stockfish version 8 was played. This represented a clash between completely different classes of computer chess algorithms.
Different approaches to chess programs
The AlphaZero engine was given only the rules of the game of chess and taught itself strategy by playing a very large number of games against itself. This is effectively a reinforcement learning approach that uses neural nets extensively in the implementation. Its search examines far fewer nodes than a traditional engine, but each node evaluation is much more expensive.
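To give a feel for the search side of that approach, here's a rough sketch of the PUCT-style selection rule used in AlphaZero-like searches. The `Node` class, the `select_child` function, and the constants are illustrative placeholders of mine, not anything taken from the AlphaZero implementation (which was never released):

```python
import math

# Illustrative sketch of PUCT-style node selection in an AlphaZero-like
# search tree. Everything here is a placeholder of my own, not code from
# any published AlphaZero implementation.

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a): move probability from the policy network
        self.visit_count = 0      # N(s, a): how many times this move was explored
        self.value_sum = 0.0      # cumulative value from network evaluations
        self.children = {}        # move -> Node

    def q(self):
        # Mean action value Q(s, a).
        return self.value_sum / self.visit_count if self.visit_count else 0.0


def select_child(node, c_puct=1.5):
    """Pick the child maximizing Q + U, the core of PUCT selection."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_move, best_score = None, -float("inf")
    for move, child in node.children.items():
        # U term: favors moves the policy likes that have been visited little.
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        score = child.q() + u
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

The search repeatedly walks down the tree with this rule, evaluates the leaf with the neural net, and backs the value up, which is why it can afford to look at so few nodes.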
This is a very different approach from the one taken by Stockfish. Stockfish relies on searching to great depth and evaluates a much larger number of nodes per second. It uses an alpha-beta pruning search algorithm with sophisticated optimizations and a particularly clever implementation (for example, its use of bitboards and transposition tables) that allows an enormous number of positions to be evaluated per second. Stockfish is also open source software: you can go to the official GitHub repository and see for yourself how it works.
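To make the contrast concrete, here's a minimal sketch of alpha-beta search with a transposition table, using the python-chess library for move generation. The material-count evaluation and the simplified table are my own toy versions, nowhere near what Stockfish actually does:

```python
import chess

# Toy material values; Stockfish's real evaluation is far more elaborate.
PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

# Simplified transposition table: position key -> score.
# Real engines also store bound types and best moves.
transposition_table = {}

def evaluate(board):
    """Material balance from the side to move's perspective."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def alpha_beta(board, depth, alpha, beta):
    key = (board.board_fen(), board.turn, board.castling_rights,
           board.ep_square, depth)
    if key in transposition_table:
        return transposition_table[key]
    if depth == 0 or board.is_game_over():
        return evaluate(board)  # toy leaf evaluation; mate scores omitted
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = -alpha_beta(board, depth - 1, -beta, -alpha)  # negamax
        board.pop()
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # beta cutoff: the opponent will avoid this line
            break
    transposition_table[key] = best
    return best

# Example: search the starting position to depth 3.
print(alpha_beta(chess.Board(), 3, -float("inf"), float("inf")))
```

The real engine layers move ordering, iterative deepening, quiescence search, and many pruning heuristics on top of this basic skeleton, which is where most of its strength comes from.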
There's also a project, Leela Chess Zero, that takes a neural net approach to chess and is open source, unlike the AlphaZero chess engine (as of 2020).
The AlphaZero vs Stockfish 8 match
In December 2018 AlphaZero beat Stockfish version 8 across the 100-game match.
Some of the games were released, which has led to a bunch of interesting analysis. Here are some YouTube videos with quality analysis of the games from this match:
One of the most interesting things about these games is that they are effectively much higher level than any human games of chess. Since Kasparov's loss to Deep Blue in 1997 there's been a lot of use of computer analysis in chess coaching and training.
One of the interesting things about the Stockfish vs AlphaZero match is that some of the strategies that emerge seem to be very strong but are not necessarily "intuitive". From reviewing these games I found it interesting how the engines like pushing pawns and moving pieces to reduce the opponent's options, even if this leaves the pieces in awkward positions; they also don't seem to mind moving the same piece more than once in the opening if it gains an advantage. Perhaps one of the most interesting things is how the notion of what is considered "intuitive" by recent players has been shaped so heavily by two decades of using chess engines for analysis. Many of the engines that existed before 2018 were built around depth-first searches of the game tree. This gives those engines a certain style of play, which has in turn influenced many of the players who have used them to learn.
I remember when I was younger the only way I could beat GNU Chess was by playing certain irregular openings that gained positional advantages the engine had trouble computing properly; some gambit-based openings seemed to do disproportionately well (I wish I remembered all the details, but I seem to recall there were some variations of the Scandinavian Defense that it played outstandingly badly against). I suspect that without its opening book the GNU Chess engine would have shown a very substantial drop in effective Elo strength against certain openings, and I don't think the drop would have been uniform across openings. This experience playing against a specific engine shaped how I played for a while, and I still play the Scandinavian Defense somewhat frequently (when I play at all). Notably, these strategies didn't serve me nearly as well against stronger human players at the chess club.
What's the best way to play?
With AlphaZero beating Stockfish by employing a more positionally oriented strategy, could that be the optimal way to play? Or is it just the most effective way for a computer chess engine to play?
Ultimately we don't know the optimality gap for the strategies these engines produce, only a sense of their relative strengths. This is in contrast to endgame tablebases, which contain the proven game-theoretically optimal play for certain positions. Outside of these completely solved situations, computer vs computer matches don't give us the same mathematical certainty about strategies. Would the approach of Stockfish do better with more computational power behind it? Or is it fundamentally weaker in some way? These are the sorts of questions we may see answered in the coming years of computer chess, and the answers are likely to touch on deeper questions about what constitutes optimal strategy in chess.
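As a small aside on tablebases: the python-chess library can probe Syzygy tablebase files directly if you've downloaded them. The directory path below is a placeholder, and the position is a trivially drawn king-and-bishop versus king endgame:

```python
import chess
import chess.syzygy

# Probe a Syzygy endgame tablebase for a known drawn position (K+B vs K).
# "/path/to/syzygy" is a placeholder for wherever the tablebase files live.
with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
    board = chess.Board("8/8/8/8/4k3/8/8/2BK4 w - - 0 1")
    wdl = tablebase.probe_wdl(board)   # 2 win, 0 draw, -2 loss (for the side to move)
    dtz = tablebase.probe_dtz(board)   # distance to a zeroing move under the 50-move rule
    print(wdl, dtz)
```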
But what about human players?
I always wonder how well an engine-derived strategy can be implemented by humans in real game conditions. A human player can't effectively play the same game as an engine, especially when time controls are short. Some of the lines taken by both AlphaZero and Stockfish would be very hard to get right; any mistake could lead to an immediate loss.
Thinking about situations where constraints get in the way of implementing optimal strategies is an important commercial question in many areas. I co-founded a startup many years ago that aimed to tackle the question of "how do we most effectively use computers to teach humans how to play games?"1. There are a lot of interesting parts to this, since the optimal strategies in a pure game-theoretic sense are practically impossible for a human player to execute due to the enormous complexity of some components of those strategies. If you want a human to get practical benefit from the analysis, you have to create something they can feasibly implement. That means finding solutions with a low enough Kolmogorov complexity to be internalized and properly executed, without giving up too much relative to the optimal solutions. It turns out this is something I'm looking at again in the area of warehouse and logistics optimization: the most mathematically optimal strategies can be so complex as to be almost impossible to execute in practice. And it brings me back to the lessons learned from chess and game engine development.
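Here's a toy illustration of that tradeoff, with candidate strategies and numbers entirely made up for the example: score each candidate by its expected value minus a penalty for how many rules a human has to internalize to execute it.

```python
# Toy illustration of trading optimality for simplicity. The candidate
# strategies, their values, and the complexity penalty are all invented
# for this example; in practice they would come from analysing the
# actual game or logistics problem.
candidates = [
    # (name, expected_value, complexity measured in rules to remember)
    ("full engine line",   1.00, 40),
    ("simplified plan",    0.92,  8),
    ("one-rule heuristic", 0.75,  2),
]

def executable_score(value, complexity, penalty_per_rule=0.015):
    """Expected value minus a cost for every rule a human has to remember."""
    return value - penalty_per_rule * complexity

best = max(candidates, key=lambda c: executable_score(c[1], c[2]))
print(best[0])  # -> "simplified plan" under these made-up numbers
```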
The future of chess engines
It will be interesting to see how these different engines progress over time. I'm sure I'll be tuning in to the next seasons of the Top Chess Engine Championship to see which approaches do best. Will the percentage of drawn games rise? Will we see wins as black? Will we see the emergence of a strategy that can force a win or a draw against any counter-strategy?
I still get asked about this from time to time. I think we were a little ahead of the market and had a bit of trouble selling what we were offering as a result. Effectively our funding ran out in 2013 and we moved on to other projects; I learned a tremendous amount from the experience. Back in 2011 I remember a huge amount of mainstream sentiment that people would always be better than machines at playing games, often with little reasoning given to back this up. Even though the era started by Deep Blue in 1997 showed that chess could be played better than any human, people were incredibly reluctant to believe that these approaches could apply to other games, particularly ones with an element of hidden information or chance. A prevalent idea was that "computers can't understand chance" and other bullshit like this. Back then a lot of people were strongly convinced that their "gut feeling" was the best guide in such domains, and evidence challenging this only really went mainstream as data science went on its rise. I feel like AlphaGo was a big event in changing the narrative for many people about computer game playing. ↩