AlphaZero vs. Stockfish: Who Was Really Better?
Introduction
The battle between AlphaZero and Stockfish represents one of the most fascinating clashes in computer chess history. In December 2017, DeepMind’s AlphaZero, a self-learning neural network, shocked the chess world by defeating Stockfish 8, one of the strongest traditional chess engines at the time, in a 100-game match.
But was AlphaZero truly superior? Or did the match conditions favor it? This article dives deep into:
How AlphaZero and Stockfish work differently
The controversial match conditions
Key games and strategic innovations
Long-term impact on chess engines
Who was actually stronger?
1. The Engines: How AlphaZero and Stockfish Work
Stockfish: The Traditional Engine
Stockfish is an open-source, brute-force chess engine that relies on:
Alpha-beta pruning – A search algorithm that discards branches which cannot affect the final choice of move, allowing far deeper lookahead (a minimal sketch appears at the end of this subsection).
Handcrafted evaluation functions – Human-programmed rules for assessing positions (e.g., material, pawn structure, king safety).
Massive search throughput – Evaluates tens of millions of positions per second on modern multi-core hardware.
Stockfish has dominated computer chess for years, constantly improving through community contributions and better hardware.
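To make the contrast with AlphaZero concrete, here is a minimal alpha-beta search sketch in Python. It is illustrative only: the toy game tree and scoring function are stand-ins, and Stockfish's real search adds transposition tables, sophisticated move ordering, and many further pruning heuristics on top of this core idea.

```python
# Minimal alpha-beta search sketch (illustrative only). In a chess engine the
# "score" would come from a handcrafted evaluation of material, pawn structure,
# king safety, and so on; here the position is just a node in a toy game tree.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """children(node) -> successor nodes; evaluate(node) -> static score."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in succ:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:      # cut-off: the opponent will avoid this branch
                break
        return best
    else:
        best = float("inf")
        for child in succ:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                       children, evaluate))
            beta = min(beta, best)
            if beta <= alpha:      # symmetric cut-off for the minimizing side
                break
        return best

# Tiny toy tree: leaves carry scores.
tree = {"a": ["b", "c"], "b": [3, 5], "c": [2, 9]}
children = lambda n: tree.get(n, []) if isinstance(n, str) else []
evaluate = lambda n: n if isinstance(n, (int, float)) else 0
print(alphabeta("a", 4, float("-inf"), float("inf"), True, children, evaluate))  # -> 3
```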
AlphaZero: The Neural Network Revolution
AlphaZero, developed by DeepMind (Google’s AI lab), uses:
Monte Carlo Tree Search (MCTS) – A selective search that explores the most promising lines, guided by the network's move probabilities and position evaluations (a simplified sketch follows this subsection).
A deep neural network – Trained purely through reinforcement learning on self-play games, with no human game data.
Zero human knowledge – Learned chess from scratch by playing itself.
Unlike Stockfish, AlphaZero does not rely on hand-written evaluation rules; it develops its own sense of chess strategy entirely through self-play.
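The sketch below shows the core of an AlphaZero-style MCTS simulation in Python. It is a simplified illustration, not DeepMind's code: policy_value stands in for the neural network (returning move priors and a value estimate), and apply_move stands in for the game rules.

```python
import math

# Simplified AlphaZero-style MCTS (illustrative only): no random rollouts;
# the "network" evaluates leaf positions and supplies move priors.

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT rule: balance the network's prior against observed results."""
    total = sum(child.visits for child in node.children.values())
    def score(child):
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def run_simulation(root, state, policy_value, apply_move):
    """One playout: walk down by PUCT, expand a leaf, back up the value."""
    node, path = root, [root]
    while node.children:
        move, node = select_child(node)
        state = apply_move(state, move)
        path.append(node)
    priors, value = policy_value(state)          # network call at the leaf
    for move, p in priors.items():
        node.children[move] = Node(prior=p)
    for n in reversed(path):                     # backpropagate the evaluation
        n.visits += 1
        n.value_sum += value
        value = -value                           # flip perspective each ply

# Tiny demo with a dummy "network" and a trivial game so the loop runs:
def dummy_policy_value(state):
    return {"left": 0.6, "right": 0.4}, 0.0

root = Node(prior=1.0)
for _ in range(100):
    run_simulation(root, state=0, policy_value=dummy_policy_value,
                   apply_move=lambda s, m: s + 1)
print({m: c.visits for m, c in root.children.items()})
```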
2. The 2017 Match: Controversy and Results
In December 2017, DeepMind released a preprint reporting that AlphaZero had defeated Stockfish 8 in a 100-game match:
28 wins, 0 losses, 72 draws (25 wins with White, 3 with Black); see the rough Elo estimate after this list.
Dominated in dynamic, strategic positions.
Showed human-like creativity in sacrifices and long-term planning.
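For a sense of scale, the score alone implies a large rating edge under the standard logistic Elo model. The snippet below is a back-of-the-envelope calculation, not a rigorous rating claim, since 100 games against a single opponent under unusual conditions say little about true playing strength.

```python
import math

# Rough performance-difference estimate from the published 28-0-72 score,
# using the standard logistic Elo expectation.
wins, draws, losses = 28, 72, 0
score = (wins + 0.5 * draws) / (wins + draws + losses)     # 0.86
elo_diff = 400 * math.log10(score / (1 - score))
print(f"score {score:.2f} -> implied Elo edge of about {elo_diff:.0f} points")
```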
Controversial Conditions
Critics argued the match was unfairly stacked against Stockfish:
No opening book: Stockfish (which relies on opening databases) played from scratch.
Limited hardware and settings: Stockfish ran on 64 CPU threads with only a 1 GB hash table, while AlphaZero used four TPUs (Google's specialized AI accelerators).
No endgame tablebases: Stockfish couldn’t access its usual endgame databases.
Time control: Both engines were given a fixed 1 minute per move, a format that stripped Stockfish of the time management it uses under normal tournament conditions.
Had Stockfish been given its full capabilities (e.g., Syzygy tablebases, better hardware), the result might have been closer.
3. Key Games: How AlphaZero Outplayed Stockfish
Game 10: The Immortal Anti-Computer Game
AlphaZero sacrificed a rook in a closed position—something Stockfish didn’t see coming.
1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. e3 c5 7. dxc5 Bxc5 8. Qc2 Nc6 9. a3 Qa5 10. O-O-O?! (A risky, human-like move)
AlphaZero’s play was positionally deep, slowly strangling Stockfish in closed positions where brute-force calculation struggled.
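For readers who want to explore the position themselves, the snippet below replays the opening sequence quoted above; it assumes the third-party python-chess package (pip install chess).

```python
import chess

# Replay the opening sequence quoted above and print the resulting position.
moves = ["d4", "Nf6", "c4", "e6", "Nf3", "d5", "Nc3", "Be7", "Bf4", "O-O",
         "e3", "c5", "dxc5", "Bxc5", "Qc2", "Nc6", "a3", "Qa5", "O-O-O"]

board = chess.Board()
for san in moves:
    board.push_san(san)       # raises ValueError if a move is illegal

print(board)                  # ASCII diagram of the position after 10. O-O-O
print("FEN:", board.fen())
```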
Game 23: The King Walk
AlphaZero marched its king to safety in a way Stockfish didn’t anticipate, showcasing long-term planning over immediate tactics.
Game 55: The Exchange Sacrifice Masterpiece
AlphaZero gave up a rook for a knight to dominate the center—a strategic concept Stockfish undervalued.
4. Impact on Chess Engines
AlphaZero’s success revolutionized chess AI:
NNUE (Efficiently Updatable Neural Network): Stockfish adopted a neural-network evaluation in 2020 (Stockfish 12), markedly improving its positional understanding (a toy illustration of the core idea follows this list).
Leela Chess Zero (Lc0): An open-source engine built on AlphaZero's approach, proving that neural-network engines could compete with traditional ones at the highest level.
Human-like play: Engines now blend brute-force calculation with intuitive understanding.
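The sketch below is a toy illustration of the trick that makes NNUE fast, not Stockfish's actual network or feature set: because the first layer's output is a sum of weight columns indexed by (piece, square) features, a single move only needs a handful of additions and subtractions instead of a full recomputation.

```python
import numpy as np

# Toy NNUE-style evaluation (simplified features, random weights): the hidden
# "accumulator" is a sum of weight columns, so a move updates it incrementally.
N_FEATURES = 768          # 12 piece types x 64 squares (simplified encoding)
HIDDEN = 256

rng = np.random.default_rng(0)
W1 = rng.standard_normal((N_FEATURES, HIDDEN)) * 0.01   # first-layer weights
w2 = rng.standard_normal(HIDDEN) * 0.01                 # tiny output layer

def feature_index(piece_type: int, square: int) -> int:
    """Map a (piece type 0..11, square 0..63) pair to a feature index."""
    return piece_type * 64 + square

def full_accumulator(features):
    """Recompute the hidden layer from scratch (what NNUE avoids per move)."""
    acc = np.zeros(HIDDEN)
    for f in features:
        acc += W1[f]
    return acc

def apply_move(acc, removed, added):
    """Incremental update: subtract the vacated feature, add the new one."""
    return acc - W1[removed] + W1[added]

def evaluate(acc):
    """Clipped-ReLU hidden layer followed by a linear output, as a toy score."""
    return float(np.clip(acc, 0.0, 1.0) @ w2)

# Example: a lone white pawn (piece type 0) steps from e2 (square 12) to e4 (28).
acc = full_accumulator({feature_index(0, 12)})
acc = apply_move(acc, feature_index(0, 12), feature_index(0, 28))
print(round(evaluate(acc), 4))
```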
5. Who Was Really Better?
If the Match Was Fair…
Stockfish 8 is no longer the right yardstick; later NNUE-equipped versions (Stockfish 14 and beyond) are widely considered to have surpassed AlphaZero's 2017 level.
The match was run entirely on DeepMind's terms: DeepMind chose the hardware, settings, and time control, and the games were not played under independent, tournament-style conditions.
In equal hardware conditions, a modern Stockfish or Komodo Dragon might outperform AlphaZero.
The Verdict
AlphaZero was better in dynamic, strategic positions, while Stockfish remained stronger in deep tactical lines. The real winner? Chess itself, as AI evolved beyond brute-force calculation.
Conclusion
AlphaZero’s victory was a paradigm shift in computer chess, proving neural networks could surpass traditional engines in creativity and strategy. However, under fair conditions, modern Stockfish is likely stronger today.
The legacy? A new era of chess engines—where human-like intuition meets machine precision.