The Brandeis University Quant Club is a chartered club at Brandeis University.
The core question at the heart of this competition was: what makes a chess bot excel or falter in its decision-making? Participants explored many facets of this question, including algorithmic efficiency, board evaluation functions, opening book strategies, and endgame techniques.
This event was the first-ever Chess.com-sponsored hackathon, and it offered participants a chance to showcase their skills in a competitive and challenging environment.
In essence, the winners were determined through a mix of anecdotal and quantitative metrics, based on tests run on each team's bot. The evaluation criteria were as follows:

- Proficiency in puzzle challenges
- Speed in identifying optimal moves
- Overall efficiency in achieving checkmate within the shortest possible duration

Finalists developed competent bots, though not proficient enough to compete successfully against the eventual winners.
The "Grob Bot" by Binyamin Friedman and Alex Ott introduces interesting and effective improvements to traditional decision-making algorithms. Employing the minimax strategy, this sophisticated system adeptly "prunes away" unnecessary moves, optimizing its overall performance. A noteworthy enhancement lies in the refined logic for piece placement, injecting a strategic intelligence into the bot's maneuvers. What sets this bot apart is its intelligent guessing of more promising moves, prioritizing their exploration and thus, significantly reducing computational costs associated with redundant or evidently suboptimal moves. The incorporation of iterative deepening further refines the bot's efficiency over time. The implementation of a cache board technique is a notable improvement, eliminating redundancy in the evaluation of the same move, resulting in a more streamlined and resource-efficient operation. The strategic shift in endgame logic is another standout feature, greatly improving the bot's proficiency in achieving checkmate against similarly ranked engines, particularly in critical scenarios. Furthermore, the bot has an opening book based on common grandmaster openings, showcasing versatility and ensuring a dynamic and strategic approach right from the outset
The team explained their concepts clearly and made excellent use of their resources.
View the GitHub.
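The search ideas above (alpha-beta pruning, move ordering, iterative deepening, and a position cache) can be sketched compactly. The sketch below assumes the python-chess library and a bare material evaluation; the piece values, default depth, and cache scheme are illustrative simplifications, not the Grob Bot's actual implementation.

```python
import chess  # python-chess

# Simple material values; a real evaluation weighs much more than raw material.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> float:
    """Material balance from the side-to-move's perspective (negamax style)."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def ordered_moves(board: chess.Board):
    """Guess promising moves first (captures, checks) so pruning cuts off sooner."""
    def priority(move: chess.Move):
        return (board.is_capture(move), board.gives_check(move))
    return sorted(board.legal_moves, key=priority, reverse=True)

cache: dict = {}  # "cache board": (position, depth) -> score, a simplified transposition table

def alphabeta(board: chess.Board, depth: int, alpha: float, beta: float) -> float:
    key = (board.fen(), depth)
    if key in cache:                       # skip positions already evaluated at this depth
        return cache[key]
    if board.is_checkmate():
        return -float("inf")               # side to move has been mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf")
    for move in ordered_moves(board):
        board.push(move)
        score = -alphabeta(board, depth - 1, -beta, -alpha)
        board.pop()
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                  # prune: the opponent would avoid this line
            break
    cache[key] = best
    return best

def best_move(board: chess.Board, max_depth: int = 4) -> chess.Move:
    """Iterative deepening: re-search at increasing depths. A real engine would
    stop on a time limit and reuse the previous iteration's best move for ordering."""
    best = None
    for depth in range(1, max_depth + 1):
        best_score = -float("inf")
        for move in ordered_moves(board):
            board.push(move)
            score = -alphabeta(board, depth - 1, -float("inf"), float("inf"))
            board.pop()
            if score > best_score:
                best_score, best = score, move
    return best
```

Calling best_move(chess.Board()) would return a move for the starting position; the opening book and endgame logic described above would sit on top of a search along these lines.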
Sijia Deng MS'24, Yichun Huang MS'24, Shamsi Mumtahina Momo '27, and Marco Wong '26 developed a robust minimax algorithm, opting for a traditional approach after experimenting with a neural network. Their algorithm divides the game into three distinct phases of early, middle, and end, each assigned a search depth optimized for that phase. Their code is notably well structured, clean, and efficient, and the team demonstrated a clear understanding of their concepts and logic, effectively conveying their approach and contributing to the algorithm's overall transparency and effectiveness. View the GitHub.
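As one illustration of this phase-based idea, the sketch below (again assuming the python-chess library) classifies a position as early, middle, or end game and looks up a per-phase search depth. The thresholds and depths are assumptions for illustration, not the team's tuned values.

```python
import chess  # python-chess

# Per-phase search depths: illustrative values, not the team's actual numbers.
PHASE_DEPTH = {"early": 3, "middle": 4, "end": 6}

def game_phase(board: chess.Board) -> str:
    """Classify the position by move number and remaining non-pawn, non-king material."""
    if board.fullmove_number <= 10:
        return "early"
    heavy_pieces = sum(1 for p in board.piece_map().values()
                       if p.piece_type not in (chess.PAWN, chess.KING))
    return "end" if heavy_pieces <= 6 else "middle"

def search_depth(board: chess.Board) -> int:
    """Pick the depth the minimax search should use in this phase of the game."""
    return PHASE_DEPTH[game_phase(board)]
```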
Ben Kamen '24, MA'24, Isaac Berger '25, and Brandon Wu '25 created a search algorithm from scratch, departing from traditional methods to build a unique approach for evaluating the board state after each move. Their algorithm begins by assessing the square root of black material minus white material, with a recursive function to determine search depth. To keep computational costs down, the team prioritizes more "interesting" moves, such as checks and captures, while giving less attention to ordinary moves. This adjustment adds dynamism to the algorithm and streamlines the computation, making the search more efficient. View the GitHub.
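One reading of that evaluation scheme is a signed square root of the material difference, paired with extra weight on checks and captures when deciding where to search. The sketch below assumes the python-chess library; the exact formula, piece values, and weights are illustrative guesses rather than the team's code.

```python
import math
import chess  # python-chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board, color: chess.Color) -> int:
    """Total material value of one side's pieces."""
    return sum(PIECE_VALUES[p.piece_type]
               for p in board.piece_map().values() if p.color == color)

def evaluate(board: chess.Board) -> float:
    """Signed square root of (black material - white material); the square root
    compresses large material swings so no single capture dominates the score."""
    diff = material(board, chess.BLACK) - material(board, chess.WHITE)
    return math.copysign(math.sqrt(abs(diff)), diff)

def move_weight(board: chess.Board, move: chess.Move) -> int:
    """Give 'interesting' moves (checks and captures) more attention in the search
    and de-emphasize ordinary quiet moves."""
    weight = 1
    if board.is_capture(move):
        weight += 2
    if board.gives_check(move):
        weight += 2
    return weight
```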
Finalists, in no particular order