
# Chess_MDP

MDP & RL project

This is a course project on Markov Decision Processes (MDP) and Reinforcement Learning (RL).

Chess Automation using MDP, RL, Dynamic Programming, and ML

This repository contains the code for automating the game of Chess using Markov Decision Process (MDP), Reinforcement Learning (RL), Dynamic Programming, and Machine Learning (ML) techniques.

## Project Description

The goal of this project is to create a chess engine that can play at the level of an intermediate chess player. The engine uses the TD(λ) algorithm for learning and the minimax algorithm for move selection.
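As context for how TD(λ) drives the learning step, below is a minimal sketch of a tabular TD(λ) value update with eligibility traces. The state keys, reward signal, and hyperparameter values are illustrative assumptions, not the engine's actual implementation.

```python
# Minimal tabular TD(lambda) sketch with accumulating eligibility traces.
# State keys, rewards, and hyperparameters are placeholders for illustration.
from collections import defaultdict

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.99   # discount factor (assumed)
LAMBDA = 0.7   # trace-decay parameter (assumed)

values = defaultdict(float)   # V(s), e.g. keyed by a board hash
traces = defaultdict(float)   # eligibility trace e(s)

def td_lambda_update(state, reward, next_state, done):
    """Apply one TD(lambda) backup after observing (state, reward, next_state)."""
    target = reward + (0.0 if done else GAMMA * values[next_state])
    delta = target - values[state]
    traces[state] += 1.0                      # accumulate trace for the visited state
    for s in list(traces):
        values[s] += ALPHA * delta * traces[s]
        traces[s] *= GAMMA * LAMBDA           # decay every trace
        if traces[s] < 1e-6:
            del traces[s]                     # drop negligible traces
```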

## Collaborators

## Getting Started

To get started with this project, follow the instructions below:

  1. Clone the repository: `git clone https://github.com/Harshit-Dhanwalkar/Chess-MDP.git`
  2. Install the required dependencies: `pip install -r requirements.txt`
  3. Run the main script: `ChessEngine.py`

## Current Issues

  • The minimax algorithm is very slow. This prevented us from measuring the true strength of the engine. As of now, the best Elo rating we have observed is 1000 at a search depth of 2. The speed can be improved with alpha-beta pruning; see the sketch below.
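For reference, here is a minimal sketch of how alpha-beta pruning could be added to a plain minimax search. The `board.legal_moves()`, `board.push()`, `board.pop()`, `board.is_game_over()`, and `evaluate()` helpers are hypothetical stand-ins for whatever interface the engine exposes, not its real API.

```python
# Illustrative minimax with alpha-beta pruning.
# The board/evaluation helpers used here are hypothetical placeholders.
import math

def alphabeta(board, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Return the minimax value of `board`, pruning branches that cannot
    affect the final decision."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    if maximizing:
        best = -math.inf
        for move in board.legal_moves():
            board.push(move)
            best = max(best, alphabeta(board, depth - 1, alpha, beta, False))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cutoff: opponent will avoid this line
                break
        return best
    else:
        best = math.inf
        for move in board.legal_moves():
            board.push(move)
            best = min(best, alphabeta(board, depth - 1, alpha, beta, True))
            board.pop()
            beta = min(beta, best)
            if beta <= alpha:      # alpha cutoff: we already have a better option
                break
        return best
```

With pruning, branches that cannot influence the final move choice are skipped, which in practice allows searching to a greater depth in the same amount of time.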