
Double Experience Replay (DER)

PyTorch implementation of Double Experience Replay (DER).

This method mixes two strategies for sampling the experiences stored in the replay buffer.
Any pair of strategies can be chosen; in the paper we use a temporal-difference (TD) error based sampling strategy and a uniform sampling strategy.

Contents

This implementation contains:

Simulation of Urban MObility (SUMO)

  • Lane Change Environment
  • Ring Network Environment

**Note:** The YeongDong Bridge environment is not included in this repository.

Method

We mix a uniform sampling strategy with a TD-error based sampling strategy when drawing training batches.
As the training algorithm we use a Deep Q-Network (DQN). A minimal sketch of the mixed sampling is shown below.
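
The repository's actual code is not reproduced here; the following is only an illustrative sketch of the idea. It assumes (hypothetically) that a fixed fraction `td_fraction` of each batch is drawn in proportion to absolute TD error and the remainder uniformly at random; the class and method names are made up for this example and are not the repository's API.

```python
from collections import deque

import numpy as np


class DoubleExperienceReplay:
    """Replay buffer mixing two sampling strategies:
    TD-error based (prioritized) sampling and uniform sampling."""

    def __init__(self, capacity, td_fraction=0.5):
        self.buffer = deque(maxlen=capacity)      # stored transitions
        self.td_errors = deque(maxlen=capacity)   # |TD error| per transition
        self.td_fraction = td_fraction            # share of batch drawn by TD priority

    def push(self, transition, td_error=1.0):
        self.buffer.append(transition)
        self.td_errors.append(abs(td_error) + 1e-6)  # offset avoids zero priority

    def sample(self, batch_size):
        n_td = int(batch_size * self.td_fraction)
        n_uniform = batch_size - n_td

        # Strategy 1: draw indices in proportion to |TD error|.
        priorities = np.asarray(self.td_errors, dtype=np.float64)
        probs = priorities / priorities.sum()
        td_idx = np.random.choice(len(self.buffer), size=n_td, p=probs)

        # Strategy 2: draw indices uniformly at random.
        uni_idx = np.random.choice(len(self.buffer), size=n_uniform)

        idx = np.concatenate([td_idx, uni_idx])
        return [self.buffer[i] for i in idx], idx

    def update_td_errors(self, indices, new_errors):
        # Called after a DQN update so priorities stay current.
        for i, err in zip(indices, new_errors):
            self.td_errors[i] = abs(err) + 1e-6
```

After each DQN update, the freshly computed TD errors for the sampled transitions would be written back via `update_td_errors`, so the priority-based half of the next batch reflects the current network.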

Requirements
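
The repository does not pin exact dependencies. Based on the description above, a minimal setup presumably needs Python 3, PyTorch, and SUMO with its Python bindings (TraCI); for example:

```bash
pip install torch traci sumolib
# SUMO itself is installed separately; its tools expect SUMO_HOME to be set:
export SUMO_HOME=/usr/share/sumo  # path varies by install
```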

Usage

To train the agent in the ring network environment:

```bash
cd ring
python ring.py
```

To train the agent in the lane change environment:

```bash
cd lanechange
python lane.py
```

Results

  • YeongDong Bridge Agent (LEFT, white car)
  • Lane Change Agent (RIGHT, white car)

  • YeongDong Bridge (LEFT)
  • Ring Network (RIGHT)