In this project, we propose a reinforcement-learning-based framework to assist urban planners in the complex task of optimizing the spatial design of urban communities. Our model generates land-use and road layouts with superior spatial efficiency, and improves the productivity of human planners through a human-AI collaborative workflow.
This project was initially described in the research article in Nature Computational Science:
Yu Zheng, Yuming Lin, Liang Zhao, Tinghai Wu, Depeng Jin, Yong Li. Spatial planning of urban communities via deep reinforcement learning. Nat Comput Sci (2023). https://doi.org/10.1038/s43588-023-00503-5
Full text (PDF) is available at this link.
- Tested OS: Linux
- Python >= 3.8
- PyTorch >= 1.8.1, <= 1.13.0
- Install PyTorch with the correct CUDA version.
- Set the following environment variable to avoid thread oversubscription during multiprocess trajectory sampling:
export OMP_NUM_THREADS=1
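The same setting can also be applied from Python before any numerical libraries are imported; a minimal sketch (the variable name comes from OpenMP, everything else here is illustrative):

```python
import os

# Set OMP_NUM_THREADS before importing numerical libraries so that each
# trajectory-sampling process keeps a single OpenMP thread instead of
# oversubscribing the CPU. setdefault keeps any value already exported.
os.environ.setdefault("OMP_NUM_THREADS", "1")
```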
The data used for training and evaluation can be found in urban_planning/cfg/test_data. We provide all three scenarios used in our paper: one synthetic grid community in urban_planning/cfg/test_data/synthetic, and two real-world communities, HLG and DHM, with and without planning concepts, in urban_planning/cfg/test_data/real. The data for the real-world communities are collected from the widely used OpenStreetMap (OSM) using OSMnx. For each case, we provide the following data:
- init_plan.pickle: the initial conditions of the community as a geopandas.GeoDataFrame, including the initial land blocks, roads, and junctions.
- objectives.yaml: the planning objectives (requirements) of the community in YAML form, including the required number/area of different functionalities and the minimum/maximum area of each land-use type.
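As a sketch, the pickled plan can be loaded with the standard library (restoring the GeoDataFrame requires geopandas to be importable in your environment); the helper name below is our own, not part of the repo:

```python
import pickle

def load_initial_plan(path):
    """Load init_plan.pickle, which stores a pickled geopandas.GeoDataFrame.

    pickle.load restores the GeoDataFrame as long as geopandas is installed;
    the function itself works for any pickled object.
    """
    with open(path, "rb") as f:
        return pickle.load(f)
```

objectives.yaml can be read analogously with PyYAML's `yaml.safe_load`.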
The figure below illustrates the initial conditions of the three scenarios.
With the initial conditions and planning objectives, the agent generates millions of candidate spatial plans for the community in real time during training, which are stored in a replay buffer for learning.
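The project's actual buffer lives in its training code; as a generic illustration of the idea only (not this repo's implementation), a replay buffer is bounded storage with random minibatch sampling:

```python
import random
from collections import deque

class ReplayBuffer:
    """Generic bounded replay buffer: stores transitions, samples minibatches."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transitions automatically.
        self.storage = deque(maxlen=capacity)

    def push(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        # Uniform sampling without replacement within one minibatch.
        return random.sample(list(self.storage), min(batch_size, len(self.storage)))
```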
You can train your own models using the provided config in urban_planning/cfg/exp_cfg/real.
For example, to train a model for the HLG community, run:
python3 -m urban_planning.train --cfg hlg --global_seed 111
You can replace hlg with dhm to train a model for the DHM community.
To train a model with planning concepts for the HLG community, run:
python3 -m urban_planning.train --cfg hlg_concept --global_seed 111
You can replace hlg_concept with dhm_concept to train for the DHM community.
Once training starts, you will see running logs like those shown in the demo video running_code.mp4.
You can visualize the generated spatial plans using the provided notebook in demo.
To evaluate the centralized heuristic, run:
python3 -m urban_planning.eval --cfg hlg --global_seed 111 --agent rule-centralized
To evaluate the decentralized heuristic, run:
python3 -m urban_planning.eval --cfg hlg --global_seed 111 --agent rule-decentralized
To evaluate the geometric set-coverage adapted baseline, run:
python3 -m urban_planning.eval --cfg hlg --global_seed 111 --agent gsca
To evaluate the GA baseline, run:
python3 -m urban_planning.train_ga --cfg hlg --global_seed 111
python3 -m urban_planning.eval --cfg hlg --global_seed 111 --agent ga
You can replace hlg with dhm to evaluate on the DHM community.
We provide the final plans (geojson format) generated by our model for the HLG and DHM communities in results.
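Since the final plans are plain GeoJSON, they can be inspected with the standard library alone. The property key "type" used below to tag land use is an assumption about the files' schema; adjust it to the actual property names:

```python
import json

def count_land_uses(geojson_path):
    """Count features per land-use label in a GeoJSON FeatureCollection.

    The "type" property key is hypothetical; check the actual schema of
    the provided result files and adapt accordingly.
    """
    with open(geojson_path) as f:
        collection = json.load(f)
    counts = {}
    for feature in collection.get("features", []):
        label = feature.get("properties", {}).get("type", "unknown")
        counts[label] = counts.get(label, 0) + 1
    return counts
```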
We conduct experiments on the HLG and DHM communities, two real-world communities in Beijing, China. You can also change the base map to your own community: we provide an example in extra showing how to prepare base-map data for another community in Huizhou, China.
If you use this code in your project, please consider citing the following paper:
@article{zheng_spatial_2023,
title = {Spatial planning of urban communities via deep reinforcement learning},
volume = {3},
issn = {2662-8457},
url = {https://doi.org/10.1038/s43588-023-00503-5},
doi = {10.1038/s43588-023-00503-5},
number = {9},
journal = {Nature Computational Science},
author = {Zheng, Yu and Lin, Yuming and Zhao, Liang and Wu, Tinghai and Jin, Depeng and Li, Yong},
month = sep,
year = {2023},
pages = {748--762},
}
Please see the license for further details.
The implementation is based on Transform2Act and circuit_training.
Urban planning is a long-standing problem, and researchers have devoted decades of effort to developing computational models and support tools to automate parts of its process. Fully automated spatial layout only became feasible recently, with the latest advances in artificial intelligence and especially deep reinforcement learning. Our proposed DRL approach is inspired by, and takes a small step beyond, the planning support tools developed over the past few decades. Here, we summarize these existing planning support tools that utilize AI to facilitate urban planning.