Spatio-temporal Language Grounding

Codebase for the paper Lang2LTL-2: Grounding Spatiotemporal Navigation Commands Using Large Language and Vision-Language Models (project website).

Installation

conda create -n ground python=3.9 dill matplotlib plotly scipy scikit-learn utm
conda activate ground
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia  # GPU
conda install pytorch torchdata -c pytorch  # CPU
conda install -c conda-forge pyproj  # coordinate projections
conda install -c conda-forge spot  # LTL and omega-automata manipulation
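
To verify the environment, the following sanity check can be run (a minimal sketch; it assumes only the packages installed above):

# Confirm the key dependencies import and work.
import dill, utm, pyproj
import torch
import spot

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("Spot parses LTL:", spot.formula("F(a & F b)"))  # parse a simple LTL formula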

Data

Please download the data from Google Drive.

Generate Synthetic Dataset for Evaluation

python synthetic_dataset.py --location <LOCATION>
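
For example, assuming providence is among the location names included in the downloaded data (the valid names depend on the dataset):

python synthetic_dataset.py --location providence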

Lifted Command Translation Module

Please download the finetuned T5-base model weights from Google Drive.
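
A minimal sketch of using the weights with Hugging Face Transformers to translate a lifted command (the checkpoint directory and the lifted-command format below are assumptions, not the repository's exact interface):

# Translate a lifted command to a temporal-logic formula with the finetuned T5-base.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_dir = "model/t5-base-lifted"  # hypothetical path to the downloaded weights
tokenizer = T5Tokenizer.from_pretrained(model_dir)
model = T5ForConditionalGeneration.from_pretrained(model_dir)

lifted_command = "go to a then go to b"  # referring expressions replaced by placeholders
inputs = tokenizer(lifted_command, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. an LTL formula string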

Citation

@inproceedings{liu2024lang2ltl2,
  title     = {Lang2LTL-2: Grounding Spatiotemporal Navigation Commands Using Large Language and Vision-Language Models},
  author    = {Liu, Jason Xinyu and Shah, Ankit and Konidaris, George and Tellex, Stefanie and Paulius, David},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2024},
  url       = {https://arxiv.org/abs/},
}
