👁️ LENS - Locational Encoding with Neuromorphic Systems


This repository contains code for LENS - Locational Encoding with Neuromorphic Systems. LENS combines neuromorphic algorithms, sensors, and hardware to perform accurate, real-time robotic localization using visual place recognition (VPR). LENS can be used with the SynSense Speck2fDevKit board, which houses a SPECK™ dynamic vision sensor and neuromorphic processor for online VPR.

License and citation

This repository is licensed under the MIT License. If you use our code, please cite our arXiv paper:

@misc{hines2024lens,
      title={A compact neuromorphic system for ultra energy-efficient, on-device robot localization}, 
      author={Adam D. Hines and Michael Milford and Tobias Fischer},
      year={2024},
      eprint={2408.16754},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2408.16754}, 
}

Installation and setup

To run LENS, please download this repository and install the required dependencies.

Get the code

Get the code by cloning the repository.

git clone git@github.com:AdamDHines/LENS.git
cd LENS

Install dependencies

All dependencies can be installed from our conda-forge package, PyPI package, or local requirements.txt. For the conda-forge package, we recommend using micromamba or miniforge. Please ensure your Python version is <= 3.11.

conda package

# Create a new environment and install packages
micromamba create -n lens-vpr -c conda-forge lens-vpr

# samna package is not available on conda-forge, so pip install it
micromamba activate lens-vpr
pip install samna

pip

# Install from our PyPI package
pip install lens-vpr

# Install from local requirements.txt
pip install -r requirements.txt

Quick start

Get started using our pretrained models and datasets to evaluate the system. For a full guide on training and evaluating your own datasets, please visit our Wiki.

Run the inferencing model

To run a simulated event stream, you can try our pre-trained model and datasets. Using the --sim_mat and --matching flags will display a similarity matrix and perform Recall@N matching based on a ground truth matrix.

python main.py --sim_mat --matching
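The similarity matrix scores every query place against every reference place, and Recall@N counts a query as correctly localized if a ground-truth match appears among its top-N scoring references. As a rough illustration of that metric only (not LENS's actual implementation), a minimal NumPy sketch:

```python
import numpy as np

def recall_at_n(similarity, ground_truth, n_values=(1, 5, 10)):
    """Recall@N: a query counts as correct if any of its top-N
    most similar references is a ground-truth match."""
    recalls = {}
    num_queries = similarity.shape[1]
    for n in n_values:
        correct = 0
        for q in range(num_queries):
            # Indices of the N highest-scoring references for query q
            top_n = np.argsort(similarity[:, q])[::-1][:n]
            if ground_truth[top_n, q].any():
                correct += 1
        recalls[n] = correct / num_queries
    return recalls

# Toy example: 4 references x 3 queries, query i matches reference i
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.8, 0.1],
                [0.1, 0.3, 0.7],
                [0.3, 0.2, 0.6]])
gt = np.eye(4, 3, dtype=bool)
print(recall_at_n(sim, gt, n_values=(1,)))  # {1: 1.0}
```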

Train a new model

New models can be trained by passing the --train_model flag. Try training a new model with our provided reference dataset.

# Train a new model
python main.py --train_model

Optimize network hyperparameters

For new models on custom datasets, you can optimize your network hyperparameters using Weights & Biases through our convenient optimizer.py script.

# Optimize network hyperparameters
python optimizer.py
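Weights & Biases sweeps are typically driven by a configuration that names the search method, the metric to optimize, and the hyperparameters to explore. The parameter names below are purely illustrative and are not necessarily the ones optimizer.py expects; a hypothetical sweep configuration might look like:

```yaml
# Hypothetical W&B sweep configuration (parameter names are illustrative)
method: bayes
metric:
  name: recall_at_1
  goal: maximize
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  firing_threshold:
    values: [0.25, 0.5, 0.75]
```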

For more details, please visit the Wiki.

Deployment on neuromorphic hardware

If you have a SynSense Speck2fDevKit, you can try out LENS using our pre-trained model and datasets by deploying simulated event streams on-chip.

# Generate a timebased simulation of event streams with pre-recorded data
python main.py --simulated_speck --sim_mat --matching
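A time-based simulation replays pre-recorded data to the chip as a stream of events. To illustrate the general idea behind event cameras (this toy sketch is not LENS's replay code), ON/OFF events can be derived from intensity differences between consecutive frames:

```python
import numpy as np

def frames_to_events(frames, threshold=0.1):
    """Toy DVS model: emit an event (t, x, y, polarity) wherever the
    intensity change between consecutive frames exceeds the threshold."""
    events = []
    prev = frames[0].astype(np.float32)
    for t in range(1, len(frames)):
        cur = frames[t].astype(np.float32)
        diff = cur - prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            # polarity 1 = brightness increase (ON), 0 = decrease (OFF)
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else 0))
        prev = cur
    return events

# Two 2x2 frames: one pixel brightens between frames -> one ON event
frames = np.zeros((2, 2, 2))
frames[1, 0, 1] = 1.0
print(frames_to_events(frames))  # [(1, 1, 0, 1)]
```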

Additionally, models can be deployed onto the Speck2fDevKit for low-latency, energy-efficient VPR with sequence matching in real time. Use the --event_driven flag to start the online inferencing system.

# Run the online inferencing model
python main.py --event_driven

For more details on deployment to the Speck2fDevKit, please visit the Wiki.

Issues, bugs, and feature requests

If you encounter problems whilst running the code or if you have a suggestion for a feature or improvement, please report it as an issue.