Video super-resolution aims to reconstruct high-resolution videos with sharper detail from low-resolution inputs. Existing methods typically capture inter-frame information with optical flow, which assumes linear motion and is sensitive to rapid lighting changes. Event cameras are a novel type of sensor that asynchronously output event streams with high temporal resolution, capturing nonlinear motion and remaining robust to lighting changes. Inspired by these characteristics, we propose an Event-driven Bidirectional Video Super-Resolution (EBVSR) framework. First, we propose an event-assisted temporal alignment module that derives nonlinear motion from events to align adjacent frames, complementing flow-based alignment. Second, we build an event-based frame synthesis module that improves the network's robustness to lighting changes through a bidirectional cross-modal fusion design. Experimental results on synthetic and real-world datasets demonstrate the superiority of our method.
The network architecture is implemented in EBVSR_arch.py.
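For orientation, below is a minimal, hypothetical PyTorch sketch of the bidirectional, event-driven pipeline described above. All module names, tensor shapes, and layer choices (EventAlign, EBVSRSketch, the residual convolutions) are illustrative assumptions and do not reproduce the released EBVSR_arch.py.

```python
import torch
import torch.nn as nn

class EventAlign(nn.Module):
    """Hypothetical event-assisted temporal alignment: refines propagated
    features using event voxel grids, which encode the (possibly nonlinear)
    motion between adjacent frames."""
    def __init__(self, channels, event_bins):
        super().__init__()
        self.residual = nn.Conv2d(channels + event_bins, channels, 3, padding=1)

    def forward(self, prev_feat, events):
        # Predict a motion-aware residual from features concatenated with events.
        return prev_feat + self.residual(torch.cat([prev_feat, events], dim=1))

class EBVSRSketch(nn.Module):
    """Illustrative bidirectional recurrence: features are propagated forward
    and backward through the sequence, aligned with events in each direction,
    then fused across the two directions before upsampling."""
    def __init__(self, channels=64, event_bins=5, scale=4):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        self.align = EventAlign(channels, event_bins)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, frames, events_fw, events_bw):
        # frames: (B, T, 3, H, W); events_*: (B, T-1, event_bins, H, W)
        feats = [self.embed(frames[:, t]) for t in range(frames.size(1))]
        fw = [feats[0]]                      # forward pass over time
        for t in range(1, len(feats)):
            fw.append(feats[t] + self.align(fw[-1], events_fw[:, t - 1]))
        bw = [feats[-1]]                     # backward pass over time
        for t in range(len(feats) - 2, -1, -1):
            bw.insert(0, feats[t] + self.align(bw[0], events_bw[:, t]))
        # Cross-directional fusion, then pixel-shuffle upsampling per frame.
        return torch.stack(
            [self.upsample(self.fuse(torch.cat([f, b], dim=1))) for f, b in zip(fw, bw)],
            dim=1,
        )
```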
git clone https://github.com/DachunKai/EBVSR
cd EBVSR
pip install -r requirements.txt
python setup.py develop
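Before downloading data and models, a quick sanity check (our suggestion, not part of the official setup) that PyTorch and a CUDA device are visible, since the distributed test script below requires a GPU:

```python
import torch
# The dist_test.sh script below launches a distributed run and needs CUDA.
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```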
We conduct experiments on both synthetic datasets (Vid4, Vimeo-90K-T) and the real-world CED dataset, which provides well-aligned RGB-event pairs. For the synthetic datasets, we generate events with the vid2e event simulator.
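As an illustration of that event-generation step, the sketch below uses the esim_torch package from the vid2e repository. The class name, argument names, and threshold values are assumptions based on that repository's examples; check its documentation before use.

```python
import numpy as np
import torch
import esim_torch  # assumed: the CUDA event simulator shipped with vid2e

# Assumed interface: contrast thresholds plus a refractory period, consuming
# log-intensity frames with nanosecond timestamps (requires a CUDA device).
esim = esim_torch.ESIM(contrast_threshold_neg=0.2,
                       contrast_threshold_pos=0.2,
                       refractory_period_ns=0)

frames = np.random.rand(16, 180, 240).astype("float32")  # stand-in grayscale clip
log_frames = torch.from_numpy(np.log(frames + 1e-5)).cuda()
timestamps_ns = (torch.arange(16, dtype=torch.int64) * 10_000_000).cuda()  # 100 fps

events = esim.forward(log_frames, timestamps_ns)  # assumed: dict of x, y, t, p tensors
```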
Download the pretrained models from this link and place them at experiments/pretrained_models/EBVSR/*.pth.
./scripts/dist_test.sh 1 options/test/EBVSR/*.yml
Here, the first argument is the number of GPUs to use and the second is the path to the test option file.
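The option files follow BasicSR's YAML layout. A trimmed, hypothetical example is shown below; the model type, dataset class, data roots, and metric settings are placeholders for illustration, not the contents of the shipped .yml files.

```yaml
# Hypothetical BasicSR-style test config (field names follow BasicSR conventions).
name: EBVSR_Vid4_x4
model_type: VideoRecurrentModel   # placeholder; see the shipped .yml
scale: 4
num_gpu: 1

datasets:
  test_1:
    name: Vid4
    type: VideoTestDataset        # placeholder dataset class
    dataroot_gt: datasets/Vid4/GT
    dataroot_lq: datasets/Vid4/BIx4

path:
  pretrain_network_g: experiments/pretrained_models/EBVSR/EBVSR_Vid4.pth  # placeholder

val:
  save_img: true
  metrics:
    psnr:
      type: calculate_psnr
      crop_border: 0
```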
@inproceedings{kai2023video,
  title={Video Super-Resolution Via Event-Driven Temporal Alignment},
  author={Kai, Dachun and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle={2023 IEEE International Conference on Image Processing (ICIP)},
  pages={2950--2954},
  year={2023},
  organization={IEEE}
}
If you have any questions, please feel free to contact [email protected].
This project is released under the Apache 2.0 license and is built on BasicSR, which is also Apache 2.0 licensed. Thanks to event_utils for the inspiration and code.