# [EvTexture (ICML 2024)](https://icml.cc/virtual/2024/poster/34032)
Official PyTorch implementation of the paper "EvTexture: Event-driven Texture Enhancement for Video Super-Resolution" (ICML 2024).

<p align="center">
🌐 <a href="https://dachunkai.github.io/evtexture.github.io/" target="_blank">Project</a> | 📃 <a href="https://drive.google.com/file/d/1RWptb35a-z-hwc3gZZY-FPd_G8g8Up1d/view?usp=sharing" target="_blank">Paper</a> | 🖼️ <a href="https://docs.google.com/presentation/d/1nbDb39TFb374DzBwdz5v20kIREUA0nBH/edit?usp=sharing" target="_blank">Poster</a> <br>
</p>

This repository contains *video demos* and *code* for our work.

**Authors**: [Dachun Kai](https://github.com/DachunKai/)<sup>[:email:](mailto:[email protected])</sup>, Jiayao Lu, [Yueyi Zhang](https://scholar.google.com.hk/citations?user=LatWlFAAAAAJ&hl=zh-CN&oi=ao)<sup>[:email:](mailto:[email protected])</sup>, [Xiaoyan Sun](https://scholar.google.com/citations?user=VRG3dw4AAAAJ&hl=zh-CN), *University of Science and Technology of China*

**Feel free to ask questions. If our work helps, please don't hesitate to give us a :star:!**
pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
git clone https://github.com/DachunKai/EvTexture.git
cd EvTexture && pip install -r requirements.txt && python setup.py develop
```
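The pinned wheels above only install if their tags match your interpreter: a `cp37` wheel, for instance, requires Python 3.7 on 64-bit Linux. A minimal sketch of checking this before downloading (`parse_wheel_tags` is a hypothetical helper for illustration, not part of this repo):

```python
import sys

def parse_wheel_tags(filename: str):
    """Split a wheel filename into (version, python_tag, platform_tag).

    Wheel names follow {dist}-{version}-{python tag}-{abi tag}-{platform}.whl.
    """
    stem = filename[: -len(".whl")]
    dist, version, py_tag, abi_tag, platform_tag = stem.split("-")
    return version, py_tag, platform_tag

version, py_tag, platform_tag = parse_wheel_tags(
    "torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl"
)
# Tag of the interpreter actually running, e.g. "cp37" on Python 3.7.
current = f"cp{sys.version_info.major}{sys.version_info.minor}"
if py_tag != current:
    print(f"wheel is built for {py_tag}, but this interpreter is {current}")
```

The CUDA tag (`cu111`) must likewise match a driver capable of running CUDA 11.1.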
* Run in Docker :clap:

Note: before running the Docker image, make sure to install the NVIDIA Container Toolkit (nvidia-docker) by following the [official instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).

[Option 1] Directly pull the published Docker image we have provided from [Alibaba Cloud](https://cr.console.aliyun.com/cn-hangzhou/instances).
```bash
docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest
```

[Option 2] We also provide a [Dockerfile](https://github.com/DachunKai/EvTexture/blob/main/docker/Dockerfile) that you can use to build the image yourself.
```bash
cd EvTexture && docker build -t evtexture ./docker
```
source activate evtexture && cd EvTexture && python setup.py develop
```
### Test
1. Download the pretrained models from [Onedrive](https://1drv.ms/f/c/2d90e71fb9eb254f/EnMm8c2mP_FPv6lwt1jy01YB6bQhoPQ25vtzAhycYisERw?e=DiI2Ab) / [Google Drive](https://drive.google.com/drive/folders/1oqOAZbroYW-yfyzIbLYPMJ2ZQmaaCXKy?usp=sharing) / [Baidu Cloud](https://pan.baidu.com/s/161bfWZGVH1UBCCka93ImqQ?pwd=n8hg) (extraction code: `n8hg`) and place them in `experiments/pretrained_models/EvTexture/`. The network architecture is defined in [evtexture_arch.py](https://github.com/DachunKai/EvTexture/blob/main/basicsr/archs/evtexture_arch.py).
* *EvTexture_REDS_BIx4.pth*: trained on REDS dataset with BI degradation for $4\times$ SR scale.
* *EvTexture_Vimeo90K_BIx4.pth*: trained on Vimeo-90K dataset with BI degradation for $4\times$ SR scale.
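The checkpoint names encode how each model was trained. A small illustrative parser (the helper below is hypothetical and only mirrors the naming scheme of the two files above):

```python
def parse_checkpoint_name(name: str):
    """Recover (model, dataset, degradation, scale) from a checkpoint name
    such as 'EvTexture_REDS_BIx4.pth'."""
    stem = name.rsplit(".", 1)[0]           # drop the ".pth" suffix
    model, dataset, tail = stem.split("_")  # e.g. "EvTexture", "REDS", "BIx4"
    degradation, scale = tail.split("x")    # "BI" degradation, 4x SR scale
    return model, dataset, degradation, int(scale)

print(parse_checkpoint_name("EvTexture_REDS_BIx4.pth"))
# ('EvTexture', 'REDS', 'BI', 4)
```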

2. Download the preprocessed test sets (including events) for REDS4 and Vid4 from [Onedrive](https://1drv.ms/f/c/2d90e71fb9eb254f/EnMm8c2mP_FPv6lwt1jy01YB6bQhoPQ25vtzAhycYisERw?e=DiI2Ab) / [Google Drive](https://drive.google.com/drive/folders/1oqOAZbroYW-yfyzIbLYPMJ2ZQmaaCXKy?usp=sharing) / [Baidu Cloud](https://pan.baidu.com/s/161bfWZGVH1UBCCka93ImqQ?pwd=n8hg) (extraction code: `n8hg`) and place them in `datasets/`.
* *Vid4_h5*: HDF5 files containing preprocessed test datasets for Vid4.

* *REDS4_h5*: HDF5 files containing preprocessed test datasets for REDS4.
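Each clip in these archives is stored as HDF5 datasets. A hedged sketch of inspecting such a file with `h5py` (the toy group names `images/...` and `voxels_f/...` below are assumptions for illustration, not the documented layout of the released files; run the same `visititems` walk on the real files to see their actual structure):

```python
import h5py
import numpy as np

# Build a toy file mimicking a preprocessed clip: image frames plus event
# voxel grids (names are assumptions for this sketch).
with h5py.File("toy_clip.h5", "w") as f:
    f.create_dataset("images/000000",
                     data=np.zeros((3, 180, 320), dtype=np.uint8))
    f.create_dataset("voxels_f/000000",
                     data=np.zeros((5, 180, 320), dtype=np.float32))

# Walk the file and print every dataset's path, shape, and dtype.
with h5py.File("toy_clip.h5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```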

3. Run the following command:
* Test on Vid4 for 4x VSR:
```bash
./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml
```
* Test on REDS4 for 4x VSR:
```bash
./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_REDS4_BIx4.yml
```
This will generate the inference results in `results/`. Each inference frame is named `f"{frame_index:06d}_{PSNR:.4f}_EvTexture_{dataset}_BIx4.png"`. The output results on REDS4 and Vid4 can also be downloaded from [Onedrive](https://1drv.ms/f/c/2d90e71fb9eb254f/EnMm8c2mP_FPv6lwt1jy01YB6bQhoPQ25vtzAhycYisERw?e=DiI2Ab) / [Google Drive](https://drive.google.com/drive/folders/1oqOAZbroYW-yfyzIbLYPMJ2ZQmaaCXKy?usp=sharing) / [Baidu Cloud](https://pan.baidu.com/s/161bfWZGVH1UBCCka93ImqQ?pwd=n8hg) (extraction code: `n8hg`).
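This naming convention makes per-frame quality easy to recover from a results folder. An illustrative sketch (`frame_name` and `psnr_of` are hypothetical helpers, not part of the repo):

```python
def frame_name(frame_index: int, psnr: float, dataset: str) -> str:
    """Build an output frame name following the convention above."""
    return f"{frame_index:06d}_{psnr:.4f}_EvTexture_{dataset}_BIx4.png"

def psnr_of(name: str) -> float:
    """Parse the PSNR value back out of an output frame name."""
    return float(name.split("_")[1])

print(frame_name(0, 27.3512, "Vid4"))  # 000000_27.3512_EvTexture_Vid4_BIx4.png

# Average PSNR over a clip's frames:
names = [frame_name(i, 25.0 + i, "REDS4") for i in range(4)]
avg = sum(psnr_of(n) for n in names) / len(names)
```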


## :blush: Citation
