Sudarshan Rajagopalan | Vishal M. Patel
Illustration of AWRaCLe: Our visual in-context learning approach for all-weather image restoration. Given a context pair (first two rows), AWRaCLe extracts relevant degradation context from it to restore a query image. Our method also performs selective removal of haze and snow from an image containing their mixture as shown in (d) and (e).
Abstract: All-Weather Image Restoration (AWIR) under adverse weather conditions is a challenging task due to the presence of different types of degradations. Prior research in this domain relies on extensive training data but lacks the utilization of additional contextual information for restoration guidance. Consequently, the performance of existing methods is limited by the degradation cues that are learnt from individual training samples. Recent advancements in visual in-context learning have introduced generalist models that are capable of addressing multiple computer vision tasks simultaneously by using the information present in the provided context as a prior. In this paper, we propose All-Weather Image Restoration using Visual In-Context Learning (AWRaCLe), a novel approach for AWIR that innovatively utilizes degradation-specific visual context information to steer the image restoration process. To achieve this, AWRaCLe incorporates Degradation Context Extraction (DCE) and Context Fusion (CF) to seamlessly integrate degradation-specific features from the context into an image restoration network. The proposed DCE and CF blocks leverage CLIP features and incorporate attention mechanisms to adeptly learn and fuse contextual information. These blocks are specifically designed for visual in-context learning under all-weather conditions and are crucial for effective context utilization. Through extensive experiments, we demonstrate the effectiveness of AWRaCLe for all-weather restoration and show that our method advances the state-of-the-art in AWIR.
AWRaCLe integrates degradation-specific information from a context pair to facilitate the image restoration process. Initially, CLIP features are extracted from the context pair and fed into Degradation Context Extraction (DCE) blocks at various levels of the decoder within the image restoration network. The Context Fusion (CF) blocks then fuse the degradation information obtained from the DCE blocks with the decoder features of the query image requiring restoration. Finally, the restored image is generated.
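The fusion step described above can be sketched as a simple cross-attention operation in which decoder features of the query image attend to context-pair features. This is a minimal illustrative sketch only, not the actual DCE/CF implementation; all names here are hypothetical, and NumPy stands in for the real network code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def context_fusion(decoder_feats, context_feats):
    """Toy cross-attention: query-image decoder tokens attend to context tokens.

    decoder_feats: (N, d) decoder features of the query image
    context_feats: (M, d) CLIP-derived features of the context pair
    """
    d = decoder_feats.shape[-1]
    # Queries come from the decoder, keys/values from the context
    attn = softmax(decoder_feats @ context_feats.T / np.sqrt(d))
    fused = attn @ context_feats
    # Residual fusion keeps the original decoder information
    return decoder_feats + fused

dec = np.random.randn(16, 64)  # 16 spatial tokens, 64-dim
ctx = np.random.randn(4, 64)   # 4 context tokens
out = context_fusion(dec, ctx)
print(out.shape)  # (16, 64)
```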
Clone the repository, create a new conda environment with Python 3.9, and install the requirements:
```shell
git clone https://github.com/sudraj2002/AWRaCLe.git
cd AWRaCLe
conda create -n awracle python=3.9 -y
conda activate awracle
pip install -r requirements.txt
```
Download the training and test data from here and extract it to `<data_directory>`.
The dataset structure should look like:

```
<data_directory>
└── data_awracle
    ├── CSD
    ├── Rain13K
    ├── RESIDE
    ├── Snow100k
    ├── Train
    └── Train_clip
```
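Before training, you can sanity-check the extracted layout with a small script. This is an optional helper, not part of the repository; the folder names below simply mirror the tree above.

```python
import os

# Expected sub-folders under <data_directory>/data_awracle (from the tree above)
EXPECTED = ["CSD", "Rain13K", "RESIDE", "Snow100k", "Train", "Train_clip"]

def missing_datasets(data_directory):
    """Return the expected dataset sub-folders that are missing under data_awracle."""
    root = os.path.join(data_directory, "data_awracle")
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
```

Calling `missing_datasets("<data_directory>")` should return an empty list once the extraction is complete.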
Download the pre-trained model from here and move it to the working directory.
To train the model from scratch on the datasets mentioned in the paper:
```shell
bash train.sh
```
Specify the arguments `--derain_dir`, `--dehaze_dir`, and `--desnow_dir` according to your `<data_directory>`. Additional arguments can be found in the `options.py` file.
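As a rough illustration of how such dataset-path arguments are typically declared (the authoritative definitions live in `options.py`; the exact names, help strings, and structure here are assumptions):

```python
import argparse

def build_parser():
    """Hypothetical sketch of the dataset-path options; see options.py for the real list."""
    parser = argparse.ArgumentParser(description="AWRaCLe training options (sketch)")
    parser.add_argument("--derain_dir", type=str,
                        help="path to the deraining data under <data_directory>")
    parser.add_argument("--dehaze_dir", type=str,
                        help="path to the dehazing data under <data_directory>")
    parser.add_argument("--desnow_dir", type=str,
                        help="path to the desnowing data under <data_directory>")
    return parser

args = build_parser().parse_args(
    ["--derain_dir", "/data/data_awracle/Rain13K",
     "--dehaze_dir", "/data/data_awracle/RESIDE",
     "--desnow_dir", "/data/data_awracle/Snow100k"]
)
print(args.desnow_dir)  # /data/data_awracle/Snow100k
```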
After training or when using a pre-trained model, run the test script:
```shell
bash test.sh
```
Make sure to modify `<data_directory>` accordingly.
For training and testing on custom data, see custom.
If you find our work useful, please consider citing:
```bibtex
@article{rajagopalan2024awracle,
  title={AWRaCLe: All-Weather Image Restoration using Visual In-Context Learning},
  author={Sudarshan Rajagopalan and Vishal M. Patel},
  journal={arXiv preprint arXiv:2409.00263},
  year={2024}
}
```
Our code builds on the codebase of PromptIR. We thank the authors for sharing their code!