LED V0.1.0 Release
This release mainly stores the pre-trained models and related assets.
## Explanation of the pretrained models
The checkpoint naming convention consists of the following components: `{Method}_{Phase}_{Dataset}_{Camera Model}_{Training Setting}_Setting_Ratio{Range}`.
For the LED method, there are two phases in total: `Pretrain` and `Deploy`. A `Pretrain` checkpoint cannot be tested directly on any dataset; it is only used for subsequent fine-tuning. On the other hand, for all methods, `Deploy` means that the checkpoint contains parameters consistent with the UNet used in the SID method, making it directly suitable for testing.
Regarding the training setting, there are two mainstream settings: the ELD (CVPR20) setting and the PMN (MM22) setting. We represent them as "CVPR20" and "MM22", respectively.
e.g. `LED_Deploy_SID_SonyA7S2_CVPR20_Setting_Ratio100-300`:
- "LED_Deploy": This refers to the LED method in the "deploy" phase.
- "SID_SonyA7S2": This indicates that the testing is done on the SonyA7S2 subset of the SID dataset.
- "CVPR20_Setting": This means that the training strategy during the "pretrain" phase is the same as the one used in the "ELD (CVPR20)" setting.
- "Ratio100-300": This indicates the range of the ratio is from 100 to 300.
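The naming scheme above can be decoded mechanically. The following is a minimal sketch (the helper name and return structure are hypothetical, not part of the LED codebase) of how a checkpoint name splits into its components:

```python
# Hypothetical helper (not from the LED codebase): split a checkpoint name
# of the form {Method}_{Phase}_{Dataset}_{Camera Model}_{Training Setting}_Setting_Ratio{Range}
# into its labeled fields.
def parse_ckpt_name(name: str) -> dict:
    parts = name.split("_")
    method, phase, dataset, camera, setting = parts[:5]
    assert parts[5] == "Setting" and parts[6].startswith("Ratio"), "unexpected name format"
    lo, hi = parts[6][len("Ratio"):].split("-")
    return {
        "method": method,            # e.g. "LED"
        "phase": phase,              # "Pretrain" or "Deploy"
        "dataset": dataset,          # e.g. "SID"
        "camera": camera,            # e.g. "SonyA7S2"
        "setting": setting,          # "CVPR20" (ELD) or "MM22" (PMN)
        "ratio_range": (int(lo), int(hi)),
    }

info = parse_ckpt_name("LED_Deploy_SID_SonyA7S2_CVPR20_Setting_Ratio100-300")
# info["phase"] is "Deploy", info["ratio_range"] is (100, 300)
```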
## Explanation of the noisy pair generator
The naming convention consists of the following components: `{Type}_{Noise Model}_{Noise Type}_{Camera Model}`.
e.g. `VirtualNoisyPairGenerator_ELD_ptrqc_5VirtualCameras.pth` denotes the `VirtualNoisyPairGenerator` with the ELD noise model and shot (Poisson), read (Tukey lambda), row, and quant noise plus color bias (black level error). The checkpoint also contains 5 randomly sampled virtual cameras.