The objective of Novel Class Discovery (NCD) is to classify samples from two disjoint but similar datasets, one labelled and the other unlabelled. Here we present some methods that tackle this task, together with our contributions: first, we further commented the original code; then, we ran experiments to check the behavior of these models in different settings.
This repository is a student project developed by Noemi Canovi, Kevin Depedri, Jacopo Donà and Mostafa Haggag for the "Trends and Applications of Computer Vision" course of the Master in Artificial Intelligence Systems at the University of Trento, a.y. 2022-2023.
We worked with the following two architectures:
- AutoNovel
- UNO
This README presents all the experiments that we carried out on both architectures. Inside each folder you can find the original README of that specific architecture, which explains its design and how to run the base code.
All the weights of the trained models resulting from the performed experiments can be found here.
Useful to understand the impact of different loss terms on the performance of the model.
- Open the `autonovel.py` file
- Comment out any of the loss terms to ignore it during training
- Run the `autonovel`-step using the pre-trained weights of the `supervised_learning`-step
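For reference, here is a minimal sketch of the idea, assuming a training loop where the total loss is an explicit sum of terms; the tensor names and loss terms below are illustrative stand-ins, not the exact variables used in `autonovel.py`:

```python
import torch
import torch.nn.functional as F

# Illustrative stand-ins for one batch; shapes and names are assumptions.
logits_lab = torch.randn(8, 5, requires_grad=True)   # labelled-head outputs
targets_lab = torch.randint(0, 5, (8,))              # ground-truth labels
pair_probs = torch.rand(8, requires_grad=True)       # pairwise similarity scores
pair_labels = (torch.rand(8) > 0.5).float()          # pseudo pair labels

loss = F.cross_entropy(logits_lab, targets_lab)                # supervised CE term
loss = loss + F.binary_cross_entropy(pair_probs, pair_labels)  # pairwise BCE term
# loss = loss + rampup_weight * consistency_loss               # commented out -> term ablated
loss.backward()  # only the remaining terms contribute gradients
```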
Useful to understand the impact of different `topk` values on the performance of the model.
- Open `scripts/auto_novel_IL_cifar10.sh`
- Change the `topk` parameter to the desired value
- Train the model
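For context, `topk` feeds AutoNovel's ranking statistics: two unlabelled samples are treated as a positive pair when the indices of their top-k feature activations coincide. Below is a minimal re-implementation of that idea (our own sketch, not the repo's code):

```python
import torch

def ranking_pair_labels(features: torch.Tensor, topk: int = 5) -> torch.Tensor:
    """Sketch of ranking statistics: mark a pair of samples as positive when
    their top-k feature dimensions coincide; a higher topk is stricter."""
    idx = torch.argsort(features, dim=1, descending=True)[:, :topk]  # top-k dims per sample
    idx, _ = torch.sort(idx, dim=1)             # sort so the comparison is order-insensitive
    same = (idx.unsqueeze(0) == idx.unsqueeze(1)).all(dim=2)  # pairwise set equality
    return same.float()                         # (B, B) matrix of pseudo pair labels

feats = torch.relu(torch.randn(4, 512))         # stand-in for post-ReLU features
print(ranking_pair_labels(feats, topk=5))
```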
Useful to understand the impact of the self-supervised learning step on the performance of the model.
- Open the `supervised_learning.py` file
- Comment lines 301-310:

  ```python
  model.load_state_dict(state_dict, strict=False)
  for name, param in model.named_parameters():
      # If the parameter under analysis does not belong to 'head' (one of the two heads)
      # or to 'layer4' (the feature layer before the two heads), freeze it. This ensures
      # that all parameters are frozen except for the two heads and the feature layer,
      # which we want to train.
      if 'head' not in name and 'layer4' not in name:
          param.requires_grad = False
  ```

- Run `supervised_learning.py`
- Run `autonovel.py`
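With lines 301-310 commented out, the backbone starts from random initialization and no parameter is frozen. A minimal sketch of the resulting setup, using a torchvision ResNet as a hypothetical stand-in for the repo's model:

```python
import torch
from torchvision.models import resnet18

# Stand-in backbone: no SSL checkpoint is loaded, weights stay randomly initialized.
model = resnet18(num_classes=5)

# With the freezing loop commented out, every parameter remains trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} trainable parameter tensors")

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
```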
Useful to understand the impact of different self-supervised learning techniques on the performance of the model.
The self-supervised learning step in the original paper is based on the official code of the RotNet paper. Here, we try other SSL models by exploiting solo-learn, a library of self-supervised methods.
The weights of the trained self-supervised models are provided by the solo-learn library.
- Run the following command to download the weights:

  ```bash
  sh scripts/load_SSL_weights.sh
  ```

- Open `supervised_learning.py` to find the following SSL methods: `Barlow_twins`, `simsiam`, `supcon`, `swav`, `vibcreg`, `vicreg`, `wmse`
- There are two flags, `New_SSL_methods` and `New_Resnet_config`, at lines 229 and 230, respectively:
  - `New_SSL_methods`: set it to True when using SSL methods other than RotNet
  - `New_Resnet_config`: indicates whether to use the AutoNovel ResNet or a ResNet architecture similar to the one of solo-learn (there is a small difference in performance between the two). When you load any of the solo-learn methods, the solo-learn-like ResNet is used automatically; when using RotNet, the choice is left to the user. Setting it to False means using the AutoNovel ResNet architecture
- Set the variable `ssl` at line 237 to the preferred method using one of these keys: `Barlow_twins`, `simsiam`, `supcon`, `swav`, `vibcreg`, `vicreg`, `wmse`
- Run `supervised_learning.py`
- Run `autonovel.py`
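For intuition, here is a hedged sketch of what loading a solo-learn backbone into a torchvision ResNet can look like; the checkpoint path, the `state_dict` key and the `backbone.` prefix handling are assumptions about the downloaded files, so check `supervised_learning.py` for the exact logic:

```python
import torch
from torchvision.models import resnet18

model = resnet18()  # stand-in for the backbone used by the repo

# solo-learn checkpoints typically store weights under "state_dict" and prefix
# backbone parameters with "backbone."; both details are assumptions here.
ckpt = torch.load("data/ssl_weights/barlow_twins.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)
state_dict = {k.replace("backbone.", ""): v for k, v in state_dict.items()}

# strict=False skips projector/predictor heads that have no match in the backbone
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")
```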
Useful to understand the impact of domain shift on the performance of the model.
Here, the CIFAR-10 dataset is automatically corrupted using Gaussian noise.
- a. Run `selfsupervised_learning.py` and `supervised_learning_cifar10c.py` with CIFAR-10-C by passing `cifar10c` as `dataset_name`, OR b. download the weights of the self-supervised and supervised training by running `sh scripts/download_cifar_c_weights.sh`
- Run:

  ```bash
  # cifar-c
  CUDA_VISIBLE_DEVICES=0 sh scripts/auto_novel_cifar10c.sh ./data/datasets/CIFAR/ ./data/experiments/ ./data/experiments/cifar_c/ ./data/experiments/cifar_c/supervised_learning/resnet_rotnet_cifar10_c.pth
  ```

To evaluate, run:

```bash
# cifar-c
CUDA_VISIBLE_DEVICES=0 python auto_novel_cifar10_c.py --mode test --dataset_name cifar10 --model_name resnet_IL_cifar10_c --exp_root ./data/experiments/pretrained/
```
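For reference, a minimal sketch of this kind of Gaussian-noise corruption (the severity value below is an assumption; CIFAR-10-C defines five fixed severity levels per corruption):

```python
import numpy as np

def gaussian_noise(images: np.ndarray, severity: float = 0.08) -> np.ndarray:
    """Add zero-mean Gaussian noise to uint8 images scaled to [0, 1],
    then clip and rescale back to uint8 (CIFAR-10-C style corruption)."""
    x = images.astype(np.float32) / 255.0
    noisy = x + np.random.normal(0.0, severity, size=x.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).astype(np.uint8)

batch = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)  # dummy images
corrupted = gaussian_noise(batch)
```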
Here, MNIST and MNIST-M are exploited. The datasets are automatically downloaded by running the code.
- a. Run `selfsupervised_learning.py` and `supervised_learning.py` with MNIST-M by passing `mnisit` as dataset name, to load a dataset containing the first 5 classes from the original MNIST dataset and the last 5 classes from the MNIST-M dataset, OR b. download the weights of the self-supervised and supervised training by running `sh scripts/download_mnisit_weights.sh`
- Run:

  ```bash
  # Train on original mnist
  CUDA_VISIBLE_DEVICES=0 sh scripts/autonovel_IL_mnisit_mix.sh ./data/datasets/MNISIT/ ./data/experiments/ ./data/experiments/supervised_learning/resnet_rotnet_mnisit_baseline.pth resnet_IL_minsiit_baseline mnisit_baseline

  # Train on novel mnist
  CUDA_VISIBLE_DEVICES=0 sh scripts/autonovel_IL_mnisit_mix.sh ./data/datasets/MNISIT/ ./data/experiments/ ./data/experiments/supervised_learning/resnet_rotnet_mnisit_MIX.pth resnet_IL_minst_mix mnisit
  ```

To evaluate, use these commands:

```bash
# for mnist
CUDA_VISIBLE_DEVICES=0 python auto_novel.py --mode test --dataset_name mnisit --model_name resnet_IL_minst_mix --exp_root ./data/experiments/pretrained/

# for mnist Base
CUDA_VISIBLE_DEVICES=0 python auto_novel.py --mode test --dataset_name mnisit_baseline --model_name resnet_IL_minsiit_baseline --exp_root ./data/experiments/pretrained/
```
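A hedged sketch of how such a mixed dataset can be assembled; torchvision ships no MNIST-M loader, so `MNISTM` below is a placeholder for whatever loader the repo uses:

```python
from torch.utils.data import ConcatDataset, Subset
from torchvision.datasets import MNIST

mnist = MNIST("./data", train=True, download=True)
# keep classes 0-4 from the original MNIST...
mnist_part = Subset(mnist, (mnist.targets < 5).nonzero(as_tuple=True)[0].tolist())

# ...and classes 5-9 from MNIST-M (placeholder loader; adapt to the repo's own):
# mnistm = MNISTM("./data", train=True, download=True)
# mnistm_part = Subset(mnistm, [i for i, (_, y) in enumerate(mnistm) if y >= 5])
# mixed = ConcatDataset([mnist_part, mnistm_part])
```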
Useful to understand the impact of unbalanced domain on the performance of the model.
This experiment allows training and testing a model using a custom number of samples for each class of CIFAR-10.
In particular, we performed this experiment to see how the model performs when the number of labeled samples is very low (1/10 of the unlabeled samples), and in the opposite case, where the number of unlabeled samples is 1/10 of the labeled samples.
The experiment is based on a custom version of the CIFAR10 dataset called `CustomCIFAR10`. It takes in input all the usual parameters of the CIFAR10 dataset plus a `remove_dict` parameter: a dictionary that specifies how many samples to remove from each class. The dictionary needs to follow this format: `remove_dict={0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 4500, 6: 4500, 7: 4500, 8: 4500, 9: 4500}`. In this example we remove 0 samples from each of the classes 0 to 4, and 4500 samples from each of the classes 5 to 9. The specified number of samples is removed randomly from the corresponding class.
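To make the mechanism concrete, here is our own minimal re-implementation of the idea; the actual class in the repo may differ in details such as transform handling and seeding:

```python
import numpy as np
from torchvision.datasets import CIFAR10

class CustomCIFAR10(CIFAR10):
    """Illustrative sketch: randomly drop remove_dict[c] samples from each class c."""

    def __init__(self, *args, remove_dict=None, **kwargs):
        super().__init__(*args, **kwargs)
        if remove_dict:
            targets = np.asarray(self.targets)
            keep = np.ones(len(targets), dtype=bool)
            rng = np.random.default_rng(0)  # fixed seed, an assumption for reproducibility
            for cls, n_remove in remove_dict.items():
                cls_idx = np.where(targets == cls)[0]
                drop = rng.choice(cls_idx, size=n_remove, replace=False)
                keep[drop] = False
            self.data = self.data[keep]
            self.targets = targets[keep].tolist()

# Remove 4500 samples from each of the last five classes (the example above).
ds = CustomCIFAR10("./data", train=True, download=True,
                   remove_dict={0: 0, 1: 0, 2: 0, 3: 0, 4: 0,
                                5: 4500, 6: 4500, 7: 4500, 8: 4500, 9: 4500})
```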
To run your own unbalanced experiment, follow the ensuing procedure:
- Run the `selfsupervised_learning`-step and store the weights of your model (refer to the `readme` file of AutoNovel for the full SSL training procedure)
- Open the file `unbalanced_supervised_learning.py`:
  - At line 191 set `logging_on` to True if you need to log the data to WandB, otherwise check that it is False
  - At line 224 set `New_SSL_methods` to True if you used a different SSL technique (see Experiment 1); in that case, specify at line 235 which model you want to load. Otherwise check that it is False
  - At line 310 verify that `unbalanced` is set to True to use the unbalanced version of CustomCIFAR10
  - At line 312 define your own `remove_dict` that will be applied to CustomCIFAR10
- From cmd, run the following line to perform the supervised learning (change the parameters `name_of_your_input_model` and `name_of_your_output_model`):

  ```bash
  CUDA_VISIBLE_DEVICES=0 python unbalanced_supervised_learning.py --ssl_weights_dir ./data/experiments/...../name_of_your_input_model.pth --model_name name_of_your_output_model --new_resnet
  ```

  The flag `--new_resnet` is used to turn on its respective option:
  - Do not use the flag `--new_resnet` if your model has been trained using the ResNet defined by the AutoNovel authors; use it if your model has been trained using a standard ResNet (as in the original ResNet paper)
- Your trained model weights will be stored in `data/experiments/unbalanced_supervised_learning/name_of_your_output_model.pth`
- Once the `unbalanced_supervised_learning`-step is finished, open the file `unabalanced_auto_novel.py`:
  - At line 410 set `logging_on` to True if you need to log the data to WandB, otherwise check that it is False
  - At line 430 set `New_Resnet` to True if you used a standard ResNet; check that it is False if you used the ResNet defined by the authors
  - At line 486 set `unbalanced` to True to use the unbalanced version of CustomCIFAR10
  - At line 488 define your own `remove_dict` that will be applied to CustomCIFAR10
- Depending on the Incremental-Learning (IL) setting that you want to use to train your model:
  - If IL is enabled, run `auto_novel_IL_cifar10_unbalanced.sh` through cmd using the following line (change the parameters `name_of_your_input_model` and `name_of_your_output_model`):

    ```bash
    CUDA_VISIBLE_DEVICES=0 sh scripts/auto_novel_IL_cifar10_unbalanced.sh ./data/experiments/...../name_of_your_input_model.pth name_of_your_output_model
    ```

  - If IL is disabled, run `auto_novel_cifar10_unbalanced.sh` through cmd using the following line (change the parameters `name_of_your_input_model` and `name_of_your_output_model`):

    ```bash
    CUDA_VISIBLE_DEVICES=0 sh scripts/auto_novel_cifar10_unbalanced.sh ./data/experiments/...../name_of_your_input_model.pth name_of_your_output_model
    ```
- Your trained model weights will be stored in `data/experiments/unbalanced_auto_novel/name_of_your_output_model.pth`
Useful to understand the impact of an unbalanced class split (a different number of labeled and unlabeled classes) on the performance of the model.
- Run `supervised_learning.py`, specifying the number of labeled and unlabeled classes with `--num_labeled_classes <num_lab> --num_unlabeled_classes <num_unlab>`
- Run:

  ```bash
  python auto_novel.py --dataset_name cifar10 --warmup_model_dir <checkpoint supervised model> --dataset_root <dataset directory> --model_name <model_name> --num_labeled_classes <num_lab> --num_unlabeled_classes <num_unlab> --lr 0.1 --gamma 0.1 --weight_decay 1e-4 --step_size 170 --batch_size 128 --epochs 200 --rampup_length 50 --rampup_coefficient 5.0 --seed 0 --mode train
  ```
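For intuition, a small sketch of what the split means for CIFAR-10, assuming the usual NCD convention that the first `num_labeled_classes` classes are the labelled ones (illustrative, not the repo's loader):

```python
import numpy as np
from torchvision.datasets import CIFAR10

num_labeled_classes = 7  # e.g. --num_labeled_classes 7 --num_unlabeled_classes 3
ds = CIFAR10("./data", train=True, download=True)
targets = np.asarray(ds.targets)
labeled_idx = np.where(targets < num_labeled_classes)[0]     # classes 0..6 keep their labels
unlabeled_idx = np.where(targets >= num_labeled_classes)[0]  # classes 7..9 treated as novel
print(len(labeled_idx), len(unlabeled_idx))
```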
t-distributed Stochastic Neighbor Embedding (t-SNE) is a statistical tool that represents high-dimensional samples in a low-dimensional space. Due to its stochastic nature, the algorithm produces a different output at each run, even when the input data and the parameters are exactly the same.
We used t-SNE plots to show how effective the features learned by our models are: they let us see how samples belonging to different categories are clustered. Ideally, we would like to see compact, well-separated clusters, which would indicate that our model learned good features that efficiently distinguish samples coming from different classes. A minimal sketch of the underlying plotting step is given after the procedure below.
To plot the t-SNE for your model, follow the ensuing procedure (the steps use CIFAR-10 as dataset):
- Train your model until the end of the `AutoNovel`-step and store the weights of your model
- Put the weights of your model into the path `data/experiments/auto_novel_tSNE_plot/name_of_your_input_model.pth`
- To start the generation of the t-SNE plots, use the following command from cmd (change the parameter `name_of_your_input_model` to the name of the model weights that you want to load from `data/experiments/auto_novel_tSNE_plot`):

  ```bash
  CUDA_VISIBLE_DEVICES=0 python auto_novel_tSNE_plot.py --input_model_name name_of_your_input_model --new_resnet --IL
  ```
  The two flags (`--new_resnet` and `--IL`) are used to turn on the respective options:
  - Do not use the flag `--new_resnet` if your model has been trained using the ResNet defined by the AutoNovel authors; use it if your model has been trained using a standard ResNet (as in the original ResNet paper)
  - Use the flag `--IL` if your model has been trained in the `AutoNovel`-step using Incremental Learning; otherwise, do not use this flag
- The produced plots will be stored in the folder `tSNE_plots/name_of_your_model`
- If you are working on a dataset different from CIFAR-10, or if other changes have been applied to the training procedure, apply the corresponding changes to `auto_novel_tSNE_plot.py` as well
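As anticipated above, here is a minimal sketch of the plotting step behind `auto_novel_tSNE_plot.py`, assuming the features are penultimate-layer embeddings extracted beforehand (all names below are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.random.randn(500, 512)         # stand-in for extracted embeddings
labels = np.random.randint(0, 10, size=500)  # stand-in for CIFAR-10 class labels

# project to 2D; being stochastic, each run yields a different layout
embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="tab10", s=5)
plt.colorbar(label="class")
plt.title("t-SNE of learned features")
plt.savefig("tsne_example.png", dpi=200)
```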
Useful to understand the impact of an unbalanced class split (a different number of labeled and unlabeled classes) on the performance of the model.
- Run `main_pretrain.py`, specifying the number of labeled and unlabeled classes with `--num_labeled_classes <num_lab> --num_unlabeled_classes <num_unlab>`
- Run `main_discover.py`, specifying the number of labeled and unlabeled classes with `--num_labeled_classes <num_lab> --num_unlabeled_classes <num_unlab>`
Noemi Canovi - ([email protected])
Kevin Depedri - ([email protected])
Jacopo Donà - ([email protected])
Mostafa Haggag - ([email protected])