- Ubuntu 18.04
- Python 3.6+
- PyTorch 1.5+ (a recent version is recommended)
- NVIDIA GPU (>= 12 GB memory)
- CUDA 10.0 (optional)
- CUDNN 7.5 (optional)
- Configure a virtual (Anaconda) environment

```
conda create -n env_name python=3.6
source activate env_name
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
```
- Install the required Python libraries

```
conda install -c conda-forge matplotlib
conda install -c anaconda yaml
conda install -c anaconda pyyaml
conda install -c anaconda scipy
conda install -c anaconda scikit-learn
conda install -c conda-forge easydict
pip install easydl
```
- We provide two versions of the repository (with dataset / without dataset) for flexible experiments
  - Full PrDA repository (with dataset): download link
    - In this case, go directly to the training and testing step
  - PrDA repository (without dataset): download link
- Visualization of repository structure (Full PrDA repository)
```
|-- APM_update.py
|-- SFDA_test.py
|-- SFDA_train.py
|-- config.py
|-- data.py
|-- lib.py
|-- net.py
|-- office-train-config.yaml
|-- data
|   `-- office
|       |-- domain_adaptation_images
|       |   |-- amazon
|       |   |   `-- images
|       |   |-- dslr
|       |   |   `-- images
|       |   `-- webcam
|       |       `-- images
|       |-- amazon_31_list.txt
|       |-- dslr_31_list.txt
|       `-- webcam_31_list.txt
|-- pretrained_weights
|   `-- 02
|       `-- domain02accBEST_model_checkpoint.pth.tar
`-- source_pretrained_weights
    `-- 02
        `-- model_checkpoint.pth.tar
```
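As a quick sanity check after downloading, one can verify that the top-level files match the tree above. This is a minimal sketch, not part of the repository; the helper name `missing_files` is hypothetical:

```python
import os

# Top-level files expected at the repository root, per the tree above.
EXPECTED = [
    "APM_update.py", "SFDA_test.py", "SFDA_train.py",
    "config.py", "data.py", "lib.py", "net.py",
    "office-train-config.yaml",
]

def missing_files(root="."):
    """Return the expected top-level files that are absent under `root`."""
    return [f for f in EXPECTED if not os.path.exists(os.path.join(root, f))]

# Example: run from the repository root; an empty list means the layout is complete.
print(missing_files("."))
```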
- Download the Office31 dataset (link) and unzip it in ./data/office
- Download the text files (link), i.e., amazon_31_list.txt, dslr_31_list.txt, and webcam_31_list.txt, into ./data/office
- Download the source-pretrained parameters (link) into ./source_pretrained_weights/[scenario_number]
  - e.g., source-pretrained parameters for the A (0) → W (2) scenario should be located in ./source_pretrained_weights/02
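The scenario-number convention above can be sketched in Python. This is a minimal illustration, not the repository's own code: only the indices A = 0 and W = 2 are confirmed by the example, and the helper name `scenario_dir` is hypothetical:

```python
import os

# Domain indices taken from the example above (A -> 0, W -> 2).
# The index for D (dslr) is not stated in the text, so it is omitted here.
DOMAIN_INDEX = {"amazon": 0, "webcam": 2}

def scenario_dir(root, source, target):
    """Build the checkpoint directory path, e.g. '02' for the A -> W scenario."""
    return os.path.join(root, f"{DOMAIN_INDEX[source]}{DOMAIN_INDEX[target]}")

print(scenario_dir("./source_pretrained_weights", "amazon", "webcam"))
# -> ./source_pretrained_weights/02
```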
- Arguments required for training and testing are contained in office-train-config.yaml
- Here is an example of running an experiment on Office31 (default: A → W)
  - The scenario can be changed by editing `source: 0, target: 2` in office-train-config.yaml
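For reference, the relevant lines of office-train-config.yaml would look like the fragment below. This is a sketch, not the full file: only the `source` and `target` keys and the values 0 (A) and 2 (W) are confirmed by the example above; all other keys are omitted.

```
# office-train-config.yaml (excerpt; remaining keys omitted)
source: 0   # Amazon (A)
target: 2   # Webcam (W)
```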
- We will update the full version of our framework, including settings for OfficeHome and VisDA-C
- Run the following command

```
python SFDA_train.py --config office-train-config.yaml
```
- As a first step, download the SFDA-pretrained parameters (link) into ./pretrained_weights/[scenario_number]
  - e.g., SFDA-pretrained parameters for the A (0) → W (2) scenario should be located in ./pretrained_weights/02
- Alternatively, run the training code above to obtain the pretrained weights
- Run the following command

```
python SFDA_test.py --config office-train-config.yaml
```
- Results using the provided code

|  | A→W | D→W | W→D | A→D | D→A | W→A | Avg |
|---|---|---|---|---|---|---|---|
| Accuracy (%) | 91.06 | 97.35 | 98.99 | 91.96 | 71.60 | 68.62 | 86.60 |