- Clone the repository from here
- Make sure you have all the dependencies installed; see also `requirements.txt`
- Download the corresponding CORe50 dataset files
- Start an experiment on your local machine with `python3 main/train.py`
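In shell form, the quick start might look like this (a sketch; the clone URL, repository folder, environment name, and experiment name are placeholders or examples, not fixed values):

```bash
# Sketch of the quick-start steps above.
git clone "PATH_TO_FORKED_REPOSITORY" && cd <REPOSITORY_NAME>
conda create -n CORe50Env --file requirements.txt python=3.9
conda activate CORe50Env
# Place CORe50 under ./data/core50_128x128 (see the dataset section), then:
python3 main/train.py --name quickstart_run --data_root './data/'
```

The repository layout these paths refer to is shown below.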
```
.
├── codetemplate                     # template for collaboration purposes
├── config.py                        # configuration for experiment parameters
├── data
│   └── core50_128x128               # CORe50 dataset (not included, please download)
├── img
│   └── core50_procedure_.png        # image displayed in README.md
├── LICENSE                          # MIT License
├── main
│   ├── CORe50_baselines.sh          # slurm script to send experiment to the cluster
│   ├── CORe50_combined.sh           # slurm script to send experiment to the cluster
│   ├── CORe50_param_experiment.sh   # slurm script to send experiment to the cluster
│   └── train.py                     # main training file
├── README.md                        # readme file
├── requirements.txt                 # conda/pip requirements
└── utils
    ├── augmentations.py             # augmentations for standard SimCLR
    ├── datasets.py                  # data loading and sampling
    ├── evaluation.py                # evaluation methods and analytics
    ├── general.py                   # general I/O utilities
    ├── losses.py                    # definition of loss functions
    └── networks.py                  # network definitions, e.g. ResNet
```
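Once the dataset is in place, a quick check that it sits where `train.py`'s default expects it (a sketch, using only the path from the tree above):

```python
from pathlib import Path

# Sketch: verify the CORe50 folder exists at the default location.
data_dir = Path('./data/core50_128x128')
if data_dir.is_dir():
    n_files = sum(1 for p in data_dir.rglob('*') if p.is_file())
    print(f'found core50_128x128 with {n_files} files')
else:
    print('CORe50 not found - download it into ./data first')
```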
Fork a copy of this repository to your own GitHub account and clone your fork onto your computer, inside your favorite folder, using:

```bash
git clone "PATH_TO_FORKED_REPOSITORY"
```
Install Python 3.9 and the conda package manager (miniconda is sufficient). Navigate to the project directory in a terminal, create a virtual environment (replace `<ENVIRONMENT_NAME>` with, for example, `CORe50Env`), and install the required packages:

```bash
conda create -n <ENVIRONMENT_NAME> --file requirements.txt python=3.9
```
Activate the virtual environment:

```bash
conda activate <ENVIRONMENT_NAME>
```
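To confirm the environment works, a quick check (the `import torch` line assumes PyTorch is among the requirements, as the ResNet/SimCLR components suggest; verify against `requirements.txt`):

```bash
python --version          # should report Python 3.9.x
python -c "import torch"  # should exit silently if the requirements installed correctly
```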
The underlying CORe50 dataset (Lomonaco and Maltoni, 2017) is publicly available here. Please download it and store it at a location of your choice (the default is `./data`, as indicated in the repository structure above).
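One possible download route from the shell (the exact URL may change, so verify it on the CORe50 page linked above before use):

```bash
# Sketch: fetch and unpack the 128x128 CORe50 images into ./data
mkdir -p data
wget http://bias.csr.unibo.it/maltoni/download/core50/core50_128x128.zip -P data/
unzip data/core50_128x128.zip -d data/
```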
Starting an experiment is straightforward: execute the training script from the main level and specify your options via the command line. For example:
```bash
python3 main/train.py \
    --name CEnv_Exp_0 \
    --data_root './data/' \
    --n_fix 0.95 \
    --n_fix_per_session 0.95 \
    --contrast 'time' \
    --view_sampling randomwalk \
    --test_every 10 \
    --train_split train_alt_0 \
    --test_split test_alt_0 \
    --val_split val_alt_0
```

- `--name`: experiment name
- `--data_root`: where you put the CORe50 dataset
- `--n_fix`: N_o as a float probability in [0, 1]
- `--n_fix_per_session`: N_s as a float probability in [0, 1]
- `--contrast`: choose 'time' or 'combined' for -TT or TT+
- `--view_sampling`: choose 'randomwalk' or 'uniform'
- `--test_every`: run the test pass every 10 epochs
- `--train_split`, `--test_split`, `--val_split`: choose the splits for cross-validation (k in range(5))
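The interaction of `--n_fix` and `--view_sampling randomwalk` can be pictured with a small sketch. This is not the implementation in `utils/datasets.py`; the function and data structure below are illustrative assumptions:

```python
import random

# Sketch: temporal pair sampling. With probability n_fix the walk stays on the
# current object; otherwise it jumps to a randomly drawn object.
def sample_pair(frames_by_object, current_obj, n_fix=0.95):
    """Return two temporally adjacent frames as anchor/positive views."""
    if random.random() > n_fix:                    # leave the current object
        current_obj = random.choice(list(frames_by_object))
    frames = frames_by_object[current_obj]         # time-ordered frames
    i = random.randrange(len(frames) - 1)          # step along the sequence
    return frames[i], frames[i + 1], current_obj
```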
To run a classic SimCLR baseline instead:

```bash
python3 main/train.py \
    --name SimCLR_Exp_0 \
    --data_root './data/' \
    --contrast 'classic' \
    --test_every 10 \
    --train_split train_alt_0 \
    --test_split test_alt_0 \
    --val_split val_alt_0
```

Here `--contrast 'classic'` selects SimCLR-type contrasts; the remaining flags behave as above.
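For readers unfamiliar with the 'classic' contrast: SimCLR optimizes the NT-Xent loss over two augmented views of each image. A minimal PyTorch sketch follows; the repository's actual loss definitions live in `utils/losses.py` and may differ in detail (the function name and temperature default here are assumptions):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for two batches of projections z1, z2 of shape (N, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # exclude self-pairs
    # The positive of sample i is its other view: i+n in the first half, i-n in the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```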
Finally, to run a supervised baseline:

```bash
python3 main/train.py \
    --name Supervised_Exp_0 \
    --data_root './data/' \
    --contrast 'nocontrast' \
    --main_loss 'supervised' \
    --test_every 10 \
    --train_split train_alt_0 \
    --test_split test_alt_0 \
    --val_split val_alt_0
```

Here `--contrast 'nocontrast'` disables contrastive pairing and `--main_loss 'supervised'` trains with a supervised loss.
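With `--main_loss 'supervised'` the objective reduces to ordinary classification. Conceptually (a sketch with illustrative names, not the repository's exact training loop):

```python
import torch.nn as nn

criterion = nn.CrossEntropyLoss()              # plain supervised objective

def train_step(model, optimizer, images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)    # logits vs. CORe50 class labels
    loss.backward()
    optimizer.step()
    return loss.item()
```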
There are several slurm sbatch scripts that run exactly the experiments presented in our ICLR contribution. These scripts can be found under `./main`. Execute one of the scripts to start a batch job via the slurm job manager; each script includes all runs presented in a specific figure or table, e.g. `tab3_foobar.sh` includes all runs for the CORe50 environment referenced in Table 3.
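For example, using one of the script names from the repository structure above:

```bash
sbatch main/CORe50_baselines.sh
```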
This project is licensed under the MIT License - see the LICENSE file for details.