diff --git a/README.md b/README.md
index 26500e215..bab89b7cf 100644
--- a/README.md
+++ b/README.md
@@ -113,7 +113,7 @@ The directory structure of new project looks like this:
 │
 ├── scripts                <- Shell scripts
 │
-├── src                    <- Source code
+├── tcn_hpl                <- Source code
 │   ├── data                     <- Data scripts
 │   ├── models                   <- Model scripts
 │   ├── utils                    <- Utility scripts
@@ -156,7 +156,7 @@ pip install -r requirements.txt
 ```
 
 Template contains example with MNIST classification.
-When running `python src/train.py` you should see something like this:
+When running `python tcn_hpl/train.py` you should see something like this:
@@ -474,8 +474,8 @@ If no tags are provided, you will be asked to input them from command line:
 ```bash
 >>> python train.py tags=[]
-[2022-07-11 15:40:09,358][src.utils.utils][INFO] - Enforcing tags!
-[2022-07-11 15:40:09,359][src.utils.rich_utils][WARNING] - No tags provided in config. Prompting user to input tags...
+[2022-07-11 15:40:09,358][tcn_hpl.utils.utils][INFO] - Enforcing tags!
+[2022-07-11 15:40:09,359][tcn_hpl.utils.rich_utils][WARNING] - No tags provided in config. Prompting user to input tags...
 Enter a list of comma separated tags (dev):
 ```
@@ -514,10 +514,10 @@ Suggestions for improvements are always welcome!
 All PyTorch Lightning modules are dynamically instantiated from module paths specified in config. Example model config:
 
 ```yaml
-_target_: src.models.mnist_model.MNISTLitModule
+_target_: tcn_hpl.models.mnist_model.MNISTLitModule
 lr: 0.001
 net:
-  _target_: src.models.components.simple_dense_net.SimpleDenseNet
+  _target_: tcn_hpl.models.components.simple_dense_net.SimpleDenseNet
   input_size: 784
   lin1_size: 256
   lin2_size: 256
@@ -539,7 +539,7 @@ Switch between models and datamodules with command line arguments:
 ```bash
 python train.py model=mnist
 ```
 
-Example pipeline managing the instantiation logic: [src/train.py](src/train.py).
+Example pipeline managing the instantiation logic: [tcn_hpl/train.py](tcn_hpl/train.py).
@@ -665,12 +665,12 @@ logger:
 **Basic workflow**
 
-1. Write your PyTorch Lightning module (see [models/mnist_module.py](src/models/mnist_module.py) for example)
-2. Write your PyTorch Lightning datamodule (see [data/mnist_datamodule.py](src/data/mnist_datamodule.py) for example)
+1. Write your PyTorch Lightning module (see [models/mnist_module.py](tcn_hpl/models/mnist_module.py) for example)
+2. Write your PyTorch Lightning datamodule (see [data/mnist_datamodule.py](tcn_hpl/data/mnist_datamodule.py) for example)
 3. Write your experiment config, containing paths to model and datamodule
 4. Run training with chosen experiment config:
 
    ```bash
-   python src/train.py experiment=experiment_name.yaml
+   python tcn_hpl/train.py experiment=experiment_name.yaml
    ```
 
 **Experiment design**
@@ -736,7 +736,7 @@ You can use many of them at once (see [configs/logger/many_loggers.yaml](configs
 
 You can also write your own logger.
 
-Lightning provides convenient method for logging custom metrics from inside LightningModule. Read the [docs](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging) or take a look at [MNIST example](src/models/mnist_module.py).
+Lightning provides convenient method for logging custom metrics from inside LightningModule. Read the [docs](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging) or take a look at [MNIST example](tcn_hpl/models/mnist_module.py).
@@ -857,7 +857,7 @@ python train.py trainer=ddp
 The simplest way is to pass datamodule attribute directly to model on initialization:
 
 ```python
-# ./src/train.py
+# ./tcn_hpl/train.py
 datamodule = hydra.utils.instantiate(config.data)
 model = hydra.utils.instantiate(config.model, some_param=datamodule.some_param)
 ```
@@ -867,7 +867,7 @@ model = hydra.utils.instantiate(config.model, some_param=datamodule.some_param)
 Similarly, you can pass a whole datamodule config as an init parameter:
 
 ```python
-# ./src/train.py
+# ./tcn_hpl/train.py
 model = hydra.utils.instantiate(config.model, dm_conf=config.data, _recursive_=False)
 ```
 
@@ -875,7 +875,7 @@ You can also pass a datamodule config parameter to your model through variable i
 
 ```yaml
 # ./configs/model/my_model.yaml
-_target_: src.models.my_module.MyLitModule
+_target_: tcn_hpl.models.my_module.MyLitModule
 lr: 0.01
 some_param: ${data.some_param}
 ```
 
@@ -883,7 +883,7 @@ some_param: ${data.some_param}
 Another approach is to access datamodule in LightningModule directly through Trainer:
 
 ```python
-# ./src/models/mnist_module.py
+# ./tcn_hpl/models/mnist_module.py
 def on_train_start(self):
     self.some_param = self.trainer.datamodule.some_param
 ```
@@ -1115,7 +1115,7 @@ git commit -m "Add raw data"
 Support installing project as a package
 
 It allows other people to easily use your modules in their own projects.
-Change name of the `src` folder to your project name and complete the `setup.py` file.
+Change name of the `tcn_hpl` folder to your project name and complete the `setup.py` file.
 
 Now your project can be installed from local files:
 
@@ -1225,10 +1225,10 @@ ______________________________________________________________________
 
 # Your Project Name
 
-PyTorch
-Lightning
-Config: Hydra
-Template
+PyTorch
+Lightning
+Config: Hydra
+Template
 [![Paper](http://img.shields.io/badge/paper-arxiv.1001.2234-B31B1B.svg)](https://www.nature.com/articles/nature14539)
 [![Conference](http://img.shields.io/badge/AnyConference-year-4b44ce.svg)](https://papers.nips.cc/paper/2020)
 
@@ -1278,20 +1278,20 @@ Train model with default configuration
 
 ```bash
 # train on CPU
-python src/train.py trainer=cpu
+python tcn_hpl/train.py trainer=cpu
 
 # train on GPU
-python src/train.py trainer=gpu
+python tcn_hpl/train.py trainer=gpu
 ```
 
 Train model with chosen experiment configuration from [configs/experiment/](configs/experiment/)
 
 ```bash
-python src/train.py experiment=experiment_name.yaml
+python tcn_hpl/train.py experiment=experiment_name.yaml
 ```
 
 You can override any parameter from command line like this
 
 ```bash
-python src/train.py trainer.max_epochs=20 data.batch_size=64
+python tcn_hpl/train.py trainer.max_epochs=20 data.batch_size=64
 ```
diff --git a/configs/data/all_transforms/MoveCenterPts.yaml b/configs/data/all_transforms/MoveCenterPts.yaml
index 0684c3d5d..187e78485 100644
--- a/configs/data/all_transforms/MoveCenterPts.yaml
+++ b/configs/data/all_transforms/MoveCenterPts.yaml
@@ -1,5 +1,5 @@
 MoveCenterPts:
-  _target_: src.data.components.augmentations.MoveCenterPts
+  _target_: tcn_hpl.data.components.augmentations.MoveCenterPts
   hand_dist_delta: 0.05
   obj_dist_delta: 0.05
   window_size: ${data.window_size}
diff --git a/configs/data/all_transforms/NormalizeFromCenter.yaml b/configs/data/all_transforms/NormalizeFromCenter.yaml
index 00de12eb3..cf5423296 100644
--- a/configs/data/all_transforms/NormalizeFromCenter.yaml
+++ b/configs/data/all_transforms/NormalizeFromCenter.yaml
@@ -1,5 +1,5 @@
 NormalizeFromCenter:
-  _target_: src.data.components.augmentations.NormalizeFromCenter
+  _target_: tcn_hpl.data.components.augmentations.NormalizeFromCenter
   im_w: 1280
   im_h: 720
   feat_version: 3
diff --git a/configs/data/all_transforms/NormalizePixelPts.yaml b/configs/data/all_transforms/NormalizePixelPts.yaml
index eff189722..0ef55b746 100644
--- a/configs/data/all_transforms/NormalizePixelPts.yaml
+++ b/configs/data/all_transforms/NormalizePixelPts.yaml
@@ -1,5 +1,5 @@
 NormalizePixelPts:
-  _target_: src.data.components.augmentations.NormalizePixelPts
+  _target_: tcn_hpl.data.components.augmentations.NormalizePixelPts
   im_w: 1280
   im_h: 720
   num_obj_classes: 42
diff --git a/configs/data/mnist.yaml b/configs/data/mnist.yaml
index f63bc8947..392912bbc 100644
--- a/configs/data/mnist.yaml
+++ b/configs/data/mnist.yaml
@@ -1,4 +1,4 @@
-_target_: src.data.mnist_datamodule.MNISTDataModule
+_target_: tcn_hpl.data.mnist_datamodule.MNISTDataModule
 data_dir: ${paths.data_dir}
 batch_size: 128
 train_val_test_split: [55_000, 5_000, 10_000]
diff --git a/configs/data/ptg.yaml b/configs/data/ptg.yaml
index a8e73fc4e..d0373fcbf 100644
--- a/configs/data/ptg.yaml
+++ b/configs/data/ptg.yaml
@@ -1,7 +1,7 @@
 defaults:
   - all_transforms: default
 
-_target_: src.data.ptg_datamodule.PTGDataModule
+_target_: tcn_hpl.data.ptg_datamodule.PTGDataModule
 data_dir: ${paths.data_dir}
 batch_size: 128
 num_workers: 0
diff --git a/configs/model/mnist.yaml b/configs/model/mnist.yaml
index 6f9c2fa1e..417b9ca53 100644
--- a/configs/model/mnist.yaml
+++ b/configs/model/mnist.yaml
@@ -1,4 +1,4 @@
-_target_: src.models.mnist_module.MNISTLitModule
+_target_: tcn_hpl.models.mnist_module.MNISTLitModule
 
 optimizer:
   _target_: torch.optim.Adam
@@ -14,7 +14,7 @@ scheduler:
   patience: 10
 
 net:
-  _target_: src.models.components.simple_dense_net.SimpleDenseNet
+  _target_: tcn_hpl.models.components.simple_dense_net.SimpleDenseNet
   input_size: 784
   lin1_size: 64
   lin2_size: 128
diff --git a/configs/model/ptg.yaml b/configs/model/ptg.yaml
index 7b9177e9e..3f7ef2d2b 100644
--- a/configs/model/ptg.yaml
+++ b/configs/model/ptg.yaml
@@ -1,4 +1,4 @@
-_target_: src.models.ptg_module.PTGLitModule
+_target_: tcn_hpl.models.ptg_module.PTGLitModule
 
 optimizer:
   _target_: torch.optim.Adam
@@ -14,7 +14,7 @@ scheduler:
   patience: 10
 
 net:
-  _target_: src.models.components.ms_tcs_net.MultiStageModel
+  _target_: tcn_hpl.models.components.ms_tcs_net.MultiStageModel
   num_stages: 4
   num_layers: 10
   num_f_maps: 64
@@ -22,7 +22,7 @@ net:
   num_classes: ${data.num_classes}
 
 criterion:
-  _target_: src.models.components.focal_loss.FocalLoss
+  _target_: tcn_hpl.models.components.focal_loss.FocalLoss
   alpha: 0.25
   gamma: 2
   weight: None
diff --git a/scripts/schedule.sh b/scripts/schedule.sh
index 44b3da111..bac31d2a0 100644
--- a/scripts/schedule.sh
+++ b/scripts/schedule.sh
@@ -2,6 +2,6 @@
 # Schedule execution of many runs
 # Run from root folder with: bash scripts/schedule.sh
 
-python src/train.py trainer.max_epochs=5 logger=csv
+python tcn_hpl/train.py trainer.max_epochs=5 logger=csv
 
-python src/train.py trainer.max_epochs=10 logger=csv
+python tcn_hpl/train.py trainer.max_epochs=10 logger=csv
diff --git a/tests/test_datamodules.py b/tests/test_datamodules.py
index 901f3d6bb..0c257747c 100644
--- a/tests/test_datamodules.py
+++ b/tests/test_datamodules.py
@@ -3,7 +3,7 @@
 import pytest
 import torch
 
-from src.data.mnist_datamodule import MNISTDataModule
+from tcn_hpl.data.mnist_datamodule import MNISTDataModule
 
 
 @pytest.mark.parametrize("batch_size", [32, 128])
diff --git a/tests/test_eval.py b/tests/test_eval.py
index 423c9d295..865be5c8d 100644
--- a/tests/test_eval.py
+++ b/tests/test_eval.py
@@ -5,8 +5,8 @@
 from hydra.core.hydra_config import HydraConfig
 from omegaconf import DictConfig, open_dict
 
-from src.eval import evaluate
-from src.train import train
+from tcn_hpl.eval import evaluate
+from tcn_hpl.train import train
 
 
 @pytest.mark.slow
diff --git a/tests/test_sweeps.py b/tests/test_sweeps.py
index 7856b1551..c82976c4d 100644
--- a/tests/test_sweeps.py
+++ b/tests/test_sweeps.py
@@ -5,7 +5,7 @@
 from tests.helpers.run_if import RunIf
 from tests.helpers.run_sh_command import run_sh_command
 
-startfile = "src/train.py"
+startfile = "tcn_hpl/train.py"
 overrides = ["logger=[]"]
diff --git a/tests/test_train.py b/tests/test_train.py
index c13ae02c8..99504b649 100644
--- a/tests/test_train.py
+++ b/tests/test_train.py
@@ -5,7 +5,7 @@
 from hydra.core.hydra_config import HydraConfig
 from omegaconf import DictConfig, open_dict
 
-from src.train import train
+from tcn_hpl.train import train
 from tests.helpers.run_if import RunIf
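
Below is a minimal, hypothetical sketch (not part of the patch above) of what the `_target_` updates mean at runtime: Hydra imports the dotted path and calls the named class with the remaining config keys as keyword arguments, so renaming the package only requires updating these strings plus the Python imports touched in the tests. The config keys mirror the `configs/model/mnist.yaml` hunk; the sketch assumes the renamed `tcn_hpl` package is importable (e.g. installed with `pip install -e .`) and that `SimpleDenseNet` accepts these keyword arguments.

```python
# Hypothetical illustration of Hydra's `_target_` resolution after the rename.
# Assumes `tcn_hpl` is installed and exposes SimpleDenseNet at this dotted path.
from hydra.utils import instantiate
from omegaconf import OmegaConf

cfg = OmegaConf.create(
    {
        # Dotted path updated by this diff (was `src.models.components...`).
        "_target_": "tcn_hpl.models.components.simple_dense_net.SimpleDenseNet",
        "input_size": 784,
        "lin1_size": 64,
        "lin2_size": 128,
    }
)

# instantiate() imports the module, looks up the class, and calls
# SimpleDenseNet(input_size=784, lin1_size=64, lin2_size=128).
net = instantiate(cfg)
print(net)
```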