Fix filepaths to package
Hannah DeFazio committed Sep 19, 2023
1 parent a733f91 commit d3361c0
Showing 13 changed files with 41 additions and 41 deletions.
README.md (48 changes: 24 additions & 24 deletions)
@@ -113,7 +113,7 @@ The directory structure of new project looks like this:
 ├── scripts                <- Shell scripts
-├── src                    <- Source code
+├── tcn_hpl                <- Source code
 │   ├── data               <- Data scripts
 │   ├── models             <- Model scripts
 │   ├── utils              <- Utility scripts
@@ -156,7 +156,7 @@ pip install -r requirements.txt
 ```
 
 Template contains example with MNIST classification.<br>
-When running `python src/train.py` you should see something like this:
+When running `python tcn_hpl/train.py` you should see something like this:
 
 <div align="center">
 
@@ -474,8 +474,8 @@ If no tags are provided, you will be asked to input them from command line:
 
 ```bash
 >>> python train.py tags=[]
-[2022-07-11 15:40:09,358][src.utils.utils][INFO] - Enforcing tags! <cfg.extras.enforce_tags=True>
-[2022-07-11 15:40:09,359][src.utils.rich_utils][WARNING] - No tags provided in config. Prompting user to input tags...
+[2022-07-11 15:40:09,358][tcn_hpl.utils.utils][INFO] - Enforcing tags! <cfg.extras.enforce_tags=True>
+[2022-07-11 15:40:09,359][tcn_hpl.utils.rich_utils][WARNING] - No tags provided in config. Prompting user to input tags...
 Enter a list of comma separated tags (dev):
 ```

@@ -514,10 +514,10 @@ Suggestions for improvements are always welcome!
 All PyTorch Lightning modules are dynamically instantiated from module paths specified in config. Example model config:
 
 ```yaml
-_target_: src.models.mnist_model.MNISTLitModule
+_target_: tcn_hpl.models.mnist_model.MNISTLitModule
 lr: 0.001
 net:
-  _target_: src.models.components.simple_dense_net.SimpleDenseNet
+  _target_: tcn_hpl.models.components.simple_dense_net.SimpleDenseNet
   input_size: 784
   lin1_size: 256
   lin2_size: 256
@@ -539,7 +539,7 @@ Switch between models and datamodules with command line arguments:
 python train.py model=mnist
 ```
 
-Example pipeline managing the instantiation logic: [src/train.py](src/train.py).
+Example pipeline managing the instantiation logic: [tcn_hpl/train.py](tcn_hpl/train.py).
 
 <br>
 
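An editorial aside on why this rename has to touch the configs at all: Hydra's `_target_` values are ordinary Python import paths, so renaming the `src` package to `tcn_hpl` invalidates every dotted path. A minimal sketch of the mechanism (not part of this commit; `torch.nn.Linear` stands in for the project's classes so the snippet runs without `tcn_hpl` installed):

```python
# hydra.utils.instantiate() imports the `_target_` dotted path and calls it
# with the remaining config keys as keyword arguments.
import hydra.utils
from omegaconf import OmegaConf

cfg = OmegaConf.create(
    {"_target_": "torch.nn.Linear", "in_features": 784, "out_features": 10}
)
layer = hydra.utils.instantiate(cfg)  # equivalent to torch.nn.Linear(784, 10)
print(layer)  # Linear(in_features=784, out_features=10, bias=True)
```

This is also why a stale `src.` prefix surfaces as an import error at instantiation time rather than when the config is loaded.
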
@@ -665,12 +665,12 @@ logger:
 **Basic workflow**
 
-1. Write your PyTorch Lightning module (see [models/mnist_module.py](src/models/mnist_module.py) for example)
-2. Write your PyTorch Lightning datamodule (see [data/mnist_datamodule.py](src/data/mnist_datamodule.py) for example)
+1. Write your PyTorch Lightning module (see [models/mnist_module.py](tcn_hpl/models/mnist_module.py) for example)
+2. Write your PyTorch Lightning datamodule (see [data/mnist_datamodule.py](tcn_hpl/data/mnist_datamodule.py) for example)
 3. Write your experiment config, containing paths to model and datamodule
 4. Run training with chosen experiment config:
 ```bash
-python src/train.py experiment=experiment_name.yaml
+python tcn_hpl/train.py experiment=experiment_name.yaml
 ```
 
 **Experiment design**
@@ -736,7 +736,7 @@ You can use many of them at once (see [configs/logger/many_loggers.yaml](configs
 
 You can also write your own logger.
 
-Lightning provides convenient method for logging custom metrics from inside LightningModule. Read the [docs](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging) or take a look at [MNIST example](src/models/mnist_module.py).
+Lightning provides convenient method for logging custom metrics from inside LightningModule. Read the [docs](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging) or take a look at [MNIST example](tcn_hpl/models/mnist_module.py).
 
 <br>
 
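For context on the custom-metrics sentence above (not from this diff): inside a `LightningModule`, metric logging goes through `self.log`, which forwards values to whatever loggers are configured. A minimal sketch; the class name and tensor shapes are illustrative, not from this repository:

```python
import torch
from pytorch_lightning import LightningModule


class LitExample(LightningModule):
    """Illustrative module showing Lightning's in-module metric logging."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(784, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.net(x), y)
        # self.log routes the value to every configured logger (csv, wandb, ...).
        self.log("train/loss", loss, on_step=True, on_epoch=True)
        return loss
```
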
@@ -857,7 +857,7 @@ python train.py trainer=ddp
 The simplest way is to pass datamodule attribute directly to model on initialization:
 
 ```python
-# ./src/train.py
+# ./tcn_hpl/train.py
 datamodule = hydra.utils.instantiate(config.data)
 model = hydra.utils.instantiate(config.model, some_param=datamodule.some_param)
 ```
@@ -867,23 +867,23 @@ model = hydra.utils.instantiate(config.model, some_param=datamodule.some_param)
 
 Similarly, you can pass a whole datamodule config as an init parameter:
 
 ```python
-# ./src/train.py
+# ./tcn_hpl/train.py
 model = hydra.utils.instantiate(config.model, dm_conf=config.data, _recursive_=False)
 ```
 
 You can also pass a datamodule config parameter to your model through variable interpolation:
 
 ```yaml
 # ./configs/model/my_model.yaml
-_target_: src.models.my_module.MyLitModule
+_target_: tcn_hpl.models.my_module.MyLitModule
 lr: 0.01
 some_param: ${data.some_param}
 ```
 
 Another approach is to access datamodule in LightningModule directly through Trainer:
 
 ```python
-# ./src/models/mnist_module.py
+# ./tcn_hpl/models/mnist_module.py
 def on_train_start(self):
     self.some_param = self.trainer.datamodule.some_param
 ```
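One non-obvious detail in the hunk above (an editorial note, not part of the diff): `_recursive_=False` stops Hydra from instantiating nested `_target_` entries, so the model receives the raw `DictConfig` and can read it lazily. A runnable sketch with a hypothetical class:

```python
import hydra.utils
from omegaconf import DictConfig, OmegaConf


class LitWithConfig:
    """Hypothetical stand-in; not a class from this repository."""

    def __init__(self, dm_conf: DictConfig):
        # With _recursive_=False the datamodule config arrives uninstantiated.
        self.num_classes = dm_conf.num_classes


cfg = OmegaConf.create({"_target_": "__main__.LitWithConfig"})
model = hydra.utils.instantiate(
    cfg, dm_conf=OmegaConf.create({"num_classes": 42}), _recursive_=False
)
print(model.num_classes)  # 42
```
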
@@ -1115,7 +1115,7 @@ git commit -m "Add raw data"
 <summary><b>Support installing project as a package</b></summary>
 
 It allows other people to easily use your modules in their own projects.
-Change name of the `src` folder to your project name and complete the `setup.py` file.
+Change name of the `tcn_hpl` folder to your project name and complete the `setup.py` file.
 
 Now your project can be installed from local files:
 
@@ -1225,10 +1225,10 @@ ______________________________________________________________________
 # Your Project Name
 
-<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
-<a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
-<a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
-<a href="https://github.com/ashleve/lightning-hydra-template"><img alt="Template" src="https://img.shields.io/badge/-Lightning--Hydra--Template-017F2F?style=flat&logo=github&labelColor=gray"></a><br>
+<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" tcn_hpl="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
+<a href="https://pytorchlightning.ai/"><img alt="Lightning" tcn_hpl="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
+<a href="https://hydra.cc/"><img alt="Config: Hydra" tcn_hpl="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
+<a href="https://github.com/ashleve/lightning-hydra-template"><img alt="Template" tcn_hpl="https://img.shields.io/badge/-Lightning--Hydra--Template-017F2F?style=flat&logo=github&labelColor=gray"></a><br>
 
 [![Paper](http://img.shields.io/badge/paper-arxiv.1001.2234-B31B1B.svg)](https://www.nature.com/articles/nature14539)
 [![Conference](http://img.shields.io/badge/AnyConference-year-4b44ce.svg)](https://papers.nips.cc/paper/2020)
@@ -1278,20 +1278,20 @@ Train model with default configuration
 
 ```bash
 # train on CPU
-python src/train.py trainer=cpu
+python tcn_hpl/train.py trainer=cpu
 
 # train on GPU
-python src/train.py trainer=gpu
+python tcn_hpl/train.py trainer=gpu
 ```
 
 Train model with chosen experiment configuration from [configs/experiment/](configs/experiment/)
 
 ```bash
-python src/train.py experiment=experiment_name.yaml
+python tcn_hpl/train.py experiment=experiment_name.yaml
 ```
 
 You can override any parameter from command line like this
 
 ```bash
-python src/train.py trainer.max_epochs=20 data.batch_size=64
+python tcn_hpl/train.py trainer.max_epochs=20 data.batch_size=64
 ```
configs/data/all_transforms/MoveCenterPts.yaml (2 changes: 1 addition & 1 deletion)

@@ -1,5 +1,5 @@
 MoveCenterPts:
-  _target_: src.data.components.augmentations.MoveCenterPts
+  _target_: tcn_hpl.data.components.augmentations.MoveCenterPts
   hand_dist_delta: 0.05
   obj_dist_delta: 0.05
   window_size: ${data.window_size}
configs/data/all_transforms/NormalizeFromCenter.yaml (2 changes: 1 addition & 1 deletion)

@@ -1,5 +1,5 @@
 NormalizeFromCenter:
-  _target_: src.data.components.augmentations.NormalizeFromCenter
+  _target_: tcn_hpl.data.components.augmentations.NormalizeFromCenter
   im_w: 1280
   im_h: 720
   feat_version: 3
configs/data/all_transforms/NormalizePixelPts.yaml (2 changes: 1 addition & 1 deletion)

@@ -1,5 +1,5 @@
 NormalizePixelPts:
-  _target_: src.data.components.augmentations.NormalizePixelPts
+  _target_: tcn_hpl.data.components.augmentations.NormalizePixelPts
   im_w: 1280
   im_h: 720
   num_obj_classes: 42
configs/data/mnist.yaml (2 changes: 1 addition & 1 deletion)

@@ -1,4 +1,4 @@
-_target_: src.data.mnist_datamodule.MNISTDataModule
+_target_: tcn_hpl.data.mnist_datamodule.MNISTDataModule
 data_dir: ${paths.data_dir}
 batch_size: 128
 train_val_test_split: [55_000, 5_000, 10_000]
configs/data/ptg.yaml (2 changes: 1 addition & 1 deletion)

@@ -1,7 +1,7 @@
 defaults:
   - all_transforms: default
 
-_target_: src.data.ptg_datamodule.PTGDataModule
+_target_: tcn_hpl.data.ptg_datamodule.PTGDataModule
 data_dir: ${paths.data_dir}
 batch_size: 128
 num_workers: 0
configs/model/mnist.yaml (4 changes: 2 additions & 2 deletions)

@@ -1,4 +1,4 @@
-_target_: src.models.mnist_module.MNISTLitModule
+_target_: tcn_hpl.models.mnist_module.MNISTLitModule
 
 optimizer:
   _target_: torch.optim.Adam
@@ -14,7 +14,7 @@ scheduler:
   patience: 10
 
 net:
-  _target_: src.models.components.simple_dense_net.SimpleDenseNet
+  _target_: tcn_hpl.models.components.simple_dense_net.SimpleDenseNet
   input_size: 784
   lin1_size: 64
   lin2_size: 128
configs/model/ptg.yaml (6 changes: 3 additions & 3 deletions)

@@ -1,4 +1,4 @@
-_target_: src.models.ptg_module.PTGLitModule
+_target_: tcn_hpl.models.ptg_module.PTGLitModule
 
 optimizer:
   _target_: torch.optim.Adam
@@ -14,15 +14,15 @@ scheduler:
   patience: 10
 
 net:
-  _target_: src.models.components.ms_tcs_net.MultiStageModel
+  _target_: tcn_hpl.models.components.ms_tcs_net.MultiStageModel
   num_stages: 4
   num_layers: 10
   num_f_maps: 64
   dim: 204
   num_classes: ${data.num_classes}
 
 criterion:
-  _target_: src.models.components.focal_loss.FocalLoss
+  _target_: tcn_hpl.models.components.focal_loss.FocalLoss
   alpha: 0.25
   gamma: 2
   weight: None
scripts/schedule.sh (4 changes: 2 additions & 2 deletions)

@@ -2,6 +2,6 @@
 # Schedule execution of many runs
 # Run from root folder with: bash scripts/schedule.sh
 
-python src/train.py trainer.max_epochs=5 logger=csv
+python tcn_hpl/train.py trainer.max_epochs=5 logger=csv
 
-python src/train.py trainer.max_epochs=10 logger=csv
+python tcn_hpl/train.py trainer.max_epochs=10 logger=csv
tests/test_datamodules.py (2 changes: 1 addition & 1 deletion)

@@ -3,7 +3,7 @@
 import pytest
 import torch
 
-from src.data.mnist_datamodule import MNISTDataModule
+from tcn_hpl.data.mnist_datamodule import MNISTDataModule
 
 
 @pytest.mark.parametrize("batch_size", [32, 128])
tests/test_eval.py (4 changes: 2 additions & 2 deletions)

@@ -5,8 +5,8 @@
 from hydra.core.hydra_config import HydraConfig
 from omegaconf import DictConfig, open_dict
 
-from src.eval import evaluate
-from src.train import train
+from tcn_hpl.eval import evaluate
+from tcn_hpl.train import train
 
 
 @pytest.mark.slow
tests/test_sweeps.py (2 changes: 1 addition & 1 deletion)

@@ -5,7 +5,7 @@
 from tests.helpers.run_if import RunIf
 from tests.helpers.run_sh_command import run_sh_command
 
-startfile = "src/train.py"
+startfile = "tcn_hpl/train.py"
 overrides = ["logger=[]"]
tests/test_train.py (2 changes: 1 addition & 1 deletion)

@@ -5,7 +5,7 @@
 from hydra.core.hydra_config import HydraConfig
 from omegaconf import DictConfig, open_dict
 
-from src.train import train
+from tcn_hpl.train import train
 from tests.helpers.run_if import RunIf
