Deeper dive into the code
Each experiment is expressed as a pipeline, which is the central idea behind this open solution. A pipeline consists of steps - single pieces of computation, such as model training, postprocessing or ensembling. Each user can define his/her own steps. A pipeline - once defined - is executed from start to finish, step by step, fully automatically.
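The snippet below is a minimal sketch of this pipeline-of-steps idea. The Step class, its attributes and the fit_transform method shown here are simplified assumptions for illustration, not the repository's actual API.

```python
# Illustrative sketch only - names and signatures are simplified assumptions,
# not the actual classes defined in this repository.

class Step:
    """A single piece of computation, e.g. model training or postprocessing."""

    def __init__(self, name, transformer, inputs=None):
        self.name = name
        self.transformer = transformer   # object exposing fit_transform(...)
        self.inputs = inputs or []       # upstream steps feeding this step

    def fit_transform(self, data):
        # Execute all upstream steps first, then this step's own computation.
        upstream_outputs = {step.name: step.fit_transform(data)
                            for step in self.inputs}
        return self.transformer.fit_transform(data, upstream_outputs)
```

In this view, a whole pipeline is simply its final step: executing it from start to finish means recursively running every upstream step before the last one.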
Pipelines are defined in the pipelines.py file and each has its own name. Once you have your pipeline ready, you can execute it using the main.py file. This file collects utilities that make working with pipelines easy. In order to run unet, you execute a command like this:
(cloud) neptune send main.py --worker gcp-gpu-medium --config neptune.yaml --environment pytorch-0.2.0-gpu-py3 -- train_evaluate_predict_pipeline --pipeline_name unet
(local) neptune run main.py -- train_evaluate_predict_pipeline --pipeline_name unet
Each pipeline accepts parameters. They are encoded as a Python dict and stored in pipeline_config.py.
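For illustration, that parameters dictionary could look roughly like the sketch below. The variable name, keys and values are placeholders made up for this example, not the actual configuration in pipeline_config.py.

```python
# Hypothetical shape of the parameters kept in pipeline_config.py.
# The name SOLUTION_CONFIG and all keys/values below are examples only.
SOLUTION_CONFIG = {
    'loader': {'batch_size': 32,
               'num_workers': 4},
    'unet': {'learning_rate': 1e-4,
             'epochs': 100},
    'postprocessing': {'threshold': 0.5},
}
```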
As you can see in pipelines.py, the same pipeline name - for example unet - is associated with three entities, that is, the unet_pipeline variants for training, validation and testing. Thanks to this, you can manipulate each stage of the experiment independently. Moreover, you can run them separately: neptune run main.py -- train_pipeline --pipeline_name unet_pipeline or all at once: neptune run main.py -- train_evaluate_predict_pipeline --pipeline_name unet_pipeline.
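A hypothetical sketch of how a single pipeline name could be tied to its three entities inside pipelines.py is shown below; the function names and the registry layout are assumptions for illustration only.

```python
# Hypothetical registry layout - not the repository's exact code.

def unet_training(config):
    ...   # build the training variant of the unet pipeline

def unet_validation(config):
    ...   # build the validation variant

def unet_prediction(config):
    ...   # build the prediction (test-time) variant

PIPELINES = {
    'unet': {'train': unet_training,
             'evaluate': unet_validation,
             'predict': unet_prediction},
}
```

Keeping the three variants under one name is what lets main.py run any stage on its own or chain them together.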
main.py offers multiple schemes for running experiments, such as training only, or training, validation, prediction and Kaggle submission generation. Depending on your needs, you may want to try various schemes. main.py offers the following schemes (a simplified sketch of how they could be dispatched follows the list below):
train_pipeline
evaluate_pipeline
predict_pipeline
train_evaluate_pipeline
evaluate_predict_pipeline
train_evaluate_predict_pipeline
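Below is a simplified sketch of how such a dispatcher could be wired up with the click library. The helper functions train, evaluate and predict are placeholder stubs standing in for the real pipeline-running logic in main.py, and the actual file may differ in its details.

```python
# Simplified sketch of a main.py-style dispatcher (assumes the click library).
# train/evaluate/predict below are placeholder stubs, not the real helpers.
import click


def train(pipeline_name): ...      # fit the pipeline's training entity
def evaluate(pipeline_name): ...   # score it on validation data
def predict(pipeline_name): ...    # generate predictions / a submission


@click.group()
def action():
    pass


@action.command()
@click.option('--pipeline_name', help='pipeline name defined in pipelines.py')
def train_pipeline(pipeline_name):
    train(pipeline_name)


@action.command()
@click.option('--pipeline_name', help='pipeline name defined in pipelines.py')
def train_evaluate_predict_pipeline(pipeline_name):
    # Full scheme: training, then validation, then prediction.
    train(pipeline_name)
    evaluate(pipeline_name)
    predict(pipeline_name)


if __name__ == '__main__':
    action()
```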
Additionally, we included two helper routines for metadata and mask generation:
- prepare_metadata - run with: neptune run main.py -- prepare_metadata
- prepare_masks - run with: neptune run main.py -- prepare_masks
- Solution 1: U-Net
- Solution 2: Multi-output U-Net
- Solution 3: Improved Multi-output U-Net
- Solution 4: U-Net with weighted loss and morphological postprocessing
- Solution 5: U-Net specialists, faster processing, weighted loss function and improved validation