Mini-tutorial: Using Volume Segmantics to train a model to segment blood vessels from synchrotron X-ray micro-CT data of human placental tissue
The data for this tutorial comes from experiments described in this paper. The full dataset is publicly available on EMPIAR. In this tutorial we will be using a small subset of this data.
Installation and configuration are also described in the README for the Volume Segmantics repository.

- Create and activate a conda environment or pip virtualenv, then install Volume Segmantics with `pip install volume-segmantics`.
- Create a new directory for working through the tutorial.
- Download and unzip the `volseg-settings` directory and the `training-data` directory into your new folder (a quick way to inspect these HDF5 files with Python is sketched after this list).
- If required, edit settings in the file `volseg-settings/2d_model_train_settings.yaml`. For example, you may want to change the model architecture or the number of training epochs.
- Train the model using the command `model-train-2d --data training-data/vessels_256cube_DATA.h5 --labels training-data/vessels_256cube_LABELS.h5`. A model will be saved to your working directory. In addition, a figure showing "ground truth" segmentation vs model segmentation for some images in the validation set will be saved. Look at this to get an idea of how your model is performing.
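Before training, it can be worth checking that the data and label volumes look as expected. The sketch below is only an illustration: it assumes `h5py` and `numpy` are available in your environment, and it does not assume the names of the datasets inside the files; it simply lists whatever it finds and, for the labels file, prints the class values present.

```python
import h5py
import numpy as np

# Paths match the tutorial's training-data directory.
FILES = [
    "training-data/vessels_256cube_DATA.h5",
    "training-data/vessels_256cube_LABELS.h5",
]

for path in FILES:
    print(path)
    with h5py.File(path, "r") as f:
        datasets = []
        # Walk the whole file and collect every dataset, whatever it is called.
        f.visititems(
            lambda name, obj: datasets.append((name, obj))
            if isinstance(obj, h5py.Dataset)
            else None
        )
        for name, dset in datasets:
            print(f"  {name}: shape={dset.shape}, dtype={dset.dtype}")
            if "LABELS" in path:
                # Show which class values are present in the label volume.
                print(f"  label values: {np.unique(dset[...])}")
```

The data and label volumes should have matching shapes, and the labels file should contain a small set of integer class values (e.g. background and vessel classes); if not, check that you unzipped the right files.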
We will use the model trained in the steps above to predict a segmentation for a larger region of data. Since this region is not included in the tutorial download, the data file needs to be downloaded separately from Zenodo.
- You can download the file using your web browser via this link; alternatively, you can use a command-line tool such as `curl`, like so: `curl -o specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5 https://zenodo.org/api/files/fc8e12d1-4256-4ed9-8a23-66c0d6c64379/specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5`
- If required, edit settings in the file `volseg-settings/2d_model_predict_settings.yaml`. For example, you may wish to change the strategy used for prediction by changing the `quality` setting; the defaults should give a decent result.
- Predict the segmentation using the command `model-predict-2d <name of model file>.pytorch specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5`. An HDF5 file containing the segmentation prediction will be saved in your working directory.
- To view the HDF5 output, you can use a program such as DAWN; a lightweight Python alternative is sketched at the end of this section.
For example, here is a volume representation of the output, along with slices in the three orthogonal planes, viewed with DAWN. This prediction was made using a U-Net with a pre-trained ResNet-34 encoder and the "Medium" (3-axis) quality setting.
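If you don't have DAWN to hand, a quick look at the predicted volume is also possible with `h5py` and `matplotlib`. The sketch below is only an illustration, not part of Volume Segmantics itself: the output filename is a placeholder (point it at whatever file `model-predict-2d` saved in your working directory), the plotting relies on `matplotlib` being installed, and rather than assuming the dataset name inside the file it takes the first dataset it finds and shows the middle slice in each of the three orthogonal planes.

```python
import h5py
import matplotlib.pyplot as plt

# Placeholder path -- replace with the HDF5 file that model-predict-2d
# saved in your working directory.
PREDICTION_FILE = "path/to/prediction_output.h5"

with h5py.File(PREDICTION_FILE, "r") as f:
    datasets = []
    # Take whatever datasets are in the file rather than assuming a name.
    f.visititems(
        lambda name, obj: datasets.append(obj)
        if isinstance(obj, h5py.Dataset)
        else None
    )
    seg = datasets[0][...]  # load the first dataset into memory

# Assuming a 3-D volume in (z, y, x) order, as the input filename suggests.
z, y, x = (s // 2 for s in seg.shape)
views = [
    ("xy plane (middle z)", seg[z, :, :]),
    ("xz plane (middle y)", seg[:, y, :]),
    ("yz plane (middle x)", seg[:, :, x]),
]

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (title, plane) in zip(axes, views):
    ax.imshow(plane, cmap="gray", interpolation="nearest")
    ax.set_title(title)
    ax.axis("off")
plt.tight_layout()
plt.show()
```

This won't give you the 3-D volume rendering shown in the figure above, but it is usually enough to check that the vessels have been picked out sensibly before moving on.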