Usage
Learn how to use deepBlink with the detailed descriptions below.
To install deepBlink you need a Python installation. There are numerous ways of installing Python; we recommend Anaconda or Python's official website. Make sure your Python version is between 3.7 and 3.10. You can check it with python -V. Now you can install deepBlink using Python's package manager pip:
pip install deepblink
Check if deepBlink is properly installed using pip show deepblink or deepblink -V. You are now ready for takeoff.
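If you want to keep deepBlink separate from other Python packages, here is a minimal sketch using a virtual environment (assuming a Unix-like shell; the environment name .venv is arbitrary):
python -m venv .venv
source .venv/bin/activate
pip install deepblink
deepblink -V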
Prediction requires two essential components:
- An image to be predicted
- A pre-trained model. Depending on your specific dataset, you might want to train your own or download one of our pre-trained models from figshare or via deepblink download.
To run a model on an image, provide both required parameters:
deepblink predict --model MODEL --input INPUT
In place of MODEL and INPUT, provide the absolute or relative path to the model and input respectively (see here for more information on paths). Note that INPUT can be either a file or a folder.
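For example, a minimal prediction run on a whole folder of images could look like this (the model file and folder names are hypothetical):
deepblink predict --model models/smfish.h5 --input images/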
There are several optional parameters which might be useful depending on your specific use case:
- --output: Output file/folder location [default: input location with input-image base name]
- --radius: If given, calculate the integrated intensity in the given radius around each coordinate. Set the radius to zero if only the central pixel's intensity should be calculated.
- --shape: If given, uses the specified dimension arrangement. Otherwise falls back to defaults. Must be in the format "(x,y,z,t,3)" using the specified characters. Please read below to learn more.
- Check the other parameters --probability and --pixel-size using the help text deepblink predict --help.
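As an illustration, here is a run that saves its output to a separate folder and computes integrated intensities in a 3-pixel radius around each coordinate (all file and folder names are hypothetical):
deepblink predict --model models/smfish.h5 --input cell01.tif --output results/ --radius 3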
The image shape describes how all image dimensions are arranged. If you are coming from FIJI, this concept may be unfamiliar, so we provide a module to easily determine your image's shape. If our automated prediction is wrong, you can provide your own shape. Please run:
deepblink check INPUT
Please provide a single image as input. deepBlink will then take a look at your image and tell you what it predicts. Please carefully read the output and pass the --shape parameter to deepblink predict if the automatic prediction isn't correct.
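For example, to check an image and then override a wrong guess (the file name and the "(t,x,y)" arrangement are illustrative only; use the characters that match your data):
deepblink check timeseries.tif
deepblink predict --model models/smfish.h5 --input timeseries.tif --shape "(t,x,y)"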
To visualize the output from deepblink predict, or to verify the labels of a dataset if training didn't go according to plan, you can use the visualize submodule. Please run:
deepblink visualize --image IMAGE --prediction PREDICTION
Note that this will open a matplotlib viewer, so if you are used to running deepblink on a server you might have to install things locally. To visualize an npz dataset, run:
deepblink visualize --dataset DATASET --subset SUBSET --index INDEX
Alternatively, you can use Fiji's XY coordinates or any other application of your choice to overlay the prediction coordinates with the input image.
There are a lot of configuration options for training. Because we don't want you to input a mile-long configuration through flags, we instead use a config.yaml file. First, let's generate one with:
deepblink config
If you don't like the name config.yaml, feel free to change it to your desired name using the --output flag.
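For example (my_config.yaml is just an illustrative name):
deepblink config --output my_config.yaml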
Opening the configuration file, you will see many tuneable settings. The most important one is the dataset used for training. You can either use your own dataset (click here for details on how to create one) or download one of our benchmark datasets here. After changing the path to your dataset (dataset_args > name) and the output path (savedir), we can go ahead and train using:
deepblink train --config CONFIG
If you use a GPU for training, please don't use "CUDA_VISIBLE_DEVICES" and instead pass the GPU's number using the --gpu flag!
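For example, to train with the generated configuration on the first GPU:
deepblink train --config config.yaml --gpu 0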
To use existing models as a starting point and only fine-tune on new or slightly different data, you can set the train_args > pre_train option in the config.yaml to a model and train as usual.
If you want to visualize the training beyond the terminal output to get all loss curves and some example images, you can use Weights & Biases. Create a wandb account, log in using their command line interface and API key, and set the flag use_wandb in the configuration file to True.
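Putting the pieces together, the relevant entries of a config.yaml might look roughly like this. This is an illustrative sketch only: all paths are hypothetical, your generated file will contain many more settings, and the exact nesting of use_wandb may differ in your version.
dataset_args:
  name: /path/to/dataset.npz
savedir: /path/to/output
train_args:
  pre_train: /path/to/pretrained_model.h5
use_wandb: True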