Description
In the release_1 branch linked in "Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy", it appears that when training fnet_nn_2d.py, both the source and target images are standardized independently with a standard scaler (subtract the image pixel mean, then divide by the image pixel standard deviation). This standardization during training would prevent the model's outputs from being reconstructed back to image intensities, since the mean and standard deviation of the predicted/output image stain are unknown.
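To illustrate the concern, here is a minimal sketch (our own code, not taken from the repository; the arrays and names are placeholders) of per-image z-score standardization applied independently to source and target, and of why the target statistics needed to undo it are unavailable at inference:

```python
import numpy as np

def standardize(img: np.ndarray) -> np.ndarray:
    """Per-image z-score standardization: subtract the pixel mean, divide by the pixel std."""
    img = img.astype(np.float32)
    return (img - img.mean()) / img.std()

# Stand-in 16-bit images; in practice these would be a source/target training pair.
rng = np.random.default_rng(0)
source = rng.integers(0, 2**16, size=(512, 512), dtype=np.uint16)
target = rng.integers(0, 2**16, size=(512, 512), dtype=np.uint16)

# Each image is scaled with its *own* statistics during training.
source_z = standardize(source)
target_z = standardize(target)

# At inference only the source image (and its statistics) is available, so a
# prediction living in standardized target space cannot be mapped back to
# intensities without the target's mean and std:
# prediction_intensities = prediction * target.std() + target.mean()  # unknown for new data
```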
Expected Behavior
I would expect that the outputs of the trained model cannot be correctly reconstructed, e.g. back to an 8-bit or 16-bit representation.
Reproduction
We have used the fnet_nn_2d.py model for a similar task: predicting GOLD-stained nuclei images from DAPI nuclei images. However, our approach normalizes the source and target images by the maximum pixel value for 16-bit images. This simplifies the reconstruction of the normalized output images back to a 16-bit format, since the normalization factor is already known. We would also appreciate any suggestions you may have for improving performance when training the fnet_nn_2d.py model.
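For reference, a minimal sketch of the fixed-factor normalization we use (again our own illustrative code; the function names are ours, not part of the repository), where the known dtype maximum makes the inverse mapping back to 16-bit trivial:

```python
import numpy as np

U16_MAX = np.iinfo(np.uint16).max  # 65535, known at both training and inference time

def normalize_u16(img: np.ndarray) -> np.ndarray:
    """Scale a 16-bit image into [0, 1] using the fixed dtype maximum."""
    return img.astype(np.float32) / U16_MAX

def denormalize_u16(img: np.ndarray) -> np.ndarray:
    """Map a [0, 1] prediction back to 16-bit; the scale factor is known a priori."""
    return np.clip(np.rint(img * U16_MAX), 0, U16_MAX).astype(np.uint16)

# Round trip on a stand-in image: the normalization is exactly invertible
# because the factor does not depend on any per-image statistics.
rng = np.random.default_rng(0)
img = rng.integers(0, 2**16, size=(512, 512), dtype=np.uint16)
assert np.array_equal(img, denormalize_u16(normalize_u16(img)))
```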
Environment
This is our conda environment.