
Augmentation strategy for generalization across magnification #16

Open
ziw-liu opened this issue Apr 21, 2023 · 1 comment

ziw-liu commented Apr 21, 2023

Different magnifications of the microscope alter the spatial sampling along all three dimensions, and the change in Z differs from that in XY. If we want to train a model on a single dataset that generalizes across magnifications, we need augmentation strategies that simulate these changes in spatial sampling.

@mattersoflight pointed out that scaling down is not a good approximation of reducing magnification. Blurring (e.g. Gaussian filtering) before rescaling can simulate the integration of information along the light path and reduce aliasing artifacts.
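A minimal sketch of this blur-then-rescale augmentation, using `scipy.ndimage` on a ZYX volume. The anisotropic sigma heuristic (proportional to the per-axis downscaling factor) is an assumption for illustration, not a calibrated PSF model, and the scale factors in the usage line are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def simulate_lower_magnification(volume, scale_zyx, blur_sigma_scale=0.5):
    """Approximate a lower-magnification acquisition of a ZYX volume.

    Gaussian blurring before downscaling stands in for the integration of
    light over a larger sample region per voxel, and suppresses the aliasing
    that plain downsampling would introduce.
    """
    scale_zyx = np.asarray(scale_zyx, dtype=float)
    # Anisotropic sigma: blur more along axes that are downscaled more;
    # leave axes that are not downscaled unblurred.
    sigma = np.where(scale_zyx < 1.0, blur_sigma_scale / scale_zyx, 0.0)
    blurred = gaussian_filter(volume, sigma=sigma)
    return zoom(blurred, zoom=scale_zyx, order=1)

vol = np.random.rand(16, 64, 64).astype(np.float32)
# Hypothetical magnification change: XY downscaled more strongly than Z.
aug = simulate_lower_magnification(vol, scale_zyx=(0.8, 0.63, 0.63))
```

Note that Z and XY get different scale factors, reflecting the point above that the change in Z sampling differs from that in XY.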

Another question is how to determine the training-time Z sampling to better utilize defocus information. This can potentially be estimated from the magnification, the Z step size, and the NA of illumination and detection.
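One way such an estimate could look, as a sketch: the common wide-field axial-resolution approximation gives a Nyquist Z step from the detection NA and immersion index. Treating detection alone (ignoring the illumination NA, which would tighten the estimate for partially coherent brightfield) and the oil-immersion index and 550 nm wavelength below are simplifying assumptions:

```python
import math

def nyquist_z_step_nm(wavelength_nm: float, na: float, n_immersion: float) -> float:
    """Estimate the Nyquist axial sampling step (nm) for wide-field detection.

    Uses the axial-resolution approximation
        r_z = wavelength / (n - sqrt(n^2 - NA^2))
    and samples at half that, r_z / 2.
    """
    r_z = wavelength_nm / (n_immersion - math.sqrt(n_immersion**2 - na**2))
    return r_z / 2.0

# e.g. a 1.3 NA objective, assuming oil immersion (n ~= 1.515) and 550 nm light
step = nyquist_z_step_nm(550.0, 1.3, 1.515)  # roughly a few hundred nm
```

Comparing this estimate against the Z step actually used during acquisition would indicate how much the training-time Z sampling can be decimated or must be interpolated.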

mattersoflight transferred this issue from mehta-lab/microDL on Jul 13, 2023
mattersoflight commented Jul 29, 2023

@ziw-liu

> Another question is that how do we determine the training time Z sampling for better utilization of defocus information. This can potentially be estimated from magnification, Z step size, and the NA of illumination and detection.

A meta-comment: we should capture the most information in the training data, and we can always augment the data with CV filters or optical filters to mimic the loss of information that occurs in test data.

Specifically, I think the following are good settings for the types of virtual staining models we need:

  • 63x/1.3 NA objective and Nyquist sampling in brightfield and all fluorescence channels. The high spatial resolution and corresponding sampling will help us see small structures such as nucleolar compartments and lipid droplets.
  • Ensure that the data is acquired to capture all of the sample plus substantial blur (~10 slices above and below the edges of the sample).
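A quick check of whether a given objective/camera combination meets the lateral Nyquist criterion mentioned above, as a sketch. The 6.5 µm sCMOS pixel size and 550 nm wavelength are illustrative assumptions, not values from this thread:

```python
def nyquist_xy_pixel_nm(wavelength_nm: float, na: float) -> float:
    """Nyquist-limited pixel size at the sample: half the Abbe lateral
    resolution (wavelength / (2 * NA)), i.e. wavelength / (4 * NA)."""
    return wavelength_nm / (4.0 * na)

def sample_pixel_nm(camera_pixel_um: float, magnification: float) -> float:
    """Physical size of one camera pixel projected onto the sample, in nm."""
    return camera_pixel_um * 1000.0 / magnification

# Hypothetical check: 63x magnification, 1.3 NA, 6.5 um camera pixel, 550 nm.
required = nyquist_xy_pixel_nm(550.0, 1.3)
actual = sample_pixel_nm(6.5, 63.0)
meets_nyquist = actual <= required
```

Under these assumed numbers the projected pixel is slightly smaller than the Nyquist limit, i.e. the 63x/1.3 NA configuration would satisfy lateral Nyquist sampling.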
