Support native-space inputs #2
Currently, ICA-AROMA only supports standard-space inputs (MNI152 with 2 mm³ voxels, I believe). In order to maximize compatibility with other ICA classification methods (e.g., tedana), we need to support native-space inputs as well. Here are some blockers on this:

Comments
Indeed, we should operate in the native space, perhaps using AFNI commands that do not interpolate the functional data to the resolution of the template, or vice versa.
There are some odd ratios though. For example, […]
Depending on what data are available, I think that should work.
1. Relatively. I agree with you that the number of CSF voxels will be different in the native space than in the template space. However, in my opinion this number should not be very different (using the same voxel size, of course), at least for healthy young adult subjects; maybe not for elderly subjects, because they tend to have larger ventricles. I would stick with the same value at the moment and check whether this ratio of voxels changes considerably. Perhaps one possibility is to do a linear interpolation, i.e., compute the ratio of voxels for the MNI template and adapt the csfFract according to the ratio in the native space. Thus, if the ratio is the same, the csfFract would also remain the same.
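One reading of that linear adaptation, as a minimal sketch (the function and argument names are hypothetical, and whether the counts should scale csfFract this way would need validation):

```python
def adapt_csf_fract(csf_fract_mni, n_csf_voxels_native, n_csf_voxels_mni):
    """Scale the csfFract threshold by the ratio of CSF voxel counts.

    If the native-space count matches the MNI-space count, the
    threshold is unchanged, as described above.
    """
    return csf_fract_mni * (n_csf_voxels_native / n_csf_voxels_mni)
```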
Now that we're dropping native-to-MNI transforms (#45), we definitely can't warp the packaged standard-space masks into native space. Which tissue probability maps do we need to get the requisite masks?
I would say the 3 of them: CSF, GM, and WM.
Which tools are you planning to use to compute the masks? Since fMRIprep is also interested in performing ICA-AROMA in native space, we could use the same tools, or the same mask-generation code as tedana, so that we can later incorporate these criteria into it.
One of the main goals is to use pure Python, so I'm planning to use nilearn and nibabel (as in the snippet below).
The EPI mask will have the ventricles filled in, so eroding it won't catch the borders in the edge mask (see below). Perhaps if we erode it after subtracting the CSF mask from the EPI mask? If so, then we can create the brain mask from the EPI data, the CSF mask from a CSF tissue probability map, and the edge mask from a combination of the EPI mask and CSF TPM. I don't think we'll need gray matter or white matter TPMs.
The CSF mask is computed from the CSF tissue probability maps. I suggest we use the same code as fMRIprep to compute the tissue probability maps.
The edge mask is the rim shown in the picture. Maybe we need more than 2 voxels.
The edge mask used by ICA-AROMA isn't just the rim. It's the internal edges too.
Yes, yes, I mean the internal edges too, so maybe dilate the EPI mask by 2-3 voxels, erode it by 2-3 voxels, and then subtract the eroded one from the dilated one.
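A minimal sketch of that dilate/erode/subtract idea with scipy.ndimage (the input filename is hypothetical):

```python
import nibabel as nib
from scipy import ndimage

# Hypothetical input: a binary EPI brain mask in native space
epi_img = nib.load('epi_mask.nii.gz')
mask = epi_img.get_fdata().astype(bool)

# Dilate and erode by 2 voxels, then subtract to keep only the border voxels
dilated = ndimage.binary_dilation(mask, iterations=2)
eroded = ndimage.binary_erosion(mask, iterations=2)
edge = dilated & ~eroded
edge_img = nib.Nifti1Image(edge.astype('uint8'), epi_img.affine, header=epi_img.header)
```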
Oh okay. Yes, I get something pretty good with the following code (4 erosions), using just the provided masks:

```python
import nibabel as nib
from nilearn import image
from scipy import ndimage

# Load the packaged CSF and out-of-brain masks
csf_img = nib.load('mask_csf.nii.gz')
oob_img = nib.load('mask_out.nii.gz')

# Invert the out-of-brain mask to get a brain mask
brain_img = image.math_img('1 - mask', mask=oob_img)

# Remove CSF from the brain mask to get a GM+WM mask
gmwm_img = image.math_img('(brain - csf) > 0', brain=brain_img, csf=csf_img)

# Erode the GM+WM mask, then subtract the eroded core to keep the edges
arr = gmwm_img.get_fdata()
temp = ndimage.binary_erosion(arr, iterations=4)
# Cast bool to uint8 so nibabel can store it
temp_img = nib.Nifti1Image(temp.astype('uint8'), gmwm_img.affine, header=gmwm_img.header)
edge_img = image.math_img('brain - core', brain=gmwm_img, core=temp_img)
```

Plotting the resulting edge image looks quite good. I think that means we can use the same code on the binarized CSF tissue probability map and the nilearn-generated EPI mask we'll use throughout the native-space workflow.
If the above approach is good, then we just need a threshold for the CSF tissue probability map for generating the CSF mask. Then, if the user provides a tissue probability map, we use that. Otherwise, we assume the data are in standard space and use the packaged masks.
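A minimal sketch of that thresholding step (the threshold value and the TPM filename are hypothetical and would need validation):

```python
from nilearn import image

# Hypothetical threshold for binarizing the CSF tissue probability map
CSF_THRESHOLD = 0.9

# Binarize a user-provided CSF TPM (path is hypothetical)
csf_mask_img = image.math_img(
    f'tpm >= {CSF_THRESHOLD}',
    tpm='sub-01_label-CSF_probseg.nii.gz',
)
```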
Quoting the ICA-AROMA paper: […] So, yes, that picture is pretty nice.
My question is whether we can also give the option of computing the tissue probability map within the program. We could use the same code as fMRIprep, nipype, or this algorithm in dipy. Or do we want to minimize dependencies?
I think I'd prefer to accept a TPM as an argument instead of trying to calculate it within the code, at least for now. If the dipy method is pure Python, though, I think that's reasonable to support in the future. What do you think?
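As a sketch of that interface (the flag name is hypothetical):

```python
import argparse

parser = argparse.ArgumentParser(description='ICA-AROMA with native-space inputs')
# Hypothetical flag: if omitted, fall back to the packaged standard-space masks
parser.add_argument('--csf-tpm', default=None,
                    help='Path to a native-space CSF tissue probability map.')
```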
Agreed, we will need to look into the code.
Sorry for joining late. Great discussion!
Then the problem is not the coverage (except for limited-FoV scans). If the resolution is the same and the whole brain is covered, then the ratio should be the same. Probably a simple correcting factor based on the ratio between the volume of the voxels in MNI ((2 mm)³) and in the native space would do.
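A minimal sketch of computing that correcting factor (the function name is hypothetical, and exactly how the factor should be applied to the voxel-count criteria is left open):

```python
import numpy as np
import nibabel as nib

MNI_VOXEL_VOLUME = 2.0 ** 3  # mm^3, for the 2 mm isotropic MNI152 grid

def voxel_volume_factor(native_img_path):
    """Ratio between the MNI and native-space voxel volumes."""
    img = nib.load(native_img_path)
    native_voxel_volume = np.prod(img.header.get_zooms()[:3])
    return MNI_VOXEL_VOLUME / native_voxel_volume
```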
I think this makes sense.
Agreed, I'd leave that responsibility to some other module. Here you really want to test that, given adequate TPMs, you get components similar (or better, if @tsalo and @CesarCaballeroGaudes are willing to eyeball a lot of data) to those that the original implementation gives.