
Port SHORE model back from the legacy branch #5

Open
oesteban opened this issue Apr 14, 2021 · 9 comments

Comments

@oesteban
Member

cf. nipreps/eddymotion#15.

@oesteban
Member Author

oesteban commented Dec 9, 2022

@arokem - I believe @tmspvn mentioned at some point that the SHORE model has not fully implemented the .predict() API; can you tell us more about this?

@dPys if you are still around/interested in this - what was your experience?

oesteban referenced this issue in nipreps/eddymotion Dec 9, 2022
Resolves: #38.
@oesteban
Member Author

oesteban commented Dec 9, 2022

Okay, it does look like @tmspvn is right:

NotImplementedError                       Traceback (most recent call last)
Input In [5], in <cell line: 1>()
----> 1 estimator.EddyMotionEstimator.fit(data, models=("FullSHORE", "DKI"), n_jobs=1, align_kwargs={"num_threads": 16})

File /data/home/oesteban/workspace/eddymotion/src/eddymotion/estimator.py:133, in EddyMotionEstimator.fit(dwdata, align_kwargs, models, omp_nthreads, n_jobs, seed, **kwargs)
    127     dwmodel.fit(
    128         data_train[0],
    129         n_jobs=n_jobs,
    130     )
    132 # generate a synthetic dw volume for the test gradient
--> 133 predicted = dwmodel.predict(data_test[1])
    135 # prepare data for running ANTs
    136 tmpdir = Path(tmpdir)

File /data/home/oesteban/workspace/eddymotion/src/eddymotion/model.py:166, in BaseModel.predict(self, gradient, **kwargs)
    163 n_models = len(self._models) if self._model is None and self._models else 1
    165 if n_models == 1:
--> 166     predicted, _ = _exec_predict(self._model, gradient, S0=S0, **kwargs)
    167 else:
    168     S0 = (
    169         np.array_split(S0, n_models) if S0 is not None
    170         else [None] * n_models
    171     )

File /data/home/oesteban/workspace/eddymotion/src/eddymotion/model.py:15, in _exec_predict(model, gradient, chunk, **kwargs)
     13 def _exec_predict(model, gradient, chunk=None, **kwargs):
     14     """Propagate model parameters and call predict."""
---> 15     return np.squeeze(model.predict(gradient, S0=kwargs.pop("S0", None))), chunk

File /data/home/oesteban/workspace/dipy/dipy/reconst/multi_voxel.py:90, in MultiVoxelFit.predict(self, *args, **kwargs)
     88 if not hasattr(self.fit_array[ijk], 'predict'):
     89     msg = "This model does not have prediction implemented yet"
---> 90     raise NotImplementedError(msg)
     92 first_pred = self.fit_array[ijk].predict(*args, **kwargs)
     93 result = np.zeros(self.fit_array.shape + (first_pred.shape[-1],))

NotImplementedError: This model does not have prediction implemented yet
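The error originates in DIPY's MultiVoxelFit.predict, which duck-types on the per-voxel fit objects. A minimal sketch of that same guard, which could be applied before entering the leave-one-out loop (the class and function names here are hypothetical illustrations, not eddymotion's actual code):

```python
class ShoreLikeFit:
    """Hypothetical stand-in for a per-voxel fit object lacking .predict()."""


def check_predict(fit):
    # Mirror the guard in dipy.reconst.multi_voxel.MultiVoxelFit.predict:
    # fail fast with a clear message instead of deep inside the estimator.
    if not hasattr(fit, "predict"):
        raise NotImplementedError(
            "This model does not have prediction implemented yet"
        )
    return fit.predict
```

Calling `check_predict(ShoreLikeFit())` raises NotImplementedError up front, before any registration work has been done.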

@oesteban
Member Author

oesteban commented Dec 9, 2022

dipy/dipy#2691

@tmspvn

tmspvn commented Dec 9, 2022 via email

@oesteban
Member Author

oesteban commented Dec 9, 2022

Okay, indeed, it seems MAP-MRI implements predict (https://github.com/dipy/dipy/blob/master/dipy/reconst/mapmri.py#L974). I'll give it a go.

@dPys

dPys commented Dec 13, 2022

I'm not super familiar with MAP-MRI, but it seems worth a try given that it does have a predict implementation available.

I suppose a hypothetical MAPMRIModel class could just look something like:

class MAPMRIModel(BaseModel):
    """A wrapper of :obj:`dipy.reconst.mapmri.MapmriModel`."""

    _modelargs = (
        "radial_order",
        "laplacian_regularization",
        "laplacian_weighting",
        "pos_grid",
        "pos_radius",
        "anisotropic_scaling",
        "eigenvalue_threshold",
    )
    _model_class = "dipy.reconst.mapmri.MapmriModel"

where anisotropic_scaling would be set to True; otherwise, it would just amount to a basis function equivalent to 3D-SHORE, as @tmspvn rightly points out.

@tmspvn -- I think my only question would be: how do you envision tuning these parameters (or developing a "warm-start" methodology that does this) in a way that we can be at least reasonably sure will generalize to different acquisition schemes while delivering tractable runtimes?
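As a sketch of how such a wrapper could resolve its _modelargs into the underlying DIPY class at fit time (the base-class body below is a hypothetical illustration, not eddymotion's actual implementation):

```python
from importlib import import_module


class BaseModelSketch:
    """Hypothetical base class: keeps only the kwargs named in
    ``_modelargs`` and instantiates the DIPY class in ``_model_class``."""

    _modelargs = ()
    _model_class = None

    def __init__(self, gtab, **kwargs):
        self._gtab = gtab
        # Retain only the keyword arguments this model understands.
        self._kwargs = {k: v for k, v in kwargs.items() if k in self._modelargs}
        self._model = None

    def fit(self, data):
        # Resolve the dotted path lazily, so DIPY is only imported on use.
        module_name, class_name = self._model_class.rsplit(".", 1)
        model_cls = getattr(import_module(module_name), class_name)
        self._model = model_cls(self._gtab, **self._kwargs).fit(data)
        return self._model
```

A subclass such as the hypothetical MAPMRIModel above would then only declare `_modelargs` and `_model_class`.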

@tmspvn

tmspvn commented Dec 13, 2022

Hi,
I am not familiar with the model either, but I would start with the three versions they suggest. Since we are using it as a target, I would also go with a fully constrained and regularized version like their third model, with a laplacian_weighting that "fits all", like the 0.2 they claim works well on HCP data. Something like the code below, leaving the rest of the settings at their defaults for now:

from dipy.reconst import mapmri

# gtab is a dipy GradientTable built for the acquisition
map_model_both_aniso = mapmri.MapmriModel(gtab,
                                          radial_order=6,
                                          laplacian_regularization=True,
                                          laplacian_weighting=.2,
                                          positivity_constraint=True)

but this is just guesswork.

The thing that bugs me the most is that the gradient table requires the big_delta and small_delta parameters, which cannot be found in the headers or the JSON sidecar; one must have access to the actual details of the sequence to obtain those values, which is not always the case.

source:
https://dipy.org/documentation/1.0.0./examples_built/reconst_mapmri/
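On the big_delta/small_delta point: DIPY's GradientTable derives the effective diffusion time from those two timings as tau = Δ − δ/3 (the standard pulsed-gradient correction), which is what MAP-MRI ultimately consumes. A minimal sketch of the relation (the function name is mine, not DIPY's):

```python
def effective_diffusion_time(big_delta, small_delta):
    """tau = big_delta - small_delta / 3, matching
    dipy.core.gradients.GradientTable.tau (pulsed-gradient approximation)."""
    return big_delta - small_delta / 3.0
```

For example, Δ = 30 ms and δ = 15 ms (in seconds) give tau = 0.025 s.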

@dPys

dPys commented Dec 13, 2022

Good points @tmspvn !

I wonder whether there might be more creative ways of deriving the deltas from other, common elements of the DICOM header (@arokem might know?). This reminds me of the old readout/dwell-time problem when configuring FSL's EDDY. You'd think the manufacturers would have caught on to this gap by now...

@arokem

arokem commented Dec 13, 2022

I don't think that these can be generally/easily derived without knowing how they were set.

However, it looks like there's a "reasonable default" set in the model code, so I'd say maybe it's not necessary for our purposes. I think it's an empirical question.
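For reference, the "reasonable default" appears to be tau = 1/(4π²), which MapmriModel falls back to when the gradient table carries no big_delta/small_delta, otherwise using Δ − δ/3; this is worth verifying against the installed DIPY version. A hedged sketch of that selection logic:

```python
import math


def mapmri_tau(big_delta=None, small_delta=None):
    # Hedged sketch of MapmriModel's diffusion-time selection: fall back
    # to 1 / (4 * pi**2) when the sequence timings are unknown.
    if big_delta is None or small_delta is None:
        return 1.0 / (4.0 * math.pi ** 2)
    return big_delta - small_delta / 3.0
```

With that default, tau is a fixed constant (about 0.0253), so predictions would be internally consistent even if the absolute diffusion time is wrong, which is why it may be good enough for a registration target.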

@oesteban oesteban transferred this issue from nipreps/eddymotion Dec 19, 2024