DIRECT is a Python, end-to-end pipeline for solving inverse problems that arise in medical image processing. It is built with PyTorch and implements state-of-the-art deep learning solvers for imaging inverse problems such as denoising, dealiasing, and reconstruction. By defining a base forward operator, which may be linear or non-linear, DIRECT can be used to train models that recover images, such as MRIs, from partially observed or noisy input data.
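As a concrete illustration, a common forward operator in accelerated MRI is a subsampled Fourier transform: the image is mapped to k-space and only a fraction of the measurements are kept. The snippet below is a minimal, self-contained PyTorch sketch of such an operator; it is not DIRECT's own API, and the class and variable names (`SubsampledFourier`, `mask`) are illustrative assumptions.

```python
# Minimal sketch (not DIRECT's API): a subsampled Fourier forward operator,
# as used in accelerated MRI, written with plain PyTorch.
import torch


class SubsampledFourier(torch.nn.Module):
    """Forward operator A(x) = M * F(x): a centered 2D FFT followed by a sampling mask."""

    def __init__(self, mask: torch.Tensor):
        super().__init__()
        # Binary (0/1) mask selecting which k-space locations are measured.
        self.register_buffer("mask", mask)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Centered 2D FFT of the complex-valued image.
        kspace = torch.fft.fftshift(
            torch.fft.fft2(torch.fft.ifftshift(image, dim=(-2, -1)), norm="ortho"),
            dim=(-2, -1),
        )
        # Keep only the sampled k-space locations.
        return kspace * self.mask


# Usage: simulate partially observed k-space from a ground-truth image.
image = torch.randn(1, 320, 320, dtype=torch.complex64)
mask = (torch.rand(1, 1, 320) < 0.25).to(torch.float32)  # sample ~25% of k-space columns
operator = SubsampledFourier(mask)
measurements = operator(image)  # the input a reconstruction model would learn to invert
```

A reconstruction network trained in this setting receives `measurements` (and typically the mask) and learns to recover `image`; swapping in a different forward operator changes the inverse problem being solved.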
DIRECT implements inverse problem solvers such as the Learned Primal Dual algorithm, the Recurrent Inference Machine, and the Recurrent Variational Network, which were part of the winning solutions in the Facebook & NYU fastMRI challenge in 2019 and the Calgary-Campinas MRI reconstruction challenge at MIDL 2020. For a full list of the baselines currently implemented in DIRECT, see here.
Check out the documentation for installation and a quick start.
Baseline model configurations for each project are provided in the projects folder.
We provide a set of baseline results and trained models in the DIRECT Model Zoo. Baselines and trained models include the Recurrent Variational Network (RecurrentVarNet), the Recurrent Inference Machine (RIM), the End-to-end Variational Network (VarNet), the Learned Primal Dual Network (LPDNet), the X-Primal Dual Network (XPDNet), the KIKI-Net, the U-Net, the Joint-ICNet, and the AIRS Medical fastMRI model (MultiDomainNet).
DIRECT is not intended for clinical use. DIRECT is released under the Apache 2.0 License.
If you use DIRECT in your own research, or want to refer to baseline results published in the DIRECT Model Zoo, please use the following BibTeX entry:
@misc{DIRECTTOOLKIT,
  author = {Yiasemis, George and Moriakov, Nikita and Karkalousos, Dimitrios and Caan, Matthan and Teuwen, Jonas},
  title = {DIRECT: Deep Image REConstruction Toolkit},
  howpublished = {\url{https://github.com/NKI-AI/direct}},
  year = {2021}
}