This code trains a generative adversarial network (GAN) for accelerating CEST and MT quantitative parameter mapping.
- NumPy
- SciPy.io
- Functools
- TensorFlow
- Keras
- Matplotlib
- A pre-trained VGG network is required for implementing the perceptual loss; it can be downloaded here. Place the file in the top level of this repository.
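As a rough illustration of how a VGG-based perceptual loss is typically wired up in Keras, the sketch below compares intermediate VGG16 feature maps of the target and the prediction. This is an assumption-laden example, not the repository's implementation: the layer choice (`block3_conv3`) and the use of `tf.keras.applications.VGG16` with `weights=None` (so the example runs offline) are illustrative; in practice the downloaded pre-trained VGG file would supply the weights.

```python
import tensorflow as tf

# Hypothetical sketch: build VGG16 feature extractor for a perceptual loss.
# weights=None keeps this example self-contained; the real pipeline would
# load the pre-trained VGG file placed at the top level of the repository.
vgg = tf.keras.applications.VGG16(
    include_top=False, weights=None, input_shape=(128, 128, 3)
)
feature_model = tf.keras.Model(
    inputs=vgg.input, outputs=vgg.get_layer("block3_conv3").output
)
feature_model.trainable = False  # the loss network is frozen during training

def perceptual_loss(y_true, y_pred):
    """Mean squared error between VGG feature maps of target and prediction."""
    return tf.reduce_mean(
        tf.square(feature_model(y_true) - feature_model(y_pred))
    )

# Example call on random 128x128 3-channel batches.
a = tf.random.normal((1, 128, 128, 3))
b = tf.random.normal((1, 128, 128, 3))
loss = perceptual_loss(a, b)
```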
- Image files for training
- A single L-arginine phantom slice is included as a demonstration for both training and inference. The network expects sets of 9 128x128 L2-normalized MRF images per slice as input, and sets of 2 128x128 linearly scaled CEST maps (concentration and chemical exchange rate) as output.
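The expected data layout above can be sketched as follows. This is a minimal NumPy example under stated assumptions: the L2 normalization is taken per pixel across the 9-image contrast dimension, and the channel-last transpose reflects a common Keras convention; the repository's actual preprocessing may differ.

```python
import numpy as np

# Hypothetical slice: 9 MRF images, each 128x128 (random stand-in data).
mrf = np.random.rand(9, 128, 128).astype(np.float32)

# Per-pixel L2 normalization across the 9-image dimension (assumed convention).
norm = np.linalg.norm(mrf, axis=0, keepdims=True)
mrf_normalized = mrf / np.maximum(norm, 1e-12)  # guard against division by zero

# Channel-last input for Keras: (128, 128, 9).
x = np.transpose(mrf_normalized, (1, 2, 0))

# Corresponding target: 2 linearly scaled 128x128 CEST maps
# (concentration and chemical exchange rate), here random placeholders.
y = np.random.rand(128, 128, 2).astype(np.float32)
```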
- Trained networks can be found at: https://figshare.com/s/c91bf3f02e91f91edaf9
- Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017:1125-1134.
- Brownlee J. How to Implement Pix2Pix GAN Models From Scratch With Keras. Machine Learning Mastery. Available from: https://machinelearningmastery.com/how-to-implement-pix2pix-gan-models-from-scratch-with-keras/. Accessed October 14, 2021.
- Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision. Springer, Cham; 2016.
- Jin CB. Real-Time Style Transfer. 2018. https://github.com/ChengBinJin/Real-time-style-transfer/
- Weigand-Whittier J, Sedykh M, Herz K, et al. Accelerated and quantitative three-dimensional molecular MRI using a generative adversarial network. Magn Reson Med. 2022;1-13. doi:10.1002/mrm.29574.