Date of Thesis
Spring 2021
Description
Magnetic resonance imaging (MRI) can help visualize various brain regions. Typical MRI protocols include a T1-weighted sequence (well suited for observing large brain structures), a T2-weighted sequence (useful for visualizing pathology), and a T2-FLAIR sequence (useful for visualizing pathology while suppressing the signal from water). Although these scans provide complementary information, acquiring all of them leads to acquisition times of approximately one hour and an average cost of $2,600, presenting significant barriers to care. To reduce these costs, we present pTransGAN, a generative adversarial network capable of translating both healthy and unhealthy T1 scans into T2 scans. We show that adding non-adversarial perceptual losses, such as style and content losses, improves the translations, notably sharpening the generated images, and makes the model more robust. Previous studies have trained separate models for healthy and unhealthy brain MRI; in a real-world clinical setting, however, choosing between different models can become cumbersome for a medical professional. Moreover, we show that when pTransGAN is trained only on healthy data, it performs poorly on unhealthy data, and vice versa. We therefore also present a novel simultaneous training protocol that allows pTransGAN to train concurrently on healthy and unhealthy data. As measured by novel metrics that closely match the perceptual judgments of human observers, the simultaneously trained pTransGAN outperforms models trained individually on only healthy or only unhealthy data, as well as models from previous literature. In sum, this study presents a perceptually improved algorithm for translating both healthy and unhealthy T1 brain MRI into corresponding T2 scans.
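For readers unfamiliar with the perceptual losses mentioned in the abstract, the sketch below illustrates the general idea: a content loss compares feature maps of a pretrained network between generated and target images, while a style loss compares Gram matrices of those features, and both are added to the adversarial term. This is a minimal PyTorch illustration, not the thesis's actual implementation; the VGG-16 extractor, the chosen layers, and the weights lambda_c and lambda_s are all assumptions for the example.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Assumed feature extractor: activations from a pretrained VGG-16.
# MRI slices are single-channel, so inputs are presumed replicated to 3 channels.
_vgg = vgg16(pretrained=True).features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def vgg_features(x, layers=(3, 8, 15)):
    """Collect activations at selected layers (illustrative choice)."""
    feats = []
    for i, layer in enumerate(_vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram_matrix(f):
    """Channel-correlation (Gram) matrix used by the style loss."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def perceptual_losses(generated, target):
    """Content loss: feature-map distance. Style loss: Gram-matrix distance."""
    content, style = 0.0, 0.0
    for fg, ft in zip(vgg_features(generated), vgg_features(target)):
        content = content + F.mse_loss(fg, ft)
        style = style + F.mse_loss(gram_matrix(fg), gram_matrix(ft))
    return content, style

def generator_loss(adv_loss, generated, target, lambda_c=1.0, lambda_s=10.0):
    """Total generator objective: adversarial term plus weighted perceptual
    terms. lambda_c and lambda_s are hypothetical weights."""
    content, style = perceptual_losses(generated, target)
    return adv_loss + lambda_c * content + lambda_s * style
```

In this formulation, the content term encourages the generated T2 image to preserve the anatomical structure of the target, while the Gram-matrix term encourages matching texture statistics, which is consistent with the abstract's observation that these losses yield sharper translations.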
Keywords
Medical Image Translation, Perceptual Losses, MRI, Computer Vision
Access Type
Honors Thesis
Degree Type
Bachelor of Science in Biomedical Engineering
Major
Engineering
Minor, Emphasis, or Concentration
Computer Science
First Advisor
Dr. Joshua Stough
Second Advisor
Dr. Aalpen Patel
Third Advisor
Dr. Benjamin Wheatley
Recommended Citation
Vaidya, Anurag, "Perceptually Improved Medical Image Translations Using Conditional Generative Adversarial Networks" (2021). Honors Theses. 555.
https://digitalcommons.bucknell.edu/honors_theses/555