Advanced image normalization to improve the generalizability of radiomic features

By Leihao Wei
Electrical and Computer Engineering Ph.D. candidate — Dr. Will Hsu Lab

Computed tomography (CT) plays an integral role in the screening and diagnosis of a wide range of diseases. The availability of large CT datasets, coupled with advances in medical image analysis, has led to a proliferation of machine learning (ML) models that use image-derived features for prediction and classification. One significant barrier is that variations in how CT scans are acquired and reconstructed strongly affect the resulting images, yielding radiomic features with poor reproducibility. Scans can look very different depending on the scanner vendor, protocol, and acquisition parameters. These acquisition differences alter the morphology- and texture-based features used to describe diseases such as lung nodules, leading to inconsistencies in the detection and characterization of lesions. In this study, we use a generative adversarial network (GAN) to normalize images acquired under a variety of conditions to a common standard reference condition. Our model combines all of these normalization tasks into a single unified model that can be trained end to end. We aim not only to normalize images so that they have similar appearances, but also to ensure that a similar degree of task-based performance is achieved because the underlying image features are more consistent.
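To make the idea concrete, the following is a minimal sketch (not the authors' actual architecture) of GAN-based image normalization in PyTorch: a generator maps a CT patch acquired under some condition toward a reference condition, while a discriminator judges whether a patch matches the reference appearance. All class names, layer sizes, and the assumption of paired source/reference patches (e.g., the same scan reconstructed two ways) are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch of GAN-based CT normalization; names and
# hyperparameters are illustrative, not from the original work.
import torch
import torch.nn as nn

class Normalizer(nn.Module):
    """Generator: small residual CNN mapping an input patch toward the reference condition."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, x):
        # Predict a residual correction rather than the full image.
        return x + self.net(x)

class Discriminator(nn.Module):
    """Judges whether a patch looks like the reference condition (patch-level logits)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, g_opt, d_opt, src, ref, l1_weight=10.0):
    """One adversarial update, assuming paired (src, ref) patches."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update: reference patches are "real", normalized ones "fake".
    d_opt.zero_grad()
    fake = gen(src).detach()
    d_real = disc(ref)
    d_fake = disc(fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # Generator update: fool the discriminator and stay close to the reference.
    g_opt.zero_grad()
    fake = gen(src)
    d_fake = disc(fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * nn.functional.l1_loss(fake, ref)
    g_loss.backward()
    g_opt.step()
    return float(g_loss), float(d_loss)
```

In a unified, end-to-end model of this kind, a single generator could be conditioned on the acquisition parameters so that patches from many source conditions are all mapped to the same reference condition.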