Learning Deep Representations from Histopathological Slides for Disease Prognosis

By Jiayun Li
Medical and Imaging Informatics Ph.D. candidate — Dr. Corey Arnold Lab

Prostate cancer (PCa) is the most common and second deadliest cancer in men in the United States. Active surveillance (AS) is an important option for managing low- to intermediate-risk, clinically localized prostate cancer. Prostate biopsy, an invasive procedure with associated side effects, is performed repeatedly during the course of AS, and progression on biopsies may trigger curative treatment. Yet no consensus has been reached on the optimal frequency of repeat biopsies. The Gleason score is considered the current best biomarker for predicting long-term prostate cancer outcomes. However, Gleason scores are assigned manually through pathologist review, a process that has been shown to have low inter-observer agreement.

These problems create a clear clinical need for tools that leverage the rich prognostic information embedded in histopathological images to predict potential changes in tumor histology. The main objective of this project is to extract quantitative representations from histopathological images that can be combined with clinical variables and imaging features in a multi-modal model to better characterize the progression of prostate cancer. Toward this goal, we have been developing a set of semi-supervised image segmentation models, weakly supervised detection and classification models, and self-supervised models.
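
To make the self-supervised piece concrete, below is a minimal sketch of contrastive pretraining on slide patches in the SimCLR style, one common way to learn representations without labels. The backbone, augmentations, temperature, and patch size here are illustrative assumptions, not the project's actual configuration.

    # SimCLR-style contrastive pretraining sketch (hypothetical configuration).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models, transforms

    # Two random augmentations of the same patch form a positive pair.
    augment = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),  # mimics stain variation
        transforms.ToTensor(),
    ])

    class Encoder(nn.Module):
        """ResNet backbone plus a small projection head."""
        def __init__(self, dim=128):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()  # keep the 512-d pooled features
            self.backbone = backbone
            self.head = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                                      nn.Linear(512, dim))

        def forward(self, x):
            return F.normalize(self.head(self.backbone(x)), dim=1)

    def nt_xent(z1, z2, tau=0.5):
        """Normalized temperature-scaled cross-entropy over a batch of pairs."""
        z = torch.cat([z1, z2], dim=0)      # (2N, d), rows are unit norm
        sim = z @ z.t() / tau               # pairwise cosine similarities
        sim.fill_diagonal_(float('-inf'))   # exclude self-similarity
        n = z1.size(0)                      # positive of row i is row i+N (and vice versa)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
        return F.cross_entropy(sim, targets)

    # Usage: two augmented views of the same batch of patches (random here)
    enc = Encoder()
    loss = nt_xent(enc(torch.randn(8, 3, 224, 224)),
                   enc(torch.randn(8, 3, 224, 224)))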

Prostate Cancer Diagnosis and Gleason Grading of Histological Images

By Wenyuan Li
Ph.D., Electrical and Computer Engineering

Prostate cancer is the most common and second most deadly form of cancer in men in the United States. Pathologists use several screening methodologies to qualitatively describe the diverse tumor histology of the prostate. Classifying prostate cancer by Gleason grade from histological images is important for risk assessment and treatment planning. In this study, we demonstrate a new region-based convolutional neural network (R-CNN) framework for multi-task prediction using an Epithelial Network Head and a Grading Network Head. Compared to a single-task model, our multi-task model can draw on complementary contextual information, which contributes to better performance. Our model achieved state-of-the-art performance on the epithelial cell detection and Gleason grading tasks simultaneously. Using five-fold cross-validation, it achieved an epithelial cell detection accuracy of 99.07% with an average AUC of 0.998, and for Gleason grading it obtained a mean intersection over union of 79.56% and an overall pixel accuracy of 89.40%.
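
To illustrate the multi-task idea (shared features feeding two task heads), here is a deliberately simplified PyTorch sketch. It is not the paper's R-CNN: the backbone, head designs, and losses are stand-in assumptions meant only to show how a detection objective and a grading objective share gradients through one feature extractor.

    # Simplified two-head multi-task sketch (not the paper's R-CNN).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    class TwoHeadModel(nn.Module):
        def __init__(self, num_grades=4):
            super().__init__()
            resnet = models.resnet18(weights=None)
            # Shared feature extractor: drop the classifier and pooling layers
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])
            self.epithelial_head = nn.Linear(512, 1)           # patch-level detection logit
            self.grading_head = nn.Conv2d(512, num_grades, 1)  # coarse per-pixel grade logits

        def forward(self, x):
            f = self.backbone(x)                               # (B, 512, H/32, W/32)
            det = self.epithelial_head(f.mean(dim=(2, 3)))     # global average pooling
            seg = F.interpolate(self.grading_head(f), size=x.shape[2:],
                                mode='bilinear', align_corners=False)
            return det, seg

    # Joint loss: both tasks backpropagate through the shared backbone,
    # which is how one task can lend the other contextual information.
    model = TwoHeadModel()
    x = torch.randn(2, 3, 256, 256)
    det_logit, grade_map = model(x)
    loss = (F.binary_cross_entropy_with_logits(det_logit, torch.ones(2, 1))
            + F.cross_entropy(grade_map, torch.randint(0, 4, (2, 256, 256))))
    loss.backward()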

Advanced image normalization to improve the generalizability of radiomic features

By Leihao Wei
Electrical and Computer Engineering Ph.D. candidate — Dr. Will Hsu Lab

Computed tomography (CT) plays an integral role in the screening and diagnosis of a wide range of diseases. The availability of large CT datasets, coupled with advances in medical image analysis, has led to a proliferation of machine learning (ML) models that use image-derived features for prediction and classification. One significant barrier is that variations in how CT scans are acquired and reconstructed have an enormous impact on the resulting images, yielding radiomic features with poor reproducibility. Scans can look very different depending on the vendor, protocol, and acquisition parameters, and these differences affect the morphology- and texture-based features used to describe diseases such as lung nodules, leading to inconsistencies in the detection and characterization of lesions. In this study, we use a generative adversarial network (GAN) to normalize images acquired under various conditions to a common standard reference condition. Our model combines normalization across these acquisition conditions into a single unified model that can be trained end to end. We aim not only to normalize images so that they look similar, but also to ensure that a similar degree of task-based performance is achieved because the extracted image features are more consistent.
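
As a rough illustration, the following is a pix2pix-style training step for paired image-to-image normalization: a generator maps a scan reconstructed under some acquisition condition toward its reference-condition counterpart, while a discriminator judges realism. The toy architectures, losses, and L1 weight are assumptions made for the sketch, not the study's model.

    # Pix2pix-style normalization step (toy networks; illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    G = nn.Sequential(  # generator: variant-condition slice -> reference-condition slice
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    D = nn.Sequential(  # patch discriminator: real reference vs. generated
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 4, stride=2, padding=1),
    )
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    def train_step(x_src, x_ref, l1_weight=100.0):
        """One update on a paired batch: x_src (variant protocol) -> x_ref (reference)."""
        # Discriminator update: real reference vs. detached generator output
        fake = G(x_src).detach()
        pred_real, pred_fake = D(x_ref), D(fake)
        d_loss = (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
                  + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator update: fool D while staying close to the reference image
        fake = G(x_src)
        pred_fake = D(fake)
        g_loss = (F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
                  + l1_weight * F.l1_loss(fake, x_ref))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    # x_src and x_ref would be the same slice under two acquisition conditions.
    train_step(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))

The L1 term keeps the generator anchored to the paired reference image, so the adversarial loss only has to supply realistic fine detail; this is the standard rationale for pairing the two losses in image-to-image translation.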

Quantitative characterization of suspicious microcalcifications on mammography

By Noor Nakhaei
Computer Science graduate student — Dr. Will Hsu Lab

Microcalcifications are a common finding on screening mammography: annually, approximately 580,000 exams in the United States have microcalcifications that prompt further diagnostic workup. Radiologists use rudimentary imaging features to stratify biopsy recommendations for microcalcifications, and these features are limited to those that are visible to, and describable by, the radiologist using a small set of qualitative descriptors. Novel quantitative assessment of microcalcifications has the potential to provide more accurate predictive evidence of early aggressive disease. In this project, we seek to improve the characterization of suspicious microcalcifications by quantitatively analyzing their shapes, distributions, and texture patterns. We are investigating ways to spatially localize biopsy specimens to regions within 2D mammography images by jointly analyzing the microcalcifications and surrounding tissue in diagnostic mammograms and in the corresponding specimen radiographs of the biopsy cores taken from those regions. We then correlate histopathological features extracted from digital whole slide images with features extracted from the matched regions on 2D mammograms.
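
As one plausible, hypothetical starting point for such quantitative descriptors, the sketch below computes simple shape and texture features for each segmented microcalcification using scikit-image; the mask, feature list, and parameters are illustrative assumptions, not the project's actual pipeline.

    # Shape and texture descriptors per microcalcification (illustrative pipeline).
    import numpy as np
    from skimage.measure import label, regionprops
    from skimage.feature import graycomatrix, graycoprops

    def calc_features(image, mask):
        """image: 2D uint8 mammogram patch; mask: binary microcalcification mask."""
        feats = []
        for region in regionprops(label(mask)):
            minr, minc, maxr, maxc = region.bbox
            patch = image[minr:maxr, minc:maxc]
            # Gray-level co-occurrence matrix captures local texture
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            feats.append({
                'area': region.area,                  # size in pixels
                'eccentricity': region.eccentricity,  # elongation of the shape
                'solidity': region.solidity,          # border-irregularity proxy
                'contrast': graycoprops(glcm, 'contrast').mean(),
                'homogeneity': graycoprops(glcm, 'homogeneity').mean(),
            })
        return feats

    # Toy example: a synthetic patch with two bright specks
    img = (np.random.rand(64, 64) * 50).astype(np.uint8)
    msk = np.zeros_like(img, dtype=bool)
    msk[10:14, 10:13] = True
    msk[40:45, 30:36] = True
    img[msk] = 200
    print(calc_features(img, msk))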