Image Data Preparation For CNN-based Breast Ultrasound Lesion Diagnostic with Reduced Overfitting

Hassan, Tahir Mohammed (2023) Image Data Preparation For CNN-based Breast Ultrasound Lesion Diagnostic with Reduced Overfitting. Doctoral thesis, The University of Buckingham.

Text
Hassan Tahir 1405862 Final Thesis.pdf - Submitted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.


Abstract

This thesis aims to contribute to efforts to leverage deep learning (DL) techniques, specifically convolutional neural networks (CNNs), for improved diagnosis of breast lesions in ultrasound (US) images with reduced overfitting, manifested as the inability of models to generalise to unseen data. Our investigations focus on data-preparation factors that influence the performance of CNN models for analysing breast US (BUS) tumour images: the fixed input image size stipulated by CNNs, adequate US image quality, and the availability of a sufficiently large training dataset of adequately labelled samples with good class diversity. Current approaches to these factors centre on image resizing, unstandardised manual quality assessment by radiology experts, and image augmentation. Most of these solutions rely heavily on knowledge of the natural-image domain, which differs from that of US images. The sizes of US tumour regions of interest (RoIs) are influenced by the adopted cropping/segmentation procedure and vary over a huge range on both sides of the input size strictly required by most CNN models; resizing the many tiny RoIs by large factors seriously degrades their quality. Existing augmentation schemes are designed to enlarge training sets and increase diversity, but the feature patterns learnt by pre-trained CNN models are more relevant to natural images. We implemented bicubic image resizing (BiCubic) and a Compressed Sensing Super Resolution (CSSR) based resizing method, both known for superior resizing quality in terms of human perception. Our expert radiologist attested that CSSR-resized images are of better quality from a clinical point of view. We tested the performance of several pre-trained CNN models, fine-tuned on a database of BUS images recorded and labelled in one clinical centre, whose RoI images were resized by both methods.
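The bicubic resizing step described above can be sketched minimally as follows. This is an illustration only, assuming Pillow; the RoI is synthetic random data, the 224×224 target is a typical pre-trained-CNN input size, and the thesis's CSSR method is not reproduced here.

```python
from PIL import Image
import numpy as np

# Hypothetical tiny RoI crop: a 40x60 grayscale lesion region (random stand-in data).
roi = Image.fromarray(np.random.randint(0, 256, (40, 60), dtype=np.uint8))

# Most pre-trained CNN models stipulate a fixed input size, e.g. 224x224.
TARGET = (224, 224)

# Bicubic interpolation upsamples the tiny RoI by a large factor to the required size;
# this large magnification is exactly what degrades quality for very small RoIs.
resized = roi.resize(TARGET, resample=Image.BICUBIC)
```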
All models achieved high-to-excellent diagnostic accuracy, but little or no improvement was noted with the CSSR resizing scheme. No RoI segmentation was adopted; instead, optimal cropping of the RoI was developed from a set of radiologist-marked lesion border points. We introduced the Convex Hull (CH) lesion-border RoI, which efficiently minimises the exclusion of lesion pixels and is easy to expand. We tested the performance of a few pre-trained CNN models and two handcrafted (HC) schemes with various RoI cropping scenarios, including the tumour's polygonal shape. We expanded the CH at different rates, each with two padding schemes for the area between the surrounding rectangular box and the tumour polygon: zero padding and tissue padding. While tissue padding at several CH expansion rates improved performance, zero padding of the same schemes was marginally lower. Hence, including some external tissue surrounding the lesion border shows promise for enhancing model performance. However, under both padding scenarios the trained models generalised very poorly when tested on two unseen external datasets, confirming the problem of overfitting when the training dataset is not large and diverse enough. Training the same CNN models with the larger Modelling dataset, compiled by adding BUS images from four other clinical centres, did not improve their validation performance but significantly improved their generalisation to the two unseen datasets. This improvement indicates that the expansion created a more diverse sample of the population, resulting in reduced overfitting. For the challenge of US image quality assessment (IQA), we uncovered the inadequacy of existing IQA metrics defined for natural images. We developed a simple Multi Characteristic Quality Feature Vector (MCIQ) that captures the spatial distribution of individual IQA metrics. MCIQ has shown good tumour-class dependency and a high ability to distinguish different image modalities and datasets.
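The CH cropping and the two padding schemes could be sketched as below. This is a hedged illustration under stated assumptions: the border points and image are hypothetical, and the thesis's CH expansion rates are not reproduced. Membership in the convex hull is tested with SciPy's `Delaunay.find_simplex`, which returns a non-negative simplex index exactly for points inside the hull of the marked points.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical radiologist-marked lesion border points as (row, col); illustrative only.
pts = np.array([[30, 40], [25, 80], [60, 100], [90, 70], [70, 35]])
img = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(np.float32)

# Bounding box of the marked points gives the rectangular crop fed to the CNN.
(r0, c0), (r1, c1) = pts.min(axis=0), pts.max(axis=0)
crop = img[r0:r1 + 1, c0:c1 + 1]

# find_simplex(p) >= 0 iff p lies inside the convex hull of pts, i.e. the CH RoI.
tri = Delaunay(pts)
rr, cc = np.mgrid[r0:r1 + 1, c0:c1 + 1]
inside = tri.find_simplex(
    np.column_stack([rr.ravel(), cc.ravel()])
).reshape(rr.shape) >= 0

# The area between the rectangular box and the tumour polygon, handled two ways:
zero_padded = np.where(inside, crop, 0.0)   # zero padding: blank out surrounding tissue
tissue_padded = crop                        # tissue padding: keep surrounding pixels
```

The two variants differ only outside the hull, which is what lets the comparison described above isolate the effect of the surrounding tissue.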
An innovative version of MCIQ, extracted from image convolution with only six well-conditioned 5×5 Hadamard filters, successfully aligned with our expert radiologist's quality labelling of an extremely small set of US images. Finally, to address the scarcity of BUS images beyond recording a larger training dataset, we investigated several existing conventional image augmentation schemes, including Singular Value Decomposition (SVD), alongside our innovative Hadamard-filter convolution. All these schemes improved the models' ability to generalise to the two unseen datasets, but with varied levels of improvement. However, these schemes are not specific to US images, so it is difficult to determine which causes of overfitting they help mitigate. For that, we developed the Tumour Margin Appending (TMA) strategy, which combines several locally optimal cropping ratios to enlarge the training dataset, aiming to alleviate the lack of generalisation due to variation in RoI cropping practice. It successfully mitigated the lack of generalisation to unseen datasets for this cause and removed the need to test with many unseen datasets.

Item Type: Thesis (Doctoral)
Uncontrolled Keywords: Deep learning ; convolutional neural networks ; ultrasound ; reduced overfitting ; Breast US ; Multi Characteristic Quality Feature Vector ; Tumour Margin Appending
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
R Medicine > R Medicine (General)
T Technology > T Technology (General)
Divisions: School of Computing
Depositing User: Freya Tyrrell
Date Deposited: 27 Sep 2024 13:29
Last Modified: 27 Sep 2024 13:29
URI: http://bear.buckingham.ac.uk/id/eprint/639
