Abstract
This study develops a robust system for automatically segmenting brain tumor substructures from multi-modal magnetic resonance imaging (MRI) scans. Accurate delineation of tumor substructures is crucial for clinical diagnosis, treatment planning, and monitoring disease progression. Leveraging a fully convolutional neural network (FCNN), the proposed approach identifies and classifies tumor subcomponents: the complete tumor, the tumor core, and the enhancing region. The network architecture integrates the U-Net framework with the VGG-16 model, which strengthens feature extraction and improves agreement between the segmented outputs and the corresponding ground-truth annotations. To address class imbalance in the data, a combined Dice and binary cross-entropy (BCE) loss function is employed, optimizing the model for both region overlap and pixel-wise classification accuracy. The methodology was evaluated on the publicly available BraTS 2020 dataset, comprising 305 cases of high-grade glioma (HGG) and low-grade glioma (LGG) with three-dimensional multi-modal MRI scans. The proposed approach achieves average Dice similarity scores of 89%, 80%, and 90% for the complete tumor, tumor core, and enhancing tumor regions, respectively. These results indicate markedly improved agreement between the automatic segmentations and the manually annotated ground truth, highlighting the method's potential to support clinical decision-making and precise tumor assessment.
Keywords: Brain Tumor Segmentation, Convolutional Neural Networks, Magnetic Resonance Imaging, Tumor Substructures, U-Net, VGG-16.
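The combined Dice-BCE criterion mentioned in the abstract can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the equal weighting of the two terms and the `smooth` constant are assumptions, since the abstract does not specify the exact formulation.

```python
import numpy as np

def dice_bce_loss(pred, target, smooth=1.0, eps=1e-7):
    """Combined Dice + binary cross-entropy loss for a binary segmentation map.

    pred:   predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask (0 or 1)
    Note: equal term weighting and the `smooth` constant are illustrative
    choices, not values taken from the paper.
    """
    # Flatten and clip probabilities to avoid log(0) in the BCE term.
    pred = np.clip(np.ravel(pred).astype(float), eps, 1.0 - eps)
    target = np.ravel(target).astype(float)

    # Soft Dice loss: 1 - (2 * |P ∩ G| + s) / (|P| + |G| + s)
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    dice_loss = 1.0 - dice

    # Pixel-wise binary cross-entropy.
    bce = -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

    return dice_loss + bce
```

The Dice term rewards region overlap (robust to foreground/background imbalance), while the BCE term supplies dense per-pixel gradients; summing them is one common way to get both benefits.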