Brain Tumor Detection and Classification from Multi-Channel MRIs using Deep Learning and Transfer Learning

Subhashis Banerjee

Kolkata, West Bengal


Glioblastoma Multiforme constitutes 80% of malignant primary brain tumors in adults, and is usually classified into High Grade Glioma (HGG) and Low Grade Glioma (LGG). LGG tumors are less aggressive, grow more slowly than HGG, and are responsive to therapy. Since tumor biopsy is challenging for brain tumor patients, noninvasive imaging techniques like Magnetic Resonance Imaging (MRI) have been extensively employed in diagnosing brain tumors. The development of automated systems for detecting tumors and predicting their grade from MRI data therefore becomes necessary for assisting doctors in the framework of augmented intelligence. In this paper, we thoroughly investigate the power of Deep Convolutional Neural Networks (ConvNets) for the classification of brain tumors using multi-sequence MR images. First we propose three ConvNets, trained from scratch, on MRI patches, slices, and multi-planar volumetric slices. The suitability of transfer learning for the task is studied next by applying two existing ConvNet models (VGGNet and ResNet), pre-trained on the ImageNet dataset, through fine-tuning of the last few layers. A leave-one-patient-out (LOPO) testing scheme is used to evaluate the performance of the ConvNets. Results demonstrate that the ConvNet achieves better accuracy in all cases when trained on the multi-planar volumetric dataset; unlike conventional models, it obtains a testing accuracy of 97% without any additional effort towards the extraction and selection of features. We study the properties of the self-learned kernels/filters in different layers through visualization of the intermediate layer outputs. We also compare the results with those of state-of-the-art methods that require manual feature engineering for the task, demonstrating a maximum improvement of 12% in the grading performance of ConvNets.

Project status: Published/In Market

Artificial Intelligence


Overview / Usage

Histological grading, based on a stereotactic biopsy test, is the gold standard for determining the grade of brain tumors. The biopsy procedure requires the neurosurgeon to drill a small hole into the skull (with the exact location of the tumor in the brain guided by MRI), from which tissue is collected using specialized equipment. There are many risk factors involved in the biopsy test, including bleeding from the tumor and brain caused by the biopsy needle, which can lead to severe migraine, stroke, coma, and even death. Other risks include infection and seizures. The main concern with stereotactic biopsy, however, is that it is not 100% accurate; a misleading histological grading of the tumor may result in serious diagnostic error followed by wrong clinical management of the disease.

In this context, multi-sequence MRI plays a major role in the detection, diagnosis, and management of brain cancers in a non-invasive manner. Recent literature reports that computerized detection and diagnosis of the disease, based on medical image analysis, could be a good alternative. Although quantitative imaging features extracted from MR images demonstrate good disease classification, extracting such hand-crafted features requires extensive domain knowledge, involves human bias, and is problem-specific. Manual design of features typically requires greater insight into the exact characteristics of normal and abnormal tissues, and may fail to accurately capture some important representative features, thereby hampering classifier performance. The generalization capability of such classifiers may also suffer due to the discriminative nature of the methods, with the hand-crafted features usually being designed over fixed training sets. Manual or semi-automatic localization and segmentation of the Region of Interest (ROI) or Volume of Interest (VOI) is subsequently also needed to extract the quantitative imaging features.

Convolutional Neural Networks (ConvNets) offer a state-of-the-art framework for image recognition and classification. The ConvNet architecture is designed to loosely mimic the fundamental working of the mammalian visual cortex, which has been shown to contain multiple layers of abstraction that look for specific patterns in the visual input. A ConvNet is built upon a similar idea of stacking multiple layers, allowing it to learn multiple different abstractions of the input data. These networks automatically learn mid-level and high-level representations or abstractions from the input training data, in the form of convolution filters that are updated during the training process. They work directly on raw input (image) data, and learn the underlying representative features of the input, which are hierarchically complex, thereby ruling out the need for specialized hand-crafted image features. Moreover, ConvNets require no prior domain knowledge and can automatically learn to perform any task just by working through the training data.
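To illustrate how a convolution filter produces a feature map, a minimal "valid" 2D convolution can be sketched in plain NumPy. The 3 × 3 vertical-edge kernel below is a hypothetical, hand-picked example; in a ConvNet such kernels are learned automatically during training:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` with no padding ("valid" convolution)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 6x6 "image" with a vertical edge between columns 2 and 3.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hypothetical vertical-edge-detecting kernel.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

feature_map = conv2d_valid(image, kernel)  # shape (4, 4)
```

The feature map responds strongly only where the sliding window straddles the edge, which is exactly the kind of localized pattern response that stacked convolution layers build higher-level abstractions from.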

In this project we exhaustively investigate the behaviour and performance of ConvNets, with and without transfer learning, for non-invasive brain tumor detection and grade prediction from multi-sequence MRI. Tumors are typically heterogeneous, depending on cancer subtype, and contain a mixture of structural and patch-level variability. Prediction of the grade of a tumor may thus be based on either the image patch containing the tumor, the 2D MRI slice containing the image of the whole brain including the tumor, or the 3D MRI volume encompassing the full image of the head enclosing the tumor. While in the first case only the tumor patch is necessary as input, the other two cases require the ConvNet to learn to localize the ROI (or VOI) before classifying it. The first case therefore needs only classification, while the other two additionally require detection or localization. Since the performance and complexity of ConvNets depend on the difficulty of the problem and the type of input data representation, we prepare three kinds of data from the original MRI dataset, viz. i) patch-based, ii) slice-based, and iii) volume-based. Three ConvNet models are developed, one for each case, and trained from scratch. We also compare two state-of-the-art ConvNet architectures, viz. VGGNet and ResNet, with parameters pre-trained on ImageNet, using transfer learning (via fine-tuning).
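The leave-one-patient-out (LOPO) evaluation used here can be sketched in plain Python. The key point is that the split is made per patient, not per sample, so no patient's patches or slices leak between the training and test sets of a fold (the patient IDs below are hypothetical toy data):

```python
def lopo_splits(patient_ids):
    """Leave-one-patient-out: each fold holds out all samples of one patient.

    `patient_ids` maps each sample index to its patient, ensuring that
    training and test data of a fold never share a patient.
    """
    patients = sorted(set(patient_ids))
    for held_out in patients:
        test_idx = [i for i, p in enumerate(patient_ids) if p == held_out]
        train_idx = [i for i, p in enumerate(patient_ids) if p != held_out]
        yield held_out, train_idx, test_idx

# Hypothetical toy example: 6 samples drawn from 3 patients.
patient_ids = ["P1", "P1", "P2", "P2", "P3", "P3"]
folds = list(lopo_splits(patient_ids))  # one fold per patient
```

With one fold per patient, every patient is tested exactly once on a model that never saw any of that patient's data.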

All experiments are performed on the BraTS 2017 dataset, which includes data from the BraTS 2012, 2013, 2014 and 2015 challenges along with data from The Cancer Imaging Archive (TCIA). The dataset consists of 210 HGG and 75 LGG glioma cases. Each patient's MRI scan set has four MRI sequences or channels: native T1-weighted (T1), post-contrast enhanced T1-weighted (T1C), T2-weighted (T2), and T2 Fluid-Attenuated Inversion Recovery (FLAIR) volumes, each having 155 2D slices of 240 × 240 resolution.
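Each BraTS case can thus be viewed as a four-channel volume. A minimal sketch of the expected array shapes, with random data standing in for the actual co-registered scans:

```python
import numpy as np

N_SLICES, HEIGHT, WIDTH = 155, 240, 240

# Stand-ins for the four co-registered sequences of one patient.
t1, t1c, t2, flair = (np.random.rand(N_SLICES, HEIGHT, WIDTH)
                      for _ in range(4))

# Stack the sequences as channels: (slices, height, width, channels).
volume = np.stack([t1, t1c, t2, flair], axis=-1)

# A single multi-channel axial slice, the natural input to a 2D ConvNet.
middle_slice = volume[77]  # shape (240, 240, 4)
```

Treating the four sequences as channels of one image lets a 2D ConvNet see all the complementary contrast information of a slice at once, much as RGB channels are handled in natural images.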

Methodology / Approach

Although the BraTS 2017 dataset consists of MRI volumes, we cannot propose a 3D ConvNet model for the classification problem, mainly because the dataset contains data from only 210 HGG and 75 LGG patients, which is inadequate for training a 3D ConvNet with its huge number of trainable parameters. Another problem with the dataset is its imbalanced class distribution: only about 26% of the cases (75 of 285) come from the LGG class. Therefore we formulate 2D ConvNet models based on the MRI patches (encompassing the tumor region) and slices, followed by a multi-planar slice-based ConvNet model that incorporates the volumetric information as well.
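A common way to compensate for such class imbalance is to weight each class's contribution to the training loss by the inverse of its frequency. This is a sketch of the standard inverse-frequency scheme, not necessarily the exact mechanism used in the project:

```python
def inverse_frequency_weights(counts):
    """Weight each class by total / (n_classes * count), so the minority
    class contributes proportionally more to the training loss."""
    total = sum(counts.values())
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Class counts from the BraTS 2017 dataset used here: 210 HGG, 75 LGG.
weights = inverse_frequency_weights({"HGG": 210, "LGG": 75})
# LGG weight: 285 / (2 * 75) = 1.9;  HGG weight: 285 / (2 * 210) ≈ 0.68.
```

A dictionary of this form can be passed directly as the `class_weight` argument of Keras's `Model.fit` (with integer class indices as keys).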

Applying a ConvNet directly to the MRI slice could require extensive downsampling, thereby resulting in a loss of discriminative detail. The tumor can lie anywhere in the image and can be of any size (scale) or shape. Classifying the tumor grade from patches is therefore easier, because the ConvNet has to localize only within the extent of the tumor in the image, and needs to learn only the relevant details without being distracted by irrelevant ones. However, a patch may lack the spatial and neighborhood context of the tumor, which may adversely influence grade prediction. Although classification based on 2D slices and patches often achieves good accuracy, incorporating volumetric information from the dataset can enable the ConvNet to perform better.

Along these lines, we propose schemes to prepare three different datasets, viz. (i) patch-based, (ii) slice-based, and (iii) multi-planar volumetric, from the BraTS 2017 dataset. We propose three ConvNet architectures, named PatchNet, SliceNet, and VolumeNet, which are trained from scratch on the three datasets prepared as detailed above. This is followed by transfer learning and fine-tuning of VGGNet and ResNet on the same datasets. PatchNet is trained on the patch-based dataset, and provides the probability of a patch belonging to HGG or LGG. SliceNet is trained on the slice-based dataset, and predicts the probability of a slice being from HGG or LGG. Finally, VolumeNet is trained on the multi-planar volumetric dataset, and predicts the grade of a tumor from its 3D representation using the multi-planar 3D MRI data.
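The multi-planar representation can be sketched as extracting 2D slices along the three anatomical planes of the volume. A minimal NumPy illustration, with random data in place of an actual scan and hypothetical slice positions (the project's exact slice-selection strategy may differ):

```python
import numpy as np

# Stand-in for one co-registered MRI volume: (slices, height, width).
volume = np.random.rand(155, 240, 240)

# One slice from each of the three anatomical planes.
axial    = volume[77, :, :]   # fix the slice axis   -> shape (240, 240)
coronal  = volume[:, 120, :]  # fix the height axis  -> shape (155, 240)
sagittal = volume[:, :, 120]  # fix the width axis   -> shape (155, 240)
```

Feeding slices from all three planes gives a 2D network a coarse view of the 3D structure of the tumor without the parameter cost of a full 3D ConvNet.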

As reported in the literature, smaller convolutional filters provide better regularization due to their smaller number of trainable weights, thereby allowing the construction of deeper networks without losing too much information in the layers. We therefore use filters of size 3 × 3 in our ConvNet architectures. A greater number of filters, in deeper convolution layers, allows more feature maps to be generated; this compensates for the decrease in the size of each feature map caused by "valid" convolution and pooling layers. Due to the complexity of the problem and the larger size of the input image, the SliceNet and VolumeNet architectures are deeper than PatchNet.
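The regularization argument rests on a standard observation from the VGGNet line of work: stacking small 3 × 3 filters covers the same receptive field as a single larger filter, with fewer weights. A quick sketch of the arithmetic (for stride-1 convolutions and a single input/output channel):

```python
def stacked_receptive_field(n_layers, kernel=3):
    """Receptive field of n stacked stride-1 k x k convolution layers."""
    return n_layers * (kernel - 1) + 1

def weights_per_position(kernel, n_layers=1):
    """Trainable weights per spatial position, one input/output channel."""
    return n_layers * kernel * kernel

# Two stacked 3x3 layers see a 5x5 region using 18 weights,
# whereas a single 5x5 layer needs 25 weights for the same region.
rf      = stacked_receptive_field(2, kernel=3)  # 5
stacked = weights_per_position(3, n_layers=2)   # 18
single  = weights_per_position(5)               # 25
```

On top of the weight saving, each extra layer interposes another non-linearity, which is a second reason the stacked form is preferred.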

Technologies Used

The ConvNets were developed in Python using TensorFlow with Keras. The operating system was Ubuntu 16.04.
