CNN for Brain Tumor Segmentation

Subhashis Banerjee

Kolkata, West Bengal

Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. MRI (Magnetic Resonance Imaging) is a widely used imaging technique for assessing such tumors, but the amount of data it produces prevents manual segmentation in a reasonable amount of time, so automatic and reliable methods are required. The variation in the structure and location of these tumors, however, makes automatic segmentation a very challenging task. In this report, we propose four different methods for extracting patches that can be used to train Convolutional Neural Networks (CNNs) to automatically segment tumors in HGG (Higher Grade Glioma) and LGG (Lower Grade Glioma) patients. We also propose a CNN based on transfer learning that performs automatic segmentation in a reasonable amount of time, with promising results for the LGG patients.

Project status: Under Development

Artificial Intelligence

Overview / Usage

Manually segmenting MRI volumes takes a very long time, and gliomas require treatment as soon as they are detected, so automatic methods are needed to perform the segmentation quickly. It has also been found that expert segmentations of the same tumor vary between raters, so an automatic method is needed to segment accurately and consistently. Moreover, an application such as brain tumor surgery leaves no room for error: the goal is to remove as much tumor as possible without damaging any healthy tissue, which requires precisely identifying the location and shape of the tumor.

In this project, we propose several different patch extraction algorithms. We have developed a CNN for the HGG patients and another CNN, based on transfer learning, for the LGG patients, which performs the segmentation in a very small amount of time while achieving high Dice scores. We also show how transfer learning can be used to improve results when only a very small amount of data is available.

Identifying a tumor is a very challenging task: its location, shape, and structure vary significantly from patient to patient. The figure below shows the same brain slice from eight different patients and clearly reflects this variation. The location of the tumor differs in all eight images, and, to make matters worse, so do the shape and the intra-tumoral structure. In fact, a tumor can consist of more than one region, as the images show. This reflects the complexity of automatic segmentation.

Methodology / Approach

Data pre-processing: One of the challenges in dealing with MRI data is handling the artifacts produced either by inhomogeneity in the magnetic field or by small patient movements during the scan. A bias is often present across patients, which makes segmentation difficult for an automatic model. Because bias field distortion alters the MR images, the intensity of the same tissue may vary within an image/slice, so we apply the N4ITK method to correct this error. However, this is not enough to make the intensity of the same tissue type similar across patients; in fact, the same tissue of the same patient can appear at different intensities when scanned at different times. To make the intensities of the same tissue type more similar, we use the method proposed by Nyul et al.
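As an illustration, the Nyul-style standardization step can be sketched as a piecewise-linear mapping of each volume's intensity percentiles onto a set of standard-scale landmarks. The percentile choices and landmark values below are assumptions for the sketch, not the exact ones used in the project:

```python
import numpy as np

def standardize_intensities(image, standard_landmarks,
                            percentiles=(1, 10, 50, 90, 99)):
    """Nyul-style intensity standardization (sketch): map the image's
    intensity percentiles (computed over non-zero voxels) onto fixed
    standard-scale landmarks with piecewise-linear interpolation."""
    nonzero = image[image > 0]
    image_landmarks = np.percentile(nonzero, percentiles)
    # np.interp clamps values outside the landmark range to the endpoints
    return np.interp(image, image_landmarks, standard_landmarks)

# usage: bring a scan onto a common intensity scale
scan = np.random.default_rng(0).uniform(1.0, 100.0, size=(240, 240))
standardized = standardize_intensities(scan, standard_landmarks=[0, 10, 50, 90, 100])
```

The N4 bias field correction itself is typically performed beforehand with an external tool (the N4ITK implementation available in ITK/ANTs).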

Patch extraction: A patch is a sub-image of the original image. Each slice is 240 × 240, so training the CNN on whole slices would require a very large number of parameters and therefore a very large amount of data; since the dataset is not very large, we train our model on patches instead. Moreover, the class of any given voxel depends strongly on the classes of its surrounding voxels, which patches capture well. By testing different patch sizes, we found that 33 × 33 works best for our dataset. For a pixel p, we extract a 33 × 33 patch with p at the center from each of the four MRI sequences and stack them to get a multi-sequence patch of size 4 × 33 × 33. We then determine the class of the patch and add it to the patch library, along with its class, if the number of non-zero-intensity pixels in the patch exceeds threshold_c (the threshold for class c). Finally, we create a box (N_c × N_c) around p, where N_c is the box size for class c, and do not extract further patches of the same class from that box. This yields non-overlapping patches and thus helps prevent overfitting.
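The extraction procedure above can be sketched as follows; the per-class threshold and box-size defaults are placeholders, not the tuned values used in the project:

```python
import numpy as np

def extract_patches(slices, labels, patch_size=33,
                    count_threshold=None, box_size=None):
    """Sketch of the patch extraction step.
    slices: (4, H, W) co-registered MRI sequences for one slice.
    labels: (H, W) integer class map.
    count_threshold[c] and box_size[c] are per-class settings
    (the defaults here are illustrative placeholders)."""
    half = patch_size // 2
    n_classes = int(labels.max()) + 1
    count_threshold = count_threshold or [200] * n_classes
    box_size = box_size or [8] * n_classes
    # one exclusion mask per class: no two same-class patches from one box
    blocked = [np.zeros(labels.shape, dtype=bool) for _ in range(n_classes)]
    patches, classes = [], []
    H, W = labels.shape
    for y in range(half, H - half):
        for x in range(half, W - half):
            c = int(labels[y, x])
            if blocked[c][y, x]:
                continue                      # inside an exclusion box for class c
            patch = slices[:, y - half:y + half + 1, x - half:x + half + 1]
            if np.count_nonzero(patch) <= count_threshold[c]:
                continue                      # patch is mostly background
            patches.append(patch)
            classes.append(c)
            b = box_size[c] // 2              # mark the N_c x N_c box as used
            blocked[c][max(0, y - b):y + b + 1, max(0, x - b):x + b + 1] = True
    return np.stack(patches), np.array(classes)
```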

CNN model for HGG: The figure below shows the CNN designed for tumor segmentation in HGG patients. The filter size is 3 × 3 in all layers, with a stride of 1 × 1 for the convolution layers and 2 × 2 for the max-pooling layers. The first three layers of the model are convolution layers followed by a max-pooling layer, which are followed by another set of three convolution layers and a max-pooling layer. Finally, three fully connected layers complete the model.
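A minimal Keras sketch of this architecture follows (channels-last input). The filter counts, dense-layer widths, and the five output classes are assumptions, since the report does not list them:

```python
from tensorflow.keras import layers, models

def build_hgg_model(num_classes=5):
    """Two blocks of three 3x3 convolutions + 2x2 max pooling,
    then three fully connected layers (sketch of the described model)."""
    model = models.Sequential()
    model.add(layers.Input(shape=(33, 33, 4)))       # 4 MRI sequences
    for filters in (64, 64, 64):                     # first conv block (assumed widths)
        model.add(layers.Conv2D(filters, (3, 3), strides=(1, 1), padding="same",
                                kernel_initializer="glorot_uniform"))  # Xavier uniform
        model.add(layers.LeakyReLU(0.333))
    model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))
    for filters in (128, 128, 128):                  # second conv block (assumed widths)
        model.add(layers.Conv2D(filters, (3, 3), strides=(1, 1), padding="same",
                                kernel_initializer="glorot_uniform"))
        model.add(layers.LeakyReLU(0.333))
    model.add(layers.MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(layers.Flatten())
    for units in (256, 256):                         # assumed dense widths
        model.add(layers.Dense(units))
        model.add(layers.LeakyReLU(0.333))
        model.add(layers.Dropout(0.1))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_hgg_model()
```

The "same" padding keeps the convolutional feature maps at their input dimension, matching the padding behaviour described below.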

Balancing classes: Another important factor in patch selection is making sure that the classes of the input data are balanced. Since approximately 98% of the voxels do not belong to the tumor classes, the input data is imbalanced for every algorithm discussed above, so we select all patches from the underrepresented classes and sample randomly from the others. Since the HGG model has 2,118,213 trainable parameters for the classification of 8,928,000 voxels, we have to enlarge the training dataset for better classification; for this, we perform run-time augmentation.
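The balancing and run-time augmentation steps can be sketched as follows; the flip/rotation transforms are illustrative choices, since the report does not specify which augmentations were used:

```python
import numpy as np

def balance_classes(patches, classes, rng=None):
    """Keep every patch of the rarest class; randomly subsample the
    other classes down to the same count."""
    rng = rng or np.random.default_rng(0)
    counts = np.bincount(classes)
    n_min = counts[counts > 0].min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(classes == c), size=n_min, replace=False)
        for c in np.flatnonzero(counts)
    ])
    return patches[keep], classes[keep]

def augment(patch, rng):
    """Run-time augmentation sketch for a (4, 33, 33) patch:
    random horizontal flip and random 90-degree rotation."""
    if rng.random() < 0.5:
        patch = patch[:, :, ::-1]
    return np.rot90(patch, k=int(rng.integers(4)), axes=(1, 2))

classes = np.array([0] * 10 + [1] * 3)
patches = np.zeros((13, 4, 33, 33), dtype=np.float32)
bp, bc = balance_classes(patches, classes)   # both classes now have 3 patches
```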

CNN model for LGG: The CNN model for the LGG patients is trained as follows. First, the last five layers of the HGG model are removed, and the sixth layer of the HGG model becomes the new output layer. Then the best weights already learned for tumor segmentation in HGG patients are loaded. All training patches from the LGG patients are passed through this truncated model, and a feature map of size (128, 16, 16) is generated and stored for each patch. Once we have extracted the feature maps for all the LGG training patches, we train the CNN for the LGG patients on those features.
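The truncation step can be sketched with the Keras functional API. The small network below is a hypothetical stand-in for the trained HGG model (layer widths and checkpoint path are assumptions), chosen so that the extracted feature map matches the (128, 16, 16) size mentioned above (shown channels-last here):

```python
import numpy as np
from tensorflow.keras import layers, models

# hypothetical stand-in for the trained HGG network
inp = layers.Input(shape=(33, 33, 4))
x = layers.Conv2D(64, (3, 3), padding="same")(inp)
x = layers.LeakyReLU(0.333)(x)
x = layers.MaxPooling2D((2, 2))(x)             # 33 -> 16
x = layers.Conv2D(128, (3, 3), padding="same")(x)
feat = layers.LeakyReLU(0.333)(x)              # 16 x 16 x 128 feature map
out = layers.Dense(5, activation="softmax")(layers.Flatten()(feat))
hgg = models.Model(inp, out)
# hgg.load_weights("hgg_best.h5")              # assumed checkpoint path

# truncate: discard the classification head, keep the intermediate feature map
feature_extractor = models.Model(inp, feat)

patches = np.random.default_rng(0).random((2, 33, 33, 4)).astype("float32")
features = feature_extractor.predict(patches, verbose=0)
```

The features are computed once and cached, and the small LGG network is then trained on them rather than on raw patches.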

CNN parameters: We tried training the CNNs for both HGG and LGG with different optimizers, such as Adam, Adadelta, RMSprop, Nadam, and SGD, but the best results came from SGD (Stochastic Gradient Descent). Both the HGG and the LGG model were trained for 20 epochs, with the learning rate reduced linearly from an initial value of 0.003 to a final value of 0.00003, and a momentum of 0.9. The batch size used for training is 128 for both models. A dropout of 0.1 is used in the HGG model and 0.5 in the LGG model. LeakyReLU with alpha = 0.333 is the activation function in all layers of both models, except the last fully connected layer, which uses the softmax activation function. The parameters of the CNN are initialized with Xavier uniform initialization. In the convolution layers, the feature maps are padded before convolution in both models so that the resulting feature maps keep the same dimensions.
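The linear learning-rate decay can be sketched as a per-epoch schedule function; how "linearly reduced" was implemented exactly is an assumption here:

```python
def linear_lr(epoch, epochs=20, lr_start=0.003, lr_end=0.00003):
    """Linearly interpolate the learning rate from lr_start at epoch 0
    to lr_end at the final epoch."""
    return lr_start + (lr_end - lr_start) * epoch / (epochs - 1)

# with Keras this would plug into the optimizer and a scheduler callback:
# opt = tf.keras.optimizers.SGD(learning_rate=linear_lr(0), momentum=0.9)
# cb  = tf.keras.callbacks.LearningRateScheduler(linear_lr)
```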

Next, patch-based segmentation is performed for each patient over the whole MRI volume.
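This whole-volume inference can be sketched, slice by slice, as a sliding-window pass that classifies the patch centred on every voxel; `predict_fn` is a placeholder for the trained model's batched prediction:

```python
import numpy as np

def segment_slice(slices, predict_fn, patch_size=33, batch=1024):
    """Label every voxel of a (4, H, W) slice by classifying the patch
    centred on it. predict_fn maps (N, 4, p, p) -> (N,) class ids."""
    half = patch_size // 2
    _, H, W = slices.shape
    # zero-pad so border voxels also get a full-size patch
    padded = np.pad(slices, ((0, 0), (half, half), (half, half)))
    seg = np.zeros((H, W), dtype=np.int64)
    coords = [(y, x) for y in range(H) for x in range(W)]
    for i in range(0, len(coords), batch):
        chunk = coords[i:i + batch]
        patches = np.stack([padded[:, y:y + patch_size, x:x + patch_size]
                            for y, x in chunk])
        for (y, x), p in zip(chunk, predict_fn(patches)):
            seg[y, x] = p
    return seg
```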

Technologies Used

The models were developed in Python using TensorFlow with the Keras wrapper library. The experiments were performed on a Dell Precision 7810 Tower with two Intel Xeon E5-2600 v3 CPUs (12 cores in total), 256 GB RAM, and an NVIDIA Quadro K6000 GPU with 12 GB VRAM. The operating system was Ubuntu 16.04.
