Name and qualification of the project proposer: 
sb_p_1673460
Year: 
2019
Abstract: 

Brain cancer involves uncontrolled growth and may occur in any part of the brain. Detecting which part of the brain contains the cancer has proven quite challenging; the biggest challenge in brain cancer analysis is segmenting the tumor from the healthy tissue. Several challenges have been organized by BRATS for this purpose. Gliomas are among the most common and hardest-to-detect brain tumors, having irregular shapes and ambiguous boundaries. An accurate and timely diagnosis is required to improve the subject's chances of survival. The purpose of this research is to design an algorithm using a deep neural network and build a segmentation technique that produces accurate results, works fast, and performs intelligently. The proposed model will be trained and tested on the images provided by BRATS.

ERC: 
PE6_7
PE6_11
PE6_13
Research group members: 
sb_cp_is_2119081
Innovativeness: 

Many CNN architectures have been proposed by researchers in the past, including (i) LeNet-5, (ii) AlexNet, (iii) ZFNet, (iv) GoogLeNet / Inception V1, (v) VGGNet, and (vi) ResNet. They are briefly described in this section.
(i) LeNet-5
In 1998, LeCun et al. proposed a 7-level convolutional neural network named LeNet-5. Its main application was digit classification, and it was used by banks to classify handwritten numbers written by customers. It takes 32 × 32 pixel grey-scale images as input. Processing larger, high-resolution images demands more convolutional layers, a requirement that puts a limit on this architecture.
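As an illustration, the following is a minimal sketch of a LeNet-5-style network in PyTorch (the framework choice is an assumption for illustration; the original 1998 work was not implemented this way), matching the 32 × 32 grey-scale input described above:

import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Minimal LeNet-5-style network for 32x32 grey-scale digit images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

out = LeNet5()(torch.randn(1, 1, 32, 32))  # one 32x32 grey-scale image
print(out.shape)  # torch.Size([1, 10])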
(ii) AlexNet
AlexNet was the challenge-winning architecture in 2012, reducing the top-5 error from 26% to 15.3%. The network was similar to LeNet but deeper, with more filters per layer and more stacked convolutional layers. It consisted of 11 × 11, 5 × 5, and 3 × 3 convolutional kernels, max pooling, dropout, data augmentation, and ReLU activations; a ReLU activation followed every convolutional and fully connected layer. Training the network on Nvidia GeForce GTX 580 GPUs took days, which is why the designers split it into two pipelines. AlexNet was designed by the SuperVision group, consisting of Alex Krizhevsky, Geoffrey Hinton, and Ilya Sutskever.
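Below is a condensed, single-pipeline AlexNet-style sketch in PyTorch for a 3 × 227 × 227 input. It illustrates the kernel sizes, max pooling, ReLU activations, and dropout listed above, and is a simplified illustration rather than the original two-GPU implementation:

import torch
import torch.nn as nn

# Single-pipeline AlexNet-style stack (the original split the
# feature maps across two GPUs).
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4),     # large 11x11 first-layer kernel
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2),   # 5x5 kernels
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1),  # stacked 3x3 convolutions
    nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5),                                # dropout in the classifier
    nn.Linear(256 * 6 * 6, 4096),
    nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),                          # 1000 ImageNet classes
)

print(alexnet_like(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 1000])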
(iii) ZFNet
ZFNet was the winner of the ImageNet Large Scale Visual Recognition Competition (ILSVRC) in 2013. It reduced the top-5 error rate to 14.8%, roughly half the error rate of earlier non-neural approaches. This was achieved by keeping the AlexNet structure the same but tuning its hyper-parameters.
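The best-known of those hyper-parameter changes was in the first convolutional layer, where ZFNet replaced AlexNet's 11 × 11 stride-4 kernel with a smaller 7 × 7 stride-2 kernel, as this illustrative PyTorch comparison sketches:

import torch.nn as nn

# First-layer hyper-parameters, AlexNet vs. ZFNet: the smaller
# kernel and stride retain more fine spatial detail early on.
alexnet_conv1 = nn.Conv2d(3, 96, kernel_size=11, stride=4)
zfnet_conv1 = nn.Conv2d(3, 96, kernel_size=7, stride=2)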
(iv) GoogLeNet / Inception V1
This was the winner of ILSVRC-2014 with a top-5 error rate of 6.67%, which was so close to human-level performance that the creators of the network were prompted to carry out a human evaluation; after weeks of training, the human experts achieved a top-5 error rate of 5.1% (single model) and 3.6% for an ensemble. The network was a CNN based on LeNet augmented with the inception module. It used batch normalization, image distortions, and RMSprop. This 22-layer-deep CNN nevertheless reduced the number of parameters from 60 million to 4 million.
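A simplified inception module is sketched below in PyTorch: parallel 1 × 1, 3 × 3, and 5 × 5 convolutions plus max pooling are concatenated channel-wise, with 1 × 1 convolutions reducing channels first, which is how GoogLeNet stays parameter-efficient despite its depth. The channel splits follow GoogLeNet's "inception (3a)" block, but the code is an illustration, not the original implementation:

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Simplified inception module: parallel 1x1, 3x3, 5x5 convolutions
    and max pooling, concatenated along the channel axis; 1x1 convolutions
    reduce channel counts before the larger kernels."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(),
                                nn.Conv2d(c3_red, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(),
                                nn.Conv2d(c5_red, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Channel splits taken from GoogLeNet's "inception (3a)" block
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
y = block(torch.randn(1, 192, 28, 28))
print(y.shape)  # torch.Size([1, 256, 28, 28]): 64 + 128 + 32 + 32 channels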
(v) VGGNet
VGGNet was the runner-up at ILSVRC-2014. It was made up of 16 weight layers in a very uniform architecture: only 3 × 3 convolutions, but lots of filters. It was trained for three weeks on 4 GPUs. Because of its architectural uniformity, it is the most appealing network for extracting features from images. The trained weight configurations of this architecture were made public, and it has been used as the baseline feature extractor in many applications and challenges. The biggest challenge this network poses is its 138 million parameters, which are difficult to handle.
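A short sketch of this reuse pattern follows, loading torchvision's pretrained VGG16 and freezing its convolutional part as a fixed feature extractor (this assumes a recent torchvision release; the weights enum shown was introduced in torchvision 0.13):

import torch
import torchvision.models as models

# Load the published VGG16 weights (downloaded on first use) and
# keep only the convolutional part as a frozen feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
extractor = vgg.features.eval()
for p in extractor.parameters():
    p.requires_grad = False  # freeze the pretrained weights

with torch.no_grad():
    feats = extractor(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 512, 7, 7])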
(vi) ResNet
The residual neural network (ResNet) presented at ILSVRC-2015 used skip connections and heavy batch normalization. These skip connections, also known as gated units, resemble elements that have recently been applied successfully in RNNs. This design made it possible to train a neural network with 152 layers while keeping the complexity lower than VGGNet's. It achieved a top-5 error rate of 3.57%, beating human-level performance on the given dataset.
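A basic residual block is sketched below in PyTorch: the input bypasses two 3 × 3 convolutions (with batch normalization) and is added back to their output, so the block only has to learn a residual. This is an illustrative sketch of the idea, not the full 152-layer model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions with batch
    normalization, plus a skip connection that adds the input
    back to the output. The identity path is what makes very
    deep networks trainable."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection: add the input back

y = ResidualBlock(64)(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 64, 56, 56])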

Call code: 
1673460
