Open Access Research Article
Automated brain tumor segmentation on multi-modal MR image using SegNet
Computational Visual Media 2019, 5 (2): 209-219
Published: 23 April 2019

Accurate, fully automatic algorithms for brain tumor segmentation have the potential to improve disease detection and treatment planning. Glioma, a type of brain tumor, can appear at different locations with different shapes and sizes. Manual segmentation of brain tumor regions is not only time-consuming but also prone to human error, and its performance depends on the pathologist's experience. In this paper, we tackle this problem by applying the fully convolutional neural network SegNet to 3D data sets of four MRI modalities (Flair, T1, T1ce, and T2) for automated segmentation of brain tumor and sub-tumor parts, including necrosis, edema, and enhancing tumor. To further improve tumor segmentation, the four separately trained SegNet models are integrated by post-processing: four maximum feature maps are produced by fusing the machine-learned feature maps from the fully convolutional layers of each trained model. The maximum feature maps and the pixel intensity values of the original MRI modalities are then combined into a single feature representation. Taking this combined feature as input, a decision tree (DT) classifies the MRI voxels into different tumor parts and healthy brain tissue. Evaluated on the dataset provided by the Brain Tumor Segmentation 2017 (BraTS 2017) challenge, the proposed algorithm achieves F-measure scores of 0.85, 0.81, and 0.79 for whole tumor, tumor core, and enhancing tumor, respectively.

Experimental results demonstrate that using SegNet models on 3D MRI datasets and integrating the four maximum feature maps with the pixel intensity values of the original MRI modalities has the potential to perform well on brain tumor segmentation.
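To make the fusion step concrete, here is a minimal Python sketch of the post-processing described above, assuming per-model score maps and raw intensities are available as NumPy arrays; the shapes, the `fuse_and_classify` helper, and the tree depth are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the fusion step: per-class score maps from the four
# modality-specific SegNet models are fused by an element-wise maximum,
# concatenated with raw voxel intensities, and classified by a decision
# tree. Shapes are illustrative (2D slices for brevity).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fuse_and_classify(score_maps, intensities, labels=None, tree=None):
    """score_maps: four (C, H, W) arrays, one per trained SegNet model.
    intensities: (4, H, W) raw values for Flair, T1, T1ce, and T2."""
    # Maximum feature map: element-wise max over the four models.
    max_map = np.maximum.reduce(score_maps)            # (C, H, W)
    # Per-voxel feature = fused scores + original intensities.
    feats = np.concatenate([max_map, intensities])     # (C + 4, H, W)
    X = feats.reshape(feats.shape[0], -1).T            # (H*W, C + 4)
    if labels is not None:                             # training pass
        tree = DecisionTreeClassifier(max_depth=10)
        tree.fit(X, labels.ravel())
        return tree
    return tree.predict(X).reshape(intensities.shape[1:])  # label map
```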

Open Access Research Article
Skeleton-based canonical forms for non-rigid 3D shape retrieval
Computational Visual Media 2016, 2 (3): 231-243
Published: 14 April 2016

The retrieval of non-rigid 3D shapes is an important task. A common technique simplifies it to a rigid shape retrieval task by producing a bending-invariant canonical form for each shape in the dataset to be searched. Such techniques typically attempt to "unbend" a shape by applying multidimensional scaling (MDS) to the distances between points on the mesh, but this leads to unwanted local shape distortions. We instead perform the unbending on the skeleton of the mesh, and use this to drive the deformation of the mesh itself. This yields a computational speed-up and reduced distortion of local shape detail. We compare our method against other canonical forms: our experiments show that it achieves state-of-the-art retrieval accuracy on a recent canonical forms benchmark, with only a small drop below the state of the art on a second recent benchmark, while being significantly faster.
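A minimal sketch of the core idea follows, assuming the skeleton joints, their along-skeleton distances, and per-vertex skinning weights are already computed; the `canonical_form` helper and the linear-blend deformation are illustrative stand-ins for the paper's actual mesh deformation.

```python
# Hedged sketch: classical MDS is applied only to pairwise along-skeleton
# distances between joints, and the straightened joints then drive the mesh
# through (assumed) precomputed skinning weights via linear blending.
import numpy as np
from sklearn.manifold import MDS

def canonical_form(joints, joint_dists, vertices, weights):
    """joints: (J, 3) skeleton nodes; joint_dists: (J, J) along-skeleton
    distances; vertices: (V, 3) mesh; weights: (V, J) skinning weights."""
    # Embed joints so Euclidean distances match the skeleton distances;
    # this "unbends" the skeleton.
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
    new_joints = mds.fit_transform(joint_dists)        # (J, 3)
    # Move each vertex by its joints' weighted displacement (a crude
    # linear-blend stand-in for the paper's mesh deformation).
    return vertices + weights @ (new_joints - joints)
```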

Open Access Research Article
Saliency guided local and global descriptors for effective action recognition
Computational Visual Media 2016, 2 (1): 97-106
Published: 29 January 2016

This paper presents a novel framework for human action recognition based on salient object detection and a new combination of local and global descriptors. We first detect salient objects in video frames and extract features only for those objects, using a simple strategy to identify and process only the frames that contain salient objects. Processing salient objects instead of all frames not only makes the algorithm more efficient but, more importantly, also suppresses the interference of background pixels. We pair this approach with two complementary descriptors: local 3D-SIFT and global histograms of oriented optical flow (HOOF). The resulting saliency guided 3D-SIFT-HOOF (SGSH) feature is used with a multi-class support vector machine (SVM) classifier for human action recognition. Experiments on the standard KTH and UCF-Sports action benchmarks show that our method outperforms competing state-of-the-art spatiotemporal feature-based human action recognition methods.
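As a rough illustration of the global descriptor, the sketch below bins optical-flow vectors by orientation, weighted by magnitude, and concatenates the result with an (assumed, precomputed) 3D-SIFT vector before SVM classification; the bin count and the simple concatenation are assumptions, not the paper's exact fusion.

```python
# Hedged sketch of the HOOF descriptor and SVM step: flow vectors are
# binned by orientation, weighted by magnitude, and L1-normalized; a
# precomputed 3D-SIFT vector is concatenated before classification.
import numpy as np
from sklearn.svm import SVC

def hoof(flow, n_bins=30):
    """flow: (H, W, 2) optical-flow field for one salient frame."""
    mag = np.hypot(flow[..., 0], flow[..., 1])
    ang = np.arctan2(flow[..., 1], flow[..., 0])       # in [-pi, pi]
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi),
                           weights=mag)
    return hist / (hist.sum() + 1e-8)                  # normalized histogram

def sgsh_feature(flows, sift_3d):
    """Mean HOOF over salient frames, concatenated with a 3D-SIFT vector."""
    return np.concatenate(
        [np.mean([hoof(f) for f in flows], axis=0), sift_3d])

clf = SVC(kernel="rbf", decision_function_shape="ovr")  # multi-class SVM
```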
