Open Access Research Article
Constructing self-supporting surfaces with planar quadrilateral elements
Computational Visual Media 2022, 8 (4): 571-583
Published: 11 May 2022
Downloads: 40

We present a simple yet effective method for constructing 3D self-supporting surfaces with planar quadrilateral (PQ) elements. Starting with a triangular discretization of a self-supporting surface, we first compute the principal curvatures and directions of each triangular face using a new discrete differential geometry approach, yielding more accurate results than existing methods. Then, we smooth the principal direction field to reduce the number of singularities. Next, we partition all faces into two groups according to their principal curvature difference. For each face with a small curvature difference, we compute a stretch matrix that turns the principal directions into a pair of conjugate directions. For the remaining triangular faces, we simply keep their smoothed principal directions. Finally, applying a mixed-integer programming solver to the mixed principal and conjugate direction field, we obtain a planar quadrilateral mesh. Experimental results show that our method is computationally efficient and can yield high-quality PQ meshes that closely approximate the geometry of the input surfaces while maintaining their self-supporting properties.
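
The face-partitioning step and the notion of conjugate directions can be illustrated with a minimal sketch, assuming a per-face symmetric 2x2 shape operator expressed in a local tangent frame; the threshold tau is a hypothetical parameter, and the paper's stretch-matrix construction is not reproduced here.

```python
import numpy as np

def principal_data(S):
    """Principal curvatures/directions of a symmetric 2x2 shape operator
    given in a local tangent frame (eigendecomposition)."""
    kappa, dirs = np.linalg.eigh(S)   # kappa[0] <= kappa[1]
    return kappa, dirs                # dirs[:, i] is a unit principal direction

def is_conjugate(S, u, v, eps=1e-8):
    """Directions u, v are conjugate w.r.t. the second fundamental form
    when II(u, v) = u^T S v vanishes."""
    return abs(u @ S @ v) < eps

def partition_faces(shape_ops, tau=0.05):
    """Split faces into near-umbilic ones (small curvature difference, to be
    re-aligned by a stretch) and anisotropic ones (keep principal directions)."""
    small, large = [], []
    for f, S in enumerate(shape_ops):
        kappa, _ = principal_data(S)
        (small if abs(kappa[1] - kappa[0]) < tau else large).append(f)
    return small, large
```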

Open Access Research Article
Image smoothing based on global sparsity decomposition and a variable parameter
Computational Visual Media 2021, 7 (4): 483-497
Published: 17 May 2021
Downloads: 42

Smoothing images, especially those with rich texture, is an important problem in computer vision. Obtaining an ideal result is difficult due to the complexity, irregularity, and anisotropy of texture. Moreover, texture and structure share some properties within an image, so retaining structure while removing texture is a hard compromise. An ideal image smoothing algorithm faces three problems: the smoothing effect should be enhanced for images with rich texture, inconsistency of the smoothing results across different parts of the image must be overcome, and a method is needed to evaluate the smoothing effect. We apply texture pre-removal based on global sparse decomposition with a variable smoothing parameter to solve the first two problems. A parametric surface constructed by an improved Bessel method is used to determine the smoothing parameter. Three evaluation measures, edge integrity rate, texture removal rate, and gradient value distribution, are proposed to cope with the third problem. We use the alternating direction method of multipliers (ADMM) to complete the whole algorithm and obtain the results. Experiments show that our algorithm outperforms existing algorithms both visually and quantitatively. We also demonstrate our method's ability in other applications such as clip-art compression artifact removal and content-aware image manipulation.
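
How ADMM interacts with a spatially varying smoothing parameter can be sketched with a generic sparse-gradient model; this is not the paper's exact decomposition, and lam_map merely stands in for the variable parameter that the parametric surface would supply.

```python
import numpy as np

def soft_shrink(x, t):
    """Elementwise soft-thresholding, the proximal operator of t*|x|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_smooth(g, lam_map, rho=1.0, iters=60):
    """Scaled ADMM for min_u 0.5||u - g||^2 + sum_x lam(x)*|grad u|(x),
    with periodic forward differences so the u-update is a pointwise
    division in the Fourier domain."""
    H, W = g.shape
    fx = np.zeros((H, W)); fx[0, 0], fx[0, -1] = -1.0, 1.0   # d/dx kernel
    fy = np.zeros((H, W)); fy[0, 0], fy[-1, 0] = -1.0, 1.0   # d/dy kernel
    Fx, Fy = np.fft.fft2(fx), np.fft.fft2(fy)
    denom = 1.0 + rho * (np.abs(Fx) ** 2 + np.abs(Fy) ** 2)

    u = g.astype(float).copy()
    zx, zy = np.zeros_like(u), np.zeros_like(u)
    wx, wy = np.zeros_like(u), np.zeros_like(u)
    for _ in range(iters):
        gx = np.roll(u, -1, axis=1) - u            # forward differences
        gy = np.roll(u, -1, axis=0) - u
        zx = soft_shrink(gx + wx, lam_map / rho)   # stronger shrinkage where
        zy = soft_shrink(gy + wy, lam_map / rho)   # lam_map is large
        num = (np.fft.fft2(g)
               + rho * (np.conj(Fx) * np.fft.fft2(zx - wx)
                        + np.conj(Fy) * np.fft.fft2(zy - wy)))
        u = np.real(np.fft.ifft2(num / denom))     # (I + rho*D^T D) u = rhs
        gx = np.roll(u, -1, axis=1) - u            # dual (multiplier) update
        gy = np.roll(u, -1, axis=0) - u
        wx += gx - zx
        wy += gy - zy
    return u
```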

Open Access Research Article
Multi-scale joint feature network for micro-expression recognition
Computational Visual Media 2021, 7 (3): 407-417
Published: 16 April 2021
Downloads: 22

Micro-expression recognition is a substantive cross-study of psychology and computer science, with a wide range of applications (e.g., psychological and clinical diagnosis, emotional analysis, and criminal investigation). However, the subtle and diverse changes in facial muscles make it difficult for existing methods to extract effective features, which limits improvements in micro-expression recognition accuracy. Therefore, we propose a multi-scale joint feature network based on optical flow images for micro-expression recognition. First, we generate an optical flow image that reflects subtle facial motion information. The optical flow image is then fed into the multi-scale joint network for feature extraction and classification. The proposed joint feature module (JFM) integrates features from different layers, which benefits the capture of micro-expression features of different amplitudes. To improve the recognition ability of the model, we also adopt a strategy that fuses the feature prediction results of the three JFMs with those of the backbone network. Our experimental results show that our method is superior to state-of-the-art methods on three benchmark datasets (SMIC, CASME II, and SAMM) and a combined dataset (3DB).
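
A rough sketch of how a joint feature module of this kind might fuse features from two depths of the backbone is given below; the abstract does not specify the exact design, so the channel sizes, pooling size, and class head are hypothetical.

```python
import torch
import torch.nn as nn

class JointFeatureModule(nn.Module):
    """Hypothetical fusion block: features from two depths are pooled to a
    common spatial size, concatenated, and reduced with a 1x1 convolution,
    so coarse and fine motion cues are scored together."""
    def __init__(self, c_low, c_high, c_out, n_classes, size=7):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(size)
        self.fuse = nn.Sequential(
            nn.Conv2d(c_low + c_high, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Linear(c_out, n_classes)

    def forward(self, f_low, f_high):
        x = torch.cat([self.pool(f_low), self.pool(f_high)], dim=1)
        x = self.fuse(x).mean(dim=(2, 3))   # global average pooling
        return self.head(x)                 # per-JFM class scores
```

The fusion strategy mentioned in the abstract could then, for example, average the scores of the three JFMs with the backbone's own prediction before the final softmax.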

Regular Paper
Two-Stream Temporal Convolutional Networks for Skeleton-Based Human Action Recognition
Journal of Computer Science and Technology 2020, 35 (3): 538-550
Published: 29 May 2020

With the growing popularity of somatosensory interaction devices, human action recognition is becoming attractive in many application scenarios. Skeleton-based action recognition is effective because the skeleton can represent the positions and structure of key points of the human body. In this paper, we leverage spatiotemporal vectors between skeleton sequences as the input feature representation of the network, which is more sensitive to changes of the human skeleton than representations based on distance and angle features. In addition, we redesign residual blocks with different strides along the depth of the network to improve the ability of temporal convolutional networks (TCNs) to process actions with long-term temporal dependencies. In this work, we propose two-stream temporal convolutional networks (TS-TCNs) that take full advantage of the inter-frame vector feature and the intra-frame vector feature of skeleton sequences in the spatiotemporal representations. The framework can integrate different feature representations of skeleton sequences so that the two representations can compensate for each other's shortcomings. A fusion loss function is used to supervise the training parameters of the two branch networks. Experiments on public datasets show that our network achieves superior performance and attains an improvement of 1.2% over the recent GCN-based method (BGC-LSTM) on the NTU RGB+D dataset.
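
A minimal sketch of the two vector feature types named above, assuming a skeleton sequence stored as a (T, J, 3) coordinate array; the exact joint pairing used in the paper is not given in the abstract, so the reference-joint choice here is illustrative.

```python
import numpy as np

def inter_frame_vectors(seq):
    """Temporal displacement of each joint between consecutive frames.
    seq: (T, J, 3) array of joint coordinates; returns (T-1, J, 3)."""
    return seq[1:] - seq[:-1]

def intra_frame_vectors(seq, ref_joint=0):
    """Spatial vectors from a reference joint (hypothetical choice: index 0)
    to every other joint within the same frame; returns (T, J, 3)."""
    return seq - seq[:, ref_joint:ref_joint + 1, :]
```

Each stream of the two-stream network would then consume one of these representations, with their predictions combined under the fusion loss.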

Open Access Research Article
A nonlocal gradient concentration method for image smoothing
Computational Visual Media 2015, 1 (3): 197-209
Published: 14 August 2015
Downloads: 44

It is challenging to consistently smooth natural images, yet smoothing results determine the quality of a broad range of applications in computer vision. To achieve consistent smoothing, we propose a novel optimization model making use of the redundancy of natural images, by defining a nonlocal concentration regularization term on the gradient. This nonlocal constraint is carefully combined with a gradient-sparsity constraint, allowing details throughout the whole image to be removed automatically in a data-driven manner. As variations in gradient between similar patches can be suppressed effectively, the new model has excellent edge preserving, detail removal, and visual consistency properties. Comparisons with state-of-the-art smoothing methods demonstrate the effectiveness of the new method. Several applications, including edge manipulation, image abstraction, detail magnification, and image resizing, show the applicability of the new method.
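
The nonlocal concentration idea, penalising how much gradients vary across similar patches, can be sketched as follows; the patch representation, neighbour search, and squared penalty are illustrative assumptions rather than the paper's exact regularization term.

```python
import numpy as np

def nonlocal_concentration_penalty(grad_patches, neighbours):
    """Sum of squared deviations of each patch's gradient from the mean
    gradient of its nonlocal (similar) patches.
    grad_patches: (N, d) flattened gradient patches.
    neighbours: list where neighbours[i] holds indices of patches similar to i."""
    penalty = 0.0
    for i, idx in enumerate(neighbours):
        mean_grad = grad_patches[idx].mean(axis=0)
        penalty += np.sum((grad_patches[i] - mean_grad) ** 2)
    return penalty
```

In the full model this term would be combined with a gradient-sparsity term and a data-fidelity term, so that gradient variation between similar patches is suppressed while salient edges are kept.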
