Open Access Research Article
Image-guided color mapping for categorical data visualization
Computational Visual Media 2022, 8 (4): 613-629
Published: 27 May 2022

Appropriate color mapping for categorical data visualization can significantly facilitate the discovery of underlying data patterns and effectively bring out visual aesthetics. Some systems suggest predefined palettes for this task. However, a predefined color mapping is not always optimal, as it fails to consider users’ needs for customization. Given an input categorical data visualization and a reference image, we present an effective method to automatically generate a coloring that resembles the reference while allowing classes to be easily distinguished. We extract a color palette with high perceptual distance between its colors by sampling dominant and discriminable colors from the image’s color space. These colors are assigned to the given classes by solving an integer quadratic program that optimizes point distinctness of the given chart while preserving the color spatial relations in the source image. We show results on various coloring tasks, with a diverse set of new coloring appearances for the input data. We also compare our approach to state-of-the-art palettes in a controlled user study, which shows that our method achieves comparable performance in class discrimination while being more similar to the source image. User feedback after using our system verifies its efficiency in automatically generating desirable colorings that meet the user’s expectations when choosing a reference.
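
The palette-extraction and assignment steps described above can be sketched roughly as follows. This is a minimal illustration, not the authors’ implementation: it clusters the reference image in CIELAB, greedily keeps mutually discriminable colors, and replaces the paper’s integer quadratic program with a simple linear assignment; all function names, thresholds, and the unit-square coordinate convention are assumptions made for the sketch.

```python
# Minimal sketch: dominant, mutually discriminable colors from a reference
# image (assumed float RGB in [0, 1]), then a simplified class assignment.
import numpy as np
from sklearn.cluster import KMeans
from skimage.color import rgb2lab
from scipy.optimize import linear_sum_assignment

def extract_palette(image_rgb, n_classes, n_candidates=20, min_dist=20.0):
    """Dominant CIELAB colors that stay perceptually far apart."""
    lab = rgb2lab(image_rgb).reshape(-1, 3)
    centers = KMeans(n_clusters=n_candidates, n_init=5).fit(lab).cluster_centers_
    palette = [centers[0]]
    for c in centers[1:]:
        if len(palette) == n_classes:
            break
        if all(np.linalg.norm(c - p) >= min_dist for p in palette):
            palette.append(c)
    if len(palette) < n_classes:
        raise ValueError("relax min_dist or raise n_candidates")
    return np.array(palette)

def assign_colors(palette, class_positions, color_positions):
    """Match classes to colors so classes that are close in the chart get
    colors sampled from nearby locations in the source image. Both sets of
    2D positions are assumed normalized to [0, 1]^2; this linear assignment
    is a stand-in for the paper's integer quadratic program."""
    cost = np.linalg.norm(class_positions[:, None] - color_positions[None, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return {int(r): palette[c] for r, c in zip(rows, cols)}
```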

Open Access Research Article
ARM3D: Attention-based relation module for indoor 3D object detection
Computational Visual Media 2022, 8 (3): 395-414
Published: 08 March 2022

Relation contexts have proven useful for many challenging vision tasks. In the field of 3D object detection, previous methods have taken advantage of context encoding, graph embedding, or explicit relation reasoning to extract relation contexts. However, redundant relation contexts inevitably arise from noisy or low-quality proposals. In fact, invalid relation contexts usually indicate underlying scene misunderstanding and ambiguity, which may, on the contrary, reduce performance in complex scenes. Inspired by recent attention mechanisms such as the Transformer, we propose a novel 3D attention-based relation module (ARM3D). It encompasses object-aware relation reasoning to extract pairwise relation contexts among qualified proposals and an attention module to distribute attention weights towards different relation contexts. In this way, ARM3D can take full advantage of the useful relation contexts and filter out those that are less relevant or even confusing, which mitigates ambiguity in detection. We have evaluated the effectiveness of ARM3D by plugging it into several state-of-the-art 3D object detectors, yielding more accurate and robust detection results. Extensive experiments show the capability and generalization of ARM3D on 3D object detection. Our source code is available at https://github.com/lanlan96/ARM3D.
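
The relation-reasoning and attention-weighting idea can be sketched in PyTorch roughly as below. This is an illustrative simplification, not the released ARM3D architecture (see the repository above): it builds a relation feature for every pair of proposal features, scores each pair with a learned attention weight, and fuses the aggregated context back into each proposal; all layer sizes and names are assumptions.

```python
# Minimal sketch of an attention-weighted pairwise relation module: noisy
# pairs can receive low attention weights, so their relation contexts
# contribute little to the fused proposal features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseRelationAttention(nn.Module):
    def __init__(self, feat_dim=128, rel_dim=64):
        super().__init__()
        # Relation feature from a concatenated pair of proposal features.
        self.rel_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, rel_dim), nn.ReLU(),
            nn.Linear(rel_dim, rel_dim))
        # Scalar attention score per pair, normalized across partners.
        self.attn = nn.Linear(rel_dim, 1)
        self.out = nn.Linear(rel_dim, feat_dim)

    def forward(self, proposals):
        # proposals: (N, feat_dim) features of qualified object proposals.
        n = proposals.size(0)
        pairs = torch.cat([
            proposals.unsqueeze(1).expand(n, n, -1),
            proposals.unsqueeze(0).expand(n, n, -1)], dim=-1)    # (N, N, 2F)
        rel = self.rel_mlp(pairs)                                 # (N, N, R)
        weights = F.softmax(self.attn(rel).squeeze(-1), dim=-1)   # (N, N)
        context = torch.einsum('ij,ijr->ir', weights, rel)        # (N, R)
        # Fuse the aggregated relation context back into each proposal.
        return proposals + self.out(context)
```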

Open Access Research Article
Inferring object properties from human interaction and transferring them to new motions
Computational Visual Media 2021, 7 (3): 375-392
Published: 19 April 2021

Humans regularly interact with their surrounding objects. Such interactions often result in strongly correlated motions between humans and the interacting objects. We thus ask: "Is it possible to infer object properties from skeletal motion alone, even without seeing the interacting object itself?" In this paper, we present a fine-grained action recognition method that learns to infer such latent object properties from human interaction motion alone. This inference allows us to disentangle the motion from the object property and transfer object properties to a given motion. We collected a large number of videos and 3D skeletal motions of performing actors using an inertial motion capture device. We analyzed similar actions and learned subtle differences between them to reveal latent properties of the interacting objects. In particular, we learned to identify the interacting object by estimating its weight or its spillability. Our results clearly demonstrate that motions and interacting objects are highly correlated and that latent properties of the related objects can be inferred from 3D skeleton sequences alone, leading to new synthesis possibilities for motions involving human interaction. Our dataset is available at http://vcc.szu.edu.cn/research/2020/IT.html.
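
As an illustration of inferring a latent object property from skeletal motion alone, a minimal sequence classifier might look as follows. The GRU architecture, joint count, and property classes here are assumptions made for the sketch, not the authors’ model; the actual dataset and method are described at the link above.

```python
# Minimal sketch: classify a 3D skeleton sequence into a discrete latent
# object property (e.g., light / medium / heavy for estimated weight).
import torch
import torch.nn as nn

class SkeletonPropertyClassifier(nn.Module):
    def __init__(self, n_joints=21, n_property_classes=3, hidden=256):
        super().__init__()
        # Each frame is the flattened (x, y, z) coordinates of all joints.
        self.encoder = nn.GRU(input_size=3 * n_joints, hidden_size=hidden,
                              num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_property_classes)

    def forward(self, skeleton_seq):
        # skeleton_seq: (batch, frames, 3 * n_joints) motion-capture clip.
        _, h = self.encoder(skeleton_seq)
        # Use the final hidden state of the last layer as the motion summary.
        return self.head(h[-1])

# Example: a 2-second clip at 60 fps with 21 joints -> 3 property classes.
model = SkeletonPropertyClassifier()
logits = model(torch.randn(4, 120, 63))   # shape (4, 3)
```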

Open Access Research Article
Discernible image mosaic with edge-aware adaptive tiles
Computational Visual Media 2019, 5 (1): 45-58
Published: 08 April 2019

We present a novel method to produce discernible image mosaics, with relatively large image tiles replaced by images drawn from a database, to resemble a target image. Compared to existing works on image mosaics, the novelty of our method is two-fold. Firstly, believing that the presence of visual edges in the final image mosaic strongly supports image perception, we develop an edge-aware photo retrieval scheme which emphasizes the preservation of visual edges in the target image. Secondly, unlike most previous works which apply a pre-determined partition to an input image, our image mosaics are composed of adaptive tiles, whose sizes are determined based on the available images in the database and the objective of maximizing resemblance to the target image. We show discernible image mosaics obtained by our method using image collections of only moderate size. To evaluate our method, we conducted a user study validating that the generated image mosaics present both globally and locally appropriate visual impressions to human observers. Visual comparisons with existing techniques demonstrate the superiority of our method in terms of mosaic quality and perceptibility.
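
The two ideas above, edge-aware retrieval and adaptive tiles, can be sketched roughly as below. The cost weights, similarity threshold, and simple quadtree-style split are illustrative assumptions, not the paper’s exact formulation; tile images and the target are assumed to be float RGB arrays in [0, 1].

```python
# Minimal sketch: score candidate tiles by combined color and edge-map
# difference, and subdivide a region when no database image matches well.
import numpy as np
from skimage.transform import resize
from skimage.filters import sobel
from skimage.color import rgb2gray

def tile_cost(region, candidate, edge_weight=0.5):
    """Lower is better: weighted color and edge-map difference."""
    cand = resize(candidate, region.shape[:2], anti_aliasing=True)
    color_diff = np.mean(np.abs(region - cand))
    edge_diff = np.mean(np.abs(sobel(rgb2gray(region)) - sobel(rgb2gray(cand))))
    return (1 - edge_weight) * color_diff + edge_weight * edge_diff

def build_mosaic(target, database, x0, y0, size, min_size=32, threshold=0.15):
    """Recursively pick a tile per square region; split poor matches."""
    region = target[y0:y0 + size, x0:x0 + size]
    costs = [tile_cost(region, img) for img in database]
    best = int(np.argmin(costs))
    if costs[best] <= threshold or size // 2 < min_size:
        return [(x0, y0, size, best)]       # (position, tile size, image id)
    half = size // 2
    tiles = []
    for dx in (0, half):
        for dy in (0, half):
            tiles += build_mosaic(target, database, x0 + dx, y0 + dy,
                                  half, min_size, threshold)
    return tiles
```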
