Regular Paper Issue
CDM: Content Diffusion Model for Information-Centric Networks
Journal of Computer Science and Technology 2021, 36 (6): 1431-1451
Published: 30 November 2021

This paper proposes the Content Diffusion Model (CDM) for modeling the content diffusion process in information-centric networking (ICN). CDM is inspired by the epidemic model and provides a theoretical, quantitative analysis of the content diffusion process in ICN. Specifically, CDM introduces key functions to formalize the key factors that influence the content diffusion process, and thus it can construct the model in a simple but efficient way. Further, we derive variants of CDM using different combinations of those key factors and apply them to several typical ICN scenarios, to analyze characteristics of the diffusion process such as diffusion speed, diffusion scope, average fetching hops, and the changing and final states, which can greatly help with network performance analysis and application design. A series of experiments are conducted to evaluate the efficacy and accuracy of CDM. The results show that CDM can accurately illustrate and model the content diffusion process in ICN.
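The epidemic analogy behind CDM can be made concrete with a toy simulation: nodes that cache the content play the role of "infected" individuals, and each successful fetch leaves a new copy behind. The sketch below only illustrates that intuition and is not the paper's model; the random topology, the per-node request rate, and the cache-everything policy are all assumptions introduced here.

```python
import random
from collections import deque

def make_random_graph(n, p, seed=0):
    """Erdos-Renyi graph as an adjacency list (a stand-in ICN topology)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def nearest_copy_hops(adj, src, holders):
    """BFS distance from src to the closest node caching the content."""
    if src in holders:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in adj[node]:
            if nbr in holders:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None  # content unreachable from src

def simulate_diffusion(n=200, p=0.04, request_rate=0.1, steps=20, seed=1):
    """SI-style spread: a node that fetches the content also caches it."""
    rng = random.Random(seed)
    adj = make_random_graph(n, p, seed)
    holders = {0}  # node 0 acts as the origin server
    for t in range(1, steps + 1):
        requesters = [v for v in adj
                      if v not in holders and rng.random() < request_rate]
        hops, new_holders = [], set()
        for v in requesters:
            h = nearest_copy_hops(adj, v, holders)  # distances against the
            if h is not None:                       # start-of-step cache state
                hops.append(h)
                new_holders.add(v)
        holders |= new_holders
        scope = len(holders) / n
        avg_hops = sum(hops) / len(hops) if hops else 0.0
        print(f"t={t:2d}  scope={scope:.2f}  avg_fetch_hops={avg_hops:.2f}")

if __name__ == "__main__":
    simulate_diffusion()
```

Running it prints the diffusion scope (fraction of nodes holding the content) and the average fetching hops per step, two of the characteristics CDM analyzes analytically.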

Open Access Research Article Issue
See clearly on rainy days: Hybrid multiscale loss guided multi-feature fusion network for single image rain removal
Computational Visual Media 2021, 7 (4): 467-482
Published: 23 March 2021

The quality of photos is highly susceptible to severe weather such as heavy rain, which can also degrade the performance of various visual tasks like object detection. Rain removal is a challenging problem because rain streaks have different appearances even within one image: regions where rain accumulates appear foggy or misty, while rain streaks can be clearly seen in areas where rain is less heavy. We propose removing various rain effects in pictures using a hybrid multiscale loss guided multiple feature fusion de-raining network (MSGMFFNet). Specifically, to deal with rain streaks, our method generates a rain streak attention map, while preprocessing uses gamma correction and contrast enhancement to enhance images and address the problem of rain accumulation. Using these tools, the model can restore a result with abundant details. Furthermore, a hybrid multiscale loss combining L1 loss and edge loss is used to guide the training process to pay attention to edge and content information. Comprehensive experiments conducted on both synthetic and real-world datasets demonstrate the effectiveness of our method.
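As a rough illustration of how such a hybrid multiscale loss can be assembled, the PyTorch sketch below combines an L1 term with an edge term over an image pyramid. The Sobel edge operator, the scale factors, and the edge weight are assumptions made here for concreteness; the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

# Sobel kernels as a simple edge extractor (an assumption; the abstract
# does not say which edge operator the paper uses).
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def edge_map(img):
    """Gradient magnitude of a grayscale version of img (N, C, H, W)."""
    gray = img.mean(dim=1, keepdim=True)
    kx = _SOBEL_X.to(img.device, img.dtype)
    ky = _SOBEL_Y.to(img.device, img.dtype)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def hybrid_multiscale_loss(pred, target, scales=(1.0, 0.5, 0.25),
                           edge_weight=0.1):
    """Sum of (L1 + weighted edge L1) over several resolutions."""
    total = pred.new_zeros(())
    for s in scales:
        if s == 1.0:
            p, t = pred, target
        else:
            p = F.interpolate(pred, scale_factor=s, mode='bilinear',
                              align_corners=False)
            t = F.interpolate(target, scale_factor=s, mode='bilinear',
                              align_corners=False)
        total = total + F.l1_loss(p, t) \
                      + edge_weight * F.l1_loss(edge_map(p), edge_map(t))
    return total
```

In training, such a term would stand in for a plain L1 objective, e.g. `loss = hybrid_multiscale_loss(model(rainy), clean)`.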

Open Access Research Article Issue
An end-to-end convolutional network for joint detecting and denoising adversarial perturbations in vehicle classification
Computational Visual Media 2021, 7 (2): 217-227
Published: 25 January 2021

Deep convolutional neural networks (DCNNs) have been widely deployed in real-world scenarios. However, DCNNs are easily tricked by adversarial examples, which present challenges for critical applications such as vehicle classification. To address this problem, we propose a novel end-to-end convolutional network for joint detection and removal of adversarial perturbations by denoising (DDAP). It removes adversarial perturbations using the DDAP denoiser, based on adversarial examples discovered by the DDAP detector. The proposed method can be regarded as a pre-processing step: it does not require modifying the structure of the vehicle classification model and hardly affects the classification results on clean images. We consider four kinds of adversarial attacks (FGSM, BIM, DeepFool, PGD) to verify DDAP's capabilities when trained on BIT-Vehicle and other public datasets. It provides better defense than other state-of-the-art defensive methods.
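The detect-then-denoise idea can be pictured as a thin wrapper around a frozen classifier. The module below is a generic sketch of that pre-processing pattern, not the DDAP architecture itself; the detector, denoiser, and the 0.5 decision threshold are placeholders assumed here.

```python
import torch
import torch.nn as nn

class DetectThenDenoise(nn.Module):
    """Pre-processing wrapper: only inputs the detector flags as adversarial
    are denoised; clean inputs reach the classifier untouched, so accuracy
    on clean images is largely preserved."""

    def __init__(self, detector, denoiser, classifier, threshold=0.5):
        super().__init__()
        self.detector = detector      # image -> P(adversarial), shape (N,)
        self.denoiser = denoiser      # image -> reconstructed clean image
        self.classifier = classifier  # unmodified vehicle classifier
        self.threshold = threshold

    def forward(self, x):
        p_adv = self.detector(x).view(-1)                       # (N,)
        mask = (p_adv > self.threshold).float().view(-1, 1, 1, 1)
        x = mask * self.denoiser(x) + (1.0 - mask) * x
        return self.classifier(x)
```

Because the wrapper leaves the classifier unchanged, it can be dropped in front of an already-deployed model, which matches the abstract's claim that no structural modification is required.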

Regular Paper Issue
PVSS: A Progressive Vehicle Search System for Video Surveillance Networks
Journal of Computer Science and Technology 2019, 34 (3): 634-644
Published: 10 May 2019

This paper focuses on the task of searching for a specific vehicle that appears in surveillance networks. Existing methods usually assume that vehicle images are well cropped from the surveillance videos, and then use visual attributes, such as color and type, or license plate numbers to match the target vehicle in the image set. However, a complete vehicle search system must consider the problems of vehicle detection, representation, indexing, storage, matching, and so on. Moreover, it is very difficult for attribute-based search to accurately find the same vehicle due to intra-instance changes across different cameras and the extremely uncertain environment, and license plates may be mis-recognized in surveillance scenes due to low resolution and noise. In this paper, a progressive vehicle search system, named PVSS, is designed to solve these problems. PVSS consists of three modules: the crawler, the indexer, and the searcher. The vehicle crawler detects and tracks vehicles in surveillance videos and transfers the captured vehicle images, metadata, and contextual information to the server or cloud. Then multi-grained attributes, such as visual features and license plate fingerprints, are extracted and indexed by the vehicle indexer. Finally, a query triplet consisting of an input vehicle image, a time range, and a spatial scope is taken as the input by the vehicle searcher, and the target vehicle is searched for in the database through a progressive process. Extensive experiments on a public dataset from a real surveillance network validate the effectiveness of PVSS.
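One way to picture the progressive query flow is the pure-Python sketch below: the spatio-temporal scope of the query triplet prunes the index first, appearance features rank the survivors, and the license plate fingerprint re-ranks a short list. The record schema and both scoring functions are hypothetical stand-ins; the paper's actual indexing and matching are more elaborate.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class VehicleRecord:
    feature: List[float]   # appearance feature produced by the indexer
    plate_fp: str          # license plate "fingerprint" (hypothetical encoding)
    camera_id: int
    timestamp: datetime

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / (norm + 1e-9)

def progressive_search(query_feat, query_fp, t_range, cameras, index, k=10):
    """Stage 1: prune by the time range and spatial scope of the query triplet.
    Stage 2: rank survivors by appearance similarity.
    Stage 3: re-rank the short list with the plate fingerprint."""
    t0, t1 = t_range
    candidates = [r for r in index
                  if t0 <= r.timestamp <= t1 and r.camera_id in cameras]
    candidates.sort(key=lambda r: cosine(query_feat, r.feature), reverse=True)
    shortlist = candidates[:k]

    def fp_score(r):  # hypothetical: fraction of matching fingerprint symbols
        return sum(a == b for a, b in zip(query_fp, r.plate_fp)) \
               / max(len(query_fp), 1)

    shortlist.sort(key=lambda r: (fp_score(r), cosine(query_feat, r.feature)),
                   reverse=True)
    return shortlist
```

Each stage shrinks the candidate set before the next, more expensive comparison runs, which is the essence of searching progressively rather than matching every record against every cue at once.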
