Big Data Mining and Analytics

ISSN 2096-0654 e-ISSN 2097-406X CN 10-1514/G2
Editors-in-Chief: Yi Pan, Weimin Zheng
Open Access
Release Time: 2023-01-03
CFP-Special Issue on Intelligent Network Video Advances based on Transformers

Intelligent video understanding can be defined as the integration of video technology and analytics for a variety of purposes, such as tracking movements or events. Tasks involving video processing, perception, and understanding are receiving increasing attention in computer vision, pattern recognition, and machine learning. The advent of deep learning models has demonstrated the significance of both low-level and high-level video interpretation in real-world applications, e.g., super-pixel volumetric restoration, autonomous driving, human-computer interaction, robotics, and video surveillance. In contrast to images, videos provide additional sequential information, so video streams are highly valuable for compensating for the limitations of single images. However, understanding videos is far more challenging than understanding images, owing to their space-time complexity.

Video transformer networks have recently emerged as an effective alternative to convolutional networks for video tasks such as action classification and video object/instance segmentation. Inspired by recent developments in vision transformers (ViT), video transformers operate on spatio-temporal queries across time steps. The combination of temporal self-attention and spatial cross-attention provides a foundation for many video recognition tasks.
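To make the factorized space-time attention mentioned above concrete, the following is a minimal sketch of divided temporal-then-spatial attention over a grid of video tokens. It is a single-head, projection-free illustration in NumPy; the shapes, function names, and the omission of learned query/key/value projections are simplifying assumptions, not the implementation of any particular model.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: softmax(q k^T / sqrt(d)) v
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def divided_space_time_attention(x):
    """Factorized attention over video tokens (illustrative sketch).

    x: array of shape (T, P, D) -- T frames, P spatial patches, D channels.
    Temporal attention first mixes the same patch position across frames;
    spatial attention then mixes patches within each frame.
    """
    # Temporal attention: for each patch position, attend across the T frames.
    xt = x.transpose(1, 0, 2)      # (P, T, D)
    xt = attention(xt, xt, xt)
    x = xt.transpose(1, 0, 2)      # back to (T, P, D)
    # Spatial attention: within each frame, attend across the P patches.
    return attention(x, x, x)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 16, 8))  # 4 frames, 16 patches, 8 channels
out = divided_space_time_attention(tokens)
print(out.shape)  # (4, 16, 8): same token grid, attention-mixed features
```

Real video transformers interleave such attention blocks with learned projections, residual connections, and normalization; this sketch only shows how the temporal and spatial attention axes are factorized.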

To embrace the emerging challenges in intelligent video understanding, this special issue establishes a venue for exchanging ideas and advanced technological research outcomes across the global research and industrial communities. It promotes engagement with advanced transformer-based algorithms in video applications, and highlights ongoing investigations and new applications. Prospective submissions may fall into, but are not limited to, the following topics:

  • Optical flow estimation
  • Depth estimation from video streams
  • Video object/instance/panoptic segmentation
  • Motion estimation
  • Multi-object tracking
  • Anomalous event detection
  • Supervised, weakly supervised, or unsupervised representation learning for video understanding
  • Video generation and intelligent editing
  • Lightweight networks for long-video processing

Authors are requested to submit full research papers complying with the general scope of the journal. Submitted papers will undergo a peer review process before they can be accepted. Notification of acceptance will be communicated as the review process progresses.

SUBMISSION GUIDELINES

Papers submitted to this journal for possible publication must be original and must not be under consideration for publication in any other journals. Prospective authors should submit an electronic copy of their completed manuscript to https://mc03.manuscriptcentral.com/bdma with manuscript type as “Special Issue on Intelligent Network Video Advances based on Transformers”. Further information on the journal is available at: https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=8254253.

IMPORTANT DATES

Deadline for submissions: January 31, 2024

1st round of acceptance notification: April 30, 2024

Submission of revised papers: May 31, 2024

2nd round of acceptance notification: June 30, 2024

Publication date: August 31, 2024

GUEST EDITORS

Lin Yuanbo Wu, Swansea University, United Kingdom. E-mail: l.y.wu@swansea.ac.uk

Bo Li, Northwestern Polytechnical University, China. E-mail: libo@nwpu.edu.cn

Huibing Wang, Dalian Maritime University, China. E-mail: huibing.wang@dlmu.edu.cn

Chunhua Shen, Zhejiang University, China. E-mail: chhshen@gmail.com

Benjamin Mora, Swansea University, United Kingdom. E-mail: b.mora@swansea.ac.uk

Chen Chen, University of Central Florida, Orlando, FL, USA. E-mail: chen.chen@crcv.ucf.edu

Xianghua Xie, Swansea University, United Kingdom. E-mail: x.xie@swansea.ac.uk