Traditional video stabilization methods smooth the camera path with a warping operation, which leaves missing regions at the frame boundaries. Full-frame video stabilization techniques attempt to fill in these missing boundary regions, but their effectiveness is limited. In this work, we propose a full-frame video stabilization method that uses spatiotemporal transformers to fill the missing boundary regions left by the warping operation. For training, we adopt a self-supervised strategy and improve it by incorporating temporal information. The proposed approach exploits redundant video information both spatially and temporally while filling in the missing regions. Experimental results show that our approach achieves superior results on popular video stabilization datasets. The code, pre-trained model, and video results are available at https://github.com/leventkaracan/VidStabFormer.
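To make the motivation concrete, the sketch below illustrates (it is not the authors' implementation) why warping-based stabilization produces missing boundary regions: warping a frame toward a smoothed camera path pulls content away from the borders, and the uncovered pixels form exactly the mask a full-frame method must synthesize. The homography `H_smooth`, standing in for a smoothed-path correction, is a hypothetical input; only standard OpenCV/NumPy calls are used.

```python
import cv2
import numpy as np

def warp_with_mask(frame: np.ndarray, H_smooth: np.ndarray):
    """Warp a frame toward the smoothed camera path and return the
    warped frame plus a mask of the now-empty boundary pixels."""
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(frame, H_smooth, (w, h))
    # Warp an all-ones coverage map the same way: zeros in the result
    # mark pixels that received no source content, i.e., the missing
    # boundary regions a full-frame method must fill in.
    coverage = cv2.warpPerspective(np.ones((h, w), np.uint8), H_smooth, (w, h))
    missing = coverage == 0
    return warped, missing

# Example: a small translation, as a smoothed-path correction might apply.
frame = np.random.randint(0, 255, (240, 320, 3), np.uint8)
H_smooth = np.array([[1.0, 0.0, 15.0],
                     [0.0, 1.0, -10.0],
                     [0.0, 0.0, 1.0]])
warped, missing = warp_with_mask(frame, H_smooth)
print("missing boundary pixels:", int(missing.sum()))
```

In the proposed approach, these masked regions are filled by spatiotemporal transformers that draw on redundant content from neighboring frames rather than being cropped away.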
