Existing human pose estimation methods seldom account for the constraints imposed by different types of contact. In this paper, we analyze the impact of both body-scene contact and self-contact on pose estimation, referring to them collectively as general contact. First, we extend existing datasets by computing additional contact labels for general contact inference. Based on the extended datasets, we then present the first network to predict dense general contact from a single RGB image. Finally, we develop a novel optimization method that exploits the inferred general contact information for accurate 3D pose estimation. Our results show that knowledge of contact provides strong constraints and resolves pose ambiguity, significantly improving pose estimation accuracy, especially for challenging poses that existing methods handle poorly. Experimental results and comparisons further demonstrate the effectiveness of the proposed method; in some cases, our results are even more plausible than pseudo-ground truth derived from multi-view images.
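The abstract does not specify the form of the fitting objective, but the core idea of using inferred contact to constrain 3D pose can be illustrated with a minimal, hypothetical sketch. The snippet below assumes a vertex-based body representation, a per-vertex scene-contact probability (`contact_prob`, standing in for the contact network's output), predicted self-contact vertex pairs (`pair_idx`), and a scene point cloud (`scene_pts`). All names, loss weights, and the plain optimization loop over raw vertices are illustrative assumptions, not the authors' implementation, which would typically optimize body-model parameters instead.

```python
# Hypothetical sketch of contact-aware pose fitting (not the paper's code).
import torch

def scene_contact_loss(verts, contact_prob, scene_pts, thresh=0.5):
    """Pull vertices predicted to be in scene contact onto the nearest scene point."""
    mask = contact_prob > thresh                 # (V,) boolean contact mask
    if not mask.any():
        return verts.new_zeros(())
    d = torch.cdist(verts[mask], scene_pts)      # (Vc, S) pairwise distances
    nearest = d.min(dim=1).values                # distance to closest scene point
    # Weight each residual by the network's confidence in the contact label.
    return (contact_prob[mask] * nearest).sum()

def self_contact_loss(verts, pair_idx):
    """Encourage predicted self-contact vertex pairs (e.g. hand-on-thigh) to touch."""
    a, b = verts[pair_idx[:, 0]], verts[pair_idx[:, 1]]
    return (a - b).norm(dim=1).sum()

# Toy usage: optimize free vertex positions under both contact terms.
verts = torch.randn(100, 3, requires_grad=True)  # stand-in for posed body mesh
scene = torch.rand(500, 3)                       # stand-in scene point cloud
prob  = torch.rand(100)                          # stand-in contact predictions
pairs = torch.tensor([[3, 70], [12, 55]])        # stand-in self-contact pairs
opt = torch.optim.Adam([verts], lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    loss = scene_contact_loss(verts, prob, scene) + 0.1 * self_contact_loss(verts, pairs)
    loss.backward()
    opt.step()
```

In a full pipeline, terms like these would be combined with standard reprojection and pose-prior losses; the contact terms are what resolve depth and penetration ambiguities that image evidence alone cannot.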

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.