Research | Open Access

PS-Net: high-frequency attention and Bayesian analysis based facial pore segmentation with no human annotation

Qing Zhang1,2, Ling Li2, Rizhao Cai2, Qingli Li1, Bandara Dissanayake3, Yan Wang1 (corresponding author), Alex Kot2
1 Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, China
2 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
3 Beauty Care R&D, Procter and Gamble, Mason, OH 45040, USA

Abstract

Facial pore segmentation results can provide reliable evidence for simulating post-product pore conditions and generating product recommendations. However, accurately segmenting pores is challenging due to their small size, weak boundaries, and dense distribution, and precise annotations are difficult to acquire. Therefore, we formulate pore segmentation as a two-stage, weakly supervised task that combines traditional and deep learning methods without human annotation. We propose a novel method called the pore segmentation network (PS-Net). Specifically, it comprises pore feature extraction with coarse labels generated by a traditional method, followed by fine segmentation with progressively updated pseudo labels. Since pores carry high-frequency information about faces, we propose a high-frequency attention module that emphasizes low-level features. Moreover, we design a Bayesian module to identify pore shapes in high-level features. We establish a large-scale facial pore dataset with coarse labels generated via the difference-of-Gaussians (DoG) Pore method. PS-Net achieves the best performance on this dataset, demonstrating its superiority over existing state-of-the-art segmentation methods.
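The coarse labels mentioned above come from a difference-of-Gaussians (DoG) filter, which responds strongly to small dark blobs such as pores. As a rough illustration of the idea (not the authors' exact DoG Pore pipeline; the function name, sigmas, and threshold rule below are illustrative assumptions), a narrow Gaussian blur is subtracted from a wide one and the response is thresholded:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pore_mask(gray, sigma_small=1.0, sigma_large=3.0, k=1.5):
    """Coarse pore mask via difference of Gaussians (DoG).

    Pores appear as small dark blobs, so subtracting a narrow blur
    from a wide blur yields a positive response wherever a pixel is
    darker than its local surroundings; thresholding that response
    gives a coarse (noisy) binary label.
    """
    g = gray.astype(np.float32)
    dog = gaussian_filter(g, sigma_large) - gaussian_filter(g, sigma_small)
    # Simple statistical threshold on the response map.
    thresh = dog.mean() + k * dog.std()
    return dog > thresh

# Synthetic example: one small dark spot on a bright background.
img = np.full((64, 64), 200.0)
img[30:34, 30:34] = 50.0
mask = dog_pore_mask(img)
```

In a real pipeline such coarse masks would be noisy, which is exactly why the paper treats them as weak supervision and refines them with progressively updated pseudo labels.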

Visual Intelligence
Article number: 20


Cite this article:
Zhang Q, Li L, Cai R, et al. PS-Net: high-frequency attention and Bayesian analysis based facial pore segmentation with no human annotation. Visual Intelligence, 2025, 3: 20. https://doi.org/10.1007/s44267-025-00088-9


Received: 18 April 2025
Revised: 15 September 2025
Accepted: 16 September 2025
Published: 06 November 2025
© The Author(s) 2025.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.