Research | Open Access

PrimitiveNet: decomposing the global constraints for referring segmentation

Chang Liu¹, Xudong Jiang¹, Henghui Ding² (corresponding author)
¹ School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
² Institute of Big Data, Fudan University, Shanghai, 200433, China

Abstract

In referring segmentation, modeling the complicated constraints in multimodal information is one of the most challenging problems. As the information in a given language expression and image becomes increasingly abundant, most current one-stage methods that directly output the segmentation mask have difficulty understanding the complicated relationships between the image and the expression. In this work, we propose PrimitiveNet, which decomposes the difficult global constraints into a set of simple primitives. Each primitive produces a primitive mask that represents only a simple semantic meaning, e.g., all instances of the same category. The output segmentation mask is then computed by selectively combining these primitives according to the language expression. Furthermore, we propose a cross-primitive attention (CPA) module and a language-primitive attention (LPA) module to exchange information among all primitives and with the language expression, respectively. The proposed CPA and LPA help the network find appropriate weights for the primitive masks, so as to recover the target object. Extensive experiments demonstrate the effectiveness of our design and show that the proposed network outperforms current state-of-the-art referring segmentation methods on three RefCOCO datasets.
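The core idea in the abstract — selectively combining simple primitive masks under language-derived weights — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, softmax weighting, and toy masks are illustrative assumptions about how a weighted combination of primitive masks might look.

```python
import numpy as np

def combine_primitives(primitive_masks, weights):
    """Selectively combine primitive masks into one segmentation mask.

    primitive_masks: (N, H, W) array; each mask carries one simple
                     semantic meaning (e.g., all instances of a category).
    weights: (N,) array of language-derived selection scores
             (assumed here; the paper's CPA/LPA modules produce these).
    """
    # softmax over primitives, computed stably
    w = np.exp(weights - weights.max())
    w = w / w.sum()
    # weighted sum over the primitive axis -> (H, W) output mask
    return np.tensordot(w, primitive_masks, axes=1)

# toy example: two 2x2 primitive masks
masks = np.array([[[1.0, 0.0], [0.0, 0.0]],
                  [[0.0, 1.0], [1.0, 1.0]]])
weights = np.array([2.0, -2.0])  # expression strongly favors primitive 0
out = combine_primitives(masks, weights)
```

With these weights, the output mask is dominated by the first primitive, mimicking how an expression would select the relevant primitives while suppressing the rest.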

Visual Intelligence, Article number: 16

Cite this article:
Liu C, Jiang X, Ding H. PrimitiveNet: decomposing the global constraints for referring segmentation. Visual Intelligence, 2024, 2: 16. https://doi.org/10.1007/s44267-024-00049-8

715 Views · 21 Crossref citations

Received: 09 February 2024
Revised: 07 June 2024
Accepted: 10 June 2024
Published: 27 June 2024
© The Author(s) 2024.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.