Open Access

Prompt-Based Learning for Factual Knowledge Infusion in Large Language Models

Ziheng Cheng1 (✉), Yiming Cao2
1 School of Software, Shandong University, Jinan 250000, China
2 Research Scientist, Alibaba-NTU Global e-Sustainability CorpLab (ANGEL), Nanyang Technological University, Singapore 639798, Singapore

Abstract

Large Language Models (LLMs) are increasingly employed in knowledge-intensive tasks but often struggle to effectively apply infused knowledge due to textual-structure mismatches between the infusion and reasoning phases. To address this issue, we propose a prompt-based unification strategy that directly learns from factual triples in knowledge graphs while preserving structural consistency across both phases. This unified design enables seamless transfer of factual knowledge to downstream reasoning tasks without requiring architectural modifications. Extensive experiments on two Knowledge Graph Question Answering (KGQA) benchmarks, WebQSP and MetaQA, demonstrate that our approach consistently outperforms strong baselines. Further ablation and robustness analyses verify that structural unification is the key factor driving the improvements, while its compatibility with adapter-tuning and LoRA highlights practical applicability under parameter-efficient fine-tuning settings. Overall, our results suggest that enforcing textual structural consistency provides a simple yet effective principle for reliable knowledge infusion in LLMs, with broad potential across diverse knowledge-intensive domains.
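The structural-unification principle described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the template wording and function names are hypothetical, chosen only to show how a knowledge-graph triple and a downstream question can share one textual form, so the structure seen during knowledge infusion matches the structure seen at reasoning time.

```python
# Hypothetical sketch: verbalize KG triples and QA queries with the SAME
# template, so infusion-phase training text and reasoning-phase prompts
# are structurally consistent. Template wording is illustrative only.

def verbalize_triple(head: str, relation: str, tail: str) -> str:
    """Render a knowledge-graph triple as a complete training sentence."""
    return f"Question: what is the {relation} of {head}? Answer: {tail}"

def verbalize_query(head: str, relation: str) -> str:
    """Render a downstream QA query with the identical surface structure,
    leaving the answer slot open for the model to complete."""
    return f"Question: what is the {relation} of {head}? Answer:"

# Infusion phase: the factual triple (Paris, country, France) becomes text.
infusion_text = verbalize_triple("Paris", "country", "France")

# Reasoning phase: the question reuses the same surface form.
query_text = verbalize_query("Paris", "country")

# The query is a strict prefix of the infusion sentence, which is the
# textual consistency the unification strategy aims to enforce.
assert infusion_text.startswith(query_text)
```

Under this design, no architectural change is needed: the same template drives both fine-tuning data construction and inference prompts, which is also why it composes naturally with parameter-efficient methods such as adapters or LoRA.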


Cite this article:
Cheng Z, Cao Y. Prompt-Based Learning for Factual Knowledge Infusion in Large Language Models. International Journal of Crowd Science, 2025, 9(4): 262-268. https://doi.org/10.26599/IJCS.2025.9100014

732 Views | 16 Downloads | 1 Crossref | 1 Scopus

Received: 07 January 2025
Revised: 12 October 2025
Accepted: 13 October 2025
Published: 10 December 2025
© The author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).