Large Language Models (LLMs) are increasingly employed in knowledge-intensive tasks but often struggle to effectively apply infused knowledge due to textual-structure mismatches between the infusion and reasoning phases. To address this issue, we propose a prompt-based unification strategy that directly learns from factual triples in knowledge graphs while preserving structural consistency across both phases. This unified design enables seamless transfer of factual knowledge to downstream reasoning tasks without requiring architectural modifications. Extensive experiments on two Knowledge Graph Question Answering (KGQA) benchmarks, WebQSP and MetaQA, demonstrate that our approach consistently outperforms strong baselines. Further ablation and robustness analyses verify that structural unification is the key factor driving the improvements, while its compatibility with adapter-tuning and LoRA highlights practical applicability under parameter-efficient fine-tuning settings. Overall, our results suggest that enforcing textual structural consistency provides a simple yet effective principle for reliable knowledge infusion in LLMs, with broad potential across diverse knowledge-intensive domains.
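To make the unification idea concrete, the following Python sketch illustrates one plausible way to render knowledge-graph triples with a single shared template in both the infusion (training) phase and the reasoning (inference) phase, so the model never faces a format mismatch. The template wording, special tokens, and function names here are illustrative assumptions, not the authors' actual prompts.

```python
# Minimal sketch of structural unification across phases (assumed template,
# not the paper's actual prompt design).

TRIPLE_TEMPLATE = "{head} [SEP] {relation} [SEP] {tail}"


def infusion_example(head: str, relation: str, tail: str) -> tuple[str, str]:
    """Infusion-phase training pair: the model learns to fill in the
    masked tail entity of a factual triple."""
    prompt = TRIPLE_TEMPLATE.format(head=head, relation=relation, tail="[MASK]")
    return prompt, tail  # (input prompt, target completion)


def reasoning_prompt(question: str, head: str, relation: str) -> str:
    """Reasoning-phase prompt for KGQA: the supporting fact is rendered
    with the *same* triple template used during infusion."""
    fact = TRIPLE_TEMPLATE.format(head=head, relation=relation, tail="[MASK]")
    return f"Question: {question}\nFact: {fact}\nAnswer:"


if __name__ == "__main__":
    x, y = infusion_example("Barack Obama", "born_in", "Honolulu")
    print(x, "->", y)
    print(reasoning_prompt("Where was Barack Obama born?", "Barack Obama", "born_in"))
```

Because both phases share `TRIPLE_TEMPLATE`, the infused factual knowledge surfaces at inference time in the exact textual form the model was trained on, which is the consistency property the abstract credits for the reported gains.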
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).