Open Access | Just Accepted

FT-HashRAG: Combining Hash Retrieval-Augmented Generation with Fine-tuning for Universal Explainable Recommendation

Zixuan Zhang, Bowen Hao, Yurui Wang, Xiang Wei, Qimeng Niu, Yuxuan Li

School of Management, Capital Normal University, Beijing 100048, China


Abstract

Explainable recommendation has garnered significant research interest for its transparency. Recent advances employ Large Language Models (LLMs) through two primary approaches: LLM-auxiliary models, which improve accuracy but rely on conventional recommendation models and produce limited explanations, and LLM-based models, which employ fine-tuning or GraphRAG for explainability, yet face limitations in domain adaptability and neglect intrinsic signals such as item weights. Moreover, existing methods struggle with multi-scenario generalization. To address these challenges, we propose FT-HashRAG, a universal explainable recommendation framework that integrates Hash Retrieval-Augmented Generation (HashRAG) with fine-tuning. HashRAG constructs user-specific blocks from intrinsic signals (e.g., item weights) and extrinsic signals (e.g., item descriptions) to generate fine-grained explanations. By integrating HashRAG with fine-tuning, FT-HashRAG enhances the LLM’s capacity to synthesize retrieved information effectively. We curate a multi-scenario instruction dataset covering four explainable recommendation scenarios: item weights, item attributes, popularity bias, and interaction paths, and train the LLM via supervised fine-tuning (SFT) and direct preference optimization (DPO). Experiments show that FT-HashRAG significantly outperforms state-of-the-art baselines. We open-source our model at https://huggingface.co/tczzx6/FT-HashRAG.
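The abstract describes retrieving user-specific evidence blocks built from intrinsic signals (item weights) and extrinsic signals (item descriptions) before prompting the LLM. The paper's actual HashRAG index is not detailed here, so the following is only a minimal illustrative sketch of the general idea: hash-bucketed storage of per-user evidence blocks and assembly of a retrieval-augmented explanation prompt. All names (`HashStore`, `build_prompt`, the toy records) are hypothetical and not from the paper.

```python
import hashlib

def bucket_of(key: str, num_buckets: int = 8) -> int:
    """Map a record key to a bucket via a stable hash (MD5 here for illustration)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

class HashStore:
    """Toy hash-indexed store of per-user evidence blocks (not the paper's implementation)."""
    def __init__(self, num_buckets: int = 8):
        self.num_buckets = num_buckets
        self.buckets = {i: [] for i in range(num_buckets)}

    def add_block(self, user_id: str, block: dict) -> None:
        # A block mixes an intrinsic signal (weight) with an extrinsic one (description).
        self.buckets[bucket_of(user_id, self.num_buckets)].append((user_id, block))

    def retrieve(self, user_id: str) -> list:
        # Constant-time bucket lookup, then filter to this user's blocks.
        b = bucket_of(user_id, self.num_buckets)
        return [blk for uid, blk in self.buckets[b] if uid == user_id]

def build_prompt(user_id: str, store: HashStore) -> str:
    """Assemble retrieved blocks, highest-weight first, into an explanation prompt."""
    lines = [f"Explain the recommendation for user {user_id}:"]
    for blk in sorted(store.retrieve(user_id), key=lambda b: -b["weight"]):
        lines.append(f"- {blk['item']} (weight={blk['weight']}): {blk['desc']}")
    return "\n".join(lines)

store = HashStore()
store.add_block("u1", {"item": "movie_42", "weight": 0.9, "desc": "sci-fi thriller"})
store.add_block("u1", {"item": "movie_7", "weight": 0.4, "desc": "romantic comedy"})
store.add_block("u2", {"item": "book_3", "weight": 0.8, "desc": "mystery novel"})

prompt = build_prompt("u1", store)
```

In this sketch the prompt would then be passed to a fine-tuned LLM; weight-ordering stands in for the fine-grained use of intrinsic signals that the abstract attributes to HashRAG.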

Tsinghua Science and Technology
Cite this article:
Zhang Z, Hao B, Wang Y, et al. FT-HashRAG: Combining Hash Retrieval-Augmented Generation with Fine-tuning for Universal Explainable Recommendation. Tsinghua Science and Technology, 2025, https://doi.org/10.26599/TST.2025.9010106


Received: 25 April 2025
Revised: 12 June 2025
Accepted: 20 June 2025
Available online: 01 July 2025

© The author(s) 2025

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
