Abstract
Explainable recommendation has garnered significant research interest for its transparency. Recent advances employ Large Language Models (LLMs) through two primary approaches: LLM-auxiliary models, which improve accuracy but rely on conventional recommendation models and produce limited explanations, and LLM-based models, which employ fine-tuning or GraphRAG for explainability but face limitations in domain adaptability and neglect intrinsic signals such as item weights. Moreover, existing methods struggle with multi-scenario generalization. To address these challenges, we propose FT-HashRAG, a universal explainable recommendation framework that integrates Hash Retrieval-Augmented Generation (HashRAG) with fine-tuning. HashRAG constructs user-specific blocks from intrinsic signals (e.g., item weights) and extrinsic signals (e.g., item descriptions) to generate fine-grained explanations. Integrating HashRAG with fine-tuning enhances the LLM's capacity to synthesize retrieved information effectively. We curate a multi-scenario instruction dataset covering four explainable recommendation scenarios: item weights, item attributes, popularity bias, and interaction paths, and train the LLM via supervised fine-tuning (SFT) and direct preference optimization (DPO). Experiments show that FT-HashRAG significantly outperforms state-of-the-art baselines. We open-source our model at https://huggingface.co/tczzx6/FT-HashRAG.