A Survey on Explainable Recommendation: Methods, Works, and Challenges

Authors

  • Hiếu Trần Nguyễn Minh, Saigon University
  • Lăng Trần Văn
  • Huy Nguyễn Quốc

Abstract

Explainable Recommendation Systems (ERS) not only provide suitable recommendations but also clear explanations that enhance transparency and user trust. This paper surveys key explanation approaches in ERS, including model-based, post-hoc, and user-centric methods, and analyzes representative studies applying SHAP, LIME, PEPLER-D, GaVaMoE, and G-Refer. The survey highlights several critical challenges: limited capability to model complex user preferences, high computational costs when using LLMs, hallucination in generated explanations, a lack of standardized datasets and quantitative evaluation metrics, and potential risks to user data privacy. To address these issues, potential future directions are proposed, including optimizing computational cost and scalability, ensuring explanation consistency and quality, personalizing explanations for individual users, integrating multiple explanation methods for comprehensiveness, and developing privacy-preserving and ethical mechanisms for explainable recommendation systems. This study provides a systematic overview and offers directions for future research to improve the quality and practical applicability of ERS across various domains.

Published

26-01-2026

How to Cite

Trần Nguyễn Minh, H., Trần Văn, L., & Nguyễn Quốc, H. (2026). A Survey on explainable recommendation: methods, works, and challenges. HUFLIT Journal of Science, 9(4), 1–20. Retrieved from https://hjs.huflit.edu.vn/index.php/hjs/article/view/294

Section

Science and Technology
