Personalized Tag Recommendation via Denoising Auto-Encoder

Weibin Zhao, Lin Shang, Yonghong Yu, Li Zhang, Can Wang, Jiajun Chen

Research output: Contribution to journal › Article › peer-review



Personalized tag recommender systems automatically recommend a set of tags that a user can use to annotate items, based on the user's past tagging behaviour. Learning the representations of the involved entities (i.e. users, items, and tags) and capturing the complex relationships among them are both crucial for personalized tag recommendation. However, few studies have pursued these two sub-goals simultaneously. In this research, we propose a novel personalized tag recommendation model based on the denoising auto-encoder, namely DAE-PTR, which learns entity representations and encodes the complex relationships among entities within a denoising auto-encoder framework. Specifically, for each user, we first generate a corrupted version of the respective tagging information by applying multiplicative mask-out/drop-out noise to the original input. Then, we learn latent representations from the corrupted input via the auto-encoder framework, trained with a cross-entropy loss. More importantly, we integrate the latent user and item embeddings into the encoding process, so that the hidden representations learnt by the auto-encoder network capture multiple types of relationships among entities: between users and tags, between items and tags, and among tags themselves. Finally, we employ the decoder component to reconstruct the original input from the learnt latent representations. Experimental results on real-world datasets show that our proposed DAE-PTR model outperforms traditional personalized tag recommendation models.
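The pipeline described above (corrupt the tag vector, encode it together with user and item embeddings, decode, and score the reconstruction with a cross-entropy loss) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, the weight initialisation, and the exact way the embeddings enter the encoder are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
n_tags, d_hidden, d_embed = 50, 16, 8

def corrupt(x, drop_prob=0.3):
    """Multiplicative mask-out/drop-out noise: each entry of the binary
    tag vector is zeroed with probability drop_prob."""
    mask = rng.random(x.shape) >= drop_prob
    return x * mask

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy parameters, randomly initialised; a real model would learn these
# by back-propagating the reconstruction loss.
W_enc = rng.normal(scale=0.1, size=(d_hidden, n_tags))
W_user = rng.normal(scale=0.1, size=(d_hidden, d_embed))
W_item = rng.normal(scale=0.1, size=(d_hidden, d_embed))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(scale=0.1, size=(n_tags, d_hidden))
b_dec = np.zeros(n_tags)

def forward(x_tags, u_embed, i_embed):
    """Encode the corrupted tag vector together with the user and item
    embeddings (one assumed way to 'integrate them into the encoding'),
    then decode back to reconstruction scores over all tags."""
    x_tilde = corrupt(x_tags)
    h = sigmoid(W_enc @ x_tilde + W_user @ u_embed + W_item @ i_embed + b_enc)
    return sigmoid(W_dec @ h + b_dec)

def cross_entropy(x, x_hat, eps=1e-9):
    """Binary cross-entropy reconstruction loss over the tag vocabulary."""
    return -np.mean(x * np.log(x_hat + eps)
                    + (1 - x) * np.log(1 - x_hat + eps))

# Example: one user-item pair with three observed tags.
x = np.zeros(n_tags)
x[[3, 7, 21]] = 1.0
u = rng.normal(size=d_embed)  # latent user embedding (assumed given here)
i = rng.normal(size=d_embed)  # latent item embedding (assumed given here)
x_hat = forward(x, u, i)
loss = cross_entropy(x, x_hat)
```

At recommendation time, one would rank the unobserved tags by their reconstructed scores in `x_hat` and suggest the top-k to the user.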
Original language: English
Pages (from-to): 95-114
Number of pages: 20
Journal: World Wide Web - Internet and Web Information Systems
Publication status: Published - 20 Dec 2021


  • Auto-Encoder
  • Personalized Tag Recommendation
  • Deep Learning
