Revisiting Positive and Negative Samples in Variational Autoencoders for Top-N Recommendation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
  • Wei Liu, University of Macau & Sun Yat-sen University & Guangdong Key Laboratory of Big Data Analysis and Processing
  • Leong Hou U, University of Macau
  • Shangsong Liang, Sun Yat-Sen University & Guangdong Key Laboratory of Big Data Analysis and Processing & Mohamed bin Zayed University of Artificial Intelligence
  • Huaijie Zhu, Sun Yat-Sen University & Guangdong Key Laboratory of Big Data Analysis and Processing
  • Jianxing Yu, Sun Yat-Sen University & Guangdong Key Laboratory of Big Data Analysis and Processing
  • Yubao Liu, Sun Yat-Sen University & Guangdong Key Laboratory of Big Data Analysis and Processing
  • Jian Yin, Sun Yat-Sen University & Guangdong Key Laboratory of Big Data Analysis and Processing
Document Type
Conference Proceeding
Abstract

Top-N recommendation is a common tool for discovering interesting items; it ranks items according to user preferences inferred from interaction history. Recommender systems often rely on implicit feedback because explicit preferences are difficult to collect. Recent solutions simply treat all of a user's interacted items as equally important positives and label all non-interacted items as negatives. We argue that this annotation scheme for implicit feedback is over-simplified, since the feedback data are sparse and lack fine-grained labels. To overcome this issue, we revisit the so-called positive and negative samples for Variational Autoencoders (VAEs). Based on our analysis and observations, we propose a self-adjusting credibility weight mechanism to re-weight the positive samples, and we exploit higher-order relations in the item-item matrix to sample critical negative samples. In addition, we abandon complex nonlinear structures and develop a simple yet effective VAE framework with a linear structure that combines the reconstruction losses for the positive samples and the critical negative samples. Extensive experiments on four public real-world datasets demonstrate that our VAE++ outperforms other VAE-based models by a large margin.
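
The abstract only outlines the two sampling ideas, so the sketch below illustrates one plausible reading of them on an implicit-feedback matrix: a popularity-normalised "credibility" weight for positives and critical negatives drawn from a higher-order item-item co-occurrence matrix. The function names, the square-root weighting, and the 2-hop relation G^2 are illustrative assumptions, not the authors' actual VAE++ implementation.

```python
# Minimal NumPy sketch of the sampling ideas described in the abstract.
# All names and formulas are illustrative assumptions, not the paper's code.
import numpy as np

def credibility_weights(X, eps=1e-8):
    """Re-weight positives: assumed here to be a popularity-normalised weight,
    so positives of very active users / very popular items count less
    (one plausible reading of the 'self-adjusting credibility weight')."""
    user_activity = X.sum(axis=1, keepdims=True)    # interactions per user
    item_popularity = X.sum(axis=0, keepdims=True)  # interactions per item
    return X / np.sqrt(user_activity * item_popularity + eps)

def critical_negatives(X, user, k=5):
    """Pick 'critical' negatives for one user: non-interacted items strongly
    connected to the user's positives through a higher-order item-item
    relation (here G = X^T X squared, i.e. 2-hop co-occurrence paths)."""
    G = X.T @ X                      # item-item co-occurrence matrix
    G2 = G @ G                       # higher-order (2-hop) relation
    scores = X[user] @ G2            # relevance of every item to this user
    scores[X[user] > 0] = -np.inf    # exclude already-interacted items
    ranked = np.argsort(scores)      # ascending; excluded items come first
    return ranked[::-1][:k]          # top-k hardest non-interacted items

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = (rng.random((100, 50)) < 0.1).astype(float)   # toy implicit feedback
    print(credibility_weights(X)[0, X[0] > 0])        # weights of user 0's positives
    print(critical_negatives(X, user=0))              # 5 critical negatives for user 0
```

In the paper these weighted positives and sampled negatives feed a linear VAE whose reconstruction loss covers both sets; the sketch stops at the sampling step.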

DOI
10.1007/978-3-031-30672-3_38
Publication Date
4-1-2023
Keywords
  • Collaborative Filtering
  • Implicit Feedback
  • Recommendation
  • Variational Autoencoders
Citation Information
W. Liu et al. "Revisiting Positive and Negative Samples in Variational Autoencoders for Top-N Recommendation", in Database Systems for Advanced Applications (DASFAA 2023), Lecture Notes in Computer Science, vol 13944, pp. 563-573, Apr 2023. doi:10.1007/978-3-031-30672-3_38