Serial-Autoencoder for Personalized Recommendation

19/09/2024 Frontiers Journals

In the last decade, auxiliary information has been widely used to address data sparsity. Thanks to their strength in feature extraction and their lack of need for labels, autoencoder-based methods that incorporate auxiliary information have become quite popular. However, most existing autoencoder-based methods discard the reconstruction of the auxiliary information, which hinders better representation learning and limits model scalability.
To address these problems, a research team led by Zhu YI published new research on 15 August 2024 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team proposed a novel representation learning method based on serial autoencoders for personalized recommendation. They propose to retain the reconstructed auxiliary information in the decoding layer, which further enhances the reconstruction of the rating information. Furthermore, because the reconstructed rating information is influenced by the features of the reconstructed auxiliary information, they propose a serially connected autoencoder structure that learns a higher-level and more robust feature representation of the predicted rating information.
In this work, a traditional autoencoder is first used to reconstruct the representations of both the rating information and the auxiliary information. The reconstructed output preserves the auxiliary information, which helps to better reconstruct the rating part. To extract more powerful and robust feature representations from the rating matrix, a serial autoencoder structure is then introduced to improve recommendation performance.
First, an autoencoder extracts higher-level features from an item's rating vector and its auxiliary information; its reconstructed output consists of two parts, the reconstructed rating information and the reconstructed auxiliary information. Second, a second autoencoder enhances the data representation of the reconstructed rating matrix, which alleviates the loss of key feature information during the first reconstruction. Finally, the output of the second autoencoder is used as the model's recommendation prediction.
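The processing flow above can be sketched in code. The following PyTorch snippet is only an illustration under assumed names and sizes (JointAutoencoder, RatingAutoencoder, the hidden dimensions, and the composite loss are not taken from the paper); it shows how two autoencoders can be connected in series so that the second refines the rating part reconstructed by the first.

import torch
import torch.nn as nn

class JointAutoencoder(nn.Module):
    """Stage one: encodes an item's rating vector together with its auxiliary
    features and reconstructs BOTH parts, so auxiliary information is retained."""
    def __init__(self, n_ratings, n_aux, hidden_dim=128):
        super().__init__()
        in_dim = n_ratings + n_aux
        self.n_ratings = n_ratings
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, ratings, aux):
        recon = self.decoder(self.encoder(torch.cat([ratings, aux], dim=-1)))
        # Split the reconstruction back into its rating and auxiliary parts.
        return recon[..., :self.n_ratings], recon[..., self.n_ratings:]

class RatingAutoencoder(nn.Module):
    """Stage two: re-encodes the reconstructed rating vector to recover key
    features lost in the first reconstruction; its output is the prediction."""
    def __init__(self, n_ratings, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_ratings, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, n_ratings)

    def forward(self, rating_recon):
        return self.decoder(self.encoder(rating_recon))

class SerialAutoencoder(nn.Module):
    """Serially connects the two autoencoders, as in the processing flow above."""
    def __init__(self, n_ratings, n_aux):
        super().__init__()
        self.stage1 = JointAutoencoder(n_ratings, n_aux)
        self.stage2 = RatingAutoencoder(n_ratings)

    def forward(self, ratings, aux):
        rating_recon, aux_recon = self.stage1(ratings, aux)
        prediction = self.stage2(rating_recon)
        return prediction, rating_recon, aux_recon

def masked_mse(pred, target, mask):
    # Rating losses are computed only on observed entries (mask = 1).
    return ((pred - target) * mask).pow(2).sum() / mask.sum().clamp(min=1)

# Illustrative usage with random tensors (shapes only, not the paper's data).
model = SerialAutoencoder(n_ratings=1000, n_aux=50)
ratings = torch.rand(32, 1000)           # item rating vectors
aux = torch.rand(32, 50)                 # item auxiliary features
mask = (ratings > 0.5).float()           # observed-entry mask (assumed)
pred, rating_recon, aux_recon = model(ratings, aux)
# Assumed composite objective: both reconstructions plus the final prediction.
loss = (masked_mse(rating_recon, ratings, mask)
        + (aux_recon - aux).pow(2).mean()
        + masked_mse(pred, ratings, mask))
loss.backward()

In this sketch the second autoencoder only ever sees the rating part reconstructed by the first stage, which is the serial connection described above; the auxiliary reconstruction is kept as an additional training signal rather than discarded.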
Future work will focus on introducing other deep learning models to mine additional feature information about users and items through joint learning, and on combining multiple sources of auxiliary information with self-attention mechanisms to further improve recommendation performance.
DOI: 10.1007/s11704-023-2441-1
Attachments
  • The processing flow of our proposed method
Regions: Asia, China
Keywords: Applied science, Computing
