COSTA, J. B. G.; http://lattes.cnpq.br/5304058030766012; COSTA, Júlio Barreto Guedes da.
Abstract:
Recommender Systems (RSs) constitute a field of research and application whose goal is to
retrieve items relevant to a user. Since the Netflix Prize open challenge for performance
improvement in RSs, these systems have commonly been built by representing users and items as
latent factors, better known as embeddings, which are typically randomly initialized and
updated during the training stages. In the broader Machine Learning (ML) field,
several application areas have obtained performance improvements through Transfer Learning,
such as the boost in Computer Vision (CV) tasks after the proposal of models
like AlexNet and VGG, or the one achieved in Natural Language Processing (NLP) tasks,
especially after the popularization of Large Language Models (LLMs) such as the BERT and,
more recently, the GPT model families. Unlike these application areas, however, Transfer
Learning for RSs is not trivial, since the entities are users and items, whereas in CV and NLP
the entities are images and words, respectively. This research aims to study possible applications
of Transfer Learning for RSs, evaluating how unsupervised, self-supervised, and supervised
embedding initialization strategies impact the predictive performance of the models.
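The contrast between random and informed embedding initialization can be illustrated with a minimal sketch. The toy rating matrix, the truncated-SVD "pretrained" stand-in, and all hyperparameters below are illustrative assumptions, not the method or data of this research; the sketch only shows the mechanics of swapping the initialization of a simple matrix-factorization recommender.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix: 4 users x 5 items (0 = unobserved). Purely illustrative.
R = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 1],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)
observed = R > 0
k = 3  # embedding dimension

def train_mf(P, Q, lr=0.01, reg=0.02, epochs=200):
    """SGD matrix factorization; P and Q hold user and item embeddings."""
    P, Q = P.copy(), Q.copy()
    for _ in range(epochs):
        for u, i in zip(*np.nonzero(observed)):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def rmse(P, Q):
    pred = P @ Q.T
    return np.sqrt(np.mean((R[observed] - pred[observed]) ** 2))

# Random initialization: the common default described in the abstract.
P0 = rng.normal(scale=0.1, size=(4, k))
Q0 = rng.normal(scale=0.1, size=(5, k))
P_rand, Q_rand = train_mf(P0, Q0)

# Informed initialization: a truncated SVD of the mean-filled matrix
# stands in for embeddings transferred from a pretraining stage.
filled = np.where(observed, R, R[observed].mean())
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
P_pre, Q_pre = train_mf(U[:, :k] * np.sqrt(s[:k]), Vt[:k].T * np.sqrt(s[:k]))

print("random init RMSE:    ", round(rmse(P_rand, Q_rand), 3))
print("informed init RMSE:  ", round(rmse(P_pre, Q_pre), 3))
```

Only the initial values of `P` and `Q` differ between the two runs; the training loop is identical, which is what makes initialization a clean axis along which to compare unsupervised, self-supervised, and supervised strategies.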