Communication published in a book (Colloquia, congresses, scientific conferences and proceedings)
Together Yet Apart: Multimodal Representation Learning for Personalised Visual Art Recommendation
YILMA, Bereket Abera; LEIVA, Luis A.
2023. In YILMA, Bereket Abera; LEIVA, Luis A. (Eds.), Proceedings of the ACM Conference on User Modeling, Adaptation and Personalization (UMAP 2023)
Peer reviewed
 

Documents


Full text
Multimodal_VA_RecSys.pdf
Publisher postprint (7.76 MB)

All documents in ORBilu are protected by a user licence.




Details



Keywords:
Recommendation systems; Personalization; Machine Learning; Multimodal Representation Learning; Visual Arts
Abstract:
[en] With the advent of digital media, the availability of art content has greatly expanded, making it increasingly challenging for individuals to discover and curate works that align with their personal preferences and taste. The task of providing accurate and personalised Visual Art (VA) recommendations is thus a complex one, requiring a deep understanding of the intricate interplay of multiple modalities such as images, textual descriptions, or other metadata. In this paper, we study the nuances of the modalities involved in the VA domain (image and text) and how they can be effectively harnessed to provide a truly personalised art experience to users. In particular, we develop four fusion-based multimodal VA recommendation pipelines and conduct a large-scale user-centric evaluation. Our results indicate that early fusion (i.e., joint multimodal learning of visual and textual features) is preferred over a late fusion of ranked paintings from unimodal models (state-of-the-art baselines), but only if the latent representation space of the multimodal painting embeddings is entangled. Our findings open a new perspective for better representation learning in the VA RecSys domain.
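As an illustration of the two fusion strategies contrasted in the abstract, the minimal Python sketch below scores paintings either in a joint image+text embedding space (early fusion) or by merging per-modality rankings (late fusion). The feature dimensions, the averaged user profile, cosine-similarity scoring, and the reciprocal-rank-fusion merge rule are assumptions made for this example, not the authors' implementation.

    # Illustrative sketch (not the authors' code) of early vs. late fusion
    # for personalised visual art recommendation.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical precomputed unimodal features for a small painting collection.
    n_paintings, d_img, d_txt = 100, 512, 384
    img_feats = rng.normal(size=(n_paintings, d_img))  # e.g. from a vision encoder
    txt_feats = rng.normal(size=(n_paintings, d_txt))  # e.g. from a text encoder

    def l2_normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    def user_profile(feats, liked_ids):
        # Average the embeddings of paintings the user rated positively.
        return l2_normalize(feats[liked_ids].mean(axis=0))

    liked = [3, 17, 42]  # indices of paintings the user liked

    # Early fusion: one joint (concatenated) multimodal embedding space.
    joint_feats = l2_normalize(np.concatenate([img_feats, txt_feats], axis=1))
    early_scores = joint_feats @ user_profile(joint_feats, liked)
    early_ranking = np.argsort(-early_scores)

    # Late fusion: rank per modality, then merge the two ranked lists.
    def unimodal_ranking(feats):
        feats = l2_normalize(feats)
        return np.argsort(-(feats @ user_profile(feats, liked)))

    def reciprocal_rank_fusion(rankings, k=60):
        scores = np.zeros(n_paintings)
        for ranking in rankings:
            for rank, item in enumerate(ranking):
                scores[item] += 1.0 / (k + rank + 1)
        return np.argsort(-scores)

    late_ranking = reciprocal_rank_fusion([unimodal_ranking(img_feats),
                                           unimodal_ranking(txt_feats)])

    print("Early-fusion top 5:", early_ranking[:5])
    print("Late-fusion  top 5:", late_ranking[:5])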
Disciplines:
Computer science
Author, co-author:
YILMA, Bereket Abera  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
LEIVA, Luis A.  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
External co-authors:
no
Document language:
English
Title:
Together Yet Apart: Multimodal Representation Learning for Personalised Visual Art Recommendation
Publication/release date:
26 June 2023
Event name:
ACM Conference on User Modeling, Adaptation and Personalization (UMAP 2023)
Event organizer:
ACM
Event location:
Limassol, Cyprus
Event date:
26-06-2023
Title of the main work:
Proceedings of the ACM Conference on User Modeling, Adaptation and Personalization (UMAP 2023)
Peer reviewed:
Peer reviewed
Focus Area:
Computational Sciences
European project:
HE - 101071147 - SYMBIOTIK - Context-aware adaptive visualizations for critical decision making
FNR project:
FNR15722813 - Brainsourcing For Affective Attention Estimation, 2021 (01/02/2022-31/01/2025) - Luis Leiva
Funding body:
EC - European Commission
European Union
Available on ORBilu:
since 04 May 2023

Statistics


Number of views
367 (including 51 Unilu)
Number of downloads
239 (including 26 Unilu)

Scopus® citations
12
Scopus® citations (excluding self-citations)
4
OpenAlex citations
9
