Multi-modal recommender system using text-to-image generative models and adaptive learning

Seongmin Kim, Seona Moon, Yeongseo Lim, Sang Min Choi, Sang Ki Ko

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, various successful approaches have been developed to enhance the performance of recommender systems by incorporating multi-modal data, such as item images and textual descriptions. However, adopting these algorithms in real-world scenarios is challenging, as images or textual descriptions are often unavailable. Moreover, in some cases, the provided images or descriptions may not accurately represent the item. We refer to such situations as missing data. In the fashion domain, visual information is crucial, as people are unlikely to buy clothing without seeing its design and appearance. Thus, we propose employing a text-to-image Generative Adversarial Network (GAN) to generate missing visual data from available textual descriptions, enabling a multi-modal recommender system that leverages both visual and textual information. We also introduce an adaptive feature importance learning mechanism to dynamically determine the weight of each multi-modal feature when calculating the preference score. We demonstrate the effectiveness of the proposed algorithm through extensive experiments on the publicly available Amazon review dataset.
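The adaptive feature-importance mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `adaptive_preference`, the use of a softmax over learnable logits, and plain dot-product similarities are all assumptions made for clarity.

```python
import math

def adaptive_preference(user_vecs, item_vecs, logits):
    """Toy sketch of adaptive feature-importance weighting.

    user_vecs, item_vecs: per-modality embeddings, e.g.
    [visual, textual]; logits: learnable importance logits,
    one per modality (hypothetical parameterization).
    """
    # Softmax turns the raw logits into modality weights that sum to 1.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Dot-product similarity per modality, then a weighted sum
    # gives the final preference score.
    sims = [sum(a * b for a, b in zip(u, v))
            for u, v in zip(user_vecs, item_vecs)]
    return sum(w * s for w, s in zip(weights, sims))

# Toy usage: two modalities with 2-D embeddings, equal logits.
u = [[1.0, 0.0], [0.0, 1.0]]   # user's visual / textual embeddings
v = [[1.0, 0.0], [1.0, 0.0]]   # item's visual / textual embeddings
print(adaptive_preference(u, v, [0.0, 0.0]))  # → 0.5
```

With equal logits each modality contributes equally; training the logits would shift weight toward whichever modality (e.g. the generated image features) better predicts user preference.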

Original language: English
Article number: 129086
Journal: Expert Systems with Applications
Volume: 296
State: Published - 15 Jan 2026

Keywords

  • Adaptive learning
  • Generative artificial intelligence
  • Multi-modal data
  • Recommender systems
