Do Vision Encoders Truly Explain Object Hallucination? Mitigating Object Hallucination via Simple Fine-Grained CLIPScore

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, Large Vision-Language Models (LVLMs) have shown remarkable performance across various domains. However, these models suffer from object hallucination. In this work, we study object hallucination primarily in a discriminative, retrieval-style evaluation setting (OHD-Caps), rather than in free-form caption generation. This study revisits the previous claim that the cause of such hallucinations lies in the limited representational capacity of the vision encoder. Our analysis implies that the capacity of the vision encoder is not necessarily a major limiting factor in detecting object hallucination. Based on this insight, we propose Fine-grained CLIPScore (F-CLIPScore), a simple yet effective evaluation metric that enhances object-level granularity by incorporating text embeddings at the noun level. Evaluations on the OHD-Caps benchmark show that F-CLIPScore outperforms conventional CLIPScore in accuracy by a large margin of 39.6% without additional training. We further demonstrate that F-CLIPScore-based data filtering reduces object hallucination in LVLMs (a 4.9% gain in POPE accuracy after alignment pretraining).
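The abstract describes F-CLIPScore as augmenting CLIPScore with noun-level text embeddings but does not spell out the aggregation. Below is a minimal, hypothetical sketch of that idea, assuming the standard CLIPScore form (a scaled, clipped cosine similarity) and assuming noun-level scores are simply averaged with the sentence-level score; the exact aggregation used in the paper may differ.

```python
import numpy as np

def _cos(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def clipscore(image_emb, text_emb, w=2.5):
    """Standard CLIPScore: w * max(cos(image, text), 0)."""
    return w * max(_cos(image_emb, text_emb), 0.0)

def f_clipscore(image_emb, caption_emb, noun_embs, w=2.5):
    """Hypothetical F-CLIPScore sketch: average the sentence-level
    score with the mean of per-noun scores, so that a hallucinated
    object (a noun poorly matching the image) drags the score down.

    image_emb:   image embedding from a CLIP-style encoder
    caption_emb: embedding of the full caption
    noun_embs:   embeddings of each noun extracted from the caption
                 (e.g. via a POS tagger; extraction not shown here)
    """
    sent_score = clipscore(image_emb, caption_emb, w)
    if not noun_embs:
        return sent_score
    noun_score = float(np.mean([clipscore(image_emb, e, w) for e in noun_embs]))
    return 0.5 * (sent_score + noun_score)
```

Under this sketch, a caption whose nouns all match the image keeps its full score, while a caption containing one mismatched (hallucinated) noun is penalized through the noun-level average, with no additional training required.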

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2026 January
State: Published - 2026

