BiVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval

HiTZ Center - Ixa
University of the Basque Country UPV/EHU

Abstract

Existing Vision-Language Compositionality (VLC) benchmarks such as SUGARCREPE are formulated as image-to-text retrieval problems: given an image, a model must select the correct textual description over a synthetic hard negative text. In this work we present the Bidirectional Vision-Language Compositionality (BiVLC) dataset. The novelty of BiVLC is the addition of a synthetic hard negative image generated from the synthetic text, which yields two image-to-text retrieval examples (one per image) and, more importantly, two text-to-image retrieval examples (one per text). Human annotators filter out ill-formed examples to ensure the validity of the benchmark. Experiments on BiVLC uncover a weakness of current multimodal models: they perform poorly in the text-to-image direction. In fact, when both retrieval directions are considered, the conclusions drawn in previous works change significantly. In addition to the benchmark, we show that a contrastive model trained on synthetic images and texts improves the state of the art on SUGARCREPE and on BiVLC in both retrieval directions. The remaining gap to human performance on BiVLC confirms that Vision-Language Compositionality is still a challenging problem.

BiVLC: Bidirectional Vision-Language Compositionality dataset

BiVLC is a benchmark for Bidirectional Vision-Language Compositionality evaluation. Each instance consists of two images and two captions. Using each image and each caption as a query, a model must select the matching element over a hard negative distractor that differs only by minor compositional changes. This makes it possible to measure both image-to-text and text-to-image retrieval with hard negative pairs. To score well on the dataset, a model must succeed in both directions on the same instance.
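The evaluation above can be sketched as follows. This is a minimal illustration, not the official evaluation script: it assumes an instance is represented by a 2x2 similarity matrix `sim`, where `sim[i][j]` is a model's similarity score between image `i` and caption `j`, with index 0 for the positive image/caption pair and index 1 for the hard negative pair (so image 0 matches caption 0, and the generated image 1 matches the negative caption 1).

```python
def score_instance(sim):
    """Score one BiVLC instance from a 2x2 similarity matrix.

    sim[i][j] = similarity(image_i, caption_j). Index 0 is the
    positive pair, index 1 the hard negative pair (an assumed
    convention for this sketch).
    Returns (i2t, t2i, both) booleans.
    """
    # Image-to-text: each image must prefer its matching caption.
    i2t = sim[0][0] > sim[0][1] and sim[1][1] > sim[1][0]
    # Text-to-image: each caption must prefer its matching image.
    t2i = sim[0][0] > sim[1][0] and sim[1][1] > sim[0][1]
    # The instance counts only if both directions are solved.
    return i2t, t2i, i2t and t2i


# A model that ranks correctly in both directions:
print(score_instance([[0.9, 0.2], [0.1, 0.8]]))  # (True, True, True)
# A model that fails image-to-text but solves text-to-image:
print(score_instance([[0.5, 0.6], [0.1, 0.8]]))  # (False, True, False)
```

Aggregating the third value over all 2,933 instances gives the strictest, instance-level score, which requires a model to be consistent across both retrieval directions.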

Three instances of BiVLC. Bottom row: negative captions and the corresponding images created by us. From left to right, negative captions created by REPLACE, SWAP and ADD.

BiVLC contains 2,933 instances, each consisting of 2 images and 2 captions, yielding 11,732 retrieval examples: 50% text-to-image and 50% image-to-text.

Examples of BiVLC instances

BibTeX

@misc{miranda2024bivlc,
      title={BiVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval}, 
      author={Imanol Miranda and Ander Salaberria and Eneko Agirre and Gorka Azkune},
      year={2024},
      eprint={2406.09952},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}