ConVQG: Contrastive Visual Question Generation with Multimodal Guidance

École Polytechnique Fédérale de Lausanne (EPFL)
AAAI 2024

*Indicates Equal Contribution



Summary

Asking questions about visual environments is a crucial way for intelligent agents to understand rich multi-faceted scenes, raising the importance of Visual Question Generation (VQG) systems. Generating focused questions that follow textual constraints while remaining highly relevant to the image content is still a challenge, as VQG systems often ignore one or both forms of grounding.

To address this, we propose ConVQG, a Contrastive Visual Question Generation method that

  1. uses two modality-specific contrastive objectives to drive the joint embedding away from single-modality questions;
  2. generates text-guided, image-grounded, and knowledge-enriched questions for images.

Modalities overview

ConVQG at a glance. An image and a text input are processed through a multimodal module, producing the image-text joint embedding. Pre-trained modules produce image-only and text-only question embeddings. A contrastive loss is then optimized to bring the joint embedding close to the real question embedding and keep it far from the single-modality ones. By design, ConVQG generates questions that are image-grounded (in green) and that satisfy the text constraints (in yellow).

Method

Method pipeline

The pipeline of the ConVQG method. During training, an encoder-decoder VQG framework is complemented by two additional branches for image-based question generation (IQGM) and text-based question generation (TQGM) (left). Contrastive losses then discriminate the image-text joint embedding from the single-modality embeddings (right). During inference, only the encoder-decoder framework is active.
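The contrastive objective above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the InfoNCE-style formulation, the cosine similarity, and the temperature value are illustrative assumptions. It pulls the joint embedding toward the ground-truth question embedding and pushes it away from the single-modality question embeddings:

```python
import numpy as np

def contrastive_loss(joint, positive, negatives, temperature=0.1):
    # Illustrative InfoNCE-style loss: the image-text joint embedding
    # (joint) should be close to the ground-truth question embedding
    # (positive) and far from the single-modality question embeddings
    # (negatives, i.e. image-only and text-only).
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(joint, positive) / temperature)
    neg = sum(np.exp(cos(joint, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

# Toy embeddings (random stand-ins for the real encoders)
rng = np.random.default_rng(0)
q_true = rng.normal(size=64)                      # ground-truth question embedding
joint = q_true + 0.05 * rng.normal(size=64)       # joint embedding near the true question
img_only = rng.normal(size=64)                    # image-only question embedding
txt_only = rng.normal(size=64)                    # text-only question embedding

good = contrastive_loss(joint, q_true, [img_only, txt_only])
bad = contrastive_loss(img_only, q_true, [img_only, txt_only])
```

A well-trained joint embedding (close to the real question) yields a much lower loss than a degenerate one that collapses onto a single modality, which is exactly the behavior the two modality-specific objectives enforce.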


Question Generation Results

BibTeX

@inproceedings{mi2024convqg,
      title={ConVQG: Contrastive Visual Question Generation with Multimodal Guidance},
      author={Mi, Li and Montariol, Syrielle and Castillo-Navarro, Javiera and Dai, Xianjie and Bosselut, Antoine and Tuia, Devis},
      year={2024},
      booktitle={AAAI}
}