Asking questions about visual environments is a crucial way for intelligent agents to understand rich, multi-faceted scenes, raising the importance of Visual Question Generation (VQG) systems. Generating focused questions using textual constraints while enforcing high relevance to the image content remains a challenge, as VQG systems often ignore one or both forms of grounding.
To address this, we propose ConVQG, a Contrastive Visual Question Generation method that uses contrastive learning to generate questions grounded in both the image content and a textual constraint.
ConVQG at a glance. An image and a text input are processed through a multimodal module, producing an image-text joint embedding. Pre-trained modules produce image-only and text-only question embeddings. A contrastive loss is then optimized to bring the joint embedding close to the real question embedding and push it away from the single-modality ones. By design, ConVQG generates questions that are image-grounded (in green) and that meet the requirements of the text constraint (in yellow).
The pipeline of the ConVQG method. During training, an encoder-decoder VQG framework is complemented by two additional branches for image-based question generation (IQGM) and text-based question generation (TQGM) (left part). Contrastive losses then discriminate the image-text joint embedding from the single-modality ones (right part). During inference, only the encoder-decoder framework is active.
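To illustrate the contrastive objective described above, here is a minimal NumPy sketch. It is not the authors' implementation: the hinge formulation, cosine similarity, and margin value are assumptions chosen for clarity. It pulls the joint embedding toward the real-question embedding while pushing it away from the single-modality (image-only and text-only) question embeddings.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(joint, real_question, single_modality, margin=0.2):
    """Hinge-style contrastive loss (illustrative, margin is an assumption):
    the image-text joint embedding should be more similar to the real
    question embedding than to each single-modality question embedding,
    by at least `margin`."""
    pos_sim = cosine(joint, real_question)
    loss = 0.0
    for neg in single_modality:
        loss += max(0.0, margin - pos_sim + cosine(joint, neg))
    return loss

# Toy example with random embeddings (dimension 64 is arbitrary).
rng = np.random.default_rng(0)
joint = rng.normal(size=64)
real_q = joint + 0.1 * rng.normal(size=64)   # close to the joint embedding
image_only_q = rng.normal(size=64)           # image-only question embedding
text_only_q = rng.normal(size=64)            # text-only question embedding
loss = contrastive_loss(joint, real_q, [image_only_q, text_only_q])
```

Minimizing this loss during training is what encourages the generated question to reflect both modalities rather than either one alone; at inference, only the encoder-decoder is used, so the branches add no test-time cost.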
@inproceedings{mi2024convqg,
title={ConVQG: Contrastive Visual Question Generation with Multimodal Guidance},
author={Mi, Li and Montariol, Syrielle and Castillo-Navarro, Javiera and Dai, Xianjie and Bosselut, Antoine and Tuia, Devis},
year={2024},
booktitle={AAAI}
}