Towards Multimodal Vision-Language Models Generating Non-generic Text

Wes J Robbins

[AAAI-22] Undergraduate Consortium
Abstract: While text generated by current vision-language models may be accurate and syntactically correct, it is often generic. Recent work has used optical character recognition to supplement visual information with text extracted from an image; in many cases, using text found in the image improves the specificity and usefulness of the generated text. We contend that vision-language models can benefit from additional information extracted from an image. We modify previous multimodal frameworks to accept relevant information from a number of auxiliary classifiers. In particular, we focus on person names as an additional set of tokens and create a novel image-caption dataset, Politicians and Athletes in Captions (PAC), to facilitate captioning with person names. PAC consists of captioned images of well-known people in context. By fine-tuning pretrained models on this limited dataset, we demonstrate a model that naturally integrates facial recognition tokens into generated text.

Sessions where this paper appears

  • Poster Session 3

    Red 4

  • Poster Session 6

    Red 4