
# DenseCapBert

Modern VQA models are easily affected by language priors: they ignore image information and learn superficial relationships between questions and answers, a problem that persists even in strong pre-trained models. The main reason is that visual information is not fully extracted and utilized. We propose to extract dense captions from images to enhance the visual information available for reasoning and to use them to bridge the gap between vision and language.
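As an illustration of the idea (a minimal sketch, not the authors' released code), dense captions can be serialized alongside the question into a single BERT-style input sequence. The helper name `build_vqa_input` and the `[CLS]`/`[SEP]` concatenation scheme below are assumptions for illustration:

```python
def build_vqa_input(question: str, dense_captions: list[str], max_captions: int = 5) -> str:
    """Fuse a question with dense image captions into one BERT-style
    sequence: [CLS] question [SEP] caption_1 ; caption_2 ; ... [SEP].

    This is a hypothetical sketch of the caption-fusion step; the actual
    DenseCapBert input format may differ.
    """
    # Keep only the top-ranked captions to stay within the sequence limit.
    caps = " ; ".join(dense_captions[:max_captions])
    return f"[CLS] {question} [SEP] {caps} [SEP]"


example = build_vqa_input(
    "What color is the bus?",
    ["a red double-decker bus", "a man walking on the sidewalk"],
)
print(example)
```

The fused sequence can then be fed to a standard BERT encoder, letting the model attend jointly over the question and the caption-based description of the image.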

Our code will be available soon.