I am Zhiqiu Lin, a final-year PhD student at Carnegie Mellon University working with Prof. Deva Ramanan.
OneDiffusion is very inspiring, and we would like to know if you would be open to evaluating it with our benchmarks and metrics:
1 - VQAScore (ECCV'24): A simple but effective alignment score for text-to-image/video/3D generation that agrees strongly with human judgments. VQAScore can be run with one line of Python code here! Google's Imagen 3 adopted VQAScore as the strongest replacement for CLIPScore.
2 - GenAI-Bench (CVPR'24 SynData Workshop): A benchmark of 1,600 prompts from professional designers for compositional visual generation. We also show that VQAScore can serve as a strong reward metric for re-ranking DALL-E 3 generated images. GenAI-Bench won the Best Short Paper award at the SynData@CVPR24 workshop and was adopted in the Imagen 3 report.
3 - NaturalBench (NeurIPS'24): A vision-centric VQA benchmark built from pairs of simple questions about natural images. Unlike prior benchmarks (MME/ScienceQA), which a "blind" GPT-3.5 can solve without seeing the images, NaturalBench's paired protocol prevents such shortcuts. Even top models like GPT-4o and Qwen2-VL fall 50%-70% short of human accuracy on NaturalBench. We also found that current models exhibit strong answer biases, such as favoring "Yes" over "No" regardless of the input. Correcting these biases boosts performance by 2-3x, even for GPT-4o!
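For intuition, here is a minimal sketch of the idea behind VQAScore and its use for re-ranking, not the official `t2v_metrics` implementation: the score is the VQA model's probability of answering "Yes" to "Does this figure show '{text}'?". The filenames and logit values below are hypothetical stand-ins for a real model's outputs.

```python
import math

def vqascore_from_logits(yes_logit: float, no_logit: float) -> float:
    """Softmax probability of the answer "Yes" -- the VQAScore.

    In practice the two logits come from a VQA model conditioned on
    (image, "Does this figure show '{text}'?"); here they are given.
    """
    m = max(yes_logit, no_logit)  # subtract max for numerical stability
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)

def rerank(candidates):
    """Re-rank candidate images by VQAScore, best first.

    `candidates` is a list of (image_id, (yes_logit, no_logit)) pairs --
    the reward-metric re-ranking used on generated images in GenAI-Bench.
    """
    return sorted(candidates,
                  key=lambda c: vqascore_from_logits(*c[1]),
                  reverse=True)

# Hypothetical logits for three generated images against one prompt:
candidates = [("img_a.png", (2.0, 0.5)),
              ("img_b.png", (0.2, 1.4)),
              ("img_c.png", (3.1, -0.3))]
best = rerank(candidates)[0][0]  # image with the highest P("Yes")
```

In the actual pipeline the logits are produced by a multimodal LM (e.g. a CLIP-FlanT5 model), so the one-liner in the repo hides the model call; the arithmetic above is all that happens afterward.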
Best,
Zhiqiu