Sharing our benchmark work! #7

Open
linzhiqiu opened this issue Dec 8, 2024 · 1 comment
@linzhiqiu
I am Zhiqiu Lin, a final-year PhD student at Carnegie Mellon University working with Prof. Deva Ramanan.

OneDiffusion is very inspiring, and we would like to know if you would be open to evaluating it using our benchmarks and metrics:

1 - VQAScore (ECCV'24): A simple but effective alignment score for text-to-image/video/3D generation that agrees strongly with human judgments. VQAScore can be run with one line of Python code here! Google's Imagen3 adopted VQAScore as the strongest replacement for CLIPScore.
2 - GenAI-Bench (CVPR'24 SynData Workshop): A benchmark of 1,600 prompts from professional designers for compositional visual generation. We also show that VQAScore can serve as a strong reward metric for re-ranking DALLE-3 generated images. GenAI-Bench was awarded Best Short Paper at the SynData@CVPR24 workshop and was adopted in Imagen3's report.
3 - NaturalBench (NeurIPS'24): A vision-centric VQA benchmark built from pairs of simple questions about natural imagery. Unlike prior benchmarks (MME/ScienceQA), which a "blind" GPT-3.5 can solve without seeing the images, NaturalBench's protocol prevents such shortcuts. Even top models like GPT-4o and Qwen2-VL fall 50%-70% short of human accuracy on NaturalBench. We also found that current models show strong answer biases, such as favoring "Yes" over "No" regardless of the input; correcting these biases boosts performance by 2-3x, even for GPT-4o!

Best,
Zhiqiu

@lehduong
Owner

Hi Zhiqiu, thanks for your interest, and very cool work! We will try your benchmarks and report the results in the updated version of our paper.
