
---
title: AgentReview
emoji: 🎓
colorFrom: indigo
colorTo: pink
sdk: gradio
sdk_version: 5.4.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: EMNLP 2024
---

AgentReview

Official implementation for the EMNLP 2024 main track (Oral) paper -- AgentReview: Exploring Peer Review Dynamics with LLM Agents

💡 Demo | 🌐 Website | 📄 Paper | 🔗 arXiv | 💻 Code

@inproceedings{jin2024agentreview,
  title={AgentReview: Exploring Peer Review Dynamics with LLM Agents},
  author={Jin, Yiqiao and Zhao, Qinlin and Wang, Yiyang and Chen, Hao and Zhu, Kaijie and Xiao, Yijia and Wang, Jindong},
  booktitle={EMNLP},
  year={2024}
}


Introduction

AgentReview is a pioneering large language model (LLM)-based framework for simulating peer review processes, developed to analyze and address the complex, multivariate factors influencing review outcomes. Unlike traditional statistical methods, AgentReview captures latent variables while respecting the privacy of sensitive peer review data.

Academic Abstract

Peer review is fundamental to the integrity and advancement of scientific publication. Traditional approaches to peer review analysis often rely on exploration and statistics of existing peer review data, which do not adequately address the multivariate nature of the process or account for latent variables, and are further constrained by privacy concerns due to the sensitive nature of the data. We introduce AgentReview, the first large language model (LLM)-based peer review simulation framework, which effectively disentangles the impacts of multiple latent factors and addresses the privacy issue. Our study reveals significant insights, including a notable 37.1% variation in paper decisions due to reviewers' biases, supported by sociological theories such as social influence theory, altruism fatigue, and authority bias. We believe that this study could offer valuable insights to improve the design of peer review mechanisms.


Getting Started

Installation

Download the data

Download both zip files in this Dropbox:

Unzip AgentReview_Paper_Data.zip under data/, which contains:

  1. The PDF versions of the papers
  2. The real-world peer reviews for ICLR 2020 - 2023

unzip AgentReview_Paper_Data.zip -d data/

(Optional) Unzip AgentReview_LLM_Reviews.zip under outputs/, which contains the LLM-generated reviews (our LLM-generated dataset):

unzip AgentReview_LLM_Review.zip -d outputs/

Install Required Packages:

cd AgentReview/
pip install -r requirements.txt

Set environment variables

If you use the OpenAI API, set OPENAI_API_KEY:

export OPENAI_API_KEY=... # Format: sk-...

If you use the Azure OpenAI API, set the following:

export AZURE_ENDPOINT=...  # Format: https://<your-endpoint>.openai.azure.com/
export AZURE_DEPLOYMENT=...  # Your Azure OpenAI deployment here
export AZURE_OPENAI_KEY=... # Your Azure OpenAI key here
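
For illustration only, the snippet below shows how these variables are typically consumed with the openai Python package (version 1.x). It is a sketch, not AgentReview's actual code: the helper name, client construction, and api_version are assumptions.

import os
from openai import OpenAI, AzureOpenAI

# Hypothetical helper (not part of AgentReview): pick a client based on the
# environment variables exported above.
def make_client():
    if os.getenv("AZURE_ENDPOINT"):
        return AzureOpenAI(
            azure_endpoint=os.environ["AZURE_ENDPOINT"],
            azure_deployment=os.environ["AZURE_DEPLOYMENT"],
            api_key=os.environ["AZURE_OPENAI_KEY"],
            api_version="2024-02-01",  # assumed API version; adjust to your deployment
        )
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])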

Running the Project

Set the environment variables in run.sh and run it:

bash run.sh

Note: all scripts should be run from the AgentReview/ directory.

Demo

A demo can be found in notebooks/demo.ipynb.

Customizing your own environment

You can define a new setting in agentreview/experiment_config.py, then register it as a new entry in the all_settings dictionary:

all_settings = {
    "BASELINE": baseline_setting,
    "benign_Rx1": benign_Rx1_setting,
    ...
    "your_setting_name": your_setting,
}

Framework Overview

Stage Design

Our simulation adopts a structured, five-phase pipeline (a minimal code sketch follows the list):

  • Phase I. Reviewer Assessment. Each manuscript is evaluated by three reviewers independently.
  • Phase II. Author-Reviewer Discussion. Authors submit rebuttals to address reviewers' concerns.
  • Phase III. Reviewer-AC Discussion. The AC facilitates discussions among reviewers, prompting updates to their initial assessments.
  • Phase IV. Meta-Review Compilation. The AC synthesizes the discussions into a meta-review.
  • Phase V. Paper Decision. The AC makes the final decision on whether to accept or reject the paper, based on all gathered inputs.
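
Conceptually, the five phases run in order for a given submission. The skeleton below is a reading aid for the list above, not AgentReview's implementation: the agent objects and their methods (assess, respond, discuss, write_meta_review, decide) are placeholders.

from typing import Any, List

def review_pipeline(paper: Any, reviewers: List[Any], author: Any, area_chair: Any):
    """Placeholder sketch of the five-phase review pipeline."""
    reviews = [r.assess(paper) for r in reviewers]                  # Phase I: independent assessments
    rebuttal = author.respond(paper, reviews)                       # Phase II: author rebuttal
    reviews = area_chair.discuss(reviewers, reviews, rebuttal)      # Phase III: reviewer-AC discussion
    meta_review = area_chair.write_meta_review(paper, reviews)      # Phase IV: meta-review
    return area_chair.decide(paper, reviews, meta_review)           # Phase V: accept/reject decision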

Note

  • We use a fixed acceptance rate of 32%, corresponding to the actual acceptance rate of ICLR 2020 -- 2023 (a toy illustration follows this list). See Conference Acceptance Rates for more information.
  • Sometimes the API applies strict content filtering to requests. You may need to adjust the content filtering settings to get the desired results.
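
For intuition only: a fixed 32% acceptance rate simply caps the number of accepted papers per batch. Ranking papers by average reviewer score is an assumption made for this toy example, not necessarily how AgentReview applies the rate.

def accept_top_fraction(avg_scores, rate=0.32):
    """avg_scores: {paper_id: average reviewer score}; returns accepted paper ids."""
    n_accept = round(rate * len(avg_scores))               # e.g. 10 papers -> 3 acceptances
    ranked = sorted(avg_scores, key=avg_scores.get, reverse=True)
    return ranked[:n_accept]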

License

This project is licensed under the Apache-2.0 License.

Acknowledgements

The implementation is partially based on the chatarena framework.