## Better version

The version implemented here does not follow strict coding standards and may contain issues. For a more polished implementation, please refer to the official diffusers script: train_diffusion_dpo_sdxl.py

# AI Feedback-Based Self-Training Direct Preference Optimization


## Dataset Details

- Num examples = 37,180
- Num epochs = 3

## Compared to the Human Feedback Model

Our model's outputs stay close to SDXL-Base, but with improved image details. The model released with the original paper shows better color and detail, more in line with human preferences. This reflects a characteristic of self-training the original model: it optimizes toward AI preferences while preserving the base model's capabilities, whereas training on human preference data ties the output quality closely to the human preference dataset.
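To make the self-training setup concrete, below is a minimal sketch (not this repository's exact code) of the two pieces it combines: building (winner, loser) pairs by letting an AI scorer rank the base model's own generations, and the pairwise Diffusion-DPO loss from the original paper that the training loop would optimize. `pipe` (a diffusers SDXL pipeline), `scorer` (any image–text preference model), the function names, and the `beta` value are illustrative assumptions, not this repo's API.

```python
# Minimal sketch of AI-feedback self-training for Diffusion-DPO.
# Assumptions: `pipe` is a diffusers SDXL pipeline, `scorer(prompt, image) -> float`
# is any AI preference model; names and beta are placeholders.
import torch
import torch.nn.functional as F


def build_preference_pair(pipe, scorer, prompt, seed_a=0, seed_b=1):
    """Generate two candidates with the base model and let the AI scorer pick winner/loser."""
    gen_a = torch.Generator().manual_seed(seed_a)
    gen_b = torch.Generator().manual_seed(seed_b)
    img_a = pipe(prompt, generator=gen_a).images[0]
    img_b = pipe(prompt, generator=gen_b).images[0]
    if scorer(prompt, img_a) >= scorer(prompt, img_b):
        win, lose = img_a, img_b
    else:
        win, lose = img_b, img_a
    return {"prompt": prompt, "image_win": win, "image_lose": lose}


def diffusion_dpo_loss(model_mse_w, model_mse_l, ref_mse_w, ref_mse_l, beta=5000.0):
    """Diffusion-DPO objective on a batch of (preferred, rejected) pairs.

    Each argument is a per-sample noise-prediction MSE: the trainable UNet
    (`model_*`) and a frozen reference UNet (`ref_*`), evaluated on the
    preferred (`*_w`) and rejected (`*_l`) latents at the same timestep/noise.
    """
    # How much better the trainable model denoises the winner than the loser ...
    model_diff = model_mse_w - model_mse_l
    # ... relative to the frozen reference UNet, which acts as the implicit KL anchor.
    ref_diff = ref_mse_w - ref_mse_l
    # A larger margin over the reference pushes the sigmoid toward preferring the winner.
    logits = -0.5 * beta * (model_diff - ref_diff)
    return -F.logsigmoid(logits).mean()
```

In a training loop, the four MSE terms would come from the trainable UNet and a frozen copy, each evaluated on the same noised latents of the preferred and rejected images in a pair; only the trainable UNet receives gradients.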

## Acknowledgement

This work is based on the Diffusion Model Alignment Using Direct Preference Optimization method.