Hi, I have a question about calculating the log ratio for PPO. I'm very new to this area and would be really grateful if you could help.
In `accelerate_ppo_trainer.py`, `def make_experience`, line 457:

```python
log_ratio = (logprobs - ref_logprobs) * attention_mask[:, :-1]
```

But according to the comment `# NOTE: logprob[i] is (log)prob at which all_token[i+1] was sampled`, shouldn't it be `attention_mask[:, 1:]`?
In `accelerate_ppo_trainer.py`, `def loss`, line 188:

```python
logprobs, values_pred, mask = (
    logprobs[:, start:end],
    values_pred[:, start:end],
    attention_mask[:, start + 1 : end + 1],
)
```

Here I think the attention mask is shifted the correct way. So why is it different in `def make_experience`?
Thanks for your help in advance!
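To make the alignment question concrete, here is a minimal toy sketch (made-up values, not the actual trlx code) of how the two mask slices differ. It assumes the usual causal-LM convention from the `NOTE` comment: with `T` tokens there are `T - 1` per-token log-probs, and `logprobs[i]` is the log-prob under which `tokens[i + 1]` was sampled.

```python
T = 5
tokens = [101, 7, 8, 9, 0]           # last token is padding
attention_mask = [1, 1, 1, 1, 0]     # 0 marks the padded position

# logprobs[i] is the log-prob at which tokens[i + 1] was sampled,
# so there are T - 1 entries (values here are arbitrary).
logprobs = [-0.1, -0.2, -0.3, -0.4]

# Mask aligned to the *predicted* token tokens[i + 1]:
mask_shifted = attention_mask[1:]     # [1, 1, 1, 0]
# Mask aligned to the *conditioning* token tokens[i]:
mask_unshifted = attention_mask[:-1]  # [1, 1, 1, 1]

masked_shifted = [lp * m for lp, m in zip(logprobs, mask_shifted)]
masked_unshifted = [lp * m for lp, m in zip(logprobs, mask_unshifted)]

print(masked_shifted)    # log-prob for the padded token is zeroed out
print(masked_unshifted)  # log-prob for the padded token survives
```

So `attention_mask[:, 1:]` zeroes out the entry whose *sampled* token is padding, while `attention_mask[:, :-1]` keeps it, which is exactly the discrepancy the question is about.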