Could you provide me with some advice? #31

Open
binbinyouli12 opened this issue Jun 13, 2024 · 2 comments

Comments

@binbinyouli12

Hello,

First of all, thank you for providing the DDPG+HER code; it has been a great help. However, I have some basic questions, as I am just starting to learn reinforcement learning. After adapting your code to my custom environment, I noticed that during the early stages of training the printed actor loss is very small, typically around 0.000-something, and the critic loss is usually around 0.0000-something. Is this normal, or is there a problem somewhere?

@TianhongDai (Owner)

@binbinyouli12 Thank you for using my code - I think it's normal; the most important metric is the success rate.
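
For context on why those values are expected, here is a rough sketch of how the two losses are usually formed in DDPG+HER setups with sparse 0/-1 rewards and a clipped Bellman target (the function and variable names below are illustrative, not necessarily the ones used in this repository):

```python
import torch
import torch.nn.functional as F

def ddpg_losses(actor, critic, actor_target, critic_target,
                obs, actions, rewards, next_obs, gamma=0.98):
    # With sparse rewards in {-1, 0}, the discounted return is bounded by
    # -1 / (1 - gamma), so the target Q-values can be clipped to that range.
    clip_return = 1.0 / (1.0 - gamma)
    with torch.no_grad():
        next_actions = actor_target(next_obs)
        target_q = critic_target(next_obs, next_actions)
        y = torch.clamp(rewards + gamma * target_q, -clip_return, 0.0)
    # Critic: MSE between the predicted Q-value and the bounded target,
    # which naturally yields small numbers.
    critic_loss = F.mse_loss(critic(obs, actions), y)
    # Actor: maximise Q by minimising -Q; with bounded Q this is also small.
    actor_loss = -critic(obs, actor(obs)).mean()
    return actor_loss, critic_loss
```

Because the target Q-values are bounded to a small range, both losses sitting near zero early in training is not by itself a sign of a bug; the per-epoch evaluation success rate is the more informative signal.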

@binbinyouli12 (Author)

Hello, I have a few more questions. If I want to use only DDPG in your code, do I just need to remove the HER goal replacement in the replay-memory part of the code? Also, could using HER lead to reward overestimation and fail to correctly guide the agent to the desired location?
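
For the first question, the key change is to make sure no sampled transition has its desired goal replaced by an achieved goal. If the code exposes a replay-strategy option, setting it to something other than 'future' may already do this; otherwise, here is a minimal sketch of a sampler without relabelling, assuming the buffer stores episodes as a dict of arrays keyed by 'actions', 'g', 'ag_next', etc. (the key names, array shapes, and reward_func signature are assumptions here; check her.py in this repository for the actual interface):

```python
import numpy as np

class plain_sampler:
    """Drop-in replacement for a HER sampler: uniform replay sampling that
    never replaces the desired goal with an achieved goal."""
    def __init__(self, reward_func):
        self.reward_func = reward_func  # recomputes r(achieved_goal, goal, info)

    def sample_transitions(self, episode_batch, batch_size):
        # episode_batch: dict of arrays shaped [num_episodes, T, dim]
        num_episodes, T = episode_batch['actions'].shape[:2]
        ep_idx = np.random.randint(0, num_episodes, batch_size)
        t_idx = np.random.randint(0, T, batch_size)
        transitions = {k: episode_batch[k][ep_idx, t_idx].copy()
                       for k in episode_batch.keys()}
        # No HER step: the desired goals 'g' stay exactly as collected.
        transitions['r'] = np.expand_dims(
            self.reward_func(transitions['ag_next'], transitions['g'], None), 1)
        return transitions
```

Because this sampler keeps the same interface, it can be swapped in where the HER sampler is constructed, and the rest of the DDPG update stays untouched.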
