Using multiple observation steps for each action step during RL Training. #1744
Replies: 1 comment
-
A suggestion was posted by @KyleM73 here, but that thread is now closed. Please follow up under this open discussion post. Thanks.
-
Hello, I am trying to build a dual-arm manipulation agent that performs actions at 50 Hz, but the agent consumes the output of a neural network that performs system identification from the applied torques and the pose of the manipulated object, with samples gathered at 500 Hz.
How could I incorporate this when training my RL agent with either a manager-based or direct workflow environment, so that each RL step runs 10 simulation steps with a PD controller driving the joints toward a fixed target position, collects the required observation sequence, and on every 10th step queries the RL agent for the desired joint positions?
From what I understand, I can't just advance the environment steps manually and apply the controller actions to gather the observations inside the Direct workflow or ManagerBasedRL environment.
Any suggestions are welcome.
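What you describe is the standard action/observation decimation pattern: the physics (and your PD controller plus high-rate observation collection) runs every simulation step, while the policy only produces a new target every `decimation` steps. In Isaac Lab's direct workflow this split is exactly what the `decimation` setting in the environment config controls. Below is a framework-agnostic sketch of the pattern, not Isaac Lab API; the names (`pd_torque`, `env_step`, the toy integrator standing in for the simulator) are all illustrative:

```python
import numpy as np

SIM_DT = 1.0 / 500.0   # physics step: 500 Hz
DECIMATION = 10        # 10 physics sub-steps per policy step -> 50 Hz

def pd_torque(q, qd, q_target, kp=50.0, kd=2.0):
    """PD controller driving the joints toward a fixed target position."""
    return kp * (q_target - q) - kd * qd

def env_step(q, qd, q_target):
    """One RL environment step: DECIMATION physics sub-steps with PD
    control, buffering the 500 Hz samples for the system-ID network."""
    obs_buffer = []  # (tau, q, qd) samples collected at 500 Hz
    for _ in range(DECIMATION):
        tau = pd_torque(q, qd, q_target)
        # toy double-integrator "physics" stand-in for the simulator step
        qd = qd + SIM_DT * tau
        q = q + SIM_DT * qd
        obs_buffer.append(np.concatenate([tau, q, qd]))
    # policy observation: the stacked 500 Hz sequence from this RL step
    return q, qd, np.stack(obs_buffer)

# hypothetical 2-joint example: policy commands a target of 1 rad per joint
q = np.zeros(2)
qd = np.zeros(2)
q_new, qd_new, obs_seq = env_step(q, qd, q_target=np.ones(2))
# obs_seq has shape (10, 6): 10 sub-steps x (tau, q, qd) for 2 joints
```

In the direct workflow the same structure maps onto the environment hooks: the per-policy-step callback receives the agent's action (the joint target), the per-physics-step callback applies the PD torque and can append to an observation buffer, and the observation function returns the stacked sequence, so there is no need to advance the simulation manually yourself.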