ReinforcementLearning v0.8.0
Closed issues:
- Document basic environments (#129)
- Improve interfaces for model exploration and hyperparameter optimization (#28)
- Support SEED RL (SCALABLE AND EFFICIENT DEEP-RL) (#62)
- Rename `AbstractAgent` to `AbstractPolicy` (#111)
- Add a stop condition to terminate the experiment after reaching reward threshold (#112)
- ACME RL lib by deepmind (#85)
- Definition of a policy (#86)
- Add remote trajectories (#87)
- Base.convert method for DiscreteSpace (#104)
- Action Space Meaning (#88)
- Base.in method for EmptySpace (#105)
- Renaming get_terminal to isterminated (#106)
- Requesting more informative field names for SharedTrajectory (#113)
- Suggestion: More informative name for FullActionSet & MinimalActionSet (#107)
- Returning an `AbstractSpace` object using `get_actions` (#108)
- Split experiments into separate files (#145)
- Add project.toml for tests (#146)
- Docs build error (#91)
- Split out Trajectory & CircularArrayBuffer as independent packages (#114)
- Requesting explanation for better performance at ... (#115)
- Add an extra mode when evaluating agent (#116)
- Why are wrapper environments a part of RLBase instead of RLCore (say)? (#109)
- The names of keyword arguments in Trajectory are somewhat misleading (#117)
- Check compatibility between agent and environments (#118)
- Behaviour for hooks for RewardOverridenEnv (#119)
- StopAfterEpisode with custom DQNL errors beyond a particular Episode Count (#96)
- `ERROR: UndefVarError: NNlib not defined` while loading agent (#110)
- Use JLSO for (de)serialization? (#97)
- Setup github actions (#98)
- Fails to load trajectory (#150)
- Test error in ReinforcementLearningEnvironments.jl (#152)
- Move preallocations in MultiThreadEnv from global to local (#153)
- remove @views (#155)
- error in save & load ElasticCompactSARTSATrajectory (#156)
- add early stopping in src/core/stop_conditions.jl (#157)
- add time stamp in load & save functions in src/components/agents/agent.jl (#158)
- policies on GPU cannot be saved or loaded (#159)
- code formatting (#165)
- Purpose of CommonRLInterface (#166)
- Moving example environments from RLBase to RLEnvs? (#167)
- Keeping prefix `get_` in method names like `get_reward` (#168)
- Currently getting an ambiguous method error in ReinforcementLearningCore v0.5.1 (#171)
- TD3 Implementation (#174)
- Travis CI Credits (#178)
- Unrecognized symbols (#180)
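Several of the closed issues above concern renaming the core interface: `AbstractAgent` becomes `AbstractPolicy` (#111), `get_terminal` becomes `isterminated` (#106), and the `get_` prefix is dropped from query methods such as `get_reward` (#168). A minimal self-contained sketch of the post-rename naming convention; `ToyEnv` and `AlwaysRight` are invented here for illustration and are not part of the actual RLBase API:

```julia
# Renamed from AbstractAgent (#111).
abstract type AbstractPolicy end

mutable struct ToyEnv
    pos::Int
end

reward(env::ToyEnv) = env.pos == 5 ? 1.0 : 0.0   # was get_reward (#168)
isterminated(env::ToyEnv) = env.pos == 5         # was get_terminal (#106)

# A policy is callable on an environment and returns an action.
struct AlwaysRight <: AbstractPolicy end
(p::AlwaysRight)(env::ToyEnv) = 1

env, policy = ToyEnv(0), AlwaysRight()
while !isterminated(env)
    env.pos += policy(env)
end
```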
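Issue #112 asks for a stop condition that terminates an experiment once a reward threshold is reached. A minimal sketch of how such a condition could look; the struct name and call convention are assumptions for illustration, not the exact API added to src/core/stop_conditions.jl:

```julia
# Accumulates rewards and signals a stop once a threshold is reached.
mutable struct StopAfterReward
    threshold::Float64
    total::Float64
end
StopAfterReward(threshold) = StopAfterReward(threshold, 0.0)

# Calling the condition with a reward returns true when it is time to stop.
function (s::StopAfterReward)(r)
    s.total += r
    return s.total >= s.threshold
end

s = StopAfterReward(10.0)
s(3.0)   # total = 3.0,  keep going
s(3.0)   # total = 6.0,  keep going
s(3.0)   # total = 9.0,  keep going
s(3.0)   # total = 12.0, stop
```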
Merged pull requests:
- update dependency (#177) (@findmyway)