ReinforcementLearning v0.10.2
Closed issues:
- Add procgen (#126)
- CI fails with [email protected] (#572)
- Missing docs for `TDLearner` (#580)
- Add an environment wrapper to IsaacGym (#619)
- How to run this source code in vscode? (#623)
- Examples of multidimensional continuous actions (#676)
- Base.copy not implemented for the TicTacToe environment (#678)
- Broken link to src (#693)
- Support Brax (#696)
- PPO on environments with multiple action dimensions? (#703)
- Can't check out RLCore for development (#704)
- Setup sponsor related info (#730)
- new _run() (#731)
- PPOPolicy training: ERROR: DomainError with NaN: Normal: the condition σ >= zero(σ) is not satisfied. (#739)
- Code Readability (#740)
- MultiThreadEnv not available in ReinforcementLearningZoo (#741)
- ReinforcementLearningExperiment dependencies fail to precompile (#744)
- tanh normalization destabilizes learning with GaussianNetwork (#745)
- Custom Environment passes RLBase.test_runnable!(env) but hangs indefinitely and crashes when run (#757)
- Collect both number of steps and rewards in a single hook (#763)
- Every single environment / experiment crashes with the following error (#766)
- NeuralNetworkApproximator-based policies not working (#770)
- "params not defined," "JuliaRL_BasicDQN_CartPole" (#778)
Merged pull requests:
- WIP: Add MPO in zoo (#604) (@HenriDeh)
- Episode reset condition (#621) (@HenriDeh)
- Add a categorical Network (#625) (@HenriDeh)
- Use Trajectories.jl instead (#632) (@findmyway)
- added basic doc for `TDLearner` (#649) (@baedan)
- Add `JuliaRL_DQN_CartPole` (#650) (@findmyway)
- enable OpenSpiel (#691) (@findmyway)
- Small improvements for TicTacToeEnv (#692) (@jonathan-laurent)
- Update the "how to implement a new algorithm" (#695) (@HenriDeh)
- Fix typo in algorithm implementation docs (#697) (@mplemay)
- add PrioritizedDQN (#698) (@findmyway)
- add QRDQN (#699) (@findmyway)
- add REMDQN (#708) (@findmyway)
- add IQN (#710) (@findmyway)
- check in Manifest.toml (#711) (@findmyway)
- CompatHelper: bump compat for "ReinforcementLearningCore" to "0.8" (#712) (@github-actions[bot])
- CompatHelper: bump compat for "ReinforcementLearningEnvironments" to "0.6" (#713) (@github-actions[bot])
- CompatHelper: bump compat for "ReinforcementLearningZoo" to "0.5" (#714) (@github-actions[bot])
- CompatHelper: bump compat for "AbstractTrees" to "0.4" for package ReinforcementLearningBase (#715) (@github-actions[bot])
- CompatHelper: bump compat for "Functors" to "0.3" for package ReinforcementLearningCore (#717) (@github-actions[bot])
- CompatHelper: bump compat for "UnicodePlots" to "3" for package ReinforcementLearningCore (#718) (@github-actions[bot])
- CompatHelper: bump compat for "ReinforcementLearningCore" to "0.8" for package ReinforcementLearningZoo (#720) (@github-actions[bot])
- CompatHelper: bump compat for "Functors" to "0.3" for package ReinforcementLearningZoo (#721) (@github-actions[bot])
- CompatHelper: add new compat entry for "StableRNGs" at version "1" for package ReinforcementLearningExperiments (#722) (@github-actions[bot])
- CompatHelper: bump compat for "ReinforcementLearning" to "0.10" for package ReinforcementLearningExperiments (#723) (@github-actions[bot])
- add rainbow (#724) (@findmyway)
- Adapted SAC to support MultiThreadedEnv (#726) (@BigFood2307)
- Add the number of episodes (#727) (@ll7)
- docs: add ll7 as a contributor for doc (#728) (@allcontributors[bot])
- Add struct view (#732) (@findmyway)
- add VPG (#733) (@findmyway)
- CompatHelper: add new compat entry for "Distributions" at version "0.25" for package ReinforcementLearningZoo (#734) (@github-actions[bot])
- CompatHelper: add new compat entry for "Distributions" at version "0.25" for package ReinforcementLearningExperiments (#735) (@github-actions[bot])
- fixed hyperlink in readme (#742) (@mplemay)
- docs: add mplemay as a contributor for doc (#743) (@allcontributors[bot])
- Create FUNDING.yml (#746) (@findmyway)
- TRPO (#747) (@baedan)
- CompatHelper: bump compat for "CommonRLSpaces" to "0.2" for package ReinforcementLearningBase (#748) (@github-actions[bot])
- Fix parameter names for AsyncTrajectoryStyle (#749) (@ludvigk)
- Update DoEveryNEpisode hook to new api (#750) (@ludvigk)
- docs: add ludvigk as a contributor for code (#751) (@allcontributors[bot])
- Update TwinNetwork (#752) (@ludvigk)
- Typo in hooks docs (#754) (@kir0ul)
- CommonRLSpace -> DomainSets (#756) (@findmyway)
- Fix typo (#767) (@jeremiahpslewis)
- Fix typo (#768) (@jeremiahpslewis)
- Fix TD Learner so that it handles MultiAgent/Simultaneous with NoOp (#769) (@jeremiahpslewis)
- Bump RLBase compat to 0.11 (#771) (@HenriDeh)
- Remove manifest from the repo (#773) (@HenriDeh)
- import params and gradient (#774) (@HenriDeh)
- fix compat (#775) (@HenriDeh)
- Trying to reimplement experiments (#776) (@HenriDeh)
- Add a developer mode (#777) (@HenriDeh)
- added pettingzoo and one single-agent example (#782) (@Mytolo)
- Update mpo.jl (#783) (@HenriDeh)
- Reduce unnecessary array allocations (#785) (@jeremiahpslewis)
- Temporarily disable failing experiment so project tests pass (#787) (@jeremiahpslewis)
- Fix spellcheck errors (#788) (@jeremiahpslewis)
- Bug fixes and dependency bump (#789) (@jeremiahpslewis)
- Pin ReinforcementLearning.jl to pre-refactor versions (#793) (@jeremiahpslewis)