
v0.10.0

Released by github-actions on 08 Oct 12:20 · 223 commits to master since this release · a971df7

ReinforcementLearning v0.10.0

Diff since v0.9.0

Closed issues:

  • In DDPG: Add support for vector actions (#138)
  • Add experiments based on offline RL data (#141)
  • Train policy with GymEnv (#175)
  • SARTTrajectory for SAC (#182)
  • PPO related algorithms are broken (#194)
  • ERROR: type RandomPolicy has no field policy (#208)
  • "Getting Started" too long imo (#210)
  • Documentation of environment actions does not seem to work (#222)
  • Documentation of "How to use Tensorboard?": with_logger not defined (#223)
  • Getting figure object; how to get an animation using GR.plot in CartPoleEnv (#246)
  • The components of Rainbow (#229)
  • code in get_started seems to be broken (#233)
  • Document how to save/load parameters (#238)
  • Workflow of saving trajectory data (#239)
  • [Call for Contributors] Summer 2021 of Open Source Promotion Plan (#242)
  • Next Release Plan (v0.9) (#247)
  • Add ReinforcementLearningDatasets (#253)
  • Lack of reproducibility of QRDQN CartPole Experiment. (#281)
  • StopAfterNoImprovement hook test fails occasionally (#297)
  • Get error when using ReinforcementLearning (#298)
  • Problems with PGFPlotsX during the install (#301)
  • Plotting CartPole environment in Jupyter (#306)
  • Local development environment setup tips causing error (#320)
  • Question about PER (#328)
  • Docs error in code output (#332)
  • Setup a CI for typo (#336)
  • double code & dysfunctional master branch when downloading package (#341)
  • Precompilation error; using Plots makes a conflict (#349)
  • Problem with running initial tutorial. Using TabularPolicy() generates an UndefinedKeyword error for n_action (#354)
  • Question: Clarification on the RL plots generated by the run() function (#357)
  • Probability question for QBasedPolicy (#360)
  • Can evaluate function be used as a component of RLcore? (#369)
  • problem about precompiling the forked package (#377)
  • Question: Can we use packages like DifferentialEquations.jl to evolve or model the environment in ReinforcementLearning.jl (#378)
  • MultiAgentManager does not select correct action space for RockPaperScissorsEnv (#393)
  • Add ReinforcementLearningDatasets.jl (#397)
  • error: dimension mismatch "cannot broadcast array to have fewer dimensions" (#400)
  • SAC policy problems? (#410)
  • Add pre-training hook (#411)
  • Dead links in documentation (#418)
  • Links of show nbview badges in RLExperiments are incorrect (#421)
  • Problem accessing public google cloud storage bucket for RLDatasets.jl (#424)
  • Function to access base env through multiple wrapper layers (#425)
  • The problem of using GaussianNetwork in gpu (#455)
  • Next Release Plan (v0.10) (#460)
  • Error in experiment "JuliaRL_DDPG_Pendulum" (#471)
  • On Windows, ReinforcementLearningDatasets.jl encounters a bug (#485)
  • Conditional Testing (#493)
  • Inconsistency of the EpsilonGreedyExplorer selection function (#520)

Merged pull requests: