---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, echo = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "README-"
)
```
```{r, include=FALSE}
set.seed(0)
```
# Reinforcement Learning
[![Build Status](https://travis-ci.org/nproellochs/ReinforcementLearning.svg?branch=master)](https://travis-ci.org/nproellochs/ReinforcementLearning)
[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/ReinforcementLearning)](https://cran.r-project.org/package=ReinforcementLearning)
[![DOI](http://joss.theoj.org/papers/10.21105/joss.01087/status.svg)](https://doi.org/10.21105/joss.01087)
**ReinforcementLearning** performs model-free reinforcement learning in R. This implementation enables the learning of
an optimal policy based on sample sequences consisting of states, actions and rewards. In
addition, it supplies multiple predefined reinforcement learning algorithms, such as experience
replay.
## Overview
The most important functions of **ReinforcementLearning** are:
- Learning an optimal policy from a fixed set of a priori known transition samples (see the data sketch after this list)
- Predefined learning rules and action selection modes
- A highly customizable framework for model-free reinforcement learning tasks
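The package expects these transition samples as a tabular dataset with one row per observed state transition. As a quick illustration, the following hand-crafted data frame uses the same column names as the Usage example below; the state and action labels themselves are purely hypothetical.
```{r, eval=FALSE}
# Hand-crafted transition samples (hypothetical state and action labels).
# Each row records: current state, action taken, observed reward, next state.
sample_transitions <- data.frame(
  State     = c("s1", "s1", "s2", "s3"),
  Action    = c("right", "down", "left", "up"),
  Reward    = c(-1, -1, -1, 10),
  NextState = c("s2", "s3", "s1", "s4"),
  stringsAsFactors = FALSE
)
```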
## Installation
You can easily install the latest version of **ReinforcementLearning** with
```{r,eval=FALSE}
# Recommended option: download and install the latest version from CRAN
install.packages("ReinforcementLearning")

# Alternatively, install the development version from GitHub:
# install.packages("devtools")
devtools::install_github("nproellochs/ReinforcementLearning")
```
## Usage
This section demonstrates how to perform basic reinforcement learning with the package. First, load **ReinforcementLearning**.
```{r, message=FALSE}
library(ReinforcementLearning)
```
The following example shows how to train a reinforcement learning agent from input data in the form of sample sequences
consisting of states, actions, and rewards. The result of the learning process is a state-action table and an optimal
policy that defines the best possible action in each state.
```{r, message=FALSE}
# Generate sample experience in the form of state transition tuples
data <- sampleGridSequence(N = 1000)
head(data)

# Define reinforcement learning parameters
control <- list(alpha = 0.1, gamma = 0.1, epsilon = 0.1)

# Perform reinforcement learning
model <- ReinforcementLearning(data, s = "State", a = "Action", r = "Reward",
                               s_new = "NextState", control = control)

# Print result
print(model)
```
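Because the package also supports experience replay, an existing agent can be updated once additional transition samples become available. The following is a minimal sketch, assuming the `model` argument of `ReinforcementLearning()` accepts a previously fitted model for this purpose; `data_new` is simply a second batch of sampled grid-world transitions.
```{r, eval=FALSE}
# Sketch: update the existing agent with a second batch of samples
# (assumes a previously fitted model can be passed via the `model` argument).
data_new <- sampleGridSequence(N = 1000)

model_new <- ReinforcementLearning(data_new, s = "State", a = "Action", r = "Reward",
                                   s_new = "NextState", control = control,
                                   model = model)

# Inspect the updated state-action table and policy
print(model_new)
```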
## Learning Reinforcement Learning
If you are new to reinforcement learning, you are better off starting with a systematic introduction, rather than trying to learn from reading individual documentation pages. There are three good places to start:
1. A thorough introduction to reinforcement learning is provided in [Sutton and Barto (1998)](https://www.semanticscholar.org/paper/Reinforcement-Learning%3A-An-Introduction-Sutton-Barto/dd90dee12840f4e700d8146fb111dbc863a938ad).
2. The package [vignette](https://github.com/nproellochs/ReinforcementLearning/blob/master/vignettes/ReinforcementLearning.Rmd) demonstrates the main functionalities of the ReinforcementLearning R package by drawing upon common examples from the literature (e.g. finding optimal game strategies).
3. Multiple blog posts on [R-bloggers](https://www.r-bloggers.com/reinforcement-learning-q-learning-with-the-hopping-robot/) demonstrate the capabilities of the ReinforcementLearning package using practical examples.
## Contributing
If you experience difficulties with the package, have suggestions, or want to contribute directly, you have the following options:
* Contact the [maintainer](mailto:[email protected]) by email.
* Issues and bug reports: [File a GitHub issue](https://github.com/nproellochs/ReinforcementLearning/issues).
* Fork the source code, modify, and issue a [pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/) through the [project GitHub page](https://github.com/nproellochs/ReinforcementLearning).
## License
**ReinforcementLearning** is released under the [MIT License](https://opensource.org/licenses/MIT).
Copyright (c) 2019 Nicolas Pröllochs & Stefan Feuerriegel