
Simplify readme #2131

Merged · 2 commits · Jan 2, 2024
41 changes: 3 additions & 38 deletions README.md
@@ -14,44 +14,9 @@ Check out the paper at [SMARTS: Scalable Multi-Agent Reinforcement Learning Trai
![](docs/_static/smarts_envision.gif)

# Documentation
:rotating_light: :bell: Read the docs :notebook_with_decorative_cover: at [smarts.readthedocs.io](https://smarts.readthedocs.io/). :bell: :rotating_light:

# Examples
### Primitive examples
1. [Egoless](examples/e1_egoless.py) example.
+ Run a SMARTS simulation with no ego agents, only background traffic.
1. [Single-Agent](examples/e2_single_agent.py) example.
+ Run a SMARTS simulation with a single ego agent (a minimal episode loop is sketched after this list).
1. [Multi-Agent](examples/e3_multi_agent.py) example.
+ Run a SMARTS simulation with multiple ego agents.
1. [Environment Config](examples/e4_environment_config.py) example.
+ Demonstrate the main observation/action configuration of the environment.
1. [Agent Zoo](examples/e5_agent_zoo.py) example.
+ Demonstrate how the agent zoo works.
1. [Agent Interface](examples/e6_agent_interface.py) example.
+ Demonstrate how the agent interface works.
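
All of the primitive examples above share the same reset/step pattern. The sketch below shows its general shape; it is not copied from the repository's scripts, and the `smarts.env:hiway-v1` gymnasium ID, scenario path, and agent name are assumptions:

```python
# A minimal single-agent episode loop -- a sketch, assuming the
# "smarts.env:hiway-v1" gymnasium entry point and a local scenario path.
import gymnasium as gym

from smarts.core.agent_interface import AgentInterface, AgentType

agent_interfaces = {
    # "Agent-0" is an arbitrary name; Laner exposes simple lane-based actions.
    "Agent-0": AgentInterface.from_type(AgentType.Laner, max_episode_steps=300),
}
env = gym.make(
    "smarts.env:hiway-v1",
    scenarios=["scenarios/sumo/loop"],  # hypothetical scenario directory
    agent_interfaces=agent_interfaces,
)
observations, infos = env.reset()
done = False
while not done:
    # The Laner interface accepts string actions such as "keep_lane".
    actions = {agent_id: "keep_lane" for agent_id in observations}
    observations, rewards, terminateds, truncateds, infos = env.step(actions)
    done = terminateds["__all__"] or truncateds["__all__"]
env.close()
```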

### Integration examples
A few more complex integrations are demonstrated.

1. Configurable example
+ script: [examples/e7_experiment_base.py](examples/e7_experiment_base.py)
+ Configurable agent number.
+ Configurable agent type.
+ Configurable environment.
1. Parallel environments
+ script: [examples/e8_parallel_environment.py](examples/e8_parallel_environment.py)
+ Run multiple SMARTS environments in parallel (a generic vectorized-env sketch follows this list).
+ ActionSpaceType: LaneWithContinuousSpeed
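
The parallel-environment script above uses the repository's own utilities. As a generic stand-in for the same idea, stepping several simulations concurrently, here is a sketch built on gymnasium's `AsyncVectorEnv` rather than SMARTS's wrapper:

```python
# A generic vectorized-environment sketch using gymnasium's AsyncVectorEnv;
# the SMARTS script above uses the repository's own parallel utilities,
# which may differ. "CartPole-v1" stands in for a SMARTS environment ID.
import gymnasium as gym

vec_env = gym.vector.AsyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(4)]
)
observations, infos = vec_env.reset(seed=42)
for _ in range(100):
    actions = vec_env.action_space.sample()  # one action per sub-environment
    observations, rewards, terminateds, truncateds, infos = vec_env.step(actions)
vec_env.close()
```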

### RL Examples
1. [Drive](examples/e10_drive). See [Driving SMARTS 2023.1 & 2023.2](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_1.html) for more info.
1. [VehicleFollowing](examples/e11_platoon). See [Driving SMARTS 2023.3](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_3.html) for more info.
1. [PG](examples/e12_rllib/pg_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info (a rough RLlib setup sketch follows this list).
1. [PG Population Based Training](examples/e12_rllib/pg_pbt_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
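
Both RLlib scripts wrap SMARTS in `RLlibHiWayEnv` and build an algorithm config around it. The following is only a shape sketch, assuming a Ray 2.x release that still ships the PG algorithm; the scenario path and agent mapping are placeholders, not values taken from the scripts:

```python
# A shape sketch of an RLlib PG setup over SMARTS -- assumes a Ray 2.x
# release that still includes PG; scenario path and agent mapping are
# placeholders, not values from the repository's examples.
from ray.rllib.algorithms.pg import PGConfig

from smarts.env.rllib_hiway_env import RLlibHiWayEnv

config = (
    PGConfig()
    .environment(
        RLlibHiWayEnv,
        env_config={
            "scenarios": ["scenarios/sumo/loop"],  # hypothetical path
            "agent_specs": {"Agent-0": None},  # replace with an AgentSpec
        },
    )
    .rollouts(num_rollout_workers=2)
)
algo = config.build()
for i in range(10):
    result = algo.train()
    print(i, result.get("episode_reward_mean"))
```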

### RL Environment
1. [ULTRA](https://github.com/smarts-project/smarts-project.rl/blob/master/ultra) provides a gym-based environment built upon SMARTS to tackle intersection navigation, specifically the unprotected left turn.
1. Read the docs :notebook_with_decorative_cover: at [smarts.readthedocs.io](https://smarts.readthedocs.io/) :fire:
1. [Base examples](https://smarts.readthedocs.io/en/latest/examples/base_examples.html)
1. [RL models](https://smarts.readthedocs.io/en/latest/examples/rl_model.html)

# Issues, Bugs, Feature Requests
1. First, read how to communicate issues, report bugs, and request features [here](./docs/resources/contributing.rst#communication).
18 changes: 11 additions & 7 deletions docs/ecosystem/rllib.rst
@@ -4,16 +4,20 @@
RLlib
=====

**RLlib** is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. ``RLlib`` natively supports ``TensorFlow``, ``TensorFlow Eager``, and ``PyTorch``. Most of its internals are agnostic to such deep learning frameworks.

SMARTS contains two examples using `Policy Gradients (PG) <https://docs.ray.io/en/latest/rllib-algorithms.html#policy-gradients-pg>`_.

#. Policy gradient

+ script: :examples:`e12_rllib/pg_example.py`
+ Shows the basics of using RLlib with SMARTS through :class:`~smarts.env.rllib_hiway_env.RLlibHiWayEnv`.

#. Policy gradient with Population Based Training

+ script: :examples:`e12_rllib/pg_pbt_example.py`
+ Combines Policy Gradients with `Population Based Training (PBT) <https://docs.ray.io/en/latest/tune/api/doc/ray.tune.schedulers.PopulationBasedTraining.html>`_ scheduling (a rough Tune scheduler sketch follows).
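
The PBT schedule in the second script is driven by Ray Tune. The snippet below is a rough sketch of such a scheduler, with assumed hyperparameter names, mutation bounds, and sample counts rather than the example's exact values:

.. code-block:: python

    # A rough PBT schedule over the PG learning rate; the mutation bounds,
    # metric, and sample count here are illustrative assumptions.
    from ray import tune
    from ray.tune.schedulers import PopulationBasedTraining

    pbt = PopulationBasedTraining(
        time_attr="training_iteration",
        perturbation_interval=5,
        hyperparam_mutations={"lr": tune.loguniform(1e-5, 1e-2)},
    )
    tuner = tune.Tuner(
        "PG",  # assumes a Ray release that still registers PG
        param_space={"env": "CartPole-v1", "lr": 1e-3},  # stand-in env ID
        tune_config=tune.TuneConfig(
            scheduler=pbt,
            num_samples=4,
            metric="episode_reward_mean",
            mode="max",
        ),
    )
    results = tuner.fit()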


Recommended reads
-----------------