diff --git a/README.md b/README.md
index 0a394314..c7cc8400 100644
--- a/README.md
+++ b/README.md
@@ -89,7 +89,8 @@ If you found this library to be useful in academic work, then please cite: ([arX
 **Deep learning**
 [Optax](https://github.com/deepmind/optax): first-order gradient (SGD, Adam, ...) optimisers.
 [Orbax](https://github.com/google/orbax): checkpointing (async/multi-host/multi-device).
-[Levanter](https://github.com/stanford-crfm/levanter): scalable+reliable training of foundation models (e.g. LLMs).
+[Levanter](https://github.com/stanford-crfm/levanter): scalable+reliable training of foundation models (e.g. LLMs).
+[paramax](https://github.com/danielward27/paramax): parameterizations and constraints for PyTrees.
 
 **Scientific computing**
 [Diffrax](https://github.com/patrick-kidger/diffrax): numerical differential equation solvers.
diff --git a/docs/index.md b/docs/index.md
index 3a5d5386..8c2b7708 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -78,7 +78,8 @@ If this quick start has got you interested, then have a read of [All of Equinox]
 **Deep learning**
 [Optax](https://github.com/deepmind/optax): first-order gradient (SGD, Adam, ...) optimisers.
 [Orbax](https://github.com/google/orbax): checkpointing (async/multi-host/multi-device).
-[Levanter](https://github.com/stanford-crfm/levanter): scalable+reliable training of foundation models (e.g. LLMs).
+[Levanter](https://github.com/stanford-crfm/levanter): scalable+reliable training of foundation models (e.g. LLMs).
+[paramax](https://github.com/danielward27/paramax): parameterizations and constraints for PyTrees.
 
 **Scientific computing**
 [Diffrax](https://github.com/patrick-kidger/diffrax): numerical differential equation solvers.