Simplified docs and added example page #116

Merged
merged 3 commits on Nov 17, 2024
Changes from all commits
44 changes: 21 additions & 23 deletions README.md
@@ -5,41 +5,39 @@

# NIR - Neuromorphic Intermediate Representation

[![Nature Communications Paper](https://zenodo.org/badge/DOI/10.1038/s41467-024-52259-9.svg)](https://doi.org/10.1038/s41467-024-52259-9)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/nir?logo=pypi)](https://pypi.org/project/nir/)
[![GitHub Tag](https://img.shields.io/github/v/tag/neuromorphs/nir?logo=github)](https://github.com/neuromorphs/NIR/releases)
[![Discord](https://img.shields.io/discord/1209533869733453844?logo=discord)](https://discord.gg/JRMRGP9h3c)

NIR is a set of computational primitives, shared across different neuromorphic frameworks and technology stacks.
**NIR is currently supported by 7 simulators and 4 hardware platforms**, allowing users to seamlessly move between any of these platforms.
The goal of NIR is to decouple the evolution of neuromorphic hardware and software, ultimately increasing the interoperability between platforms and improving accessibility to neuromorphic technologies.

## Installation
NIR is installable via [pip](https://pypi.org/):
```bash
pip install nir
```
NIR is useful when you want to move a model from one platform to another, for instance from a simulator to a hardware platform.

> See [which frameworks are currently supported by NIR](https://neuroir.org/docs/support.html).

## Usage
> Read more in our [documentation about NIR usage](https://neuroir.org/docs) and see more examples in our [examples section](https://neuroir.org/docs/examples)

NIR serves as a format between neuromorphic platforms and will be installed alongside your framework of choice.
Using NIR is typically part of your favorite framework's workflow, but it follows the same pattern whenever you want to move from a *source* to a *target* platform:

```python
import nir
# Define a model
my_model = ...
# Save the model (source platform)
nir.write("my_graph.nir", my_model)
# Load the model (target platform)
imported_graph = nir.read("my_graph.nir")
```

## About NIR
> Read more in our [documentation about NIR primitives](https://neuroir.org/docs/primitives.html)

On top of popular primitives such as convolutional or fully connected/linear computations, we define additional computational primitives that are specific to neuromorphic computing and hardware implementations thereof.
Computational units that are not specifically neuromorphic take inspiration from the PyTorch ecosystem in terms of naming and parameters (such as `Conv2d`, which uses groups and strides).
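As a concrete illustration, the sketch below assembles a few of these primitives into a small graph. It assumes the constructor signatures of the current `nir` Python API (`nir.Input`, `nir.Affine`, `nir.LIF`, `nir.Output`, `nir.NIRGraph`); all parameter values are placeholders.

```python
import numpy as np
import nir

# A two-input affine layer feeding a leaky integrate-and-fire neuron
graph = nir.NIRGraph(
    nodes={
        "input": nir.Input(input_type=np.array([2])),
        "affine": nir.Affine(weight=np.random.rand(1, 2), bias=np.random.rand(1)),
        "lif": nir.LIF(
            tau=np.array([0.01]),        # membrane time constant
            r=np.array([1.0]),           # resistance
            v_leak=np.array([0.0]),      # leak potential
            v_threshold=np.array([1.0]), # firing threshold
        ),
        "output": nir.Output(output_type=np.array([1])),
    },
    edges=[("input", "affine"), ("affine", "lif"), ("lif", "output")],
)
nir.write("my_graph.nir", graph)
```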

See our [example section](https://neuroir.org/docs/examples) for how to use NIR with your favorite framework.

## Frameworks that currently support NIR
> Read more in our [documentation about NIR support](https://neuroir.org/docs/support.html)

| **Framework** | **Write to NIR** | **Read from NIR** | **Examples** |
| --------------- | :--: | :--: | :------: |
@@ -54,7 +52,7 @@


## Acknowledgements
This work was originally conceived at the [Telluride Neuromorphic Workshop 2023](https://tellurideneuromorphic.org) by the authors below (in alphabetical order):
* [Steven Abreu](https://github.com/stevenabreu7)
* [Felix Bauer](https://github.com/bauerfe)
* [Jason Eshraghian](https://github.com/jeshraghian)
Expand All @@ -64,7 +62,7 @@ This work was originally conceived at the [Telluride Neuromorphic Workshop 2023]
* [Sadique Sheik](https://github.com/sheiksadique)
* [Peng Zhou](https://github.com/pengzhouzp)

If you use NIR in your work, please cite the [following paper](https://www.nature.com/articles/s41467-024-52259-9):

```
@article{NIR2024,
11 changes: 10 additions & 1 deletion docs/source/_config.yml
@@ -10,7 +10,16 @@ repository:
execute:
execute_notebooks: off

parse:
myst_enable_extensions:
- amsmath

launch_buttons:
notebook_interface: "jupyterlab"
binderhub_url: "https://mybinder.org/v2/gh/neuromorphs/nir/main?urlpath=lab"
colab_url: "https://colab.research.google.com"

sphinx:
extra_extensions:
- 'sphinx.ext.autodoc'

29 changes: 17 additions & 12 deletions docs/source/_toc.yml
@@ -14,19 +14,24 @@ parts:
- file: primitives
- file: support
title: Platform support
- caption: Examples
chapters:
- file: examples/index
sections:
- file: examples/lava/nir-conversion
- file: examples/nengo/nir-conversion
- file: examples/norse/nir-conversion
- file: examples/rockpool/nir-conversion
- file: examples/sinabs/nir-conversion
- file: examples/snntorch/nir-conversion
- file: examples/spinnaker2/import
- file: examples/spyx/conversion
- file: examples/snntorch_to_norse
- caption: Developer guide
chapters:
- file: porting_nir
- file: dev_pytorch
- file: dev_jax
- file: contributing
- caption: API documentation
chapters:
- file: api_design
- file: doctrees
4 changes: 4 additions & 0 deletions docs/source/contributing.md
@@ -1,5 +1,9 @@
# Contributing

NIR is a community-led initiative, and we welcome contributions from everyone.
Here, we outline some technical details on getting started.
Join the conversation on our [Discord server](https://discord.gg/JRMRGP9h3c) or [GitHub](https://github.com/neuromorphs/nir) if you have any questions.

## Developer guide: Getting started

Use the standard GitHub workflow.
6 changes: 6 additions & 0 deletions docs/source/dev_jax.md
@@ -0,0 +1,6 @@
# Developing JAX extensions

JAX is a popular deep learning framework that more and more of the NIR-supported libraries are built on.
For PyTorch, we have built the [`nirtorch` package](https://github.com/neuromorphs/nirtorch), but *no such package exists for JAX*.
If you're interested in developing such a package, please reach out to us, either on [Discord](https://discord.gg/JRMRGP9h3c) or by [opening an issue](https://github.com/neuromorphs/NIR/issues).
69 changes: 69 additions & 0 deletions docs/source/dev_pytorch.md
@@ -0,0 +1,69 @@
# Developing PyTorch extensions

PyTorch is a popular deep learning framework that many of the NIR-supported libraries are built on.
We have built the [`nirtorch` package](https://github.com/neuromorphs/nirtorch) to make it easier to develop PyTorch extensions for the NIR-supported libraries.
`nirtorch` helps you write PyTorch code that (1) exports NIR models from PyTorch and (2) imports NIR models into PyTorch.

## Exporting NIR models from PyTorch
Exporting a NIR model requires two things: exporting the model's nodes and exporting its edges.

### Exporting edges
Exporting edges is slightly complicated, because PyTorch modules can have multiple inputs and outputs, and because modules are connected via function calls that only happen at runtime.
Therefore, we need to trace the PyTorch module with some sample input to recover the edges.
Luckily, the `nirtorch` package does exactly that.
It works behind the scenes, but you can read about it in the [`to_nir.py` file in `nirtorch`](https://github.com/neuromorphs/NIRTorch/blob/main/nirtorch/to_nir.py#L11).

### Exporting nodes
The only thing we really have to do to use `nirtorch` is to export modules.
Since all PyTorch modules inherit from the `torch.nn.Module` class, exporting the nodes is straightforward: we simply need a function that looks at a PyTorch module and returns the corresponding NIR node.
Assume this is done in a function called `export_node`.

```python
import nir
import torch

class MyModule(torch.nn.Module):
    weight: torch.Tensor
    bias: torch.Tensor

def export_node(module: torch.nn.Module) -> nir.NIRNode:
    # Export a recognized PyTorch module to the corresponding NIR node
    if isinstance(module, MyModule):
        return nir.Affine(module.weight, module.bias)
    ...
```
This example converts a custom linear module to a NIR `Affine` node (NIR's `Linear` primitive has no bias term, as the primitives table shows).

### Putting it all together
The following code is a snippet adapted from the [Norse library](https://github.com/norse/norse) that demonstrates how to export custom PyTorch models to NIR using the `nirtorch` package.
Note that we only have to declare the `export_node` function for each custom module we want to export.
The edges are traced automatically by the `nirtorch` package.

```python
from typing import Optional

import nir
import torch
from nirtorch import extract_nir_graph
from norse.torch import LIFBoxCell

def _extract_norse_module(module: torch.nn.Module) -> Optional[nir.NIRNode]:
    if isinstance(module, LIFBoxCell):
        return nir.LIF(
            tau=module.p.tau_mem_inv,
            v_th=module.p.v_th,
            v_leak=module.p.v_leak,
            r=torch.ones_like(module.p.v_leak),
        )
    elif isinstance(module, torch.nn.Linear):
        return nir.Affine(module.weight, module.bias)
    elif ...

    return None

def to_nir(
    module: torch.nn.Module, sample_data: torch.Tensor, model_name: str = "norse"
) -> nir.NIRNode:
    return extract_nir_graph(
        module, _extract_norse_module, sample_data, model_name=model_name
    )
```
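As a usage sketch (the network below and its shapes are made up for illustration), exporting a Norse model then amounts to:

```python
import nir
import torch
import norse.torch as norse

# Hypothetical two-layer network; shapes and parameters are placeholders
model = norse.SequentialState(
    torch.nn.Linear(10, 5),
    norse.LIFBoxCell(),
)
graph = to_nir(model, torch.randn(1, 10))  # sample input traces the edges
nir.write("norse_model.nir", graph)
```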

## Importing NIR models into PyTorch
Importing NIR models into PyTorch with `nirtorch` is also straightforward.
Assuming you have a NIR graph in the Python object `nir_graph` (see [Usage](#usage)), you supply a mapping from NIR nodes back to your framework's modules, and `nirtorch` reconstructs the graph as a PyTorch module.
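A minimal sketch of the import direction, assuming `nirtorch` exposes a `load` function that takes the graph and a node-to-module mapping (check the current `nirtorch` API for the exact name and signature; the `_to_torch` helper below is hypothetical):

```python
import nir
import nirtorch
import torch

def _to_torch(node: nir.NIRNode) -> torch.nn.Module:
    # Map each NIR node back to a framework-specific module
    if isinstance(node, nir.Affine):
        linear = torch.nn.Linear(node.weight.shape[1], node.weight.shape[0])
        linear.weight.data = torch.from_numpy(node.weight).float()
        linear.bias.data = torch.from_numpy(node.bias).float()
        return linear
    raise NotImplementedError(f"Unsupported node type: {type(node)}")

nir_graph = nir.read("my_graph.nir")
torch_module = nirtorch.load(nir_graph, _to_torch)
```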
33 changes: 33 additions & 0 deletions docs/source/examples/index.md
@@ -0,0 +1,33 @@
# Code examples

NIR can be used to *export* or *import* models.
*Exporting* is when you convert a model from a source platform to NIR, and *importing* is when you convert a model from NIR to a target platform.
One typical workflow is to *export* a model from a simulator and *import* it to a hardware platform.

In the menu, you will find examples of how to use NIR with your favorite framework.
Note, however, that some frameworks only support importing or exporting.

## Writing to and reading from files with NIR
While NIR is typically integrated into your favorite framework, NIR supports writing to and reading from files directly.
This is useful when you want to send a model over email, store it for later, or share it with a colleague.

### Writing to a file
To write a model to a file, use the `nir.write` function.
Note that this requires you to provide a NIR model, so you need to find a way to convert your model to NIR within your framework.
The `nir.write` function takes two arguments: the file path and the model to write.
```python
import nir
my_nir_graph = ...
nir.write("my_graph.nir", my_model)
```

### Reading from a file
To read a model from a file, use the `nir.read` function.
This function takes a single argument: the file path.
```python
import nir
imported_graph = nir.read("my_graph.nir")
```

This gives you a NIR model, which then needs to be converted to your framework's model format.
The NIR graph itself is just a data structure.
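Since the graph is plain data, you can inspect it directly. A small sketch, assuming the graph object exposes `nodes` (a dictionary of named primitives) and `edges` (a list of name pairs):

```python
import nir

imported_graph = nir.read("my_graph.nir")
# List the named nodes and the edges connecting them
for name, node in imported_graph.nodes.items():
    print(name, type(node).__name__)
print(imported_graph.edges)
```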
1 change: 1 addition & 0 deletions docs/source/examples/lava/test.md
@@ -0,0 +1 @@
# Test
37 changes: 19 additions & 18 deletions docs/source/primitives.md
@@ -10,24 +10,25 @@ But, if you plan to execute the graph on restricted neuromorphic hardware, pleas

NIR defines 16 fundamental primitives listed in the table below, which backends are free to implement as they want, leading to varying outputs across platforms. While discrepancies could be minimized by constraining implementations or making backends aware of each other's discretization choices, NIR does not do this since it is declarative, specifying only the necessary inputs and outputs. Constraining implementations would cause hardware incompatibilities and making backends aware of each other could create large O(N^2) overhead for N backends. The primitives are already computationally expressive and able to solve complex PDEs.

| Primitive | Parameters | Computation | Reset |
|------------------------------------|---------------------------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------------------------------------|
| **Input** | Input shape | - | - |
| **Output** | Output shape | - | - |
| **Affine** | $W, b$ | $W \cdot I + b$ | - |
| **Convolution** | $W$, Stride, Padding, Dilation, Groups, Bias | $f \star g$ | - |
| **Current-based leaky integrate-and-fire** | $\tau_\text{syn}, \tau_\text{mem}, R, v_\text{leak}, v_\text{thr}, w_\text{in}$ | **LI**; **Linear**; **LIF** | $\begin{cases} v_\text{LIF} - v_\text{thr} & \text{Spike} \\ v_\text{LIF} & \text{else} \end{cases}$ |
| **Delay** | $\tau$ | $I(t - \tau)$ | - |
| **Flatten** | Input shape, Start dim., End dim. | - | - |
| **Integrator** | $R$ | $\dot{v} = R I$ | - |
| **Integrate-and-fire** | $R, v_\text{thr}$ | **Integrator**; **Threshold** | $\begin{cases} v - v_\text{thr} & \text{Spike} \\ v & \text{else} \end{cases}$ |
| **Leaky integrator (LI)** | $\tau, R, v_\text{leak}$ | $\tau \dot{v} = (v_\text{leak} - v) + R I$ | - |
| **Linear** | $W$ | $W I$ | - |
| **Leaky integrate-fire (LIF)** | $\tau, R, v_\text{leak}, v_\text{thr}$ | **LI**; **Threshold** | $\begin{cases} v - v_\text{thr} & \text{Spike} \\ v & \text{else} \end{cases}$ |
| **Scale** | $s$ | $s I$ | - |
| **SumPooling** | $p$ | $\sum_{j} x_j$ | - |
| **AvgPooling** | $p$ | **SumPooling**; **Scale** | - |
| **Spike**                          | $\theta_\text{thr}$                                                       | $\delta(I - \theta_\text{thr})$                          | -                                                                                        |


Each primitive is defined by its own dynamical equation, specified in the [API docs](https://nnir.readthedocs.io/en/latest/).
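To make the declarative nature concrete, consider the LI/LIF dynamics from the table, $\tau \dot{v} = (v_\text{leak} - v) + R I$. A backend is free to discretize this however it likes; the following forward-Euler sketch is one possible choice, not a prescribed implementation (the step size `dt` and all parameter values are illustrative):

```python
import numpy as np

def lif_step(v, i, dt=1e-3, tau=1e-2, r=1.0, v_leak=0.0, v_thr=1.0):
    # Forward-Euler update of: tau * dv/dt = (v_leak - v) + R * I
    v = v + (dt / tau) * ((v_leak - v) + r * i)
    spike = v >= v_thr                 # threshold crossing
    v = np.where(spike, v - v_thr, v)  # subtractive reset, as in the table
    return v, spike
```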
