docs: Cleaned documentation for pypi release (#31)
* docs: Updated docstrings

* docs: Cleaned documentation

* docs: Added documentation website referencing

* docs: Fixed typo in Contributing

* docs: Cleaned documentation index

* style: Fixed lint
frgfm authored May 11, 2020
1 parent d0c789d commit 9b3f927
Showing 17 changed files with 83 additions and 26 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -29,7 +29,7 @@ Use Github [issues](https://github.com/frgfm/Holocron/issues) for feature requests



## Developing torchcam
## Developing holocron


### Commits
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -76,6 +76,7 @@
'collapse_navigation': False,
'display_version': True,
'logo_only': False,
'analytics_id': 'UA-148140560-2',
}

# Add any paths that contain custom static files (such as style sheets) here,
9 changes: 8 additions & 1 deletion docs/source/index.rst
@@ -4,9 +4,16 @@ holocron
The :mod:`holocron` package aggregates implementations of recent deep
learning tricks in computer vision, easily paired up with your favorite framework and model zoo.


.. toctree::
:maxdepth: 2
:caption: Package Reference
:caption: Getting Started

installing

.. toctree::
:maxdepth: 1
:caption: Package Documentation

models
nn
36 changes: 36 additions & 0 deletions docs/source/installing.rst
@@ -0,0 +1,36 @@

************
Installation
************

This library requires Python 3.6 or newer.

Via Python Package
==================

Install the latest stable release of the package using pip:

.. code:: bash

    pip install pylocron


Via Conda
=========

Install the latest stable release of the package using conda:

.. code:: bash

    conda install -c frgfm pylocron


Via Git
=======

Install the library in developer mode:

.. code:: bash

    git clone https://github.com/frgfm/Holocron.git
    pip install -e Holocron/.
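
Once installed through any of the routes above, a quick sanity check confirms the package resolves under its
import name. This is a minimal sketch; it assumes the ``pylocron`` distribution is imported as ``holocron``
and exposes the submodules documented in this release:

.. code:: python

    # Sanity check after installation: the distribution is named "pylocron",
    # but the import name is assumed to be "holocron" (as the docs above use).
    import holocron
    from holocron import nn, ops, optim  # submodules referenced in this documentation
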
8 changes: 3 additions & 5 deletions docs/source/nn.rst
@@ -1,11 +1,9 @@
.. role:: hidden
:class: hidden-section


holocron.nn
============

.. automodule:: holocron.nn
An addition to the :mod:`torch.nn` module of PyTorch to extend the range of neural network building blocks.


.. currentmodule:: holocron.nn

Non-linear activations
7 changes: 4 additions & 3 deletions docs/source/ops.rst
@@ -1,7 +1,5 @@
holocron.ops
===============

.. automodule:: holocron.ops
============

.. currentmodule:: holocron.ops

@@ -10,5 +8,8 @@ holocron.ops
.. note::
Those operators currently do not support TorchScript.

Boxes
-----

.. autofunction:: box_diou
.. autofunction:: box_ciou
4 changes: 3 additions & 1 deletion docs/source/optim.rst
@@ -11,6 +11,8 @@ the current state and will update the parameters based on the computed gradients
Optimizers
----------

Implementations of recent parameter optimizers for PyTorch modules.

.. autoclass:: Lamb

.. autoclass:: Lars
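
For orientation, a minimal usage sketch of one of these optimizers. It assumes the classes are exposed under
:mod:`holocron.optim` and follow the usual ``torch.optim`` constructor convention (parameters first, then
hyper-parameters such as ``lr``), as their docstrings suggest:

.. code:: python

    import torch
    from holocron.optim import Lamb  # exposure under holocron.optim is assumed

    model = torch.nn.Linear(10, 2)
    optimizer = Lamb(model.parameters(), lr=1e-3)  # used as a drop-in torch optimizer

    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()
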
@@ -23,7 +25,7 @@ Optimizers
Optimizer wrappers
------------------

:mod:`holocron.optim` implements optimizer wrappers.
:mod:`holocron.optim` also implements optimizer wrappers.

A base optimizer should always be passed to the wrapper; e.g., you
should write your code this way:
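
For instance, a minimal sketch of this pattern, assuming :class:`Lookahead` is exposed under
:mod:`holocron.optim` and takes the base optimizer as its first argument (as its docstring below indicates):

.. code:: python

    import torch
    from holocron.optim import Lookahead  # import path assumed

    model = torch.nn.Linear(10, 2)
    base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    optimizer = Lookahead(base_optimizer)  # the wrapper drives the wrapped SGD steps

    model(torch.randn(4, 10)).sum().backward()
    optimizer.step()
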
6 changes: 4 additions & 2 deletions holocron/nn/modules/activation.py
@@ -11,7 +11,8 @@


class Mish(nn.Module):
"""Implements the Mish activation module from https://arxiv.org/pdf/1908.08681.pdf"""
"""Implements the Mish activation module from `"Mish: A Self Regularized Non-Monotonic Neural Activation Function"
<https://arxiv.org/pdf/1908.08681.pdf>`_"""

def __init__(self):
super(Mish, self).__init__()
@@ -21,7 +22,8 @@ def forward(self, input):


class NLReLU(nn.Module):
"""Implements the Natural-Logarithm ReLU activation module from https://arxiv.org/pdf/1908.03682.pdf
"""Implements the Natural-Logarithm ReLU activation module from `"Natural-Logarithm-Rectified Activation
Function in Convolutional Neural Networks" <https://arxiv.org/pdf/1908.03682.pdf>`_
Args:
inplace (bool): should the operation be performed inplace
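
For context, a minimal usage sketch of these activation modules; the import path is assumed from
docs/source/nn.rst, which documents them under :mod:`holocron.nn`:

.. code:: python

    import torch
    from holocron.nn import Mish, NLReLU  # import path assumed

    x = torch.randn(4, 8)
    y1 = Mish()(x)    # element-wise x * tanh(softplus(x)), per the Mish paper
    y2 = NLReLU()(x)  # natural-logarithm rectified activation; in-place use is controlled by `inplace`
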
4 changes: 2 additions & 2 deletions holocron/nn/modules/downsample.py
@@ -11,8 +11,8 @@


class ConcatDownsample2d(nn.Module):
"""Implements a loss-less downsampling operation described in https://pjreddie.com/media/files/papers/YOLO9000.pdf
by stacking adjacent information on the channel dimension.
"""Implements a loss-less downsampling operation described in `"YOLO9000: Better, Faster, Stronger"
<https://pjreddie.com/media/files/papers/YOLO9000.pdf>`_ by stacking adjacent information on the channel dimension.
Args:
scale_factor (int): spatial scaling factor
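
To make the shape contract concrete, a small sketch; the import path and constructor signature are assumed
from the docstring, and the channel growth by ``scale_factor ** 2`` follows from stacking spatial neighbours
onto the channel dimension:

.. code:: python

    import torch
    from holocron.nn import ConcatDownsample2d  # import path assumed

    x = torch.rand(1, 64, 32, 32)
    y = ConcatDownsample2d(scale_factor=2)(x)
    print(y.shape)  # expected: torch.Size([1, 256, 16, 16]) -- channels x4, spatial dims halved
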
2 changes: 1 addition & 1 deletion holocron/nn/modules/loss.py
@@ -24,7 +24,7 @@ def __init__(self, weight=None, ignore_index=-100, reduction='mean'):
self.ignore_index = ignore_index
# Set the reduction method
if reduction not in ['none', 'mean', 'sum']:
raise NotImplementedError(f"argument reduction received an incorrect input")
raise NotImplementedError("argument reduction received an incorrect input")
else:
self.reduction = reduction

6 changes: 4 additions & 2 deletions holocron/ops/boxes.py
@@ -51,7 +51,8 @@ def iou_penalty(boxes1, boxes2):


def box_diou(boxes1, boxes2):
"""Computes the Distance-IoU loss as described in https://arxiv.org/pdf/1911.08287.pdf
"""Computes the Distance-IoU loss as described in `"Distance-IoU Loss: Faster and Better Learning for
Bounding Box Regression" <https://arxiv.org/pdf/1911.08287.pdf>`_
Args:
boxes1 (torch.Tensor[M, 4]): bounding boxes
@@ -96,7 +97,8 @@ def aspect_ratio_consistency(boxes1, boxes2):


def box_ciou(boxes1, boxes2):
"""Computes the Complete IoU loss as described in https://arxiv.org/pdf/1911.08287.pdf
"""Computes the Complete IoU loss as described in `"Distance-IoU Loss: Faster and Better Learning for
Bounding Box Regression" <https://arxiv.org/pdf/1911.08287.pdf>`_
Args:
boxes1 (torch.Tensor[M, 4]): bounding boxes
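
To illustrate the call pattern of both functions, a small sketch; the ``(x1, y1, x2, y2)`` box format and the
pairwise ``M x N`` output shape are assumptions drawn from the docstring shapes rather than guarantees:

.. code:: python

    import torch
    from holocron.ops import box_diou, box_ciou

    boxes1 = torch.tensor([[0., 0., 100., 100.], [50., 50., 150., 150.]])  # M = 2 boxes
    boxes2 = torch.tensor([[25., 25., 125., 125.]])                        # N = 1 box

    diou = box_diou(boxes1, boxes2)  # pairwise Distance-IoU loss values, expected shape (2, 1)
    ciou = box_ciou(boxes1, boxes2)  # pairwise Complete-IoU loss values, expected shape (2, 1)
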
3 changes: 2 additions & 1 deletion holocron/optim/lamb.py
@@ -9,7 +9,8 @@


class Lamb(Optimizer):
"""Implements the Lamb optimizer from https://arxiv.org/pdf/1904.00962v3.pdf.
"""Implements the Lamb optimizer from `"Large batch optimization for deep learning: training BERT in 76 minutes"
<https://arxiv.org/pdf/1904.00962v3.pdf>`_.
Args:
params (iterable): iterable of parameters to optimize or dicts defining parameter groups
3 changes: 2 additions & 1 deletion holocron/optim/lars.py
@@ -5,7 +5,8 @@


class Lars(Optimizer):
r"""Implements the LARS optimizer from https://arxiv.org/pdf/1708.03888.pdf
r"""Implements the LARS optimizer from `"Large batch training of convolutional networks"
<https://arxiv.org/pdf/1708.03888.pdf>`_.
Args:
params (iterable): iterable of parameters to optimize or dicts defining
4 changes: 3 additions & 1 deletion holocron/optim/lr_scheduler.py
@@ -14,7 +14,9 @@


class OneCycleScheduler(_LRScheduler):
"""Implements the One Cycle scheduler from https://arxiv.org/pdf/1803.09820.pdf
"""Implements the One Cycle scheduler from `"A disciplined approach to neural network hyper-parameters"
<https://arxiv.org/pdf/1803.09820.pdf>`_. Please note that this implementation was made before pytorch supports it,
using the official Pytorch implementation is advised.
Args:
optimizer (Optimizer): Wrapped optimizer.
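
Since the docstring itself recommends the now-native PyTorch scheduler, a sketch of that route follows;
the hyper-parameter values are purely illustrative:

.. code:: python

    import torch

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # torch.optim.lr_scheduler.OneCycleLR is the official implementation referenced above
    scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.1, total_steps=100)

    for _ in range(100):
        optimizer.step()   # one optimization step per iteration (gradients omitted for brevity)
        scheduler.step()   # anneal the learning rate along the one-cycle policy
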
3 changes: 2 additions & 1 deletion holocron/optim/radam.py
@@ -10,7 +10,8 @@


class RAdam(Optimizer):
"""Implements the RAdam optimizer from https://arxiv.org/pdf/1908.03265.pdf
"""Implements the RAdam optimizer from `"On the variance of the Adaptive Learning Rate and Beyond"
<https://arxiv.org/pdf/1908.03265.pdf>`_.
Args:
params (iterable): iterable of parameters to optimize or dicts defining parameter groups
5 changes: 3 additions & 2 deletions holocron/optim/ralars.py
@@ -10,8 +10,9 @@


class RaLars(Optimizer):
"""Implements the RAdam optimizer from https://arxiv.org/pdf/1908.03265.pdf
with optional Layer-wise adaptive Scaling from https://arxiv.org/pdf/1708.03888.pdf
"""Implements the RAdam optimizer from `"On the variance of the Adaptive Learning Rate and Beyond"
<https://arxiv.org/pdf/1908.03265.pdf>`_ with optional Layer-wise adaptive Scaling from
`"Large Batch Training of Convolutional Networks" <https://arxiv.org/pdf/1708.03888.pdf>`_
Args:
params (iterable): iterable of parameters to optimize or dicts defining parameter groups
6 changes: 4 additions & 2 deletions holocron/optim/wrapper.py
@@ -13,7 +13,8 @@


class Lookahead(Optimizer):
"""Implements the Lookahead optimizer wrapper from https://arxiv.org/pdf/1907.08610.pdf
"""Implements the Lookahead optimizer wrapper from `"Lookahead Optimizer: k steps forward, 1 step back"
<https://arxiv.org/pdf/1907.08610.pdf>`_.
Args:
base_optimizer (torch.optim.optimizer.Optimizer): base parameter optimizer
@@ -131,7 +132,8 @@ def sync_params(self, sync_rate=0):


class Scout(Optimizer):
"""Implements a new optimizer wrapper based on the initial Lookahead paper https://arxiv.org/pdf/1907.08610.pdf
"""Implements a new optimizer wrapper based on `"Lookahead Optimizer: k steps forward, 1 step back"
<https://arxiv.org/pdf/1907.08610.pdf>`_.
Args:
base_optimizer (torch.optim.optimizer.Optimizer): base parameter optimizer
