Use @autodocs and fix references
Saransh-cpp committed Aug 26, 2022
1 parent ef763ef commit 5ac81ce
Showing 25 changed files with 50 additions and 70 deletions.
2 changes: 1 addition & 1 deletion docs/make.jl
@@ -11,7 +11,7 @@ makedocs(modules = [Metalhead, Artifacts, LazyArtifacts, Images, OneHotArrays, D
     ],
     "Developer guide" => "contributing.md",
     "API reference" => [
-        "api/models.md",
+        "api/reference.md",
     ],
 ],
 format = Documenter.HTML(
30 changes: 0 additions & 30 deletions docs/src/api/models.md

This file was deleted.

14 changes: 14 additions & 0 deletions docs/src/api/reference.md
@@ -0,0 +1,14 @@
+# API Reference
+
+The API reference of `Metalhead.jl`.
+
+**Note**:
+
+```@autodocs
+Modules = [Metalhead]
+```
+
+```@docs
+Metalhead.squeeze_excite
+Metalhead.LayerScale
+```
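For context: Documenter can only resolve a `[name](@ref)` link if the target's docstring is rendered somewhere in the manual, which the `@autodocs` block above now guarantees for everything in `Metalhead`. A minimal sketch of a docstring that relies on this (the function name is hypothetical, not part of Metalhead):

```julia
"""
    toy_builder(nclasses)

Builds a toy classification head with `nclasses` outputs.

See also [`Metalhead.resnet`](@ref); the link resolves because `resnet`'s
docstring is collected by the `@autodocs` block in `docs/src/api/reference.md`.
"""
toy_builder(nclasses) = identity  # placeholder body, for illustration only
```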
2 changes: 1 addition & 1 deletion docs/src/contributing.md
@@ -16,7 +16,7 @@ To add a new model architecture to Metalhead.jl, you can [open a PR](https://git
 
 - reuse layers from Flux as much as possible (e.g. use `Parallel` before defining a `Bottleneck` struct)
 - adhere as closely as possible to a reference such as a published paper (i.e. the structure of your model should follow intuitively from the paper)
-- use generic functional builders (e.g. [`resnet`](#) is the core function that builds "ResNet-like" models)
+- use generic functional builders (e.g. [`Metalhead.resnet`](@ref) is the core function that builds "ResNet-like" models)
 - use multiple dispatch to add convenience constructors that wrap your functional builder
 
 When in doubt, just open a PR! We are more than happy to help review your code to help it align with the rest of the library. After adding a model, you might consider adding some pre-trained weights (see below).
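To illustrate the builder-plus-constructor convention from the last two guidelines, a hedged sketch (all names here are hypothetical, not part of Metalhead):

```julia
using Flux

# Generic functional builder: assembles and returns the raw layer stack.
function toynet(nblocks::Integer; inchannels::Integer = 3, nclasses::Integer = 1000)
    stem = Conv((3, 3), inchannels => 16; pad = 1)
    blocks = [Chain(Conv((3, 3), 16 => 16; pad = 1), relu) for _ in 1:nblocks]
    head = Chain(GlobalMeanPool(), Flux.flatten, Dense(16 => nclasses))
    return Chain(stem, blocks..., head)
end

# Convenience constructor wrapping the builder, mirroring the `ResNet(depth)`
# style used throughout the library.
struct ToyNet
    layers::Any
end
ToyNet(depth::Integer; kwargs...) = ToyNet(toynet(depth; kwargs...))
(m::ToyNet)(x) = m.layers(x)
```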
2 changes: 1 addition & 1 deletion docs/src/tutorials/quickstart.md
@@ -4,7 +4,7 @@
 using Flux, Metalhead
 ```
 
-Using a model from Metalhead is as simple as selecting a model from the table of [available models](#). For example, below we use the pre-trained ResNet-18 model.
+Using a model from Metalhead is as simple as selecting a model from the table of [available models](@ref API-Reference). For example, below we use the pre-trained ResNet-18 model.
 ```julia
 using Flux, Metalhead
 
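For readers following along, the code block this hunk truncates continues roughly as below (a sketch; it assumes the ResNet-18 weight artifact is available locally):

```julia
using Flux, Metalhead

# Load ResNet-18 with pre-trained ImageNet weights and run a forward pass.
model = ResNet(18; pretrain = true)
x = rand(Float32, 224, 224, 3, 1)  # WHCN layout expected by Flux convolutions
logits = model(x)                  # 1000-way ImageNet class scores
```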
2 changes: 1 addition & 1 deletion src/convnets/alexnet.jl
@@ -44,7 +44,7 @@ Create a `AlexNet`.
 `AlexNet` does not currently support pretrained weights.
-See also [`alexnet`](#).
+See also [`alexnet`](@ref).
 """
 struct AlexNet
     layers::Any
6 changes: 3 additions & 3 deletions src/convnets/convnext.jl
@@ -8,7 +8,7 @@ Creates a single block of ConvNeXt.
 - `planes`: number of input channels.
 - `drop_path_rate`: Stochastic depth rate.
-- `layerscale_init`: Initial value for [`LayerScale`](#)
+- `layerscale_init`: Initial value for [`Metalhead.LayerScale`](@ref)
 """
 function convnextblock(planes::Integer, drop_path_rate = 0.0, layerscale_init = 1.0f-6)
     layers = SkipConnection(Chain(DepthwiseConv((7, 7), planes => planes; pad = 3),
@@ -34,7 +34,7 @@ Creates the layers for a ConvNeXt model.
 - `depths`: list with configuration for depth of each block
 - `planes`: list with configuration for number of output channels in each block
 - `drop_path_rate`: Stochastic depth rate.
-- `layerscale_init`: Initial value for [`LayerScale`](#)
+- `layerscale_init`: Initial value for [`Metalhead.LayerScale`](@ref)
   ([reference](https://arxiv.org/abs/2103.17239))
 - `inchannels`: number of input channels.
 - `nclasses`: number of output classes
@@ -87,7 +87,7 @@ Creates a ConvNeXt model.
 - `inchannels`: number of input channels
 - `nclasses`: number of output classes
-See also [`Metalhead.convnext`](#).
+See also [`Metalhead.convnext`](@ref).
 """
 struct ConvNeXt
     layers::Any
4 changes: 2 additions & 2 deletions src/convnets/densenet.jl
@@ -65,7 +65,7 @@ Create a DenseNet model
 - `inplanes`: the number of input feature maps to the first dense block
 - `growth_rates`: the growth rates of output feature maps within each
-  [`dense_block`](#) (a vector of vectors)
+  [`dense_block`](@ref) (a vector of vectors)
 - `reduction`: the factor by which the number of feature maps is scaled across each transition
 - `dropout_rate`: the dropout rate for the classifier head. Set to `nothing` to disable dropout.
 - `nclasses`: the number of output classes
@@ -125,7 +125,7 @@ Set `pretrain = true` to load the model with pre-trained weights for ImageNet.
 `DenseNet` does not currently support pretrained weights.
-See also [`Metalhead.densenet`](#).
+See also [`Metalhead.densenet`](@ref).
 """
 struct DenseNet
     layers::Any
2 changes: 1 addition & 1 deletion src/convnets/inceptions/googlenet.jl
@@ -71,7 +71,7 @@ Create an Inception-v1 model (commonly referred to as `GoogLeNet`)
 `GoogLeNet` does not currently support pretrained weights.
-See also [`googlenet`](#).
+See also [`googlenet`](@ref).
 """
 struct GoogLeNet
     layers::Any
2 changes: 1 addition & 1 deletion src/convnets/inceptions/inceptionv3.jl
@@ -160,7 +160,7 @@ end
     Inceptionv3(; pretrain::Bool = false, inchannels::Integer = 3, nclasses::Integer = 1000)
 
 Create an Inception-v3 model ([reference](https://arxiv.org/abs/1512.00567v3)).
-See also [`inceptionv3`](#).
+See also [`inceptionv3`](@ref).
 # Arguments
2 changes: 1 addition & 1 deletion src/convnets/mobilenets/mobilenetv1.jl
@@ -73,7 +73,7 @@ Create a MobileNetv1 model with the baseline configuration
 `MobileNetv1` does not currently support pretrained weights.
-See also [`mobilenetv1`](#).
+See also [`Metalhead.mobilenetv1`](@ref).
 """
 struct MobileNetv1
     layers::Any
6 changes: 3 additions & 3 deletions src/convnets/resnets/core.jl
@@ -19,7 +19,7 @@ Creates a basic residual block (see [reference](https://arxiv.org/abs/1512.03385
 - `revnorm`: set to `true` to place the normalisation layer before the convolution
 - `drop_block`: the drop block layer
 - `drop_path`: the drop path layer
-- `attn_fn`: the attention function to use. See [`squeeze_excite`](#) for an example.
+- `attn_fn`: the attention function to use. See [`Metalhead.squeeze_excite`](@ref) for an example.
 """
 function basicblock(inplanes::Integer, planes::Integer; stride::Integer = 1,
                     reduction_factor::Integer = 1, activation = relu,
@@ -60,7 +60,7 @@ Creates a bottleneck residual block (see [reference](https://arxiv.org/abs/1512.
 - `revnorm`: set to `true` to place the normalisation layer before the convolution
 - `drop_block`: the drop block layer
 - `drop_path`: the drop path layer
-- `attn_fn`: the attention function to use. See [`squeeze_excite`](#) for an example.
+- `attn_fn`: the attention function to use. See [`Metalhead.squeeze_excite`](@ref) for an example.
 """
 function bottleneck(inplanes::Integer, planes::Integer; stride::Integer,
                     cardinality::Integer = 1, base_width::Integer = 64,
@@ -137,7 +137,7 @@ end
     resnet_stem(; stem_type = :default, inchannels::Integer = 3, replace_stem_pool = false,
                 norm_layer = BatchNorm, activation = relu)
-Builds a stem to be used in a ResNet model. See the `stem` argument of [`resnet`](#) for details
+Builds a stem to be used in a ResNet model. See the `stem` argument of [`Metalhead.resnet`](@ref) for details
 on how to use this function.
 # Arguments
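Since both block builders accept `attn_fn` the same way, a usage sketch may help; it assumes `Metalhead.squeeze_excite` takes the number of feature planes as its only positional argument, as the docstrings above suggest:

```julia
using Metalhead

# A basic residual block whose output is gated by squeeze-and-excitation.
block = Metalhead.basicblock(64, 64;
                             attn_fn = planes -> Metalhead.squeeze_excite(planes))
```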
2 changes: 1 addition & 1 deletion src/convnets/resnets/res2net.jl
@@ -19,7 +19,7 @@ Creates a bottleneck block as described in the Res2Net paper.
 - `activation`: the activation function to use.
 - `norm_layer`: the normalization layer to use.
 - `revnorm`: set to `true` to place the batch norm before the convolution
-- `attn_fn`: the attention function to use. See [`squeeze_excite`](#) for an example.
+- `attn_fn`: the attention function to use. See [`Metalhead.squeeze_excite`](@ref) for an example.
 """
 function bottle2neck(inplanes::Integer, planes::Integer; stride::Integer = 1,
                      cardinality::Integer = 1, base_width::Integer = 26,
4 changes: 2 additions & 2 deletions src/convnets/resnets/resnet.jl
@@ -11,7 +11,7 @@ Creates a ResNet model with the specified depth.
 - `inchannels`: The number of input channels.
 - `nclasses`: the number of output classes
-Advanced users who want more configuration options will be better served by using [`resnet`](#).
+Advanced users who want more configuration options will be better served by using [`Metalhead.resnet`](@ref).
 """
 struct ResNet
     layers::Any
@@ -48,7 +48,7 @@ The number of channels in outer 1x1 convolutions is the same.
 - `inchannels`: The number of input channels.
 - `nclasses`: The number of output classes
-Advanced users who want more configuration options will be better served by using [`resnet`](#).
+Advanced users who want more configuration options will be better served by using [`Metalhead.resnet`](@ref).
 """
 struct WideResNet
     layers::Any
2 changes: 1 addition & 1 deletion src/convnets/resnets/resnext.jl
@@ -20,7 +20,7 @@ Creates a ResNeXt model with the specified depth, cardinality, and base width.
 - `inchannels`: the number of input channels.
 - `nclasses`: the number of output classes
-Advanced users who want more configuration options will be better served by using [`resnet`](#).
+Advanced users who want more configuration options will be better served by using [`Metalhead.resnet`](@ref).
 """
 struct ResNeXt
     layers::Any
4 changes: 2 additions & 2 deletions src/convnets/resnets/seresnet.jl
@@ -15,7 +15,7 @@ Creates a SEResNet model with the specified depth.
 `SEResNet` does not currently support pretrained weights.
-Advanced users who want more configuration options will be better served by using [`resnet`](#).
+Advanced users who want more configuration options will be better served by using [`Metalhead.resnet`](@ref).
 """
 struct SEResNet
     layers::Any
@@ -58,7 +58,7 @@ Creates a SEResNeXt model with the specified depth, cardinality, and base width.
 `SEResNeXt` does not currently support pretrained weights.
-Advanced users who want more configuration options will be better served by using [`resnet`](#).
+Advanced users who want more configuration options will be better served by using [`Metalhead.resnet`](@ref).
 """
 struct SEResNeXt
     layers::Any
2 changes: 1 addition & 1 deletion src/convnets/squeezenet.jl
@@ -62,7 +62,7 @@ Create a SqueezeNet
 - `inchannels`: number of input channels.
 - `nclasses`: the number of output classes.
-See also [`squeezenet`](#).
+See also [`squeezenet`](@ref).
 """
 struct SqueezeNet
     layers::Any
12 changes: 6 additions & 6 deletions src/convnets/vgg.jl
@@ -36,7 +36,7 @@ Create VGG convolution layers
 # Arguments
 - `config`: vector of tuples `(output_channels, num_convolutions)`
-  for each block (see [`Metalhead.vgg_block`](#))
+  for each block (see [`Metalhead.vgg_block`](@ref))
 - `batchnorm`: set to `true` to include batch normalization after each convolution
 - `inchannels`: number of input channels
 """
@@ -61,7 +61,7 @@ Create VGG classifier (fully connected) layers
 # Arguments
 - `imsize`: tuple `(width, height, channels)` indicating the size after
-  the convolution layers (see [`Metalhead.vgg_convolutional_layers`](#))
+  the convolution layers (see [`Metalhead.vgg_convolutional_layers`](@ref))
 - `nclasses`: number of output classes
 - `fcsize`: input and output size of the intermediate fully connected layer
 - `dropout_rate`: the dropout level between each fully connected layer
@@ -86,12 +86,12 @@ Create a VGG model
 - `imsize`: input image width and height as a tuple
 - `config`: the configuration for the convolution layers
-  (see [`Metalhead.vgg_convolutional_layers`](#))
+  (see [`Metalhead.vgg_convolutional_layers`](@ref))
 - `inchannels`: number of input channels
 - `batchnorm`: set to `true` to use batch normalization after each convolution
 - `nclasses`: number of output classes
 - `fcsize`: intermediate fully connected layer size
-  (see [`Metalhead.vgg_classifier_layers`](#))
+  (see [`Metalhead.vgg_classifier_layers`](@ref))
 - `dropout_rate`: dropout level between fully connected layers
 """
 function vgg(imsize::Dims{2}; config, batchnorm::Bool = false, fcsize::Integer = 4096,
@@ -122,7 +122,7 @@ Construct a VGG model with the specified input image size. Typically, the image
 - `batchnorm`: set to `true` to use batch normalization after each convolution
 - `nclasses`: number of output classes
 - `fcsize`: intermediate fully connected layer size
-  (see [`Metalhead.vgg_classifier_layers`](#))
+  (see [`Metalhead.vgg_classifier_layers`](@ref))
 - `dropout_rate`: dropout level between fully connected layers
 """
 struct VGG
@@ -156,7 +156,7 @@ Create a VGG style model with specified `depth`.
 - `inchannels`: number of input channels
 - `nclasses`: number of output classes
-See also [`vgg`](#).
+See also [`vgg`](@ref).
 """
 function VGG(depth::Integer; pretrain::Bool = false, batchnorm::Bool = false,
              inchannels::Integer = 3, nclasses::Integer = 1000)
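To make the `config` format concrete, a hedged sketch of calling the functional builder directly; the block configuration is the standard VGG-16 one, and the keyword names follow the signature above (their defaults are assumptions):

```julia
using Metalhead

# Five VGG-16 blocks given as (output_channels, num_convolutions) pairs.
vgg16_config = [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]
layers = Metalhead.vgg((224, 224); config = vgg16_config, batchnorm = true,
                       nclasses = 1000)
```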
2 changes: 1 addition & 1 deletion src/layers/conv.jl
@@ -28,7 +28,7 @@ Create a convolution + batch normalization pair with activation.
 - `groups`: groups for the convolution kernel
 - `bias`: bias for the convolution kernel. This is set to `false` by default if
   `use_norm = true`.
-- `weight`, `init`: initialization for the convolution kernel (see [`Flux.Conv`](#))
+- `weight`, `init`: initialization for the convolution kernel (see [`Flux.Conv`](@ref))
 """
 function conv_norm(kernel_size::Dims{2}, inplanes::Integer, outplanes::Integer,
                    activation = relu; norm_layer = BatchNorm, revnorm::Bool = false,
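A brief usage sketch of `conv_norm`; it assumes the function returns the convolution and normalization layers as a collection to be splatted into a `Chain`, which is the usual Metalhead convention:

```julia
using Flux, Metalhead

# A 3x3 convolution from 3 to 64 channels, followed by BatchNorm, with relu.
model = Chain(Metalhead.conv_norm((3, 3), 3, 64, relu)...)
```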
2 changes: 1 addition & 1 deletion src/layers/drop.jl
@@ -26,7 +26,7 @@ regions of size `block_size` in the input. Otherwise, it simply returns the inpu
 - `gamma_scale`: multiplicative factor for `gamma` used. For the calculations,
   refer to [the paper](https://arxiv.org/abs/1810.12890).
-If you are an end-user, you do not want this function. Use [`DropBlock`](#) instead.
+If you are an end-user, you do not want this function. Use [`DropBlock`](@ref) instead.
 """
 # TODO add experimental `DropBlock` options from timm such as gaussian noise and
 # more precise `DropBlock` to deal with edges (#188)
2 changes: 1 addition & 1 deletion src/mixers/gmlp.jl
@@ -86,7 +86,7 @@ Creates a model with the gMLP architecture.
 - `inchannels`: the number of input channels
 - `nclasses`: number of output classes
-See also [`Metalhead.mlpmixer`](#).
+See also [`Metalhead.mlpmixer`](@ref).
 """
 struct gMLP
     layers::Any
2 changes: 1 addition & 1 deletion src/mixers/mlpmixer.jl
@@ -49,7 +49,7 @@ Creates a model with the MLPMixer architecture.
 - `inchannels`: the number of input channels
 - `nclasses`: number of output classes
-See also [`Metalhead.mlpmixer`](#).
+See also [`Metalhead.mlpmixer`](@ref).
 """
 struct MLPMixer
     layers::Any
2 changes: 1 addition & 1 deletion src/mixers/resmlp.jl
@@ -48,7 +48,7 @@ Creates a model with the ResMLP architecture.
 - `inchannels`: the number of input channels
 - `nclasses`: number of output classes
-See also [`Metalhead.mlpmixer`](#).
+See also [`Metalhead.mlpmixer`](@ref).
 """
 struct ResMLP
     layers::Any
8 changes: 2 additions & 6 deletions src/utilities.jl
@@ -14,9 +14,7 @@ end
 Convenience function for applying an activation function to the output after
 summing up the input arrays. Useful as the `connection` argument for the block
-function in [`resnet`](#).
-See also [`reluadd`](#).
+function in [`Metalhead.resnet`](@ref).
 """
 addact(activation = relu, xs...) = activation(sum(xs))
 
@@ -25,9 +23,7 @@ addact(activation = relu, xs...) = activation(sum(xs))
 Convenience function for adding input arrays after applying an activation
 function to them. Useful as the `connection` argument for the block function in
-[`resnet`](#).
-See also [`addrelu`](#).
+[`Metalhead.resnet`](@ref).
 """
 actadd(activation = relu, xs...) = sum(activation.(x) for x in xs)
 
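A minimal scalar sketch contrasting the two helpers (in a model they act on feature arrays, but scalars make the ordering difference plain):

```julia
using Flux: relu
using Metalhead: addact, actadd

addact(relu, 1.0, -2.0)  # relu(1.0 + -2.0) == 0.0: sum first, then activate
actadd(relu, 1.0, -2.0)  # relu(1.0) + relu(-2.0) == 1.0: activate first, then sum
```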
2 changes: 1 addition & 1 deletion src/vit-based/vit.jl
@@ -92,7 +92,7 @@ Creates a Vision Transformer (ViT) model.
 - `pool`: pooling type, either :class or :mean
 - `nclasses`: number of classes in the output
-See also [`Metalhead.vit`](#).
+See also [`Metalhead.vit`](@ref).
 """
 struct ViT
     layers::Any
