A neural network library built from scratch, without dedicated deep learning packages. It covers training and testing deep neural networks with deep learning best practices: multi-class classification with fully connected networks, text generation with recurrent neural networks, and regression with fully connected networks.
-
- image classification on the CIFAR-10 dataset
- one-layer networks with hinge and cross-entropy losses
- cyclical learning rate schedule for improved learning (see the sketch after this list)
- exploring how the schedule's initial learning rate and the L2 regularization strength affect model performance, without a hyperparameter search
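A minimal sketch of a triangular cyclical learning rate schedule of the kind described above; the function name and default bounds are illustrative, not the library's actual API:

```python
def cyclical_lr(t, eta_min=1e-5, eta_max=1e-1, n_s=500):
    """Triangular cyclical learning rate.

    t   -- current update step
    n_s -- half the cycle length, in update steps
    The rate rises linearly from eta_min to eta_max over n_s steps,
    then falls back to eta_min over the next n_s steps.
    """
    cycle_pos = t % (2 * n_s)            # position within the current cycle
    if cycle_pos < n_s:                  # increasing half of the cycle
        return eta_min + (cycle_pos / n_s) * (eta_max - eta_min)
    return eta_max - ((cycle_pos - n_s) / n_s) * (eta_max - eta_min)


# learning rate at a few update steps across one full cycle
print([round(cyclical_lr(t), 5) for t in (0, 250, 500, 750, 1000)])
```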
-
- image classification on the CIFAR-10 dataset
- two-layer networks with cross-entropy loss
- cyclical learning rate schedule for improved learning
- Xavier initialization for avoiding activation saturation
- Bayesian hyperparameter search with hyperopt (see the sketch below)
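A minimal sketch of a Bayesian hyperparameter search with hyperopt, assuming a placeholder `train_and_validate` routine that returns validation loss for a given learning rate and L2 strength; the search-space bounds and names are illustrative, not the project's actual setup:

```python
import numpy as np
from hyperopt import fmin, tpe, hp, Trials


def train_and_validate(eta, lam):
    # Placeholder for the project's training routine: it should train a
    # network with learning rate eta and L2 strength lam and return the
    # validation loss. A dummy quadratic stands in so the example runs.
    return (np.log10(eta) + 3) ** 2 + (np.log10(lam) + 4) ** 2


def objective(params):
    return train_and_validate(eta=params["eta"], lam=params["lam"])


# Log-uniform priors over the initial learning rate and L2 strength
space = {
    "eta": hp.loguniform("eta", np.log(1e-5), np.log(1e-1)),
    "lam": hp.loguniform("lam", np.log(1e-6), np.log(1e-2)),
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print(best)  # best hyperparameters found by the TPE search
```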
-
- image classification on the CIFAR-10 dataset
- k-layer networks with cross-entropy loss
- cyclical learning rate schedule for improved learning
- Xavier initialization for avoiding activation saturation
- Bayesian hyperparameter search with hyperopt
- dropout and batch normalization for avoiding overfitting
- data augmentation with imgaug for avoiding overfitting
- AdaGrad for more efficient gradient descent optimization (see the sketch below)
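A minimal numpy sketch of the AdaGrad step mentioned above; `adagrad_update` and its parameter names are illustrative, not the library's API:

```python
import numpy as np


def adagrad_update(W, grad_W, G, eta=0.01, eps=1e-8):
    """One AdaGrad step: accumulate squared gradients in G and scale the
    learning rate per parameter by 1 / sqrt(G + eps)."""
    G += grad_W ** 2
    W -= eta * grad_W / np.sqrt(G + eps)
    return W, G


# one update on a small weight matrix
W = np.random.randn(3, 2)
G = np.zeros_like(W)            # squared-gradient accumulator, one entry per weight
grad_W = np.random.randn(3, 2)  # stands in for a gradient from backpropagation
W, G = adagrad_update(W, grad_W, G)
```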
-
- generating text from Harry Potter books and Donald Trump tweets with RNNs
- one-hot encoding, gradient clipping, and a smoothed loss, among other techniques (see the sketch below)
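A minimal sketch of the bookkeeping mentioned above for a character-level RNN; the corpus, clipping range, and variable names are illustrative, not the project's actual values:

```python
import numpy as np

text = "harry potter"                      # stands in for the real corpus
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}


def one_hot(seq, vocab_size):
    """Encode a character sequence as a (len(seq), vocab_size) one-hot matrix."""
    X = np.zeros((len(seq), vocab_size))
    X[np.arange(len(seq)), [char_to_idx[c] for c in seq]] = 1
    return X


X = one_hot(text, len(chars))

# Gradient clipping: keep every gradient component in [-5, 5]
grads = {"W": np.random.randn(4, 4) * 10}  # stands in for backprop gradients
grads = {name: np.clip(g, -5, 5) for name, g in grads.items()}

# Smoothed loss: an exponential moving average of the per-step loss, which
# keeps the training curve readable despite noisy per-sequence losses
smooth_loss, loss = 100.0, 92.3            # illustrative values
smooth_loss = 0.999 * smooth_loss + 0.001 * loss
```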
-
- linear and non-linear regression
The documentation, which was built with Sphinx, is hosted here.
To generate it locally, do the following:
```bash
cd nn-blocks
conda env create -f environment.yml
conda activate nn_blocks_env
cd docs
make clean
make html
google-chrome _build/html/index.html
```