# Benchmarking Kerastuner Algorithms

Small project testing the functionalities of Keras Tuner and benchmarking the three available optimization algorithms.

## Problem

Given a learning algorithm and a problem to solve (in the simplest case, a classification or regression task), we know that there exists a set of hyperparameters that allows the algorithm to converge to the optimal solution, both in terms of loss reduction and generalizability to previously unseen data.

Given the magnitude of all the possible combinations of hyperparameters and their associated values (i.e. the size of the hyperparameter space), the search for the best set is often delegated to optimization algorithms, which aim to find promising candidates efficiently. For instance, a grid over just five hyperparameters with ten candidate values each already spans 10^5 configurations.

Therefore, understanding which algorithm achieves the best performance in the shortest amount of time (or with the least amount of computational resources) becomes of pivotal importance.

Here we aim to compare three optimization algorithms provided by the Keras Tuner library, namely:

- Random Search
- Bayesian Optimization using Gaussian Processes
- Hyperband

The three algorithms are tasked with finding the best hyperparameters of a Multilayer Perceptron (MLP) used for classifying digits from four variations of the MNIST dataset.
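
The snippet below is a minimal sketch of how such a comparison could be wired up with Keras Tuner. It is not the project's actual search space: the layer counts, unit ranges, learning-rate bounds, and trial budgets are illustrative assumptions, and it uses the current `keras_tuner` package name (the project predates the rename and imports `kerastuner`).

```python
import keras_tuner as kt
from tensorflow import keras

def build_mlp(hp):
    """Hypermodel: Keras Tuner calls this with a HyperParameters object."""
    model = keras.Sequential([keras.layers.Flatten(input_shape=(28, 28))])
    # depth and width are tunable (ranges are illustrative assumptions)
    for i in range(hp.Int("n_layers", 1, 3)):
        model.add(keras.layers.Dense(hp.Int(f"units_{i}", 32, 256, step=32),
                                     activation="relu"))
    model.add(keras.layers.Dense(10, activation="softmax"))
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# one tuner per algorithm under comparison
tuners = {
    "random_search": kt.RandomSearch(
        build_mlp, objective="val_accuracy", max_trials=50, directory="rs"),
    "gaussian_process": kt.BayesianOptimization(
        build_mlp, objective="val_accuracy", max_trials=50, directory="bo"),
    "hyperband": kt.Hyperband(
        build_mlp, objective="val_accuracy", max_epochs=30, directory="hb"),
}
```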

We will compare the three algorithms on three main metrics (see the sketch after the list):

1. Score achieved by the MLP using the proposed configuration.
2. Time required to complete the optimization process.
3. Number of hyperparameter configurations explored.
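
As a hedged sketch, the loop below shows one way these three quantities could be collected. It reuses the hypothetical `tuners` dictionary from the previous sketch and assumes MNIST arrays `x_train`, `y_train`, `x_test`, `y_test` are already loaded (see the Data section).

```python
import time

results = {}
for name, tuner in tuners.items():  # `tuners` from the sketch above
    start = time.perf_counter()
    tuner.search(x_train, y_train, epochs=10, validation_split=0.2)
    elapsed = time.perf_counter() - start

    best_model = tuner.get_best_models(num_models=1)[0]
    loss, accuracy = best_model.evaluate(x_test, y_test, verbose=0)
    results[name] = {
        "score": accuracy,                     # metric 1
        "wall_time_s": elapsed,                # metric 2
        "n_trials": len(tuner.oracle.trials),  # metric 3
    }
```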

## Data

The benchmark uses four variations of the MNIST dataset:

- Vanilla MNIST
- Back MNIST
- Rotated MNIST
- RotBack MNIST
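
Only the vanilla variant ships with Keras; the rotated and background variants are distributed separately and need their own loaders. As a baseline, a minimal sketch of loading and scaling vanilla MNIST for the MLP:

```python
from tensorflow import keras

# vanilla MNIST ships with Keras; the other variants need separate loaders
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# scale pixel intensities to [0, 1] before feeding the MLP
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
```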

## Methodology

## Results

### Visual Comparison

### Bayesian Generalized Mixed Model - Varying Intercept

| Metric | RS Mean | RS Std | RS 3% HDI | RS 97% HDI | GP Mean | GP Std | GP 3% HDI | GP 97% HDI | HB Mean | HB Std | HB 3% HDI | HB 97% HDI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 0.150 | 0.069 | 0.021 | 0.283 | 0.147 | 0.069 | 0.016 | 0.280 | 0.149 | 0.069 | 0.020 | 0.283 |
| F1 Score | 0.145 | 0.069 | 0.012 | 0.272 | 0.139 | 0.069 | 0.002 | 0.263 | 0.143 | 0.069 | 0.011 | 0.271 |
| Precision | 0.151 | 0.07 | 0.016 | 0.277 | 0.144 | 0.07 | 0.012 | 0.273 | 0.152 | 0.07 | 0.016 | 0.277 |
| Recall | 0.147 | 0.07 | 0.016 | 0.281 | 0.143 | 0.07 | 0.012 | 0.278 | 0.145 | 0.07 | 0.012 | 0.277 |

RS = Random Search, GP = Gaussian Process (Bayesian Optimization), HB = HyperBand.
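
The README does not include the model code; below is a minimal, hypothetical sketch of what a varying-intercept model of this kind could look like in PyMC, with synthetic scores standing in for the real benchmark results. The variable names (`a_dataset`, `b_tuner`) and the priors are illustrative assumptions, not the project's actual specification.

```python
import numpy as np
import pymc as pm

# synthetic stand-in data: one score per (tuner, dataset, repeat)
rng = np.random.default_rng(0)
n_tuners, n_datasets, n_repeats = 3, 4, 10
tuner_idx = np.repeat(np.arange(n_tuners), n_datasets * n_repeats)
dataset_idx = np.tile(np.repeat(np.arange(n_datasets), n_repeats), n_tuners)
score = rng.normal(0.9, 0.05, size=tuner_idx.size)

with pm.Model() as varying_intercept_model:
    # partially pooled intercept per MNIST variant
    mu_a = pm.Normal("mu_a", 0.0, 1.0)
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    a_dataset = pm.Normal("a_dataset", mu_a, sigma_a, shape=n_datasets)

    # one effect per tuner: the quantity being compared
    b_tuner = pm.Normal("b_tuner", 0.0, 1.0, shape=n_tuners)

    sigma = pm.HalfNormal("sigma", 1.0)
    mu = a_dataset[dataset_idx] + b_tuner[tuner_idx]
    pm.Normal("obs", mu, sigma, observed=score)

    idata = pm.sample()  # posterior summaries give the mean/std/HDI columns
```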

## Installation

1. Clone or download a local copy of the repository.
2. Install Anaconda.
3. Open the Anaconda PowerShell Prompt in the repository directory and create the environment:

```bash
# create the anaconda environment
conda create -n tuner_bench_env tensorflow-gpu

# activate the environment
conda activate tuner_bench_env
```

4. Install the remaining requirements:

```bash
# install the requirements
conda install -c conda-forge --file requirements.txt
```
