Hamiltonian Descent Methods implementation
We have implemented six optimizer classes from scratch:
- GradientDescent
- Momentum
- ExplicitMethod1
- ExplicitMethod2
- StochasticExplicitMethod1
- StochasticExplicitMethod2
Each optimizer takes a starting point, hyperparameters, and a function class that provides the objective, the kinetic map, the noise, and their gradients.
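To make the interface concrete, here is a minimal sketch of how these pieces could fit together; the class names, the `lr` argument, and the toy quadratic objective are illustrative assumptions, not the actual API of this code:

```python
import autograd.numpy as np
from autograd import grad


class QuadraticFn:
    """Toy stand-in for a function class: f(x) = ||x||^2 / 2 (hypothetical)."""

    def f(self, x):
        return 0.5 * np.dot(x, x)

    def grad_f(self, x):
        return grad(self.f)(x)  # gradient computed by autograd


class GradientDescentSketch:
    """Illustrative optimizer skeleton: x <- x - lr * grad f(x)."""

    def __init__(self, x0, lr, fn_class):
        self.x = np.asarray(x0, dtype=float)  # starting point
        self.lr = lr                          # step-size hyperparameter
        self.fn = fn_class                    # provides f and its gradient

    def step(self):
        # Standard gradient-descent update.
        self.x = self.x - self.lr * self.fn.grad_f(self.x)
        return self.x


opt = GradientDescentSketch(x0=[3.0, -2.0], lr=0.1, fn_class=QuadraticFn())
for _ in range(100):
    opt.step()
print(opt.x)  # approaches the minimizer [0, 0]
```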
We have implemented four function classes:
- Psi (used in ExplicitMethod2)
- PowerFunction2D (takes skew coefficients and the degree of the power function)
- PowerFunctionShifted (returns a shifted version of the power function with zero skew)
- Noise2D (returns noise for the gradient)
Each function class takes its own parameters. The class methods are f(), k(), and their respective gradients, which are computed using the autograd library in Python.
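As a concrete illustration of the f()/k()/gradient interface, here is a hedged sketch with a simplified power-style objective and a quadratic kinetic map; the exact functional forms, names, and signatures in the repository may differ:

```python
import autograd.numpy as np
from autograd import grad


class PowerFunction2DSketch:
    """Illustrative 2-D power objective with skew coefficients.

    f(x) = c1*|x1|^a + c2*|x2|^a, with a quadratic kinetic map
    k(p) = ||p||^2 / 2. The actual classes may use different forms.
    """

    def __init__(self, coeffs=(1.0, 2.0), degree=4):
        self.c = np.asarray(coeffs, dtype=float)  # skew coefficients
        self.a = degree                           # degree of the power function

    def f(self, x):
        return np.sum(self.c * np.abs(x) ** self.a)

    def k(self, p):
        return 0.5 * np.dot(p, p)

    # Gradients are obtained automatically with autograd.
    def grad_f(self, x):
        return grad(self.f)(x)

    def grad_k(self, p):
        return grad(self.k)(p)


fn = PowerFunction2DSketch()
x = np.array([1.0, -0.5])
print(fn.f(x), fn.grad_f(x), fn.grad_k(x))
```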
All plotting routines used to generate the figures in the paper are collected here.
All code in this folder is written by hand, except for a few snippets in plots.py adapted from the matplotlib documentation. The code is modular enough for use in real-world tasks and can easily be extended to other objectives and optimizers.
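For illustration only, here is a minimal sketch of the kind of contour-plus-trajectory figure such routines typically produce; none of this is code from plots.py, and the toy objective and iterates are made up:

```python
import numpy as np
import matplotlib.pyplot as plt

# Contour of a toy power objective with a hypothetical optimizer path.
xs = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(xs, xs)
Z = np.abs(X) ** 4 + 2.0 * np.abs(Y) ** 4

# Hypothetical iterates, e.g. collected from an optimizer's step() loop.
path = np.array([[2.5, 2.0], [1.8, 1.1], [1.0, 0.4], [0.3, 0.1], [0.0, 0.0]])

plt.contour(X, Y, Z, levels=20)
plt.plot(path[:, 0], path[:, 1], "o-", label="iterates")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.legend()
plt.title("Optimizer trajectory (illustrative sketch)")
plt.show()
```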