LRBench is a learning rate benchmarking and recommendation tool that helps practitioners efficiently select and compose good learning rate policies.
- Semi-automatic learning rate tuning
- Evaluation: a set of useful metrics covering utility, cost, and robustness
- Verification: near-optimal learning rates
The following figure shows the impact of different learning rate policies. FIX (black, k=0.025) converged to a local optimum, while NSTEP (red, k=0.05, γ=0.1, l=[150, 180]) converged to the global optimum. TRIEXP (yellow, k0=0.05, k1=0.3, γ=0.1, l=100) was the fastest but failed to converge, fluctuating heavily.
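As a rough illustration of how these three policies behave, here is a minimal Python sketch using the parameter names from the figure caption (k, k0, k1, γ, l). The TRIEXP formula below assumes a CLR-style "exp_range" schedule (a triangular wave with exponentially decaying amplitude); LRBench's exact implementation may differ in detail.

```python
import math

def fix(t, k=0.025):
    """FIX: a constant learning rate k at every iteration t."""
    return k

def nstep(t, k=0.05, gamma=0.1, milestones=(150, 180)):
    """NSTEP: multiply k by gamma each time t passes a milestone in l."""
    n = sum(1 for m in milestones if t >= m)
    return k * gamma ** n

def triexp(t, k0=0.05, k1=0.3, gamma=0.1, l=100):
    """TRIEXP (assumed exp_range form): a triangular wave between k0 and k1
    with half-period l, whose amplitude decays by gamma per full cycle."""
    cycle = math.floor(1 + t / (2 * l))
    x = abs(t / l - 2 * cycle + 1)
    return k0 + (k1 - k0) * max(0.0, 1 - x) * gamma ** (t / (2 * l))
```

For example, `nstep(t)` returns 0.05 before iteration 150, 0.005 between 150 and 180, and 0.0005 afterwards, matching the red curve's two drops.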
If you find this tool useful, please cite the following paper:
@INPROCEEDINGS{lrbench2019,
author={Wu, Yanzhao and Liu, Ling and Bae, Juhyun and Chow, Ka-Ho and Iyengar, Arun and Pu, Calton and Wei, Wenqi and Yu, Lei and Zhang, Qi},
booktitle={2019 IEEE International Conference on Big Data (Big Data)},
title={Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks},
year={2019},
pages={1971-1980},
doi={10.1109/BigData47090.2019.9006104}
}
@article{lrbench-tist,
author = {Wu, Yanzhao and Liu, Ling},
title = {Selecting and Composing Learning Rate Policies for Deep Neural Networks},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
issn = {2157-6904},
url = {https://doi.org/10.1145/3570508},
doi = {10.1145/3570508},
journal = {ACM Trans. Intell. Syst. Technol.},
month = {11},
}
To use LRBench as a Python module, install it with pip:
pip install LRBench
To access the LRBench web GUI, follow these steps:
git clone https://github.com/git-disl/LRBench.git
cd LRBench
pip install -r requirements.txt
python manage.py migrate
python manage.py runserver
See the people page for the full listing of contributors.
Copyright (c) 20XX-20XX Georgia Tech DiSL
Licensed under the Apache License.