Warning: This library is not yet finished. Please only expect useful output when this warning message has been removed and replaced with a usage guide.
Developer documentation can be found at https://maximerobeyns.github.io/OPTK.
OPTK (read ‘optic’) is an ‘optimisation toolkit’ which provides collections of benchmarks on which to evaluate and compare Bayesian optimisation algorithms.
You will notice that it is mostly written in C++; this is both for speed, and so that bindings can easily be written for a variety of languages, letting people work in whichever language they’re comfortable with while also allowing existing implementations in other languages to be included. I will focus on the Python interface first, and then on others (such as MATLAB or Julia) as they’re required.
OPTK’s parameter space definition and types may seem a little overdone; ostensibly, all we need are continuous variables, categorical variables, and ordinal variables (i.e. categorical variables which happen to have an ordering). However, to ensure compatibility with the search space definitions of ConfigSpace, Ray Tune and NNI (and, more importantly, with the existing BO algorithms which rely on these), the C++ types are somewhat overcooked. Hopefully this complexity will not leak out into the Python bindings, or the bindings for other languages.
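For a concrete sense of the three parameter kinds, here is how they look in ConfigSpace, one of the libraries OPTK aims to be compatible with. This is ConfigSpace’s own API, shown purely for illustration; OPTK’s own types are not documented here:

```python
# The three parameter kinds mentioned above, expressed with ConfigSpace
# (this is ConfigSpace's API, shown for illustration; OPTK's types differ).
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import (
    UniformFloatHyperparameter,
    CategoricalHyperparameter,
    OrdinalHyperparameter,
)

cs = ConfigurationSpace()
cs.add_hyperparameter(UniformFloatHyperparameter("lr", 1e-4, 1e-1, log=True))   # continuous
cs.add_hyperparameter(CategoricalHyperparameter("optimiser", ["adam", "sgd"]))  # categorical
cs.add_hyperparameter(OrdinalHyperparameter("batch_size", [32, 64, 128]))       # ordinal
```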
One big area of work at the moment is improving the outputs: I am working on outputting useful plots (in both pgf and rasterised formats), as well as other metrics and traces, to help with comparisons and with writing the results section of a paper or report.
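As a sketch of that dual-format output, here is how a single trace plot can be written in both formats with plain matplotlib (not OPTK, whose plotting code is still in progress); note that the pgf backend needs a LaTeX toolchain on the PATH:

```python
# Sketch only: writing one trace plot in both pgf (for inclusion in LaTeX
# documents) and a rasterised format. This uses plain matplotlib, not OPTK.
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

iterations = list(range(1, 21))
objective = [10.0 / i for i in iterations]  # dummy, monotonically improving trace

fig, ax = plt.subplots()
ax.plot(iterations, objective)
ax.set_xlabel("iteration")
ax.set_ylabel("objective value")
fig.savefig("trace.pgf")           # pgf output (requires a LaTeX installation)
fig.savefig("trace.png", dpi=300)  # rasterised output
```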
The benchmarks are tagged with relevant characteristics (e.g. scalable, continuous, separable), which will also be provided alongside the output data.
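Purely as an illustration of how those tags might be consumed (the file name and schema below are assumptions, not a fixed OPTK format):

```python
# Hypothetical sketch: reading benchmark tags from a JSON sidecar shipped
# alongside the output data, then filtering on them. The "results/tags.json"
# path and its schema are assumptions, not a documented OPTK format.
import json

with open("results/tags.json") as f:
    tags = json.load(f)  # e.g. {"ackley": ["continuous", "scalable"], ...}

# Keep only the separable benchmarks for a focused comparison.
separable = [name for name, ts in tags.items() if "separable" in ts]
```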
At the simplest level, optk works as a benchmarking program: it takes in an optimisation algorithm and produces a trace of (iteration, objective value) pairs for each benchmark, which it saves as a CSV file in the /results directory, named after the provided algorithm.
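As a minimal sketch of consuming such a trace file (the path results/my_algorithm.csv and the column names iteration and objective are illustrative assumptions; the exact output format is still settling):

```python
# Minimal sketch: reading back one benchmark's trace of
# (iteration, objective value) pairs. The file path and column names are
# assumptions; the actual output format is not yet finalised.
import csv

with open("results/my_algorithm.csv") as f:
    trace = [(int(row["iteration"]), float(row["objective"]))
             for row in csv.DictReader(f)]

best_so_far = min(obj for _, obj in trace)  # incumbent objective value
```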
Copyright (C) 2020 Maxime Robeyns
Written for the Advanced Computing Research Centre, University of Bristol.
Licensed under the Educational Community License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.osedu.org/licenses/ECL-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.