Benchmarking script for MindtPy solvers in the Pyomo framework, solving MINLP instances. This was created as part of my bachelor thesis during my stay at Carnegie Mellon University in Professor Ignacio Grossmann's group.


pyomo-MINLP-benchmarking

What is this?

This repository mainly contains the run_benchmarks.py script and a collection of MINLP problems/models, which have been taken from MINLPlib and converted to Pyomo models with the translate.py script.

Example usage

Run the benchmark script as follows:

python run_benchmarks.py -h  # show some help
python run_benchmarks.py --solver mindtpy --model-dir models
# or, when re-running
python run_benchmarks.py --solver mindtpy --model-dir models --redo-existing --no-skip-failed

After running, the results/<solver> directory will contain:

  • a .txt file with the output for each model
  • trace_file.trc, which can be loaded into Paver to generate benchmarking plots automatically (not yet tested)
  • solving_times.csv, which contains each model name as well as either the solving time or the termination condition/error
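Since solving_times.csv mixes successful timings with termination conditions, a small helper can split the two when post-processing results. This is a sketch under the assumption of a simple two-column name,result layout (the actual column order is not documented here); the parse_solving_times name and the sample model names are illustrative:

```python
import csv

def parse_solving_times(path):
    """Split solving_times.csv rows into solved (name -> seconds)
    and failed (name -> termination condition/error) dictionaries.

    Assumes two columns per row: model name, then either a solve
    time in seconds or a termination condition string.
    """
    solved, failed = {}, {}
    with open(path, newline="") as f:
        for name, result in csv.reader(f):
            try:
                solved[name] = float(result)  # numeric -> solve time
            except ValueError:                # non-numeric -> condition/error
                failed[name] = result
    return solved, failed
```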
