labscheduler.dev_tools.algorithms_eval module¶
A Python script to evaluate multiple solvers on a given test set. The results are saved in JSON format with a timestamp and as a LaTeX table (algo_eval_table.tex); both are written to ./table/. You can subsequently call $ pdflatex table.tex to produce a PDF file from the results. The solving time and the directory containing the test files can be set via the command line. Defaults and further settings can be configured in algo_eval_config.py. Example call: $ python dev_tools/algorithm_eval.py -d tests/test_data/benchmark_inst/
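A minimal sketch of the full workflow described above; whether pdflatex has to be invoked from inside ./table/ is an assumption, since only the output directory is documented:

    $ python dev_tools/algorithm_eval.py -d tests/test_data/benchmark_inst/
    $ cd table && pdflatex table.tex   # assumes table.tex lives in ./table/ next to algo_eval_table.tex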
- labscheduler.dev_tools.algorithms_eval.make_latex_table(data: list[dict[str, str | float | None]])[source]¶
- labscheduler.dev_tools.algorithms_eval.parse_command_line()[source]¶
Parse the command-line arguments.
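The module docstring only documents the -d flag for the test directory and mentions that the solving time can be set on the command line. A minimal argparse sketch of what such a parser might look like; the time-limit flag name and all defaults are assumptions, not the documented CLI:

    import argparse

    def parse_command_line() -> argparse.Namespace:
        # Hypothetical sketch; only the -d flag is confirmed by the module docstring.
        parser = argparse.ArgumentParser(description="Evaluate multiple solvers on a test set.")
        parser.add_argument("-d", "--directory", type=str,
                            default="tests/test_data/benchmark_inst/",
                            help="Directory containing the test instance files.")
        # The flag name and default below are assumptions.
        parser.add_argument("-t", "--time-limit", type=int, default=60,
                            help="Solving time limit per instance in seconds.")
        return parser.parse_args()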
- labscheduler.dev_tools.algorithms_eval.run_test_series(scheduler: Scheduler, algorithm_name: str, test_instances: list[str | Path], time_limit: int) tuple[list[dict[str, ScheduledAssignment]], list[float]][source]¶
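A hypothetical call pattern based on the signature above; how a Scheduler instance is obtained, which algorithm names are available, and the test-file pattern are all assumptions:

    from pathlib import Path
    from labscheduler.dev_tools.algorithms_eval import run_test_series

    scheduler = ...  # an existing labscheduler Scheduler instance (construction not shown here)
    test_instances = sorted(Path("tests/test_data/benchmark_inst/").glob("*.json"))  # file pattern assumed

    schedules, solve_times = run_test_series(
        scheduler=scheduler,
        algorithm_name="heuristic",   # placeholder; use an algorithm name the scheduler knows
        test_instances=test_instances,
        time_limit=30,                # seconds per instance
    )
    # schedules[i] is a dict mapping identifiers to ScheduledAssignment objects for instance i;
    # solve_times[i] is the corresponding solving time in seconds.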
- labscheduler.dev_tools.algorithms_eval.transfer_results(data: list[dict[str, str | float | None]], transfer_to: str, transfer_from: str)[source]¶
Use the better result of the two algorithms transfer_to and transfer_from as the result for transfer_to. This is useful when, in theory, one algorithm (usually a heuristic) serves as a primal heuristic for the other algorithm, but due to errors on the underlying solver side (SCIP or OR-Tools) the result does not get transferred.
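The exact layout of the result rows is not documented here. A rough sketch of the idea only, assuming one row per algorithm with an "algorithm" name and an "objective" value where lower is better; the key names and the comparison direction are assumptions:

    # Sketch only: key names ("algorithm", "objective") and the "lower is better"
    # assumption are NOT taken from the actual module.
    def transfer_results_sketch(data, transfer_to, transfer_from):
        rows = {row["algorithm"]: row for row in data}
        better = rows[transfer_from]["objective"]
        current = rows[transfer_to]["objective"]
        if better is not None and (current is None or better < current):
            # Adopt the better (or only available) result for transfer_to.
            rows[transfer_to]["objective"] = better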