Benchmark Overview

DACBench contains a range of benchmarks from different categories and domains. One group consists of highly configurable, cheap-to-run benchmarks that often include a ground-truth solution. We recommend using these as an introduction to DAC, to verify new algorithms, and to generate detailed insights. They are based on both artificial functions and real algorithms:

  • Sigmoid (Artificial Benchmark): Sigmoid function approximation in multiple dimensions.

  • Luby (Artificial Benchmark): Learning the Luby sequence.

  • ToySGD (Artificial Benchmark): Controlling the learning rate in gradient descent.

  • Geometric (Artificial Benchmark): Approximating several functions at once.

  • Toy version of the FastDownward benchmark: Heuristic selection for the FastDownward Planner with ground truth.

  • Theory benchmark with ground truth: RLS algorithm on the LeadingOnes problem.

Beyond these smaller-scale, well-understood problems, DACBench also contains less interpretable algorithms with larger scopes. These are often noisier, harder to debug, and more costly to run, and thus present a real challenge for DAC algorithms:

  • FastDownward benchmark: Heuristic selection for the FastDownward Planner on competition tasks.

  • CMA-ES: Step-size adaptation for CMA-ES.

  • ModEA: Selection of algorithm components for EAs.

  • ModCMA: Step-size & algorithm component control for EAs backed by IOHProfiler.

  • SGD-DL: Learning rate adaptation for neural networks.

Our benchmarks are based on OpenAI's gym interface for Reinforcement Learning. That means that to run a benchmark, you create an environment for that benchmark and then interact with it. We include examples of this interaction between environments and DAC methods in our GitHub repository. To instantiate a benchmark environment, run:

from dacbench.benchmarks import SigmoidBenchmark
bench = SigmoidBenchmark()
benchmark_env = bench.get_environment()
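Interaction then follows the standard gym loop. The sketch below uses a random policy as a stand-in for an actual DAC method:

from dacbench.benchmarks import SigmoidBenchmark

bench = SigmoidBenchmark()
env = bench.get_environment()

state = env.reset()
done = False
total_reward = 0
while not done:
    # A DAC method would choose the action based on the current state;
    # here we sample a random action as a placeholder policy.
    action = env.action_space.sample()
    state, reward, done, info = env.step(action)
    total_reward += reward
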
class dacbench.abstract_benchmark.AbstractBenchmark(config_path=None, config: Optional[objdict] = None)

Bases: object

Abstract template for benchmark classes

get_config()

Return current configuration

Returns: Current config
Return type: dict

get_environment()

Make benchmark environment

Returns: env – Benchmark environment
Return type: gym.Env

process_configspace(configuration_space)

This is largely the built-in cs.json.write method, but it doesn't save the result directly. If this is ever implemented in cs, we can replace this method.

read_config_file(path)

Read configuration from file

Parameters: path (str) – Path to config file

serialize_config()

Save configuration to json

Parameters: path (str) – File to save config to
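Configurations can be serialized to json and loaded again via read_config_file; for example, loading a previously saved config before building the environment (the file name below is only a placeholder):

from dacbench.benchmarks import SigmoidBenchmark

bench = SigmoidBenchmark()
# Placeholder path: load a previously saved benchmark configuration
bench.read_config_file("sigmoid_config.json")
env = bench.get_environment()
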

set_action_space(kind, args)

Change action space

Parameters:
  • kind (str) – Name of action space class
  • args (list) – List of arguments to pass to action space class

set_observation_space(kind, args, data_type)

Change observation space

Parameters:
  • kind (str) – Name of observation space class
  • args (list) – List of arguments to pass to observation space class
  • data_type (type) – Data type of observation space
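Both setters modify the benchmark configuration before the environment is built. A sketch under the assumption that kind names a gym space class such as "Discrete" or "Box" (all concrete values below are illustrative):

import numpy as np
from dacbench.benchmarks import SigmoidBenchmark

bench = SigmoidBenchmark()
# Illustrative values: a discrete action space with 10 actions
bench.set_action_space("Discrete", [10])
# Illustrative values: 5 continuous observation features in [-1, 1]
bench.set_observation_space("Box", [np.full(5, -1.0), np.full(5, 1.0)], np.float32)
env = bench.get_environment()
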

set_seed(seed)

Set environment seed

Parameters: seed (int) – New seed
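Seeding and configuration inspection typically happen on the benchmark object before the environment is created, for example:

from dacbench.benchmarks import SigmoidBenchmark

bench = SigmoidBenchmark()
bench.set_seed(42)              # fix the environment seed for reproducibility
config = bench.get_config()     # current configuration as a dict
env = bench.get_environment()
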

class dacbench.abstract_benchmark.objdict

Bases: dict

Modified dict to make config changes more flexible

copy()

Return a shallow copy of D
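The added flexibility is attribute-style access to configuration entries; a small sketch under that assumption (the keys are illustrative):

from dacbench.abstract_benchmark import objdict

config = objdict({"seed": 0, "cutoff": 10})
# Assumption: objdict exposes keys as attributes in addition to normal dict access
config.seed = 42
print(config["seed"])   # 42
print(config.cutoff)    # 10
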
class dacbench.abstract_env.AbstractEnv(config)

Bases: Env

Abstract template for environments

get_inst_id()

Return instance ID

Returns: ID of current instance
Return type: int

get_instance()

Return current instance

Returns: Currently used instance
Return type: flexible

get_instance_set()

Return instance set

Returns: List of instances
Return type: list

reset()

Reset environment

Returns: Environment state
Return type: state

reset_(instance=None, instance_id=None, scheme=None)

Pre-reset function for progressing through the instance set. Will use either a round-robin, random, or no-progression scheme.

seed(seed=None, seed_action_space=False)

Set rng seed

Parameters:
  • seed – Seed for rng
  • seed_action_space (bool, default False) – Whether to seed the action space as well

seed_action_space(seed=None)

Seeds the action space.

Parameters: seed (int, default None) – If None, self.initial_seed is used
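For reproducible runs, the environment and optionally its action space can be seeded right after creation, for example:

from dacbench.benchmarks import SigmoidBenchmark

env = SigmoidBenchmark().get_environment()
env.seed(0)                           # seed the environment rng
env.seed(0, seed_action_space=True)   # additionally seed action-space sampling
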

set_inst_id(inst_id)

Change current instance ID

Parameters: inst_id (int) – New instance index

set_instance(instance)

Change currently used instance

Parameters: instance – New instance

set_instance_set(inst_set)

Change instance set

Parameters: inst_set (list) – New instance set
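These getters and setters allow inspecting and selecting the instance an environment runs on; note that reset() may progress the instance again according to the chosen scheme. A brief sketch:

from dacbench.benchmarks import SigmoidBenchmark

env = SigmoidBenchmark().get_environment()
instances = env.get_instance_set()   # list of instances in the current set
env.set_inst_id(0)                   # point the environment at the first instance
current = env.get_instance()         # instance the environment currently uses
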

step(action)

Execute environment step

Parameters: action – Action to take

Returns:
  • state – Environment state
  • reward – Environment reward
  • done (bool) – Run finished flag
  • info (dict) – Additional metainfo

step_()

Pre-step function for step count and cutoff

Returns: End of episode
Return type: bool

use_next_instance(instance=None, instance_id=None, scheme=None)

Changes instance according to the chosen instance progression

Parameters:
  • instance – Instance specification for potential new instances
  • instance_id – ID of the instance to switch to
  • scheme – Update scheme for this progression step (either round robin, random or no progression)
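The simplest use is switching directly to a known instance by its ID (the index below is illustrative):

from dacbench.benchmarks import SigmoidBenchmark

env = SigmoidBenchmark().get_environment()
# Illustrative ID: switch directly to the instance with index 3
env.use_next_instance(instance_id=3)
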

use_test_set()

Change to test instance set

use_training_set()

Change to training instance set
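A typical workflow trains the DAC policy on the training set and then switches to the held-out instances for evaluation, for example:

from dacbench.benchmarks import SigmoidBenchmark

env = SigmoidBenchmark().get_environment()

env.use_training_set()
# ... train the DAC policy on the training instances ...

env.use_test_set()
# ... evaluate the trained policy on the held-out test instances ...
state = env.reset()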