For many domains (in particular in AI), there is no single algorithm that dominates all other algorithms across a wide range of problem instances. Therefore, in per-instance algorithm selection (AS), an AS system automatically selects a well-performing algorithm to solve a given problem instance [Rice’76].
With the advent of the algorithm selection library (ASlib) [Bischl et al’16], we can now compare different algorithm selection systems in a reliable, fair and reproducible way on a broad set of different domains (including SAT, MAXSAT, OR, CSP, QBF, BNSL, ASP and ML). In the tradition of the ICON Challenge on Algorithm Selection and the ongoing ASlib evaluation, we invite you to participate in the new “Open Algorithm Selection Challenge” (OASC).
Results
- Presentation slides from COSEAL workshop ’17 with official ranking
- CSV file with all results
- Proceedings
Challenge Format
To participate in the OASC, participants only have to submit their selection results (see below for details) for the given scenarios in the ASlib format. This allows for submissions from both closed- and open-source AS systems. In the interest of transparent and reproducible research, open-source submissions will be explicitly tagged as such and awarded in a separate track.
Each submission should be accompanied by a brief description of the AS system used to produce the submitted results. We aim to publish a summary of the challenge and the system descriptions in the PMLR workshop proceedings.
Schedule
- Optional deadline: June 16, 2017: Scenario submission
- Optional deadline: June 16, 2017: Ranking of ASlib scenarios
- June 30, 2017: Announcement of scenarios to be used in the OASC
- Final deadline: August 27, 2017: Results submission
- September 11 and 12, 2017: Results announcement at COSEAL workshop 2017
Downloads
Participants (external links)
Rules
- General procedure for each scenario:
- For the training set, we provide the scenario meta description (description.txt), the performance of each algorithm on each instance (algorithm_runs.arff), the instance features for each instance (feature_values.arff and feature_runstatus.arff), and, optionally, the costs of running the feature steps (feature_costs.arff); see the ASlib specification for details.
- For the test set, we provide only feature_values.arff and feature_runstatus.arff
- All scenarios will be scrambled such that participants cannot recognize scenarios and thus special domain knowledge cannot be used.
- Participants have to submit:
- Selected algorithms (for details see below) for each instance in a test set
- A 1-2 page (incl. references) description of your AS system and all details necessary to reproduce the results (e.g., the hardware and software used)
- LaTeX source and PDF have to use the JMLR format for PMLR
- Additional requirements for open-source submissions:
- The source code of the AS system used to produce the results has to be submitted (with a README file explaining how to install and run the system).
- The submission has to be available online under a common open-source license (e.g., (A)GPL, BSD, MIT) in a freely accessible repository.
- If an AS system violates the ASlib specification, it will be disqualified for the scenario at hand, i.e., the worst possible score will be assigned to it.
- E.g., The output is not syntactically valid.
- E.g., Instance features can only be used if they are computed beforehand.
- E.g., Feature steps have pre-conditions and can only be used after computing all pre-conditions.
- Each team can submit results from up to two different AS systems (or two configurations from a single AS system)
- Since AS systems should spare the user from having to deal with the characteristics of the domain at hand, AS systems have to be robust, and manual tuning for each ASlib scenario is forbidden (e.g., no manual adjustment of pre-solving schedule time limits and no manual selection of instance features). The AS systems have to figure out autonomously how to deal with each ASlib scenario (i.e., the only input of an AS system is the ASlib scenario at hand; scenario-specific arguments are not allowed).
- Final scoring mechanism:
- Metric: average “Closed Gap between Single Best Algorithm and Oracle” across all scenarios (a small computation sketch follows this list)
- The single best algorithm (SBS) is the algorithm with the best average performance
- The oracle is the perfect per-instance prediction, i.e., it selects the best algorithm for each instance
- Cost of a timeout (in running time scenarios) is 10 x cutoff (i.e., PAR10)
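To make the metric concrete, the following sketch shows one common way to compute the closed gap for a single scenario, assuming all scores are average costs to be minimized (e.g., PAR10 running times); the numbers and the exact normalization are illustrative, and the official score is computed by the evaluation tool linked below.

def closed_gap(system_score, sbs_score, oracle_score):
    # Fraction of the gap between the single best solver (SBS) and the oracle
    # that the submitted AS system closes on one scenario:
    # 1.0 = oracle-level performance, 0.0 = no better than the SBS.
    return (sbs_score - system_score) / (sbs_score - oracle_score)

# Illustrative numbers: the SBS has an average PAR10 of 2000s, the oracle 500s,
# and the submission 800s, so it closes (2000 - 800) / (2000 - 500) = 0.8
# of the gap. The final ranking averages this value across all scenarios.
print(closed_gap(800.0, 2000.0, 500.0))  # 0.8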
Result Submission Format
In the OASC, we allow for different kinds of AS systems, including
- Selection of a single algorithm
- Selection of a schedule of algorithms
- Use of pre-solving schedules
- Use of different instance feature sets (e.g., on a per-instance basis)
To this end, the required output format in the OASC is flexible and uses the following JSON syntax (a short script for generating such a file is sketched after the format description):
{
"<INSTANCE_1>": <SCHEDULE_1>,
"<INSTANCE_2>": <SCHEDULE_2>,
…
}
where
- “<INSTANCE_N>” is a string representing the ID of an instance in a test set
- <SCHEDULE_N> is a list of strings or tuples, which can consist of instance feature steps and selected algorithms; algorithms can be associated with a running time limit (using tuples of the form “(algo, time_limit)”). This list is evaluated sequentially, and the first entry “solving” <INSTANCE_N> is used for evaluation. The challenge distinguishes between two types of scenarios: running time scenarios (e.g., SAT solving) and quality scenarios (e.g., machine learning)
- For running time scenarios (performance_type==runtime):
- Notes:
- The schedule can have an arbitrary length and algorithms can be listed more than once—we assume that algorithms listed more than once are started from scratch each time. The length of the schedule is implicitly limited by the cutoff time (see algorithm_cutoff_time as given in description.txt)
- The participants have access to all instance features, but we assume that only those features are used that were made available through the corresponding feature steps.
- Feature steps can have pre-conditions on other feature steps (see description.txt). An AS system will be disqualified for the scenario at hand (i.e., the worst score will be assigned) if it uses a feature step before computing all of its pre-conditions.
- Exemplary selected schedule: [("algo_1", 30), "feature_step_3", ("algo_3", 5), "algo_2"]
- First run “algo_1” for at most 30 seconds;
- If it does not solve the instance, compute features corresponding to “feature_step_3”; this has a predefined cost in terms of running time;
- Based on the computed features, “algo_3” is selected to run for at most 5 seconds;
- If also “algo_3” cannot solve the instance at hand, “algo_2” tries to solve the instance within the remaining time budget.
- Example evaluation if the feature computation of “feature_step_3” needed 10 seconds and “algo_3” solved the instance within 2 seconds: 30 + 10 + 2 = 42
- Example Notes:
- Since “feature_step_3” was used after “algo_1” ran, “algo_1” has to be part of a static pre-solving schedule and cannot be selected on a per-instance basis; the AS systems are responsible for using features only after the corresponding feature steps have been computed.
- In this example, there were no pre-conditions to compute “feature_step_3”.
- For quality scenarios (performance_type==solution_quality; the performance of an algorithm on an instance is, e.g., a cost, error, or accuracy value):
- Notes:
- Since ASlib does not provide enough information to evaluate schedules for non-running-time scenarios, <SCHEDULE_N> has to consist of a single algorithm with an arbitrary time limit, e.g., [("algo_7", 99999999)].
- Thus, the goal is to predict the best single algorithm for each instance
- All instance features are free and available even without being listed in <SCHEDULE_N>. The time limit will not be used for evaluation.
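To make the output format concrete, the following minimal sketch writes a syntactically valid submission file; the instance IDs, algorithm names, feature step names, and the file name are made-up examples, and tuples are assumed to be encoded as JSON arrays.

import json

# Hypothetical instance IDs, algorithms, and feature steps, for illustration only.
submission = {
    # Running time scenario: static pre-solver, then a feature step, then the selected solver.
    "instance_0001": [("algo_1", 30), "feature_step_3", ("algo_3", 5), "algo_2"],
    # Quality scenario: exactly one algorithm; the time limit is ignored.
    "instance_0002": [("algo_7", 99999999)],
}

# json.dump serializes the Python tuples as JSON arrays, e.g. ["algo_1", 30].
with open("results.json", "w") as fh:
    json.dump(submission, fh, indent=2)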
We provide an open-source tool that reads an ASlib scenario and an output file in the format specified above and returns the corresponding score across all instances in the scenario.
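For intuition, the scoring of a running time schedule roughly follows the logic sketched below; this is an illustrative re-implementation, not the official tool, and the per-instance runtimes and feature costs are made-up values chosen to reproduce the example above.

# Illustrative sketch of how a running time schedule is scored for one instance.
# runtimes: algorithm -> true running time of that algorithm on the instance
# feature_costs: feature step -> cost of computing it on the instance
def evaluate_schedule(schedule, runtimes, feature_costs, cutoff):
    spent = 0.0
    for entry in schedule:
        if isinstance(entry, str) and entry in feature_costs:  # feature step
            spent += feature_costs[entry]
        else:  # algorithm, optionally with a time limit
            algo, limit = entry if isinstance(entry, tuple) else (entry, cutoff)
            budget = min(limit, cutoff - spent)
            if runtimes[algo] <= budget:  # first entry that solves the instance
                return spent + runtimes[algo]
            spent += budget  # the algorithm hit its limit, move on
        if spent >= cutoff:
            break
    return 10 * cutoff  # unsolved within the cutoff: PAR10 penalty

# The example from above: "feature_step_3" costs 10s and "algo_3" solves in 2s.
score = evaluate_schedule(
    [("algo_1", 30), "feature_step_3", ("algo_3", 5), "algo_2"],
    runtimes={"algo_1": 1000.0, "algo_2": 1000.0, "algo_3": 2.0},
    feature_costs={"feature_step_3": 10.0},
    cutoff=5000.0)
print(score)  # 42.0 (= 30 + 10 + 2)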
ASlib Scenario Submissions and Selection
Each team can submit up to 2 new ASlib scenarios, which will be part of the OASC. Scenarios can be submitted either in the ASlib format or as 2 CSV files (i.e., one CSV file with all algorithm runs and the associated performance values and one CSV file with all instance features).
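As a purely illustrative example of the CSV option (the column names and values below are assumptions, not an official template), the two files could look like this:

algorithm_runs.csv:
instance_id,algorithm,performance
inst_01,algo_1,13.4
inst_01,algo_2,2.1

instance_features.csv:
instance_id,feature_1,feature_2
inst_01,0.25,17.0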
If fewer than 10 ASlib scenarios are submitted by June 16, the OASC will use scenarios prepared by the organizers or (scrambled) scenarios from the current ASlib stock. To ensure an unbiased selection of ASlib scenarios, each team can also submit a ranking of the scenarios in ASlib.
Please send an e-mail to lindauer@cs.uni-freiburg.de to submit your scenario.
Available Tools to Parse ASlib Scenarios
- aslib-r (in R)
- ASlibScenario (in Python)
Updates and News
Please subscribe to our mailing list to get news and updates concerning the OASC.
You can also use the mailing list to ask questions related to the OASC.
Organization Team
Marius Lindauer – University of Freiburg
Lars Kotthoff – University of British Columbia
Jan van Rijn – University of Freiburg