Overview
The sixth COSEAL Workshop is a forum for discussing the most recent advances in the automated configuration and selection of algorithms. It will take place on September 17 and 18, 2018, in Paris, France, and is organized by Marc Schoenauer and Michèle Sebag’s group, with help in particular from Heri Rakotoarison and Laurence Fontana.
The workshop will consist of posters and talks about late-breaking research and useful tools, discussions regarding national and international cooperation, and many opportunities to interact with other attendees. Note also that this COSEAL workshop will be co-located with the OpenML Hackathon, which runs from September 17 to 21.
Contact
All inquiries should be sent by email to coseal2018@inria.fr
Registration
Registration is free, but we do not have any funding; hence lunches and dinners will be at your own expense, including the social dinner on Monday evening.
If you would like to present some work at the workshop, either as a talk or as a poster (format A0), we would appreciate it if you submit a proposal for your presentation as soon as possible, and no later than August 24. If you only want to attend and take part in the discussions, please also register early, as the number of seats is limited for security reasons.
To register, with or without presenting, you need to fill in the registration form. Please prepare the answers to the following questions:
- Would you like to present a poster? If yes, what is the topic?
- Would you like to present a talk? If yes, what is the title?
- Will you be attending the get together on Sunday evening, September 16?
- Will you be attending the workshop dinner on Monday, September 17?
- Will you be attending the social activity on September 18, afternoon?
Location
INRIA, 2 Rue Simone IFF, 75012 Paris
How to reach the venue
Metro stations:
- Montgallet (line 8)
- Dugommier (line 6)
- Bercy (lines 6 and 14)
- Gare de Lyon (lines 1, 14, RER A and D, railway)
Accommodation recommendations
There are countless options in Paris, depending on whether you want to stay close to the workshop venue (see for instance hotels, Airbnb, bed-and-breakfasts, etc.) or in a livelier area …
Social Program
Sunday evening, from 19h30 on: Get together on the bank of the river Seine, a few dozen meters upstream from Les Nautes café (weather permitting, but it looks good as of today, Sept. 5). Some drinks and snacks will be provided; feel free to bring your own.
Monday evening, 19h30, dinner together with the participants of the OpenML hackathon, at Restaurant Flam’s, Place Bienvenüe, 32 Avenue du Maine, 75015 Paris.
Tuesday afternoon: Free schedule; one cruise ticket on a Bateau-Mouche (Port de la Bourdonnais, 75007 Paris) will be provided to the 17 registered attendees (3 more are available if you change your mind 🙂).
More details on the intro slides.
Schedule
Monday, September 17
- 09:00 – 09:30: Welcome / Breakfast
- 09:30 – 09:45: Introduction (get the slides)
- 09:45 – 10:45: Multi-objective Parameter Selection and Configuration, Talks T1 and T2
- 10:45 – 11:15: Break and networking
- 11:15 – 12:30: Poster session (P1-P4)
- 12:30 – 14:30: Lunch (on your own)
- 14:30 – 15:30: AutoML: Talks T3 and T4
- 15:30 – 16:00: Presentation of OpenML project
- 16:00 – 16:30: Break and networking
- 16:30 – 17:45: Poster session (P5-P8)
- 17:45 – 18:30: Discussion – collaborations, proposals, you name it
- 19:30 : Social dinner (at Flam’s Montparnasse)
Tuesday, September 18
- 09:00 – 09:30: Breakfast
- 09:30 – 11:00: Online configuration and other challenges, Talks T5, T6, and T7
- 11:00 – 11:10: Presentation of the CLAIRE initiative, Holger Hoos
- 11:10 – 11:40: Break and networking
- 11:40 – 12:55: Poster session (P9-P12)
- 12:55 – 13:15: Closing session – including next COSEAL in Potsdam
- 13:15 – Lunch on your own, and bateau-mouche for the volunteers (ask for a ticket)
Restaurants for Monday and Tuesday lunches
There are many restaurants within short walking distance. A nice area is the Place du Colonel Bourgoin (what food courts should look like 🙂). Note that the area is also well known for its cheap electronic hardware resellers – there are dozens along the way, for instance.
Participants
A list of participants to the COSEAL Workshop will be available at the workshop. Contact coseal2018@inria.fr for more information.
Talks
All talks are given 30 minutes, including questions – but deeper questions can also be asked during the following poster session. Presenters’ names are in bold, regardless of actual contributions to the work.
T1: Practical Design Space Exploration
Luigi Nardi, Stanford University
Abstract: This talk focuses on the challenge of software and hardware multi-objective self-configurability. We introduce a new methodology based on statistical machine learning, and a corresponding software framework, HyperMapper, for the systematic, automated optimization of algorithmic and implementation parameters. The proposed methodology follows a white-box model which is simple to understand and interpret (unlike, for example, neural networks) and helps the user better understand the results of the automatic search.
We first apply the new methodology to the automatic static tuning of field-programmable gate arrays (FPGAs) within the recently introduced Spatial programming language, minimizing design run-time and compute logic. Our results show that HyperMapper provides better or competitive Pareto fronts compared to state-of-the-art baselines, with an 8x improvement in sampling budget for most of the benchmarks explored. We then investigate the optimization of the multi-objective computer vision benchmark SLAMBench on embedded, mobile and discrete GPUs. The resulting Pareto front encompasses improvements of 6.3x in runtime and 2.2x in accuracy.
T2: How do Performance Indicator Parametrizations Influence the Assessment of Algorithm Portfolios? Get the slides
Pascal Kerschke, Jakob Bossek and Heike Trautmann, University of Münster
Abstract: When assessing the performance of optimization algorithms, researchers tend to rely on commonly used performance indicators (PIs) such as the Penalized Average Runtime (e.g., PAR10) or the Expected Running Time (ERT). Depending on the method selected for aggregating an optimizer’s runtimes — across multiple runs on the same problem instance — the overall performance depends on the variability across runs and hence can be prone to outliers. Aside from the aggregation method, some indicators such as PAR10 use a penalty factor, which also influences the optimizer’s overall performance.
As high performing optimization algorithms ideally address two (often conflicting) objectives — namely, finding an optimal solution often (low failure rate) and fast (low running time) — a multi-objective PI such as the hypervolume (HV) indicator incorporating both objectives provides a promising alternative to the aforementioned single-objective state-of-the-art PIs.
Within this work we compare commonly used PIs and the impact of their parameters, based on an exemplary performance benchmark of state-of-the-art inexact TSP solvers and single-objective continuous black-box optimizers. In doing so, we also provide a methodological approach for analyzing the robustness of the optimization algorithms at hand w.r.t. the different parametrizations of the respective PIs. The results provide valuable insights, which in turn might help researchers in future work to (a) choose a reasonable PI for their respective experiments, and (b) give feedback on the robustness of their results.
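For readers less familiar with these indicators, the sketch below shows one common way of computing PAR10 and ERT from the runtimes of several runs on a single instance; the exact conventions used in the talk may differ, and all names in the snippet are illustrative.

```python
import numpy as np

def par_k(runtimes, success, cutoff, k=10):
    """Penalized average runtime: unsuccessful runs count as k * cutoff (PAR10 for k=10)."""
    penalized = np.where(success, runtimes, k * cutoff)
    return penalized.mean()

def ert(runtimes, success, cutoff):
    """Expected running time: total time spent divided by the number of successful runs."""
    total_time = np.where(success, runtimes, cutoff).sum()
    n_success = success.sum()
    return total_time / n_success if n_success > 0 else np.inf

# Example: five runs on one instance with a 300s cutoff, two of which timed out.
runtimes = np.array([12.0, 45.0, 300.0, 7.5, 300.0])
success = np.array([True, True, False, True, False])
print("PAR10:", par_k(runtimes, success, cutoff=300))
print("ERT:  ", ert(runtimes, success, cutoff=300))
```

With this toy data the two timeouts dominate the PAR10 score (≈1213s versus an ERT of ≈222s), which illustrates exactly the sensitivity to the penalty factor and the aggregation method that the abstract discusses.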
T3: An active meta-learning algorithm as a solution to the AutoML problem
Lisheng Sun, TAU team, Univ. Paris-Sud, CNRS and INRIA
Abstract: We present an active meta-learning approach to model selection or algorithm recommendation. We adopt the point of view of “collaborative filtering” recommender systems, in which the problem is reduced to a missing-data problem: given a sparsely populated matrix of performances of algorithms on given tasks, predict the missing performances; more particularly, predict which algorithm will perform best on a new dataset (empty row). In this work, we propose and study an active learning version of the CofiRank recommender algorithm and compare it with baseline methods. Our benchmark involves three real-world datasets (from StatLog, OpenML, and AutoML) and artificial data. Our results indicate that CofiRank rapidly finds well-performing algorithms on new datasets at reasonable computational cost.
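To illustrate the collaborative-filtering view described above (not CofiRank itself, which optimizes a ranking objective), here is a minimal matrix-completion sketch: a sparse dataset-by-algorithm performance matrix is factorized from its observed entries, and the algorithm with the highest predicted performance is recommended for a new dataset. All data and names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse performance matrix: rows = datasets, columns = algorithms, NaN = not evaluated.
P = np.array([[0.81, np.nan, 0.74],
              [np.nan, 0.65, 0.70],
              [0.90, 0.88, np.nan],
              [np.nan, np.nan, 0.60]])   # last row: a new dataset, almost empty

mask = ~np.isnan(P)
n_data, n_algo, rank = P.shape[0], P.shape[1], 2
U = 0.1 * rng.standard_normal((n_data, rank))   # dataset factors
V = 0.1 * rng.standard_normal((n_algo, rank))   # algorithm factors

# Masked stochastic gradient descent on the observed entries only.
for _ in range(2000):
    for i, j in zip(*np.nonzero(mask)):
        err = P[i, j] - U[i] @ V[j]
        U[i] += 0.05 * err * V[j]
        V[j] += 0.05 * err * U[i]

pred = U @ V.T                                   # completed performance matrix
print("recommended algorithm for the new dataset:", int(np.argmax(pred[3])))
```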
T4: Monte Carlo Tree Search for Algorithm Configuration
Herilalaina Rakotoarison and Michèle Sebag, TAU team, Univ. Paris-Sud, CNRS and INRIA
Abstract: The sensitivity of machine learning (ML) algorithms w.r.t. their hyper-parameters and the difficulty of finding the ML algorithm and hyper-parameter setting best suited to a given dataset have led to the rapidly developing field of automated machine learning (AutoML), at the crossroads of meta-learning and structured optimization. Several international AutoML challenges have been organized since 2015, motivating the development of the Bayesian-optimization-based approach AutoSkLearn and the bandit-based approach Hyperband. In this paper, a new approach, called Monte Carlo Tree Search for Algorithm Configuration (MOSAIC), is presented, fully exploiting the tree structure of the algorithm portfolio and hyper-parameter search space. Experiments on 133 datasets from the OpenML repository show that MOSAIC’s performance matches that of AutoSkLearn.
T5: QuSandbox: A lifecycle based approach to model governance
Sri Krishnamurthy, Northeastern University
Abstract: As more and more open-source technologies penetrate enterprises, data scientists have a plethora of choices for building, testing and scaling models. In addition, data scientists have been able to leverage the growing support for cloud-based infrastructure and open data sets to develop machine learning applications. Even though there are multiple solutions and platforms available for building machine learning solutions, challenges remain in adopting machine learning in the enterprise. Many of these challenges are associated with how the machine learning process can be formalized. As the field matures, a formal mechanism for a replicable, interpretable, auditable process covering the complete machine learning pipeline, from data ingestion to deployment, is warranted. Projects like Docker, BinderHub and MLflow are efforts in this quest, and research and industry efforts on replicable machine learning processes are gaining steam. Heavily regulated industries such as finance and healthcare are looking for best practices to enable their research teams to reproduce research and adopt best practices in model governance.
We present QuSandbox, an end-to-end, workflow-based system that enables the creation and deployment of data science workflows within the enterprise. Our environment supports AWS and the Google Cloud Platform and incorporates model and data provenance throughout the life cycle of model development.
T6: Algorithm Selection in the Wild – The Sparkle Challenges
Chuan Luo, Holger Hoos, Leiden Universiteit
Abstract: TBA
T7: Towards an Adaptive CMA-ES Configurator
Sander van Rijn, Carola Doerr, and Thomas Bäck – Leiden Universiteit – Sorbonne Université/CNRS
Abstract: Recent work has shown that significant performance gains over state-of-the-art CMA-ES variants can be obtained by a recombination of their algorithmic modules. It seems plausible that further improvements can be realized by an adaptive selection of these configurations. We address this question by quantifying the potential performance gain of such an online algorithm selection approach.
In particular, we study the advantage of structurally adaptive CMA-ES variants on the functions F1, F10, F15, and F20 of the BBOB test suite. Our research reveals that significant speedups might be possible for these functions. Quite notably, significant performance gains might already be possible by adapting the configuration only once. More precisely, we show that for the tested problems such a single configuration switch can result in performance gains of up to 22%. With such a significant indication for improvement potential, we hope that our results trigger an intensified discussion of online structural algorithm configuration for CMA-ES variants.
Paper published at PPSN 2018, pp. 54-65.
Posters
Each talk will also be presented in the poster session immediately following its oral presentation, for deeper interaction with the audience (those posters are not listed here).
Presenters’ names are in bold, regardless of actual contributions to the work.
Session 1
P1: HASCO – Hierarchical Algorithm Selection and Configuration
Marcel Wever, Paderborn University
Abstract: On this poster, we present an approach to automated software configuration based on the hierarchical decomposition of the configuration task. Contrary to common software configuration approaches, in this model a solution to the configuration problem is not an assignment of values to parameters but a set of parametrized components among which the required interfaces have been resolved. Since required interfaces may be resolved recursively, the space of solutions is strictly larger than with previous techniques, because there is no general bound on the number of component instances used and, hence, on the number of decisions that need to be made.
We solve the configuration problem by reducing it to a hierarchical AI planning problem, which is then solved with a standard planning algorithm. We evaluate our approach for the configuration of machine learning pipelines and multi-objective evolutionary algorithms.
P2: Learning Multiple Defaults for Machine Learning Algorithms
Florian Pfisterer, Jan N. van Rijn, Philipp Probst, Andreas Müller, Bernd Bischl, LMU
Abstract: The performance of modern machine learning methods highly depends on their hyperparameter configurations. One simple way of selecting a configuration is to use default settings, often proposed along with the publication and implementation of a new algorithm. Those default values are usually chosen in an ad-hoc manner to work well enough on a wide variety of datasets. To address this problem, different automatic hyperparameter configuration algorithms have been proposed, which select an optimal configuration per dataset. This principled approach usually improves performance, but adds additional algorithmic complexity and computational costs to the training procedure. As an alternative, we propose learning a set of complementary default values from a large database of prior empirical results. Selecting an appropriate configuration on a new dataset then requires only a simple, efficient and embarrassingly parallel search over this set. We demonstrate the effectiveness and efficiency of the proposed approach in comparison to random search and Bayesian optimization.
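As an illustration of the idea (not necessarily the authors' exact procedure), a set of complementary defaults can be obtained by greedy forward selection over a matrix of prior results: repeatedly add the configuration that most improves the best-per-dataset performance of the set chosen so far. All names and data below are placeholders.

```python
import numpy as np

def greedy_defaults(perf, n_defaults):
    """perf[i, j]: performance (higher is better) of configuration i on dataset j."""
    chosen, best_so_far = [], np.full(perf.shape[1], -np.inf)
    for _ in range(n_defaults):
        # Gain of each candidate: average improvement of the per-dataset maximum.
        gains = np.maximum(perf, best_so_far).mean(axis=1)
        gains[chosen] = -np.inf                      # do not re-pick a chosen configuration
        nxt = int(np.argmax(gains))
        chosen.append(nxt)
        best_so_far = np.maximum(best_so_far, perf[nxt])
    return chosen

rng = np.random.default_rng(1)
perf = rng.random((50, 20))           # 50 candidate configurations, 20 prior datasets
defaults = greedy_defaults(perf, 3)   # a set of 3 complementary defaults
print(defaults)
```

On a new dataset, one would then simply evaluate the (few) selected defaults in parallel and keep the best, which is the cheap search over the set that the abstract describes.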
P3: Surprisingly Important Design Dimensions in Bayesian Optimization
Katharina Eggensperger, University of Freiburg
Abstract: Bayesian Optimization (BO) is one of the most efficient global optimization algorithms for expensive black-box functions. Across the literature, several design decisions have been investigated which may seem unimportant at first glance; however, we show that making the right call can substantially improve results. In particular, we focus on random forests (RFs), which are rarely studied as a surrogate model for BO, and show that they can perform competitively compared to the commonly used Gaussian processes (GPs). Surprisingly, some design decisions have been overestimated, and we show that these can even degrade results, while other, underestimated decisions are crucial for obtaining state-of-the-art performance. We study these design dimensions using a comprehensive set of test functions commonly used in the BO literature. Furthermore, we study how well our findings generalize to GP-based BO and show that it also benefits from these design choices. On two hyperparameter optimization problems, we validate our results and find that GP-based and RF-based BO perform quite similarly when using the same design decisions.
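To make the RF-as-surrogate idea concrete, here is a minimal BO loop that uses the spread of the per-tree predictions of a random forest as an uncertainty estimate and selects the next point by expected improvement. This is a generic sketch, not the authors' setup; the toy objective and all parameters are placeholders.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def f(x):                                     # expensive black-box to minimize (toy)
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))           # initial design
y = f(X).ravel()

for _ in range(20):
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    cand = rng.uniform(-3, 3, size=(500, 1))  # candidate points
    per_tree = np.stack([t.predict(cand) for t in rf.estimators_])   # (trees, candidates)
    mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0) + 1e-9
    improve = y.min() - mu                    # expected improvement for minimization
    ei = improve * norm.cdf(improve / sigma) + sigma * norm.pdf(improve / sigma)
    x_next = cand[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best value found:", y.min())
```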
P4: A self-adaptive per-instance algorithm selection
Mohamed Amine EL MAJDOULI, Mohammed V University
Abstract: The proposed Self-Adaptive Per-Instance Algorithm Selection (SAPIAS) concept is composed of four cooperating layers. The first two layers are meant to run in an online mode, the last two in an offline mode.
The first layer, Prediction, performs classical per-instance algorithm selection: based on a pre-built prediction model, a schedule of algorithms is selected to optimize a new given instance. Once the optimization process has finished, the predicted schedule along with the solution found are passed to the second layer, the Advisor. In this layer, two operations are triggered in turn. The first one uses a low-budget Prediction Checker algorithm to optimize the new instance starting from the solution received from the first layer. The goal here is not to check the prediction accuracy of the previous layer, but rather to detect whether this solution can still be improved. The Prediction Checker implements all the improvement strategies used by all algorithms in the algorithm space underlying the prediction model of the first layer. If the solution is improved, the Prediction Checker identifies the algorithm(s) to which the successful improvement strategy(ies) belong, and the resulting set of active algorithms in the schedule is passed to the next layer.
The third layer, AAC, aims to find an algorithm by automatically configuring the different components used by the set of algorithms received. To this end, a two-phase automatic algorithm configuration is run. In the first phase, configuration is performed at the component level: an algorithmic framework is constructed first, and an automatic configuration is then performed to find a well-performing configuration. Beyond the technical implementation, an important point is that this configuration is evaluated using only the new instance. If a successful configuration is found, the second phase performs an extensive parameter tuning, this time over all available instances. The parameter tuning should first of all ensure a good parameter set for the new instance, and also build a performance space for the new algorithm, mapping its performance to each instance.
In the fourth layer, Update, the existing performance space is merged with that of the new algorithm, making it possible to remove algorithms on whose instances the new algorithm performs better. In this layer, the prediction model is also updated by regenerating it from features extracted from the newly given instances. This update is governed by a threshold on the number of new instances or algorithms waiting to be added.
Session 2
P5: Learning Range-Query Predictors
Vitalik Melnikov, Paderborn University
Abstract: We study a generalization of the standard setting of supervised learning, in which we seek to learn a model in the form of what we call a range-query predictor (RQP). What we mean by this is a model that accepts partially specified attribute values as an input (query) and produces a set-valued prediction as an output, namely the set of values of the target variable that are possible under any precise input within the range specified by the query. Thus, while the training data is still assumed to be precise, like in standard supervised learning, the model induced from this data is applied and evaluated on data that is only partially specified, typically in the form of interval ranges (hence the term range-query). Apart from predictive accuracy measured by a suitable loss function on sets (that compares predicted with ground-truth output sets), an important aspect is efficiency: given a range query as input, the predictor should be able to compute the output set efficiently. We present a formalization of this problem as well as a first method based on regression trees.
P6: Automatic Gradient Boosting
Janek Thomas, Ludwig-Maximilians-University Munich
Abstract: Automatic machine learning performs predictive modeling with high-performing machine learning tools without human interference. This is achieved by making machine learning applications parameter-free, i.e. only a dataset is provided, while the complete model selection and model building process is handled internally through (often meta-) optimization. Projects like Auto-WEKA and auto-sklearn aim to solve the Combined Algorithm Selection and Hyperparameter optimization (CASH) problem, resulting in huge configuration spaces. However, for most real-world applications, optimizing over only a few key learning algorithms can not only be sufficient, but even beneficial. The latter becomes apparent when one considers that models have to be validated, explained, deployed and maintained. Here, less complex models are often preferred for validation or efficiency reasons, or are even a strict requirement. Automatic gradient boosting simplifies this idea one step further, using only gradient boosting as a single learning algorithm in combination with model-based hyperparameter tuning, threshold optimization and encoding of categorical features. We introduce this general framework as well as a concrete implementation called autoxgboost. It is compared to current AutoML projects on 16 datasets and, despite its simplicity, achieves comparable results on about half of the datasets, as well as performing best on two.
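Roughly, the single-learner idea amounts to tuning one gradient boosting model plus its decision threshold. The actual autoxgboost is an R/mlr package using model-based optimization; the Python sketch below substitutes random search and is only meant to illustrate the workflow, with arbitrary parameter ranges.

```python
import numpy as np
from scipy.stats import randint, uniform
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import f1_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Hyperparameter search over a single learner (random search here as a stand-in
# for model-based optimization).
search = RandomizedSearchCV(
    XGBClassifier(),
    param_distributions={
        "n_estimators": randint(50, 500),
        "max_depth": randint(2, 10),
        "learning_rate": uniform(0.01, 0.3),
        "subsample": uniform(0.5, 0.5),
    },
    n_iter=25,
    cv=3,
    random_state=0,
).fit(X_tr, y_tr)

# Threshold optimization: pick the decision threshold that maximizes F1 on held-out data.
proba = search.predict_proba(X_val)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
best_t = max(thresholds, key=lambda t: f1_score(y_val, (proba >= t).astype(int)))
print(search.best_params_, "best threshold:", best_t)
```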
P7: Using meta learning for classifier selection and hyperparameter optimization via HTN planning
Helena Graf, Paderborn University
Abstract: The increasing demand for machine learning functionality has resulted in a number of tools that aim to automatically recommend machine learning pipelines for new datasets (AutoML). However, those tools often do not extensively use knowledge gathered in past experiments to guide their search. In this paper, we thus aim to improve existing approaches by creating a solution that is able to estimate the performance of a given (partial) machine learning pipeline. In particular, we extend ML-Plan, a new approach to the AutoML problem based on hierarchical planning, by providing it with a heuristic that estimates pipeline performance as well as an adapted search function. The new search function is a limited discrepancy search that aims to balance the estimates of pipeline performance with the actual pipeline performance obtained through intermediate evaluations during the search. An evaluation of the approach is under way.
P8: Meta-Learning for Convolutional Neural Networks configuration
Yesmina Jaafra, Segula Technologies – ICUBE
Abstract: In this work, we aim at automating the task of CNN architecture selection. In a first phase, we will use a model based on reinforcement learning to build optimized architectures for existing datasets. The next phase is based on meta-data analysis to allocate one of the identified architectures to process a new dataset. This methodology will allow transferring previous knowledge and decreasing the computational complexity and time needed to classify new datasets. The meta-learning for adaptive CNN architecture selection is part of an industrial project carried out by Segula Technologies for object recognition within point cloud data projected onto 2D images. These objects belong to refineries, platforms and petrochemical sectors (pipes, valves, pumps, etc.). They have been scanned and stored within homogeneous datasets that need to be classified into different subtypes.
Our methodology is inspired by meta-learning for algorithm selection (Vilalta et al., 2010). Meta-features (e.g. the number of instances, the number of classes) are computed for a given number of datasets. Then, the performance of a range of learning algorithms is measured on these datasets. Meta-learning consists in learning a correspondence between dataset features and algorithm performance. Many studies of the impact of the problem’s nature on the performance of learning models have revealed that there is no single best algorithm for all tasks. These empirical findings were confirmed theoretically by the “No Free Lunch” theorems (Wolpert & Macready, 1997), which imply that there is no universally well-performing optimization strategy.
P8b: BOHB: Robust and Efficient Hyperparameter Optimization at Scale
Stefan Falkner, Aaron Klein and Frank Hutter, University of Freiburg
Abstract: Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible. On the other hand, bandit-based configuration evaluation approaches based on random search lack guidance and do not converge to the best configurations as quickly. Here, we propose to combine the benefits of both Bayesian optimization and bandit-based methods, in order to achieve the best of both worlds: strong anytime performance and fast convergence to optimal configurations. We propose a new practical state-of-the-art hyperparameter optimization method, which consistently outperforms both Bayesian optimization and Hyperband on a wide range of problem types, including high-dimensional toy functions, support vector machines, feed-forward neural networks, Bayesian neural networks, deep reinforcement learning, and convolutional neural networks. Our method is robust and versatile, while at the same time being conceptually simple and easy to implement.
Published at ICML 2018, see http://proceedings.mlr.press/v80/falkner18a.html
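At its core, the combination runs Hyperband-style successive halving while replacing uniform random sampling of new configurations with draws from a density model fitted to the best configurations observed so far. The snippet below is a heavily simplified, one-dimensional sketch of that idea, not the actual BOHB/hpbandster implementation; the objective, budgets and thresholds are arbitrary.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def loss(x, budget):
    """Toy objective: the evaluation noise shrinks as the budget grows."""
    return (x - 0.7) ** 2 + rng.normal(0, 0.5 / budget)

history = []  # (configuration, loss observed at the largest budget)

def sample_config():
    """Model-based sampling: KDE over the best half of observed configs, else uniform."""
    if len(history) >= 6:
        best = sorted(history, key=lambda h: h[1])[: len(history) // 2]
        kde = gaussian_kde(np.array([c for c, _ in best]))
        return float(np.clip(kde.resample(1)[0, 0], 0.0, 1.0))
    return float(rng.uniform(0.0, 1.0))

# Hyperband-style brackets: start many configurations on a small budget and
# keep the better half at each rung, doubling the budget.
for bracket in range(12):
    configs, budget = [sample_config() for _ in range(8)], 1
    while len(configs) > 1:
        losses = [loss(c, budget) for c in configs]
        keep = np.argsort(losses)[: len(configs) // 2]
        configs = [configs[i] for i in keep]
        budget *= 2
    history.append((configs[0], loss(configs[0], budget)))

print("best configuration found:", min(history, key=lambda h: h[1])[0])
```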
Session 3
P9: Challenges in Algorithm Control
André Biedenkapp, University of Freiburg
Abstract: General algorithm configuration tools enable algorithms to achieve peak performance. Most research so far has focused on either finding one configuration that performs well across homogeneous instances or a set of configurations that performs well across heterogeneous instances. These configurations stay fixed during the whole solving process of the target algorithm. However, it has been shown that different parameter settings are optimal at different stages of the solving process.
In this work we make use of advances in deep reinforcement learning to learn agents that are capable of adapting the parameters of an algorithm on the fly.
P10: Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari
Patryk Chrabąszcz, Ilya Loshchilov and Frank Hutter, University of Freiburg
Abstract: Evolution Strategies (ES) have recently been demonstrated to be a viable alternative to reinforcement learning (RL) algorithms on a set of challenging deep RL problems, including Atari games and MuJoCo humanoid locomotion benchmarks. While the ES algorithms in that work belonged to the specialized class of natural evolution strategies (which resemble approximate gradient RL algorithms, such as REINFORCE), we demonstrate that even a very basic canonical ES algorithm can achieve the same or even better performance. This success of a basic ES algorithm suggests that the state of the art can be advanced further by integrating the many advances made in the field of ES over the last decades. We also demonstrate qualitatively that ES algorithms have very different performance characteristics than traditional RL algorithms: on some games, they learn to exploit the environment and perform much better, while on others they can get stuck in suboptimal local minima. Combining their strengths with those of traditional RL algorithms is therefore likely to lead to new advances in the state of the art.
Published at IJCAI 2018. See https://arxiv.org/abs/1802.08842
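For reference, the canonical ES referred to in the abstract boils down to a very simple update: sample perturbations of the current parameter vector, evaluate them, and move toward a log-weighted average of the best ones. The sketch below shows this update on a toy objective; in the paper the parameters are Atari policy weights, and the population sizes and objective here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):                        # toy objective standing in for an Atari return
    return -np.sum(theta ** 2)             # higher is better

dim, lam, mu, sigma = 20, 50, 12, 0.1       # dimension, offspring, parents, mutation step
theta = rng.standard_normal(dim)            # current parent (e.g. policy weights)

# Log-decreasing recombination weights for the mu best offspring (canonical ES).
w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
w /= w.sum()

for gen in range(200):
    eps = rng.standard_normal((lam, dim))               # sample lam perturbations
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    top = np.argsort(scores)[::-1][:mu]                 # indices of the mu best offspring
    theta = theta + sigma * (w @ eps[top])               # weighted recombination step

print("final fitness:", fitness(theta))
```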
P11: Deep Learning Assisted Heuristic Tree Search
André Hottung, Shunji Tanaka, Kevin Tierney, Bielefeld University
Abstract: One of the key challenges for operations researchers solving real-world problems is designing and implementing high-quality heuristics to guide their search procedures. In the past, machine learning techniques have failed to play a major role in operations research approaches, especially in terms of guiding branching and pruning decisions. We integrate deep neural networks into a heuristic tree search procedure to decide which branch to choose next and to estimate a bound for pruning the search tree of an optimization problem. We call our approach Deep Learning assisted heuristic Tree Search (DLTS) and apply it to a well-known problem from the container terminals literature, the container pre-marshalling problem (CPMP). Our approach is able to learn heuristics customized to the CPMP solely through analyzing the solutions to CPMP instances, and applies this knowledge within a heuristic tree search to produce the highest quality heuristic solutions to the CPMP to date.
See https://arxiv.org/abs/1709.09972
P12: Parameter tuning for the Late Acceptance Hill Climbing algorithm and its parameter-less variant
Mehdi EL KRARI, Mohammed V University
To be confirmed