Instance-Specific Algorithm Selection via Multi-Output Learning

Authors: Kai Chen, Yong Dou, Qi Lv, and Zhengfa Liang
National Laboratory for Parallel and Distributed Processing, National University of Defense Technology, Changsha 410037, China.
College of Computer, National University of Defense Technology, Changsha 410037, China.

Abstract

Instance-specific algorithm selection techniques have been applied successfully in many research fields, such as constraint satisfaction and planning. Researchers have increasingly sought to model the potential relations between candidate algorithms when performing algorithm selection. In this study, we propose an instance-specific algorithm selection method based on multi-output learning, which captures these relations more directly. Three kinds of multi-output learning methods are used to predict the performance of the candidate algorithms: (1) multi-output regressor stacking; (2) multi-output extremely randomized trees; and (3) hybrid single-output and multi-output trees. Experimental results on 11 SAT datasets and 5 MaxSAT datasets indicate that the proposed methods outperform state-of-the-art algorithm selection methods.

Keywords: performance prediction, algorithm selection, multi-output learning, extremely randomized trees, constraint satisfaction
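
The sketch below (not taken from the paper) illustrates the general selection scheme the abstract describes: a single multi-output regressor is trained to predict the performance of every candidate algorithm from instance features, and for each new instance the algorithm with the best predicted performance is selected. It assumes runtime as the performance measure and uses scikit-learn's ExtraTreesRegressor, which supports multi-output targets natively, as a stand-in for the paper's multi-output extremely randomized trees.

# Minimal sketch of instance-specific algorithm selection via
# multi-output regression. Assumptions (not from the paper):
# performance is measured as runtime, and scikit-learn's
# ExtraTreesRegressor stands in for multi-output extremely
# randomized trees.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def train_selector(X_train, runtimes_train):
    # X_train: (n_instances, n_features) instance features.
    # runtimes_train: (n_instances, n_algorithms) observed runtimes,
    # one column per candidate algorithm (the multi-output target).
    model = ExtraTreesRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, runtimes_train)  # all outputs are fitted jointly
    return model

def select_algorithms(model, X_test):
    # Predict the runtime of every candidate on every test instance
    # and pick, per instance, the algorithm with the smallest prediction.
    predicted = model.predict(X_test)   # shape (n_instances, n_algorithms)
    return np.argmin(predicted, axis=1)

Training on a joint runtime matrix, rather than one regressor per algorithm, is what allows the model to exploit correlations between the candidate algorithms' performances.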


Publication history

Received: 30 April 2016
Revised: 15 June 2016
Accepted: 26 September 2016
Published: 06 April 2017
Issue date: April 2017

Copyright

© The author(s) 2017

Acknowledgements

This work was mainly supported by the National Natural Science Foundation of China (Nos. 61125201, 61303070, and U1435219).
