Open Access

Semiparametric Preference Learning

College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA.
Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China.

Abstract

Unlike traditional supervised learning problems, preference learning learns from data given in the form of pairwise preference relations between instances. Existing preference learning methods are either parametric or nonparametric in nature. In this paper, we propose a semiparametric preference learning model, abbreviated as SPPL, which aims to combine the strengths of the parametric and nonparametric approaches. SPPL uses multiple Gaussian processes that are linearly coupled to determine the preference relations between instances. SPPL is more powerful than previous models while keeping the computational complexity low (linear in the number of distinct instances). We also devise an efficient algorithm for model learning. Empirical studies on two real-world data sets show that SPPL outperforms related preference learning methods.
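
As a rough illustration of the model structure described above (and not the authors' algorithm), the following sketch couples several Gaussian-process latent functions linearly into a single score per instance and compares scores to predict a pairwise preference. The kernel choice, the coupling weights, and the hard comparison rule are assumptions made purely for illustration.

```python
# A minimal sketch, not the authors' implementation: several independent
# Gaussian-process latent functions are drawn over a set of instances and
# linearly coupled into one score per instance; instance i is predicted to be
# preferred over instance j when its coupled score is higher.  The RBF kernel,
# the coupling weights, and the comparison rule are illustrative assumptions.
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    """Squared-exponential kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
n, d, m = 6, 3, 2                  # n instances, d features, m latent GPs
X = rng.normal(size=(n, d))        # the distinct instances

# Covariance over the instances, shared here by the m latent GP functions.
K = rbf_kernel(X, X) + 1e-8 * np.eye(n)
L = np.linalg.cholesky(K)

# Sample m independent GP function values at the n instances (n x m matrix).
F = L @ rng.normal(size=(n, m))

# Linear coupling: combine the m latent functions into one score per instance.
w = rng.normal(size=m)
scores = F @ w                     # shape (n,)

# Predicted preference relation between a pair of instances.
i, j = 0, 1
print(f"instance {i} preferred over instance {j}: {scores[i] > scores[j]}")
```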

Tsinghua Science and Technology
Pages 257-264
Cite this article:
Zhen Y, Song Y, Yeung D-Y. Semiparametric Preference Learning. Tsinghua Science and Technology, 2014, 19(3): 257-264. https://doi.org/10.1109/TST.2014.6838196

Received: 18 April 2014
Accepted: 22 April 2014
Published: 18 June 2014
© The author(s) 2014