Scientific journals : Article
Physical, chemical, mathematical & earth Sciences : Mathematics
http://hdl.handle.net/10993/47480
Aggregated hold-out
English
Maillard, Guillaume [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Mathematics (DMATH)]
Arlot, Sylvain [Université Paris-Saclay > Mathematics > Professor]
Lerasle, Matthieu [ENSAE > CR]
Jan-2021
Journal of Machine Learning Research
MIT Press
Volume: 22
Peer reviewed: Yes (verified by ORBilu)
Audience: International
ISSN: 1532-4435
eISSN: 1533-7928
Brookline, MA
[en] cross-validation ; aggregation ; bagging ; hyperparameter selection ; regularized kernel regression
[en] Aggregated hold-out (agghoo) is a method that averages learning rules selected by hold-out (that is, cross-validation with a single split). We provide the first theoretical guarantees on agghoo, ensuring that it can be used safely: agghoo performs at worst like the hold-out when the risk is convex. The same holds true in classification with the 0--1 risk, with an additional constant factor. For the hold-out, oracle inequalities are known for bounded losses, as in binary classification. We show that similar results can be proved, under appropriate assumptions, for other risk-minimization problems. In particular, we obtain an oracle inequality for regularized kernel regression with a Lipschitz loss, without requiring that the $Y$ variable or the regressors be bounded. Numerical experiments show that aggregation brings a significant improvement over the hold-out and that agghoo is competitive with cross-validation.
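
As a rough illustration of the procedure described in the abstract, the sketch below implements agghoo for ridge regression with the squared loss: on each of V independent splits, every candidate regularization parameter is fitted on the training part, the hold-out selects the candidate with smallest validation risk, and the V selected predictors are averaged. The candidate family (ridge), the split count V, the 80/20 split ratio and the synthetic data are illustrative assumptions, not choices made in the paper, which treats more general risk-minimization problems such as regularized kernel regression with a Lipschitz loss.

    # Minimal sketch of aggregated hold-out (agghoo); all parameter choices below
    # are illustrative assumptions, not taken from the paper.
    import numpy as np

    def ridge_fit(X, y, lam):
        """Ridge coefficients (X^T X + lam * I)^{-1} X^T y."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    def agghoo_predict(X, y, X_test, lams, V=10, train_frac=0.8, rng=None):
        """Average the predictors selected by hold-out on V independent splits."""
        rng = np.random.default_rng(rng)
        n = X.shape[0]
        n_train = int(train_frac * n)
        preds = np.zeros(X_test.shape[0])
        for _ in range(V):
            perm = rng.permutation(n)
            tr, va = perm[:n_train], perm[n_train:]
            # Hold-out step: fit every candidate on the training part and
            # keep the one with the smallest validation risk (squared loss here).
            best_beta, best_risk = None, np.inf
            for lam in lams:
                beta = ridge_fit(X[tr], y[tr], lam)
                risk = np.mean((X[va] @ beta - y[va]) ** 2)
                if risk < best_risk:
                    best_beta, best_risk = beta, risk
            # Aggregation step: average the predictions of the selected rules.
            preds += X_test @ best_beta
        return preds / V

    # Illustrative usage on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    beta_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
    y = X @ beta_true + rng.standard_normal(200)
    X_test = rng.standard_normal((20, 5))
    y_hat = agghoo_predict(X, y, X_test, lams=np.logspace(-3, 2, 10), V=10, rng=1)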
Université Paris-Sud, University of Luxembourg
European Union Horizon 2020
Researchers

File(s) associated to this reference

Fulltext file(s):

19-624.pdf (Publisher postprint, 751.34 kB, Open access)

