ORBi

Results 1-4 of 4.
Search query: uid:50040641
A model-based approach to density estimation in sup-norm
Maillard, Guillaume. E-print/Working paper (2022)

Building on the ℓ-estimators of Baraud, we define a general method for finding a quasi-best approximant in sup-norm to a target density p⋆ belonging to a given model m, based on independent samples drawn from distributions p⋆_i which average to p⋆ (which does not necessarily belong to m). We also provide a general method for selecting among a countable family of such models. Both of these estimators satisfy oracle inequalities in the general setting. The quality of the bounds depends on the volume of sets C on which |f| is close to its maximum, where f = p − q for some p, q ∈ m (or p ∈ m and q ∈ m′, in the case of model selection). In particular, using piecewise polynomials on dyadic partitions of R^d, we recover optimal rates of convergence for classes of functions with anisotropic smoothness, with optimal dependence on semi-norms measuring the smoothness of p⋆ in the coordinate directions. Moreover, our method adapts to the anisotropic smoothness, as long as it is smaller than 1 plus the degree of the polynomials.

Aggregated hold-out for sparse linear regression with a robust loss function
Maillard, Guillaume. In Electronic Journal of Statistics (2022), 16(1), 935-997

Sparse linear regression methods generally have a free hyperparameter which controls the amount of sparsity and is subject to a bias-variance tradeoff.
This article considers the use of Aggregated hold-out to aggregate over values of this hyperparameter, in the context of linear regression with the Huber loss function. Aggregated hold-out (Agghoo) is a procedure which averages estimators selected by hold-out (cross-validation with a single split). In the theoretical part of the article, it is proved that Agghoo satisfies a non-asymptotic oracle inequality when it is applied to sparse estimators which are parametrized by their zero-norm. In particular, this includes a variant of the Lasso introduced by Zou, Hastie and Tibshirani (2007). Simulations are used to compare Agghoo with cross-validation. They show that Agghoo performs better than CV when the intrinsic dimension is high and when there are confounders correlated with the predictive covariates.

Robust density estimation with the L1-loss. Applications to the estimation of a density on the line satisfying a shape constraint
Baraud, Yannick; Halconruy, Hélène; Maillard, Guillaume. E-print/Working paper (2022)

We solve the problem of estimating the distribution of presumed i.i.d. observations for the total variation loss. Our approach is based on density models and is versatile enough to cope with many different ones, including some density models for which the Maximum Likelihood Estimator (MLE for short) does not exist. We mainly illustrate the properties of our estimator on models of densities on the line that satisfy a shape constraint. We show that it possesses some similar optimality properties, with regard to some global rates of convergence, as the MLE does when it exists.
It also enjoys some adaptation properties with respect to some specific target densities in the model, for which our estimator is proven to converge at parametric rate. More important is the fact that our estimator is robust, not only with respect to model misspecification, but also to contamination, the presence of outliers among the dataset, and the equidistribution assumption. This means that the estimator performs almost as well as if the data were i.i.d. with density p in a situation where these data are only independent and most of their marginals are close enough in total variation to a distribution with density p. We also show that our estimator converges to the average density of the data, when this density belongs to the model, even when none of the marginal densities belongs to it. Our main result on the risk of the estimator takes the form of an exponential deviation inequality which is non-asymptotic and involves explicit numerical constants. We deduce from it several global rates of convergence, including some bounds for the minimax L1-risks over the sets of concave and log-concave densities. These bounds derive from some specific results on the approximation of densities which are monotone, convex, concave and log-concave. Such results may be of independent interest.

Aggregated hold-out
Maillard, Guillaume ; ; in Journal of Machine Learning Research (2021), 22

Aggregated hold-out (agghoo) is a method which averages learning rules selected by hold-out (that is, cross-validation with a single split). We provide the first theoretical guarantees on agghoo, ensuring that it can be used safely: agghoo performs at worst like the hold-out when the risk is convex.
The same holds true in classification with the 0-1 risk, with an additional constant factor. For the hold-out, oracle inequalities are known for bounded losses, as in binary classification. We show that similar results can be proved, under appropriate assumptions, for other risk-minimization problems. In particular, we obtain an oracle inequality for regularized kernel regression with a Lipschitz loss, without requiring that the Y variable or the regressors be bounded. Numerical experiments show that aggregation brings a significant improvement over the hold-out and that agghoo is competitive with cross-validation.
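The two agghoo entries above describe the same core procedure: split the sample, select an estimator by hold-out risk on each split, then average the selected estimators. A minimal illustrative sketch, not taken from the papers themselves; the toy ridge-regression setup, the function names, and all parameters are our own assumptions:

```python
import numpy as np

def hold_out_select(X, y, train_idx, val_idx, candidates, fit, risk):
    """Fit each candidate hyperparameter on the training split and
    return the fitted predictor with the smallest hold-out risk."""
    fitted = [fit(X[train_idx], y[train_idx], c) for c in candidates]
    risks = [risk(f, X[val_idx], y[val_idx]) for f in fitted]
    return fitted[int(np.argmin(risks))]

def agghoo_predict(X, y, X_new, candidates, fit, risk, n_splits=5, seed=0):
    """Aggregated hold-out: average the predictions of the predictors
    selected by hold-out over several independent random splits."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        train_idx, val_idx = perm[: n // 2], perm[n // 2 :]
        f = hold_out_select(X, y, train_idx, val_idx, candidates, fit, risk)
        preds.append(f(X_new))
    return np.mean(preds, axis=0)

# Toy instantiation: ridge regression, candidates = regularization levels.
def fit_ridge(X, y, lam):
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return lambda Xq: Xq @ w

def squared_risk(f, X, y):
    return np.mean((f(X) - y) ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ w_true + rng.normal(scale=0.5, size=200)
y_hat = agghoo_predict(X, y, X[:10], [0.01, 0.1, 1.0, 10.0],
                       fit_ridge, squared_risk)
```

In the papers the candidates would instead be, for example, zero-norm levels of a Lasso path with the Huber loss; the aggregation step is the same. Note that agghoo averages the selected predictors themselves, whereas cross-validation averages the risks and refits once.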