Results 1-20 of 29. Search equation: ((uid:50036842))

From robust tests to Bayes-like posterior distributions
Baraud, Yannick. E-print/Working paper (2021)
In the Bayes paradigm and for a given loss function, we propose the construction of a new type of posterior distribution for estimating the law of an n-sample. The loss functions we have in mind are based on the total variation distance and the Hellinger distance, as well as some 𝕃_j-distances.
We prove that, with probability close to one, this new posterior distribution concentrates its mass in a neighbourhood of the law of the data, for the chosen loss function, provided that this law belongs to the support of the prior or, at least, lies close enough to it. We therefore establish that the new posterior distribution enjoys some robustness properties with respect to a possible misspecification of the prior or, more precisely, of its support. For the total variation and squared Hellinger losses, we also show that the posterior distribution keeps its concentration properties when the data are only independent, hence not necessarily i.i.d., provided that most of their marginals are close enough to some probability distribution around which the prior puts enough mass. The posterior distribution is therefore also stable with respect to the equidistribution assumption. We illustrate these results with several applications. We consider the problems of estimating a location parameter, or both the location and the scale of a density, in a nonparametric framework. Finally, we also tackle the problem of estimating a density, with the squared Hellinger loss, in a high-dimensional parametric model under some sparsity conditions. The results established in this paper are non-asymptotic and provide, as much as possible, explicit constants.

Tests and estimation strategies associated to some loss functions
Baraud, Yannick, in Probability Theory and Related Fields (2021), 180(3), 799-846
We consider the problem of estimating the joint distribution of n independent random variables.
Given a loss function and a family of candidate probabilities, which we shall call a model, we aim at designing an estimator with values in our model that possesses good estimation properties, not only when the distribution of the data belongs to the model but also when it lies close enough to it. The losses we have in mind are the total variation, Hellinger, Wasserstein and L_p-distances, to name a few. We show that the risk of our estimator can be bounded by the sum of an approximation term, which accounts for the loss between the true distribution and the model, and a complexity term, which corresponds to the bound we would get if this distribution did belong to the model. Our results hold under mild assumptions on the true distribution of the data and are based on exponential deviation inequalities that are non-asymptotic and involve explicit constants. Interestingly, when the model reduces to two distinct probabilities, our procedure results in a robust test whose errors of the first and second kinds only depend on the losses between the true distribution and the two tested probabilities.

Robust Estimation of a Regression Function in Exponential Families
Baraud, Yannick; Chen, Juntong. E-print/Working paper (2020)

Estimating the number of infected persons
Baraud, Yannick; Nourdin, Ivan; Peccati, Giovanni. E-print/Working paper (2020)
The aim of this paper is to provide a confidence interval on the number of persons infected by COVID-19 within the population, from the number of deaths reported in the hospitals and the mortality rate (which is assumed to be known).
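The setting of the abstract above can be illustrated with a back-of-the-envelope sketch. This is not the authors' construction: the binomial model, the normal approximation, and all names below are assumptions made purely for illustration. If the number of deaths D is modelled as Binomial(N, f) with known mortality rate f, a rough 95% confidence interval for the number of infected persons N can be obtained by inverting a normal approximation for D:

```python
import math

def infected_confidence_interval(deaths, mortality_rate, z=1.96):
    """Rough 95% confidence interval for the number of infected persons N,
    assuming deaths ~ Binomial(N, mortality_rate), so that approximately
    deaths = N * mortality_rate +/- z * sqrt(deaths).
    Illustrative sketch only; not the interval constructed in the paper."""
    if not 0 < mortality_rate <= 1:
        raise ValueError("mortality_rate must lie in (0, 1]")
    half_width = z * math.sqrt(deaths)
    lower = max(0.0, (deaths - half_width) / mortality_rate)
    upper = (deaths + half_width) / mortality_rate
    return lower, upper

# Example: 50 reported deaths and a 1% mortality rate give the
# point estimate 50 / 0.01 = 5000 infected persons.
lo, hi = infected_confidence_interval(50, 0.01)
```

The interval widens as the mortality rate shrinks, reflecting how uncertain the infected count is when deaths are a rare outcome of infection.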
Robust Bayes-like estimation: rho-Bayes estimation
Baraud, Yannick; Birgé, Lucien, in Annals of Statistics (2020)
We observe n independent random variables with joint distribution P and pretend that they are i.i.d. with some common density s (with respect to a known measure μ) that we wish to estimate. We consider a density model S for s that we endow with a prior distribution π (with support in S) and build a robust alternative to the classical Bayes posterior distribution which possesses similar concentration properties around s whenever the data are truly i.i.d. and their density s belongs to the model S. Furthermore, in this case, the Hellinger distance between the classical and the robust posterior distributions tends to 0, as the number of observations tends to infinity, under suitable assumptions on the model and the prior. However, unlike what happens with the classical Bayes posterior distribution, we show that the concentration properties of this new posterior distribution are still preserved when the model is misspecified or when the data are not i.i.d. but the marginal densities of their joint distribution are close enough in Hellinger distance to the model S.
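The total variation and Hellinger losses that recur in these abstracts are straightforward to compute for discrete distributions. The following sketch (illustrative only, and unrelated to the estimation procedures of the papers) computes both and can be used to check the classical comparison H^2 <= TV <= H * sqrt(2 - H^2):

```python
import math

def total_variation(p, q):
    """Total variation distance between two discrete distributions:
    TV(p, q) = (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def squared_hellinger(p, q):
    """Squared Hellinger distance:
    H^2(p, q) = (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2
              = 1 - sum_i sqrt(p_i * q_i)."""
    return 0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                     for pi, qi in zip(p, q))

p = [0.5, 0.5]
q = [0.9, 0.1]
tv = total_variation(p, q)    # (1/2) * (0.4 + 0.4) = 0.4
h2 = squared_hellinger(p, q)
```

Both quantities lie in [0, 1] under these definitions, with 0 exactly when the two distributions coincide.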
Can we trust L2-criteria and L2-losses?
Baraud, Yannick, in Journal de la Société Française de Statistique (2019), 160(3)

Rho-estimators revisited: general theory and applications
Baraud, Yannick; Birgé, Lucien, in Annals of Statistics (2018), 46(6B), 3767-3804

Une alternative robuste au maximum de vraisemblance : la ρ-estimation
Baraud, Yannick; Birgé, Lucien, in Journal de la Société Française de Statistique (2017), 158(3), 1-26

A new method for estimation and model selection: ρ-estimation
Baraud, Yannick; Birgé, Lucien; Sart, M., in Inventiones Mathematicae (2017), 207(2), 425-517

Bounding the expectation of the supremum of an empirical process over a (weak) VC-major class
Baraud, Yannick, in Electronic Journal of Statistics (2016), 10(2), 1709-1728

Rho-estimators for shape restricted density estimation
Baraud, Yannick; Birgé, Lucien, in Stochastic Processes and their Applications (2016), 126(12), 3888-3912

Estimating composite functions by model selection
Baraud, Yannick; Birgé, Lucien, in Annales de l'Institut Henri Poincaré, Probabilités et Statistiques (2014), 50(1), 285-314

Estimator selection in the Gaussian setting
Baraud, Yannick; Giraud, Christophe; Huet, Sylvie, in Annales de l'Institut Henri Poincaré, Probabilités et Statistiques (2014), 50(3), 1092-1119

Estimation of the density of a determinantal process
Baraud, Yannick, in Confluentes Mathematici (2013), 5(1), 3-21

Estimator selection with respect to Hellinger-type risks
Baraud, Yannick, in Probability Theory and Related Fields (2011), 151(1-2), 353-401

A Bernstein-type inequality for suprema of random processes with applications to model selection in non-Gaussian regression
Baraud, Yannick, in Bernoulli (2010), 16(4), 1064-1085

Estimating the intensity of a random measure by histogram type estimators
Baraud, Yannick; Birgé, Lucien, in Probability Theory and Related Fields (2009), 143(1-2), 239-284

Gaussian model selection with an unknown variance
Baraud, Yannick; Giraud, Christophe; Huet, Sylvie, in Annals of Statistics (2009), 37(2), 630-672

Testing convex hypotheses on the mean of a Gaussian vector. Application to testing qualitative hypotheses on a regression function
Baraud, Yannick; Huet, Sylvie; Laurent, Béatrice, in Annals of Statistics (2005), 33(1), 214-257

Confidence balls in Gaussian regression
Baraud, Yannick, in Annals of Statistics (2004), 32(2), 528-551