ORBi

Convergence of the Huber Regression M-Estimate in the Presence of Dense Outliers
; ; et al
in IEEE Signal Processing Letters (2014), 21(11), 1211-1214
We consider the problem of estimating a deterministic unknown vector which depends linearly on noisy measurements, additionally contaminated with (possibly unbounded) additive outliers. The measurement matrix of the model (i.e., the matrix involved in the linear transformation of the sought vector) is assumed known, and comprised of standard Gaussian i.i.d. entries. The outlier variables are assumed independent of the measurement matrix, and deterministic or random with possibly unknown distribution. Under these assumptions we provide a simple proof that the minimizer of the Huber penalty function of the residuals converges to the true parameter vector at a root-n rate, even when outliers are dense, in the sense that a constant linear fraction of the measurements is contaminated, and this fraction can be arbitrarily close to one. The constants influencing the rate of convergence are shown to depend explicitly on the outlier contamination level.

Sparse conjoint analysis through maximum likelihood estimation
; ; et al
in IEEE Transactions on Signal Processing (2013), 22
Conjoint analysis (CA) is a classical tool used in preference assessment, where the objective is to estimate the utility function of an individual, or a group of individuals, based on expressed preference data.
An example is choice-based CA for consumer profiling, i.e., unveiling consumer utility functions based solely on choices between products. A statistical model for choice-based CA is investigated in this paper. Unlike recent classification-based approaches, a sparsity-aware Gaussian maximum likelihood (ML) formulation is proposed to estimate the model parameters. Drawing from related robust parsimonious modeling approaches, the model uses sparsity constraints to account for outliers and to detect the salient features that influence decisions. Contributions include conditions for statistical identifiability, derivation of the pertinent Cramér-Rao Lower Bound (CRLB), and ML consistency conditions for the proposed sparse nonlinear model. The proposed ML approach lends itself naturally to ℓ1-type convex relaxations which are well suited for distributed implementation based on the alternating direction method of multipliers (ADMM). A particular decomposition is advocated which bypasses the apparent need for outlier communication, thus maintaining scalability. The performance of the proposed ML approach is demonstrated by comparison against the associated CRLB and the prior state of the art, using both synthetic and real data sets.

Connections between sparse estimation and robust statistical learning
; ; et al
in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on (2013)
Recent literature on robust statistical inference suggests that promising outlier rejection schemes can be based on accounting explicitly for sparse gross errors in the modeling, and then relying on compressed sensing ideas to perform the outlier detection.
In this paper, we consider two models for recovering a sparse signal from noisy measurements, possibly also contaminated with outliers. The models considered here are a linear regression model and its natural one-bit counterpart, where measurements are additionally quantized to a single bit. Our contributions can be summarized as follows: we start by providing conditions for identifiability and the Cramér-Rao Lower Bounds (CRLBs) for these two models. Then, focusing on the one-bit model, we derive conditions for consistency of the associated maximum likelihood estimator, and illustrate the performance of relevant ℓ1-based relaxation strategies by comparing against the theoretical CRLB.

Maximum likelihood based sparse and distributed conjoint analysis
; ; et al
in Statistical Signal Processing Workshop (SSP), 2012 IEEE (2012)
A new statistical model for choice-based conjoint analysis is proposed. The model uses auxiliary variables to account for outliers and to detect the salient features that influence decisions. Unlike recent classification-based approaches to choice-based conjoint analysis, a sparsity-aware maximum likelihood (ML) formulation is proposed to estimate the model parameters. The proposed approach is conceptually appealing, mathematically tractable, and well suited for distributed implementation. Its performance is tested against the prior state of the art using synthetic as well as real data from a conjoint choice experiment on coffee makers, with very promising results.
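The sparse-outlier modeling idea running through the entries above (explicit gross-error variables with an ℓ1 penalty, which reduces to Huber-penalized regression once the outlier variables are minimized out) can be sketched in a few lines of NumPy. This is an illustrative alternating-minimization sketch under my own naming, not code from any of the papers:

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding: the closed-form minimizer of
    (r - o)^2 + 2*t*|o| over o."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def robust_ls(A, y, lam=1.0, iters=100):
    """Minimize ||y - A x - o||_2^2 + 2*lam*||o||_1 over (x, o) by
    exact block coordinate descent. Minimizing out o yields the Huber
    penalty on the residuals, so x converges to a Huber M-estimate."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    o = np.zeros_like(y)
    for _ in range(iters):
        o = soft_threshold(y - A @ x, lam)           # update outlier variables
        x = np.linalg.lstsq(A, y - o, rcond=None)[0]  # refit on cleaned data
    return x, o

# Demo: dense gross outliers in 20% of the measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 5))
x_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = A @ x_true + 0.01 * rng.standard_normal(500)
y[:100] += 20.0  # contaminated measurements
x_hat, o_hat = robust_ls(A, y, lam=1.0)
# x_hat stays close to x_true despite the contamination.
```

The soft-thresholding step is exact here, so each outer iteration is a monotone descent step on a convex objective; a plain least-squares fit on the same data would be badly biased by the +20 offsets.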
[less ▲] Detailed reference viewed: 117 (0 UL)Semidefinite Relaxations of Robust Binary Least Squares under Ellipsoidal Uncertainty Sets ; ; Ottersten, Björn in IEEE Transactions on Signal Processing (2011), 59(11), 5169-5180 The problem of finding the least squares solution s to a system of equations Hs = y is considered, when s is a vector of binary variables and the coefficient matrix H is unknown but of bounded uncertainty ... [more ▼] The problem of finding the least squares solution s to a system of equations Hs = y is considered, when s is a vector of binary variables and the coefficient matrix H is unknown but of bounded uncertainty. Similar to previous approaches to robust binary least squares, we explore the potential of a min-max design with the aim to provide solutions that are less sensitive to the uncertainty in H. We concentrate on the important case of ellipsoidal uncertainty, i.e., the matrix H is assumed to be a deterministic unknown quantity which lies in a given uncertainty ellipsoid. The resulting problem is NP-hard, yet amenable to convex approximation techniques: Starting from a convenient reformulation of the original problem, we propose an approximation algorithm based on semidefinite relaxation that explicitly accounts for the ellipsoidal uncertainty in the coefficient matrix. Next, we show that it is possible to construct a tighter relaxation by suitably changing the description of the feasible region of the problem, and formulate an approximation algorithm that performs better in practice. Interestingly, both relaxations are derived as Lagrange bidual problems corresponding to the two equivalent problem reformulations. The strength of the proposed tightened relaxation is demonstrated by pertinent simulations. 
[less ▲] Detailed reference viewed: 131 (2 UL)Robust binary least squares: Relaxations and algorithms ; ; Ottersten, Björn in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on (2011) Finding the least squares (LS) solution s to a system of linear equations Hs = y where H, y are given and s is a vector of binary variables, is a well known NP-hard problem. In this paper, we consider ... [more ▼] Finding the least squares (LS) solution s to a system of linear equations Hs = y where H, y are given and s is a vector of binary variables, is a well known NP-hard problem. In this paper, we consider binary LS problems under the assumption that the coefficient matrix H is also unknown, and lies in a given uncertainty ellipsoid. We show that the corresponding worst-case robust optimization problem, although NP-hard, is still amenable to semidefinite relaxation (SDR)-based approximations. However, the relaxation step is not obvious, and requires a certain problem reformulation to be efficient. The proposed relaxation is motivated using Lagrangian duality and simulations suggest that it performs well, offering a robust alternative over the traditional SDR approaches for binary LS problems. [less ▲] Detailed reference viewed: 137 (0 UL) |