Statistical Optimization in High Dimensions

Article ID: iaor2017387
Volume: 64
Issue: 4
Start Page Number: 958
End Page Number: 979
Publication Date: Aug 2016
Journal: Operations Research
Authors:
Keywords: programming: mathematical, statistics: decision, statistics: sampling, stochastic processes
Abstract:

We consider optimization problems whose parameters are known only approximately, based on noisy samples. In large-scale applications, the number of samples one can collect is typically of the same order as (or even less than) the dimensionality of the problem. This so-called high-dimensional statistical regime has been the object of intense recent research in machine learning and statistics, primarily due to phenomena inherent to this regime, such as the fact that the noise one sees here often dwarfs the magnitude of the signal itself. While relevant in numerous important operations research and engineering optimization applications, this setup falls far outside the traditional scope of robust and stochastic optimization. We propose three algorithms to address this setting, combining ideas from statistics, machine learning, and robust optimization. Our algorithms are motivated by three natural optimization objectives: minimizing the number of grossly violated constraints; maximizing the number of exactly satisfied constraints; and, finally, developing algorithms whose running time scales with the intrinsic dimension of a problem, as opposed to its observed dimension, a mismatch that, as we discuss in detail, can be dire in settings where constraints are meant to describe preferences or behaviors. The key ingredients of our algorithms are dimensionality reduction techniques from machine learning, robust optimization, and concentration of measure tools from statistics.
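The regime the abstract describes can be made concrete with a small sketch. It is not the paper's algorithm; it is a hedged illustration of two of its themes: with roughly as many samples as dimensions, the naive empirical estimate accumulates error across the full observed dimension, whereas a standard concentration-of-measure threshold (here, hard thresholding at the universal level sigma * sqrt(2 log d / n)) exploits a small intrinsic dimension. All constants and dimensions below are arbitrary choices for the demonstration.

```python
import math
import random

random.seed(0)

d, n, k = 500, 50, 5   # observed dimension, sample count, intrinsic (sparse) dimension
sigma = 1.0            # per-coordinate noise level

# True parameter: only k of the d coordinates are nonzero,
# so its intrinsic dimension is k << d.
theta = [5.0] * k + [0.0] * (d - k)

# Collect n noisy samples of theta (here n is of the same order as d/10,
# i.e., far fewer samples than observed dimensions).
samples = [[theta[j] + random.gauss(0.0, sigma) for j in range(d)]
           for _ in range(n)]

# Naive estimate: coordinate-wise empirical mean.
# Its squared error grows with the observed dimension d.
mean = [sum(s[j] for s in samples) / n for j in range(d)]

# Hard thresholding at the universal level sigma * sqrt(2 log d / n),
# a standard concentration-of-measure choice (illustrative only,
# not one of the paper's three algorithms).
tau = sigma * math.sqrt(2.0 * math.log(d) / n)
thresh = [m if abs(m) > tau else 0.0 for m in mean]

def err(est):
    """Euclidean distance from an estimate to the true parameter."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, theta)))

print(f"empirical-mean error: {err(mean):.2f}")
print(f"thresholded error:    {err(thresh):.2f}")
```

Under these settings the thresholded estimate's error scales with the intrinsic dimension k rather than the observed dimension d, which is the kind of gap the abstract's third objective targets.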
