Article ID: iaor2017387
Volume: 64
Issue: 4
Start Page Number: 958
End Page Number: 979
Publication Date: Aug 2016
Journal: Operations Research
Authors: Mannor Shie, Xu Huan, Caramanis Constantine
Keywords: programming: mathematical, statistics: decision, statistics: sampling, stochastic processes
We consider optimization problems whose parameters are known only approximately, based on noisy samples. In large‐scale applications, the number of samples one can collect is typically of the same order as (or even smaller than) the dimensionality of the problem. This so‐called high‐dimensional statistical regime has been the object of intense recent research in machine learning and statistics, primarily because of phenomena inherent to this regime, such as the fact that the noise often dwarfs the magnitude of the signal itself. Although relevant to numerous important operations research and engineering optimization applications, this setup falls far outside the traditional scope of robust and stochastic optimization. We propose three algorithms to address this setting, combining ideas from statistics, machine learning, and robust optimization. Our algorithms are motivated by three natural optimization objectives: minimizing the number of grossly violated constraints; maximizing the number of exactly satisfied constraints; and, finally, developing algorithms whose running time scales with the
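The first two objectives can be made concrete with a small sketch. The setup below is purely illustrative and not the authors' method: all names (`A_true`, `A_hat`, `grossly_violated`, `exactly_satisfied`, the tolerance `tol`) are hypothetical, and the linear constraint system `A x <= b` with a sample-mean estimate of `A` stands in for the general noisy-parameter setting described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: true linear constraints A x <= b, but A is only
# observed through noisy samples, with the sample count comparable to
# the dimension (the high-dimensional statistical regime).
n_constraints, dim, n_samples = 50, 40, 30
A_true = rng.normal(size=(n_constraints, dim))
b = np.ones(n_constraints)

# Each sample is the true matrix plus noise; a naive estimate is the mean.
samples = A_true + rng.normal(scale=1.0, size=(n_samples, n_constraints, dim))
A_hat = samples.mean(axis=0)

x = np.full(dim, 1.0 / dim)  # some candidate solution

def grossly_violated(A, x, b, tol=0.5):
    """Count constraints violated by more than tol (first objective:
    minimize this count over x)."""
    return int(np.sum(A @ x - b > tol))

def exactly_satisfied(A, x, b):
    """Count constraints satisfied with no violation (second objective:
    maximize this count over x)."""
    return int(np.sum(A @ x <= b))

# The two counts generally differ between the noisy estimate and the truth.
print(grossly_violated(A_hat, x, b), grossly_violated(A_true, x, b))
print(exactly_satisfied(A_hat, x, b), exactly_satisfied(A_true, x, b))
```

Evaluating both counts on `A_hat` versus `A_true` illustrates the core difficulty: a solution chosen against the noisy estimate can look feasible while violating many true constraints.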