Convex optimization problems involving finite autocorrelation sequences

Article ID: iaor20032506
Country: Germany
Volume: 93
Issue: 3
Start Page Number: 331
End Page Number: 359
Publication Date: Jan 2002
Journal: Mathematical Programming
Authors: Alkire B., Vandenberghe L.
Keywords: time series & forecasting methods
Abstract:

We discuss convex optimization problems in which some of the variables are constrained to be finite autocorrelation sequences. Problems of this form arise in signal processing and communications, and we describe applications in filter design and system identification. Autocorrelation constraints in optimization problems are often approximated by sampling the corresponding power spectral density, which results in a set of linear inequalities. They can also be cast as linear matrix inequalities via the Kalman–Yakubovich–Popov lemma. The linear matrix inequality formulation is exact, and results in convex optimization problems that can be solved using interior-point methods for semidefinite programming. However, it has an important drawback: to represent an autocorrelation sequence of length n, it requires the introduction of a large number (n(n + 1)/2) of auxiliary variables. This results in a high computational cost when general-purpose semidefinite programming solvers are used. We present a more efficient implementation based on duality and on interior-point methods for convex problems with generalized linear inequalities.
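
The abstract contrasts two ways of imposing the autocorrelation constraint. The sketch below is not the authors' implementation; it assumes the CVXPY modeling package and an illustrative projection objective, and it uses the trace parametrization of the exact constraint (equivalent to the KYP-based LMI): r is a finite autocorrelation sequence if and only if there is a positive semidefinite n x n matrix X whose k-th diagonal sums to r_k. The sampled approximation instead enforces the power spectral density R(w) = r_0 + 2 sum_{k=1}^{n-1} r_k cos(kw) >= 0 on a finite frequency grid, giving linear inequalities in r.

```python
# A minimal sketch, not taken from the paper: CVXPY, the grid size, and the
# objective are all assumptions made here for illustration only.
import numpy as np
import cvxpy as cp

n = 10    # length of the autocorrelation sequence r = (r_0, ..., r_{n-1})
m = 200   # number of frequency samples in the discretized constraint

r = cp.Variable(n)

# (1) Sampling approximation: enforce
#         R(w) = r_0 + 2 * sum_{k=1}^{n-1} r_k * cos(k*w) >= 0
#     only on a finite grid of frequencies, giving m linear inequalities in r.
w = np.linspace(0.0, np.pi, m)
A = np.hstack([np.ones((m, 1)), 2.0 * np.cos(np.outer(w, np.arange(1, n)))])
sampled = [A @ r >= 0]

# (2) Exact formulation (trace parametrization, equivalent to the KYP-based
#     LMI): there exists X >= 0 whose k-th diagonal sums to r_k.  X carries
#     the n(n+1)/2 auxiliary variables mentioned in the abstract.
X = cp.Variable((n, n), symmetric=True)
exact = [X >> 0] + [sum(X[i, i + k] for i in range(n - k)) == r[k]
                    for k in range(n)]

# Illustrative objective (a placeholder, not from the paper): project a
# target sequence onto the set of finite autocorrelation sequences.
r_target = np.cos(np.arange(n))
objective = cp.Minimize(cp.sum_squares(r - r_target))

cp.Problem(objective, exact).solve()
print("exact (LMI) projection:", np.round(r.value, 4))

cp.Problem(objective, sampled).solve()
print("sampled approximation: ", np.round(r.value, 4))
```

The sampled version only approximates the constraint (the spectral density may dip below zero between grid points), while the exact version introduces the O(n^2) auxiliary variables whose cost motivates the specialized interior-point implementation described in the paper.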
