Part I of Slides - Information, Networks, and Signal Processing

Beyond Randomness: Sparse Signal Processing in Practice, Part I
Waheed U. Bajwa, Rutgers University; Marco F. Duarte, University of Massachusetts
ICASSP 2015, Brisbane, Australia
http://inspire.rutgers.edu/sparsity-tutorial

Outline of Part I of the Tutorial
Sparse signal processing
•  Three problems: sparse signal recovery, model selection, & sparse linear regression
•  A sampling of state-of-the-art results for sparse signal processing
•  Computable metrics for sparse signal processing
o  Worst-case guarantees for recovery, model selection, and regression
o  Average-case guarantees for recovery, model selection, and regression
Some applications of sparse signal processing - Part I
•  Radar processing
•  Imaging

State-of-the-Art in Sparse Signal Processing: A REVIEW OF THREE PROBLEMS
Thanks to Mark Davenport (Georgia Tech) for most of these slides

Sparse Signal Recovery (a.k.a. Compressive Sensing)
Replace samples with general linear measurements y = Ax: the vector y collects the m sampled measurements, A is the m × n measurement matrix, and x is the k-sparse signal.
[Donoho 2006] [Candès, Romberg, and Tao 2006]

Sparsity
The signal x has only k nonzero entries, e.g., the large wavelet coefficients of an image with n pixels, or the large Fourier coefficients of a signal with n samples.
Core Theoretical Challenges
•  How should we design the matrix A so that the number of measurements m is as small as possible?
•  How can we recover x from the measurements y?

Restricted Isometry Property (RIP)
A satisfies the RIP of order k with constant δ_k ∈ (0, 1) if
(1 − δ_k) ‖x‖₂² ≤ ‖Ax‖₂² ≤ (1 + δ_k) ‖x‖₂²  for all k-sparse x.
[Candès and Tao 2005] [Candès and Tao 2006] [Candès 2008]
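Not part of the original slides: a minimal numerical sketch of the RIP idea above, drawing a Gaussian matrix (one of the random constructions discussed shortly) and checking how far ‖Ax‖₂²/‖x‖₂² strays from 1 over randomly drawn k-sparse vectors. All sizes are arbitrary choices.

```python
# Empirically probe near-isometry of a random Gaussian matrix on sparse vectors.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                             # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)     # sub-Gaussian (Gaussian) entries

ratios = []
for _ in range(2000):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

# Empirical analogue of an RIP constant over the sampled sparse vectors
print("max deviation from 1:", max(abs(min(ratios) - 1), abs(max(ratios) - 1)))
```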
RIP and Stability
Suppose we observe y = Ax + e. If we want to guarantee that the recovery error scales with the noise level (C-stability), then A must satisfy a lower bound of RIP type. See also the CS1 property from [Donoho 2006]. [Davenport et al. 2011]

Johnson-Lindenstrauss Lemma
Stable projection of a discrete set of p points: pick A at random using a sub-Gaussian distribution. For any fixed x, ‖Ax‖₂² concentrates around ‖x‖₂² with (exponentially) high probability. We preserve the lengths of all O(p²) difference vectors simultaneously if m = O(log p²) = O(log p).

JL Lemma Meets RIP
[Baraniuk et al. 2008]

RIP Matrix: Option 1
Choose a random matrix:
•  fill out the entries of A with i.i.d. samples from a sub-Gaussian distribution
•  project onto a "random subspace"
•  similar to the Johnson-Lindenstrauss lemma
[Baraniuk et al. 2008]

RIP Matrix: Option 2
Random row submatrix of a unitary matrix (e.g., an orthonormal basis); the number of rows needed depends on the coherence of the unitary matrix.
[Candès and Tao 2006] [Rudelson and Vershynin 2008, 2010] [Bajwa, Sayeed, and Nowak 2009]

RIP Matrix: Option 3 - "Fast Johnson-Lindenstrauss Transform"
By first multiplying by random signs, a random Fourier submatrix can be used for efficient Johnson-Lindenstrauss embeddings. If you multiply the columns of any RIP matrix by random signs, you get a Johnson-Lindenstrauss embedding!
[Ailon and Chazelle 2007] [Krahmer and Ward 2011]

Hallmarks of Random Measurements
•  Stable: with high probability, A will preserve information and be robust to noise.
•  Universal: (Options 1 and 3) will work with any fixed orthonormal basis (with high probability).
•  Democratic: each measurement has "equal weight".

Sub-Gaussian Distributions
As a first example of a matrix that satisfies the RIP, we consider random constructions. Sub-Gaussian random variables include:
•  Gaussian
•  Bernoulli/Rademacher (±1)
•  any bounded distribution
For any δ, if the entries of A are sub-Gaussian, then there exists a constant such that A satisfies the RIP with high probability.

Null Space Property (NSP)
Intuitively, we wish to find matrices A whose null space contains no sparse signals (since those signals would be lost). This extends to compressible signals by requiring, for all index subsets Λ with k entries and for all vectors h in the null space of A, that the energy of h on Λ be controlled by the energy of h off Λ. Intuitively, the NSP of order k says that null space vectors should not have their energy too concentrated in a few entries. One can show that if A satisfies the RIP of order 3k, then it also satisfies the NSP of order 2k. [Cohen, Dahmen, and DeVore 2009]

Mutual Coherence
Select measurements as a random coefficient subset of x in the transform A. The mutual coherence measures the dissimilarity between two bases; with high probability, a number of measurements proportional to the squared mutual coherence times k log n suffices. [Candès and Romberg 2007]
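As an illustration of mutual coherence (not from the slides), the sketch below evaluates the classic spike/Fourier pair using the common normalization μ(Φ, Ψ) = √n · max |⟨φ_i, ψ_j⟩|, which ranges from 1 (maximally incoherent) to √n.

```python
# Mutual coherence of the spike (identity) basis and the orthonormal DFT basis.
import numpy as np

n = 64
Phi = np.eye(n)                                  # spike basis
Psi = np.fft.fft(np.eye(n)) / np.sqrt(n)         # orthonormal DFT basis
mu = np.sqrt(n) * np.max(np.abs(Phi.conj().T @ Psi))
print(mu)                                        # approximately 1.0
```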
Sparse Signal Recovery
Recover both the support and the values of x from y.
•  Optimization / ℓ1-norm minimization
•  Greedy algorithms:
o  matching pursuit
o  iterative hard thresholding
o  orthogonal matching pursuit (OMP)
o  modern greedy algorithms (CoSaMP, SP, StOMP, ROMP, AMP, …)

Sparse Recovery: Noiseless Case
Given y = Ax, find x.
•  ℓ0-"norm" minimization: minimize ‖x‖₀ subject to y = Ax (nonconvex, NP-hard)
•  ℓ1-norm minimization (aka Basis Pursuit): minimize ‖x‖₁ subject to y = Ax (convex, a linear program)
If A satisfies the RIP, the CS1-CS3 properties, the null space property, …, then the ℓ0 and ℓ1 minima are equivalent!
[Donoho 2006] [Candès, Romberg, and Tao 2006] [Cohen, Dahmen, and DeVore 2009]
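A minimal sketch (not from the slides) of basis pursuit in the noiseless case, recast as the standard linear program min Σ(u + v) subject to A(u − v) = y with u, v ≥ 0, and solved with SciPy's generic LP solver; dedicated solvers such as SPGL1 or CVXPY would be the more usual choice in practice. Sizes and the random Gaussian A are arbitrary demo choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 128, 48, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                 # equality constraint: A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```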
Why ℓ1-Minimization Works
(Figure: geometry of the ℓ1 ball, which is "pointy" along the coordinate axes and therefore tends to touch the solution set of y = Ax at sparse points.)

Sparse Recovery: Noisy Case
Suppose we observe y = Ax + e, where the noise e is bounded. Similar approaches can handle Gaussian noise added to either the signal or the measurements. [Candès 2008]

Sparse Recovery: Non-sparse Signals
In practice, x may not be exactly k-sparse. [Candès 2008]

Greedy Algorithms: Key Idea
If we can determine the support of x, then the m × n problem becomes over-determined (only k unknowns remain). In the absence of noise, the signal can then be recovered exactly by least squares on the support.

Matching Pursuit
Select one index at a time using a simple proxy for x (the correlations of the columns of A with the residual). If A satisfies the RIP of the appropriate order, the proxy roughly preserves the locations of the largest entries of x. Initialize the estimate and the residual, obtain an initial estimate of the selected coefficient, then update the proxy and iterate. [Mallat and Zhang 1993]

Iterative Hard Thresholding (IHT)
Iterate x ← H_k(x + μ Aᵀ(y − Ax)), where μ is a step size, Aᵀ(y − Ax) is the proxy vector, and H_k is hard thresholding to the k largest entries. The RIP guarantees convergence and accurate/stable recovery. [Blumensath and Davies 2009]
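A minimal sketch of the IHT iteration just described (not from the slides), assuming the spectral norm of A is at most roughly 1 so that a unit step size is safe; all parameter choices are illustrative.

```python
import numpy as np

def iht(A, y, k, step=1.0, n_iter=100):
    """Iterate x <- H_k(x + step * A^T (y - A x)), keeping the k largest entries."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)          # gradient step on ||y - Ax||^2 / 2
        keep = np.argsort(np.abs(x))[-k:]         # indices of the k largest magnitudes
        mask = np.zeros_like(x, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0                            # hard thresholding
    return x
```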
Modern Matching Pursuit/Greedy Algorithms
Original algorithms from the 90's have been retooled for CS by:
•  selecting many indices in each iteration,
•  pruning the set of indices (i.e., correcting mistakes),
•  solving a least-squares estimate of the signal given the chosen indices, possibly iteratively,
•  deriving instance-optimal theoretical guarantees based on the matrix RIP.
Non-exhaustive list of examples with performance guarantees:
•  Orthogonal Matching Pursuit [Tropp and Gilbert 2007] [Davenport and Wakin 2010]
•  Regularized OMP [Needell and Vershynin 2009, 2010]
•  Iterative Hard Thresholding [Blumensath and Davies 2009]
•  Compressive Sampling Matching Pursuit (CoSaMP) [Needell and Tropp 2009]
•  Subspace Pursuit [Dai and Milenkovic 2009]
•  Stagewise OMP [Donoho et al. 2012]
•  Approximate Message Passing [Donoho, Maleki, and Montanari 2009]
•  Generalized Approximate Message Passing [Rangan 2012]
A sketch of OMP, the prototypical example, follows below.
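The promised OMP sketch, a minimal NumPy rendering of the algorithm as usually stated (greedy column selection followed by a least-squares re-fit on the selected support); it is illustrative only and omits the stopping rules used in practice.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: pick k columns greedily, re-fit by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        proxy = A.T @ residual                              # correlations with residual
        support.append(int(np.argmax(np.abs(proxy))))       # best-matching column
        A_S = A[:, support]
        coeffs, *_ = np.linalg.lstsq(A_S, y, rcond=None)    # least squares on support
        residual = y - A_S @ coeffs
    x[support] = coeffs
    return x
```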
Model Selection
Concept: Given an observation y = Ax + e, find the support of the sparse vector x.
•  The support corresponds to the indices of the nonzero entries of x or, equivalently, the columns of A involved in y.
•  Highly relevant in statistical regression: if y is the vector of observations and the columns of A are variables, identify the variables needed to explain the observation.
•  When measurements are noiseless, perfect recovery is the same as perfect model selection.
•  Therefore, the theory of model selection mostly considers the noisy measurement case.
[Hastie, Tibshirani, and Friedman 2009]

Model Selection Algorithms and Guarantees
Lasso: Unconstrained optimization problem that weighs goodness of fit against prediction complexity:
minimize (1/2) ‖y − Ax‖₂² + λ ‖x‖₁.
Incoherence Condition: For the support S, there exists a scalar for which the stated inequality holds.
Eigenvalue Condition: For the support S, there exists a scalar for which the stated inequality holds.
[Wainwright 2009]

Consistency Guarantee: Let y = Ax + e with Gaussian noise. If A has columns with bounded norm, the previous conditions hold, the Lasso parameter λ is set appropriately, and the smallest nonzero entry of x is large enough, then the solution to the Lasso recovers the correct support with high probability. [Wainwright 2009]
Inachievability Result: If A satisfies the eigenvalue condition but does not satisfy the incoherence condition, then the probability of recovering the correct support is less than 1/2 for any value of λ. [Wainwright 2009]
There are known connections between the RIP, the irrepresentable condition, the eigenvalue condition, and several others that give similar results. [Zhao and Yu 2006] [van de Geer and Bühlmann 2009]

Thresholding (a.k.a. Sure Independence Screening): Greedy algorithm that selects the variables with the highest correlation to the observations:
•  define (standardized) correlations between the columns of A and y,
•  denote the indices of the largest correlations as the selected variables (see the sketch below).
Asymptotic Consistency Guarantee for Random Matrices: Assume that y = Ax + e with Gaussian noise and the entries of A drawn i.i.d. from a Gaussian distribution. If n is sufficiently large and the stated scaling condition holds, then there exists a threshold value for which thresholding recovers the correct support with high probability. [Fletcher, Rangan and Goyal 2009]
Similar results and conditions are available from [Fan and Lv 2008] [Genovese et al. 2012]
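A minimal sketch of the screening step just described (not from the slides): keep the indices of the k largest standardized correlations between the columns of A and y. The choice of k (or of a threshold) is left to the user.

```python
import numpy as np

def screen_support(A, y, k):
    """Sure-independence-screening-style support estimate from correlations."""
    corr = np.abs(A.T @ y) / np.linalg.norm(A, axis=0)   # standardized correlations
    return np.sort(np.argsort(corr)[-k:])                # indices of the k largest
```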
Linear Regression
Concept: Given an observation y = Ax + e, where x is a sparse/approximately sparse vector, estimate the signal.
[Hastie, Tibshirani, and Friedman 2009]
•  Can be thought of as the "denoising" counterpart to model selection.
•  The theory is similar to that for model selection, but additionally requires stability of the matrix-sparse-vector product.
•  When the signal is sparse and the measurements are noiseless, perfect recovery is the same as perfect model selection and perfect linear regression.
•  When the signal is not sparse, one must also account for the sparse approximation error.
•  Therefore, the theory of regression mostly considers the effects of sparse approximation and noise.
[Hastie, Tibshirani, and Friedman 2009]

Regression Results (Prediction Error)
Restricted Eigenvalue Condition (REC): the RIP extended from sparse signals to compressible signals, for some restricted set of vectors. [Bickel, Ritov, and Tsybakov 2009]
Stability Guarantee: Assume that y = Ax + e with Gaussian noise. Then the regression distortion using the output of the Lasso is nearly optimal.
It can be shown that many of the matrix conditions given for sparse signal recovery, model selection, and regression are roughly equivalent. [van de Geer and Bühlmann 2009]
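An illustrative sketch (not from the slides) of Lasso-based sparse regression using scikit-learn. Note that sklearn's Lasso minimizes (1/(2·n_samples))·‖y − Ax‖₂² + α‖x‖₁, so its α corresponds to λ only up to the 1/n_samples scaling, and the value used here is an arbitrary choice.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
m, n, k = 100, 300, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)      # noisy observations

lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(A, y)
print("prediction error:", np.linalg.norm(A @ lasso.coef_ - A @ x_true))
```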
Sparse Signal Processing: COMPUTABLE METRICS FOR WORST-CASE GUARANTEES

Our Focus in This Tutorial
•  What is a worst-case guarantee? A guarantee that holds for all signals in a given class, e.g., the earlier results that applied to all signals.
•  Example worst-case guarantees:
o  all sparse signals can be recovered exactly,
o  all compressible signals can be recovered with instance optimality,
o  all sparse parameter vectors are regressed near-optimally,
o  all sufficiently large entries of the parameter vector are selected.
•  This is the strongest kind of guarantee, but it also requires the strictest conditions on the sensing matrix.

Our Focus in This Tutorial
•  What is a computable metric? We consider metrics that do not require the notion of randomness in the sensing matrix and that can be computed in polynomial (as opposed to combinatorial) time.
•  Metrics like the RIP, spark, etc. cannot be computed easily for specific matrices; those results assume random constructions.
•  What are deterministic matrices? A matrix construction that does not rely on randomness to generate its entries.
•  Practical setting: pick one matrix, hard-code it in your sensor, and use it permanently.

Matrix/Dictionary Coherence
A measure of similarity between the columns of the matrix.
Intuition: When the columns are too close to one another, it will be difficult to distinguish between them.
Definition: For an m × n matrix A, the coherence μ(A) is the largest absolute inner product between all pairs of normalized columns:
μ(A) = max over i ≠ j of |⟨a_i, a_j⟩| / (‖a_i‖₂ ‖a_j‖₂).
[Santosa and Symes 1986] [Mallat and Zhang 1993] [Gilbert, Muthukrishnan, and Strauss 2003] [Rosenfeld 1996] [Donoho and Huo 2001] [Tropp 2004] [Tropp 2006] [Strohmer and Heath 2003]
Welch Bound: For an m × n matrix A, the coherence obeys μ(A) ≥ √((n − m) / (m(n − 1))). [Welch 1974]
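Unlike the RIP, the coherence is directly computable; a small sketch (not from the slides) computes μ(A) for a random matrix and compares it with the Welch lower bound.

```python
import numpy as np

def coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    A = A / np.linalg.norm(A, axis=0)          # normalize columns
    G = np.abs(A.conj().T @ A)                 # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(3)
m, n = 64, 256
A = rng.standard_normal((m, n))
print("coherence:", coherence(A))
print("Welch bound:", np.sqrt((n - m) / (m * (n - 1))))
```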
Connecting Coherence and Other Matrix Metrics
Geršgorin Circle Theorem: Let A be a complex matrix with entries a_{i,j}, and let R_i be the sum of the magnitudes of the off-diagonal entries of the ith row of A. Then every eigenvalue of A lies within distance R_i of some diagonal entry a_{i,i}. [Geršgorin 1931]
Connection to RIP: Assume that A has unit-norm columns and consider the Gram matrix of a k-column submatrix A_S. Its diagonal entries equal 1 and its off-diagonal entries are bounded in magnitude by the coherence μ(A), so each row sum R_i is at most (k − 1)μ(A); the theorem then confines the eigenvalues of the Gram matrix to [1 − (k − 1)μ(A), 1 + (k − 1)μ(A)], which yields an RIP-style bound.
[Donoho and Elad 2003] [Davenport et al. 2011]

Coherence-Based Guarantees for Sparse Signal Recovery
Performance of Basis Pursuit for Sparse Signals
Theorem: If there exists a sparse vector satisfying y = Ax whose sparsity is small enough relative to 1/μ(A), then it is necessarily the unique solution to the basis pursuit algorithm. [Donoho and Elad 2003]
The theorem assumes the signal is exactly sparse and the measurements are noiseless.

Performance of Basis Pursuit with Inequality Constraints
Theorem: If a sparse vector satisfying a coherence-based sparsity condition is observed through noisy measurements, then the solution to the constrained convex optimization (ℓ1 minimization with an inequality data-fit constraint) obeys an error guarantee proportional to the noise level. [Donoho, Elad, Temlyakov 2006] [Fuchs 2006] [Tropp 2006]

Relationship Between Basis Pursuit with Inequality Constraints and Lasso
The Lagrangian relaxation of the constrained optimization program corresponds to the Lasso for an appropriate value of the regularization parameter. [Donoho, Elad, Temlyakov 2006]

Performance of Orthogonal Matching Pursuit
Theorem: Assume that we observe y = Ax and that the sparsity satisfies a coherence-based condition. Then the output of Orthogonal Matching Pursuit after k iterations obeys an error bound in terms of the optimal K-sparse approximation to x. [Tropp 2004]
Corollary: If x is K-sparse, then it will be exactly recovered by OMP.
Alternative Guarantee: OMP can recover all K-sparse signals under a related coherence condition. [Donoho, Elad, Temlyakov 2006]

Coherence-Based Guarantees for Model Selection
Performance of Orthogonal Matching Pursuit
Theorem: Assume that we observe y = Ax + e with the noise bounded relative to the smallest nonzero entry of x. Then the output of Orthogonal Matching Pursuit with an appropriate convergence parameter has the same support as x; additionally, its error is bounded in terms of the noise level. [Donoho, Elad, Temlyakov 2006]

Performance of Basis Pursuit with Inequality Constraints
Theorem: Let a sparse vector satisfy y = Ax + e with bounded noise and a coherence-based sparsity condition. Then, for an appropriately chosen constraint level, the solution to the convex optimization has its support contained in that of x and obeys an error guarantee. [Donoho, Elad, Temlyakov 2006]

Low-Coherence Sensing Matrix Constructions
Alltop Sequence: Entries of the form exp(2πiℓ³/m)/√m, with m a prime number; a sequence with low autocorrelation, used in radar applications.
Gabor Frames: built from a Fourier (modulation) operator and a delay operator; the frame contains all modulations and shifts of a seed sequence f. The Gabor frame generated by the Alltop sequence has coherence 1/√m. [Strohmer and Heath 2003] [Pfander, Rauhut, and Tanner 2008]
Equiangular Tight Frames: An m × n frame with unit-norm columns such that, for some constant c, all pairwise inner products satisfy |⟨a_i, a_j⟩| = c for i ≠ j, and the frame is tight (it preserves the energy of every vector f up to a constant). [Strohmer and Heath 2003]
Design of equiangular tight frames is a fertile research area in mathematics.
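A sketch (not from the slides) that builds the Gabor frame generated by the Alltop sequence and computes its coherence, which should come out near 1/√m for prime m; the length m = 31 is an arbitrary choice.

```python
import numpy as np

m = 31                                                 # a prime length
l = np.arange(m)
g = np.exp(2j * np.pi * l**3 / m) / np.sqrt(m)         # Alltop seed sequence

# Columns indexed by (shift tau, modulation nu): circular shift of g times a complex exponential
cols = []
for tau in range(m):
    shifted = np.roll(g, tau)
    for nu in range(m):
        cols.append(shifted * np.exp(2j * np.pi * nu * l / m))
A = np.stack(cols, axis=1)                             # m x m^2 Gabor frame

G = np.abs(A.conj().T @ A)
np.fill_diagonal(G, 0.0)
print("coherence:", G.max(), " 1/sqrt(m):", 1 / np.sqrt(m))
```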
Sparse Signal Processing: BREAKING THE SQUARE-ROOT BOTTLENECK USING AVERAGE-CASE ANALYSIS

What are Average-Case Guarantees?
Worst-case results can only guarantee recovery for sparsity levels on the order of 1/μ(A) (the "square-root bottleneck"); "better" guarantees rely on a probabilistic model for the sparse signals.
Probability distribution for k-sparse signals:
•  P1: the support of the sparse vector is selected uniformly at random among all possible choices.
•  P2: the nonzero entries of the vector have positive and negative sign with equal probability (i.e., their median value is zero).
Concept:
•  For signals with support S, performance depends on the submatrix/subdictionary A_S.
•  Like the RIP, aim for the Gram matrix of A_S to be close to the identity.
•  Show that most subdictionaries are well conditioned.

Conditioning of Most (Not All) Subdictionaries
Worst-case bound (for all subdictionaries) via coherence.
Average-case bound: Assume that the support S is chosen uniformly at random (P1) from all possible k-column subsets and that the coherence and spectral norm of A are small enough. Then the probability that the subdictionary A_S is "ill conditioned" is small. Stronger results are coming up… [Tropp 2008]
(Only a small portion of the probability space Ω of supports is "bad"; the rest is "good".)
Interpretation: As long as the coherence is small and the spectral norm is small enough, most subdictionaries will be well conditioned. [Tropp 2008]
Similar conditions can be obtained for dictionaries generated from deterministic codes. [Calderbank, Howard, and Jafarpour 2010]

Average-Case Guarantees for Signal Recovery
Basis Pursuit: Assume that the signal is drawn from the probability distributions P1 and P2. If the coherence and spectral norm of A satisfy suitable bounds, then the solution to the basis pursuit algorithm applied to y = Ax exactly recovers x with high probability. [Bajwa, Duarte, and Calderbank 2014]
Other results on conditioning:
•  signal has two additive components in separate dictionaries [Kuppinger, Durisi, and Bölcskei 2012]
•  dictionary is composed of incoherent orthonormal bases [Gurevich and Hadani 2008]

Thresholding: A very simple (two-step) algorithm:
•  pick the estimated support to be the indices of the entries of Aᵀy with the largest absolute values (either the k largest or those greater than a threshold),
•  return the sparse vector estimate given by least squares restricted to that support.
The thresholding algorithm recovers a k-sparse signal drawn from the probability distribution P1 with high probability. [Schnass and Vandergheynst 2007]

Average Coherence: an averaged measure of similarity among the columns. Thresholding: Assume that the signal is drawn from the probability distribution P1. If for some constants C1, C2, C3 the coherence, average coherence, and spectral norm of A satisfy suitable bounds, then the solution to thresholding applied to y = Ax with an appropriate parameter exactly recovers x with high probability. [Bajwa, Calderbank, and Jafarpour 2010]

Average-Case Guarantees for Stable Signal Recovery
Thresholding: Assume that the signal is drawn from the probability distribution P1. If the coherence, average coherence, and spectral norm of A satisfy suitable bounds, then thresholding applied to noisy measurements y = Ax + e, with an appropriately chosen threshold, returns a signal estimate with a stable recovery guarantee with high probability. [Bajwa, Calderbank, and Mixon 2012]

Average-Case Guarantees for Model Selection
Lasso: Assume that the signal with support S is drawn from the probability distribution for random signals. If the coherence and spectral norm of A satisfy suitable bounds, then the solution to the Lasso with an appropriate parameter applied to y = Ax + e recovers the correct support with high probability. [Candès and Plan 2009]
The same model can be employed to estimate the optimal value of the Lasso parameter. [Chrétien and Darses 2014]
Thresholding: Assume that the signal is drawn from the probability distribution for sparse signals. If the coherence, average coherence, and spectral norm of A satisfy suitable bounds, then thresholding applied to y = Ax + e, with an appropriately chosen threshold, returns an estimate whose support includes all indices i whose entries are sufficiently large, with high probability. [Bajwa, Calderbank, and Mixon 2012]

Average-Case Guarantees for Regression
Lasso: Assume that the signal with support S is drawn from the probability distribution for random signals. If the coherence and spectral norm of A satisfy suitable bounds, then the solution to the Lasso with an appropriate parameter applied to y = Ax + e achieves near-optimal prediction error with high probability. [Candès and Plan 2009]
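A minimal sketch of the two-step thresholding recovery described above (not from the slides): form the proxy Aᴴy, keep the k largest entries as the support, and solve least squares on that support.

```python
import numpy as np

def threshold_recover(A, y, k):
    """One-step (two-stage) thresholding: largest correlations, then least squares."""
    proxy = np.abs(A.conj().T @ y)                       # correlation proxy
    support = np.sort(np.argsort(proxy)[-k:])            # k largest entries
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(A.shape[1], dtype=np.result_type(A, y))
    x[support] = coeffs
    return x
```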
Sparse Signal Processing and Computable Metrics: APPLICATION TO RADAR PROCESSING

Target Detection Using Monostatic Radar
Primary assumptions:
•  far-field assumption and point targets,
•  parameters remain fixed for some duration,
•  background clutter has been subtracted.
The transmitted signal is reflected by the targets and returns as the received signal.

Discrete-Time Model of Monostatic Radar
Time-bandwidth product: m = TW. [Herman and Strohmer 2009]
Discrete delay–Doppler space:
•  the time-bandwidth space is divided into N = m² boxes of size Δτ = 1/W by Δν = 1/T,
•  a target contributes to box (r, s) if and only if its delay–Doppler pair is (τ_i, ν_i) = (r/W, s/T) for some i.
*Alternatives to the assumption of a discrete delay–Doppler space: [Bajwa, Gedalyahu, and Eldar 2011] [Harms, Bajwa, and Calderbank 2015]

The transmitted signal is built from a baseband waveform g(t) weighted by a symbol sequence (x₀ g(t), …) at Nyquist spacing 1/W over the duration t = 0 to t = T; the received signal is Nyquist-sampled at the same resolution Δτ = 1/W. [Herman and Strohmer 2009] [Bajwa, Sayeed, and Nowak 2008]
Gabor Frame and Compressed Sensing Radar
Final form of the received data: y = Ax, where A is a Gabor frame with each column a circular shift and modulation of the input sequence, and x is k-sparse. This is a compressed sensing problem.
•  Challenge: how to design the transmitted signal?
•  Solution: signal with a sequence that leads to a favorable coherence and spectral norm for A. [Herman and Strohmer 2009]

"Good" Sequences for Compressed Sensing Radar
Transmitted signal: Alltop sequence (prime length m)
•  The resulting Gabor frame has coherence 1/√m, which essentially maintains the Welch (lower) bound.
o  Implication: the time-bandwidth product m = TW should be proportional to k² for worst-case guarantees for compressed sensing radar.
•  The Alltop seed makes the resulting Gabor frame a tight frame.
o  Implication: the time-bandwidth product m = TW can be proportional to k·log N for average-case guarantees for compressed sensing radar.
•  The resulting Gabor frame also has small average coherence.
o  Implication: average-case guarantees that do not require random reflection coefficients.
[Herman and Strohmer 2009] [Pfander, Rauhut, and Tanner 2008] [Bajwa, Calderbank, and Jafarpour 2010] [Bajwa, Calderbank, and Mixon 2012]

Sparse Signal Processing and Computable Metrics: APPLICATION TO IMAGING

Deterministic Matrices for Compressive Imaging
Indexing: Row indices are binary of length l; columns are indexed by a "group" Pi together with a binary index of length l for the column within the group.
Delsarte-Goethals (DG) Matrices: Pi drawn from a specific set of binary symmetric matrices.
DG Frame: matrix with entries given in closed form using binary arithmetic; the submatrix for each Pi is an orthonormal basis. [Calderbank, Howard, and Jafarpour 2010]

Metrics: Because the matrix is a concatenation of c orthonormal bases, its spectral norm (after scaling) is √c; the coherence can be as low as 1/√m. [Calderbank, Howard, and Jafarpour 2010]
Applicability: The matrix can be applied to the coefficients of a 2-D discrete wavelet transform of the original image.
Results: For the usual sparsity ratios m/n observed in wavelet transforms of images, performance is poor. [Ni et al. 2011] [Datta et al. 2013]

Deterministic Compressive Imaging - Results
(Figure: original image vs. basis pursuit recovery vs. two-step algorithm recovery.)
Two-step algorithm:
•  compute the coarse-scale wavelet coefficients directly,
•  obtain CS measurements, subtract the coarse estimate, and recover the fine-scale coefficients.
[Ni et al. 2011] [Datta et al. 2013]

Culprit: Clustering in Wavelet Coefficient Vectors
The 2-D wavelet coefficients are raster-scanned into a vector, ordered first by scale, from coarsest to finest, then by offset. Most large coefficients appear at the coarsest scales and are clustered at the beginning of the vector; can these clusters be to blame for the performance loss?
Goal: Study whether clustered vectors appear in the null space of the DG frame. [Duarte, Jafarpour, and Calderbank 2013]

Dyadic Partitions of DG Frame Columns
•  Using group-theoretic properties of the DG matrix columns, one can show that sufficiently large dyadic column partitions are linearly dependent.
•  Sparse vectors with sufficiently compact clusters therefore cannot be recovered.
•  Recall that some sets of columns are "bad"…
[Duarte, Jafarpour, and Calderbank 2013]

Solution: Disperse Coefficient Vector Clusters
[Duarte, Jafarpour, and Calderbank 2013]
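The extracted slides do not spell out how the clusters are dispersed; as a purely hypothetical illustration of the idea, one could apply a fixed pseudorandom permutation to the wavelet-coefficient ordering before sensing and invert it after recovery, so that large coefficients no longer sit in one compact block.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
perm = rng.permutation(n)            # fixed once; shared by encoder and decoder

def disperse(coeffs):
    return coeffs[perm]              # scatter the clustered large coefficients

def gather(coeffs_permuted):
    out = np.empty_like(coeffs_permuted)
    out[perm] = coeffs_permuted      # undo the permutation after recovery
    return out
```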