
Friday, August 12, 2011

Information Theory of Compressed Sensing, Sparse Vectors

The papers leading up to this point are the compressed sensing papers on sparse recovery (recovering sparse vectors of unknown support from few linear measurements relative to the ambient dimension) and, most recently, a paper about quantization of compressed sensing measurements [1]. Here is the specific setup:


High dimensional space $\mathbb{R}^N$
Class of $k$-sparse signals $\Sigma_k = \{x \in \mathbb{R}^N : |\mathrm{supp}(x)| \le k\}$
Measurement matrix $\Phi \in \mathbb{R}^{m \times N}$, $m \ll N$, where the matrix satisfies some special property.
Compression: given $x \in \Sigma_k$, compute $y = \Phi x$ and store $y$.
Recovery: solve $\min_z \|z\|_1$ s.t. $\Phi z = y$ (a numerical sketch follows this list).
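
Here is a minimal numerical sketch of this setup (my own illustration, not code from the paper), solving the $\ell_1$-minimization as a linear program with scipy; the dimensions, constants, and the choice of solver are arbitrary.

```python
# A minimal sketch of compressed sensing recovery: recover a k-sparse vector
# from m Gaussian measurements by solving the l1-minimization problem
# (basis pursuit) as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, k, m = 200, 5, 60          # m is roughly C1 * k * log(N/k) for a modest C1

# k-sparse signal x and Gaussian measurement matrix Phi
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.uniform(0.5, 1.0, size=k) * rng.choice([-1, 1], size=k)
Phi = rng.standard_normal((m, N))
y = Phi @ x                   # compression: store y (length m) instead of x

# Basis pursuit  min ||z||_1  s.t.  Phi z = y,
# written as a linear program in (u, v) with z = u - v and u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
z = res.x[:N] - res.x[N:]

print("recovery error ||z - x||_2 =", np.linalg.norm(z - x))
```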


The recent theoretical results allow successful recovery with $m \ge C_1 k \log(N/k)$ for some constant $C_1$. Furthermore, if the measurements are perturbed by some small vector $e$ with $\|e\|_2 \le \epsilon$, then using a modified recovery algorithm, $\min_z \|z\|_1$ s.t. $\|\Phi z - y\|_2 \le \epsilon$, the recovered vector $\hat{x}$ satisfies $\|\hat{x} - x\|_2 \le C_2\,\epsilon/\sqrt{m}$.


If we apply this result to quantization error, rounding each measurement $y_i$ to the nearest point of an evenly spaced grid $\delta\mathbb{Z}$, then each entry of the perturbation satisfies $|e_i| \le \delta/2$, so $\|e\|_2 \le \sqrt{m}\,\delta$, and the recovery bound above gives an error of at most $C_2\|e\|_2/\sqrt{m} \le C_2\,\delta$. Note there is no dependence on $m$. By rounding each measurement independently, we have not taken advantage of the fact that the measurements are correlated ($y$ lies in the image of a union of $k$-dimensional subspaces embedded into $m > k$ dimensions). The more measurements we take, the more correlated they are.


The main result of the quantization paper above is that if we use $\Sigma\Delta$ modulation as the quantization scheme, then we can achieve a better error bound, one which depends on $m$:


Theorem B. Let $\Phi$ be an $m \times N$ matrix whose entries are i.i.d. according to $\mathcal{N}(0,1)$. Suppose $\alpha \in (0,1)$ and $\lambda := m/k \ge c\,(\log N)^{1/(1-\alpha)}$ where $c = c(r,\alpha)$. Then there are two constants $c'$ and $C$ that depend only on $r$ such that with probability at least $1 - \exp(-c'\,m\,\lambda^{-\alpha})$ on the draw of $\Phi$, the following holds: for every $x \in \Sigma_k$ such that $\min_{j \in \mathrm{supp}(x)} |x_j| \ge C\delta$, the reconstruction $\tilde{x}$ satisfies
$$\|\tilde{x} - x\|_2 \;\lesssim\; \delta\,\lambda^{-\alpha(r - 1/2)}.$$


Here $r$ is the order of the $\Sigma\Delta$ scheme used; note that for this particular result, larger $r$ requires more bits in the quantizer to keep the scheme stable. This result is also just for random Gaussian measurements. We will ignore the size condition $|x_j| \ge C\delta$ for now, but the paper mentions that this technical condition is also practical: entries of magnitude on the order of $\delta$ are below the uncertainty introduced by rounding anyway, so we may as well treat elements of magnitude less than $C\delta$ as $0$.
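
As a rough illustration of what "$\Sigma\Delta$ modulation as the quantization scheme" means, here is a sketch of the first-order ($r=1$) greedy $\Sigma\Delta$ quantizer applied to a measurement sequence. This is my own toy version: the paper's result uses higher-order schemes and reconstructs with Sobolev duals, neither of which is implemented here.

```python
# First-order greedy Sigma-Delta quantization of a sequence (a toy sketch, not
# the paper's scheme): each sample is quantized to the grid delta*Z and the
# running error is carried forward in a state variable u.
import numpy as np

def sigma_delta_quantize(y, delta):
    """q_n = nearest grid point to (y_n + u_{n-1}),  u_n = u_{n-1} + y_n - q_n."""
    u = 0.0
    q = np.empty_like(y)
    for n, yn in enumerate(y):
        q[n] = delta * np.round((yn + u) / delta)   # round to the grid delta*Z
        u = u + yn - q[n]                           # state update; |u| <= delta/2
    return q, u

rng = np.random.default_rng(1)
y = rng.standard_normal(500)       # stand-in for the measurements Phi @ x
delta = 0.1
q, u_final = sigma_delta_quantize(y, delta)
print("max per-entry error |y_n - q_n|:", np.abs(y - q).max())   # at most delta
print("final state |u|:", abs(u_final))                          # at most delta/2
```

The point of the scheme is that the quantization error $y - q$ is a first-order difference of the bounded state sequence $u$, which a suitable reconstruction (the Sobolev dual in the paper) can exploit.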


For the rest, assume also that $|x_j| \le 1$, so that we are studying the compression of $X = \Sigma_k \cap B_1(\ell_\infty(\mathbb{R}^N))$. To summarize the information-theoretic results of the paper: for an $r$-th order scheme, the number of bits needed in the quantizer is $\log_2\!\big(\tfrac{5C}{\delta}\, 2^{r+1/2}\, \lambda^{(1-\alpha)/2}\, \sqrt{k}\big)$, and the corresponding distortion is $D(r,\lambda) \lesssim \lambda^{-\alpha(r-1/2)}\,\delta\, 2^{r+1/2}$. Given a bit budget $B$, let $b(r,\lambda)$ be the number of bits of the quantizer. We need to minimize the distortion $D(r,\lambda)$ subject to $k\lambda\, b(r,\lambda) \le B$. This will give us a rate-distortion curve $D(B)$ using compressed sensing and sigma-delta quantization. (To be continued... Edit: There's something missing here and I can't seem to figure out what I need to make all the parameters and constants make sense...)
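
To make the shape of the optimization concrete, here is a brute-force sketch (my own, and only as good as the expressions above: every unspecified constant is set to 1, which the edit above suggests may not be fully consistent): given a budget $B$, search over the order $r$ and the oversampling ratio $\lambda = m/k$.

```python
# Brute-force search over (r, lambda) for the rate-distortion trade-off sketched
# above, with every unspecified constant set to 1.  Purely illustrative.
import numpy as np

k, delta, alpha = 5, 1e-3, 0.5
B = 2000                                  # total bit budget

def bits_per_measurement(r, lam):
    # b(r, lambda) ~ log2( (1/delta) * 2**(r + 1/2) * lam**((1 - alpha)/2) * sqrt(k) )
    return np.log2((1 / delta) * 2 ** (r + 0.5) * lam ** ((1 - alpha) / 2) * np.sqrt(k))

def distortion(r, lam):
    # D(r, lambda) ~ lam**(-alpha * (r - 1/2)) * delta * 2**(r + 1/2)
    return lam ** (-alpha * (r - 0.5)) * delta * 2 ** (r + 0.5)

best = None
for r in range(1, 8):
    for lam in range(2, 1000):
        if k * lam * bits_per_measurement(r, lam) <= B:   # budget  k * lambda * b <= B
            d = distortion(r, lam)
            if best is None or d < best[0]:
                best = (d, r, lam)

print("best (distortion, r, lambda):", best)
```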


To compare with a sillier encoding scheme, we can simply store the ordered pairs $(x_i, i)$, where we use $b$ bits to store the coefficient and $\log_2 N$ bits to store the index. In total this would use $b\,k\log(N)$ bits. Setting this equal to the bit budget $B$, we have $b = B/(k\log(N))$. Then to compute the distortion, again we assume $|x_i| \le 1$, and if we use spacing $\delta$ for the quantization, then we need $b = \log_2(2/\delta)$, i.e., $\delta = 2^{-(b-1)}$. Each entry would pick up an error of at most $\delta/2 = 2^{-b}$, and then the overall distortion in the 2-norm would be
$$\sqrt{k}\,\delta/2 \;=\; \sqrt{k}\,2^{-B/(k\log(N))} \;=:\; D(B).$$
Or, looking at the inverse function,
$$B(D) \;=\; k\,\log_2\!\big(\sqrt{k}/D\big)\,\log(N).$$
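
To see the inversion explicitly: setting $D = \sqrt{k}\,2^{-B/(k\log N)}$ and solving for $B$,
$$2^{-B/(k\log N)} \;=\; \frac{D}{\sqrt{k}} \quad\Longrightarrow\quad \frac{B}{k\log N} \;=\; \log_2\frac{\sqrt{k}}{D} \quad\Longrightarrow\quad B \;=\; k\,\log_2\!\big(\sqrt{k}/D\big)\,\log N.$$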


Kolmogorov Entropy


We can also compute the theoretical limits via Kolmogorov entropy, i.e., the logarithm of the minimum number of balls (with respect to a fixed norm, $\ell_2$ say) of radius $D$ needed to cover the set $X$. In this setup, a cover corresponds to an encoding scheme: map a given element to the nearest ball center. The number of bits of the scheme is then the logarithm of the size of the cover.
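
As a sanity check on the definition (a standard example, not from the paper): covering the cube $[0,1]^k$ by $\ell_\infty$-balls of radius $D$ takes about $\lceil 1/(2D)\rceil^k$ balls (a grid of cells of side $2D$), so its entropy is roughly $k\log_2\frac{1}{2D}$ bits: linear in the dimension and logarithmic in $1/D$.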


The minimum cover size is comparable to the maximum number of points mutually separated by $D$ (up to replacing $D$ by a constant multiple): a cover must cover each point of a separated set, and no ball of radius $D/2$ can cover two points separated by more than $D$; conversely, a maximal set of $D$-separated points can be used as the centers of $D$-balls, and the result is necessarily a cover by maximality of the separated set.
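
For reference (a standard fact, not specific to the paper): writing $N(\epsilon)$ for the minimum number of $\epsilon$-balls needed to cover $X$ and $P(\epsilon)$ for the maximum number of points of $X$ with pairwise distances greater than $\epsilon$,
$$P(2\epsilon) \;\le\; N(\epsilon) \;\le\; P(\epsilon),$$
so the two notions of entropy agree up to the harmless replacement $\epsilon \mapsto \epsilon/2$.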


Consider again $X = \Sigma_k \cap B_1(\ell_\infty(\mathbb{R}^N))$. This is a $k$-dimensional subset, and effectively the ($k$-dimensional) volume of $X$ is $\binom{N}{k}\,2^k$. Now consider a maximal $D$-separated set. If we consider (disjoint) $\ell_2(\mathbb{R}^N)$ balls of radius $D/2$ around each point in the set, and intersect with $X$, we have a subset of $X$ with effective volume bounded below by
$$\#\,\text{points} \times \mathrm{vol}\big(B_{D/2}(\ell_2(\mathbb{R}^k))\big)$$
and bounded above by the volume of $X$, so that
$$\#\,\text{points} \;\le\; \frac{1}{c_k}\Big(\frac{2}{D}\Big)^{k}\binom{N}{k}\,2^{k} \;\le\; \frac{1}{c_k}\Big(\frac{4e\,(N/k)}{D}\Big)^{k}.$$
Here $c_k = \pi^{k/2}/\Gamma(k/2+1) \approx (2\pi e/k)^{k/2}/\sqrt{\pi k}$ is the volume of the unit $\ell_2$ ball in $\mathbb{R}^k$.
In other words, the optimal bitrate at distortion $D$ has the bound
$$B(D) \;\lesssim\; k\,\log\!\Big(\frac{c\,\sqrt{k}\,(N/k)}{D}\Big) \;=\; k\,\log\!\Big(\frac{c\,N}{\sqrt{k}\,D}\Big).$$
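
Filling in the step from the volume bound to the bitrate (my reading of the computation, using $\binom{N}{k} \le (eN/k)^k$ and dropping the lower-order $\tfrac{1}{2}\log_2(\pi k)$ term from Stirling):
$$B(D) \;\le\; \log_2\binom{N}{k} + k\log_2\frac{4}{D} + \log_2\frac{1}{c_k} \;\approx\; k\log_2\frac{eN}{k} + k\log_2\frac{4}{D} + \frac{k}{2}\log_2\frac{k}{2\pi e} \;=\; k\log_2\!\Big(\frac{c\,N}{\sqrt{k}\,D}\Big), \qquad c = 4\sqrt{e/(2\pi)}.$$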


(What's hidden: this bound is probably not that great, because intersecting a ball with $X$ may pick up many $k$-dimensional pieces, especially around the origin, through which all $\binom{N}{k}$ of the $k$-dimensional coordinate subspaces pass.)


Another upper bound can be obtained by multiplying the number of optimal $D$-separated points in $B_1(\ell_\infty(\mathbb{R}^k))$ by $\binom{N}{k}$. This gives
$$B(D) \;\lesssim\; k\,\log\!\big(c\,(N/k)/D\big).$$
(A generic result of this type for bounded sets in finite dimensions can be found in the paper by Kolmogorov and Tihomirov, "$\epsilon$-entropy and $\epsilon$-capacity of sets in function spaces.")
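
Spelling this out (with $c$ the constant from the $k$-dimensional estimate, and again using $\binom{N}{k} \le (eN/k)^k$):
$$B(D) \;\le\; \log_2\binom{N}{k} + k\log_2\frac{c}{D} \;\le\; k\log_2\frac{eN}{k} + k\log_2\frac{c}{D} \;=\; k\log_2\!\Big(\frac{c\,e\,(N/k)}{D}\Big).$$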


Edit: The difference of $\sqrt{k}$ seems to be a difference in normalization... actually the $\sqrt{k}$ inside the logarithm does not affect the asymptotics with respect to $1/D$ as $D$ decreases to $0$, so maybe something is being ignored in the latter estimate.
