Friday, August 26, 2011

Information Theory of Compressed Sensing, Compressible Signals

Now we want to know about signals that are not $k$-sparse, but are compressible: if the coefficients of the signal $x$ are arranged in decreasing order of magnitude $x_{(n)}$, then they exhibit some power law decay:
\[ |x_{(n)}| \lesssim n^{-1/p} \]


We will consider the case $p=1$ to fix ideas, and call this space $l^{1,w}$ (weak $l^1$).


A first fact about compressible signals is that if we want to approximate $l^{1,w}$ with $\Sigma_k$ (say, by the best $k$-term approximation in the $l^2$ norm), then the answer is just to take the top $k$ terms. For example, take $x \in l^{1,w}$ with $|x_{(n)}| \leq Cn^{-1}$. Then if $x^{(k)}$ is formed from the top $k$ terms of $x$, we have
\[ \| x - x^{(k)} \|_2 \leq C k^{-1/2} \]
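To spell out the estimate:
\[ \| x - x^{(k)} \|_2^2 = \sum_{n > k} |x_{(n)}|^2 \leq C^2 \sum_{n > k} n^{-2} \leq C^2 \int_k^\infty \frac{dt}{t^2} = \frac{C^2}{k} \]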


[First, some musings: if we introduce the quantity $|x| := \sup_j j\,|x_{(j)}|$, the rearrangement destroys the triangle inequality: for instance $|(1,1/2,1/3)| = |(1/3,1/2,1)| = 1$, but the sum $(1,1/2,1/3) + (1/3,1/2,1) = (4/3,1,4/3)$ has $|(4/3,1,4/3)| = 3 > 1 + 1$. Without rearranging terms, i.e. $\sup_j j\,|x_j|$, it is a norm.]


Directions... The previous quantization result does not work well for compressible signals, mainly because it is stated in terms of the best $k$-term approximation, and the scheme cannot improve on the best $k$-term error even when more measurements are used. This is a deficiency of this particular encoding...


From an information-theoretic point of view, a first question we can ask is: what is the best way to encode/compress this class of signals? Given a fixed bit budget, what is the encoding that achieves the best distortion? Alternatively, given a fixed distortion target, what is the number of bits needed to represent the entire class of signals? This can be answered by studying minimal \(\epsilon\)-nets, Kolmogorov entropy...


The encoding given by entropy considerations is non-constructive...


A further question is then: given a particular method of encoding, what are the theoretical limits? $\Sigma\Delta$-like schemes on compressed sensing measurements...


A toy problem... 3 dimensions... $(x_1,x_2,x_3)$ where one coordinate is large, the second is medium-sized, and the third is small. Specifically, $|x_{(1)}| \leq 1$, $|x_{(2)}| \leq |x_{(1)}| / 2$, and $|x_{(3)}| \leq 2|x_{(2)}| / 3$. How to compress such a space?
Some quick musings: let's compare the volume of this space to the cube $[-1,1]^3$ (volume 8). Focusing on the region $x_1 > x_2 > x_3 > 0$, the intersection with the space in question is the convex hull of $(0,0,0)$, $(1,0,0)$, $(1,1/2,0)$ and $(1,1/2,1/3)$. The volume can be computed using the pyramid $\frac{1}{3} \cdot \text{base} \cdot \text{height}$ formula: $\frac{1}{3} \cdot \left(\frac{1}{2} \cdot 1 \cdot \frac{1}{2}\right) \cdot \frac{1}{3} = \frac{1}{36}$, and by symmetry the volume is the same in each of the $6 \cdot 8 = 48$ sign/order regions, so the total volume of the space is $48/36 = 8/6$. So we are dealing with a space that is 6 times smaller in volume.
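As a quick numerical sanity check (a Monte Carlo sketch of my own, not part of the original computation; the sample size is arbitrary):

```python
# Monte Carlo estimate of the volume of the toy space
# {x in R^3 : |x_(1)| <= 1, |x_(2)| <= |x_(1)|/2, |x_(3)| <= 2|x_(2)|/3},
# where |x_(1)| >= |x_(2)| >= |x_(3)| are the magnitudes in decreasing order.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
pts = rng.uniform(-1, 1, size=(n, 3))
mags = np.sort(np.abs(pts), axis=1)[:, ::-1]      # sorted magnitudes, decreasing
inside = (mags[:, 1] <= mags[:, 0] / 2) & (mags[:, 2] <= 2 * mags[:, 1] / 3)
print(8 * inside.mean())                          # should be close to 8/6 = 1.333...
```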


How about $n$ dimensions? We need the volume of the convex hull of $(0,\ldots,0)$, $(1,0,\ldots,0)$, $(1,1/2,0,\ldots,0)$, $\ldots$, $(1,1/2,1/3,\ldots,1/n)$. (If the recursive formula $V_n = \frac{1}{n} V_{n-1} h_n$ continues to hold for "pyramid-like" structures, then the volume of the hull is $1/(n!)^2$, and the total volume of the space is $\frac{1}{(n!)^2} \cdot n! \cdot 2^n = 2^n/n!$; compared to the full hypercube, this is $1/n!$ times smaller. Need to check....)
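One way to check the $1/(n!)^2$ claim without the pyramid heuristic: the hull is a simplex whose edge vectors from the origin are $v_j = (1, 1/2, \ldots, 1/j, 0, \ldots, 0)$ for $j = 1, \ldots, n$, so
\[ \mathrm{vol} = \frac{1}{n!}\,\bigl|\det\,[\,v_1\ \cdots\ v_n\,]\bigr| = \frac{1}{n!} \prod_{j=1}^n \frac{1}{j} = \frac{1}{(n!)^2}, \]
since the matrix is triangular with diagonal entries $1, 1/2, \ldots, 1/n$. Summing over the $2^n n!$ sign/order regions indeed gives total volume $2^n/n!$.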


This space is different from the space of $(x_1,x_2,x_3)$ where, with rearrangement, $|x_{(j)}| \leq 1/j$ for all $j$... but that one doesn't quite capture what we want in this low-dimensional example, since we want a dominant component and some sort of tail, and there are only 3 spots. Maybe for later.


Toy Problem Compression
Also to investigate, how can we compress the toy problem with linear measurements? First, let's look at something naive: 
If we are given a target distortion $D$, let's take all three coordinates and just round each to the nearest point of a grid with spacing $2D/\sqrt{3}$ (this introduces a $2$-norm distortion of at most $D$). Then we need $\log_2(\sqrt{3}/D)$ bits per coordinate, so we need $3 \log_2(\sqrt{3}/D)$ bits total. In terms of the number of quantization points, this is $(\sqrt{3}/D)^3$, which matches compression bounds for the full cube $[-1,1]^3$ (actually the lower bound has a constant $c=3$ in place of $\sqrt{3}$, independent of the dimension). In the toy problem, we should be able to reduce the number of quantization points by a factor of 6, roughly speaking.


Let $D' = 2D/\sqrt{3}$ be the spacing of points per coordinate (to achieve distortion $D$ in the $2$-norm). One adaptive strategy is simply to allocate $\log_2(2/D')$ bits for the largest coordinate, $\log_2(1/D')$ bits for the middle coordinate, and $\log_2(2/(3D'))$ bits for the smallest (corresponding to the ranges $[-1,1]$, $[-1/2,1/2]$ and $[-1/3,1/3]$), in addition to storing the order information (one of six possibilities, so say three bits). Note that this scheme uses $3 + 3 \log_2(\sqrt{3}/D) - \log_2 6$ bits, and if translated to the number of quantization points, it becomes $(\sqrt{3}/D)^3 \cdot (8/6)$, which is actually worse. (It is worse only because of integrality issues: we are wasteful in using 3 bits for 6 possibilities of the order; otherwise we would be even with the simpler method above of just using $[-1,1]$ for all three coordinates and not recording the order.)
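Spelling out the count, with $D' = 2D/\sqrt{3}$:
\[ 3 + \log_2\frac{2}{D'} + \log_2\frac{1}{D'} + \log_2\frac{2}{3D'} = 3 + \log_2\frac{4}{3(D')^3} = 3 + 3\log_2\frac{\sqrt{3}}{D} - \log_2 6, \]
so the number of quantization points is $2^3 \cdot (\sqrt{3}/D)^3 / 6 = (8/6)(\sqrt{3}/D)^3$.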


Here's something fun: pictorially, it's easy to see how to recover one-sparse vectors in 3 dimensions with two measurements:
[figure: a 2D projection of the three coordinate axes, taken from an external source]

Of course, any drawing of 3D axes shows plainly how this would work. The 2D drawing is a 2D projection (i.e. 2 linear measurements), and every 1-sparse vector (a point on the axes) corresponds to a unique point in this 2D projection.


For our toy model, it would look more like...




The dotted lines enclose one piece of the space, and we can already see that in this 2D projection there are many potential points that correspond to the same point in the image. In particular, there does not seem to be a way to obtain a $D$-distortion code for small $D$ from these measurements. The smallest $D$ for which we can code $x$ using the 2 measurements $\Phi x$, with $\Phi = \begin{pmatrix} u^T \\ v^T \end{pmatrix}$ for two unit vectors $u,v$, is given by $\sup_{y\in \Phi(X)} {\rm diam}(\Phi^{-1}(y) \cap X)/2$.


I wonder what the best angle to "project" this picture is. For instance, if we just project onto the $x$-$y$ plane, the thickness of a fiber can be $2$ (the full range of the $z$ coordinate), and it would be a very lousy detector for signals concentrated along the $z$ axis. How can I figure this out?
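One rough way to explore this numerically (a Monte Carlo sketch of my own, with arbitrary sample sizes and tolerance, not part of the original argument): sample points of the toy space, project them with a candidate $\Phi$, and see how far apart points with nearly identical projections can be.

```python
# Estimate sup_y diam(Phi^{-1}(y) ∩ X)/2 for a 2x3 projection Phi of the toy space X,
# by sampling X and comparing pairs of samples whose projections nearly coincide.
import numpy as np

rng = np.random.default_rng(0)

def sample_toy_space(n):
    # magnitudes: |x_(1)| <= 1, |x_(2)| <= |x_(1)|/2, |x_(3)| <= 2|x_(2)|/3
    a = rng.uniform(0, 1, n)
    b = rng.uniform(0, 1, n) * a / 2
    c = rng.uniform(0, 1, n) * 2 * b / 3
    mags = np.stack([a, b, c], axis=1)
    # random signs and a random assignment of which coordinate carries which magnitude
    signs = rng.choice([-1.0, 1.0], size=(n, 3))
    perm = np.argsort(rng.uniform(size=(n, 3)), axis=1)
    return np.take_along_axis(mags, perm, axis=1) * signs

def worst_fiber_radius(Phi, n=1500, tol=2e-2):
    X = sample_toy_space(n)
    Y = X @ Phi.T
    dY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    dX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    close = dY < tol                 # pairs that project to (nearly) the same point
    return dX[close].max() / 2       # crude lower estimate of sup_y diam(...)/2

Phi_xy = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])                        # project onto the x-y plane
Phi_rand = np.linalg.qr(rng.standard_normal((3, 2)))[0].T   # a random orthonormal pair

print("x-y plane :", worst_fiber_radius(Phi_xy))    # should come out large; true value is 1
print("random    :", worst_fiber_radius(Phi_rand))
```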


This does not include an additional quantization step needed to compress to a specified bit budget.

Saturday, August 20, 2011

Sigma Delta for Frame Coefficients

The idea behind the compressed sensing scheme in the paper mentioned earlier [1] is as follows. We know that under suitable conditions, we can recover a sparse vector $x\in\Sigma_k$ with few non-adaptive linear measurements $y = \Phi x$. For practical applications, we need to be able to store these measurements, and robustness results tell us that perturbing $y$ still allows approximate recovery of $x$. In particular, we can determine the approximate support of $x$ from perturbed measurements. Let $T = {\rm supp}(x)$.

If we know the support of $x$, then we also know that if we took the columns of $\Phi$ corresponding to $T$, which we denote $\Phi_T$, then we can recover $x$ from $y$ via any left-inverse of $\Phi_T$ (there are many such left-inverses as $\Phi_T$ is $m \times k$ with $m>k$, i.e. overdetermined). In the language of frame theory, the columns of $\Phi_T$ will form a frame and the rows of the left-inverse form a dual frame.

In what follows we will let $E = \Phi_T$. The problem then boils down to the following: how can we quantize the measurements, $q = Q(y)$, so that the recovery error is as small as possible? Let us fix some alphabet, say $\mathcal{A} = \delta \mathbb{Z}$, and $Q: \mathbb{R}^m \to \mathcal{A}^m$. Essentially we want to solve
\[ \min_{Q, F: FE=I}  \| x - FQ(y) \|_2 \]
Then taking the worst case $x \in \Sigma_k$ will tell us the rate-distortion tradeoff for compressing $\Sigma_k$ with this scheme.

The paper cited above investigates the situation where $\Sigma\Delta$ quantization is used for the quantizer. That is, for an $r$-th order scheme, if $D$ is the finite difference matrix, the scheme solves a difference equation
\[   y - q = D^r u \]
for $q \in \mathcal{A}^m$ and $u$ bounded. In the setting of bandlimited signals, an $r$-th order scheme can obtain an error of the form $\lambda^{-r}$ where $\lambda$ is the oversampling ratio. The intuition here is that actually, since we know the support, which has size $k$, we are effectively oversampling by a factor $\lambda := m/k$, and thus we hope that we can achieve similar results in this setting.

This equation is solved greedily via recursion, choosing $q_n$ to minimize $|u_n|$, with the catch that for larger $r$ we need a finer alphabet (i.e. more bits in the quantization).
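For concreteness, here is a minimal sketch (my own illustration, not code from the paper) of the greedy rule in the first-order case $r = 1$, with alphabet $\mathcal{A} = \delta\mathbb{Z}$ and $(Du)_n = u_n - u_{n-1}$:

```python
import numpy as np

def sigma_delta_1(y, delta):
    q = np.zeros_like(y)
    u = 0.0
    for n, yn in enumerate(y):
        v = u + yn                           # state plus new sample
        q[n] = delta * np.round(v / delta)   # nearest alphabet point, minimizes |u_n|
        u = v - q[n]                         # u_n = u_{n-1} + y_n - q_n, so |u_n| <= delta/2
    return q
```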

Then to obtain an error estimate, for a dual frame $F$ (satisfying $FE=I$), we have the error estimate
\[  \|x - \hat{x}\| = \| F(y - q) \| = \| FD^r u \| \leq \|FD^r\|_{op} \|u\|_2 \]
Since $u$ is bounded, $\|u\|_2 \leq \sqrt{m} \|u\|_\infty$, and we want to find the dual frame $F$ which minimizes this operator norm, i.e. solve
\[  \min_{FE=I} \|FD^r\|_{op} \]

The solution is $F = (D^{-r}E)^\dagger D^{-r}$, with operator norm $\sigma_{min}(D^{-r}E)^{-1}$. If $E$ is taken to be a random Gaussian frame (which is the case in one setting of compressed sensing), we can obtain a bound of order $\lambda^{-\alpha r} / \sqrt{m}$ for this operator norm with overwhelming probability. Putting everything together (via $\|u\|_2 \leq \sqrt{m}\|u\|_\infty$), this gives a recovery error of order $\lambda^{-\alpha r}$. (Above, $\alpha$ is some constant in $(0,1)$, which affects the probability and rate of recovery.)
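A quick check of this choice (assuming $D^{-r}E$ has full column rank): $FE = (D^{-r}E)^\dagger D^{-r}E = I$, so $F$ is indeed a dual frame, and
\[ \|FD^r\|_{op} = \|(D^{-r}E)^\dagger D^{-r} D^r\|_{op} = \|(D^{-r}E)^\dagger\|_{op} = \sigma_{min}(D^{-r}E)^{-1}. \]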

An Extension
One possible extension of the above is then to ask: what happens if we don't use $D^r$ as the noise-shaping operator? Given $D$ and $E$, we note that there is an optimal dual frame $F$ for recovery. Can we choose $D$ so that the corresponding $F$ leads to a better result? The catch is that we need to constrain $D$ to give a stable scheme, so that we can still solve $y - q = Du$ with $u$ bounded and $q$ constrained to the bit budget (alphabet constraint).

Let us study $D$ of the form $D = I + H$ where $H$ is lower triangular with $0$'s on the diagonal
(will add more later...)

Friday, August 12, 2011

Information Theory of Compressed Sensing, Sparse Vectors

The papers leading up to this point are those in the compressed sensing literature about sparse recovery (recovering sparse vectors of unknown support from few linear measurements relative to the ambient dimension), and most recently, a paper about quantization of compressed sensing measurements [1]. Here is the specific setup:


High dimensional space $\mathbb{R}^N$
Class of $k$-sparse signals $\Sigma_k = \{ x\in \mathbb{R}^N:  |\mbox{supp}(x)| = k \}$
Measurement matrix $\Phi \in \mathbb{R}^{m\times N}$, $m \ll N$, matrix satisfies some special property.
Compression:  Given $x \in \Sigma_k$,  compute $y = \Phi x$, store $y$.
Recovery:  Solve $\min_z \|z\|_1$ s.t. $\Phi z = y$. 


The recent theoretical results allow successful recovery with $m \sim C_1 k \log(N/k)$ for some constant $C_1$. Furthermore, if the measurements are perturbed by some small vector $e$ with $\|e\|_2 \leq \epsilon$, then using a modified recovery algorithm: $\min_z \|z\|_1$ s.t. $\| \Phi z - y \|_2 \leq \epsilon$, the recovered vector $x_\ast$ satisfies  $\|x - x_\ast\|_2 \leq C_2 \epsilon / \sqrt{m}$.
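As an aside, here is a minimal numerical sketch of this noise-aware recovery program (my own illustration using the cvxpy package; the dimensions, sparsity and $\epsilon$ are arbitrary):

```python
# l1 recovery from perturbed measurements: min ||z||_1 s.t. ||Phi z - y||_2 <= eps
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, m, k = 256, 64, 5
Phi = rng.standard_normal((m, N))                            # Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.uniform(-1, 1, k)   # k-sparse signal

eps = 1e-2
e = rng.standard_normal(m)
e *= eps / np.linalg.norm(e)                                 # perturbation with ||e||_2 <= eps
y = Phi @ x + e

z = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(z)), [cp.norm(Phi @ z - y, 2) <= eps])
prob.solve()
print("recovery error:", np.linalg.norm(x - z.value))
```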


If we apply this result to quantization error, rounding each measurement $y_i$ to the nearest point in an evenly spaced grid $\delta \mathbb{Z}$, then we note $\|e\|_2 \leq \sqrt{m} \delta$ so that the recovered vector will be off by $\leq C_2 \delta$. Note there is no dependence on $m$. By rounding each measurement independently, we have not taken advantage of the fact that the measurements are correlated (a union of $k$-dimensional subspaces embedded into $m > k$ dimensions). The higher the number of measurements, the more correlated the measurements are. 


The main result of the quantization paper above is that if we use $\Sigma\Delta$ modulation as the quantization scheme, then we can achieve a better error bound which depends on $m$:


Theorem B. Let $\Phi$ be an $m\times N$ matrix whose entries are i.i.d. according to $\mathcal{N}(0,1)$. Suppose $\alpha \in (0,1)$ and $\lambda := m/k \geq c(\log N)^{1/(1-\alpha)}$ where $c = c(r,\alpha)$. Then there are two constants $c'$ and $C$ that depend only on $r$ such that with probability at least $1-\exp(-c' m \lambda^{-\alpha})$ on the draw of $\Phi$, the following holds: For every $x\in \Sigma_k$ such that $\min_{j \in \rm{supp}(x)} |x_j| \geq C\delta$, the reconstruction $\tilde{x}$ satisfies
\[  \|x - \tilde{x}\|_2 \lesssim \delta \lambda^{-\alpha (r-1/2)} \]


Here $r$ is the order of the $\Sigma\Delta$ scheme used, where we note that for this particular result, larger $r$ requires more bits in the quantization to keep the scheme stable. This result is also just for random Gaussian measurements. We will ignore the size condition $|x_j| \geq C \delta$ for now, but the paper mentions that this technical condition is also practical ($\delta$ is smaller than the maximum uncertainty from rounding; we may as well assume elements with magnitude $\leq C \delta$ are $0$).


For the rest, assume also that $|x_j| \leq 1$, so that we are studying the compression of $X = \Sigma_k \cap B_1(l^\infty(\mathbb{R}^N))$. To summarize the information-theoretic results of the paper: for an $r$-th order scheme, the number of bits needed in the quantizer is $\approx \log_2\left(\frac{5}{C\delta} \ 2^{r+1/2} \lambda^{(1-\alpha)/2} k\right)$, and the corresponding distortion is $D(r,\lambda) \lesssim \frac{\lambda^{-\alpha(r-1/2)} \delta}{2^{r+1/2}}$. Given a bit budget $B$, let $b(r,\lambda)$ be the number of bits of the quantizer. We need to minimize the distortion $D(r,\lambda)$ subject to $k\lambda\, b(r,\lambda) \leq B$. This will give us a rate-distortion curve $D(B)$ using compressed sensing and sigma-delta quantization. (To be continued... Edit: There's something missing here and I can't seem to figure out what I need to make all the parameters and constants make sense...)


To compare with a sillier encoding scheme, we can simply store the $k$ ordered pairs $(x_i, i)$, where we use $b$ bits to store each coefficient and $\log_2 N$ bits to store each index. In total this uses $k(b + \log_2 N)$ bits. Setting this equal to the bit budget $B$, we have $b = B/k - \log_2 N$. Then to compute the distortion, again we assume $|x_i| \leq 1$; if we use spacing $\delta$ for the coefficient quantization, then $b = \log_2(2/\delta)$, i.e. $\delta = 2^{1-b}$. Each entry picks up an error of at most $\delta/2 = 2^{-b}$, so the overall distortion in the $2$-norm is at most $\sqrt{k}\,\delta/2 = \sqrt{k}\, 2^{-b} = \sqrt{k}\, N\, 2^{-B/k} =: D(B)$.
Or, looking at the inverse function,
\[ B(D) = k \log_2\!\left(\sqrt{k}\, N / D \right) \]
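For concreteness, a minimal sketch (my own illustration, with arbitrary parameters) of this pair-encoding scheme and its distortion:

```python
# Naive compression of a k-sparse vector: store (index, quantized value) pairs,
# using log2(N) bits per index and b bits per coefficient (assuming |x_i| <= 1).
import numpy as np

def encode(x, b):
    delta = 2.0 ** (1 - b)                      # spacing so that 2/delta = 2^b levels
    idx = np.flatnonzero(x)
    vals = delta * np.round(x[idx] / delta)     # round each coefficient to the grid
    return idx, vals

def decode(idx, vals, N):
    x_hat = np.zeros(N)
    x_hat[idx] = vals
    return x_hat

N, k, b = 1024, 10, 8
rng = np.random.default_rng(0)
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.uniform(-1, 1, k)

idx, vals = encode(x, b)
x_hat = decode(idx, vals, N)
bits = k * (b + np.log2(N))                     # total bit cost k(b + log2 N)
print(bits, np.linalg.norm(x - x_hat))          # distortion <= sqrt(k) * 2^(-b)
```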


Kolmogorov Entropy


We can also compute the theoretical limits via Kolmogorov entropy, or logarithm of the minimum number of balls (with respect to a fixed norm, $l^2$ say) of radius $D$ needed to cover the set $X$. In this setup, a cover corresponds to an encoding scheme, which is simply to map a given element to the nearest ball center. Then the number of bits of the scheme is the logarithm of the size of the cover.


The minimum cover size is comparable to the maximum number of points mutually separated by $D$: a maximal set of $D$-separated points, used as ball centers, is automatically a cover (by maximality of the separated set), while in the other direction a cover must cover each separated point and no ball of radius $D/2$ can cover two $D$-separated points.


Consider again $X=\Sigma_k \cap B_1(l^\infty(\mathbb{R}^N))$. This is a $k$-dimensional subset, and effectively the ($k$-dimensional) volume of $X$ is $\binom{N}{k} 2^k$. Now consider a maximal $D$-separated set. If we consider (disjoint) $l^2(\mathbb{R}^N)$ balls of radius $D/2$ around each point in the set and intersect them with $X$, we get a subset of $X$ whose effective volume is bounded below by ${\rm \#\ points} \times {\rm vol} (B_{D/2}(l^2(\mathbb{R}^k)))$, so that
\[ {\rm \#\ points} \leq \frac{1}{c_k} (2/D)^k \binom{N}{k} 2^k \sim \frac{1}{c_k \sqrt{k}} (4\lambda / D)^k \]
Here $c_k = \pi^{k/2} / \Gamma(k/2+1) \sim (2\pi e/k)^{k/2} / \sqrt{\pi k}$
In other words, the optimal bitrate given distortion has the bound
\[ B(D) \lesssim  k \log(c\sqrt{k} \lambda/D) \]
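To spell out the step: taking logarithms in the count of points and using the asymptotics of $c_k$,
\[ \log_2 {\rm \#\ points} \lesssim k \log_2\frac{4\lambda}{D} + \frac{k}{2}\log_2\frac{k}{2\pi e} = k \log_2\!\left(c\sqrt{k}\,\lambda/D\right), \qquad c = \frac{4}{\sqrt{2\pi e}}. \]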


(What's hidden: this bound is probably not that great because the intersection of a ball with $X$ may contain many $k$-dimensional pieces, especially around the origin, where all $\binom{N}{k}$ coordinate $k$-planes pass.)


Another upper bound can be obtained by multiplying the maximal number of $D$-separated points in $B_1(l^\infty(\mathbb{R}^k))$ by $\binom{N}{k}$. This gives
\[ B(D) \lesssim k \log(c\lambda / D) \]
(The generic result for bounded sets in finite dimensions can be found in the paper by Kolmogorov and Tihomirov, "$\epsilon$-entropy and $\epsilon$-capacity of sets in function spaces".)


Edit: The difference of $\sqrt{k}$ seems to be a difference in normalization... actually the $\sqrt{k}$ inside the logarithm does not affect the asymptotics with respect to $1/D$, as $D$ decreases to $0$, so maybe something is being ignored in the latter estimate.