# Binary Compressive Sensing via Analog Fountain Coding

Sep 28. Our work on binary compressive sensing using analog fountain codes has recently been published in IEEE Transactions on Signal Processing. This post provides a brief introduction to the work. The full paper is now available on IEEE Xplore and arXiv.

**COMPRESSIVE SENSING (CS)** has drawn a lot of interest in recent years as a revolutionary signal sampling paradigm. The main idea behind CS is that a sparse signal can be reconstructed from an under-determined system of linear equations through a computationally efficient convex programming algorithm. Consider the reconstruction of a sparse signal **x** ∈ **R**^{n} from the linear measurements **y** ∈ **R**^{m}, where m < n, **y** = **Gx**, and **G** ∈ **R**^{m×n} is the measurement matrix, modeling the measurement (sensing) process. We assume that the signal **x** is k-sparse, meaning that exactly k << n elements of **x** are nonzero [1], but the locations of the nonzero entries are unknown. Due to the sparsity of **x**, one could try to compute **x** by solving the following optimization problem:

min_{x} ||**x**||_{0} s.t. **Gx** = **y**. (1)
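As a concrete instance of this setup, the short NumPy sketch below generates a k-sparse signal and its linear measurements; the dimensions n, k, m and the Gaussian choice of **G** are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 10, 40            # signal length, sparsity, number of measurements

# k-sparse signal: k nonzero entries at unknown (random) locations
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix, a common generic choice for G
G = rng.standard_normal((m, n))

# m < n real-valued linear measurements
y = G @ x
```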

However, solving (1) is an NP-hard problem and thus practically infeasible [2]. Instead, a convex relaxation can be formulated as follows, which can be solved efficiently via linear or quadratic programming techniques.

min_{x} ||**x**||_{1} s.t. **Gx** = **y**. (2)
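Problem (2) can be posed directly as a linear program via the standard epigraph trick: minimize the sum of auxiliary variables t subject to −t ≤ x ≤ t and **Gx** = **y**. A minimal sketch using SciPy's `linprog`; the dimensions in the recovery check are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(G, y):
    """Solve min ||x||_1 s.t. Gx = y as a linear program over stacked [x, t]."""
    m, n = G.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])       # objective: sum(t)
    # Epigraph constraints:  x - t <= 0  and  -x - t <= 0
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([G, np.zeros((m, n))])             # Gx = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# Illustrative check: a 2-sparse signal recovered from m = 20 Gaussian
# measurements of an n = 30 signal
rng = np.random.default_rng(0)
n, k, m = 30, 2, 20
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
G = rng.standard_normal((m, n))
x_hat = l1_min(G, G @ x)
```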

As shown in [2–5], under certain conditions on the matrix **G** and the sparsity of **x**, both (1) and (2) have the same unique solution. The Restricted Isometry Property (RIP) and the coherence of the matrix **G** are to date the most widely used tools to derive such conditions. A comprehensive introduction to compressive sensing can be found in the excellent review papers [6, 7]. Unlike conventional CS theory, which considers only the sparsity of the signal, [8] introduced a more realistic signal model and CS approach, referred to as model-based CS, which exploits structure among the signal coefficients. Model-based CS and its variations have been widely studied in the literature [8–14]. Because the structure of the sparse signal is known, model-based CS offers two major advantages. First, the number of measurements required for reconstruction of the sparse signal is significantly reduced. Second, recovery is more accurate and robust, as true signal information can be better differentiated from recovery artifacts [8].

In this work, we are particularly interested in the reconstruction of sparse binary signals from linear measurements. Binary CS is a special case of model-based CS with many applications, such as event detection in wireless sensor networks (WSNs), group testing, and spectrum sensing for cognitive radios. More practical settings and similar discrete compressive sensing scenarios can be found in [15–21]. More specifically, in binary CS the nonzero elements of the sparse signal all have the same value of 1, and the objective is to minimize the number of real-valued measurements linearly generated from the binary sparse signal. An ℓ_{1}-minimization solution for noiseless binary CS was proposed in [17], where (2) is solved with the reconstructed signal restricted to the range [0, 1]. Also, using integer programming, [15] proposed a model to solve a system of linear equations with integer variables. In [21], a simple verification-based decoder was proposed for binary CS problems, where the measurement matrix is sparse and its elements are drawn randomly from a predefined probability distribution. However, the number of measurements required by these approaches is still very large, as they do not take the binary structure of the signal into account in the design of the CS approach. In this work, we propose a binary CS approach based on analog fountain codes (AFCs) [22]. AFCs were originally designed to approach the capacity of the Gaussian channel across a wide range of signal-to-noise ratios (SNRs).
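For intuition, when the reconstruction is restricted to [0, 1], the ℓ_{1} norm reduces to the plain sum of the entries, so (2) becomes a single box-constrained linear program. The sketch below is a simplified reading of this idea in the spirit of [17], not its exact formulation, and the dimensions are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def binary_l1_min(G, y):
    """min ||x||_1 s.t. Gx = y and 0 <= x <= 1.
    For x in [0, 1]^n, ||x||_1 = sum(x), so one LP suffices."""
    n = G.shape[1]
    res = linprog(np.ones(n), A_eq=G, b_eq=y, bounds=[(0.0, 1.0)] * n)
    return res.x

# Illustrative check on a binary 4-sparse signal
rng = np.random.default_rng(1)
n, k, m = 40, 4, 28
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0
G = rng.standard_normal((m, n))
x_hat = binary_l1_min(G, G @ x)
```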

Each AFC coded symbol is a linear combination of randomly selected information symbols with real weight coefficients. With a proper design of the weight coefficients, the binary linear equation associated with each AFC coded symbol has a unique solution. As shown in [22], the AFC decoding problem at high SNRs is equivalent to solving a system of binary linear equations, which is exactly the reconstruction of a binary sparse signal from noiseless linear measurements. This property of AFC codes motivates us to incorporate them in the design of an efficient binary CS approach that minimizes the number of measurements required for successful recovery of the sparse signal.

In the proposed analog fountain compressive sensing (AFCS) approach, each measurement is generated from a number of randomly selected binary information symbols with weight coefficients selected from a finite weight set. Thanks to the random structure of AFC codes, the number of generated measurements is potentially limitless, and the transmitter can keep transmitting measurements until the destination fully recovers the sparse binary signal. Due to the sparsity of the signal and the fact that the weight coefficients do not introduce redundancy, each measurement is connected to a very small number of nonzero signal components in the respective weighted bipartite graph; thus, information symbols can be recovered iteratively by comparing each measurement value with the weight coefficients assigned to it. More specifically, we propose a reconstruction algorithm in which a measurement can determine its connected variable nodes if it is connected to at most T nonzero signal elements in the bipartite graph, where T ≥ 0 is a predefined value. To minimize the number of required measurements, we formulate an optimization problem to find the optimum measurement degree, defined as the number of information symbols connected to each measurement, or simply the row weight of the measurement matrix.
The minimum number of required measurements is then approximated and shown to be O(−n log(1−s)), where n is the signal length and s = k/n is the signal sparsity order.
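To make the encoder and the verification-style decoding concrete, here is a simplified sketch. The degree, weight distribution, and number of measurements are illustrative choices, and the decoder resolves a measurement whenever its residual value matches exactly one binary assignment of its still-unknown neighbours, rather than implementing the paper's exact T-based rule:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, k, d, m = 60, 6, 8, 40        # length, sparsity, measurement degree, measurements

# True k-sparse binary signal
x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = 1

# Each measurement combines d distinct symbols with generic positive real
# weights, so distinct binary assignments of its neighbours give distinct values.
neighbours = [rng.choice(n, size=d, replace=False) for _ in range(m)]
weights = [rng.uniform(0.5, 1.5, size=d) for _ in range(m)]
y = np.array([w @ x[nb] for nb, w in zip(neighbours, weights)])

# Verification-style decoding: resolve a measurement's unknown neighbours
# when exactly one binary assignment matches its residual value.
x_hat = np.full(n, -1)           # -1 marks an unresolved symbol
for _ in range(20):              # a few iterative sweeps
    for nb, w, val in zip(neighbours, weights, y):
        unknown = [i for i in range(d) if x_hat[nb[i]] == -1]
        if not unknown:
            continue
        residual = val - sum(w[i] for i in range(d) if x_hat[nb[i]] == 1)
        matches = [bits for bits in itertools.product([0, 1], repeat=len(unknown))
                   if abs(sum(b * w[i] for b, i in zip(bits, unknown)) - residual) < 1e-9]
        if len(matches) == 1:    # unique solution: verify these symbols
            for b, i in zip(matches[0], unknown):
                x_hat[nb[i]] = b
```

Because the true assignment always matches its residual, any unique match is necessarily the true one, so the decoder never verifies a wrong symbol; whether every symbol eventually gets resolved depends on the degree and the number of measurements.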

Simulation results show that the proposed scheme achieves a significantly lower error rate than the ℓ_{1}-minimization approach [17] and the binary CS scheme in [21]. We further extend the proposed AFCS scheme to the noisy CS problem, where the measurements are received in the presence of additive white Gaussian noise (AWGN). We propose a message passing decoder to reconstruct the binary sparse signal and show that AFCS can recover the sparse signal with a very small number of measurements compared to the ℓ_{1}-minimization approach [17]. More specifically, the proposed AFCS scheme requires fewer measurements to completely recover the binary sparse signal over a wide range of SNRs, even with a lower measurement degree, compared to [17].

**References:**

[1] M. Akcakaya, J. Park, and V. Tarokh, “A coding theory approach to noisy compressive sensing using low density frames,” IEEE Trans. Signal Process., vol. 59, no. 11, pp. 5369–5379, Nov. 2011.

[2] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

[3] T. Strohmer, “Measure what should be measured: Progress and challenges in compressive sensing,” IEEE Signal Process. Lett., vol. 19, no. 12, pp. 887–893, 2012.

[4] E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.

[5] E. Candes and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.

[6] R. Baraniuk, “Compressive sensing [lecture notes],” IEEE Signal Process. Mag., vol. 24, no. 4, pp. 118–121, 2007.

[7] E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, 2008.

[8] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based compressive sensing,” IEEE Trans. Inf. Theory, vol. 56, no. 4, pp. 1982–2001, 2010.

[9] A. Cohen, W. Dahmen, and R. DeVore, “Compressed sensing and best k-term approximation,” Journal of the American Mathematical Society, vol. 22, no. 1, pp. 211–231, 2009.

[10] Y. C. Eldar and M. Mishali, “Robust recovery of signals from a structured union of subspaces,” IEEE Trans. Inf. Theory, vol. 55, no. 11, pp. 5302–5316, 2009.

[11] A. Ganesh, Z. Zhou, and Y. Ma, “Separation of a subspace-sparse signal: Algorithms and conditions,” in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Process. (ICASSP), 2009, pp. 3141–3144.

[12] M. Stojnic, F. Parvaresh, and B. Hassibi, “On the reconstruction of block-sparse signals with an optimal number of measurements,” IEEE Trans. Signal Process., vol. 57, no. 8, pp. 3075–3085, 2009.

[13] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,” Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[14] A. C. Zelinski, L. L. Wald, K. Setsompop, V. K. Goyal, and E. Adalsteinsson, “Sparsity-enforced slice-selective MRI RF excitation pulse design,” IEEE Trans. Med. Imag., vol. 27, no. 9, pp. 1213–1229, 2008.

[15] O. Mangasarian and B. Recht, “Probability of unique integer solution to a system of linear equations,” European Journal of Operational Research, vol. 214, no. 1, pp. 27–30, 2011.

[16] W. Yan, Q. Wang, and Y. Shen, “Compressive sensing based sparse event detection in wireless sensor networks,” in Proc. 6th International ICST Conf. on Commun. and Networking in China (CHINACOM), 2011, pp. 964–969.

[17] M. Stojnic, “Recovery thresholds for ℓ1 optimization in binary compressed sensing,” in Proc. IEEE Intl. Symp. Inf. Theory (ISIT), Jun. 2010, pp. 1593–1597.

[18] F. Zhang and H. Pfister, “Verification decoding of high-rate LDPC codes with applications in compressed sensing,” IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5042–5058, Aug. 2012.

[19] W. Dai and O. Milenkovic, “Weighted superimposed codes and constrained integer compressed sensing,” IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2215–2229, May 2009.

[20] D. Donoho, A. Maleki, and A. Montanari, “Message passing algorithms for compressed sensing: I. Motivation and construction,” in Proc. IEEE Inf. Theory Workshop (ITW), Jan. 2010, pp. 1–5.

[21] U. Nakarami and N. Rahnavard, “BCS: Compressive sensing for binary sparse signals,” in Proc. IEEE Military Communications Conference (MILCOM), Nov. 2012.

[22] M. Shirvanimoghaddam, Y. Li, and B. Vucetic, “Near-capacity adaptive analog fountain codes for wireless channels,” IEEE Commun. Lett., vol. 17, no. 12, pp. 2241–2244, Dec. 2013.