Massive Multiple Access using Superposition Raptor Codes for M2M Communications


Machine-to-machine (M2M) wireless systems aim to provide ubiquitous connectivity between machine type communication (MTC) devices without any human intervention. Given the exponential growth of MTC traffic, it is of utmost importance to ensure that future wireless standards are capable of handling this traffic. In this work, we focus on the design of a very efficient massive access strategy for highly dense cellular networks with M2M communications. Several MTC devices are allowed to transmit simultaneously in the same resource block by incorporating Raptor codes and superposition modulation. This significantly reduces the access delay and improves the achievable system throughput. A simple yet efficient random access strategy is proposed to detect only the selected preambles and the number of devices which have chosen them. No device identification is needed in the random access phase, which significantly reduces the signalling overhead. The proposed scheme is analyzed, and the maximum number of MTC devices that can be supported in a resource block is characterized as a function of the message length, the number of available resources, and the number of preambles. Simulation results show that the proposed scheme can effectively support a massive number of M2M devices with a limited number of available resources when the message size is small.

In this work, we propose a novel RA strategy for M2M communications which provides major improvements in terms of access delay and QoS by shifting away from conventional identification/authentication-based RA strategies. The proposed strategy aims to 1) minimize the access delay by enabling the collided devices to transmit on the same data channel, consisting of several RBs, 2) minimize the signalling overhead by signalling once for each group of devices which have selected the same preamble, and 3) minimize resource wastage through efficient usage of the available resources.

The proposed scheme consists of two phases, the RA phase and the data transmission phase. The devices do not need to be identified by the BS in the RA phase; instead, the device ID is sent along with its message in the data transmission phase and is later decoded by the BS. In the proposed scheme, collided devices transmit on the same data channel by using the same Raptor code. More specifically, a single degree distribution is used for the Raptor codes in all the devices, which significantly simplifies the system design as the code does not depend on the number of devices or the network condition. The BS needs to know only the number of active devices in a data channel to perform the decoding and device identification. In fact, the received signal at the BS can be viewed as a superposition of the coded symbols sent by the devices, which is shown to be capacity approaching when an appropriate successive interference cancellation (SIC) receiver is used for decoding. This is particularly suitable for M2M communications with strict power limitations, especially when the data size is very small and the number of devices is very large; thus, low rate Raptor codes in the low SNR regime can be effectively used for data transmission. The maximum number of M2M devices which can transmit in the same resource block is then characterized as a function of the data size and the available bandwidth. The proposed scheme shows excellent performance in highly dense M2M networks, which makes it a strong candidate for future wireless technologies.
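To make the superposition idea more concrete, the following Python sketch (an illustration only, not the code construction, degree distribution, or SIC receiver used in the paper) lets a few collided devices encode short messages with the same LT-style degree distribution and shows the BS observing only the noisy superposition of their BPSK streams; the degree distribution, message length, code length, and number of devices are assumed values chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical output degree distribution shared by all devices; the actual Raptor
# degree distribution used in the paper is not reproduced here.
DEGREES = np.array([1, 2, 3, 4])
PROBS = np.array([0.1, 0.5, 0.3, 0.1])

def lt_encode(bits, n_coded):
    """LT-style encoding: each coded bit is the XOR of a few randomly chosen message bits."""
    k = len(bits)
    coded = np.empty(n_coded, dtype=int)
    for t in range(n_coded):
        d = int(rng.choice(DEGREES, p=PROBS))
        idx = rng.choice(k, size=min(d, k), replace=False)
        coded[t] = np.bitwise_xor.reduce(bits[idx])
    return coded

# A few collided devices transmit simultaneously on the same data channel using
# the same code; message length, code length, and device count are illustrative.
n_devices, msg_len, n_coded = 3, 64, 256
messages = [rng.integers(0, 2, msg_len) for _ in range(n_devices)]
streams = [1 - 2 * lt_encode(msg, n_coded) for msg in messages]   # BPSK: 0 -> +1, 1 -> -1

# The BS observes only the noisy superposition of the coded streams; to run a
# SIC-style decoder it needs the number of superimposed devices, not their identities.
noise_std = 0.3
rx = np.sum(streams, axis=0) + noise_std * rng.standard_normal(n_coded)
print("first received samples:", np.round(rx[:8], 2))
```

The point of the sketch is that the signal seen by the BS depends only on how many coded streams are superimposed, which is why knowing the number of active devices, rather than their identities, is sufficient to start decoding.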

For more details, please refer to the arXiv version of the paper.


Sparse Event Detection in Wireless Sensor Networks


This work was presented at the IEEE Global Communications Conference (GLOBECOM 2014) and published in the IEEE Transactions on Signal Processing. Please refer to the following links to access the papers:

Conference paper: IEEE Xplore

Journal paper: IEEE Xplore | arXiv

Summary: In this work, we focus on the sparse event detection (SED) problem in wireless sensor networks (WSNs), where a set of sensor nodes is placed in the field of interest to capture sparse active event sources. The SED problem in WSNs is represented from a coding theory perspective by using capacity approaching analog fountain codes (AFCs) and is then solved with a standard belief propagation (BP) decoding algorithm. We show that the sensing process in WSNs produces an equivalent analog fountain code at the sink node, with code parameters determined by the sensing capability of the sensor nodes and the channel gains. We analyze the probability of false detection of the proposed approach and show it to be negligible. Simulation results show that the proposed approach, namely sparse event detection with AFC (SED-AFC), achieves a significantly higher probability of correct detection (PCD) compared to existing schemes at various SNR values, with a negligible probability of false detection (PFD). Moreover, we show that the minimum sampling ratio at high SNRs for the proposed scheme reaches the lower bound, which is mainly determined by the sensing coverage of the sensors and the field dimensions.

 

Motivations: The Compressive Sensing (CS) theory has been widely used in wireless sensor networks (WSNs) and has shown great performance improvements in terms of network lifetime, energy efficiency and overall system throughput. CS mechanisms are beneficial to WSNs as sensor nodes usually have constrained sensing, limited memory and processing capability, as well as small communication bandwidth. A special scenario is sparse event detection (SED) in WSNs, where a set of monitoring sensors try to capture a small number of active events randomly distributed in the sensing field.

Our solution to the SED problem was inspired by our recent work on capacity approaching analog fountain codes, where modulated symbols are directly generated from a linear combination of information symbols with real weight coefficients. The sensing process in WSNs can then be represented as the encoding process of AFCs, where the real weight coefficients are the channel gains, and the code degree is mainly determined by the sensing capability of the sensors and the numbers of event sources and sensor nodes. Therefore, the standard belief propagation (BP) decoder can be effectively used for the SED problem by treating sensor readings and event sources as check nodes and binary variable nodes, respectively. More specifically, we characterize the code degree, defined as the number of event sources inside the sensing coverage of a sensor node, and the variable node degree, defined as the number of sensors covering an event source, for the two scenarios of random and uniform deployment of sensor nodes.

Problem definition:

Fig. 1. m active sensors capturing k out of n active events randomly distributed within the field S.

We consider a total of n event sources, E1, E2, …, En, randomly placed in a sensing field of area S, as depicted in Fig. 1. Each event source Ei randomly generates an event ei to be measured by the sensor nodes and further detected by a sink node. An event is modeled as a binary signal, which takes the value 1 (or a known constant) with a certain probability determined by the physical properties of the sensing field, and 0 otherwise. We refer to the events with value 1 as active events; the number of simultaneously active events, denoted by k, is much smaller than n, i.e., k ≪ n.

The event sources are captured by m active monitoring sensors, which can be deployed in the sensing field in many ways, of which we consider two general cases. In the first case, namely random deployment, sensors are randomly placed in the sensing field. In this approach, some event sources may not be covered by any sensor; those events cannot be detected. In the second case, namely uniform deployment, sensors are placed uniformly to fully cover the sensing field. The sensing range of each sensor is Rs, which means that a sensor can receive the signal from event sources within a distance of at most Rs. Let h_{i,j} denote the channel gain between the ith event source and the jth sensor node; h_{i,j} is modeled by path loss and Rayleigh fading.

The received signal y_j at the jth sensor is then given by:
y_j = ∑_{i ∈ S_j} h_{i,j} e_i + z_j,
where S_j is the set of event sources inside the sensing range of the jth sensor and z_j is the thermal measurement noise at the jth sensor. The received signal vector Y can then be written as follows:
Y = He + z,
where H = [h_{i,j}], z = [z_j], and e = [e_i].

The sink node is assumed to have full knowledge of the channel matrix H, and all sensor readings are sent to the sink node at SNR γ. More specifically, the received signal at the sink node can be written as follows:

x=He+w,

where w is the m×1 additive white Gaussian noise (AWGN) vector with zero mean and variance σ².

Our goal is then to find the active events from the received signals of all sensors, given the channel matrix H. As the sensing range of each sensor is limited, the number of active events inside the sensing range of each sensor, or simply the number of nonzero elements in each row of H, is small. As a result, we can treat the SED problem as a binary compressive sensing problem, where event sources are mapped to sparse binary signal elements and sensor readings are mapped to measurements with measurement matrix H.
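As a concrete illustration of this measurement model, the sketch below simulates a random deployment and forms the noisy measurements x = He + w; the field size, numbers of sensors and event sources, sensing radius, path-loss exponent, and noise level are illustrative assumptions, not the simulation settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the simulation settings of the paper)
n, m, k = 256, 100, 10                 # event sources, sensors, active events
side, Rs, alpha = 500.0, 100.0, 3.0    # field side length (m), sensing radius (m), path-loss exponent
sigma = 0.1                            # noise standard deviation at the sink

# Random deployment of event sources and sensors in the square field
events = rng.uniform(0, side, size=(n, 2))
sensors = rng.uniform(0, side, size=(m, 2))

# Channel matrix: nonzero only inside the sensing radius, with path loss and Rayleigh
# fading (distances clipped to 1 m to avoid numerical blow-up at very short range)
dist = np.linalg.norm(sensors[:, None, :] - events[None, :, :], axis=2)   # m x n distances
fading = rng.rayleigh(scale=1.0, size=(m, n))
H = np.where(dist <= Rs, fading * np.maximum(dist, 1.0) ** (-alpha / 2), 0.0)

# Sparse binary event vector: k active events out of n
e = np.zeros(n)
e[rng.choice(n, size=k, replace=False)] = 1.0

# Noisy measurements collected at the sink: x = H e + w
x = H @ e + sigma * rng.standard_normal(m)

# Degrees of the equivalent bipartite graph
print("average sensor (check node) degree:", (H != 0).sum(axis=1).mean())
print("event sources not covered by any sensor:", int(((H != 0).sum(axis=0) == 0).sum()))
```

The last two lines report the sensor (check node) degrees and the number of uncovered event sources, the two quantities that drive the analysis of the random deployment case.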

Main Results:

1. The following lemma gives the probability distribution of the sensor degree in the random deployment scenario and a lower bound on the minimum number of sensors needed for a minimum event degree of at least 1.

[Lemma 1]

2. The following lemma gives an upper bound on the probability of false detection (PFD), defined as the probability that an inactive event is falsely detected as active.

[Lemma 2]

3. The following figure shows the minimum sampling ratio required to successfully detect all active events versus the signal sparsity order s and the sensing radius Rs for the uniform deployment scenario. As can be seen, as s increases, a larger number of sensors has to be deployed in the sensing field. This is because a larger number of active event sources are connected to each sensor node; thus, the proposed sum verification decoder with T = 2 requires a larger number of measurements for sparse event detection. Moreover, when Rs increases, fewer sensor nodes need to be deployed in the sensing field to successfully recover all active events. However, as the signal sparsity order increases, fewer sensor nodes are required when the sensing radius is small than when it is large.

[Figure: minimum sampling ratio versus the signal sparsity order s and the sensing radius Rs (uniform deployment)]

4. The following figure shows PCD versus the sampling ratio at different SNR values, when n = 1000 event sources are randomly distributed in an area of 1000 m by 1000 m, k = 10, and α = 3. As can be seen in this figure, the proposed AFCS scheme outperforms both BCS and SCS in terms of PCD at different SNRs. Moreover, the number of sensor nodes required by SCS is relatively large (500), which significantly increases the total number of transmissions, leading to poor energy efficiency. It is important to note that the existing approaches need to know the number of active events to achieve relatively good PCD performance; the proposed AFCS approach, however, does not need to know the sparsity level of the events and can work for various numbers of active events. Moreover, the achievable PCD for the AFCS scheme with random deployment is very close to that of the SCS approach, so we removed the respective curve from the figure for a clearer presentation.

[Figure: PCD versus the sampling ratio at different SNR values]

5. The following figure shows the minimum sampling ratio versus SNR required to achieve a PCD of at least 99%, when α = 3 and n = 256 event sources are randomly distributed within a 500 m by 500 m area. As can be seen, at low SNRs the proposed approach needs a large number of sensor nodes to detect the active events, and a WSN whose sensors have a larger coverage radius is more robust to noise in terms of the required sampling ratio. Note that in the random deployment scenario, more sensor nodes are required, as some event sources may not be covered by any sensor node, leading to a lower percentage of correct detection.

[Figure: minimum sampling ratio versus SNR to achieve PCD ≥ 99%]

References:

  1. M. Shirvanimoghaddam, Y. Li, and B. Vucetic, "Sparse event detection in wireless sensor networks using analog fountain codes," in Proc. IEEE Global Communications Conference (GLOBECOM), Dec. 2014, pp. 3520–3525. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037353&isnumber=7036769
  2. M. Shirvanimoghaddam, Y. Li, B. Vucetic, J. Yuan, and P. Zhang, "Binary compressive sensing via analog fountain coding," IEEE Transactions on Signal Processing, vol. 63, no. 24, pp. 6540–6552, Dec. 2015. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219476&isnumber=7323913


Binary Compressive Sensing via Analog Fountain Coding


Our work on binary compressive sensing using analog fountain codes has recently been published in the IEEE Transactions on Signal Processing. This post provides a brief introduction to the work. The full paper is available on IEEE Xplore and arXiv.

Compressive sensing (CS) has drawn a lot of interest in recent years as a revolutionary signal sampling paradigm. The main idea behind CS is the possibility of reconstructing a sparse signal from an under-determined system of linear equations through a computationally efficient convex programming algorithm. Let us consider the reconstruction of a sparse signal x ∈ Rn from the linear measurements y ∈ Rm, where m < n, y = Gx, and G ∈ Rm×n is the measurement matrix, modeling the measurement (sensing) process. We assume that the signal x is k-sparse, which means that exactly k ≪ n elements of x are nonzero [1], but we do not know the locations of the nonzero entries of x. Due to the sparsity of x, one could try to compute x by solving the following optimization problem:

min_x ||x||_0  s.t.  Gx = y.   (1)

However, solving (1) is an NP-hard problem and thus practically not feasible [2]. Instead, a convex relaxation problem can be formulated as follows, which can be solved efficiently via linear or quadratic programming techniques.

min_x ||x||_1  s.t.  Gx = y.   (2)
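For illustration, the sketch below solves the relaxation (2) as a linear program using the standard split x = u − v with u, v ≥ 0; the Gaussian measurement matrix and the problem sizes are assumptions chosen only for this example. The box-constrained variant for binary CS discussed below would correspond to replacing bounds=(0, None) with bounds=(0, 1).

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(G, y):
    """Solve min ||x||_1 s.t. Gx = y as a linear program via the split x = u - v, u, v >= 0."""
    m, n = G.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([G, -G])          # equality constraint: G(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Small noiseless example with a k-sparse binary vector (sizes chosen for illustration)
rng = np.random.default_rng(2)
n, m, k = 100, 40, 5
G = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
y = G @ x_true

x_hat = l1_min(G, y)
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```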

As shown in [2–5], under certain conditions on the matrix G and the sparsity of x, both (1) and (2) have the same unique solution. The Restricted Isometry Property (RIP) and the coherence of matrix G are to date the most widely used tools to derive such conditions. A comprehensive introduction to compressive sensing can be found in the excellent review papers [6, 7]. Unlike the conventional CS theory, which considers only the sparsity of the signal, a more realistic signal model and CS approach, referred to as model-based CS, was introduced in [8] by considering some structure among the signal coefficients. Model-based CS and its variations have been widely studied in the literature [8–14]. Due to the known structure of the sparse signal, signal models provide two major advantages to CS. First, the number of measurements required for the reconstruction of the sparse signal is significantly reduced. Second, a more accurate and robust recovery is achieved as we can better differentiate true signal information from recovery artifacts [8].

In this work, we are particularly interested in the reconstruction of sparse binary signals from linear measurements. Binary CS is a special case of model-based CS with many applications, such as event detection in wireless sensor networks (WSNs), group testing, and spectrum sensing for cognitive radios. More practical settings and similar discrete compressive sensing scenarios can be found in [15–21]. More specifically, in binary CS, the nonzero elements of the sparse signal all have the value 1, and the objective is to minimize the number of real-valued measurements linearly generated from the binary sparse signal. An ℓ1-minimization solution for noiseless binary CS was proposed in [17], where (2) is solved with the reconstructed signal restricted to the range [0,1]. Using integer programming, [15] proposed a model to solve a system of linear equations with integer variables. In [21], a simple verification-based decoder was proposed for binary CS problems, where the measurement matrix is sparse and its elements are randomly drawn from a predefined probability distribution. However, the number of measurements required by these approaches is still very large, as they do not exploit the binary structure of the signal in the design of the CS approach. In this work, we propose a binary CS approach based on analog fountain codes (AFCs) [22]. AFCs were originally designed to approach the capacity of the Gaussian channel across a wide range of signal-to-noise ratios (SNRs).

Each AFC coded symbol is a linear combination of randomly selected information symbols with real weight coefficients. With a proper design of the weight coefficients, each binary linear equation associated with an AFC coded symbol has a unique solution. As shown in [22], the AFC decoding problem at high SNRs is equivalent to solving a system of binary linear equations, which is the same as reconstructing a binary sparse signal from noiseless linear measurements. This interesting property of AFCs motivates us to incorporate them in the design of an efficient binary CS approach that minimizes the number of measurements required for successful recovery of the sparse signal. In the proposed analog fountain compressive sensing (AFCS) approach, each measurement is generated from a number of randomly selected binary information symbols with weight coefficients selected from a finite weight set. Thanks to the random structure of AFCs, the number of generated measurements is potentially limitless, and the transmitter can keep transmitting measurements until the destination fully recovers the sparse binary signal. Due to the sparsity, and the fact that the weight coefficients do not introduce redundancy, each measurement is connected to a very small number of nonzero signal components in the respective weighted bipartite graph; thus, information symbols can be recovered iteratively by comparing the measurement value with the weight coefficients assigned to it. More specifically, we propose a reconstruction algorithm in which each measurement can determine its connected variable nodes if it is connected to at most T nonzero signal elements in the bipartite graph, where T ≥ 0 is a predefined value. To minimize the number of required measurements, we formulate an optimization problem to find the optimum measurement degree, defined as the number of information symbols connected to each measurement, or simply the row weight of the measurement matrix. The minimum number of required measurements is then approximated and shown to be of O(−n log(1−s)), where n is the signal length and s = k/n is the signal sparsity order.
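The sketch below illustrates the flavour of this construction together with a simplified peeling-style verification decoder for the noiseless case; the weight set, the measurement degree, the decoding rules, and all parameter values are illustrative assumptions and do not reproduce the optimized design or the exact sum verification decoder of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sizes: signal length, sparsity, number of measurements, measurement degree
n, k, m, d = 200, 10, 80, 8
weights = np.array([1.0, 2.0, 3.0, 4.0])   # assumed finite weight set, not the optimized set of the paper

# AFC-style measurement matrix: each row has d nonzero entries drawn from the weight set
G = np.zeros((m, n))
for j in range(m):
    idx = rng.choice(n, size=d, replace=False)
    G[j, idx] = rng.choice(weights, size=d)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0   # binary sparse signal
y = G @ x                                        # noiseless measurements

# Simplified peeling/verification decoder for the noiseless case
x_hat = np.full(n, -1.0)                         # -1 marks "not yet resolved"
for _ in range(50):                              # a fixed number of peeling sweeps
    for j in range(m):
        resolved = x_hat >= 0
        unresolved = np.where((G[j] != 0) & ~resolved)[0]
        if len(unresolved) == 0:
            continue
        residual = y[j] - G[j, resolved] @ x_hat[resolved]
        if np.isclose(residual, 0.0):                        # only all-zero neighbours give a zero residual
            x_hat[unresolved] = 0.0
        elif np.isclose(residual, G[j, unresolved].sum()):   # only all-one neighbours reach the maximum residual
            x_hat[unresolved] = 1.0
        elif len(unresolved) == 1:                           # a single unknown is read off directly
            x_hat[unresolved[0]] = round(residual / G[j, unresolved[0]])

errors = int((x_hat[x_hat >= 0] != x[x_hat >= 0]).sum())
print("unresolved entries:", int((x_hat < 0).sum()), " decoding errors:", errors)
```

In the noiseless case these simple rules resolve the signal iteratively; the decoder proposed in the paper generalizes the verification idea through the threshold T and handles noisy measurements with a message passing decoder.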

Simulation results show that the proposed scheme has a significantly lower error rate compared to the ℓ1-minimization approach [17] and the binary CS scheme in [21]. We further extend the proposed AFCS scheme to the noisy CS problem, where the measurements are received in the presence of additive white Gaussian noise (AWGN). We propose a message passing decoder to reconstruct the binary sparse signal and show that AFCS can recover the sparse signal with a very small number of measurements compared to the ℓ1-minimization approach [17]. More specifically, the proposed AFCS scheme requires fewer measurements to completely recover the binary sparse signal across a wide range of SNRs, even with a lower measurement degree, compared to the ℓ1-minimization approach [17].



Analog Fountain Codes for M2M communications

[Image: M2M communications]

Machine-to-machine (M2M) communications have emerged as a promising technology to enable trillions of multirole devices, namely machine-type communication (MTC) devices, to communicate with each other with little or no human intervention. It has many potential applications, such as intelligent transportation systems (ITS), healthcare monitoring, retail, banking, smart grids, home automation, and so on. It is expected that in the next few years over 2 billion MTC devices will become directly attached to cellular networks to provide M2M communications [3]. Thus, there will be a massive number of MTC devices with no/low mobility in each cell, significantly more than the number of users in current cellular networks. Moreover, M2M traffic involves a large number of short-lived sessions, each attempting to deliver a small amount of data (a few hundred bits) to the base station, which is quite different from those in human-to-human (H2H) communications.

In most existing wireless access networks, the first step in establishing an air interface connection is to perform random access (RA) in a contention-based manner. In fact, the short-lived sessions with small amounts of data in M2M communications make it inefficient to establish dedicated resources for data transmission. Thus, one of the critical challenges in M2M communications is to design an effective medium access scheme to support a large number of devices. In current RA approaches, when two or more devices select the same RA preamble in the first step of the contention phase, a collision occurs and the respective devices will not be scheduled for data transmission. Existing random access schemes still suffer from very high access delays in highly dense networks. Additionally, different MTC devices have diverse service requirements and traffic patterns in M2M communications. Conventional random access approaches are mostly inefficient for M2M communications as they are generally designed for a fixed payload size and thus cannot support M2M applications with different service requirements.

We consider a realistic model for M2M communications which supports both regular and random traffic with different delay and service requirements. We develop a practical transmission scheme for M2M communications based on the recently proposed analog fountain codes (AFCs) to enable a massive number of devices to communicate with a common base station (BS) while satisfying the QoS requirements of all devices. We further show that the proposed scheme can closely approach the fundamental limits of M2M communications in terms of throughput and can satisfy the delay requirements of all devices at the same time. The main advantages of our proposed scheme are summarized below:

1. Efficient Random Access Proposal

2. Efficient Maximum Throughput Transmission

3. QoS-aware Random Transmission

We have recently published a paper titled "Probabilistic Rateless Multiple Access for Machine-to-Machine Communication" in the IEEE Transactions on Wireless Communications, which explains the idea in more detail. Interested readers are referred to this paper, which can be found on IEEE Xplore and arXiv.

 

* The photo was taken from http://www.forbescustom.com/TelecomPgs/M2MComms.html. 

Rateless Codes: A Brief Overview

Rateless codes are a class of powerful error correction codes that were originally proposed for erasure channels. Unlike conventional fixed-rate codes, where the code rate is a predetermined value, in rateless codes a potentially limitless number of coded symbols is generated and sent to the destination. The destination sends back an acknowledgment to the sender upon receiving an adequate number of coded symbols and successfully decoding the whole message. In the past decade, rateless codes have received a lot of attention in both the communication and information theory research communities, which has led to a strong theory behind these codes, mostly for erasure channels.
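As a toy example of this principle, the sketch below implements an LT-style rateless encoder and the standard peeling decoder over a binary erasure channel: the sender keeps emitting coded symbols until the receiver has recovered the whole message, at which point it would send the acknowledgment. The degree distribution, message length, and erasure probability are illustrative assumptions, not a recommended design.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 32                                   # message length (illustrative)
msg = rng.integers(0, 2, k)

# Illustrative output degree distribution (a practical LT code would use a robust soliton distribution)
degrees, probs = np.array([1, 2, 3, 4]), np.array([0.2, 0.5, 0.2, 0.1])

def encode_symbol():
    """One rateless coded symbol: the XOR of a randomly chosen subset of message bits."""
    d = int(rng.choice(degrees, p=probs))
    idx = rng.choice(k, size=d, replace=False)
    return set(idx.tolist()), int(np.bitwise_xor.reduce(msg[idx]))

def peel(received, decoded):
    """Standard peeling decoder: whenever a symbol has only one unknown bit left, recover it."""
    progress = True
    while progress:
        progress = False
        for idx, val in received:
            unknown = [i for i in idx if i not in decoded]
            if len(unknown) == 1:
                known_xor = 0
                for j in idx:
                    if j in decoded:
                        known_xor ^= decoded[j]
                decoded[unknown[0]] = val ^ known_xor
                progress = True

# The sender keeps emitting coded symbols over an erasure channel until the
# receiver has decoded the whole message (at which point it would send an ACK).
erasure_prob, received, decoded, sent = 0.2, [], {}, 0
while len(decoded) < k:
    sent += 1
    if rng.random() > erasure_prob:       # symbol not erased by the channel
        received.append(encode_symbol())
        peel(received, decoded)

print(f"sent {sent} coded symbols to decode {k} message bits;",
      "correct:", all(decoded[i] == msg[i] for i in range(k)))
```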

This weblog aims to provide an up-to-date overview of research on rateless codes, their applications in wireless communications, and their potential for next generation wireless systems. We provide a brief overview of rateless code design strategies and guide the readers to more information through the provided references. We also provide regular updates on current research on rateless code design for wireless communications at the School of Electrical Engineering and Computer Science at the University of Newcastle.


The material or views expressed on this Blog are those of the author and do not represent those of the University.  Please report any offensive or improper use of this Blog to RPS@newcastle.edu.au.