Publications

AI‐Aided Channel Prediction

Today's wireless communication systems rely to a large extent on the quality of the channel state information (CSI) available at the transmitter and receiver. Channel aging, denoting the temporal and spatial evolution of wireless communication channels, is influenced by obstructions, interference, traffic load, and user mobility. Accurate CSI estimation and prediction empower the network to proactively counteract performance degradation resulting from channel dynamics, such as channel aging, by employing network management strategies such as power allocation. Prior studies have introduced approaches aimed at preserving high-quality CSI, such as temporal prediction schemes, particularly in scenarios involving high mobility and channel aging. Conventional model-based estimators and predictors have historically been considered state-of-the-art. Recently, the development of artificial intelligence (AI) has increased the interest in developing AI-based models. Previous works have shown the high potential of AI-aided channel estimation and prediction, suggesting that the state-of-the-art title may shift away from model-based methods. However, there are many aspects to consider in AI-aided channel estimation and prediction, including prediction quality, training complexity, and practical feasibility. To investigate these aspects, this chapter provides an overview of state-of-the-art neural networks applicable to channel estimation and prediction. The principal neural networks from the overview of channel prediction are empirically compared in terms of prediction quality. An innovative comparative analysis is conducted for five prospective neural networks over multiple prediction horizons. The widely acknowledged tapped delay line (TDL) channel model, as endorsed by the Third Generation Partnership Project (3GPP), is employed to ensure a standardized evaluation of the neural networks.
This comparative assessment enables a comprehensive examination of the merits and demerits inherent in each neural network. Subsequent to this analysis, insights are offered to provide guidelines for the selection of the most appropriate neural network in channel prediction applications.
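The temporal prediction task described above can be made concrete with a minimal model-based baseline of the kind that learned predictors are typically compared against. The sketch below is illustrative only: it uses an AR(1) approximation of a time-correlated fading tap (not the 3GPP TDL model used in the chapter) and fits a short linear predictor by least squares; all parameters (correlation coefficient, predictor order) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time-correlated fading tap: AR(1) approximation of a
# Doppler process; rho controls how fast the channel "ages".
rho, T = 0.98, 2000
h = np.empty(T, dtype=complex)
h[0] = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
for t in range(1, T):
    w = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    h[t] = rho * h[t - 1] + np.sqrt(1 - rho**2) * w

# Order-p linear predictor fitted by least squares:
# h_hat[t] = sum_k a_k * h[t-k].  This is the classical model-based
# baseline a neural predictor would need to outperform.
p = 4
X = np.column_stack([h[p - k - 1:T - k - 1] for k in range(p)])
y = h[p:]
a = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ a

# Normalized mean squared prediction error.
nmse = np.mean(np.abs(y - pred) ** 2) / np.mean(np.abs(y) ** 2)
```

For this highly correlated synthetic channel, the linear baseline already predicts one step ahead with a small normalized error; the chapter's comparison concerns how neural networks fare on richer, standardized channel models and longer horizons.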

Machine Learning for Spectrum Sharing: A Survey

The 5th generation (5G) of wireless systems is being deployed with the aim of providing many sets of wireless communication services, such as low data rates for a massive number of devices, broadband, low latency, and industrial wireless access. Such an aim is even more complex in the next generation of wireless systems (6G), where wireless connectivity is expected to serve any connected intelligent unit, such as software robots and humans interacting in the metaverse, autonomous vehicles, drones, trains, or smart sensors monitoring cities, buildings, and the environment. Because wireless devices will be orders of magnitude denser than in 5G cellular systems, and because of their complex quality of service requirements, access to the wireless spectrum will have to be appropriately shared to avoid congestion, poor quality of service, or unsatisfactory communication delays. Spectrum sharing methods have been the object of intense study through model-based approaches, such as optimization or game theory. However, these methods may fail when facing the complexity of the communication environments in 5G, 6G, and beyond. Recently, there has been significant interest in the application and development of data-driven methods, namely machine learning methods, to handle the complex operation of spectrum sharing. In this survey, we provide a complete overview of the state-of-the-art of machine learning for spectrum sharing. First, we map the most prominent methods that we encounter in spectrum sharing. Then, we show how these machine learning methods are applied to the numerous dimensions and sub-problems of spectrum sharing, such as spectrum sensing, spectrum allocation, spectrum access, and spectrum handoff. We also highlight several open questions and future trends.

Blind federated learning via over-the-air q-QAM

In this work, we investigate federated edge learning over a fading multiple access channel. To alleviate the communication burden between the edge devices and the access point, we introduce a pioneering digital over-the-air computation strategy employing q-ary quadrature amplitude modulation, culminating in a low-latency communication scheme. Indeed, we propose a new federated edge learning framework in which edge devices use digital modulation for over-the-air uplink transmission to the edge server while they have no access to the channel state information. Furthermore, we incorporate multiple antennas at the edge server to overcome the fading inherent in wireless communication. We analyze the number of antennas required to mitigate the fading impact effectively. We prove a non-asymptotic upper bound on the mean squared error for the proposed federated learning with digital over-the-air uplink transmissions under both noisy and fading conditions. Leveraging the derived upper bound, we characterize the convergence rate of the learning process of a non-convex loss function in terms of the mean square error of gradients due to the fading channel. Furthermore, we substantiate the theoretical assurances through numerical experiments concerning mean square error and the convergence efficacy of the digital federated edge learning framework. Notably, the results demonstrate that augmenting the number of antennas at the edge server and adopting higher-order modulations improve the model accuracy by up to 60%.
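The role of the antenna array in suppressing fading can be sketched in a few lines. The toy below is not the paper's q-QAM scheme or its analysis: it assumes a mean-one real fading model and illustrative 4-PAM symbols, simply to show that averaging the superposed signal over many antennas recovers the sum of transmitted values without any CSI at the devices.

```python
import numpy as np

rng = np.random.default_rng(1)

K, N = 20, 64          # edge devices, server antennas (illustrative)
# Each device maps its local (quantized) update to a 4-PAM symbol;
# the levels here are an assumption, not the paper's exact mapping.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
s = rng.choice(levels, size=K)

# Mean-one real fading per device/antenna pair: a stand-in for the
# blind, CSI-free setting where no device knows its own channel.
H = 1.0 + 0.3 * rng.standard_normal((N, K))
noise = 0.1 * rng.standard_normal(N)

y = H @ s + noise      # simultaneous (over-the-air) superposition
est = y.mean()         # antenna averaging suppresses the fading spread
err = abs(est - s.sum())
```

Increasing N shrinks the residual error, which is the qualitative effect behind the paper's analysis of how many antennas are needed to mitigate fading.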

Over-the-Air Histogram Estimation

Communication and computation are traditionally treated as separate entities, allowing for individual optimizations. However, many applications focus on the functionality of local information rather than the information itself. For such cases, harnessing interference for computation in a multiple access channel through digital over-the-air computation can notably increase the computation efficiency, as established by the ChannelComp method. However, the coding scheme originally proposed in ChannelComp may suffer from high computational complexity because it is general and is not optimized for specific modulation categories. Therefore, this study considers a specific category of digital modulations for over-the-air computation, namely quadrature amplitude modulation (QAM) and pulse-amplitude modulation (PAM), for which we introduce a novel coding scheme called SumComp. Furthermore, we derive a mean squared error (MSE) analysis for SumComp coding in the computation of the arithmetic mean function and establish an upper bound on the mean absolute error (MAE) for a set of nomographic functions. Simulation results affirm the superior performance of SumComp coding compared to traditional analog over-the-air computation and the original ChannelComp coding approach in terms of both MSE and MAE over a noisy multiple access channel. Specifically, SumComp coding shows at least 10 dB improvement on the normalized MSE for computing the arithmetic and geometric mean in low-noise scenarios.
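The core idea of computing an arithmetic mean over the air with amplitude modulation can be sketched as follows. This is a deliberately simplified illustration, not the SumComp scheme itself: it assumes integer-valued readings mapped to uniformly spaced PAM amplitudes, so the channel's additive superposition directly yields the sum, and dividing by the number of nodes gives the mean.

```python
import numpy as np

rng = np.random.default_rng(2)

K = 8                                  # transmitting nodes
q = 16                                 # q-PAM alphabet size (assumed)
x = rng.integers(0, q, size=K)         # integer-valued local readings

# Illustrative mapping: value m -> PAM amplitude m (uniform spacing),
# so the multiple access channel's superposition computes the sum.
tx = x.astype(float)
y = tx.sum() + 0.05 * rng.standard_normal()   # noisy MAC output

mean_hat = y / K                       # receiver post-scaling
mean_true = x.mean()
```

With such a linear mapping the only estimation error is receiver noise scaled by 1/K; the paper's contribution lies in the coding and the MSE/MAE analysis for this family of modulations.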

FedCau: A proactive stop policy for communication and computation efficient federated learning

This paper investigates efficient distributed training of a Federated Learning (FL) model over a network of wireless devices. The communication iterations of the distributed training algorithm may be substantially deteriorated or even blocked by the effects of the devices’ background traffic, packet losses, congestion, or latency. We abstract the communication-computation impacts as an ‘iteration cost’ and propose a cost-aware causal FL algorithm (FedCau) to tackle this problem. We propose an iteration-termination method that trades off the training performance and networking costs. We apply our approach when workers use the slotted-ALOHA, carrier-sense multiple access with collision avoidance (CSMA/CA), and orthogonal frequency-division multiple access (OFDMA) protocols. We show that, given a total cost budget, the training performance degrades as either the background communication traffic or the dimension of the training problem increases. Our results demonstrate the importance of proactively designing optimal cost-efficient stopping criteria to avoid unnecessary communication-computation costs for a marginal FL training improvement. We validate our method by training and testing FL over the MNIST and CIFAR-10 datasets. Finally, we apply our approach to existing communication-efficient FL methods from the literature, achieving further efficiency. We conclude that cost-efficient stopping criteria are essential for the success of practical FL over wireless networks.
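A cost-aware stopping rule of the kind described above can be sketched in a few lines. The rule, loss curve, and cost model below are synthetic illustrations (not the paper's FedCau policy): training continues while the marginal loss improvement per unit of communication cost exceeds a threshold.

```python
def stop_iteration(losses, cost_per_iter, threshold):
    """Return the first iteration at which the per-cost loss improvement
    drops below the threshold (toy cost-aware termination rule)."""
    for t in range(1, len(losses)):
        gain = losses[t - 1] - losses[t]
        if gain / cost_per_iter < threshold:
            return t
    return len(losses)

# Geometrically decaying loss: early rounds are worth their cost,
# late rounds buy only marginal improvement.
losses = [1.0 * 0.8**t for t in range(50)]
t_stop = stop_iteration(losses, cost_per_iter=1.0, threshold=0.01)
```

Here training halts at iteration 15 of 50, capturing the intuition that the remaining 35 rounds would spend communication budget on negligible accuracy gains.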

Off-the-grid blind deconvolution and demixing

We consider the problem of gridless blind deconvolution and demixing (GB2D) in scenarios where multiple users communicate messages through multiple unknown channels, and a single base station (BS) collects their contributions. This scenario arises in various communication fields, including wireless communications, the Internet of Things, over-the-air computation, and integrated sensing and communications. In this setup, each user’s message is convolved with a multi-path channel formed by several scaled and delayed copies of Dirac spikes. The BS receives a linear combination of the convolved signals, and the goal is to recover the unknown amplitudes, continuous-indexed delays, and transmitted waveforms from a compressed vector of measurements at the BS. However, without prior knowledge of the transmitted messages and channels, GB2D is highly challenging and intractable in general. To address this issue, we assume that each user’s message follows a distinct modulation scheme living in a known low-dimensional subspace. By exploiting these subspace assumptions and the sparsity of the multipath channels for different users, we transform the nonlinear GB2D problem into a matrix tuple recovery problem from a few linear measurements. To achieve this, we propose a semidefinite programming optimization that exploits the specific low-dimensional structure of the matrix tuple to recover the messages and continuous delays of different communication paths from a single received signal at the BS. Finally, our numerical experiments show that our proposed method effectively recovers all transmitted messages and the continuous delay parameters of the channels with sufficient samples.

ChannelComp: A general method for computation by communications

Over-the-air computation (AirComp) is a well-known technique by which several wireless devices transmit by analog amplitude modulation to achieve a sum of their transmit signals at a common receiver. The underlying physical principle is the superposition property of the radio waves. Since such superposition is analog and in amplitude, it is natural that AirComp uses analog amplitude modulations. Unfortunately, this is impractical because most wireless devices today use digital modulations. It would be highly desirable to use digital communications because of their numerous benefits, such as error correction, synchronization, acquisition of channel state information, and widespread use. However, when we use digital modulations for AirComp, a general belief is that the superposition property of the radio waves returns a meaningless overlapping of the digital signals. In this paper, we break through such beliefs and propose an entirely new digital channel computing method named ChannelComp, which can use digital as well as analog modulations. We propose a feasibility optimization problem that ascertains the optimal modulation for computing arbitrary functions over-the-air. Additionally, we propose pre-coders to adapt existing digital modulation schemes for computing the function over the multiple access channel. The simulation results verify the superior performance of ChannelComp compared to AirComp, particularly for the product functions, with more than 10 dB improvement of the computation error.
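The feasibility condition at the heart of this approach, that the noiseless superposition must distinguish any two input tuples whose function values differ, can be checked exhaustively for small alphabets. The checker below is an illustrative reimplementation of that condition (not the paper's optimization): with standard 4-PAM and two devices, summation is computable over the air, while the product is not, because tuples with equal sums but different products collide.

```python
import itertools

def computable(constellation, f, K, tol=1e-9):
    """Check that the superposed constellation separates any two input
    tuples with different function values (ChannelComp-style test)."""
    points = {}
    for combo in itertools.product(range(len(constellation)), repeat=K):
        y = sum(constellation[m] for m in combo)   # overlapped signal
        val = f(combo)
        # Round so tuples that sum to the same point collide in the dict.
        key = round(y.real, 6) + 1j * round(y.imag, 6)
        if key in points and abs(points[key] - val) > tol:
            return False
        points[key] = val
    return True

pam4 = [complex(a) for a in (-3, -1, 1, 3)]           # standard 4-PAM
K = 2
sum_ok = computable(pam4, lambda c: sum(c), K)        # sum: computable
prod_ok = computable(pam4, lambda c: c[0] * c[1], K)  # naive product: not
```

For example, message tuples (0, 3) and (1, 2) produce the same received point under 4-PAM but products 0 and 2, which is exactly the kind of infeasibility the paper's modulation design resolves.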

Blind asynchronous goal-oriented detection for massive connectivity

Resource allocation and multiple access schemes are instrumental for the success of communication networks, which facilitate seamless wireless connectivity among a growing population of uncoordinated and non-synchronized users. In this paper, we present a novel random access scheme that addresses one of the most severe barriers of current strategies to achieve massive connectivity and ultra-reliable and low-latency communications for 6G. The proposed scheme utilizes wireless channels’ angular continuous group-sparsity feature to provide low latency, high reliability, and massive access features in the face of limited time-bandwidth resources, asynchronous transmissions, and preamble errors. Specifically, a reconstruction-free goal-oriented optimization problem is proposed which preserves the angular information of active devices and is then complemented by a clustering algorithm to assign active users to specific groups. This makes it possible to identify active stationary devices according to their line-of-sight angles. Additionally, for mobile devices, an alternating minimization algorithm is proposed to recover their preamble, data, and channel gains simultaneously, enabling the identification of active mobile users. Simulation results show that the proposed algorithm provides excellent performance and supports a massive number of devices. Moreover, the performance of the proposed scheme is independent of the total number of devices, distinguishing it from other random access schemes. The proposed method provides a unified solution to meet the requirements of machine-type communications and ultra-reliable and low-latency communications, making it an important contribution to the emerging 6G networks.

Hierarchical online game-theoretic framework for real-time energy trading in smart grid

In this paper, the real-time energy trading problem between the energy provider and the consumers in a smart grid system is studied. The problem is formulated as a hierarchical game, where the energy provider acts as a leader who determines the pricing strategy that maximizes its profits, while the consumers act as followers who react by adjusting their energy demand to save their energy costs and enhance their energy consumption utility. In particular, the energy provider employs a pricing strategy that depends on the aggregated amount of energy requested by the consumers, which suits a commodity-limited market. With this price setting, the consumers’ energy demand response strategies are designed under a non-cooperative game framework, where a unique generalized Nash equilibrium point is shown to exist. As an extension, the consumers are assumed to be unaware of their future energy consumption behaviors due to uncertain personal needs. To address this issue, an online distributed energy trading framework is proposed, where the energy provider and the consumers can design their strategies based only on the historical knowledge of consumers’ energy consumption behavior at each bidding stage. In addition, the proposed framework can be implemented in a distributed manner such that the consumers can design their demand responses by only exchanging information with their neighboring consumers, which requires far fewer communication resources and is thus more suitable for the practical operation of the grid. As a theoretical guarantee, the proposed framework is further proved to asymptotically achieve the same performance as the offline solution for both the energy provider’s and consumers’ optimization problems. The performance of practical designs of the proposed online distributed energy trading framework is finally illustrated in numerical experiments.

Computing functions over-the-air using digital modulations

Over-the-air computation (AirComp) is a known technique in which wireless devices transmit values by analog amplitude modulation so that a function of these values is computed over the communication channel at a common receiver. The physical reason is the superposition property of the electromagnetic waves, which naturally returns sums of analog values. Consequently, the applications of AirComp are almost entirely restricted to analog communication systems. However, the use of digital communications for over-the-air computations would have several benefits, such as error correction, synchronization, acquisition of channel state information, and easier adoption by current digital communication systems. Nevertheless, a common belief is that digital modulations are generally infeasible for computation tasks because the overlapping of digitally modulated signals returns signals that seem to be meaningless for these tasks. This paper breaks through such a belief and proposes a fundamentally new computing method, named ChannelComp, for performing over-the-air computations by any digital modulation. In particular, we propose digital modulation formats that allow us to compute a wider class of functions than AirComp can compute, and we propose a feasibility optimization problem that ascertains the optimal digital modulation for computing functions over-the-air. The simulation results verify the superior performance of ChannelComp in comparison to AirComp, particularly for the product functions, with around 10 dB improvement of the computation error.

A General Framework to Distribute Iterative Algorithms With Localized Information Over Networks

Emerging applications in the Internet of Things (IoT) and edge computing/learning have sparked massive renewed interest in developing distributed versions of existing (centralized) iterative algorithms often used for optimization or machine learning purposes. While existing work in the literature exhibits similarities, for the tasks of both algorithm design and theoretical analysis, there is still no unified method or framework for accomplishing these tasks. This article develops such a general framework for distributing the execution of (centralized) iterative algorithms over networks in which the required information or data is partitioned between the nodes in the network. This article furthermore shows that the distributed iterative algorithm, which results from the proposed framework, retains the convergence properties (rate) of the original (centralized) iterative algorithm. In addition, this article applies the proposed general framework to several interesting example applications, obtaining results comparable to the state of the art for each such example, while greatly simplifying and generalizing their convergence analysis. These example applications reveal new results for distributed proximal versions of gradient descent, the heavy ball method, and Newton’s method. For example, these results show that the dependence on the condition number for the convergence rate of this distributed heavy ball method is at least as good as that of centralized gradient descent.
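The framework's premise, that a centralized iterative algorithm can be executed distributedly when its update decomposes over partitioned data, can be illustrated with gradient descent on a sum of local least-squares objectives. The sketch below is a toy instance under assumed dimensions and step size, not the article's general framework: each node computes only its local gradient, and the nodes' contributions are summed (e.g., via consensus) before the common step.

```python
import numpy as np

rng = np.random.default_rng(3)

# Global objective split across nodes: f(w) = sum_i 0.5*||A_i w - b_i||^2,
# where node i holds only (A_i, b_i).
n_nodes, d = 4, 3
A = [rng.standard_normal((10, d)) for _ in range(n_nodes)]
b = [rng.standard_normal(10) for _ in range(n_nodes)]

# Distributed execution of centralized gradient descent: local gradients
# are computed in parallel and aggregated before each step.
w = np.zeros(d)
step = 0.01
for _ in range(500):
    local_grads = [Ai.T @ (Ai @ w - bi) for Ai, bi in zip(A, b)]
    w = w - step * sum(local_grads)

# Centralized solution for comparison: the distributed run should match.
A_all = np.vstack(A)
b_all = np.concatenate(b)
w_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
gap = np.linalg.norm(w - w_star)
```

Because the aggregated update equals the centralized gradient exactly, the distributed run inherits the centralized convergence rate, which is the property the article establishes in generality.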

Federated learning over-the-air by retransmissions

Motivated by the increasing computational capabilities of wireless devices, as well as unprecedented levels of user- and device-generated data, new distributed machine learning (ML) methods have emerged. In the wireless community, Federated Learning (FL) is of particular interest due to its communication efficiency and its ability to deal with the problem of non-IID data. FL training can be accelerated by a wireless communication method called Over-the-Air Computation (AirComp), which harnesses the interference of simultaneous uplink transmissions to efficiently aggregate model updates. However, since AirComp utilizes analog communication, it introduces inevitable estimation errors. In this paper, we study the impact of such estimation errors on the convergence of FL and propose retransmissions as a method to improve FL accuracy over resource-constrained wireless networks. First, we derive the optimal AirComp power control scheme with retransmissions over static channels. Then, we investigate the performance of Over-the-Air FL with retransmissions and find two upper bounds on the FL loss function. Numerical results demonstrate that the power control scheme offers significant reductions in mean squared error. Additionally, we provide simulation results on MNIST classification with a deep neural network that reveal significant improvements in classification accuracy for low-SNR scenarios.
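The benefit of retransmissions can be previewed with a minimal Monte Carlo experiment. This is a simplified stand-in for the paper's setting: it assumes a static channel with perfect power control, so only additive receiver noise corrupts each over-the-air sum, and averaging M retransmissions should shrink the aggregation MSE roughly by a factor of M.

```python
import numpy as np

rng = np.random.default_rng(4)

K = 10
updates = rng.standard_normal(K)       # local model updates (scalars here)
target = updates.sum()                 # ideal AirComp aggregate

def aircomp(n_retx, noise_std=1.0):
    """Average n_retx noisy over-the-air sums (static channel, perfect
    power control assumed, so only receiver noise remains)."""
    y = updates.sum() + noise_std * rng.standard_normal(n_retx)
    return y.mean()

# Empirical MSE with 1 vs. 8 retransmissions.
mse_1 = np.mean([(aircomp(1) - target) ** 2 for _ in range(2000)])
mse_8 = np.mean([(aircomp(8) - target) ** 2 for _ in range(2000)])
```

The eightfold retransmission budget reduces the empirical MSE by close to a factor of eight, the effect that the paper's power control and loss bounds make precise in the fading, resource-constrained case.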

On the primal feasibility in dual decomposition methods under additive and bounded errors

With the unprecedented growth of signal processing and machine learning application domains, there has been a tremendous expansion of interest in distributed optimization methods to cope with the underlying large-scale problems. Nonetheless, inevitable system-specific challenges such as limited computational power, limited communication, latency requirements, measurement errors, and noise in wireless channels impose restrictions on the exactness of the underlying algorithms. Such restrictions have motivated the exploration of algorithms’ convergence behaviors under inexact settings. Despite the extensive research conducted in the area, the convergence of dual decomposition methods with respect to primal optimality violations, together with dual optimality violations, has received less attention. Here, we provide a systematic exposition of the convergence of feasible points in dual decomposition methods under inexact settings, for an important class of global consensus optimization problems. The convergence and convergence rates of the algorithms are mathematically substantiated, not only from a dual-domain standpoint but also from a primal-domain standpoint. Analytical results show that the algorithms converge to a neighborhood of optimality, the size of which depends on the level of underlying distortions.
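The "neighborhood of optimality" behavior under additive bounded errors can be demonstrated on a toy problem. The sketch below is not the paper's consensus formulation: it applies dual decomposition to a simple coupled resource-allocation problem, perturbs the dual gradient with a bounded error (mimicking noisy communication), and checks that the primal iterates still land near the exact optimum.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem: min sum_i 0.5*a_i*x_i^2  subject to  sum_i x_i = B.
# Dual decomposition: each "node" i solves its local problem for a
# given price lam; the master updates lam by inexact gradient ascent.
a = np.array([1.0, 2.0, 4.0])
B = 6.0
lam, step, eps = 0.0, 0.2, 0.05        # eps bounds the additive error

for _ in range(300):
    x = -lam / a                        # local primal minimizers
    g = x.sum() - B                     # exact dual gradient
    lam = lam + step * (g + eps * rng.uniform(-1, 1))  # inexact ascent

# Exact optimum: x_i* proportional to 1/a_i, scaled to meet the budget.
x_star = (B / (1 / a).sum()) * (1 / a)
primal_gap = np.abs(x - x_star).max()
```

The iterates do not converge exactly, but the primal gap is confined to a small neighborhood whose size scales with the error bound eps, the qualitative statement the paper proves for general consensus problems.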

A-LAQ: Adaptive lazily aggregated quantized gradient

Federated Learning (FL) plays a prominent role in solving machine learning problems with data distributed across clients. In FL, to reduce the communication overhead of data between clients and the server, each client communicates the local FL parameters instead of the local data. However, when a wireless network connects clients and the server, the communication resource limitations of the clients may prevent completing the training of the FL iterations. Therefore, communication-efficient variants of FL have been widely investigated. Lazily Aggregated Quantized Gradient (LAQ) is one of the promising communication-efficient approaches to lower resource usage in FL. However, LAQ assigns a fixed number of bits for all iterations, which may be communication-inefficient when the number of iterations is medium to high or convergence is approaching. This paper proposes Adaptive Lazily Aggregated Quantized Gradient (A-LAQ), a method that significantly extends LAQ by assigning an adaptive number of communication bits during the FL iterations. We train FL in an energy-constrained setting and investigate the convergence analysis of A-LAQ. The experimental results highlight that A-LAQ outperforms LAQ by up to a 50% reduction in spent communication energy and an 11% increase in test accuracy.
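The trade-off that an adaptive bit budget exploits can be seen with a plain uniform quantizer. The quantizer below is illustrative, not the LAQ/A-LAQ quantizer: it shows how the per-iteration quantization error depends on the bit budget, which is the quantity an adaptive scheme balances against communication energy over the course of training.

```python
import numpy as np

def quantize(g, bits, g_max=1.0):
    """Uniform mid-rise quantizer on [-g_max, g_max] with the given bit
    budget (illustrative; not the exact LAQ quantizer)."""
    levels = 2 ** bits
    step = 2 * g_max / levels
    q = np.clip(np.round((g + g_max) / step - 0.5), 0, levels - 1)
    return (q + 0.5) * step - g_max

rng = np.random.default_rng(6)
g = rng.uniform(-1, 1, size=1000)      # a stand-in gradient vector

# A-LAQ's idea in miniature: high-bit rounds are accurate but costly,
# low-bit rounds are cheap but noisy; adapt the budget across rounds.
err_8 = np.mean((quantize(g, 8) - g) ** 2)
err_2 = np.mean((quantize(g, 2) - g) ** 2)
```

The 2-bit error exceeds the 8-bit error by orders of magnitude (roughly the step-squared-over-twelve law), so spending bits where the training trajectory is sensitive, and saving them elsewhere, is what yields the reported energy savings.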

Federated Learning Using Three-Operator ADMM

Federated learning (FL) has emerged as an instance of the distributed machine learning paradigm that avoids the transmission of data generated on the users’ side. Although data are not transmitted, edge devices have to deal with limited communication bandwidths, data heterogeneity, and straggler effects due to the limited computational resources of users’ devices. A prominent approach to overcome such difficulties is FedADMM, which is based on the classical two-operator consensus alternating direction method of multipliers (ADMM). The common assumption of FL algorithms, including FedADMM, is that they learn a global model using data only on the users’ side and not on the edge server. However, in edge learning, the server is expected to be near the base station and often has direct access to rich datasets. In this paper, we argue that it is much more beneficial to leverage the rich data on the edge server than to utilize only user datasets. Specifically, we show that the mere application of FL with an additional virtual user node representing the data on the edge server is inefficient. We propose FedTOP-ADMM, which generalizes FedADMM and is based on a three-operator ADMM-type technique that exploits a smooth cost function on the edge server to learn a global model in parallel to the edge devices. Our numerical experiments indicate that FedTOP-ADMM achieves a substantial gain of up to 33% in communication efficiency to reach a desired test accuracy with respect to FedADMM, including a virtual user on the edge server.

Distributed assignment with load balancing for dnn inference at the edge

Inference carried out on pretrained deep neural networks (DNNs) is particularly effective as it does not require retraining and entails no loss in accuracy. Unfortunately, resource-constrained devices such as those in the Internet of Things may need to offload the related computation to more powerful servers, particularly at the network edge. However, edge servers have limited resources compared to those in the cloud; therefore, inference offloading generally requires dividing the original DNN into different pieces that are then assigned to multiple edge servers. Related approaches in the state-of-the-art either make strong assumptions on the system model or fail to provide strict performance guarantees. This article specifically addresses these limitations by applying distributed assignment to DNN inference at the edge. In particular, it devises a detailed model of DNN-based inference, suitable for realistic scenarios involving edge computing. Optimal inference offloading with load balancing is also defined as a multiple assignment problem that maximizes proportional fairness. Moreover, a distributed algorithm for DNN inference offloading is introduced to solve such a problem in polynomial time with strong optimality guarantees. Finally, extensive simulations employing different data sets and DNN architectures establish that the proposed solution significantly improves upon the state-of-the-art in terms of inference time (1.14 to 2.62 times faster), load balance (with Jain’s fairness index of 0.9), and convergence (one order of magnitude fewer iterations).
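The assignment-with-load-balancing idea can be sketched with a tiny greedy heuristic. This is not the article's distributed algorithm or its proportional-fairness optimization: it simply places DNN partitions on the server whose normalized load stays smallest, with capacities and workloads assumed for the example, to show how balanced load ratios emerge.

```python
# Toy load-balancing assignment of DNN partitions to edge servers.
# Greedy rule: give each piece to the server whose load/capacity
# ratio remains smallest (a heuristic stand-in for the paper's
# proportional-fair multiple assignment).
capacity = [4.0, 2.0, 2.0]          # server capacities (assumed)
pieces = [1.0, 1.0, 1.0, 1.0]       # DNN partition workloads (assumed)
load = [0.0, 0.0, 0.0]

for w in pieces:
    j = min(range(len(capacity)), key=lambda i: (load[i] + w) / capacity[i])
    load[j] += w

# Normalized load per server; equal ratios mean a balanced assignment.
balance = [l / c for l, c in zip(load, capacity)]
```

On this instance the greedy rule yields loads of 2, 1, and 1, i.e., every server runs at exactly half capacity; the article's algorithm achieves such balance with optimality guarantees in the general distributed setting.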

EVM mitigation with PAPR and ACLR constraints in large-scale MIMO-OFDM using TOP-ADMM

Although signal distortion-based peak-to-average power ratio (PAPR) reduction is a feasible candidate for orthogonal frequency division multiplexing (OFDM) to meet standard/regulatory requirements, the error vector magnitude (EVM) stemming from the PAPR reduction has a deleterious impact on the performance of high data-rate achieving multiple-input multiple-output (MIMO) systems. Moreover, these systems must constrain the adjacent channel leakage ratio (ACLR) to comply with regulatory requirements. Several recent works have investigated the mitigation of the EVM seen at the receivers by capitalizing on the excess spatial dimensions inherent in large-scale MIMO, assuming the availability of perfect channel state information (CSI) and spatially uncorrelated wireless channels. Unfortunately, practical systems operate with erroneous CSI and spatially correlated channels. Additionally, most standards support user-specific/CSI-aware beamformed and cell-specific/non-CSI-aware broadcasting channels. Hence, we formulate a robust EVM mitigation problem under channel uncertainty with nonconvex PAPR and ACLR constraints catering to beamforming/broadcasting. To solve this formidable problem, we develop an efficient scheme using our recently proposed three-operator alternating direction method of multipliers (TOP-ADMM) algorithm and benchmark it against two three-operator algorithms previously presented for machine learning purposes. Numerical results show the efficacy of the proposed algorithm under imperfect CSI and spatially correlated channels.
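The tension between PAPR and EVM that this formulation balances can be reproduced with the simplest distortion-based scheme: amplitude clipping. The sketch below is illustrative only (QPSK subcarriers, an assumed clipping threshold, and plain clipping rather than the paper's optimized scheme): clipping lowers the PAPR of an OFDM symbol but creates exactly the EVM-type distortion the paper mitigates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Random QPSK OFDM symbol with unit average power in the time domain.
N = 256
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)

def papr_db(sig):
    """Peak-to-average power ratio in dB."""
    return 10 * np.log10(np.max(np.abs(sig) ** 2) / np.mean(np.abs(sig) ** 2))

# Amplitude clipping at an assumed threshold: reduces PAPR, adds EVM.
clip = 1.5
mag = np.abs(x)
x_clipped = np.where(mag > clip, clip * x / np.maximum(mag, 1e-12), x)

papr_before = papr_db(x)
papr_after = papr_db(x_clipped)
evm = np.sqrt(np.mean(np.abs(x_clipped - x) ** 2))   # distortion power
```

Clipping caps the peak power and hence the PAPR, at the price of a nonzero EVM; the paper's contribution is to shape where that distortion lands (robustly, under imperfect CSI) so receivers see as little of it as possible.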

Robust PAPR reduction in large-scale MIMO-OFDM using three-operator ADMM-type techniques

This paper deals with a distortion-based non-convex peak-to-average power ratio (PAPR) problem for large-scale multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Our work is motivated by the observation that the distortion stemming from PAPR reduction schemes has a deleterious impact on the data rates of MIMO-OFDM systems. Recently, some approaches have been proposed to either null or mitigate such distortion seen at the receiver(s) side by exploiting the extra degrees of freedom when the downlink channel is perfectly known at the transmitter. Unfortunately, most of these proposed methods are not robust against channel uncertainty, since perfect channel knowledge is practically infeasible at the transmitter. Although some recent works utilize semidefinite programming to cope with channel uncertainty and the non-convex PAPR problem, they have formidable computational complexity. Additionally, some prior-art techniques tackle the non-convex PAPR problem by minimizing the peak power, which renders a suboptimal solution. In this work, we showcase the application of powerful first-order optimization schemes, namely the three-operator alternating direction method of multipliers (ADMM)-type techniques, notably 1) three-operator ADMM, 2) Bregman ADMM, and 3) Davis-Yin splitting, to solve the non-convex and robust PAPR problem, yielding a near-optimal solution in a computationally efficient manner.

Dynamic Clustering in Federated Learning

In the resource management of wireless networks, Federated Learning has been used to predict handovers. However, non-independent and identically distributed data degrade the accuracy performance of such predictions. To overcome the problem, Federated Learning can leverage data clustering algorithms and build a machine learning model for each cluster. However, traditional data clustering algorithms, when applied to handover prediction, exhibit three main limitations: the risk of data privacy breach, the fixed shape of clusters, and the non-adaptive number of clusters. To overcome these limitations, in this paper, we propose a three-phased data clustering algorithm, namely: generative adversarial network-based clustering, cluster calibration, and cluster division. We show that the generative adversarial network-based clustering preserves privacy. The cluster calibration deals with dynamic environments by modifying clusters. Moreover, the divisive clustering explores different numbers of clusters by repeatedly selecting and dividing a cluster into multiple clusters. A baseline algorithm and our algorithm are tested on a time series forecasting task. We show that our algorithm improves the performance of forecasting models, including cellular network handover prediction, by 43%.

Wireless for control: Over-the-air controller

In closed-loop wireless control systems, the state-of-the-art approach prescribes that a controller receives by wireless communications the individual sensor measurements, and then sends the computed control signal to the actuators. We propose an over-the-air controller scheme where all sensors attached to the plant simultaneously transmit scaled sensing signals directly to the actuator; then the feedback control signal is computed partially over the air and partially by a scaling operation at the actuator. Such over-the-air controller essentially adopts the over-the-air computation concept to compute the control signal for closed-loop wireless control systems. In contrast to the state-of-the-art sensor-to-controller and controller-to-actuator communication approach, the over-the-air controller exploits the superposition properties of multiple-access wireless channels to complete the communication and computation of a large number of sensing signals in a single communication resource unit. Therefore, the proposed scheme can obtain significant benefits in terms of low actuation delay and low wireless resource utilization by a simple network architecture that does not require a dedicated controller. Numerical results show that our proposed over-the-air controller achieves a huge widening of the stability region in terms of sampling time and delay, and a significant reduction of the computation error of the control signal.
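The over-the-air controller's computation can be sketched for a simple state-feedback law. The example below is a scalar-output toy with assumed gains and noise level, not the paper's control-theoretic design: each sensor pre-scales its measurement by its feedback gain, the multiple-access channel sums the transmissions in one channel use, and the actuator only flips the sign.

```python
import numpy as np

rng = np.random.default_rng(8)

# Over-the-air state feedback: sensors transmit K[i]*x[i] simultaneously;
# the channel's superposition delivers (approximately) sum_i K[i]*x[i],
# so u = -K x needs no dedicated controller node.
n = 6
x = rng.standard_normal(n)         # plant state, one entry per sensor
Kg = rng.uniform(0.5, 1.5, n)      # feedback gain row vector (assumed)

tx = Kg * x                        # per-sensor pre-scaling before transmit
y = tx.sum() + 0.01 * rng.standard_normal()   # superposition at actuator
u_ota = -y                         # scaling/sign operation at the actuator
u_ref = -Kg @ x                    # conventional computed control signal
```

The over-the-air control signal matches the conventionally computed one up to channel noise, while using a single communication resource unit regardless of the number of sensors, which is the source of the delay and resource savings reported above.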

Comparing backscatter communication and energy harvesting in massive IoT networks

Backscatter communication (BC) and radio-frequency energy harvesting (RF-EH) are two promising technologies for extending the battery lifetime of wireless devices. Although there have been some qualitative comparisons between these two technologies, quantitative comparisons are still lacking, especially for massive IoT networks. In this paper, we address this gap in the research literature, and perform a quantitative comparison between BC and RF-EH in massive IoT networks with multiple primary users and multiple low-power devices acting as secondary users. An essential feature of our model is that it includes the interferences caused by the secondary users to the primary users, and we show that these interferences significantly impact the system performance of massive IoT networks. For the RF-EH model, the power requirements of digital-to-analog and signal amplification are taken into account. We pose and solve a power minimization problem for BC, and we show analytically when BC is better than RF-EH. The results of the numerical simulations illustrate the significant benefits of using BC in terms of saving power and supporting massive IoT, compared to using RF-EH. The results also show that the backscatter coefficients of the BC devices must be individually tunable, in order to guarantee good performance of BC.

Proactive fault-tolerant wireless mesh networks for mission-critical control systems

Although wireless networks are becoming a fundamental infrastructure for various control applications, they are inherently exposed to network faults such as lossy links and node failures in environments such as mining, outdoor monitoring, and chemical process control. In this paper, we propose a proactive fault-tolerant mechanism to protect the wireless network against temporal faults without any explicit network state information for mission-critical control systems. Specifically, the proposed mechanism optimizes the multiple routing paths, link scheduling, and traffic generation rate such that it meets the control stability demands even if it experiences multiple link faults and node faults. The proactive network relies on a constrained optimization problem, where the objective function is the network robustness, and the main constraints are the set of traffic demand, link, and routing-layer requirements. To analyze the robustness, we propose a novel performance metric called stability margin ratio, based on the network performance and the stability boundary. Our numerical and experimental performance evaluation shows that the traffic generation rate and the delay of wireless networks are found to be as critical as the network reliability to guarantee the stability of control systems. Furthermore, the proposed proactive network provides more robust performance than practical state-of-the-art solutions while maintaining high energy efficiency.

Efficient Optimization for Large-Scale MIMO-OFDM Spectral Precoding

Although spectral precoding is a propitious technique to suppress out-of-band emissions, it has a detrimental impact on the system-wide throughput performance, notably, in high data-rate multiple-input multiple-output (MIMO) systems with orthogonal frequency division multiplexing (OFDM), because of (spatially-coloured) transmit error vector magnitude (TxEVM) emanating from spectral precoding. The first contribution of this paper is to propose two mask-compliant spectral precoding schemes, which mitigate the resulting TxEVM seen at the receiver by capitalizing on the immanent degrees-of-freedom in (massive) MIMO systems and consequently improve the system-wide throughput. Our second contribution is the introduction of a new, simple three-operator consensus alternating direction method of multipliers (ADMM) algorithm, referred to as TOP-ADMM, which decomposes a large-scale problem into easy-to-solve subproblems. We employ the proposed TOP-ADMM-based algorithm to solve the spectral precoding problems computationally efficiently. Our third contribution presents substantial numerical results by using an NR release 15 compliant simulator. In the case of perfect channel knowledge at the transmitter, the proposed methods render block error rate and throughput performance similar to that without spectral precoding, while meeting out-of-band emission (OOBE) requirements at the transmitter. Further, under channel uncertainty, no loss in OOBE performance is observed, with only a graceful degradation in throughput.

Smart Antenna Assignment is Essential in Full-Duplex Communications

Full-duplex communications have the potential to almost double the spectral efficiency. To realize such a potentiality, the signal separation at the base station’s antennas plays an essential role. This article addresses the fundamentals of such separation by proposing a new smart antenna architecture that allows every antenna to be either shared or separated between uplink and downlink transmissions. The benefits of such architecture are investigated by an assignment problem to optimally assign antennas, beamforming and power to maximize the weighted sum spectral efficiency. We propose a near-to-optimal solution using block coordinate descent that divides the problem into an assignment problem, which is NP-hard, and beamforming and power allocation problems. The optimal solutions for the beamforming and power allocation are established, while near-to-optimal solutions to the assignment problem are derived by semidefinite relaxation. Numerical results indicate that the proposed solution is close to the optimum, and it maintains a similar performance for high and low residual self-interference powers. With respect to the usually assumed antenna separation technique and half-duplex transmission, the sum spectral efficiency gains increase with the number of antennas. We conclude that our proposed smart antenna assignment for signal separation is essential to realize the benefits of multiple antenna full-duplex communications.

Wireless Avionics Intracommunications: A Survey of Benefits, Challenges, and Solutions

In the aeronautics industry, wireless avionics intracommunications (WAICs) have a tremendous potential to improve efficiency and flexibility while reducing weight, fuel consumption, and maintenance costs over traditional wired avionics systems. This survey starts with an overview of the major benefits and opportunities in the deployment of wireless technologies for critical applications in an aircraft. The current state of the art is presented in terms of system classifications based on data rate demands and transceiver installation locations. We then discuss major technical challenges in the design and realization of the envisioned aircraft applications. Although WAIC has aspects and requirements similar to mission-critical applications of industrial automation, it also has specific issues, such as wireless channels, complex structures, operations, and safety of the aircraft that make this area of research self-standing and challenging. Existing wireless techniques are discussed to investigate the applicability of the current solutions for the critical operations of an aircraft. Specifically, IEEE 802.15.4-based and Bluetooth-based solutions are discussed for low data rate applications, whereas IEEE 802.11-based and UWB-based solutions are considered for high data rate applications. We conclude the survey by highlighting major research directions in this emerging area.

EVM-Constrained and Mask-Compliant MIMO-OFDM Spectral Precoding

Spectral precoding is a promising technique to suppress out-of-band emissions and comply with leakage constraints over adjacent frequency channels and with mask requirements on the unwanted emissions. However, spectral precoding may distort the original data vector, which is formally expressed as the error vector magnitude (EVM) between the precoded and original data vectors. Notably, EVM has a deleterious impact on the performance of multiple-input multiple-output orthogonal frequency division multiplexing-based systems. In this paper we propose a novel spectral precoding approach which constrains the EVM while complying with the mask requirements. We first formulate and solve the EVM-unconstrained mask-compliant spectral precoding problem, which serves as a springboard to the design of two EVM-constrained spectral precoding schemes. The first scheme takes into account a wideband EVM-constraint which limits the average in-band distortion. The second scheme takes into account frequency-selective EVM-constraints, and consequently, limits the signal distortion at the subcarrier level. Numerical examples illustrate that both proposed schemes outperform previously developed schemes in terms of important performance indicators such as block error rate and system-wide throughput while complying with spectral mask and EVM constraints.
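
The distinction between the wideband and the frequency-selective EVM constraints can be illustrated with a toy computation (the 5% distortion factor and 8% thresholds below are made up for illustration):

```python
import numpy as np

def evm(original, precoded):
    """Error vector magnitude between the precoded and original data vectors."""
    return np.linalg.norm(precoded - original) / np.linalg.norm(original)

d = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)  # QPSK symbols
d_precoded = 0.95 * d          # toy precoding distortion (made up)

# Wideband constraint: the average in-band distortion stays below a threshold.
wideband_ok = evm(d, d_precoded) <= 0.08

# Frequency-selective constraint: distortion bounded at each subcarrier.
per_subcarrier = np.abs(d_precoded - d) / np.abs(d)
selective_ok = bool(np.all(per_subcarrier <= 0.08))
```

A precoder may satisfy the wideband constraint on average while violating the per-subcarrier bound on a few subcarriers, which is what motivates the second, frequency-selective scheme.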

Enabling Massive IoT in Ambient Backscatter Communication Systems

Backscatter communication is a promising solution for enabling information transmission between ultra-low-power devices, but its potential is not fully understood. One major problem is dealing with the interference between the backscatter devices, which is usually not taken into account, or simply treated as noise in the cases where there are a limited number of backscatter devices in the network. In order to better understand this problem in the context of massive IoT (Internet of Things), we consider a network with a base station having one antenna, serving one primary user, and multiple IoT devices, called secondary users. We formulate an optimization problem with the goal of minimizing the needed transmit power for the base station, while the ratio of backscattered signal, called backscatter coefficient, is optimized for each of the IoT devices. Such an optimization problem is non-convex and thus finding an optimal solution in real-time is challenging. In this paper, we prove necessary and sufficient conditions for the existence of an optimal solution, and show that it is unique. Furthermore, we develop an efficient solution algorithm, only requiring solving a linear system of equations with as many unknowns as the number of secondary users. The simulation results show a lower energy outage probability by up to 40-80 percentage points in dense networks with up to 150 secondary users. To our knowledge, this is the first work that studies backscatter communication in the context of massive IoT, also taking into account the interference between devices.
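
The final step of the solution algorithm reduces to solving a linear system with one unknown per secondary user, which can be sketched as follows (the system matrix and right-hand side below are hypothetical placeholders, not the paper's expressions):

```python
import numpy as np

# Hypothetical illustration: the algorithm reduces the power minimization
# to solving a linear system A b = c, where b holds one backscatter
# coefficient per secondary user (the values of A and c are made up).
n_users = 4
rng = np.random.default_rng(1)
A = np.eye(n_users) + 0.1 * rng.random((n_users, n_users))  # diagonally dominant
c = rng.random(n_users)

b = np.linalg.solve(A, c)      # backscatter coefficients, one per user
```

Solving such a system scales easily with the number of secondary users, in line with the abstract's claim of an efficient real-time algorithm.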

Delay Optimization for Industrial Wireless Control Systems Based on Channel Characterization

Wireless communication is gaining popularity in the industry for its simple deployment, mobility, and low cost. Ultralow latency and high reliability requirements of mission-critical industrial applications are highly demanding for wireless communication, and the indoor industrial environment is hostile to wireless communication due to the richness of reflection and obstacles. Assessing the effect of the industrial environment on the reliability and latency of wireless communication is a crucial task, yet it is challenging to accurately model the wireless channel in various industrial sites. In this article, based on the comprehensive channel measurement results from the National Institute of Standards and Technology at 2.245 and 5.4 GHz, we quantify the reliability degradation of wireless communication in multipath fading channels. A delay optimization based on the channel characterization is then proposed to minimize packet transmission times of a cyclic prefix orthogonal frequency division multiplexing system under a reliability constraint at the physical layer. When the transmission bandwidth is abundant and the payload is short, the minimum transmission time is found to be restricted by the optimal cyclic prefix duration, which is correlated with the communication distance. Results further reveal that using relays may, in some cases, reduce end-to-end latency in industrial sites, as achievable minimum transmission time significantly decreases at short communication ranges.
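
The effect of the cyclic prefix duration on the minimum transmission time can be sketched with a toy timing computation (the symbol and CP durations below are LTE-like illustrative numbers, not the paper's optimized values):

```python
import numpy as np

def tx_time_us(payload_bits, bits_per_ofdm_symbol, t_useful_us, t_cp_us):
    """Transmission time of a payload over CP-OFDM symbols: number of
    symbols times (useful symbol duration + cyclic prefix duration)."""
    n_symbols = int(np.ceil(payload_bits / bits_per_ofdm_symbol))
    return n_symbols * (t_useful_us + t_cp_us)

# A longer CP (needed at longer range / richer multipath) stretches the
# minimum transmission time even when bandwidth is abundant.
short_cp = tx_time_us(256, 128, 66.7, 4.7)
long_cp = tx_time_us(256, 128, 66.7, 16.7)
```

For short payloads the CP overhead is paid on every symbol, which is why the achievable minimum transmission time shrinks at short communication ranges, where a shorter CP suffices.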

Guest Editorial Millimeter-Wave Networking

Computation Rate Maximization for Wireless Powered Mobile Edge Computing with NOMA

In this paper, we consider a mobile edge computing (MEC) network that is wirelessly powered. Each user harvests wireless energy and follows a binary computation offloading policy, i.e., it either executes the task locally or offloads it to the MEC as a whole. For the offloading users, non-orthogonal multiple access (NOMA) is adopted for information transmission. We consider rate-adaptive computational tasks and aim at maximizing the sum computation rate of all users by jointly optimizing the individual computing mode selection (local computing or offloading), the time allocations for energy transfer and for information transmission, together with the local computing speed or the transmission power level. The major difficulty of the rate maximization problem lies in the combinatorial nature of the multiuser computing mode selection and its involved coupling with the time allocation. We also study the case where the offloading users adopt time division multiple access (TDMA) as a benchmark, and derive the optimal time sharing among the users. We show that the maximum achievable rate is the same for the TDMA and the NOMA system, and in the case of NOMA it is independent of the decoding order, which can be exploited to improve system fairness. To maximize the sum computation rate, for the mode selection we propose a greedy solution based on the wireless channel gains, combined with the optimal allocation of energy transfer time. Numerical results show that the proposed solution maximizes the computation rate in homogeneous networks, and binary offloading leads to significant gains. Moreover, NOMA increases the fairness of rate distribution among the users significantly, when compared with TDMA.
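
The gain-based greedy mode selection can be sketched as follows (a simplified stand-in: the rate model below is made up, and the paper additionally optimizes the energy transfer time jointly with each selection):

```python
import numpy as np

def greedy_mode_selection(gains, sum_rate):
    """Gain-ordered greedy: visit users in decreasing channel gain and
    switch a user to offloading whenever that raises the sum rate."""
    offload = np.zeros(len(gains), dtype=bool)
    best = sum_rate(offload)
    for u in np.argsort(gains)[::-1]:
        trial = offload.copy()
        trial[u] = True
        if sum_rate(trial) > best:
            best, offload = sum_rate(trial), trial
    return offload, best

# Made-up rate model: offloaded users contribute twice their channel
# gain, local users contribute a unit local computing rate.
gains = np.array([3.0, 0.1, 2.0])
rate = lambda mask: 2.0 * gains[mask].sum() + float((~mask).sum())
mode, total = greedy_mode_selection(gains, rate)
```

In this toy instance the two strong-channel users offload while the weak-channel user computes locally, matching the intuition behind the channel-gain-based selection.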

Compressive Sensing with Applications to Millimeter-wave Architectures

To make the system available at low cost, millimeter-wave (mmWave) multiple-input multiple-output (MIMO) architectures employ analog arrays, which are driven by a limited number of radio frequency (RF) chains. One primary challenge of using large hybrid analog-digital arrays is that the digital baseband cannot directly access the signal to/from each antenna. To address this limitation, recent research has focused on retransmissions, iterative precoding, and subspace decomposition methods. Unlike these approaches, which exploit the channel’s low rank, in this work we exploit the sparsity of the received signal at both the transmit/receive antennas. While the signal itself is de facto dense, it is well-known that most signals are sparse under an appropriate choice of basis. By delving into the structured compressive sensing (CS) framework and adapting it to variants of the mmWave hybrid architectures, we provide methodologies to recover the analog signal at each antenna from the (low-dimensional) digital signal. Moreover, we characterize the minimal numbers of measurements and RF chains that provide this recovery, with high probability. We discuss their applications to common variants of the hybrid architecture. By leveraging the inherent sparsity of the received signal, our analysis reveals that a hybrid MIMO system can be "turned into" a fully digital one: the number of needed RF chains increases logarithmically with the number of antennas.
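
A minimal sketch of the logarithmic scaling claim, using standard orthogonal matching pursuit rather than the paper's exact recovery procedure (all dimensions and the sparse support are illustrative):

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n_antennas, k = 64, 3
m = int(np.ceil(4 * k * np.log(n_antennas)))  # measurements grow only as log(n)
A = rng.standard_normal((m, n_antennas)) / np.sqrt(m)
x_true = np.zeros(n_antennas)
x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k)
```

With only about `k log(n)` Gaussian measurements the sparse antenna-domain signal is recovered exactly in the noiseless case, which mirrors the conclusion that the number of RF chains need only grow logarithmically with the number of antennas.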

Energy Efficient Full-Duplex Networks

As the specifications of the 5th generation (5G) of cellular networks mature, the deployment phase is starting. Accordingly, peak data rates in the order of tens of Gbit/s as well as more energy-efficient deployments are expected. Nevertheless, the quick development of new applications and services encourages the research community to look beyond 5G and explore new technological components. Indeed, to meet the increasing demand for mobile broadband as well as Internet of Things services, the research and standardization communities are currently investigating novel physical and medium access layer technologies, including further virtualization of networks, the use of the lower Terahertz bands, even higher cell densification, and full-duplex (FD) communications. FD has been proposed as one of the enabling technologies to increase the spectral efficiency of conventional wireless transmission modes, by overcoming our prior understanding that it is not possible for radios to transmit and receive simultaneously on the same time-frequency resource. Due to this, we can also refer to FD communications as in-band FD. In-band FD transceivers have the potential of improving the attainable spectral efficiency of traditional wireless networks operating with half-duplex (HD) transceivers by a factor close to two. In addition to the spectral efficiency gains, full-duplex can provide gains in the medium access control layer, in which problems such as the hidden/exposed nodes and collision detection can be mitigated and the energy consumption can be reduced. Until recently, in-band FD was not considered as a solution for wireless networks due to the inherent interference created from the transmitter to its own receiver, the so-called self-interference (SI). However, recent advancements in antenna and analog/digital interference cancellation techniques demonstrate FD transmissions as a viable alternative to traditional HD transmissions.
Given the recent architectural progression of 5G towards smaller cells, higher densification, higher numbers of antennas, and the use of the millimeter-wave (mmWave) band, the integration of FD communications into such scenarios is appealing. In-band FD communications are suited for short-range communication, and although the SI remains a challenge, the use of multiple antennas and transmission in the mmWave band help to mitigate the SI in the spatial domain and provide even more gains for spectral and energy efficiency. To achieve the spectral and energy efficiency gains, it is important to understand the challenges and solutions, which can be roughly divided into resource allocation, protocol design, hardware design, and energy harvesting. Hence, FD communication appears as an important technology component to improve the spectral and energy efficiency of current communication systems and help to meet the goals of 5G and beyond. The chapter starts with an overview of FD communications, including its challenges and solutions. Next, a comprehensive literature review of energy efficiency in FD communications is presented along with the key solutions to improve energy efficiency. Finally, we evaluate the key aspects of energy efficiency in FD communications for two scenarios: single-cell with multiple users in a pico-cell scenario, and a system level evaluation with macro- and small-cells with multiple users.

Low-Complexity OFDM Spectral Precoding

This paper proposes a new large-scale mask compliant spectral precoder (LS-MSP) for orthogonal frequency division multiplexing systems. We first consider a previously proposed mask-compliant spectral precoding scheme that utilizes a generic convex optimization solver, which suffers from high computational complexity, notably in large-scale systems. To mitigate the complexity of computing the LS-MSP, we propose a divide-and-conquer approach that breaks the original problem into smaller rank-1 quadratic-constraint problems, each of which yields a closed-form solution. Based on these solutions, we develop three specialized first-order low-complexity algorithms, based on 1) projection on convex sets and 2) the alternating direction method of multipliers. We also develop an algorithm that capitalizes on the closed-form solutions for the rank-1 quadratic constraints, which is referred to as 3) semianalytical spectral precoding. Numerical results show that the proposed LS-MSP techniques outperform previously proposed techniques in terms of computational burden while complying with the spectrum mask. The results also indicate that 3) typically needs 3 iterations to achieve similar results as 1) and 2) at the expense of a slightly increased computational complexity.
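
The building block of the divide-and-conquer approach, the closed-form projection onto a single rank-1 quadratic constraint, can be sketched as follows (a standard derivation; the vectors and threshold below are random illustrative values):

```python
import numpy as np

def project_rank1(w, a, eps):
    """Closed-form Euclidean projection of w onto {w : |a^H w|^2 <= eps},
    the rank-1 quadratic constraint each small subproblem reduces to."""
    inner = np.vdot(a, w)                    # a^H w
    mag = abs(inner)
    if mag ** 2 <= eps:                      # already feasible
        return w.copy()
    step = (mag - np.sqrt(eps)) / np.vdot(a, a).real
    return w - step * (inner / mag) * a      # lands exactly on the boundary

rng = np.random.default_rng(2)
a = rng.standard_normal(8) + 1j * rng.standard_normal(8)
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)
w_proj = project_rank1(w, a, eps=0.25)
```

A projection-on-convex-sets or ADMM loop then alternates such cheap closed-form projections across all mask constraints, which is what makes the first-order algorithms low-complexity.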

Latency Analysis of Wireless Networks for Proximity Services in Smart Home and Building Automation: The Case of Thread

Proximity service (ProSe), using the geographic location and device information by considering the proximity of mobile devices, enriches the services we use to interact with people and things around us. ProSe has been used in mobile social networks in proximity and also in smart home and building automation (Google Home). To enable ProSe in smart home, reliable and stable network protocols and communication infrastructures are needed. Thread is a new wireless protocol aiming at smart home and building automation (BA), which supports mesh networks and native Internet protocol connectivity. The latency of Thread should be carefully studied when used in user-friendly and safety-critical ProSe in smart home and BA. In this paper, a system level model of latency in the Thread mesh network is presented. The accumulated latency consists of different kinds of delay from the application layer to the physical layer. A Markov chain model is used to derive the probability distribution of the medium access control service time. The system level model is experimentally validated in a multi-hop Thread mesh network. The outcomes show that the system model results match well with the experimental results. Finally, based on an analytical model, a software tool is developed to estimate the latency of the Thread mesh network, providing developers more network information to develop user-friendly and safety-critical ProSe in smart home and BA.
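
The accumulation of latency across layers can be sketched with toy numbers (all values below are made up, not Thread measurements; the discrete service-time distribution stands in for the paper's Markov-chain model):

```python
import numpy as np

# Illustrative per-hop latency components, in milliseconds.
processing, queueing, phy_tx = 0.4, 1.1, 0.6

# MAC service time as the expectation of a discrete distribution
# (a stand-in for the Markov-chain-derived distribution).
service_times = np.array([0.5, 1.5, 3.0])   # ms, after 0/1/2 backoffs
probs = np.array([0.7, 0.25, 0.05])
mac = float(service_times @ probs)          # expected MAC service time

# End-to-end latency accumulates over the hops of the mesh network.
hops = 3
latency = hops * (processing + queueing + mac + phy_tx)
```

This is the shape of the system-level model: per-layer delays summed per hop, multiplied across the multi-hop path, with the MAC term derived probabilistically.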

Adaptive Distributed Association in Time-Variant Millimeter Wave Networks

The underutilized millimeter-wave (mm-wave) band is a promising candidate to enable extremely high data rate communications in future wireless networks. However, the special characteristics of mm-wave systems, such as high vulnerability to obstacles (due to high penetration loss) and to mobility (due to directional communications), demand a careful design of the association between the clients and access points (APs). This challenge can be addressed by distributed association techniques that gracefully adapt to wireless channel variations and client mobilities. We formulate the association problem as a mixed-integer optimization aiming to maximize the network throughput with proportional fairness guarantees. This optimization problem is solved first by a distributed dual decomposition algorithm, and then by a novel distributed auction algorithm where the clients act asynchronously to achieve near-to-optimal association between the clients and APs. The latter algorithm has a faster convergence with a negligible drop in the resulting network throughput. A distinguishing novel feature of the proposed algorithms is that the resulting optimal association does not have to be re-computed every time the network changes (e.g., due to mobility). Instead, the algorithms continuously adapt to the network variations and are thus very efficient. We discuss the implementation of the proposed algorithms on top of existing communication standards. The numerical analysis verifies the ability of the proposed algorithms to optimize the association and to maintain optimality in the time-variant environments of mm-wave networks.
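
The flavor of the auction approach can be sketched with a classic synchronous auction (not the paper's asynchronous variant; the client-AP values below are made up):

```python
import numpy as np

def auction_assignment(value, eps=0.01):
    """Minimal synchronous auction (classic Bertsekas style): each
    unassigned client bids for the AP maximizing its value minus the
    current price, raising that price by its bidding margin plus eps."""
    n_clients, n_aps = value.shape
    prices = np.zeros(n_aps)
    owner = -np.ones(n_aps, dtype=int)        # which client holds each AP
    assigned = -np.ones(n_clients, dtype=int)
    while (assigned < 0).any():
        i = int(np.where(assigned < 0)[0][0])
        net = value[i] - prices
        j = int(np.argmax(net))
        second = np.partition(net, -2)[-2]
        prices[j] += net[j] - second + eps    # bid up to the margin
        if owner[j] >= 0:
            assigned[owner[j]] = -1           # previous holder is outbid
        owner[j], assigned[i] = i, j
    return assigned

# Two clients, two APs; made-up throughput values per client-AP pair.
assignment = auction_assignment(np.array([[10.0, 2.0], [3.0, 8.0]]))
```

Because prices persist between rounds, a change in the network (a client moving, a value changing) only triggers re-bidding by the affected clients, which is the intuition behind not re-computing the association from scratch.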

The sensable city: A survey on the deployment and management for smart city monitoring

In the last two decades, various monitoring systems have been designed and deployed in urban environments, toward the realization of so-called smart cities. Such systems are based both on dedicated sensor nodes and on ubiquitous but non-dedicated devices such as smartphones and vehicles’ sensors. When we design sensor network monitoring systems for smart cities, we face two essential problems: node deployment and sensing management. These design problems are challenging, due to the large urban areas to monitor, constrained locations for deployments, and heterogeneous types of sensing devices. There is a vast body of literature from different disciplines that has addressed these challenges. However, we do not yet have a comprehensive understanding and sound design guidelines. This paper addresses this research gap and provides an overview of the theoretical problems we face and the possible approaches we may use to solve them. Specifically, this paper focuses on both the deployment of the devices (the system design/configuration part) and the sensing management of the devices (the system running part). We also discuss how to choose among the existing algorithms for different types of monitoring applications in smart cities, such as structural health monitoring, water pipeline networks, and traffic monitoring. We finally discuss future research opportunities and open challenges for smart city monitoring.

Internet of Musical Things: Vision and Challenges

The Internet of Musical Things (IoMusT) is an emerging research field positioned at the intersection of Internet of Things, new interfaces for musical expression, ubiquitous music, human-computer interaction, artificial intelligence, and participatory art. From a computer science perspective, IoMusT refers to the networks of computing devices embedded in physical objects (musical things) dedicated to the production and/or reception of musical content. Musical things, such as smart musical instruments or wearables, are connected by an infrastructure that enables multidirectional communication, both locally and remotely. We present a vision in which the IoMusT enables the connection of digital and physical domains by means of appropriate information and communication technologies, fostering novel musical applications and services. The ecosystems associated with the IoMusT include interoperable devices and services that connect musicians and audiences to support musician-musician, audience-musicians, and audience-audience interactions. In this paper, we first propose a vision for the IoMusT and its motivations. We then discuss five scenarios illustrating how the IoMusT could support: 1) augmented and immersive concert experiences; 2) audience participation; 3) remote rehearsals; 4) music e-learning; and 5) smart studio production. We identify key capabilities missing from today’s systems and discuss the research needed to develop these capabilities across a set of interdisciplinary challenges. These encompass network communication (e.g., ultra-low latency and security), music information research (e.g., artificial intelligence for real-time audio content description and multimodal sensing), music interaction (e.g., distributed performance and music e-learning), as well as legal and responsible innovation aspects to ensure that future IoMusT services are socially desirable and undertaken in the public interest.

A Survey of Enabling Technologies for Network Localization, Tracking, and Navigation

Location information for events, assets, and individuals, mostly focusing on two dimensions so far, has triggered a multitude of applications across different verticals, such as consumer, networking, industrial, health care, public safety, and emergency response use cases. To fully exploit the potential of location awareness and enable new advanced location-based services, localization algorithms need to be combined with complementary technologies including accurate height estimation, i.e., three dimensional location, reliable user mobility classification, and efficient indoor mapping solutions. This survey provides a comprehensive review of such enabling technologies. In particular, we present cellular localization systems including recent results on 5G localization, and solutions based on wireless local area networks, highlighting those that are capable of computing 3D location in multi-floor indoor environments. We overview range-free localization schemes, which have been traditionally explored in wireless sensor networks and are nowadays gaining attention for several envisioned Internet of Things applications. We also present user mobility estimation techniques, particularly those applicable in cellular networks, that can improve localization and tracking accuracy. Regarding the mapping of physical space inside buildings for aiding tracking and navigation applications, we study recent advances and focus on smartphone-based indoor simultaneous localization and mapping approaches. The survey concludes with service availability and system scalability considerations, as well as security and privacy concerns in location architectures, discusses the technology roadmap, and identifies future research directions.

Packet Detection by Single OFDM Symbol in URLLC for Critical Industrial Control: A Realistic Study

Low-Latency Networking: Where Latency Lurks and How to Tame It

While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latency in the order of milliseconds or submilliseconds. However, these new stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that contribute to the layers’ design and fundamental performance limitations. We will be able to develop low-latency networks only if we address the problem of these complex interactions from the new point of view of submilliseconds latency. In this paper, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent tradeoffs between low latency and traditional performance metrics. We also review currently ongoing standardization activities in prominent standards associations, and discuss open problems for future research.

Fundamental Constraints for Time-slotted MAC Design in Wireless High Performance: the Realistic Perspective of Timing

How to Split UL/DL Antennas in Full-Duplex Cellular Networks

To further improve the potential of full-duplex communications, networks may employ multiple antennas at the base station or user equipment. To this end, networks that employ current radios usually deal with self-interference and multi-user interference by beamforming techniques. Although previous works investigated beamforming design to improve spectral efficiency, the fundamental question of how to split the antennas at a base station between uplink and downlink in full-duplex networks has not been investigated rigorously. This paper addresses this question by posing antenna splitting as a binary nonlinear optimization problem to minimize the sum mean squared error of the received data symbols. It is shown that this is an NP-hard problem. This combinatorial problem is dealt with by equivalent formulations, iterative convex approximations, and a binary relaxation. The proposed algorithm is guaranteed to converge to a stationary solution of the relaxed problem with much smaller complexity than exhaustive search. Numerical results indicate that the proposed solution is close to the optimal in both high and low self-interference scenarios, while the usually assumed antenna splitting is far from optimal. For a large number of antennas, a simple antenna splitting is close to the proposed solution. This reveals that the importance of antenna splitting diminishes with the number of antennas.

Optimal Node Deployment and Energy Provision for Wirelessly Powered Sensor Networks

In a typical wirelessly powered sensor network (WPSN), wireless chargers provide energy to sensor nodes by using wireless energy transfer (WET). The chargers can greatly improve the lifetime of a WPSN using energy beamforming by a proper charging scheduling of energy beams. However, the supplied energy still may not meet the energy demand of the sensor nodes. This issue can be alleviated by deploying redundant sensor nodes, which not only increase the total harvested energy, but also decrease the energy consumption per node, provided that an efficient sleep/awake scheduling of the nodes is performed. Such a problem of joint optimal sensor deployment, WET scheduling, and node activation is posed and investigated in this paper. The problem is an integer optimization that is challenging due to the binary decision variables and non-linear constraints. Based on the analysis of the necessary condition for the WPSN to be immortal, we decouple the original problem into a node deployment problem and a charging and activation scheduling problem. Then, we propose an algorithm and prove that it achieves the optimal solution under a mild condition. The simulation results show that the proposed algorithm reduces the number of nodes to deploy by approximately 16%, compared to a random-based approach. The simulation also shows that if the battery buffers are large enough, the optimality condition is easy to meet.

Communication Complexity of Dual Decomposition Methods for Distributed Resource Allocation Optimization

Dual decomposition methods are among the most prominent approaches for finding primal/dual saddle point solutions of resource allocation optimization problems. To deploy these methods in the emerging Internet of Things networks, which will often have limited data rates, it is important to understand the communication overhead they require. Motivated by this, we introduce and explore two measures of communication complexity of dual decomposition methods to identify the most efficient communication among these algorithms. The first measure is epsilon-complexity, which quantifies the minimal number of bits needed to find an epsilon-accurate solution. The second measure is b-complexity, which quantifies the best possible solution accuracy that can be achieved from communicating b bits. We find the exact epsilon- and b-complexity of a class of resource allocation problems where a single supplier allocates resources to multiple users. For both the primal and dual problems, the epsilon-complexity grows proportionally to log2(1/epsilon) and the b-complexity proportionally to 1/2^b. We also introduce a variant of the epsilon- and b-complexity measures where only algorithms that ensure primal feasibility of the iterates are allowed. Such algorithms are often desirable because overuse of the resources can overload the respective systems, e.g., by causing blackouts in power systems. We provide upper and lower bounds on the convergence rate of these primal feasible complexity measures. In particular, we show that the b-complexity cannot converge at a faster rate than O(1/b). Therefore, the results demonstrate a tradeoff between fast convergence and primal feasibility. We illustrate the result by numerical studies.
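To make the one-bit-per-iteration idea concrete, the sketch below runs a price bisection for a toy single-supplier problem in which each user's demand is a_i/price; the demand model, capacity, and price bracket are illustrative assumptions, not the paper's construction. Each iteration communicates exactly one bit (whether aggregate demand exceeds capacity), so the price uncertainty halves per bit, matching the 1/2^b scaling.

```python
def one_bit_price_bisection(a, capacity, bits, lo=1e-6, hi=100.0):
    """Bisect a price; each round the supplier broadcasts ONE bit:
    whether the measured aggregate demand exceeds capacity."""
    for _ in range(bits):
        price = 0.5 * (lo + hi)
        demand = sum(ai / price for ai in a)   # users respond locally
        if demand > capacity:
            lo = price   # demand too high: the price must rise
        else:
            hi = price   # demand fits: the price can fall
    return 0.5 * (lo + hi)

# Closed-form optimum for this toy demand model: price* = sum(a)/capacity
a, capacity = [1.0, 2.0, 3.0], 4.0
for b in (8, 16, 24):
    print(b, abs(one_bit_price_bisection(a, capacity, b) - 1.5))
```

Each extra communicated bit roughly halves the error of the recovered price, which is the geometric decay that the b-complexity result formalizes.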

Towards Immortal Wireless Sensor Networks by Optimal Energy Beamforming and Data Routing

The lifetime of a wireless sensor network (WSN) determines how long the network can be used to monitor the area of interest. Hence, it is one of the most important performance metrics for WSNs. The approaches used to prolong the lifetime can be broadly divided into two categories: reducing the energy consumption, such as designing an efficient routing, and providing extra energy, such as using wireless energy transfer (WET) to charge the nodes. Contrary to the previous line of work where only one of those two aspects is considered, we investigate these two together. In particular, we consider a scenario where dedicated wireless chargers transfer energy wirelessly to sensors. The overall goal is to maximize the minimum sampling rate of the nodes while keeping the energy consumption of each node smaller than the energy it receives. This is done by properly designing the routing of the sensors and the WET strategy of the chargers. Although such a joint routing and energy beamforming problem is non-convex, we show that it can be transformed into a semi-definite optimization problem (SDP). We then prove that the strong duality of the SDP problem holds, and hence the optimal solution of the SDP problem is attained. Accordingly, the optimal solution for the original problem is achieved by a simple transformation. We also propose a low-complexity approach based on pre-determined beamforming directions. Moreover, based on the alternating direction method of multipliers (ADMM), the distributed implementations of the proposed approaches are studied. The simulation results illustrate the significant performance improvement achieved by the proposed methods. In particular, the proposed energy beamforming scheme significantly outperforms the schemes where one does not use energy beamforming, or one does not use optimized routing.
A thorough investigation of the effect of system parameters, including the number of antennas, the number of nodes, and the number of chargers, on the system performance is provided. The promising convergence behaviour of the proposed distributed approaches is illustrated.


Distributed Pareto-optimal state estimation using sensor networks

A novel model-based dynamic distributed state estimator is proposed using sensor networks. The estimator consists of a filtering step – which uses a weighted combination of information provided by the sensors – and a model-based predictor of the system’s state. The filtering weights and the model-based prediction parameters jointly minimize – at each time-step – the bias and the variance of the prediction error in a Pareto optimization framework. The simultaneous distributed design of the filtering weights and of the model-based prediction parameters is considered, differently from what is normally done in the literature. It is assumed that the weights of the filtering step are in general unequal for the different state components, unlike existing consensus-based approaches. The state, the measurements, and the noise components are allowed to be individually correlated, but no probability distribution knowledge is assumed for the noise variables. Each sensor can measure only a subset of the state variables. The convergence properties of the mean and of the variance of the prediction error are demonstrated, and they hold both for the global and the local estimation errors at any network node. Simulation results illustrate the performance of the proposed method, obtaining better results than state-of-the-art distributed estimation approaches.

Low-Overhead Coordination in Sub-28 Millimeter-Wave Networks

Interference Model Similarity Index and Its Applications to Millimeter-Wave Networks

In wireless communication networks, interference models are routinely used for tasks such as performance analysis, optimization, and protocol design. These tasks are heavily affected by the accuracy and tractability of the interference models. Yet, quantifying the accuracy of these models remains a major challenge. In this paper, we propose a new index for assessing the accuracy of any interference model under any network scenario. Specifically, the index quantifies the ability of any interference model to correctly predict harmful interference events, that is, link outages. We consider specific wireless scenarios of both conventional sub-6 GHz and millimeter-wave networks and demonstrate how our index yields insights into the possibility of simplifying the set of dominant interferers, replacing a Nakagami or Rayleigh random fading by an equivalent deterministic channel, and ignoring antenna sidelobes. Our analysis reveals that in highly directional antenna settings with obstructions, even simple interference models (such as the classical protocol model) are accurate, while with omnidirectional antennas, more sophisticated and complex interference models (such as the classical physical model) are necessary. Our new approach makes it possible to adopt the simplest interference model of adequate accuracy for every wireless network.

Low complexity content replication through clustering in Content-Delivery Networks

Contemporary Content Delivery Networks (CDN) handle a vast number of content items. At such a scale, the replication schemes require a significant amount of time to calculate and realize cache updates, and hence they are impractical in highly dynamic environments. This paper introduces cluster-based replication, whereby content items are organized in clusters according to a set of features given by the cache/network management entity. Each cluster is treated as a single item with certain attributes, e.g., size, popularity, etc., and it is then altogether replicated in network caches so as to minimize overall network traffic. Clustering items reduces replication complexity; hence it enables faster and more frequent cache updates, and it facilitates more accurate tracking of content popularity. However, clustering introduces some performance loss because replication of clusters is more coarse-grained compared to replication of individual items. This tradeoff can be addressed through proper selection of the number and composition of clusters. Since the exact optimal number of clusters cannot be derived analytically, an efficient approximation method is proposed. Extensive numerical evaluations of time-varying content popularity scenarios show that the proposed approach reduces core network traffic, while being robust to errors in popularity estimation.

Joint Optimal Pricing and Electrical Efficiency Enforcement for Rational Agents in Microgrids

In electrical distribution grids, the constantly increasing number of power generation devices based on renewables demands a transition from a centralized to a distributed generation paradigm. In fact, power injection from distributed energy resources (DERs) can be selectively controlled to achieve other objectives beyond supporting loads, such as the minimization of the power losses along the distribution lines and the subsequent increase of the grid hosting capacity. However, these technical achievements are only possible if, alongside electrical optimization schemes, a suitable market model is set up to promote cooperation from the end users. In contrast with the existing literature, where energy trading and electrical optimization of the grid are often treated separately, or the trading strategy is tailored to a specific electrical optimization objective, in this paper we consider their joint optimization. We also allow for a modular approach, where the market model can support any smart grid optimization goal. Specifically, we present a multi-objective optimization problem accounting for energy trading, where: 1) DERs try to maximize their profit, resulting from selling their surplus energy; 2) the loads try to minimize their expense; and 3) the main power supplier aims at maximizing the electrical grid efficiency through a suitable discount policy. This optimization problem is proved to be non-convex, and an equivalent convex formulation is derived. Centralized solutions are discussed and a procedure to distribute the solution is proposed. Numerical results demonstrate the effectiveness of the optimal policies thus obtained, showing that the proposed model results in economic benefits for all the users (generators and loads) and in an increased electrical efficiency for the grid.

Pilot Precoding and Combining in Multiuser MIMO Networks

Although the benefits of precoding and combining data signals are widely recognized, the potential of these techniques for pilot transmission is not fully understood. This is particularly relevant for multiuser multiple-input multiple-output (MU-MIMO) cellular systems using millimeter-wave (mmWave) communications, where multiple antennas have to be used both at the transmitter and the receiver to overcome the severe path loss. In this paper, we characterize the gains of pilot precoding and combining in terms of channel estimation quality and achievable data rate. Specifically, we consider three uplink pilot transmission scenarios in a mmWave MU-MIMO cellular system: 1) non-precoded and uncombined, 2) precoded but uncombined, and 3) precoded and combined. We show that a simple precoder that utilizes only the second-order statistics of the channel reduces the variance of the channel estimation error by a factor that is proportional to the number of user equipment (UE) antennas. We also show that using a linear combiner design based on the second-order statistics of the channel significantly reduces multiuser interference and provides the possibility of reusing some pilots. Specifically, in the large antenna regime, pilot precoding and combining help to accommodate a large number of UEs in one cell, significantly improve channel estimation quality, boost the signal-to-noise ratio of the UEs located close to the cell edges, alleviate pilot contamination, and address the imbalanced coverage of pilot and data signals.

On the Spectral Efficiency and Fairness in Full-Duplex Cellular Networks

To increase the spectral efficiency of wireless networks without requiring full-duplex capability of user devices, a potential solution is the recently proposed three-node full-duplex mode. To realize this potential, networks employing three-node full-duplex transmissions must deal with self-interference and user-to-user interference, which can be managed by frequency channel and power allocation techniques. Whereas previous works investigated either spectrally efficient or fair mechanisms, a scheme that balances these two metrics among users is investigated in this paper. This balancing scheme is based on a new solution method for the multi-objective optimization problem of maximizing the weighted sum of the per-user spectral efficiency and the minimum spectral efficiency among users. The mixed integer non-linear nature of this problem is dealt with by Lagrangian duality. Based on the proposed solution approach, a low-complexity centralized algorithm is developed, which relies on large-scale fading measurements and can be advantageously implemented at the base station. Numerical results indicate that the proposed algorithm increases the spectral efficiency and fairness among users without the need for weighting the spectral efficiency. An important conclusion is that managing user-to-user interference by resource assignment and power control is crucial for ensuring spectrally efficient and fair operation of full-duplex networks.

A Sensor Scheduling Protocol for Energy-Efficiency and Robustness to Failures

Spectrum Sharing in mmWave Cellular Networks via Cell Association, Coordination, and Beamforming

This paper investigates the extent to which spectrum sharing in millimeter-wave (mmWave) networks with multiple cellular operators is a viable alternative to traditional dedicated spectrum allocation. Specifically, we develop a general mathematical framework to characterize the performance gain that can be obtained when spectrum sharing is used, as a function of the underlying beamforming, operator coordination, bandwidth, and infrastructure sharing scenarios. The framework is based on joint beamforming and cell association optimization, with the objective of maximizing the long-term throughput of the users. Our asymptotic and non-asymptotic performance analyses reveal five key points: 1) spectrum sharing with light on-demand intra- and inter-operator coordination is feasible, especially at higher mmWave frequencies (for example, 73 GHz); 2) directional communications at the user equipment substantially alleviate the potential disadvantages of spectrum sharing (such as higher multiuser interference); 3) large numbers of antenna elements can reduce the need for coordination and simplify the implementation of spectrum sharing; 4) while inter-operator coordination can be neglected in the large-antenna regime, intra-operator coordination can still bring gains by balancing the network load; and 5) critical control signals among base stations, operators, and user equipment should be protected from the adverse effects of spectrum sharing, for example by means of exclusive resource allocation. The results of this paper, and their extensions obtained by relaxing some ideal assumptions, can provide important insights for future standardization and spectrum policy.

Distributed Optimization of Channel Access Strategies in Reactive Cognitive Networks

In reactive cognitive networks, the channel access and the transmission decisions of the cognitive terminals have a long-term effect on the network dynamics. When multiple cognitive terminals coexist, the optimization and implementation of their strategy is challenging and may require considerable coordination overhead. In this paper, such a challenge is addressed by a novel framework for the distributed optimization of transmission and channel access strategies. The objective of the cognitive terminals is to find the optimal action distribution depending on the current network state. To reduce the coordination overhead, in the proposed framework the cognitive terminals distributively coordinate the policy, whereas the action in each individual time slot is independently selected by the terminals. The optimization of the transmission and channel access strategy is performed iteratively by using the alternate convex optimization technique, where at each iteration a cognitive terminal is selected to optimize its own action distribution while those of the other cognitive terminals are kept fixed. For a traditional primary-secondary user network configuration, numerical results show that the proposed algorithm converges to a stable solution in a small number of iterations, with limited performance loss with respect to the perfectly coordinated case.

On the relay-fallback tradeoff in millimeter wave wireless systems

Millimeter wave (mmWave) communications systems are promising candidates to support extremely high data rate services in future wireless networks. MmWave communications exhibit high penetration loss (blockage) and require directional transmissions to compensate for severe channel attenuations and for high noise powers. When blockage occurs, there are at least two simple prominent options: 1) switching to the conventional microwave frequencies (fallback option) and 2) using an alternative non-blocked path (relay option). However, currently it is not clear under which conditions and network parameters one option is better than the other. To investigate the performance of the two options, this paper proposes a novel blockage model that allows deriving the maximum achievable throughput and delay performance of both options. A simple criterion to decide which option should be taken under which network condition is provided. By a comprehensive performance analysis, it is shown that the right option depends on the payload size, beam training overhead, and blockage probability. For a network with light traffic and low probability of blockage in the direct link, the fallback option is throughput- and delay-optimal. For a network with heavy traffic demands and semistatic topology (low beam-training overhead), the relay option is preferable.
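A back-of-the-envelope sketch of the tradeoff can be written down directly; the rates, training overhead, and blockage probability below are invented illustrative numbers, and the simple expected-delivery-time expressions stand in for the paper's blockage-model analysis.

```python
def expected_time_fallback(payload, rate_mmw, rate_mw, p_block, t_train):
    """Train the mmWave beam once; if the direct link is blocked,
    deliver the payload over the slower microwave band instead."""
    return (t_train + (1 - p_block) * payload / rate_mmw
            + p_block * payload / rate_mw)

def expected_time_relay(payload, rate_mmw, t_train):
    """Use a non-blocked two-hop mmWave path: beam training and
    transmission costs are paid on both hops."""
    return 2 * t_train + 2 * payload / rate_mmw

# Hypothetical numbers: mmWave 100x faster than microwave, 50% blockage
t_fb_small = expected_time_fallback(0.1, 10.0, 0.1, 0.5, 1.0)
t_rl_small = expected_time_relay(0.1, 10.0, 1.0)
t_fb_large = expected_time_fallback(100.0, 10.0, 0.1, 0.5, 1.0)
t_rl_large = expected_time_relay(100.0, 10.0, 1.0)
print(t_fb_small < t_rl_small, t_rl_large < t_fb_large)  # True True
```

With these numbers the fallback option wins for the light payload while the relay option wins for the heavy one, in line with the qualitative conclusion above.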

Optimality of Radio Power Control Via Fast-Lipschitz Optimization

In wireless network resource allocation, the radio power control problems are often solved by fixed-point algorithms. Although these algorithms give feasible problem solutions, such solutions often lack a notion of problem optimality. This paper reconsiders well-known fixed-point algorithms, such as those with standard and type-II standard interference functions, and investigates the conditions under which they give optimal solutions. The optimality is established by the recently proposed fast-Lipschitz optimization framework. To apply such a framework, the analysis is performed by a logarithmic transformation of variables that gives tractable fast-Lipschitz problems. It is shown how the logarithmic problem constraints are contractive by the standard or type-II standard assumptions on the power control problem, and how sets of cost functions fulfill the fast-Lipschitz qualifying conditions. The analysis of nonmonotonic interference functions allows establishing a new qualifying condition for fast-Lipschitz optimization. The results are illustrated by considering power control problems with standard interference functions, problems with type-II standard interference functions, and a case of subhomogeneous power control problems. Given the generality of fast-Lipschitz optimization compared to traditional methods for resource allocation, it is concluded that such an optimization may help to determine the optimality of many resource allocation problems in wireless networks.
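As a concrete instance of such a fixed-point algorithm, the sketch below runs the classical SINR-target iteration, whose update map is a standard interference function; the channel gains, SINR targets, and noise power are made-up illustrative values.

```python
def fixed_point_power_control(G, gamma, noise, iters=200):
    """Iterate p <- I(p), where I_i(p) = (gamma_i / G[i][i]) *
    (noise + interference at link i) is a standard interference
    function: positive, monotone, and scalable."""
    n = len(G)
    p = [1.0] * n
    for _ in range(iters):
        p = [gamma[i] / G[i][i]
             * (noise + sum(G[i][j] * p[j] for j in range(n) if j != i))
             for i in range(n)]
    return p

# Two-link example with illustrative gains and unit SINR targets
G = [[1.0, 0.1], [0.2, 1.0]]
p = fixed_point_power_control(G, gamma=[1.0, 1.0], noise=0.1)
sinr = [G[i][i] * p[i] / (0.1 + sum(G[i][j] * p[j]
        for j in range(2) if j != i)) for i in range(2)]
print([round(s, 6) for s in sinr])   # both links meet their target
```

At the fixed point every link exactly meets its SINR target, which is the feasibility the text refers to; the fast-Lipschitz framework then asks when this fixed point is also the optimum of a cost function.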

Practical Coding Schemes For Bandwidth Limited One-Way Communication Resource Allocation

This paper investigates resource allocation algorithms that use limited communication - where the supplier of a resource broadcasts a coordinating signal using one bit of information to users per iteration. Rather than relaying anticipated consumption to the supplier, the users locally compute their allocation, while the supplier measures the total resource consumption. Since the users do not compare their local consumption against the supplier’s capacity at each iteration, they can easily overload the system and cause an outage (for example, a blackout in power networks). To address this challenge, this paper investigates pragmatic coding schemes, called PF-codes (Primal-Feasible codes), that not only allow the restriction of communication to a single bit of information, but also avoid system overload due to users’ heavy consumption. We derive a worst-case lower bound on the number of bits needed to achieve any desired accuracy using PF-codes. In addition, we demonstrate how to construct time-invariant and time-varying PF-codes. We provide an upper bound on the number of bits needed to achieve any desired solution accuracy using time-invariant PF-codes. Remarkably, the difference between the upper and lower bound is only 2 bits. It is proved that the time-varying PF-codes asymptotically converge to the true primal/dual optimal solution. Simulations demonstrating the accuracy of our theoretical analyses are presented.

Distributed Spectral Efficiency Maximization in Full-Duplex Cellular Networks

Three-node full-duplex is a promising new transmission mode between a full-duplex capable wireless node and two other wireless nodes that use half-duplex transmission and reception, respectively. Although three-node full-duplex transmissions can increase the spectral efficiency without requiring full-duplex capability of user devices, inter-node interference - in addition to the inherent self-interference - can severely degrade the performance. Therefore, as methods that provide effective self-interference mitigation evolve, the management of inter-node interference is becoming increasingly important. This paper considers a cellular system in which a full-duplex capable base station serves a set of half-duplex capable users. As the spectral efficiencies achieved by the uplink and downlink transmissions are inherently intertwined, the objective is to devise channel assignment and power control algorithms that maximize the weighted sum of the uplink-downlink transmissions. To this end, a distributed auction-based channel assignment algorithm is proposed, in which the scheduled uplink users and the base station jointly determine the set of downlink users for full-duplex transmission. Realistic system simulations indicate that the spectral efficiency can be up to 89% better than using the traditional half-duplex mode. Furthermore, when the self-interference cancelling level is high, the impact of the user-to-user interference is severe unless properly managed.

Lifetime Maximization for Sensor Networks with Wireless Energy Transfer

In Wireless Sensor Networks (WSNs), wireless energy transfer (WET) is a promising technique to supply energy to the sensor nodes. One of the most efficient procedures to transfer energy to the sensor nodes consists in directing a sharp wireless energy beam from the base station to one node at a time. A natural fundamental question is what lifetime WET can ensure and how the network lifetime can be maximized by scheduling the transmissions of the energy beams. In this paper, such a question is addressed by posing a new lifetime maximization problem for WET enabled WSNs. The binary nature of the energy transmission process introduces a binary constraint in the optimization problem, which makes the investigation of the fundamental properties of WET and the computation of the optimal solution challenging. The sufficient condition under which WET makes WSNs immortal is established as a function of the WET parameters. When such a condition is not met, a solution algorithm for the maximum lifetime problem is proposed. The numerical results show that the lifetime achieved by the proposed algorithm increases by about 50% compared to the case without WET, for a WSN with a small to medium number of nodes. This suggests that it is desirable to schedule WET to prolong the lifetime of WSNs having small or medium network sizes.

On Maximizing Sensor Network Lifetime by Energy Balancing

Many physical systems, such as water/electricity distribution networks, are monitored by battery-powered wireless sensor networks (WSNs). Since battery replacement of sensor nodes is generally difficult, long-term monitoring can only be achieved if the operation of the WSN nodes contributes to long WSN lifetime. Two prominent techniques for prolonging WSN lifetime are 1) optimal sensor activation and 2) efficient data gathering and forwarding based on compressive sensing. These techniques are feasible only if the activated sensor nodes establish a connected communication network (connectivity constraint), and satisfy a compressive sensing decoding constraint (cardinality constraint). These two constraints make the problem of maximizing network lifetime via sensor node activation and compressive sensing NP-hard. To overcome this difficulty, an alternative approach that iteratively solves energy balancing problems is proposed. However, understanding whether maximizing network lifetime and energy balancing are aligned objectives is a fundamental open issue. The analysis reveals that the two optimization problems give different solutions, but the difference between the lifetime achieved by the energy balancing approach and the maximum lifetime is small when the initial energy at the sensor nodes is significantly larger than the energy consumed for a single transmission. The lifetime achieved by energy balancing is asymptotically optimal, and the achievable network lifetime is at least 50% of the optimum. Analysis and numerical simulations quantify the efficiency of the proposed energy balancing approach.
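The intuition that balancing per-node energy consumption lengthens lifetime can be seen in a toy model, where network lifetime is the number of rounds until the first node depletes its battery; the node energies and per-round costs below are invented for illustration and do not reflect the paper's routing or activation constraints.

```python
def lifetime(energy, per_round):
    """Network lifetime = rounds until the first node is depleted."""
    return min(e / c for e, c in zip(energy, per_round))

energy = [100.0, 100.0, 100.0]
unbalanced = [3.0, 1.0, 1.0]   # same total load, but one hot-spot node
balanced = [5.0 / 3.0] * 3     # the same total load spread evenly

print(lifetime(energy, unbalanced), lifetime(energy, balanced))
```

The hot-spot node dies first and ends the unbalanced network's lifetime early, even though both schedules spend the same total energy per round; equalizing the per-node drain is what the energy balancing subproblems aim at.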

Robustness Analysis for an Online Decentralized Descent Power Allocation Algorithm

As independent service providers increasingly inject power (from renewable sources such as wind) into the power distribution system, the system will likely experience increasingly significant fluctuations in power supply. Fluctuations in power generation, coupled with time-varying consumption of electricity on the demand side and the massive scale of power distribution networks, present the need not only to design decentralized power allocation policies, but also to understand how robust they are to dynamic demand and supply. In this paper, via an Online Decentralized Dual Descent (OD3) algorithm with communication for decentralized coordination, we design power allocation policies in a power distribution system. Based on the OD3 algorithm, we determine and characterize (in the worst case) how much of the observed social welfare and price volatility can be explained by fluctuations in the consumption utilities of users and the capacities of suppliers. In coordinating the power allocation, the OD3 algorithm uses a protocol in which the users' consumption at each time step depends on the coordinating (price) signal, which is iteratively updated based on aggregate power consumption. Convergence properties and performance guarantees of the OD3 algorithm are analyzed by characterizing the difference between the online decision and the optimal decision. As more renewable energy sources are integrated into the power grid, the results in this paper provide a framework to understand how volatility in the power system propagates to markets. The theoretical results in the paper are validated and illustrated by numerical experiments.

Distributed Resource Allocation Using One-Way Communication with Applications to Power Networks

Typical coordination schemes for future power grids require two-way communication. Since the number of end power-consuming devices is large, the bandwidth requirements of such two-way communication schemes may be prohibitive. Motivated by this observation, we study distributed coordination schemes that require only one-way, limited communication. In particular, we investigate how a dual descent distributed optimization algorithm can be employed in power networks using one-way communication. In this iterative algorithm, system coordinators broadcast coordinating (or pricing) signals to the users/devices, who update their power consumption based on the received signal. The system coordinators then update the coordinating signals based on physical measurements of the aggregate power usage. We provide conditions to guarantee the feasibility of the aggregate power usage at each iteration so as to avoid blackouts. Furthermore, we prove the convergence of the algorithm under these conditions, and establish its rate of convergence. We illustrate the performance of our algorithm using numerical simulations. These results show that one-way limited communication may be viable for coordinating/operating future smart grids.
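
The one-way protocol described above can be sketched as a projected dual-descent price broadcast, assuming simple quadratic user utilities (an illustration, not the paper's exact model):

```python
# Hedged sketch of a dual-descent price broadcast (illustrative only):
# the coordinator broadcasts a price and physically measures aggregate
# usage; users never send messages back (one-way communication).
a = [4.0, 3.0, 2.0]        # assumed quadratic utility slopes of 3 users
C = 4.0                    # supplier capacity
lam, step = 0.0, 0.2       # price and dual step size

for _ in range(200):
    # each user maximizes a_i*x - x^2/2 - lam*x  =>  x_i = max(a_i - lam, 0)
    usage = [max(ai - lam, 0.0) for ai in a]
    # coordinator measures the aggregate and updates the broadcast price
    lam = max(lam + step * (sum(usage) - C), 0.0)

print(round(sum(usage), 2))   # aggregate settles near the capacity C
```

The projection to nonnegative prices and a sufficiently small step size are what make the iteration converge; the paper's feasibility conditions additionally keep the aggregate below capacity at every iteration, which this toy sketch does not enforce.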

Adaptive congestion control in cognitive industrial wireless sensor networks

Strict quality of service requirements of industrial applications, challenged by harsh environments and severe interference, especially in multi-vendor sites, demand the incorporation of cognition in industrial wireless sensor networks (IWSNs). In this paper, a lightweight distributed protocol for congestion regulation in cognitive IWSNs is proposed to improve channel utilization while ensuring predetermined performance for specific devices, called primary devices. By sensing the congestion level of a channel with local measurements, a novel congestion control protocol is proposed by which every device decides whether it should continue operating on the channel or vacate it in case of saturation. Such a protocol dynamically changes the congestion level based on variations of the non-stationary wireless environment as well as the traffic demands of the devices. The proposed protocol is implemented on STM32W108 chips that offer IEEE 802.15.4 standard communications. Experimental results confirm substantial performance enhancement compared to the original standard, while imposing almost no signaling/computational overhead. In particular, channel utilization is increased by 56% with fairness and delay guarantees. The presented results provide useful insights into low-complexity adaptive congestion control mechanisms in IWSNs.

Design aspects of short range millimeter wave networks: A MAC layer perspective

Increased density of wireless devices, ever growing demands for extremely high data rate, and spectrum scarcity at microwave bands make the millimeter wave (mmWave) frequencies an important player in future wireless networks. However, mmWave communication systems exhibit severe attenuation, blockage, deafness, and may need microwave networks for coordination and fall-back support. To compensate for high attenuation, mmWave systems exploit highly directional operation, which in turn substantially reduces the interference footprint. The significant differences between mmWave networks and legacy communication technologies challenge the classical design approaches, especially at the medium access control (MAC) layer, which has received comparatively less attention than PHY and propagation issues in the literature so far. In this paper, the MAC layer design aspects of short-range mmWave networks are discussed. In particular, we explain why current mmWave standards fail to fully exploit the potential advantages of short range mmWave technology, and argue for the necessity of new collision-aware hybrid resource allocation frameworks with on-demand control messages, the advantages of a collision notification message, and the potential of multihop communication to provide reliable mmWave connections.

Performance limitations of localization based on ranging, speed, and orientation

Estimating the position of a mobile node by linear sensor fusion of ranging, speed, and orientation measurements has the potential to achieve high localization accuracy. Nevertheless, the design of these sensor fusion algorithms remains uncertain as long as their fundamental limitations are unknown. Despite the substantial research focus on these sensor fusion methods, the characterization of the Cramér-Rao Lower Bound (CRLB) has not yet been satisfactorily addressed. In this paper, the existence and derivation of the posterior CRLB for the linear sensor fusion of ranging, speed, and orientation measurements are investigated. The major difficulty in the analysis is that ranging and orientation are not linearly related to the position, which makes it hard to derive the posterior CRLB. This difficulty is overcome by introducing the concept of the posterior CRLB in the Cauchy principal value sense and deriving explicit upper and lower bounds on the posterior Fisher information matrix. Numerical simulation results are provided for both the parametric CRLB and the posterior CRLB, comparing some widely used methods from the literature to the derived bound. It is shown that many existing methods based on Kalman filtering may be far from the fundamental limitations given by the CRLB.

Distributed association and relaying with fairness in millimeter wave networks

Millimeter wave (mmWave) systems are emerging as an essential technology for enabling extremely high data rate wireless communications. The main limiting factors of mmWave systems are blockage (high penetration loss) and deafness (misalignment between the beams of the transmitter and receiver). To alleviate these problems, it is imperative to incorporate efficient association and relaying between terminals and access points. Unfortunately, the existing association techniques are designed for the traditional interference-limited networks, and thus are highly suboptimal for mmWave communications due to narrow-beam operations and the resulting non-negligible interference-free behavior. This paper introduces a distributed approach that solves the joint association and relaying problem in mmWave networks considering the load balancing at access points. The problem is posed as a novel stochastic optimization problem, which is solved by distributed auction algorithms where the clients and relays act asynchronously to achieve optimal client-relay-access point association. It is shown that the algorithms provably converge to a solution that maximizes the aggregate logarithmic utility within a desired bound. Numerical results allow quantification of the performance enhancements introduced by the relays, and the substantial improvements of the network throughput and fairness among the clients by the proposed association method as compared to standard approaches. It is concluded that mmWave communications with proper association and relaying mechanisms can support extremely high data rates, connection reliability, and fairness among the clients.
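
The auction mechanism described above can be illustrated with a minimal Bertsekas-style forward auction on a two-client, two-access-point toy instance (assumed utilities; the paper's distributed, asynchronous protocol with relays and load balancing is considerably richer):

```python
# Minimal forward-auction sketch in the spirit of auction assignment
# algorithms (illustrative only; not the paper's exact protocol).
util = [[10.0, 3.0], [8.0, 9.0]]   # util[client][access point], assumed
eps = 0.1                          # bidding increment
price = [0.0, 0.0]
owner = [None, None]               # owner[ap] = client currently assigned
unassigned = [0, 1]

while unassigned:
    c = unassigned.pop()
    # net value of each AP at current prices
    vals = [util[c][j] - price[j] for j in range(2)]
    best = max(range(2), key=lambda j: vals[j])
    second = min(vals)             # second-best value (only two APs here)
    price[best] += vals[best] - second + eps
    if owner[best] is not None:
        unassigned.append(owner[best])   # outbid client bids again
    owner[best] = c

print(owner)   # client-AP association within eps of the optimum
```

Raising the winning access point's price by the bid margin plus eps is what guarantees termination with an assignment whose aggregate utility is within a bounded distance of the optimum, matching the convergence-within-a-bound claim above.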

Clustered Content Replication for Hierarchical Content Delivery Networks

Caching at the network edge is considered a promising solution for addressing the ever-increasing traffic demand of mobile devices. The problem of proactive content replication in hierarchical cache networks, which consist of both network edge and core network caches, is considered in this paper. This problem arises because network service providers wish to distribute content efficiently so that user-perceived performance is maximized. Nevertheless, current high-complexity replication algorithms are impractical due to the vast number of involved content items. Clustering algorithms inspired by machine learning can be leveraged to simplify content replication and reduce its complexity. Specifically, similar items can be clustered together, e.g., according to their popularity in space and time. Replication at the cluster level is a problem of substantially smaller dimensionality, but it may result in suboptimal decisions compared to item-level replication. The factors that cause performance loss are identified and a clustering scheme that addresses the specific challenges of content replication is devised. Extensive numerical evaluations, based on realistic traffic data, demonstrate that for reasonable cluster sizes the impact on actual performance is negligible.

User association and the alignment-throughput tradeoff in millimeter wave networks

Millimeter wave (mmWave) communication is a promising candidate for future extremely high data rate wireless networks. The main challenges of mmWave communications are deafness (misalignment between the beams of the transmitter and receiver) and blockage (severe attenuation due to obstacles). Due to deafness, prior to link establishment between a client and its access point, a time-consuming alignment/beam training procedure is necessary, whose complexity depends on the operating beamwidth. Addressing blockage may require a reassociation to non-blocked access points, which in turn imposes additional alignment overhead. This paper introduces a unifying framework to maximize network throughput considering both deafness and blockage. A distributed auction-based solution is proposed, where the clients and access points act asynchronously to achieve optimal association along with the optimal operating beamwidth. It is shown that the proposed algorithm provably converges to a solution that maximizes the aggregate network utility within a desired bound. Convergence time and performance bounds are derived in closed form. Numerical results confirm the superior throughput performance of the proposed solution compared to existing approaches, and highlight the existence of a tradeoff between alignment overhead and achievable throughput that affects the optimal association.

On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems

Nonconvex and structured optimization problems arise in many engineering applications that demand scalable and distributed solution methods. The study of the convergence properties of these methods is in general difficult due to the nonconvexity of the problem. In this paper, two distributed solution methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses an alternating optimization, which in turn makes it scalable. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). It is shown that ADPM for nonconvex problems asymptotically converges to a primal feasible point under mild conditions, and an additional condition ensuring that it asymptotically reaches the standard first-order necessary conditions for local optimality is introduced. In the case of ADMM, novel sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions are established. Based on this, the complete convergence of ADMM for a class of low-dimensional problems is characterized. Finally, the results are illustrated by applying ADPM and ADMM to a nonconvex localization problem in wireless sensor networks.

Millimeter wave ad hoc networks: Noise-limited or interference-limited?

In millimeter wave (mmWave) communication systems, narrow beam operations overcome severe channel attenuations, reduce multiuser interference, and thus introduce the new concept of noise-limited mmWave wireless networks. The regime of the network, whether noise-limited or interference-limited, heavily reflects on the medium access control (MAC) layer throughput and on proper resource allocation and interference management strategies. Yet, the alternating presence of these regimes and, more importantly, their dependence on the mmWave design parameters are ignored in the current approaches to mmWave MAC layer design, with potentially disastrous consequences on the throughput/delay performance. In this paper, tractable closed-form expressions for the collision probability and MAC layer throughput of mmWave networks, operating under slotted ALOHA and TDMA, are derived. The new analysis reveals that mmWave networks may exhibit a non-negligible transitional behavior from a noise-limited regime to an interference-limited regime, depending on the density of the transmitters, the density and size of obstacles, the transmission probability, the beamwidth, and the transmit power. It is concluded that a new framework of adaptive hybrid resource allocation, containing a proactive contention-based phase followed by a reactive contention-free one with dynamic phase duration, is necessary to cope with such transitional behavior.
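
The regime transition can be illustrated with a first-order collision model (an assumption for illustration, not the paper's closed-form expression): under slotted ALOHA, an interferer disturbs the receiver only if it transmits and both its beam and the receiver's beam happen to point the right way.

```python
# First-order illustrative model (assumed, not the paper's derivation):
# per-interferer collision chance = transmit prob * beam-alignment prob,
# with alignment prob (w/360)^2 for pencil beams of width w degrees.
def collision_prob(n_interferers, p, beamwidth_deg):
    q = p * (beamwidth_deg / 360.0) ** 2
    return 1.0 - (1.0 - q) ** n_interferers

for w in (10, 60, 180, 360):
    print(w, round(collision_prob(20, 0.5, w), 3))
```

With 20 neighbors, a 10-degree beam keeps collisions below 1% (effectively noise-limited), while a quasi-omnidirectional beam makes them near-certain (interference-limited), mirroring the transitional behavior the abstract describes as a function of density, transmission probability, and beamwidth.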

Beam-searching and Transmission Scheduling in Millimeter Wave Communications

Millimeter wave (mmWave) wireless networks rely on narrow beams to support multi-gigabit data rates. Nevertheless, the alignment of transmitter and receiver beams is a time-consuming operation, which introduces an alignment-throughput tradeoff. A wider beamwidth reduces the alignment overhead, but also leads to reduced directivity gains. Moreover, existing mmWave standards schedule a single transmission in each time slot, although directional communications facilitate multiple concurrent transmissions. In this paper, a joint consideration of the problems of beamwidth selection and scheduling is proposed to maximize the effective network throughput. The resulting optimization problem requires exact knowledge of the network topology, which may not be available in practice. Therefore, two standard-compliant approximation algorithms are developed, which rely on underestimation and overestimation of the interference. The first one aims to maximize the reuse of the available spectrum, whereas the second one is a more conservative approach that schedules together only links that cause no interference. Extensive performance analysis provides useful insights into the directionality level and the number of concurrent transmissions that should be pursued. Interestingly, extremely narrow beams are in general not optimal.

Optimal sensor placement for bacteria detection in water distribution networks

New biosensors make possible the real-time detection of bacteria and other bio-pollutants in water distribution networks and the real-time control of water quality. However, the limited communication capabilities of these sensors, which are placed underground, and their limited number, due to their high cost, pose significant challenges for deployment and reliable monitoring. This paper presents a preliminary study of the static optimal sensor placement of a wireless biosensor network in a water distribution network for real-time detection of bacterial contamination. An optimal sensor placement strategy is proposed, which maximizes the probability of detection with a limited number of sensors while ensuring a connected communication topology. A lightweight algorithm that solves the optimal placement problem is developed. The performance of the proposed algorithm is evaluated through simulations, considering different network topologies using a water pipeline emulator. The results indicate that the proposed optimization outperforms more traditional approaches in terms of detection probability. It is concluded that the availability of a dynamic model of the bacterial propagation, along with a spatio-temporal correlation of the process, could lead to more advanced real-time control of water distribution networks.

Fast-Lipschitz optimization with wireless sensor networks applications

Motivated by the need for fast computations in wireless sensor networks, the new F-Lipschitz optimization theory is introduced for a novel class of optimization problems. These problems are defined by simple qualifying properties specified in terms of an increasing objective function and contractive constraints. It is shown that feasible F-Lipschitz problems always have a unique optimal solution that satisfies the constraints at equality. The solution is obtained quickly by asynchronous algorithms with certified convergence. F-Lipschitz optimization can be applied to both centralized and distributed optimization. Compared to traditional Lagrangian methods, which often converge linearly, the convergence time of centralized F-Lipschitz problems is at least superlinear. Distributed F-Lipschitz algorithms converge fast, as opposed to traditional Lagrangian decomposition and parallelization methods, which generally converge slowly and at the price of much message passing. In both cases, the computational complexity is much lower than that of traditional Lagrangian methods. Examples of application of the new optimization method are given for distributed detection and radio power control in wireless sensor networks. The drawback of F-Lipschitz optimization is that it might be difficult to check the qualifying properties. For more general optimization problems, it is suggested that it is convenient to have conditions ensuring that the solution satisfies the constraints at equality.
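
The solution principle, iterating the contractive constraints to their fixed point where they hold with equality, can be sketched on a tiny assumed instance (illustrative constraint functions, not from the paper):

```python
# Sketch of the F-Lipschitz solution principle on an assumed instance:
# with increasing, contractive constraints x <= f(x), the optimum of an
# increasing objective sits at the fixed point x* = f(x*), reached by
# simple (possibly asynchronous, per-node) iteration.
def f(x):
    # assumed monotone, contractive constraint functions of two nodes
    return [1.0 + 0.3 * x[1], 2.0 + 0.2 * x[0]]

x = [0.0, 0.0]
for _ in range(100):
    x = f(x)                 # each node could update asynchronously

print([round(v, 3) for v in x])   # constraints hold with equality: x = f(x)
```

No Lagrange multipliers or dual message passing are needed: each node only evaluates its own constraint function on its neighbors' latest values, which is why the distributed version converges with so little signaling.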

Proceedings of the 2010 2nd International Conference on Future Computer and Communication, ICFCC 2010: Preface

Self-triggered control of multiple loops over IEEE 802.15.4 networks

Given the communication savings offered by self-triggered sampling, it is becoming an essential paradigm for closed-loop control over energy-constrained wireless sensor networks (WSNs). Understanding the performance of self-triggered control systems when the feedback loops are closed over IEEE 802.15.4 WSNs is of major interest, since IEEE 802.15.4 is the de facto reference protocol for energy-efficient WSNs. In this paper, a new approach to controlling several processes over a shared IEEE 802.15.4 network by self-triggered sampling is proposed. It is shown that the sampling times of the processes, the protocol parameters, and the scheduling of the transmissions must be jointly selected to ensure stability of the processes and energy efficiency of the network. The challenging part of the proposed analysis is ensuring stability while making an energy-efficient schedule of the state transmissions. These transmissions over IEEE 802.15.4 are allowed only at certain time slots, which are difficult to schedule when multiple control loops share the network. The approach establishes that the joint design of the self-triggered samplers and the network protocol 1) ensures the stability of each loop, 2) increases the network capacity, 3) reduces the number of transmissions of the nodes, and 4) increases the sleep time of the nodes. A new dynamic scheduling problem is proposed to control each process, adapt the protocol parameters, and reduce the energy consumption. An algorithm is then derived, which adapts to any choice of the self-triggered samplers of every control loop. Numerical examples illustrate the analysis and show the benefits of the new approach.

Duty-cycle optimization for IEEE 802.15.4 wireless sensor networks

Most applications of wireless sensor networks require reliable and timely data communication with the maximum possible network lifetime under a low traffic regime. These requirements are critical, especially for the stability of wireless sensor and actuator networks. Designing a protocol that satisfies these requirements in a network consisting of sensor nodes with traffic patterns and locations varying over time and space is a challenging task. We propose an adaptive optimal duty-cycle algorithm running on top of the IEEE 802.15.4 medium access control to minimize power consumption while meeting the reliability and delay requirements. Such a problem is complicated because simple and accurate models of the effects of the duty cycle on reliability, delay, and power consumption are not available. Moreover, the scarce computational resources of the devices and the lack of prior information about the topology make it impossible to compute the optimal parameters of the protocols. Based on an experimental implementation, we propose simple experimental models that expose the dependency of reliability, delay, and power consumption on the duty cycle at the node, and validate them through extensive experiments. The coefficients of the experimental-based models can be easily computed on existing IEEE 802.15.4 hardware platforms by introducing a learning phase, without any explicit information about data traffic, network topology, and medium access control parameters. The experimental-based model is then used to derive a distributed adaptive algorithm for minimizing the power consumption while meeting the reliability and delay requirements in the packet transmission. The algorithm is easily implementable on top of the IEEE 802.15.4 medium access control without any modifications of the protocol. An experimental implementation of the distributed adaptive algorithm on a test bed with off-the-shelf wireless sensor devices is presented.
The experimental performance of the algorithm is compared to existing solutions from the literature. The experimental results show that the experimental-based model is accurate and that the proposed adaptive algorithm attains the optimal value of the duty cycle, maximizing the lifetime of the network while meeting the reliability and delay constraints under both stationary and transient conditions. Specifically, even if the number of devices and their traffic configuration change sharply, the proposed adaptive algorithm allows the network to operate close to its optimal value. Furthermore, for Poisson arrivals, the duty-cycle protocol is modeled as a finite-capacity queuing system in a star network. This simple analytical model provides insights into the performance metrics, including the reliability, average delay, and average power consumption of the duty-cycle protocol.
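
The model-then-optimize idea above can be sketched with hypothetical models of how reliability, delay, and power depend on the duty cycle d (the functional forms and coefficients below are assumptions for illustration, not the paper's fitted models):

```python
# Hedged sketch: fit-or-assume simple models of reliability, delay, and
# power versus duty cycle d, then pick the cheapest feasible d.
def reliability(d): return min(1.0, 0.4 + 0.6 * d)   # assumed model
def delay(d):       return 0.5 / d                    # seconds, assumed
def power(d):       return 1.0 + 9.0 * d              # mW, assumed

candidates = [i / 100 for i in range(1, 101)]         # d in (0, 1]
feasible = [d for d in candidates
            if reliability(d) >= 0.9 and delay(d) <= 1.0]
d_opt = min(feasible, key=power)                      # cheapest feasible d
print(d_opt)
```

Because power grows with d while reliability improves and delay shrinks, the optimum is the smallest duty cycle meeting both constraints; the paper's learning phase plays the role of estimating the model coefficients online on the real hardware.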

Analysis and Optimization of Random Sensing Order in Cognitive Radio Systems

Developing an efficient spectrum access policy enables cognitive radios to dramatically increase spectrum utilization while assuring predetermined quality of service levels for the primary users. In this letter, the modeling, performance analysis, and optimization of a distributed secondary network with a random sensing order policy are studied. Specifically, the secondary users create a random order of the available channels to sense and find a transmission opportunity in a distributed manner. For this network, the average throughputs of the secondary users and the average interference level between the secondary and primary users are evaluated by a new Markov model. Then, a maximization of the secondary network performance in terms of throughput, while keeping the average interference under control, is proposed. Next, a simple and practical adaptive algorithm is developed to optimize the network in a distributed manner. Interestingly, the proposed algorithm follows the variations of the wireless channels in non-stationary conditions and, besides having substantially lower computational cost, outperforms static brute-force optimization. Finally, numerical results are provided to demonstrate the efficiency of the proposed schemes. It is shown that fully distributed algorithms can achieve substantial performance improvements in cognitive radio networks without the need for centralized management or message passing among the users.
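
The random sensing-order policy itself is easy to sketch as a Monte Carlo simulation (assumed i.i.d. channel occupancy and parameters, not the paper's Markov model):

```python
# Monte Carlo sketch of the random sensing-order policy (illustrative):
# a secondary user senses channels in a random order and transmits on
# the first one it finds idle.
import random

def run(n_channels=5, p_busy=0.6, trials=10000, seed=1):
    rng = random.Random(seed)
    found = 0
    for _ in range(trials):
        order = rng.sample(range(n_channels), n_channels)  # random order
        for ch in order:
            if rng.random() > p_busy:      # channel idle -> transmit
                found += 1
                break
    return found / trials

print(run())   # fraction of slots with a transmission opportunity
```

For independent channels the estimate approaches 1 - p_busy^n; the interesting effects analyzed in the letter arise when several secondary users contend and their sensing orders collide, which this single-user sketch deliberately omits.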

Distributed Estimation

Performance analysis and optimization of the joining protocol for a platoon of vehicles

Platooning of vehicles allows energy savings and increased safety, provided that reliable wireless communication protocols are available. In this paper, the optimization of the medium access control (MAC) protocol based on IEEE 802.11e for platoon joining is investigated. The exchange of highly dynamic information among vehicles within bounded, tight timeout windows is challenging. On the one hand, safe and secure joining of vehicles to a platoon is time-consuming and, at the actual speed of the vehicles, may be very difficult. On the other hand, decreasing the joining timeout windows increases the rate of joining failures. An analytical characterization of the appropriate timeout windows, which depend on the rate of the messages exchanged to request and verify joining, is proposed. Using this new characterization, an estimate of tight timeout windows for joining is obtained based on the rate of transferred joining messages. Numerical results show that standard joining timeout windows suffer unacceptable delay for platooning. By contrast, adaptively optimized timeout windows reduce the communication delay. It is concluded that the new optimization proposed in this paper can potentially reduce the energy consumption of vehicles and increase safety.

Real Time Scheduling in LTE for Smart Grids

The latest wireless network standard, 3GPP Long Term Evolution (LTE), is considered a promising solution for smart grids because it provides both low latency and large bandwidth. However, LTE was not originally intended for smart grid applications, where the data generated by the grid have specific delay requirements that differ from those of traditional data or voice communications. In this paper, the specific requirements imposed by a smart grid on the LTE communication infrastructure are first determined. The latency offered by the LTE network to smart grid components is investigated, and an empirical mathematical model of the distribution of the latency is established. It is shown by experimental results that, with the current LTE uplink scheduler, smart grid latency requirements are not always satisfied and only a limited number of components can be accommodated. To overcome this deficiency, a new scheduler for the LTE medium access control is proposed for smart grids. The scheduler is based on a linear optimization problem that considers both the smart grid components and common user equipment simultaneously. An algorithm for the solution of this problem is derived based on a theoretical analysis. Simulation results based on the new scheduler illustrate the analysis. It is concluded that LTE can be effectively used in smart grids if new schedulers are employed to improve latency.

Design principles of wireless sensor networks protocols for control applications

Control applications over wireless sensor networks (WSNs) require timely, reliable, and energy-efficient communications. This is challenging because the reliability and latency of delivered packets and the energy consumption are at odds, and resource-constrained nodes support only simple algorithms. In this chapter, a new system-level design approach for protocols supporting control applications over WSNs is proposed. The approach suggests a joint optimization, or co-design, of the control specifications, the networking layer, the medium access control layer, and the physical layer. The protocol parameters are adapted by an optimization problem whose objective function is the network energy consumption, and whose constraints are the reliability and latency of the packets as requested by the control application. The design method aims at the definition of simple algorithms that are easily implemented on resource-constrained sensor nodes. These algorithms allow the network to meet the reliability and latency required by the control application while minimizing energy consumption. The design method is illustrated by two protocols, Breath and TREnD, which are implemented on a test bed and compared to some existing solutions. Experimental results show good performance of the protocols based on this design methodology in terms of reliability, latency, low duty cycle, and load balancing for both static and time-varying scenarios. It is concluded that a system-level design is the essential paradigm to exploit the complex interaction among the layers of the protocol stack and reach maximum WSN efficiency.

Adaptive IEEE 802.15.4 medium access control protocol for control and monitoring applications

The IEEE 802.15.4 standard for wireless sensor networks (WSNs) can support energy-efficient, reliable, and timely packet transmission by tuning the medium access control (MAC) parameters macMinBE, macMaxCSMABackoffs, and macMaxFrameRetries. Such a tuning is difficult, because simple and accurate models of the influence of these parameters on the probability of successful packet transmission, packet delay, and energy consumption are not available. Moreover, it is not clear how to adapt the parameters to changes of the network and traffic regimes by algorithms that can run on resource-constrained nodes. In this chapter, a generalized Markov chain is proposed to model these relations by simple expressions without giving up accuracy. In contrast to previous work, the presence of a limited number of retransmissions, acknowledgments, unsaturated traffic, and packet size is accounted for. The model is then used to derive an adaptive algorithm for minimizing the power consumption while guaranteeing reliability and delay constraints in the packet transmission. The algorithm does not require any modification of the IEEE 802.15.4 standard and can be easily implemented on network nodes. Numerical results show that the analysis is accurate, and that the proposed algorithm satisfies reliability and delay constraints and ensures a longer lifetime of the network under both stationary and transient network conditions.

System level design of self-triggered IEEE 802.15.4 networked control loops increases the network capacity

Adaptive self-triggered control over IEEE 802.15.4 networks

The communication protocol IEEE 802.15.4 is becoming pervasive for low-power and low data rate wireless sensor network (WSN) applications, including control and automation. Nevertheless, there is not yet any adequate study of control systems networked by this protocol. In this paper, the stability of IEEE 802.15.4 networked control systems (NCSs) is addressed. While in recent works fundamental results have been developed for networks that are abstracted only in terms of packet loss and time delays, here the constraints imposed by the protocol on the feedback channel and the network energy consumption are explicitly considered. A general analysis for linear systems with parameter uncertainty and external bounded disturbances, with control loops closed over IEEE 802.15.4 networks, is proposed. To reduce the number of transmissions and thus save energy, a self-triggered control strategy is used. A sufficient stability condition is given as a function of both the protocol and control parameters. A decentralized algorithm to jointly adapt the self-triggered control and the protocol parameters is proposed. It is concluded that stability is not always guaranteed unless the protocol parameters are appropriately tuned, and that event-triggered control strategies may be difficult to use with the current version of IEEE 802.15.4.

Wireless ventilation control for large-scale systems: The mining industrial case

Mining ventilation is an interesting example of a large scale system with high environmental impact where advanced control strategies can bring major improvements. Indeed, one of the first objectives of modern mining industry is to fulfill environmental specifications [1] during the ore extraction and crushing, by optimizing the energy consumption or the production of polluting agents. The mine electric consumption was 4 % of total industrial electric demand in the US in 1994 (6 % in 2007 in South Africa) and 90 % of it was related to motor system energy [2]. Another interesting figure is given in [3] where it is estimated that the savings associated with global control strategies for fluid systems (pumps, fans and compressors) represent approximately 20 % of the total manufacturing motor system energy savings. This motivates the development of new control strategies for large scale aerodynamic processes based on appropriate automation and a global consideration of the system. More specifically, the challenge in this work is focused on the mining ventilation since as much as 50 % or more of the energy consumed by the mining process may go into the ventilation (including heating the air). It is clear that investigating automatic control solutions and minimizing the amount of pumped air to save energy consumption (proportional to the cube of airflow quantity [4]) is of great environmental and industrial interest.
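The cube law cited above is what makes airflow minimization so attractive: because fan (or pump) power scales with the cube of the flow, even modest flow reductions yield large energy savings. A minimal illustration:

```python
def fan_power_ratio(flow_ratio):
    """Relative fan power when the airflow is scaled by flow_ratio,
    using the cube law for fluid systems (power proportional to Q**3)."""
    return flow_ratio ** 3

# Reducing pumped air by 20 % cuts fan power by almost half:
savings = 1.0 - fan_power_ratio(0.8)
print(f"{savings:.1%}")  # -> 48.8%
```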

MAC protocol engine for sensor networks

We present a novel approach to Medium Access Control (MAC) protocol design based on a protocol engine. The current way of designing a MAC protocol for a specific application proceeds in two steps: first, the application specifications (such as network topology and packet generation rate), the requirements on energy consumption, delay, and reliability, and the resource constraints from the underlying physical layer (such as energy consumption and data rate) are specified; then a protocol that satisfies all these constraints is designed. The main drawback of this procedure is that the design process must be restarted for each new application, which wastes time and effort. The goal of a MAC protocol engine is to provide a library of protocols together with their analysis, such that for each new application the optimal protocol and its optimal parameters are chosen automatically from the library. We illustrate the MAC engine idea by including an original analysis of the IEEE 802.15.4 unslotted random access and Time Division Multiple Access (TDMA) protocols, and by implementing these protocols in the software framework SPINE, which runs on top of TinyOS and is designed for health care applications. We then validate the analysis and demonstrate, via an experimental implementation, how the protocol engine chooses the optimal protocol under different application scenarios.
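The selection step of such a protocol engine can be sketched as follows. The per-protocol models here are illustrative placeholders, not the paper's actual analysis; only the structure (a library of analyzed protocols plus a constraint-checking selector) reflects the described approach.

```python
# Toy protocol engine: a library of MAC protocols, each with an
# illustrative analytic model, and a selector returning the protocol
# that meets the application constraints at minimum energy.

def tdma(n_nodes, rate):
    # fixed schedule: delay grows with node count, no collision losses (toy)
    delay = 10.0 * n_nodes            # ms per schedule round
    energy = 1.0 + 0.01 * rate        # mJ per packet
    reliability = 0.999
    return delay, energy, reliability

def csma_unslotted(n_nodes, rate):
    # contention: cheap at light load, degrades as traffic grows (toy)
    load = n_nodes * rate / 100.0
    delay = 5.0 + 20.0 * load
    energy = 0.8 + 0.5 * load         # idle listening + collisions
    reliability = max(0.0, 1.0 - 0.05 * load)
    return delay, energy, reliability

LIBRARY = {"TDMA": tdma, "CSMA": csma_unslotted}

def choose(n_nodes, rate, d_max, r_min):
    """Return the name of the feasible protocol with minimum energy."""
    best, best_e = None, float("inf")
    for name, model in LIBRARY.items():
        d, e, r = model(n_nodes, rate)
        if d <= d_max and r >= r_min and e < best_e:
            best, best_e = name, e
    return best

print(choose(n_nodes=5, rate=2, d_max=60, r_min=0.99))    # light traffic
print(choose(n_nodes=20, rate=10, d_max=250, r_min=0.99)) # heavy traffic
```

Under these toy models the engine switches from contention-based access at light load to TDMA at heavy load, mirroring the kind of regime-dependent choice the paper automates.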

Breath : A self-adapting protocol for wireless sensor networks in control and automation

The novel cross-layer protocol Breath for wireless sensor networks is designed, implemented, and experimentally evaluated. The Breath protocol is based on randomized routing, MAC, and duty cycling, which together minimize the energy consumption of the network while ensuring a desired end-to-end packet delivery reliability and delay. The system model includes a set of source nodes that transmit packets via multi-hop communication to the destination. A constrained optimization problem, in which the objective function is the network energy consumption and the constraints are the packet latency and reliability, is posed and solved. It is shown that the communication layers can be jointly optimized for energy efficiency. The optimal working point of the network is achieved with a simple algorithm, which adapts to traffic variations with negligible overhead. The protocol was implemented on a test-bed with off-the-shelf wireless sensor nodes and compared with a standard IEEE 802.15.4 solution. Experimental results show that Breath meets the latency and reliability requirements, and that it exhibits a good distribution of the working load, thus ensuring a long lifetime of the network.
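The constrained optimization at the heart of Breath can be sketched with a single design variable. The metric expressions below are illustrative placeholders for the protocol's actual cross-layer analysis; only the problem structure (minimize energy subject to end-to-end latency and reliability constraints) follows the abstract.

```python
import math

# Sketch of a working-point computation (toy models): choose the nodes'
# wake-up rate to minimize energy while meeting end-to-end latency and
# reliability constraints over a multi-hop path.

def metrics(wakeup_rate, hops=3):
    latency = hops / wakeup_rate                       # s: faster wake-ups, lower delay
    reliability = 1.0 - math.exp(-2.0 * wakeup_rate)   # toy end-to-end model
    energy = 0.1 + 0.5 * wakeup_rate                   # mW: duty-cycling cost
    return latency, reliability, energy

def working_point(d_max, r_min, hops=3):
    """Smallest wake-up rate (hence lowest energy) meeting both constraints."""
    for i in range(1, 501):
        w = i / 100.0                                  # candidate rates 0.01..5.0 Hz
        lat, rel, _ = metrics(w, hops)
        if lat <= d_max and rel >= r_min:
            return w                                   # energy increases with w
    return None

print(working_point(d_max=2.0, r_min=0.95))
```

In the actual protocol this working point is tracked online with a lightweight adaptive algorithm rather than recomputed by search, so that the network follows traffic variations with negligible overhead.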