Modern wireless communication systems rely to a large extent on the quality of the channel state information (CSI) available at the transmitter and receiver. Channel aging, denoting the temporal and spatial evolution of wireless communication channels, is influenced by obstructions, interference, traffic load, and user mobility. Accurate CSI estimation and prediction empower the network to proactively counteract performance degradation resulting from channel dynamics, such as channel aging, by employing network management strategies such as power allocation. Prior studies have introduced approaches aimed at preserving high-quality CSI, such as temporal prediction schemes, particularly in scenarios involving high mobility and channel aging. Conventional model-based estimators and predictors have historically been considered state-of-the-art. Recently, advances in artificial intelligence (AI) have increased the interest in AI-based models. Previous works have shown the high potential of AI-aided channel estimation and prediction, shifting the state of the art away from model-based methods. However, there are many aspects to consider in AI-based channel estimation and prediction, including prediction quality, training complexity, and practical feasibility. To investigate these aspects, this chapter provides an overview of state-of-the-art neural networks applicable to channel estimation and prediction. The principal neural networks from the overview of channel prediction are empirically compared in terms of prediction quality. A novel comparative analysis is conducted for five promising neural networks with different prediction horizons. The widely acknowledged tapped delay line (TDL) channel model, as endorsed by the Third Generation Partnership Project (3GPP), is employed to ensure a standardized evaluation of the neural networks. This comparative assessment enables a comprehensive examination of the merits and demerits inherent in each neural network. Subsequent to this analysis, insights are offered to provide guidelines for the selection of the most appropriate neural network in channel prediction applications.
The 5th generation (5G) of wireless systems is being deployed with the aim to provide many sets of wireless communication services, such as low data rates for a massive amount of devices, broadband, low latency, and industrial wireless access. Such an aim is even more complex in the next generation wireless systems (6G), where wireless connectivity is expected to serve any connected intelligent unit, such as software robots and humans interacting in the metaverse, autonomous vehicles, drones, trains, or smart sensors monitoring cities, buildings, and the environment. Because wireless devices will be orders of magnitude denser than in 5G cellular systems, and because of their complex quality of service requirements, the access to the wireless spectrum will have to be appropriately shared to avoid congestion, poor quality of service, or unsatisfactory communication delays. Spectrum sharing methods have been the object of intense study through model-based approaches, such as optimization or game theory. However, these methods may fail when facing the complexity of the communication environments in 5G, 6G, and beyond. Recently, there has been significant interest in the application and development of data-driven methods, namely machine learning methods, to handle the complex operation of spectrum sharing. In this survey, we provide a complete overview of the state-of-the-art of machine learning for spectrum sharing. First, we map the most prominent methods that we encounter in spectrum sharing. Then, we show how these machine learning methods are applied to the numerous dimensions and sub-problems of spectrum sharing, such as spectrum sensing, spectrum allocation, spectrum access, and spectrum handoff. We also highlight several open questions and future trends.
In this work, we investigate federated edge learning over a fading multiple access channel. To alleviate the communication burden between the edge devices and the access point, we introduce a pioneering digital over-the-air computation strategy employing q-ary quadrature amplitude modulation, culminating in a low latency communication scheme. Specifically, we propose a new federated edge learning framework in which edge devices use digital modulation for over-the-air uplink transmission to the edge server while they have no access to the channel state information. Furthermore, we incorporate multiple antennas at the edge server to overcome the fading inherent in wireless communication. We analyze the number of antennas required to mitigate the fading impact effectively. We prove a non-asymptotic upper bound for the mean squared error for the proposed federated learning with digital over-the-air uplink transmissions under both noisy and fading conditions. Leveraging the derived upper bound, we characterize the convergence rate of the learning process of a non-convex loss function in terms of the mean square error of gradients due to the fading channel. Furthermore, we substantiate the theoretical assurances through numerical experiments concerning mean square error and the convergence efficacy of the digital federated edge learning framework. Notably, the results demonstrate that augmenting the number of antennas at the edge server and adopting higher-order modulations improve the model accuracy by up to 60%.
We introduce a new class of codebook-aware jamming strategies against coded transmissions over AWGN channels. The proposed strategies derive attack vectors from non-zero positions of minimum-weight codewords in Hamming space and utilize the geometry of minimum-distance error events in Euclidean space in their attack. We characterize the success probability of the attacker analytically and utilize these results for attack optimization. We demonstrate that the proposed jamming attacks are highly efficient; compared to Gaussian attack vectors with the same energy budget, the attacker’s success probability is increased by more than two orders of magnitude while only consuming a fraction of the energy of the attacked codeword.
This paper explores the integration of device-to-device communications into clustered federated learning (FL), where clients are grouped into multiple clusters based on the similarity of their learning tasks. To mitigate communication costs, we propose an efficient FL algorithm. Specifically, we designate a primary client within each cluster responsible for uploading the model to the server, while other clients within the cluster serve as secondary clients. Each secondary client assesses its model’s similarity to the primary client’s model by computing a layer-wise model distance. If a secondary client’s model distance exceeds a predefined threshold, indicating a divergence from the primary client’s model, it transmits its model distance to the edge server. The primary client then updates the cluster model parameters and broadcasts them to the secondary clients within the cluster. Closed-form expressions for the time spent by the proposed layer-wise efficient FL are derived. Numerical results validate the training accuracy of the layer-wise efficient FL and demonstrate a notable reduction in communication costs compared to naive FL.
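As a minimal sketch of the reporting rule described above, the snippet below compares a secondary client's model to the primary client's model layer by layer and contacts the server only when the divergence exceeds a threshold. The per-layer L2 norm, the threshold value, and the toy two-layer models are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def layer_wise_distance(secondary_model, primary_model):
    """Per-layer L2 distances between two models given as lists of weight arrays."""
    return [np.linalg.norm(w_s - w_p) for w_s, w_p in zip(secondary_model, primary_model)]

def should_report(secondary_model, primary_model, threshold=0.5):
    """Report to the edge server only if any layer diverges beyond the threshold."""
    distances = layer_wise_distance(secondary_model, primary_model)
    return max(distances) > threshold, distances

# Toy usage with random two-layer models.
rng = np.random.default_rng(0)
primary = [rng.normal(size=(4, 4)), rng.normal(size=4)]
secondary = [w + 0.1 * rng.normal(size=w.shape) for w in primary]
report, dists = should_report(secondary, primary)
print(report, [round(d, 3) for d in dists])
```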
Communication and computation are traditionally treated as separate entities, allowing for individual optimizations. However, many applications focus on local information’s functionality rather than the information itself. For such cases, harnessing interference for computation in a multiple access channel through digital over-the-air computation can notably increase the computation efficiency, as established by the ChannelComp method. However, the coding scheme originally proposed in ChannelComp may suffer from high computational complexity because it is general and is not optimized for specific modulation categories. Therefore, this study considers a specific category of digital modulations for over-the-air computations, quadrature amplitude modulation (QAM) and pulse-amplitude modulation (PAM), for which we introduce a novel coding scheme called SumComp. Furthermore, we derive a mean squared error (MSE) analysis for SumComp coding in the computation of the arithmetic mean function and establish an upper bound on the mean absolute error (MAE) for a set of nomographic functions. Simulation results are presented to affirm the superior performance of SumComp coding compared to traditional analog over-the-air computation and the original ChannelComp coding in terms of both MSE and MAE over a noisy multiple access channel. Specifically, SumComp coding shows at least 10 dB improvements for computing arithmetic and geometric mean on the normalized MSE for low noise scenarios.
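To illustrate the basic principle exploited by such digital over-the-air computation, the toy snippet below maps integer values to PAM amplitudes, superimposes them over a noisy multiple access channel, and recovers the arithmetic mean. The unit-spacing mapping and the noise level are illustrative assumptions, not the actual SumComp encoder.

```python
import numpy as np

rng = np.random.default_rng(1)
K, q = 10, 4                            # number of devices, PAM order
values = rng.integers(0, q, size=K)     # each device holds a value in {0, ..., q-1}

# Each device maps its value directly to a PAM amplitude (unit spacing).
tx = values.astype(float)

# Ideal superposition over the multiple access channel plus additive Gaussian noise.
rx = tx.sum() + rng.normal(scale=0.1)

mean_estimate = rx / K
print(values.mean(), round(mean_estimate, 3))
```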
We consider the problem of secure histogram estimation, where n users hold private items x_i from a size-d domain and a server aims to estimate the histogram of the user items. Previous results utilizing orthogonal communication schemes have shown that this problem can be solved securely with a total communication cost of O(n^2 log(d)) bits by hiding each item x_i with a mask. In this paper, we offer a different approach to achieving secure aggregation. Instead of masking the data, our scheme protects individuals by aggregating their messages via a multiple-access channel. A naive communication scheme over the multiple-access channel requires d channel uses, which is generally worse than the O(n^2 log(d)) bits communication cost of the prior art in the most relevant regime d >> n. Instead, we propose a new scheme that we call Over-the-Air Group Testing (AirGT), which uses group testing codes to solve the histogram estimation problem in O(n log(d)) channel uses. AirGT reconstructs the histogram exactly with a vanishing probability of error P_error = O(d^(-T)) that drops exponentially in the number of channel uses T.
This paper investigates over-the-air computation (AirComp) in the context of multiple-access time-varying multipath channels. We focus on a scenario where devices with high mobility transmit their sensing data to a fusion center (FC) for averaging. To combat the time-varying channel and Doppler effect, each device adopts orthogonal time frequency space (OTFS) modulation. After signals are received by the FC, the aggregated data undergoes demodulation and estimation within the delay-Doppler domain. We leverage the mean squared error (MSE) as a metric for the computational error of OTFS-based AirComp. We then derive the optimal transmit power at each device and signal scaling factor at FC for minimizing MSE. Notably, the performance of OTFS-based AirComp is not only affected by the noise but also by the inter-symbol interference and inter-link interference arising from the multipath channel. To counteract the interference-induced computational errors, we incorporate zero-padding (ZP)-assisted OTFS into AirComp and propose algorithms for interference cancellation. Numerical results underscore the enhanced performance of ZP-assisted OTFS-based AirComp over naive OTFS-based AirComp.
To enable the development of multi-functional cellular networks that aim to satisfy increasing expectations of connectivity and trustworthiness, it is crucial to provide reliable quality of service (QoS) guarantees for end users. With predictive QoS (pQoS), cellular networks become proactive to meet QoS requirements for a wide variety of new use cases, including advanced driver assistance applications. This work introduces a novel predictive framework to improve the availability and performance of pQoS in cellular networks, especially for advanced road transport applications. We show that by dividing the road into geographical segments, clustering segments with similar data, and assigning each cluster a predictive model, the adverse effects of the propagation environment and interference on QoS become manageable. To this end, each predictive cluster model is trained locally on vehicles within the cluster boundaries by data-driven Federated Learning, resulting in personalized predictive models for each cluster. Our numerical results show that the clustered predictive model outperforms the more common predictive approach proposed by previous works that train a single global predictive model for an entire dataset.
This paper investigates efficient distributed training of a Federated Learning (FL) model over a network of wireless devices. The communication iterations of the distributed training algorithm may be substantially deteriorated or even blocked by the effects of the devices’ background traffic, packet losses, congestion, or latency. We abstract the communication-computation impacts as an ‘iteration cost’ and propose a cost-aware causal FL algorithm (FedCau) to tackle this problem. We propose an iteration-termination method that trades off the training performance and networking costs. We apply our approach when workers use the slotted-ALOHA, carrier-sense multiple access with collision avoidance (CSMA/CA), and orthogonal frequency-division multiple access (OFDMA) protocols. We show that, given a total cost budget, the training performance degrades as either the background communication traffic or the dimension of the training problem increases. Our results demonstrate the importance of proactively designing optimal cost-efficient stopping criteria to avoid unnecessary communication-computation costs to achieve a marginal FL training improvement. We validate our method by training and testing FL over the MNIST and CIFAR-10 datasets. Finally, we apply our approach to existing communication efficient FL methods from the literature, achieving further efficiency. We conclude that cost-efficient stopping criteria are essential for the success of practical FL over wireless networks.
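The following is a minimal sketch of a cost-aware stopping rule in the spirit described above: training halts once the marginal loss improvement per unit of communication-computation cost falls below a threshold or the cost budget is exhausted. The cost model, the threshold, and the simulated loss curve are assumptions for illustration, not the FedCau criterion itself.

```python
# Hypothetical cost-aware stopping rule (illustrative only).
def cost_aware_training(loss_per_round, cost_per_round, budget, rel_improvement=1e-3):
    """Return the number of rounds to run given per-round losses and costs."""
    total_cost, prev_loss = 0.0, None
    for t, (loss, cost) in enumerate(zip(loss_per_round, cost_per_round), start=1):
        total_cost += cost
        if total_cost > budget:
            return t - 1                       # cost budget exhausted
        if prev_loss is not None and (prev_loss - loss) / cost < rel_improvement:
            return t                           # marginal gain no longer worth the cost
        prev_loss = loss
    return len(loss_per_round)

# Toy usage: exponentially decaying loss, constant communication cost per round.
losses = [0.9 ** t for t in range(100)]
costs = [1.0] * 100
print(cost_aware_training(losses, costs, budget=50))
```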
The performance of modern wireless communications systems depends critically on the quality of the available channel state information (CSI) at the transmitter and receiver. Several previous works have proposed concepts and algorithms that help maintain high-quality CSI even in the presence of high mobility and channel aging, such as temporal prediction schemes that employ neural networks. However, it is still unclear which neural network-based scheme provides the best performance in terms of prediction quality, training complexity, and practical feasibility. To investigate such a question, this article first provides an overview of state-of-the-art neural networks applicable to channel prediction, and compares their performance in terms of prediction quality. Next, a new comparative analysis is proposed for five promising neural networks with different prediction horizons. The well-known tapped delay line (TDL) channel model recommended by the Third Generation Partnership Project (3GPP) is used for a standardized comparison among the neural networks. Based on this comparative evaluation, the advantages and disadvantages of each neural network are discussed, and guidelines for selecting the best-suited neural network in channel prediction applications are given.
We consider the problem of gridless blind deconvolution and demixing (GB2D) in scenarios where multiple users communicate messages through multiple unknown channels, and a single base station (BS) collects their contributions. This scenario arises in various communication fields, including wireless communications, the Internet of Things, over-the-air computation, and integrated sensing and communications. In this setup, each user’s message is convolved with a multi-path channel formed by several scaled and delayed copies of Dirac spikes. The BS receives a linear combination of the convolved signals, and the goal is to recover the unknown amplitudes, continuous-indexed delays, and transmitted waveforms from a compressed vector of measurements at the BS. However, without prior knowledge of the transmitted messages and channels, GB2D is highly challenging and intractable in general. To address this issue, we assume that each user’s message follows a distinct modulation scheme living in a known low-dimensional subspace. By exploiting these subspace assumptions and the sparsity of the multipath channels for different users, we transform the nonlinear GB2D problem into a matrix tuple recovery problem from a few linear measurements. To achieve this, we propose a semidefinite programming optimization that exploits the specific low-dimensional structure of the matrix tuple to recover the messages and continuous delays of different communication paths from a single received signal at the BS. Finally, our numerical experiments show that our proposed method effectively recovers all transmitted messages and the continuous delay parameters of the channels with sufficient samples.
Over-the-air computation (AirComp) is a promising wireless communication method for aggregating data from many devices in dense wireless networks. The fundamental idea of AirComp is to exploit signal superposition to compute functions of multiple simultaneously transmitted signals. However, the time- and phase-alignment of these superimposed signals has a significant effect on the quality of function computation. In this study, we analyze the AirComp problem for a system with unknown random time delays and phase shifts. We show that the classical matched filter does not produce optimal results, and generates bias in the function estimates. To counteract this, we propose a new filter design and show that, under a bound on the maximum time delay, it is possible to achieve unbiased function computation. Additionally, we propose a Tikhonov regularization problem that produces an optimal filter given a tradeoff between the bias and noise-induced variance of the function estimates. When the time delays are long compared to the length of the transmitted pulses, our filter vastly outperforms the matched filter both in terms of bias and mean-squared error (MSE). For shorter time delays, our proposal yields similar MSE as the matched filter, while reducing the bias.
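A small numerical sketch of the filter-design idea: choose a receive filter whose inner product with a delayed transmit pulse stays close to one for every delay up to a maximum, with a Tikhonov (ridge) term controlling noise amplification, and compare it against the matched filter. The rectangular pulse, the lengths, and the regularization weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

L, P, D_max, lam = 24, 8, 8, 1e-2
pulse = np.ones(P)                       # rectangular transmit pulse (assumption)

# Each row is the pulse delayed by d samples, zero-padded to the filter length.
A = np.zeros((D_max + 1, L))
for d in range(D_max + 1):
    A[d, d:d + P] = pulse

target = np.ones(D_max + 1)              # desired unit response for every admissible delay
g = np.linalg.solve(A.T @ A + lam * np.eye(L), A.T @ target)   # Tikhonov-regularized filter

matched = np.zeros(L)
matched[:P] = pulse / (pulse @ pulse)    # classical matched filter, aligned to zero delay

print("proposed filter response over delays:", np.round(A @ g, 3))
print("matched filter response over delays:", np.round(A @ matched, 3))
```

With this toy setup, the matched filter's response decays as the delay grows (the source of the bias), whereas the regularized filter stays near one over the whole delay range.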
Distributed machine learning at the network edge has emerged as a promising new paradigm. Various machine learning (ML) technologies will distill Artificial Intelligence (AI) from enormous mobile data to automate future wireless networking and a wide range of Internet-of-Things (IoT) applications. In distributed edge learning, multiple edge devices train a common learning model collaboratively without sending their raw data to a central server, which not only helps to preserve data privacy but also reduces network traffic. However, distributed edge training and edge inference typically still require extensive communications among devices and servers connected by wireless links. As a result, the salient features of wireless networks, including interference and channels’ heterogeneity, time-variability, and unreliability, have significant impacts on the learning performance.
Over-the-air computation (AirComp) is a well-known technique by which several wireless devices transmit by analog amplitude modulation to achieve a sum of their transmit signals at a common receiver. The underlying physical principle is the superposition property of the radio waves. Since such superposition is analog and in amplitude, it is natural that AirComp uses analog amplitude modulations. Unfortunately, this is impractical because most wireless devices today use digital modulations. It would be highly desirable to use digital communications because of their numerous benefits, such as error correction, synchronization, acquisition of channel state information, and widespread use. However, when we use digital modulations for AirComp, a general belief is that the superposition property of the radio waves returns a meaningless overlapping of the digital signals. In this paper, we break through such beliefs and propose an entirely new digital channel computing method named ChannelComp, which can use digital as well as analog modulations. We propose a feasibility optimization problem that ascertains the optimal modulation for computing arbitrary functions over-the-air. Additionally, we propose pre-coders to adapt existing digital modulation schemes for computing the function over the multiple access channel. The simulation results verify the superior performance of ChannelComp compared to AirComp, particularly for the product functions, with more than 10 dB improvement of the computation error.
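The core feasibility requirement behind this idea can be illustrated with a toy check: a digital modulation can compute a function over the air only if input tuples with different function values never superimpose onto the same received constellation point. The snippet below tests plain 4-PAM for the sum and the product of two devices' inputs; the mapping, the tolerance, and the two example functions are illustrative assumptions, not the optimization proposed in the paper.

```python
import itertools

def is_computable(mod, func, num_devices):
    """Check whether the superposition of the given modulation can represent func."""
    seen = {}
    for inputs in itertools.product(range(len(mod)), repeat=num_devices):
        superposed = round(sum(mod[x] for x in inputs), 9)   # noiseless received point
        value = func(inputs)
        if superposed in seen and seen[superposed] != value:
            return False          # two different function values collide over the air
        seen[superposed] = value
    return True

pam4 = [0.0, 1.0, 2.0, 3.0]
print(is_computable(pam4, sum, num_devices=2))                      # True: the sum is computable
print(is_computable(pam4, lambda x: x[0] * x[1], num_devices=2))    # False: the product collides
```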
The articles in this special section focus on the role of data sets for the evolution of the telecommunication industry in the 5G and 6G era. In 5G and 6G, many new services are emerging to accommodate various Internet of Things (IoT) devices, going beyond the traditional provisions of mobile phones and internet connectivity. Examples of these services include extended reality devices, sensors, or ground and aerial robots. The deployment of these advanced services, however, poses challenges for the wireless network, particularly in its ability to support ubiquitous connections while meeting diverse quality-of-service (QoS) requirements. Despite the remarkable success of model-based design and analysis in wireless networks, it has become evident that these conventional approaches may not be fully adequate to address the dynamic and diverse QoS requirements posed by the emerging IoT landscape. The heterogeneity of devices and services necessitates a more adaptive and intelligent approach to ensure efficient network performance.
Resource allocation and multiple access schemes are instrumental for the success of communication networks, which facilitate seamless wireless connectivity among a growing population of uncoordinated and non-synchronized users. In this paper, we present a novel random access scheme that addresses one of the most severe barriers of current strategies to achieve massive connectivity and ultra-reliable and low-latency communications for 6G. The proposed scheme utilizes wireless channels’ angular continuous group-sparsity feature to provide low latency, high reliability, and massive access features in the face of limited time-bandwidth resources, asynchronous transmissions, and preamble errors. Specifically, a reconstruction-free goal oriented optimization problem is proposed which preserves the angular information of active devices and is then complemented by a clustering algorithm to assign active users to specific groups. This makes it possible to identify active stationary devices according to their line of sight angles. Additionally, for mobile devices, an alternating minimization algorithm is proposed to recover their preamble, data, and channel gains simultaneously, enabling the identification of active mobile users. Simulation results show that the proposed algorithm provides excellent performance and supports a massive number of devices. Moreover, the performance of the proposed scheme is independent of the total number of devices, distinguishing it from other random access schemes. The proposed method provides a unified solution to meet the requirements of machine-type communications and ultra-reliable and low-latency communications, making it an important contribution to the emerging 6G networks.
In this paper, the real-time energy trading problem between the energy provider and the consumers in a smart grid system is studied. The problem is formulated as a hierarchical game, where the energy provider acts as a leader who determines the pricing strategy that maximizes its profits, while the consumers act as followers who react by adjusting their energy demand to save their energy costs and enhance their energy consumption utility. In particular, the energy provider employs a pricing strategy that depends on the aggregated amount of energy requested by the consumers, which suits a commodity-limited market. With this price setting, the consumers’ energy demand response strategies are designed under a non-cooperative game framework, where a unique generalized Nash equilibrium point is shown to exist. As an extension, the consumers are assumed to be unaware of their future energy consumption behaviors due to uncertain personal needs. To address this issue, an online distributed energy trading framework is proposed, where the energy provider and the consumers can design their strategies only based on the historical knowledge of consumers’ energy consumption behavior at each bidding stage. Besides, the proposed framework can be implemented in a distributed manner such that the consumers can design their demand responses by only exchanging information with their neighboring consumers, which requires much fewer communication resources and would thus be more suitable for the practical operation of the grid. As a theoretical guarantee, the proposed framework is further proved to asymptotically achieve the same performance as the offline solution for both energy provider and consumers’ optimization problems. The performance of practical designs of the proposed online distributed energy trading framework is finally illustrated in numerical experiments.
Facing the upcoming era of Internet-of-Things and connected intelligence, efficient information processing, computation, and communication design becomes a key challenge in large-scale intelligent systems. Recently, Over-the-Air (OtA) computation has been proposed for data aggregation and distributed computation of functions over a large set of network nodes. Theoretical foundations for this concept have existed for a long time, but it was mainly investigated within the context of wireless sensor networks. There are still many open questions when applying OtA computation in different types of distributed systems where modern wireless communication technology is applied. In this article, we provide a comprehensive overview of the OtA computation principle and its applications in distributed learning, control, and inference systems, for both server-coordinated and fully decentralized architectures. Particularly, we highlight the importance of the statistical heterogeneity of data and wireless channels, the temporal evolution of model updates, and the choice of performance metrics, for the communication design in OtA federated learning (FL) systems. Several key challenges in privacy, security, and robustness aspects of OtA FL are also identified for further investigation.
Over-the-air computation (AirComp) is a known technique in which wireless devices transmit values by analog amplitude modulation so that a function of these values is computed over the communication channel at a common receiver. The physical reason is the superposition properties of the electromagnetic waves, which naturally return sums of analog values. Consequently, the applications of AirComp are almost entirely restricted to analog communication systems. However, the use of digital communications for over-the-air computations would have several benefits, such as error correction, synchronization, acquisition of channel state information, and easier adoption by current digital communication systems. Nevertheless, a common belief is that digital modulations are generally unfeasible for computation tasks because the overlapping of digitally modulated signals returns signals that seem to be meaningless for these tasks. This paper breaks through such a belief and proposes a fundamentally new computing method, named ChannelComp, for performing over-the-air computations by any digital modulation. In particular, we propose digital modulation formats that allow us to compute a wider class of functions than AirComp can compute, and we propose a feasibility optimization problem that ascertains the optimal digital modulation for computing functions over-the-air. The simulation results verify the superior performance of ChannelComp in comparison to AirComp, particularly for the product functions, with around 10 dB improvement of the computation error.
Emerging applications in the Internet of Things (IoT) and edge computing/learning have sparked massive renewed interest in developing distributed versions of existing (centralized) iterative algorithms often used for optimization or machine learning purposes. While existing work in the literature exhibits similarities, for the tasks of both algorithm design and theoretical analysis, there is still no unified method or framework for accomplishing these tasks. This article develops such a general framework for distributing the execution of (centralized) iterative algorithms over networks in which the required information or data is partitioned between the nodes in the network. This article furthermore shows that the distributed iterative algorithm, which results from the proposed framework, retains the convergence properties (rate) of the original (centralized) iterative algorithm. In addition, this article applies the proposed general framework to several interesting example applications, obtaining results comparable to the state of the art for each such example, while greatly simplifying and generalizing their convergence analysis. These example applications reveal new results for distributed proximal versions of gradient descent, the heavy ball method, and Newton’s method. For example, these results show that the dependence on the condition number for the convergence rate of this distributed heavy ball method is at least as good as that of centralized gradient descent.
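As a minimal sketch of the distribution idea under simplifying assumptions, the snippet below runs gradient descent on a least-squares problem whose data rows are partitioned across four nodes: each node computes the gradient of its own partial sum, an error-free aggregation reproduces the centralized gradient, and every node applies the identical update, so the centralized convergence behavior is retained. The problem sizes and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
A, b = rng.normal(size=(40, 5)), rng.normal(size=40)
parts = np.array_split(np.arange(40), 4)            # data rows owned by 4 nodes

def local_gradient(x, rows):
    """Gradient of the node's partial least-squares term 0.5 * ||A_i x - b_i||^2."""
    Ai, bi = A[rows], b[rows]
    return Ai.T @ (Ai @ x - bi)

x = np.zeros(5)
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the full gradient
for _ in range(200):
    grad = sum(local_gradient(x, rows) for rows in parts)   # error-free aggregation step
    x = x - step * grad                                     # identical update at every node

x_central = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_central))                # distance to the centralized solution
```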
We develop a gradient-like algorithm to minimize a sum of peer objective functions based on coordination through a peer interconnection network. The coordination admits two stages: the first is to constitute a gradient, possibly with errors, for updating locally replicated decision variables at each peer and the second is used for error-free averaging for synchronizing local replicas. Unlike many related algorithms, the errors permitted in our algorithm can cover a wide range of inexactnesses, as long as they are bounded. Moreover, we do not impose any gradient boundedness conditions for the objective functions. Furthermore, unlike many related algorithms, the second stage is not conducted in a periodic manner. Instead, a locally verifiable criterion is devised to dynamically trigger the peer-to-peer coordination at the second stage, so that expensive communication overhead for error-free averaging can significantly be reduced. Finally, the convergence of the algorithm is established under mild conditions.
Motivated by the increasing computational capabilities of wireless devices, as well as unprecedented levels of user- and device-generated data, new distributed machine learning (ML) methods have emerged. In the wireless community, Federated Learning (FL) is of particular interest due to its communication efficiency and its ability to deal with the problem of non-IID data. FL training can be accelerated by a wireless communication method called Over-the-Air Computation (AirComp) which harnesses the interference of simultaneous uplink transmissions to efficiently aggregate model updates. However, since AirComp utilizes analog communication, it introduces inevitable estimation errors. In this paper, we study the impact of such estimation errors on the convergence of FL and propose retransmissions as a method to improve FL accuracy over resource-constrained wireless networks. First, we derive the optimal AirComp power control scheme with retransmissions over static channels. Then, we investigate the performance of Over-the-Air FL with retransmissions and find two upper bounds on the FL loss function. Numerical results demonstrate that the power control scheme offers significant reductions in mean squared error. Additionally, we provide simulation results on MNIST classification with a deep neural network that reveals significant improvements in classification accuracy for low-SNR scenarios.
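A toy Monte Carlo illustration of why retransmissions help AirComp aggregation: under a static channel with truncated channel-inversion power control, averaging M noisy over-the-air sums reduces the noise-induced estimation MSE by roughly a factor of M. The channel model, the peak power, and M are assumptions for illustration, not the optimal power control derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
K, M, noise_std, p_max = 20, 4, 1.0, 10.0
updates = rng.normal(size=K)                  # one scalar model update per device
h = np.abs(rng.normal(size=K)) + 0.1          # static channel magnitudes

# Truncated channel inversion: invert the channel unless the peak power binds.
amplitude = np.minimum(1.0 / h, np.sqrt(p_max))
target = np.sum(h * amplitude * updates)      # noiseless received sum

one_shot_mse, averaged_mse, trials = 0.0, 0.0, 2000
for _ in range(trials):
    rx = [target + rng.normal(scale=noise_std) for _ in range(M)]
    one_shot_mse += (rx[0] - target) ** 2 / trials
    averaged_mse += (np.mean(rx) - target) ** 2 / trials

print(round(one_shot_mse, 3), round(averaged_mse, 3))   # roughly noise_std**2 vs noise_std**2 / M
```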
Current sound-based practices and systems developed in both academia and industry point to convergent research trends that bring together the field of Sound and Music Computing with that of the Internet of Things. This article proposes a vision for the emerging field of the Internet of Sounds (IoS), which stems from such disciplines. The IoS relates to the network of Sound Things, i.e., devices capable of sensing, acquiring, processing, actuating, and exchanging data serving the purpose of communicating sound-related information. In the IoS paradigm, which merges under a unique umbrella the emerging fields of the Internet of Musical Things and the Internet of Audio Things, heterogeneous devices dedicated to musical and nonmusical tasks can interact and cooperate with one another and with other things connected to the Internet to facilitate sound-based services and applications that are globally available to the users. We survey the state-of-the-art in this space, discuss the technological and nontechnological challenges ahead of us and propose a comprehensive research agenda for the field.
This work proposes a reliable leakage detection methodology for water distribution networks (WDNs) using machine-learning strategies. Our solution aims at detecting leakage in WDNs using efficient machine-learning strategies. We analyze pressure measurements from pumps in district metered areas (DMAs) in Stockholm, Sweden, where we consider a residential DMA of the water distribution network. Our proposed methodology uses learning strategies from unsupervised learning (K-means and cluster validation techniques), and supervised learning (learning vector quantization algorithms). The learning strategies we propose have low complexity, and the numerical experiments show the potential of using machine-learning strategies in leakage detection for monitored WDNs. Specifically, our experiments show that the proposed learning strategies are able to obtain correct classification rates up to 93.98%.
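A minimal sketch of the two-stage learning pipeline on synthetic pressure features: K-means to explore the structure of the measurements, followed by a simple LVQ1 classifier trained on labeled normal/leak samples. The synthetic data, the number of clusters, and the LVQ settings are illustrative assumptions, not the DMA data or the exact algorithms evaluated in the work.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
normal = rng.normal(loc=[5.0, 4.8], scale=0.2, size=(200, 2))   # pressure features, no leak
leak = rng.normal(loc=[4.2, 3.9], scale=0.3, size=(60, 2))      # pressure features, leak
X = np.vstack([normal, leak])
y = np.array([0] * 200 + [1] * 60)

# Unsupervised stage: explore the cluster structure of the measurements.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("K-means centres:\n", np.round(km.cluster_centers_, 2))

# Supervised stage: LVQ1 with one prototype per class, initialised at the class means.
prototypes = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
lr = 0.05
for _ in range(20):
    for xi, yi in zip(X, y):
        j = int(np.argmin(np.linalg.norm(prototypes - xi, axis=1)))
        sign = 1.0 if j == yi else -1.0             # attract the winner if correct, repel if not
        prototypes[j] += sign * lr * (xi - prototypes[j])

pred = np.array([np.argmin(np.linalg.norm(prototypes - xi, axis=1)) for xi in X])
print("LVQ training accuracy:", round(float((pred == y).mean()), 3))
```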
With the unprecedented growth of signal processing and machine learning application domains, there has been a tremendous expansion of interest in distributed optimization methods to cope with the underlying large-scale problems. Nonetheless, inevitable system-specific challenges such as limited computational power, limited communication, latency requirements, measurement errors, and noises in wireless channels impose restrictions on the exactness of the underlying algorithms. Such restrictions have motivated the exploration of algorithms’ convergence behaviors under inexact settings. Despite the extensive research conducted in the area, it seems that the analysis of convergences of dual decomposition methods concerning primal optimality violations, together with dual optimality violations is less investigated. Here, we provide a systematic exposition of the convergence of feasible points in dual decomposition methods under inexact settings, for an important class of global consensus optimization problems. Convergences and the rate of convergences of the algorithms are mathematically substantiated, not only from a dual-domain standpoint but also from a primal-domain standpoint. Analytical results show that the algorithms converge to a neighborhood of optimality, the size of which depends on the level of underlying distortions.
Energy efficient control of energy systems in buildings is a widely recognized challenge due to the use of low temperature heating, renewable electricity sources, and the incorporation of thermal storage. Reinforcement Learning (RL) has been shown to be effective at minimizing the energy usage in buildings with maintained thermal comfort despite the high system complexity. However, RL has certain disadvantages that make it challenging to apply in engineering practices. In this review, we take a computer science approach to identifying three main categories of challenges of using RL for control of Building Energy Systems (BES). The three categories are the following: RL in single buildings, RL in building clusters, and multi-agent aspects. For each topic, we analyse the main challenges, and the state-of-the-art approaches to alleviate them. We also identify several future research directions on subjects such as sample efficiency, transfer learning, and the theoretical properties of RL in building energy systems. In conclusion, our review shows that the work on RL for BES control is still in its initial stages. Although significant progress has been made, more research is needed to realize the goal of RL-based control of BES at scale.
Over-the-air computation (AirComp) has recently emerged as an efficient analog method for data acquisition from wireless sensor devices. In essence, AirComp exploits the signal superposition property of a multiple access channel to estimate functions of the transmitted data points. Unless devices are excluded from participation, state-of-the-art AirComp methods do not achieve unbiased function computation, thereby introducing systematic errors in the acquired function. In this paper, we propose a new AirComp scheme that employs retransmissions to achieve probabilistically unbiased function computation. We solve a power control problem that minimizes the bias subject to a peak transmission power constraint. We show that the optimal power control follows a greedy structure that maximizes the devices’ contribution to the received function at every retransmission. Numerical results show that the proposed scheme can achieve unbiased function computation with a few retransmissions and drastically reduce the mean squared error in the function estimation compared to the current state-of-the-art.
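A hedged sketch of the greedy structure described above: in every retransmission, each device contributes as much of its remaining target amplitude as the peak power allows, so that after enough rounds the accumulated contributions match a common target and the aggregated function becomes unbiased. Static channels, the unit target, and the toy dimensions are simplifying assumptions, not the paper's exact power control.

```python
import numpy as np

rng = np.random.default_rng(5)
K, M, p_max = 8, 3, 1.0
h = np.abs(rng.normal(size=K)) + 0.05     # static channel gains
target = 1.0                              # desired common received amplitude per device

accumulated = np.zeros(K)
powers = np.zeros((M, K))
for m in range(M):
    remaining = target - accumulated
    amp_at_peak = np.sqrt(p_max) * h      # amplitude achievable at peak power this round
    amp = np.minimum(remaining, amp_at_peak)   # greedy: push as much as possible
    powers[m] = (amp / h) ** 2            # transmit power realizing this amplitude
    accumulated += amp

print("devices reaching the target:", int(np.sum(np.isclose(accumulated, target))))
print("per-round powers:\n", np.round(powers, 3))
```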
Federated Edge Learning (FEEL) is a distributed machine learning technique where each device contributes to training a global inference model by independently performing local computations with their data. More recently, FEEL has been merged with over-the-air computation (OAC), where the global model is calculated over the air by leveraging the superposition of analog signals. However, when implementing FEEL with OAC, there is the challenge of how to precode the analog signals to overcome any time misalignment at the receiver. In this work, we propose a novel synchronization-free method to recover the parameters of the global model over the air without requiring any prior information about the time misalignments. For that, we construct a convex optimization problem based on norm minimization to directly recover the global model by solving a convex semi-definite program. The performance of the proposed method is evaluated in terms of accuracy and convergence via numerical experiments. We show that our proposed algorithm is within 10% of the ideal synchronized scenario, and performs 4× better than the simple case where no recovery method is used.
Federated Learning (FL) plays a prominent role in solving machine learning problems with data distributed across clients. In FL, to reduce the communication overhead of data between clients and the server, each client communicates the local FL parameters instead of the local data. However, when a wireless network connects clients and the server, the communication resource limitations of the clients may prevent completing the training of the FL iterations. Therefore, communication-efficient variants of FL have been widely investigated. Lazily Aggregated Quantized Gradient (LAQ) is one of the promising communication-efficient approaches to lower resource usage in FL. However, LAQ assigns a fixed number of bits for all iterations, which may be communication-inefficient when the number of iterations is medium to high or convergence is approaching. This paper proposes Adaptive Lazily Aggregated Quantized Gradient (A-LAQ), a method that significantly extends LAQ by assigning an adaptive number of communication bits during the FL iterations. We train FL under an energy-constrained condition and investigate the convergence analysis for A-LAQ. The experimental results highlight that A-LAQ outperforms LAQ by up to a 50% reduction in spent communication energy and an 11% increase in test accuracy.
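The two ingredients can be sketched as follows: a uniform gradient quantizer whose bit budget changes between iterations, combined with a lazy rule that skips the upload when the quantized gradient has barely changed. The bit schedule and the skipping threshold below are illustrative assumptions, not the A-LAQ design itself.

```python
import numpy as np

def quantize(g, bits):
    """Uniform quantization of a gradient vector to the given number of bits."""
    levels = 2 ** bits
    scale = np.max(np.abs(g)) + 1e-12
    q = np.round((g / scale + 1.0) / 2.0 * (levels - 1))
    return (2.0 * q / (levels - 1) - 1.0) * scale

rng = np.random.default_rng(6)
last_sent = np.zeros(10)
for t in range(1, 6):
    grad = rng.normal(size=10) / t            # toy gradient shrinking over the iterations
    bits = max(2, 8 - t)                      # adaptive bit budget per iteration (assumption)
    q = quantize(grad, bits)
    if np.linalg.norm(q - last_sent) > 0.1 * np.linalg.norm(last_sent) + 1e-3:
        last_sent = q                         # upload the quantized gradient
        print(f"iteration {t}: sent with {bits} bits")
    else:
        print(f"iteration {t}: skipped upload")
```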
Federated learning (FL) has emerged as an instance of distributed machine learning paradigm that avoids the transmission of data generated on the users’ side. Although data are not transmitted, edge devices have to deal with limited communication bandwidths, data heterogeneity, and straggler effects due to the limited computational resources of users’ devices. A prominent approach to overcome such difficulties is FedADMM, which is based on the classical two-operator consensus alternating direction method of multipliers (ADMM). The common assumption of FL algorithms, including FedADMM, is that they learn a global model using data only on the users’ side and not on the edge server. However, in edge learning, the server is expected to be near the base station and often has direct access to rich datasets. In this paper, we argue that it is much more beneficial to leverage the rich data on the edge server than utilizing only user datasets. Specifically, we show that the mere application of FL with an additional virtual user node representing the data on the edge server is inefficient. We propose FedTOP-ADMM, which generalizes FedADMM and is based on a three-operator ADMM-type technique that exploits a smooth cost function on the edge server to learn a global model in parallel to the edge devices. Our numerical experiments indicate that FedTOP-ADMM has substantial gain up to 33% in communication efficiency to reach a desired test accuracy with respect to FedADMM, including a virtual user on the edge server.
Looking at the last decade of evolution, 4G and current 5G (4th/5th generation of mobile communications) boosted the integration of services (other than voice and text) provided to end-users, marking a significant discontinuity with previous generations. The maintenance of such a trend can be envisaged, imagining today what 6G (6th generation) and, on a longer term, Future Networks (FNs) will be. Yet, within such a hectic scenario, the trend of a steadily increasing human-centricity of services and functionalities is clear. In this work, we claim that pursuing such an advancement exclusively from the point of view of technology and engineering may be limiting and, ultimately, potentially risky. To this end, we identify the development of an open and supra-disciplinary framework of knowledge, as a proper base to develop 6G and FNs. Such a model, apart from engineering-based disciplines, will have to be enriched by interactions with other sciences, among which psychology, cognitive and behavioral science are to be mentioned. In our vision, this will be the key to realize the concept of a Network of Feelings (NoF), which we regard as a relevant feature to support relentless integration of services and features in 6G and FNs.
Inference carried out on pretrained deep neural networks (DNNs) is particularly effective as it does not require retraining and entails no loss in accuracy. Unfortunately, resource-constrained devices such as those in the Internet of Things may need to offload the related computation to more powerful servers, particularly, at the network edge. However, edge servers have limited resources compared to those in the cloud; therefore, inference offloading generally requires dividing the original DNN into different pieces that are then assigned to multiple edge servers. Related approaches in the state-of-the-art either make strong assumptions on the system model or fail to provide strict performance guarantees. This article specifically addresses these limitations by applying distributed assignment to DNN inference at the edge. In particular, it devises a detailed model of DNN-based inference, suitable for realistic scenarios involving edge computing. Optimal inference offloading with load balancing is also defined as a multiple assignment problem that maximizes proportional fairness. Moreover, a distributed algorithm for DNN inference offloading is introduced to solve such a problem in polynomial time with strong optimality guarantees. Finally, extensive simulations employing different data sets and DNN architectures establish that the proposed solution significantly improves upon the state-of-the-art in terms of inference time (1.14 to 2.62 times faster), load balance (with Jain’s fairness index of 0.9), and convergence (one order of magnitude fewer iterations).
As data generation increasingly takes place on devices without a wired connection, Machine Learning (ML) related traffic will be ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable to support ML, which creates the need for new wireless communication methods. In this monograph, we give a comprehensive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature, analog over-the-air computation and digital radio resource management optimized for ML. This survey gives an introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
Although signal distortion-based peak-to-average power ratio (PAPR) reduction is a feasible candidate for orthogonal frequency division multiplexing (OFDM) to meet standard/regulatory requirements, the error vector magnitude (EVM) stemming from the PAPR reduction has a deleterious impact on the performance of high data-rate achieving multiple-input multiple-output (MIMO) systems. Moreover, these systems must constrain the adjacent channel leakage ratio (ACLR) to comply with regulatory requirements. Several recent works have investigated the mitigation of the EVM seen at the receivers by capitalizing on the excess spatial dimensions inherent in the large-scale MIMO that assume the availability of perfect channel state information (CSI) with spatially uncorrelated wireless channels. Unfortunately, practical systems operate with erroneous CSI and spatially correlated channels. Additionally, most standards support user-specific/CSI-aware beamformed and cell-specific/non-CSI-aware broadcasting channels. Hence, we formulate a robust EVM mitigation problem under channel uncertainty with nonconvex PAPR and ACLR constraints catering to beamforming/broadcasting. To solve this formidable problem, we develop an efficient scheme using our recently proposed three-operator alternating direction method of multipliers (TOP-ADMM) algorithm and benchmark it against two three-operator algorithms previously presented for machine learning purposes. Numerical results show the efficacy of the proposed algorithm under imperfect CSI and spatially correlated channels.
In this paper, we propose a scheme optimizing the per-user channel sensing duration in millimeter-wave (mmWave) multi-user multiple-input single-output (MU-MISO) systems. For each user, the base station (BS) predicts the effective rate to be achieved after pilot transmission. Then, the channel sensing duration of each user is optimized by ending the pilot transmission when the predicted rate is lower than the current rate. The robust regularized zero-forcing (RRZF) precoder and equal power allocation (EPA) are adopted to transmit sensing pilots and data. Numerical results show that the more severe the interference among users, the longer the channel sensing duration required. Moreover, the proposed scheme results in a higher sum rate compared to benchmark schemes.
The over-the-air controller was recently proposed to enable efficient computation of the control signal for control systems, by leveraging the over-the-air computation concept. This letter introduces a transmit power allocation scheme for the over-the-air controller, where the wireless channel directly produces the control gain of a discrete-time linear control system. The proposed design scheme essentially minimizes the worst effect of the channel noise on the desired control system output, subject to the transmit power limit over the wireless channel. Despite the non-convexity of the problem, we derive the control cost criterion as linear matrix inequalities with transmit power constraints. We comprehensively investigate the control performance and the transmit power cost of the proposed scheme in various scenarios. The proposed optimization scheme shows significant control performance gain against the state-of-the-art solution, while having a comparable transmit power consumption.
This paper deals with a distortion-based non-convex peak-to-average power ratio (PAPR) problem for large-scale multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Our work is motivated by the observation that the distortion stemming from the PAPR reduction schemes has a deleterious impact on the data rates of MIMO-OFDM systems. Recently, some approaches have been proposed to either null or mitigate such distortion seen at the receiver(s) side by exploiting the extra degrees of freedom when the downlink channel is perfectly known at the transmitter. Unfortunately, most of these proposed methods are not robust against channel uncertainty, since perfect channel knowledge is practically infeasible at the transmitter. Although some recent works utilize semidefinite programming to cope with channel uncertainty and non-convex PAPR problem, they have formidable computational complexity. Additionally, some prior-art techniques tackle the non-convex PAPR problem by minimizing the peak power, which renders a suboptimal solution. In this work, we showcase the application of powerful first-order optimization schemes, namely the three-operator alternating direction method of multipliers (ADMM)-type techniques, notably 1) three-operator ADMM, 2) Bregman ADMM, and 3) Davis-Yin splitting, to solve the non-convex and robust PAPR problem, yielding a near-optimal solution in a computationally efficient manner.
In the Internet of Things, learning is one of the most prominent tasks. In this paper, we consider an Internet of Things scenario where federated learning is used with simultaneous transmission of model data and wireless power. We investigate the trade-off between the number of communication rounds and communication round time while harvesting energy to compensate the energy expenditure. We formulate and solve an optimization problem by considering the number of local iterations on devices, the time to transmit and receive the model updates, and to harvest sufficient energy. Numerical results indicate that maximum ratio transmission and zero-forcing beamforming for the optimization of the local iterations on devices substantially boost the test accuracy of the learning task. Moreover, maximum ratio transmission instead of zero-forcing provides the best test accuracy and communication round time trade-off for various energy harvesting percentages. Thus, it is possible to learn a model quickly with few communication rounds without depleting the battery.
Federated Learning (FL) is a distributed machine learning technique designed to utilize the distributed datasets collected by our mobile and internet-of-things devices. As such, it is natural to consider wireless communication for FL. In wireless networks, Over-the-Air Computation (AirComp) can accelerate FL training by harnessing the interference of uplink gradient transmissions. However, since AirComp utilizes analog transmissions, it introduces an inevitable estimation error due to channel fading and noise. In this paper, we propose retransmissions as a method to reduce such estimation errors and thereby improve the FL classification accuracy. First, we derive the optimal power control scheme with retransmissions. Then we investigate the performance of FL with retransmissions analytically and find an upper bound on the FL loss function. The analysis indicates that our proposed retransmission scheme improves both the final classification accuracy after convergence and the convergence speed per communication round. Experimental results demonstrate that the introduction of retransmissions can give higher classification accuracy than one-shot uplink transmissions, without incurring extra communication costs or latency.
In this paper, a novel distributed control strategy addressing a (feasible) psycho-social-physical welfare problem in islanded Direct Current (DC) smart grids is proposed. Firstly, we formulate a (convex) optimization problem that allows prosumers to share current with each other, taking into account the technical and physical aspects and constraints of the grid (e.g., stability, safety), as well as psycho-social factors (i.e., prosumers’ personal values). Secondly, we design a controller whose (unforced) dynamics represent the continuous time primal-dual dynamics of the considered optimization problem. Thirdly, a passive interconnection between the physical grid and the controller is presented. Global asymptotic convergence of the closed-loop system to the desired steady-state is proved and simulations based on collected data on psycho-social aspects illustrate and confirm the theoretical results.
In the resource management of wireless networks, Federated Learning has been used to predict handovers. However, non-independent and identically distributed data degrade the accuracy performance of such predictions. To overcome the problem, Federated Learning can leverage data clustering algorithms and build a machine learning model for each cluster. However, traditional data clustering algorithms, when applied to the handover prediction, exhibit three main limitations: the risk of data privacy breach, the fixed shape of clusters, and the non-adaptive number of clusters. To overcome these limitations, in this paper, we propose a three-phased data clustering algorithm, namely: generative adversarial network-based clustering, cluster calibration, and cluster division. We show that the generative adversarial network-based clustering preserves privacy. The cluster calibration deals with dynamic environments by modifying clusters. Moreover, the divisive clustering explores the different number of clusters by repeatedly selecting and dividing a cluster into multiple clusters. A baseline algorithm and our algorithm are tested on a time series forecasting task. We show that our algorithm improves the performance of forecasting models, including cellular network handover, by 43%.
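A hedged sketch of the calibration and division phases only (the generative adversarial network-based clustering phase is omitted): calibration reassigns clients to the nearest cluster centre as their data drifts, and division splits the most dispersed cluster in two. The per-client feature vectors, the dispersion rule, and the splitting method are illustrative assumptions, not the proposed algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans

def calibrate(features, centers):
    """Reassign each client to its closest cluster centre and recompute the centres."""
    labels = np.argmin(np.linalg.norm(features[:, None] - centers[None], axis=2), axis=1)
    new_centers = np.array([features[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(len(centers))])
    return labels, new_centers

def divide(features, labels, centers):
    """Split the cluster with the largest average distance to its centre."""
    spread = [np.mean(np.linalg.norm(features[labels == c] - centers[c], axis=1))
              if np.any(labels == c) else 0.0 for c in range(len(centers))]
    worst = int(np.argmax(spread))
    members = features[labels == worst]
    if len(members) < 2:
        return centers
    sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit(members)
    return np.vstack([np.delete(centers, worst, axis=0), sub.cluster_centers_])

rng = np.random.default_rng(7)
feats = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(6, 1, (30, 2)), rng.normal(12, 1, (30, 2))])
centers = np.array([[0.0, 0.0], [9.0, 9.0]])          # deliberately too few clusters
labels, centers = calibrate(feats, centers)
centers = divide(feats, labels, centers)
print("number of clusters after division:", len(centers))
```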
In closed-loop wireless control systems, the state-of-the-art approach prescribes that a controller receives by wireless communications the individual sensor measurements, and then sends the computed control signal to the actuators. We propose an over-the-air controller scheme where all sensors attached to the plant simultaneously transmit scaled sensing signals directly to the actuator; then the feedback control signal is computed partially over the air and partially by a scaling operation at the actuator. Such over-the-air controller essentially adopts the over-the-air computation concept to compute the control signal for closed-loop wireless control systems. In contrast to the state-of-the-art sensor-to-controller and controller-to-actuator communication approach, the over-the-air controller exploits the superposition properties of multiple-access wireless channels to complete the communication and computation of a large number of sensing signals in a single communication resource unit. Therefore, the proposed scheme can obtain significant benefits in terms of low actuation delay and low wireless resource utilization by a simple network architecture that does not require a dedicated controller. Numerical results show that our proposed over-the-air controller achieves a huge widening of the stability region in terms of sampling time and delay, and a significant reduction of the computation error of the control signal.
Backscatter communication (BC) and radio-frequency energy harvesting (RF-EH) are two promising technologies for extending the battery lifetime of wireless devices. Although there have been some qualitative comparisons between these two technologies, quantitative comparisons are still lacking, especially for massive IoT networks. In this paper, we address this gap in the research literature, and perform a quantitative comparison between BC and RF-EH in massive IoT networks with multiple primary users and multiple low-power devices acting as secondary users. An essential feature of our model is that it includes the interferences caused by the secondary users to the primary users, and we show that these interferences significantly impact the system performance of massive IoT networks. For the RF-EH model, the power requirements of digital-to-analog and signal amplification are taken into account. We pose and solve a power minimization problem for BC, and we show analytically when BC is better than RF-EH. The results of the numerical simulations illustrate the significant benefits of using BC in terms of saving power and supporting massive IoT, compared to using RF-EH. The results also show that the backscatter coefficients of the BC devices must be individually tunable, in order to guarantee good performance of BC.
Although wireless networks are becoming a fundamental infrastructure for various control applications, they are inherently exposed to network faults such as lossy links and node failures in environments such as mining, outdoor monitoring, and chemical process control. In this paper, we propose a proactive fault-tolerant mechanism to protect the wireless network against temporal faults without any explicit network state information for mission-critical control systems. Specifically, the proposed mechanism optimizes the multiple routing paths, link scheduling, and traffic generation rate such that it meets the control stability demands even if it experiences multiple link faults and node faults. The proactive network relies on a constrained optimization problem, where the objective function is the network robustness, and the main constraints are the set of the traffic demand, link, and routing layer requirements. To analyze the robustness, we propose a novel performance metric called stability margin ratio, based on the network performance and the stability boundary. Our numerical and experimental performance evaluation shows that the traffic generation rate and the delay of wireless networks are as critical as the network reliability in guaranteeing the stability of control systems. Furthermore, the proposed proactive network provides more robust performance than practical state-of-the-art solutions while maintaining high energy efficiency.
We consider a set of binary random variables and address the open problems of inferring provable logical relations among these random variables, and prediction. We propose to solve these two problems by learning a Kolmogorov model (KM) for these random variables. Our proposed framework allows us to derive provable logical relations, i.e., mathematical relations among the outcomes of the random variables in the training set, and thus, extract valuable relations from that set. The proposed method to discover the logical relations is established using implication in mathematical logic, thereby offering a provable analytical basis for asserting these relations, unlike similar factorization methods. We also propose an efficient algorithm for learning the KM and show its first-order optimality, despite the combinatorial nature of the learning problem. We illustrate our general framework by applying it to recommendation systems and gene expression data. In recommendation systems, the proposed logical relations identify groups of items for which a user liking an item logically implies that he/she likes all items in that group. Our work is a significant step toward interpretable machine learning.
Although spectral precoding is a propitious technique to suppress out-of-band emissions, it has a detrimental impact on the system-wide throughput performance, notably, in high data-rate multiple-input multiple-output (MIMO) systems with orthogonal frequency division multiplexing (OFDM), because of (spatially-coloured) transmit error vector magnitude (TxEVM) emanating from spectral precoding. The first contribution of this paper is to propose two mask-compliant spectral precoding schemes, which mitigate the resulting TxEVM seen at the receiver by capitalizing on the immanent degrees-of-freedom in (massive) MIMO systems and consequently improve the system-wide throughput. Our second contribution introduces a new and simple three-operator consensus alternating direction method of multipliers (ADMM) algorithm, referred to as TOP-ADMM, which decomposes a large-scale problem into easy-to-solve subproblems. We employ the proposed TOP-ADMM-based algorithm, which offers computational efficiency, to solve the spectral precoding problems. Our third contribution presents substantial numerical results by using an NR release 15 compliant simulator. In the case of perfect channel knowledge at the transmitter, the proposed methods render similar block error rate and throughput performance as without spectral precoding while meeting out-of-band emission (OOBE) requirements at the transmitter. Further, no loss on the OOBE performance with a graceful degradation on the throughput is observed under channel uncertainty.
As the Internet of Musical Things (IoMusT) emerges, audio-specific operating systems (OSs) are required on embedded hardware to ease development and portability of IoMusT applications. Despite the increasing importance of IoMusT applications, in this article, we show that there is no OS able to fulfill the diverse requirements of IoMusT systems. To address such a gap, we propose the Elk Audio OS as a novel and open source OS in this space. It is a Linux-based OS optimized for ultra-low-latency and high-performance audio and sensor processing on embedded hardware, as well as for handling wireless connectivity to local and remote networks. Elk Audio OS uses the Xenomai real-time kernel extension, which makes it suitable for the most demanding of low-latency audio tasks. We provide the first comprehensive overview of Elk Audio OS, describing its architecture and the key components of interest to potential developers and users. We explain operational aspects like the configuration of the architecture and the control mechanisms of the internal sound engine, as well as the tools that enable an easier and faster development of connected musical devices. Finally, we discuss the implications of Elk Audio OS, including the development of an open source community around it.
Although in cellular networks full duplex and dynamic time-division duplexing promise increased spectrum efficiency, their potential is so far challenged by increased interference. While previous studies have shown that self-interference can be suppressed to a sufficient level, we show that the cross-link interference for both duplexing modes, especially from base station to base station, is the remaining challenge in multi-cell networks, restricting the uplink performance. Using beamforming techniques of low complexity, we show that this interference can be mitigated, and that full duplex and dynamic time-division duplexing can substantially increase the capacity of multi-cell networks. Our results suggest that if we can control the cross-link interference in full duplex, we can almost double the multi-cell network capacity as well as user throughput. Therefore, the techniques in this article have the potential to enable a smooth introduction of full duplex into cellular systems.
Full-duplex communications have the potential to almost double the spectral efficiency. To realize such a potentiality, the signal separation at the base station’s antennas plays an essential role. This article addresses the fundamentals of such separation by proposing a new smart antenna architecture that allows every antenna to be either shared or separated between uplink and downlink transmissions. The benefits of such architecture are investigated by an assignment problem to optimally assign antennas, beamforming and power to maximize the weighted sum spectral efficiency. We propose a near-to-optimal solution using block coordinate descent that divides the problem into assignment problems, which are NP-hard, and beamforming and power allocation problems. The optimal solutions for the beamforming and power allocation are established while near-to-optimal solutions to the assignment problems are derived by semidefinite relaxation. Numerical results indicate that the proposed solution is close to the optimum, and it maintains a similar performance for high and low residual self-interference powers. With respect to the usually assumed antenna separation technique and half-duplex transmission, the sum spectral efficiency gains increase with the number of antennas. We conclude that our proposed smart antenna assignment for signal separation is essential to realize the benefits of multiple antenna full-duplex communications.
Wireless communication is evolving to support critical control in automation systems. The fifth-generation (5G) mobile network air interface New Radio adopts a scalable numerology and mini-slot transmission for short packets that make it potentially suitable for critical control systems. The reliable minimum cycle time is an important indicator for industrial communication techniques but has not yet been investigated within 5G. To address such a question, this article considers 5G-based industrial networks and uses the delay optimization based on data-driven channel characterization (CCDO) approach to propose a method to evaluate the reliable minimum cycle time of 5G. Numerical results in three representative industrial environments indicate that following the CCDO approach, 5G-based industrial networks can achieve, in real-world scenarios, millisecond-level minimum cycle times to support several hundred nodes with reliability higher than 99.9999%.
In the aeronautics industry, wireless avionics intracommunications (WAICs) have a tremendous potential to improve efficiency and flexibility while reducing weight, fuel consumption, and maintenance costs over traditional wired avionics systems. This survey starts with an overview of the major benefits and opportunities in the deployment of wireless technologies for critical applications in an aircraft. The current state of the art is presented in terms of system classifications based on data rate demands and transceiver installation locations. We then discuss major technical challenges in the design and realization of the envisioned aircraft applications. Although WAIC has aspects and requirements similar to mission-critical applications of industrial automation, it also has specific issues, such as wireless channels, complex structures, operations, and safety of the aircraft that make this area of research self-standing and challenging. Existing wireless techniques are discussed to investigate the applicability of the current solutions for the critical operations of an aircraft. Specifically, IEEE 802.15.4-based and Bluetooth-based solutions are discussed for low data rate applications, whereas IEEE 802.11-based and UWB-based solutions are considered for high data rate applications. We conclude the survey by highlighting major research directions in this emerging area.
An important task in the Internet of Things (IoT) is field monitoring, where multiple IoT nodes take measurements and communicate them to the base station or the cloud for processing, inference, and analysis. When the measurements are high-dimensional (e.g., videos or time-series data), IoT networks with limited bandwidth and low-power devices may not be able to support such frequent transmissions with high data rates. To ensure communication efficiency, this article proposes to model the measurement compression at IoT nodes and the inference at the base station or cloud as a deep neural network (DNN). We propose a new framework where the data to be transmitted from nodes are the intermediate outputs of a layer of the DNN. We show how to learn the model parameters of the DNN and study the trade-off between the communication rate and the inference accuracy. The experimental results show that we can save approximately 96 percent of transmissions with only a degradation of 2.5 percent in inference accuracy, which shows the potential to enable many new IoT data analysis applications that generate a large amount of measurements.
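A minimal sketch of the split-inference idea follows, using an assumed two-layer NumPy network; the layer sizes, the split point, and the absence of training are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 64)), np.zeros(16)   # node-side layer (compresses 64 -> 16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)     # cloud-side layer (classifier head)

def node_side(measurement):
    """Runs on the IoT node: compute and transmit a low-dimensional intermediate output."""
    return np.maximum(W1 @ measurement + b1, 0.0)   # ReLU features, 16 values instead of 64

def cloud_side(features):
    """Runs at the base station / cloud: finish the inference from the received features."""
    logits = W2 @ features + b2
    return int(np.argmax(logits))

x = rng.normal(size=64)            # high-dimensional measurement
tx = node_side(x)                  # only 16 floats cross the wireless link
print(cloud_side(tx))
```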
In millimeter-wave (mmWave) wireless communications, the duration of the channel estimation plays a major role to establish the links before data transmission. However, fixed or long channel estimation can substantially hinder the achievable transmit data rates. In this letter, we propose a new scheme that optimizes the channel estimation duration to establish the link between the base station (BS) and a mobile station (MS) in mmWave communications. Before pilot transmissions, the BS predicts the downlink effective rate that would be achieved after channel estimation with the pilot precoder and compares this predicted rate with the current rate, based on the current channel estimates. The proposed scheme optimizes the mmWave channel estimation duration by ending pilot transmissions when the predicted rate is lower than the current rate.
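The stopping rule can be summarized by the short sketch below, where the rate values are hypothetical placeholders for the predicted post-estimation rate and the rate achievable with the current channel estimate.

```python
def continue_pilots(predicted_rate_after_estimation, current_rate):
    """Stop pilot transmission as soon as further estimation is not expected to pay off."""
    return predicted_rate_after_estimation > current_rate

# Illustrative loop: hypothetical rate predictions over successive pilot slots.
current_rate = 0.0
predicted = [1.2, 1.8, 2.1, 2.15, 2.1]   # diminishing returns as estimates improve
for t, pred in enumerate(predicted):
    if not continue_pilots(pred, current_rate):
        print(f"stop pilots after slot {t}")
        break
    current_rate = pred
```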
Spectral precoding is a promising technique to suppress out-of-band emissions and comply with leakage constraints over adjacent frequency channels and with mask requirements on the unwanted emissions. However, spectral precoding may distort the original data vector, which is formally expressed as the error vector magnitude (EVM) between the precoded and original data vectors. Notably, EVM has a deleterious impact on the performance of multiple-input multiple-output orthogonal frequency division multiplexing-based systems. In this paper we propose a novel spectral precoding approach which constrains the EVM while complying with the mask requirements. We first formulate and solve the EVM-unconstrained mask-compliant spectral precoding problem, which serves as a springboard to the design of two EVM-constrained spectral precoding schemes. The first scheme takes into account a wideband EVM-constraint which limits the average in-band distortion. The second scheme takes into account frequency-selective EVM-constraints, and consequently, limits the signal distortion at the subcarrier level. Numerical examples illustrate that both proposed schemes outperform previously developed schemes in terms of important performance indicators such as block error rate and system-wide throughput while complying with spectral mask and EVM constraints.
The computational detection of musical patterns is widely studied in the field of Music Information Retrieval and has numerous applications. However, pattern detection in real-time has not yet received adequate attention. The real-time detection is important in several application domains, especially in the field of the Internet of Musical Things. This study considers a single musical instrument and investigates the real-time detection of patterns in a monophonic music stream. We present a representation mechanism to denote musical notes as a single column matrix, whose content corresponds to three key attributes of each musical note: pitch, amplitude, and duration. The note attributes are obtained from a symbolic MIDI representation. Based on such representation, we compare the most prominent candidate methods based on neural networks and one deterministic method. Numerical results show the accuracy of each method, and allow us to characterize the trade-offs among those methods.
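A possible realization of the single-column note representation, with illustrative MIDI-style values, is sketched below.

```python
import numpy as np

def note_vector(pitch, amplitude, duration):
    """Single-column representation of a MIDI note: [pitch; amplitude; duration]."""
    return np.array([[pitch], [amplitude], [duration]], dtype=float)

# e.g. middle C (MIDI 60), velocity 90, lasting 0.5 s (values are illustrative)
print(note_vector(60, 90, 0.5))
```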
Backscatter communication is a promising solution for enabling information transmission between ultra-low-power devices, but its potential is not fully understood. One major problem is dealing with the interference between the backscatter devices, which is usually not taken into account, or simply treated as noise in the cases where there are a limited number of backscatter devices in the network. In order to better understand this problem in the context of massive IoT (Internet of Things), we consider a network with a base station having one antenna, serving one primary user, and multiple IoT devices, called secondary users. We formulate an optimization problem with the goal of minimizing the needed transmit power for the base station, while the ratio of backscattered signal, called backscatter coefficient, is optimized for each of the IoT devices. Such an optimization problem is non-convex and thus finding an optimal solution in real-time is challenging. In this paper, we prove necessary and sufficient conditions for the existence of an optimal solution, and show that it is unique. Furthermore, we develop an efficient solution algorithm, only requiring solving a linear system of equations with as many unknowns as the number of secondary users. The simulation results show a lower energy outage probability by up to 40-80 percentage points in dense networks with up to 150 secondary users. To our knowledge, this is the first work that studies backscatter communication in the context of massive IoT, also taking into account the interference between devices.
Wireless communication is gaining popularity in the industry for its simple deployment, mobility, and low cost. Ultralow latency and high reliability requirements of mission-critical industrial applications are highly demanding for wireless communication, and the indoor industrial environment is hostile to wireless communication due to the richness of reflection and obstacles. Assessing the effect of the industrial environment on the reliability and latency of wireless communication is a crucial task, yet it is challenging to accurately model the wireless channel in various industrial sites. In this article, based on the comprehensive channel measurement results from the National Institute of Standards and Technology at 2.245 and 5.4 GHz, we quantify the reliability degradation of wireless communication in multipath fading channels. A delay optimization based on the channel characterization is then proposed to minimize packet transmission times of a cyclic prefix orthogonal frequency division multiplexing system under a reliability constraint at the physical layer. When the transmission bandwidth is abundant and the payload is short, the minimum transmission time is found to be restricted by the optimal cyclic prefix duration, which is correlated with the communication distance. Results further reveal that using relays may, in some cases, reduce end-to-end latency in industrial sites, as achievable minimum transmission time significantly decreases at short communication ranges.
This paper considers a general class of iterative algorithms performing a distributed training task over a network where the nodes have background traffic and communicate through a shared wireless channel. Focusing on the carrier-sense multiple access with collision avoidance (CSMA/CA) as the main communication protocol, we investigate the mini-batch size and convergence of the training algorithm as a function of the communication protocol and network settings. We show that, given a total latency budget to run the algorithm, the training performance becomes worse as either the background traffic or the dimension of the training problem increases. We then propose a lightweight algorithm to regulate the network congestion at every node, based on local queue size with no explicit signaling with other nodes, and demonstrate the performance improvement due to this algorithm. We conclude that a co-design of distributed optimization algorithms and communication protocols is essential for the success of machine learning over wireless networks and edge computing.
This chapter is devoted to the use of machine learning (ML) tools to address the spectrum-sharing problem in cellular networks. The emphasis is on a hybrid approach that combines the traditional model-based approach with a data-driven ML approach. Taking a millimeter-wave cellular network as an application case, the theoretical analyses and experiments presented in the chapter show that the proposed hybrid approach is a very promising solution in dealing with the key technical aspects of spectrum sharing: the choice of beamforming, the level of information exchange for coordination and association, and the sharing architecture. The chapter then focuses on motivation and background related to spectrum sharing. It also presents the system model and problem formulation, and focuses on all technical aspects of the proposed hybrid approach. Finally, the chapter discusses further issues and conclusions.
This paper addresses the problem of distributed training of a machine learning model over the nodes of a wireless communication network. Existing distributed training methods are not explicitly designed for these networks, which usually have physical limitations on bandwidth, delay, or computation, thus hindering or even blocking the training tasks. To address such a problem, we consider a general class of algorithms where the training is performed by iterative distributed computations across the nodes. We assume that the nodes have some background traffic and communicate using the slotted-ALOHA protocol. We propose an iteration-termination criterion to investigate the trade-off between achievable training performance and the overall cost of running the algorithms. We show that, given a total running budget, the training performance becomes worse as either the background communication traffic or the dimension of the training problem increases. We conclude that a co-design of distributed optimization algorithms and communication protocols is essential for the success of machine learning over wireless networks and edge computing.
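A termination rule of the kind discussed above might look like the sketch below; the improvement threshold and the budget accounting are assumptions for illustration, not the criterion derived in the paper.

```python
def should_stop(loss_history, comm_cost_per_iter, budget_remaining, min_gain=1e-3):
    """Terminate when the remaining budget cannot cover another round, or when the
    marginal loss improvement no longer justifies the extra communication."""
    if budget_remaining < comm_cost_per_iter:
        return True
    if len(loss_history) < 2:
        return False
    return (loss_history[-2] - loss_history[-1]) < min_gain

# Illustrative call: the last round improved the loss by only 0.0005.
print(should_stop([0.9, 0.5, 0.45, 0.4495], comm_cost_per_iter=1.0, budget_remaining=3.0))
```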
Wirelessly-powered sensor networks (WPSNs) are becoming increasingly important in different monitoring applications. We consider a WPSN where a multiple-antenna base station, which is dedicated for energy transmission, sends pilot signals to estimate the channel state information and consequently shapes the energy beams toward the sensor nodes. Given a fixed energy budget at the base station, in this paper, we investigate the novel problem of optimally allocating the power for the channel estimation and for the energy transmission. We formulate this non-convex optimization problem for general channel estimation and beamforming schemes that satisfy some qualification conditions. We provide a new solution approach and a performance analysis in terms of optimality and complexity. We also present a closed-form solution for the case where the channels are estimated based on a least square channel estimation and a maximum ratio transmit beamforming scheme. The analysis and simulations indicate a significant gain in terms of the network sensing rate, compared to the fixed power allocation, and the importance of improving the channel estimation efficiency.
An innovative feature of the 5th Generation mobile network (5G) is to consider industrial applications as use cases for which its new radio access, 5G New Radio, aims to provide ultra low latency and ultra high reliability performance. These requirements are fulfilled by minimizing standard performance indicators such as end-to-end latency and packet error rate. However, industrial control applications typically require periodic exchange of small data, where the ability of networks to support short and deterministic cycle times is the main key performance indicator. This paper proposes a methodology to evaluate the achievable cycle time of an industrial network deployed over the 5G New Radio specifications. Numerical results show that 5G can achieve millisecond-level cycle times with network sizes of several hundred nodes, which is promising for many factory automation applications.
This paper proposes a new class of augmented musical instruments, “Smart Instruments”, which are characterized by embedded computational intelligence, bidirectional wireless connectivity, an embedded sound delivery system, and an onboard system for feedback to the player. Smart Instruments bring together separate strands of augmented instrument, networked music and Internet of Things technology, offering direct point-to-point communication between each other and other portable sensor-enabled devices, without need for a central mediator such as a laptop. This technological infrastructure enables an ecosystem of interoperable devices connecting performers as well as performers and audiences, which can support new performer-performer and audience-performer interactions. As an example of the Smart Instruments concept, this paper presents the Sensus Smart Guitar, a guitar augmented with sensors, onboard processing and wireless communication.
In this paper we propose to extend the concept of the Internet of Things to the musical domain leading to a subfield coined as the Internet of Musical Things (IoMUT). IoMUT refers to the network of computing devices embedded in physical objects (Musical Things) dedicated to the production and/or reception of musical content. Musical Things, such as smart musical instruments or smart devices, are connected by an infrastructure that enables multidirectional communication, both locally and remotely. The IoMUT digital ecosystem gathers interoperable devices and services that connect performers and audiences to support performer-performer and audience-performer interactions, not possible beforehand. The paper presents the main concepts of IoMUT and discusses the related implications and challenges.
A lightweight distributed MAC protocol is proposed in this paper to regulate the coexistence of high-priority (primary) and low-priority (secondary) wireless devices in cognitive wireless sensor networks. The protocol leverages the available spectrum resources while guaranteeing stringent quality of service requirements. By sensing the congestion level of the channel with local measurements and without any message exchange, a novel adaptive congestion control protocol is developed by which every device independently decides whether it should continue operating on a channel, or vacate it in case of saturation. The proposed protocol dynamically changes the congestion level based on variations of the non-stationary network. The protocol also determines the optimal number of active secondary devices needed to maximize the channel utilization without sacrificing latency requirements of the primary devices. This protocol has almost no signaling and computational overheads and can be directly implemented on top of existing wireless protocols without any hardware/equipment modification. Experimental results show substantial performance enhancement compared to the existing protocols and provide useful insights on low-complexity distributed adaptive MAC mechanisms in cognitive wireless sensor networks.
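The local, signaling-free decision at the core of the protocol can be caricatured as follows; the fixed busy-fraction threshold is a placeholder, whereas the paper develops an adaptive rule.

```python
def stay_on_channel(busy_fraction, congestion_threshold=0.7):
    """Secondary device decides locally whether to keep using the channel or vacate it."""
    return busy_fraction < congestion_threshold

# Hypothetical locally sensed busy fractions over successive sensing windows.
for busy in [0.35, 0.55, 0.72, 0.81]:
    print(busy, "stay" if stay_on_channel(busy) else "vacate")
```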
In this paper, we consider a mobile edge computing (MEC) network, that is wirelessly powered. Each user harvests wireless energy and follows a binary computation offloading policy, i.e., it either executes the task locally or offloads it to the MEC as a whole. For the offloading users, non-orthogonal multiple access (NOMA) is adopted for information transmission. We consider rate-adaptive computational tasks and aim at maximizing the sum computation rate of all users by jointly optimizing the individual computing mode selection (local computing or offloading), the time allocations for energy transfer and for information transmission, together with the local computing speed or the transmission power level. The major difficulty of the rate maximization problem lies in the combinatorial nature of the multiuser computing mode selection and its involved coupling with the time allocation. We also study the case where the offloading users adopt time division multiple access (TDMA) as a benchmark, and derive the optimal time sharing among the users. We show that the maximum achievable rate is the same for the TDMA and the NOMA system, and in the case of NOMA it is independent of the decoding order, which can be exploited to improve system fairness. To maximize the sum computation rate, for the mode selection we propose a greedy solution based on the wireless channel gains, combined with the optimal allocation of energy transfer time. Numerical results show that the proposed solution maximizes the computation rate in homogeneous networks, and binary offloading leads to significant gains. Moreover, NOMA increases the fairness of rate distribution among the users significantly, when compared with TDMA.
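The greedy channel-gain-based mode selection can be pictured with the sketch below; the sum-rate surrogate and the sweep over the number of offloading users are illustrative simplifications of the paper's joint optimization.

```python
import numpy as np

def greedy_mode_selection(channel_gains, num_offloaders):
    """Offload the users with the strongest channels; the others compute locally."""
    order = np.argsort(channel_gains)[::-1]
    offload = np.zeros(len(channel_gains), dtype=bool)
    offload[order[:num_offloaders]] = True
    return offload

gains = np.array([0.9, 0.1, 0.5, 0.05, 0.7])

def surrogate_sum_rate(offload, local_rate=0.2):
    """Crude surrogate: log-rate for offloaders, a fixed local computing rate otherwise."""
    return np.log2(1.0 + gains[offload]).sum() + local_rate * (~offload).sum()

best_k = max(range(len(gains) + 1),
             key=lambda k: surrogate_sum_rate(greedy_mode_selection(gains, k)))
print(best_k, greedy_mode_selection(gains, best_k))
```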
The integration of volatile renewable energy into distribution networks on a large scale will demand advanced voltage control algorithms. Communication will be an integral part of these algorithms, however, it is unclear what kind of communication protocols will be most effective for the task. Motivated by such questions, this paper investigates how voltage control can be achieved using event triggered communications. In particular, we consider online algorithms that require the network’s buses to communicate only when their voltage is outside a feasible operation range. We prove the convergence of these algorithms to an optimal operating point at the rate O(1/τ), assuming linearized power flows. We illustrate the performance of the algorithms on the full nonlinear AC power flow in simulations. Our results show that event-triggered protocols can significantly reduce the communication for smart grid control.
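A single-bus caricature of an event-triggered update is sketched below; the feasible voltage band, gain, and corrective step are assumptions, not the algorithm analyzed in the paper.

```python
def event_triggered_step(voltage, v_min, v_max, setpoint, gain=0.1):
    """Communicate and update only when the bus voltage leaves the feasible range."""
    if v_min <= voltage <= v_max:
        return None                      # no event: stay silent, keep the current injection
    # event: report the violation and take a gradient-like corrective step
    return -gain * (voltage - setpoint)

print(event_triggered_step(1.02, 0.95, 1.05, 1.0))   # inside the range -> None
print(event_triggered_step(1.08, 0.95, 1.05, 1.0))   # overvoltage -> corrective update
```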
In electricity distribution networks, the increasing penetration of renewable energy generation necessitates faster and more sophisticated voltage controls. Unfortunately, recent research shows that local voltage control fails in achieving the desired regulation, unless there is communication between the controllers. However, the communication infrastructure for distribution systems is less reliable and less ubiquitous compared to that for the bulk transmission system. In this paper, we design distributed voltage control that uses limited communication. That is, only neighboring buses need to communicate a few bits between each other for each control step. We investigate how these controllers can achieve the desired asymptotic behavior of the voltage regulation and we provide upper bounds on the number of bits that are needed to ensure a predefined accuracy of the regulation. Finally, we illustrate the results by numerical simulations.
Full-duplex base-stations with half-duplex nodes, allowing simultaneous uplink and downlink from different nodes, have the potential to double the spectrum efficiency without adding additional complexity at mobile nodes. Hybrid beamforming is commonly used in millimeter wave systems for its implementation efficiency. An important element of hybrid beamforming is quantized phase shifters. In this paper, we ask if low-resolution phase shifters suffice for beamforming-based full-duplex millimeter wave systems. We formulate the problem of joint design for both self-interference suppression and downlink beamforming as an optimization problem, which we solve using penalty dual decomposition to obtain a near-optimal solution. Numerical results indicate that low-resolution phase shifters can perform close to systems that use infinite phase shifter resolution, and that even a single quantization bit outperforms half-duplex transmissions in both low and high residual self-interference scenarios.
To make the system available at low cost, millimeter-wave (mmWave) multiple-input multiple-output (MIMO) architectures employ analog arrays, which are driven by a limited number of radio frequency (RF) chains. One primary challenge of using large hybrid analog-digital arrays is that the digital baseband cannot directly access the signal to/from each antenna. To address this limitation, recent research has focused on retransmissions, iterative precoding, and subspace decomposition methods. Unlike these approaches that exploited the channel’s low rank, in this work we exploit the sparsity of the received signal at both the transmit/receive antennas. While the signal itself is de facto dense, it is well-known that most signals are sparse under an appropriate choice of basis. By delving into the structured compressive sensing (CS) framework and adapting it to variants of the mmWave hybrid architectures, we provide methodologies to recover the analog signal at each antenna from the (low-dimensional) digital signal. Moreover, we characterize the minimal numbers of measurements and RF chains that provide this recovery with high probability. We discuss their applications to common variants of the hybrid architecture. By leveraging the inherent sparsity of the received signal, our analysis reveals that a hybrid MIMO system can be "turned into" a fully digital one: the number of needed RF chains increases logarithmically with the number of antennas.
Finding a dataset of minimal cardinality to characterize the optimal parameters of a model is of paramount importance in machine learning and distributed optimization over a network. This paper investigates the compressibility of large datasets. More specifically, we propose a framework that jointly learns the input-output mapping as well as the most representative samples of the dataset (sufficient dataset). Our analytical results show that the cardinality of the sufficient dataset increases sub-linearly with respect to the original dataset size. Numerical evaluations of real datasets reveal a large compressibility, up to 95%, without a noticeable drop in the learnability performance, measured by the generalization error.
As the specifications of the 5th generation of cellular networks mature, the deployment phase is starting up. Hence, peaks of data rates in the order of tens of Gbit/s as well as more energy efficient deployments are expected. Nevertheless, the quick development of new applications and services encourages the research community to look beyond 5G and explore new technological components. Indeed, to meet the increasing demand for mobile broadband as well as internet of things type of services, the research and standardization communities are currently investigating novel physical and medium access layer technologies, including further virtualization of networks, the use of the lower Terahertz bands, even higher cell densification, and full-duplex (FD) communications. FD has been proposed as one of the enabling technologies to increase the spectral efficiency of conventional wireless transmission modes, by overcoming our prior understanding that it is not possible for radios to transmit and receive simultaneously on the same time-frequency resource. Due to this, we can also refer to FD communications as in-band FD. In-band FD transceivers have the potential of improving the attainable spectral efficiency of traditional wireless networks operating with half-duplex (HD) transceivers by a factor close to two. In addition to the spectral efficiency gains, full-duplex can provide gains in the medium access control layer, in which problems such as the hidden/exposed nodes and collision detection can be mitigated and the energy consumption can be reduced. Until recently, in-band FD was not considered as a solution for wireless networks due to the inherent interference created from the transmitter to its own receiver, the so-called self-interference (SI). However, recent advancements in antenna and analog/digital interference cancellation techniques demonstrate FD transmissions as a viable alternative to traditional HD transmissions. Given the recent architectural progression of 5G towards smaller cells, higher densification, higher number of antennas and utilizing the millimeter wave (mmWave) band, the integration of FD communications into such scenarios is appealing. In-band FD communications are suited for short-range communication, and although the SI remains a challenge, the use of multiple antennas and the transmission in the mmWave band are allies that help to mitigate the SI in the spatial domain and provide even more gains for spectral and energy efficiency. To achieve the spectral and energy efficiency gains, it is important to understand the challenges and solutions, which can be roughly divided into resource allocation, protocol design, hardware design and energy harvesting. Hence, FD communications appear as an important technology component to improve the spectral and energy efficiency of current communication systems and help to meet the goals of 5G and beyond. The chapter starts with an overview of FD communications, including its challenges and solutions. Next, a comprehensive literature review of energy efficiency in FD communications is presented along with the key solutions to improve energy efficiency. Finally, we evaluate the key aspects of energy efficiency in FD communications for two scenarios: a single cell with multiple users in a pico-cell scenario, and a system level evaluation with macro- and small-cells with multiple users.
This paper proposes a new large-scale mask-compliant spectral precoder (LS-MSP) for orthogonal frequency division multiplexing systems. In this paper, we first consider a previously proposed mask-compliant spectral precoding scheme that utilizes a generic convex optimization solver which suffers from high computational complexity, notably in large-scale systems. To mitigate the complexity of computing the LS-MSP, we propose a divide-and-conquer approach that breaks the original problem into smaller rank-1 quadratic-constraint problems, each of which yields a closed-form solution. Based on these solutions, we develop three specialized first-order low-complexity algorithms, based on 1) projection on convex sets and 2) the alternating direction method of multipliers. We also develop an algorithm that capitalizes on the closed-form solutions for the rank-1 quadratic constraints, which is referred to as 3) semianalytical spectral precoding. Numerical results show that the proposed LS-MSP techniques outperform previously proposed techniques in terms of the computational burden while complying with the spectrum mask. The results also indicate that 3) typically needs 3 iterations to achieve similar results as 1) and 2) at the expense of a slightly increased computational complexity.
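As background, the rank-1 quadratic-constraint subproblem admits a well-known closed-form projection, shown here inside a generic projection-onto-convex-sets loop; the constraint vectors and thresholds are synthetic, and the sketch is not the paper's LS-MSP algorithm.

```python
import numpy as np

def project_rank1(x, a, gamma):
    """Closed-form projection of x onto the set {y : |a^H y|^2 <= gamma}."""
    inner = np.vdot(a, x)                                  # a^H x
    if abs(inner) ** 2 <= gamma:
        return x
    target = np.sqrt(gamma) * inner / abs(inner)           # keep the phase, shrink the magnitude
    return x + (target - inner) / np.vdot(a, a).real * a

def pocs_precode(x, constraints, iters=50):
    """Cyclic projections onto each rank-1 emission constraint (generic POCS sketch)."""
    y = x.copy()
    for _ in range(iters):
        for a, gamma in constraints:
            y = project_rank1(y, a, gamma)
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=8) + 1j * rng.normal(size=8)           # stand-in for an OFDM data vector
constraints = [(rng.normal(size=8) + 1j * rng.normal(size=8), 0.5) for _ in range(3)]
y = pocs_precode(x, constraints)
print([round(abs(np.vdot(a, y)) ** 2, 3) for a, _ in constraints])
```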
This paper considers a millimeter-wave communication system and proposes an efficient channel estimation scheme with a minimum number of pilots. We model the dynamics of the channel’s second-order statistics by a Markov process and develop a learning framework to obtain these dynamics from an unlabeled set of measured angles of arrival and departure. We then find the optimal precoding and combining vectors for pilot signals. Using these vectors, the transmitter and receiver will sequentially estimate the corresponding angles of departure and arrival, and then refine the pilot precoding and combining vectors to minimize the error of estimating the channel gains.
To realize the Industry 4.0 vision and enable mobile connectivity and flexible deployment in harsh industrial environments, wireless communication is essential. But before wireless communications technology can be widely deployed for critical control applications, it must first be assessed, and that requires a comprehensive characterization of the wireless channel. This can be done by analyzing large amounts of wireless data collected from different industrial environments. In this article, we discuss the possibilities offered by a recently published industrial wireless data set. This data set is more exhaustive than measurements previously reported. We show two cases of how those data have been applied to improve latency performance and to investigate the feasibility of physical-layer security techniques for wireless communication in industrial environments.
Ultra-reliable and low-latency communication (URLLC) is envisaged to support emerging applications with strict latency and reliability requirements. Critical industrial control is among the most important URLLC applications where the stringent requirements make the deployment of wireless networks critical, especially as far as latency is concerned. Since the amount of data exchanged in critical industrial communications is generally small, an effective way to reduce the latency is to minimize the packet’s synchronization overhead, starting from the physical layer (PHY). This paper proposes to use a short one-symbol PHY preamble for critical wireless industrial communications, significantly reducing the transmission latency with respect to other wireless standards. Dedicated packet detection and synchronization algorithms are discussed, analyzed, and tuned to ensure that the required reliability level is achieved with such an extremely short preamble. Theoretical analysis, simulations, and experiments show that detection error rates smaller than 10^-6 can be achieved with the proposed preamble while minimizing the latencies.
We address a generalization of change point detection with the purpose of detecting the change locations and the levels of clusters of a piecewise constant signal. Our approach is to model it as a nonparametric penalized least squares model selection on a family of models indexed over the collection of partitions of the design points and propose a computationally efficient algorithm to approximately solve it. Statistically, minimizing such a penalized criterion yields an approximation to the maximum a-posteriori probability (MAP) estimator. The criterion is then analyzed and an oracle inequality is derived using a Gaussian concentration inequality. The oracle inequality is used to derive, on one hand, conditions for consistency and, on the other hand, an adaptive upper bound on the expected square risk of the estimator, which statistically motivates our approximation. Finally, we apply our algorithm to simulated data to experimentally validate the statistical guarantees and illustrate its behavior.
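For context, a standard penalized least-squares segmentation by dynamic programming is sketched below; it is a generic baseline rather than the partition-indexed model selection criterion or the approximate algorithm proposed in the paper.

```python
import numpy as np

def segment(y, penalty):
    """Penalized least-squares segmentation of a 1-D signal (dynamic-programming sketch)."""
    n = len(y)
    cost = np.full(n + 1, np.inf)
    cost[0], prev = 0.0, np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        for s in range(t):
            seg = y[s:t]
            c = cost[s] + ((seg - seg.mean()) ** 2).sum() + penalty
            if c < cost[t]:
                cost[t], prev[t] = c, s
    # backtrack the change locations
    cps, t = [], n
    while t > 0:
        cps.append(prev[t])
        t = prev[t]
    return sorted(cps)[1:]

# Three noisy constant segments; the change points at 20 and 40 should be recovered.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(20), 3 + np.zeros(20), 1 + np.zeros(20)]) + 0.1 * rng.normal(size=60)
print(segment(y, penalty=2.0))
```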
Full-duplex communications have the potential to almost double the spectral efficiency. To realize such a potentiality, the signal separation at the base station’s antennas plays an essential role. This paper addresses the fundamentals of such separation by proposing a new smart antenna architecture that allows every antenna to be either shared or separated between uplink and downlink transmissions. The benefits of such architecture are investigated by an assignment problem to optimally assign antennas, beamforming and power to maximize the weighted sum spectral efficiency. We propose a near-to-optimal solution using block coordinate descent that divides the problem into assignment problems, which are NP-hard, and beamforming and power allocation problems. The optimal solutions for the beamforming and power allocation are established while near-to-optimal solutions to the assignment problems are derived by semidefinite relaxation. Numerical results indicate that the proposed solution is close to the optimum, and it maintains a similar performance for high and low residual self-interference powers. With respect to the usually assumed antenna separation technique and half-duplex transmission, the sum spectral efficiency gains increase with the number of antennas. We conclude that our proposed smart antenna assignment for signal separation is essential to realize the benefits of multiple antenna full-duplex communications.
Millimeter-wave using large-antenna arrays is a key technological component for the future cellular systems, where it is expected that hybrid beamforming along with quantized phase shifters will be used due to their implementation and cost efficiency. In this paper, we investigate the efficacy of full-duplex mmWave communication with hybrid beamforming using low-resolution phase shifters, without any analog self-interference cancellation. We formulate the problem of joint self-interference suppression and downlink beamforming as a mixed-integer nonconvex joint optimization problem. We propose LowRes, a near-to-optimal solution using penalty dual decomposition. Numerical results indicate that LowRes using low-resolution phase shifters performs within 3% of the optimal solution that uses infinite phase shifter resolution. Moreover, even a single quantization bit outperforms half-duplex transmissions, by 29% and 10% respectively for low and high residual self-interference scenarios, and for a wide range of practical antenna to radio-chain ratios. Thus, we conclude that 1-bit phase shifters suffice for full-duplex millimeter-wave communications, without requiring any additional new analog hardware.
Proximity service (ProSe), which uses geographic location and device information to exploit the proximity of mobile devices, enriches the services we use to interact with people and things around us. ProSe has been used in mobile social networks in proximity and also in smart home and building automation (Google Home). To enable ProSe in smart home, reliable and stable network protocols and communication infrastructures are needed. Thread is a new wireless protocol aiming at smart home and building automation (BA), which supports mesh networks and native Internet protocol connectivity. The latency of Thread should be carefully studied when used in user-friendly and safety-critical ProSe in smart home and BA. In this paper, a system level model of latency in the Thread mesh network is presented. The accumulated latency consists of different kinds of delay from the application layer to the physical layer. A Markov chain model is used to derive the probability distribution of the medium access control service time. The system level model is experimentally validated in a multi-hop Thread mesh network. The outcomes show that the system model results match well with the experimental results. Finally, based on an analytical model, a software tool is developed to estimate the latency of the Thread mesh network, providing developers more network information to develop user-friendly and safety-critical ProSe in smart home and BA.
The underutilized millimeter-wave (mm-wave) band is a promising candidate to enable extremely high data rate communications in future wireless networks. However, the special characteristics of the mm-wave systems such as high vulnerability to obstacles (due to high penetration loss) and to mobility (due to directional communications) demand a careful design of the association between the clients and access points (APs). This challenge can be addressed by distributed association techniques that gracefully adapt to wireless channel variations and client mobilities. We formulate the association problem as a mixed-integer optimization aiming to maximize the network throughput with proportional fairness guarantees. This optimization problem is solved first by a distributed dual decomposition algorithm, and then by a novel distributed auction algorithm where the clients act asynchronously to achieve near-to-optimal association between the clients and APs. The latter algorithm has a faster convergence with a negligible drop in the resulting network throughput. A distinguishing novel feature of the proposed algorithms is that the resulting optimal association does not have to be re-computed every time the network changes (e.g., due to mobility). Instead, the algorithms continuously adapt to the network variations and are thus very efficient. We discuss the implementation of the proposed algorithms on top of existing communication standards. The numerical analysis verifies the ability of the proposed algorithms to optimize the association and to maintain optimality in the time-variant environments of the mm-wave networks.
This paper presents a case study of a fully working prototype of the Sensus smart guitar. Eleven professional guitar players were interviewed after a prototype test session. The smartness of the guitar was perceived as enabling the integration of a range of equipment into a single device, and the proactive exploration of novel expressions. The results draw attention to the musicians’ sense-making of the smart qualities, and to the perceived impact on their artistic practices. The themes highlight how smartness was experienced in relation to the guitar’s agency and the skills it requires, the tension between explicit (e.g. playing a string) and implicit (e.g. keeping rhythm) body movements, and to performing and producing music. Understanding this felt sense of smartness is relevant to how contemporary HCI research conceptualizes mundane artefacts enhanced with smart technologies, and to how such discourse can inform related design issues.
In the last two decades, various monitoring systems have been designed and deployed in urban environments, toward the realization of the so-called smart cities. Such systems are based on both dedicated sensor nodes and ubiquitous but non-dedicated devices such as smart phones and vehicles’ sensors. When we design sensor network monitoring systems for smart cities, we have two essential problems: node deployment and sensing management. These design problems are challenging, due to large urban areas to monitor, constrained locations for deployments, and heterogeneous types of sensing devices. There is a vast body of literature from different disciplines that has addressed these challenges. However, we do not yet have a comprehensive understanding and sound design guidelines. This paper addresses such a research gap and provides an overview of the theoretical problems we face, and what possible approaches we may use to solve these problems. Specifically, this paper focuses on the problems of both the deployment of the devices (which is the system design/configuration part) and the sensing management of the devices (which is the system running part). We also discuss how to choose the existing algorithms in different types of monitoring applications in smart cities, such as structural health monitoring, water pipeline networks, and traffic monitoring. We finally discuss future research opportunities and open challenges for smart city monitoring.
The Internet of Musical Things (IoMusT) is an emerging research field positioned at the intersection of Internet of Things, new interfaces for musical expression, ubiquitous music, human-computer interaction, artificial intelligence, and participatory art. From a computer science perspective, IoMusT refers to the networks of computing devices embedded in physical objects (musical things) dedicated to the production and/or reception of musical content. Musical things, such as smart musical instruments or wearables, are connected by an infrastructure that enables multidirectional communication, both locally and remotely. We present a vision in which the IoMusT enables the connection of digital and physical domains by means of appropriate information and communication technologies, fostering novel musical applications and services. The ecosystems associated with the IoMusT include interoperable devices and services that connect musicians and audiences to support musician-musician, audience-musicians, and audience-audience interactions. In this paper, we first propose a vision for the IoMusT and its motivations. We then discuss five scenarios illustrating how the IoMusT could support: 1) augmented and immersive concert experiences; 2) audience participation; 3) remote rehearsals; 4) music e-learning; and 5) smart studio production. We identify key capabilities missing from today’s systems and discuss the research needed to develop these capabilities across a set of interdisciplinary challenges. These encompass network communication (e.g., ultra-low latency and security), music information research (e.g., artificial intelligence for real-time audio content description and multimodal sensing), music interaction (e.g., distributed performance and music e-learning), as well as legal and responsible innovation aspects to ensure that future IoMusT services are socially desirable and undertaken in the public interest.
Location information for events, assets, and individuals, mostly focusing on two dimensions so far, has triggered a multitude of applications across different verticals, such as consumer, networking, industrial, health care, public safety, and emergency response use cases. To fully exploit the potential of location awareness and enable new advanced location-based services, localization algorithms need to be combined with complementary technologies including accurate height estimation, i.e., three dimensional location, reliable user mobility classification, and efficient indoor mapping solutions. This survey provides a comprehensive review of such enabling technologies. In particular, we present cellular localization systems including recent results on 5G localization, and solutions based on wireless local area networks, highlighting those that are capable of computing 3D location in multi-floor indoor environments. We overview range-free localization schemes, which have been traditionally explored in wireless sensor networks and are nowadays gaining attention for several envisioned Internet of Things applications. We also present user mobility estimation techniques, particularly those applicable in cellular networks, that can improve localization and tracking accuracy. Regarding the mapping of physical space inside buildings for aiding tracking and navigation applications, we study recent advances and focus on smartphone-based indoor simultaneous localization and mapping approaches. The survey concludes with service availability and system scalability considerations, as well as security and privacy concerns in location architectures, discusses the technology roadmap, and identifies future research directions.
Millimeter-wave (mmWave) networks rely on directional transmissions, in both control plane and data plane, to overcome severe path-loss. Nevertheless, the use of narrow beams complicates the initial cell-search procedure where we lack sufficient information for beamforming. In this paper, we investigate the feasibility of random beamforming for cell-search. We develop a stochastic geometry framework to analyze the performance in terms of failure probability and expected latency of cell-search. Meanwhile, we compare our results with the naive, but heavily used, exhaustive search scheme. Numerical results show that, for a given discovery failure probability, random beamforming can substantially reduce the latency of exhaustive search, especially in dense networks. Our work demonstrates that developing complex cell-discovery algorithms may be unnecessary in dense mmWave networks and thus sheds new light on mmWave system design.
In millimeter-wave wireless networks, the use of narrow beams, required to compensate for the severe path-loss, complicates the cell-discovery and initial access. In this paper, we investigate the feasibility of random beamforming and enhanced exhaustive search for cell-discovery by analyzing the latency and detection failure probability in the control-plane and the user throughput in the data-plane. We show that, under a realistic propagation model and antenna patterns, both approaches are suitable for 3GPP New Radio cellular networks. The performance gain, compared to the heavily used exhaustive and iterative search schemes, is more prominent in dense networks and large antenna regimes and can be further improved by optimizing the beamforming code-books.
While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latency in the order of milliseconds or submilliseconds. However, these new stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that contribute to the layers’ design and fundamental performance limitations. We will be able to develop low-latency networks only if we address the problem of these complex interactions from the new point of view of submilliseconds latency. In this paper, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent tradeoffs between low latency and traditional performance metrics. We also review currently ongoing standardization activities in prominent standards associations, and discuss open problems for future research.
This note is concerned with the consensus of linear multiagent systems under tactile communication. Motivated by the emerging tactile communication technology where extremely low latency has to be supported, a distributed event-triggered communication and control scheme is proposed for the data reduction of each agent. First, an event-triggered data reduction scheme is designed for the communication between neighbors. Under such a communication scheme, a distributed event-triggered output feedback controller is further implemented for each agent, which is updated asynchronously with the communication action. It is proven that the consensus of the underlying multiagent systems is achieved asymptotically. Furthermore, it is shown that the proposed communication and control strategy reduces the frequency of both communication and controller updates while excluding Zeno behavior. A numerical example is given to illustrate the effectiveness of the proposed control strategy.
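A toy single-integrator version of an event-triggered communication rule is sketched below; the static threshold and gain are assumptions, whereas the paper treats general linear agents with output feedback and formally excludes Zeno behavior.

```python
import numpy as np

def consensus_step(x, last_broadcast, neighbors, threshold=0.05, gain=0.2):
    """Agents broadcast their state only when it drifts from the last broadcast value."""
    events = np.abs(x - last_broadcast) > threshold
    last_broadcast = np.where(events, x, last_broadcast)      # triggered agents transmit
    # every agent updates using the latest *broadcast* neighbor values
    u = np.array([gain * sum(last_broadcast[j] - last_broadcast[i] for j in neighbors[i])
                  for i in range(len(x))])
    return x + u, last_broadcast, int(events.sum())

x = np.array([1.0, 0.0, -0.5, 2.0])
last = x.copy()
neigh = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # path graph
for _ in range(50):
    x, last, n_events = consensus_step(x, last, neigh)
print(np.round(x, 3), n_events)
```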
To further improve the potential of full-duplex communications, networks may employ multiple antennas at the base station or user equipment. To this end, networks that employ current radios usually deal with self-interference and multi-user interference by beamforming techniques. Although previous works investigated beamforming design to improve spectral efficiency, the fundamental question of how to split the antennas at a base station between uplink and downlink in full-duplex networks has not been investigated rigorously. This paper addresses this question by posing antenna splitting as a binary nonlinear optimization problem to minimize the sum mean squared error of the received data symbols. It is shown that this is an NP-hard problem. This combinatorial problem is dealt with by equivalent formulations, iterative convex approximations, and a binary relaxation. The proposed algorithm is guaranteed to converge to a stationary solution of the relaxed problem with much smaller complexity than exhaustive search. Numerical results indicate that the proposed solution is close to the optimal in scenarios with both high and low self-interference cancellation capability, while the usually assumed antenna splitting is far from optimal. For a large number of antennas, a simple antenna splitting is close to the proposed solution. This reveals that the importance of antenna splitting diminishes with the number of antennas.
In a typical wirelessly powered sensor network (WPSN), wireless chargers provide energy to sensor nodes by using wireless energy transfer (WET). The chargers can greatly improve the lifetime of a WPSN by energy beamforming with a proper charging schedule of the energy beams. However, the supplied energy still may not meet the energy demand of the sensor nodes. This issue can be alleviated by deploying redundant sensor nodes, which not only increase the total harvested energy, but also decrease the per-node energy consumption, provided that an efficient sleep/awake scheduling of the nodes is performed. Such a problem of joint optimal sensor deployment, WET scheduling, and node activation is posed and investigated in this paper. The problem is an integer optimization that is challenging due to the binary decision variables and non-linear constraints. Based on an analysis of the necessary condition for the WPSN to be immortal, we decouple the original problem into a node deployment problem and a charging and activation scheduling problem. Then, we propose an algorithm and prove that it achieves the optimal solution under a mild condition. The simulation results show that the proposed algorithm reduces the number of nodes to deploy by approximately 16%, compared to a random-based approach. The simulations also show that, if the battery buffers are large enough, the optimality condition is easy to meet.
Dual decomposition methods are among the most prominent approaches for finding primal/dual saddle point solutions of resource allocation optimization problems. To deploy these methods in the emerging Internet of things networks, which will often have limited data rates, it is important to understand the communication overhead they require. Motivated by this, we introduce and explore two measures of communication complexity of dual decomposition methods to identify the most efficient communication among these algorithms. The first measure is epsilon-complexity, which quantifies the minimal number of bits needed to find an epsilon-accurate solution. The second measure is b-complexity, which quantifies the best possible solution accuracy that can be achieved from communicating b bits. We find the exact epsilon- and b-complexity of a class of resource allocation problems where a single supplier allocates resources to multiple users. For both the primal and dual problems, the epsilon-complexity grows proportionally to log_2(1/epsilon) and the b-complexity proportionally to 1/2^b. We also introduce a variant of the epsilon- and b-complexity measures where only algorithms that ensure primal feasibility of the iterates are allowed. Such algorithms are often desirable because overuse of the resources can overload the respective systems, e.g., by causing blackouts in power systems. We provide upper and lower bounds on the convergence rate of these primal feasible complexity measures. In particular, we show that the b-complexity cannot converge at a faster rate than O(1/b). Therefore, the results demonstrate a tradeoff between fast convergence and primal feasibility. We illustrate the results by numerical studies.
The lifetime of a wireless sensor network (WSN) determines how long the network can be used to monitor the area of interest. Hence, it is one of the most important performance metrics for a WSN. The approaches used to prolong the lifetime can be broadly divided into two categories: reducing the energy consumption, such as designing an efficient routing, and providing extra energy, such as using wireless energy transfer (WET) to charge the nodes. Contrary to the previous line of work where only one of those two aspects is considered, we investigate the two together. In particular, we consider a scenario where dedicated wireless chargers transfer energy wirelessly to sensors. The overall goal is to maximize the minimum sampling rate of the nodes while keeping the energy consumption of each node smaller than the energy it receives. This is done by properly designing the routing of the sensors and the WET strategy of the chargers. Although such a joint routing and energy beamforming problem is non-convex, we show that it can be transformed into a semi-definite optimization problem (SDP). We then prove that the strong duality of the SDP problem holds, and hence the optimal solution of the SDP problem is attained. Accordingly, the optimal solution for the original problem is achieved by a simple transformation. We also propose a low-complexity approach based on pre-determined beamforming directions. Moreover, based on the alternating direction method of multipliers (ADMM), distributed implementations of the proposed approaches are studied. The simulation results illustrate the significant performance improvement achieved by the proposed methods. In particular, the proposed energy beamforming scheme significantly outperforms schemes that do not use energy beamforming or do not use optimized routing. A thorough investigation of the effect of system parameters, including the number of antennas, the number of nodes, and the number of chargers, on the system performance is provided. The promising convergence behaviour of the proposed distributed approaches is illustrated.
Wirelessly powered sensor networks (WPSNs) are becoming increasingly important to monitor many internet-of-things systems. In these WPSNs, dedicated base stations (BSs) with multiple antennas charge the sensor nodes without the need of replacing their batteries, thanks to two essential procedures: i) acquiring the channel state information of the nodes by sending pilots and, based on this, ii) performing energy beamforming to transmit energy to the nodes. However, the BSs have a limited power budget, and thus these two procedures are not independent, contrary to what is assumed in some previous studies. In this paper, we investigate the novel problem of how to optimally allocate the power for channel estimation and energy transmission. Although the problem is non-convex, we provide a new solution approach and a performance analysis in terms of optimality and complexity. We also provide a closed-form solution for the case where the channels are estimated based on a least squares estimation. The simulations show a gain of approximately 10% in allocating the power optimally, and the importance of improving the channel estimation efficiency.
A novel model-based dynamic distributed state estimator is proposed using sensor networks. The estimator consists of a filtering step – which uses a weighted combination of information provided by the sensors – and a model-based predictor of the system’s state. The filtering weights and the model-based prediction parameters jointly minimize – at each time-step – the bias and the variance of the prediction error in a Pareto optimization framework. The simultaneous distributed design of the filtering weights and of the model-based prediction parameters is considered, differently from what is normally done in the literature. It is assumed that the weights of the filtering step are in general unequal for the different state components, unlike existing consensus-based approaches. The state, the measurements, and the noise components are allowed to be individually correlated, but no probability distribution knowledge is assumed for the noise variables. Each sensor can measure only a subset of the state variables. The convergence properties of the mean and of the variance of the prediction error are demonstrated, and they hold both for the global and the local estimation errors at any network node. Simulation results illustrate the performance of the proposed method, obtaining better results than state-of-the-art distributed estimation approaches.
This paper proposes an efficient channel estimation scheme with a minimum number of pilots for a frequency-selective millimeter-wave communication system. We model the dynamics of the channel’s second-order statistics by a Markov process and develop a learning framework that finds the optimal precoding and combining vectors for pilot signals, given the channel dynamics. Using these vectors, the transmitter and receiver will sequentially estimate the corresponding angles of departure and arrival, and then refine the pilot precoding and combining vectors to minimize the error of estimating the small-scale fading of all subcarriers. Numerical results demonstrate near-optimality of our approach, compared to the oracle wherein the second-order statistics (not the dynamics) are perfectly known a priori.
This paper investigates an event-triggered output feedback control strategy of linear systems under tactile communication, for which two different frameworks are considered. Motivated by the emerging tactile communications technology where latencies are very small but at the price of limited message sizes, a perception-based deadband principle is proposed for the data reduction of communication. In each framework, under an assumption that the deadband factor is upper bounded with respect to the system model, it is proven that global asymptotic stability of the closed loop system is achieved. Then, an event-triggered output feedback controller under tactile communication is further introduced. It is shown that the designed controller is capable of reducing the frequency of controller updates as well as excluding Zeno behavior. Numerical examples are given to illustrate the effectiveness of the proposed control algorithm.
Industry 4.0 is the emerging trend of industrial automation. Millimeter-wave (mmWave) communication is a prominent technology for wireless networks to support the Industry 4.0 requirements. The availability of tractable accurate interference models would greatly facilitate performance analysis and protocol development for these networks. In this paper, we investigate the accuracy of an interference model that assumes impenetrable obstacles and neglects the sidelobes. We quantify the error of such a model in terms of the statistical distribution of the signal to noise plus interference ratio and of the user rate for outdoor mmWave networks under different carrier frequencies and antenna array settings. The results show that assuming impenetrable obstacles comes at almost no accuracy penalty, and the accuracy of neglecting antenna sidelobes can be guaranteed with a sufficiently large number of antenna elements. The comprehensive discussions of this paper provide useful insights for the performance analysis and protocol design of outdoor mmWave networks.
As electric power system operators shift from conventional energy to renewable energy sources, power distribution systems will experience increasing fluctuations in supply. These fluctuations present the need to not only design online decentralized power allocation algorithms, but also characterize how effective they are given fast-changing consumer demand and generation. In this paper, we present an online decentralized dual descent (OD3) power allocation algorithm and determine (in the worst case) how much of observed social welfare can be explained by fluctuations in generation capacity and consumer demand. Convergence properties and performance guarantees of the OD3 algorithm are analyzed by characterizing the difference between the online decision and the optimal decision. We demonstrate validity and accuracy of the theoretical results in the paper through numerical experiments using real power generation data.
Distributed optimization increasingly plays a central role in economical and sustainable operation of cyber-physical systems. Nevertheless, the complete potential of the technology has not yet been fully exploited in practice due to communication limitations posed by the real-world infrastructures. This work investigates fundamental properties of distributed optimization based on gradient methods, where gradient information is communicated using a limited number of bits. In particular, a general class of quantized gradient methods is studied where the gradient direction is approximated by a finite quantization set. Sufficient and necessary conditions are provided on such a quantization set to guarantee that the methods minimize any convex objective function with Lipschitz continuous gradient and a nonempty and bounded set of optimizers. A lower bound on the cardinality of the quantization set is provided, along with specific examples of minimal quantizations. Convergence rate results are established that connect the fineness of the quantization and the number of iterations needed to reach a predefined solution accuracy. Generalizations of the results to a relevant class of constrained problems using projections are considered. Finally, the results are illustrated by simulations of practical systems.
In wireless communication networks, interference models are routinely used for tasks such as performance analysis, optimization, and protocol design. These tasks are heavily affected by the accuracy and tractability of the interference models. Yet, quantifying the accuracy of these models remains a major challenge. In this paper, we propose a new index for assessing the accuracy of any interference model under any network scenario. Specifically, the index quantifies the ability of any interference model in correctly predicting harmful interference events, that is, link outages. We consider specific wireless scenarios of both conventional sub-6 GHz and millimeter-wave networks and demonstrate how our index yields insights into the possibility of simplifying the set of dominant interferers, replacing a Nakagami or Rayleigh random fading by an equivalent deterministic channel, and ignoring antenna sidelobes. Our analysis reveals that in highly directional antenna settings with obstructions, even simple interference models (such as the classical protocol model) are accurate, while with omnidirectional antennas, more sophisticated and complex interference models (such as the classical physical model) are necessary. Our new approach makes it possible to adopt the simplest interference model of adequate accuracy for every wireless network.
Although the benefits of precoding and combining of data streams are widely recognized, the potential of precoding the pilot signals at the user equipment (UE) side and combining them at the base station (BS) side has not received adequate attention. This paper considers a multiuser multiple input multiple output (MU-MIMO) cellular system in which the BS acquires channel state information (CSI) by means of uplink pilot signals and proposes pilot precoding and combining to improve the CSI quality. We first evaluate the channel estimation performance of a baseline scenario in which CSI is acquired with no pilot precoding. Next, we characterize the channel estimation error when the pilot signals are precoded by spatial filters that asymptotically maximize the channel estimation quality. Finally, we study the case when, in addition to pilot precoding at the UE side, the BS utilizes the second-order statistics of the channels to further improve the channel estimation performance. The analytical and numerical results show that, especially in scenarios with a large number of antennas at the BS and the UEs, pilot precoding and combining have great potential to improve the channel estimation quality in MU-MIMO systems.
This paper presents some of the possibilities for interaction between performers, audiences, and their smart devices, offered by the novel family of musical instruments, the Smart Instruments. For this purpose, some implemented use cases are described, which involved a preliminary prototype of MIND Music Labs’ Sensus Smart Guitar, the first exemplar of Smart Instrument. Sensus consists of a guitar augmented with sensors, actuators, onboard processing, and wireless communication. Some of the novel interactions enabled by Sensus technology are presented, which are based on connectivity of the instrument to smart devices, virtual reality headsets, and the cloud.
Contemporary Content Delivery Networks (CDN) handle a vast number of content items. At such a scale, the replication schemes require a significant amount of time to calculate and realize cache updates, and hence they are impractical in highly-dynamic environments. This paper introduces cluster-based replication, whereby content items are organized in clusters according to a set of features, given by the cache/network management entity. Each cluster is treated as a single item with certain attributes, e.g., size, popularity, etc., and it is then altogether replicated in network caches so as to minimize overall network traffic. Clustering items reduces replication complexity; hence it enables faster and more frequent cache updates, and it facilitates more accurate tracking of content popularity. However, clustering introduces some performance loss, because the replication of clusters is more coarse-grained compared to the replication of individual items. This tradeoff can be addressed through a proper selection of the number and composition of clusters. Since the exact optimal number of clusters cannot be derived analytically, an efficient approximation method is proposed. Extensive numerical evaluations under time-varying content popularity scenarios show that the proposed approach reduces core network traffic, while being robust to errors in popularity estimation.
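To make the cluster-based replication idea tangible, the sketch below (Python; the items, feature labels, and greedy rule are illustrative assumptions, not the paper's algorithm) groups items into clusters by a feature, treats each cluster as one item with aggregated size and popularity, and fills a cache greedily by popularity density.

```python
from collections import defaultdict

# Illustrative only: items carry a feature label, a size, and a popularity
# estimate.  Clusters are treated as single items and replicated greedily by
# popularity density until the cache is full.
items = [
    {"id": 1, "feature": "news",  "size": 2, "popularity": 9.0},
    {"id": 2, "feature": "news",  "size": 1, "popularity": 4.0},
    {"id": 3, "feature": "video", "size": 5, "popularity": 7.0},
    {"id": 4, "feature": "video", "size": 4, "popularity": 2.0},
    {"id": 5, "feature": "music", "size": 2, "popularity": 6.0},
]

clusters = defaultdict(lambda: {"size": 0, "popularity": 0.0, "ids": []})
for it in items:                                   # build one cluster per feature
    c = clusters[it["feature"]]
    c["size"] += it["size"]
    c["popularity"] += it["popularity"]
    c["ids"].append(it["id"])

cache_capacity, used, cached = 8, 0, []
for name, c in sorted(clusters.items(),            # greedy by popularity density
                      key=lambda kv: kv[1]["popularity"] / kv[1]["size"],
                      reverse=True):
    if used + c["size"] <= cache_capacity:
        cached.append(name)
        used += c["size"]

print("replicated clusters:", cached, "| cache used:", used)
```

Because whole clusters are replicated, the decision space shrinks from the number of items to the number of clusters, which is the source of the speed-up versus granularity tradeoff discussed above.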
In electrical distribution grids, the constantly increasing number of power generation devices based on renewables demands a transition from a centralized to a distributed generation paradigm. In fact, power injection from distributed energy resources (DERs) can be selectively controlled to achieve other objectives beyond supporting loads, such as the minimization of the power losses along the distribution lines and the subsequent increase of the grid hosting capacity. However, these technical achievements are only possible if, alongside electrical optimization schemes, a suitable market model is set up to promote cooperation from the end users. In contrast with the existing literature, where energy trading and electrical optimization of the grid are often treated separately, or the trading strategy is tailored to a specific electrical optimization objective, in this paper we consider their joint optimization. We also allow for a modular approach, where the market model can support any smart grid optimization goal. Specifically, we present a multi-objective optimization problem accounting for energy trading, where: 1) DERs try to maximize their profit, resulting from selling their surplus energy; 2) the loads try to minimize their expense; and 3) the main power supplier aims at maximizing the electrical grid efficiency through a suitable discount policy. This optimization problem is proved to be non-convex, and an equivalent convex formulation is derived. Centralized solutions are discussed and a procedure to distribute the solution is proposed. Numerical results demonstrate the effectiveness of the obtained optimal policies, showing that the proposed model results in economic benefits for all the users (generators and loads) and in an increased electrical efficiency for the grid.
Efficient handover algorithms are essential for highly performing cellular networks. These algorithms depend on numerous parameters, whose settings must be appropriately optimized to offer seamless connectivity. Nevertheless, such an optimization is difficult in a time-varying context, unless adaptive strategies are used. In this paper, a new approach for handover optimization is proposed. Three dynamical optimization approaches are presented, where the probability of outage and the probability of handover are considered. Since these probabilities are shown to be difficult to compute, simple approximations of adequate accuracy are developed. A distributed optimization algorithm is then developed to maximize handover performance. Numerical results show that the proposed algorithm improves the performance of the handover considerably when compared to more traditional approaches.
In this paper, a distributed method for fault detection using sensor networks is proposed. Each sensor communicates only with neighboring nodes to compute locally an estimate of the state of the system to monitor. A residual is defined and suitable stochastic thresholds are designed, allowing the parameters to be set so as to guarantee a maximum false-alarm probability. The main novelty and challenge of the proposed approach consist in addressing the individual correlations between the state, the measurements, and the noise components, thus significantly generalising the estimation methodology compared to previous results. No assumptions on the probability distribution family are needed for the noise variables. Simulation results show the effectiveness of the proposed method, including an extensive sensitivity analysis with respect to fault magnitude and measurement noise.
Although the benefits of precoding and combining data signals are widely recognized, the potential of these techniques for pilot transmission is not fully understood. This is particularly relevant for multiuser multiple-input multiple-output (MU-MIMO) cellular systems using millimeter-wave (mmWave) communications, where multiple antennas have to be used both at the transmitter and the receiver to overcome the severe path loss. In this paper, we characterize the gains of pilot precoding and combining in terms of channel estimation quality and achievable data rate. Specifically, we consider three uplink pilot transmission scenarios in a mmWave MU-MIMO cellular system: 1) non-precoded and uncombined, 2) precoded but uncombined, and 3) precoded and combined. We show that a simple precoder that utilizes only the second-order statistics of the channel reduces the variance of the channel estimation error by a factor that is proportional to the number of user equipment (UE) antennas. We also show that using a linear combiner design based on the second-order statistics of the channel significantly reduces multiuser interference and provides the possibility of reusing some pilots. Specifically, in the large antenna regime, pilot precoding and combining help to accommodate a large number of UEs in one cell, significantly improve channel estimation quality, boost the signal-to-noise ratio of the UEs located close to the cell edges, alleviate pilot contamination, and address the imbalanced coverage of pilot and data signals.
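A minimal numerical illustration of why second-order channel statistics help is given below (Python/NumPy). It compares plain least-squares estimation with an LMMSE filter built from the channel covariance for a toy single-pilot model y = h + w; the covariance, dimensions, and noise level are assumptions chosen for illustration and do not reproduce the paper's precoder or combiner designs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 8, 0.5                                  # antennas, noise variance

# Illustrative low-rank channel covariance (a few dominant spatial directions).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
R = U @ np.diag([4.0, 2.0, 0.5] + [0.05] * (n - 3)) @ U.T

W = R @ np.linalg.inv(R + sigma2 * np.eye(n))       # LMMSE filter for y = h + w
L = np.linalg.cholesky(R)                           # to draw channels with covariance R

err_ls, err_lmmse = [], []
for _ in range(2000):
    h = L @ rng.standard_normal(n)                  # true channel realization
    y = h + np.sqrt(sigma2) * rng.standard_normal(n)  # one unit-power pilot: y = h + w
    err_ls.append(np.sum((y - h) ** 2))             # least-squares estimate is y itself
    err_lmmse.append(np.sum((W @ y - h) ** 2))

print("LS MSE   :", round(float(np.mean(err_ls)), 3))
print("LMMSE MSE:", round(float(np.mean(err_lmmse)), 3))
```

The gap between the two errors grows as the channel energy concentrates in fewer spatial directions, which is the same effect that makes statistics-aware pilot precoding attractive.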
To increase the spectral efficiency of wireless networks without requiring full-duplex capability of user devices, a potential solution is the recently proposed three-node full-duplex mode. To realize this potential, networks employing three-node full-duplex transmissions must deal with self-interference and user-to-user interference, which can be managed by frequency channel and power allocation techniques. Whereas previous works investigated either spectral-efficient or fair mechanisms, a scheme that balances these two metrics among users is investigated in this paper. This balancing scheme is based on a new solution method for the multi-objective optimization problem of maximizing the weighted sum of the per-user spectral efficiency and the minimum spectral efficiency among users. The mixed integer non-linear nature of this problem is dealt with by Lagrangian duality. Based on the proposed solution approach, a low-complexity centralized algorithm is developed, which relies on large-scale fading measurements that can be advantageously implemented at the base station. Numerical results indicate that the proposed algorithm increases the spectral efficiency and fairness among users without the need of weighting the spectral efficiency. An important conclusion is that managing user-to-user interference by resource assignment and power control is crucial for ensuring spectral-efficient and fair operation of full-duplex networks.
The need of fast distributed solvers for optimization problems in networked systems has motivated the recent development of the Fast-Lipschitz optimization framework. In such an optimization, problems satisfying certain qualifying conditions, such as monotonicity of the objective function and contractivity of the constraints, have a unique optimal solution obtained via fast distributed algorithms that compute the fixed point of the constraints. This paper extends the set of problems for which the Fast-Lipschitz framework applies. Existing assumptions on the problem form are relaxed and new qualifying conditions are established. It is shown for which cases of more constraints than decision variables, and fewer constraints than decision variables, Fast-Lipschitz optimization applies. New results are obtained by imposing non-strict monotonicity of the objective functions. The extended Fast-Lipschitz framework is illustrated by a number of examples, including network optimization and optimal control problems.
For operating electrical power networks, the Optimal Power Flow (OPF) problem plays a central role. The problem is nonconvex and NP-hard. Therefore, designing efficient solution algorithms is crucial, though their global optimality is not guaranteed. Existing semi-definite programming relaxation based approaches are restricted to OPF problems for which the duality gap is zero, whereas for non-convex problems there is a lack of solution methods with provable performance. In this paper, an efficient novel method to address the general nonconvex OPF problem is investigated. The proposed method is based on the alternating direction method of multipliers combined with sequential convex approximations. The global OPF problem is decomposed into smaller problems associated to each bus of the network, the solutions of which are coordinated via a light communication protocol. Therefore, the proposed method is highly scalable. The convergence properties of the proposed algorithm are mathematically and numerically substantiated.
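As a minimal sketch of the decomposition principle (not the paper's OPF formulation), the snippet below runs consensus ADMM on a toy problem in which each "bus" holds a local quadratic cost and coordination amounts to a single averaging step, standing in for the light communication protocol mentioned above.

```python
import numpy as np

# Consensus ADMM on min sum_i 0.5*(x - a_i)^2, solved by N "buses" that only
# exchange their local estimates through a simple coordination step.  The local
# quadratic costs are illustrative stand-ins for convexified per-bus subproblems.
a = np.array([1.0, 3.0, 8.0, 4.0])         # local data of each bus
N, rho = len(a), 1.0
x = np.zeros(N)                            # local copies of the shared variable
z = 0.0                                    # global (consensus) variable
u = np.zeros(N)                            # scaled dual variables

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)  # local update, in parallel per bus
    z = np.mean(x + u)                     # light coordination step
    u = u + x - z                          # dual update

print("ADMM consensus value:", round(z, 6), "| closed form:", a.mean())
```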
Motivated by the specific characteristics of mmWave technologies, we discuss the possibility of an authorization regime that allows spectrum sharing between multiple operators, also referred to as spectrum pooling. In particular, considering user rate as the performance measure, we assess the benefit of coordination among networks of different operators, study the impact of beamforming at both base stations and user terminals, and analyze the pooling performance at different frequency carriers. We also discuss the enabling spectrum mechanisms, architectures, and protocols required to make spectrum pooling work in real networks. Our initial results show that, from a technical perspective, spectrum pooling at mmWave has the potential to use the resources more efficiently than traditional exclusive spectrum allocation to a single operator. However, further studies are needed in order to reach a thorough understanding of this matter, and we hope that this article will help stimulate further research in this area.
The lifetime of wireless sensor networks (WSNs) can be substantially extended by transferring energy wirelessly to the sensor nodes. In this poster, a wireless energy transfer (WET) enabled WSN is presented, where a base station transfers energy wirelessly to the sensor nodes that are deployed in several regions of interest, to supply them with energy to sense and to upload data. The WSN lifetime can be extended by deploying redundant sensor nodes, which allows the implementation of duty-cycling mechanisms to reduce the nodes’ energy consumption. In this context, a sensor node deployment problem naturally arises, where one needs to determine how many sensor nodes to deploy in each region such that the total number of nodes is minimized and the WSN is immortal. The problem is formulated as an integer optimization, whose solution is challenging due to the binary decision variables and a non-linear constraint. A greedy-based algorithm is proposed to achieve the optimal solution of such a deployment problem. It is argued that such a scheme can be used in monitoring systems in smart cities, such as smart buildings and water lines.
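A stylized version of the greedy deployment idea is sketched below (Python). The regions, energy numbers, and the immortality test are invented for illustration and are much simpler than the poster's formulation: nodes are added one at a time to the region whose energy balance is currently worst, until every region's harvested energy covers its sensing workload.

```python
import math  # not strictly needed here; kept for experimenting with ceil-based checks

# Stylized deployment question: how many nodes per region so that harvested
# energy covers the sensing workload?  All numbers are illustrative.
regions = {
    "roomA":  {"demand": 9.0, "harvest_per_node": 2.0},
    "roomB":  {"demand": 4.0, "harvest_per_node": 1.5},
    "water1": {"demand": 6.0, "harvest_per_node": 2.5},
}

deployed = {r: 1 for r in regions}          # start with one node per region

def deficit(region):
    cfg = regions[region]
    return cfg["demand"] - deployed[region] * cfg["harvest_per_node"]

while any(deficit(r) > 0 for r in regions):
    worst = max(regions, key=deficit)       # greedy: help the worst region first
    deployed[worst] += 1

print("nodes per region:", deployed, "| total:", sum(deployed.values()))
```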
Spectrum pooling is not typically used in current cellular networks, because it only provides a slight performance improvement while requiring heavy coordination among different cellular operators. However, these problems can be potentially overcome in millimeter-wave (mmWave) networks, thanks to the use of beamforming both at base stations and at user equipment. In this paper, we develop a joint beamforming and cell association optimization problem to characterize the performance gain that can be obtained when spectrum pooling is used, as a function of the underlying beamforming and coordination strategies. Our performance analysis reveals that beamforming can substantially reduce the need for coordination and simplify the implementation of spectrum pooling. These benefits are more prominent at higher mmWave frequencies (for example, 73 GHz) due to the possibility of having antenna arrays with more elements within the radome. The results of this paper provide useful insights on the feasibility of spectrum pooling in mmWave networks.
Special characteristics of millimeter wave (mmWave) systems, such as high vulnerability to random obstacles (due to high penetration loss) and to mobility (due to directional communications), demand redesigning the existing algorithms for the association between clients and access points. In this paper, we propose a novel dynamic association scheme, based on the distributed auction algorithm, that is robust to variations of the mmWave wireless channel and to client mobility. In particular, the resulting optimal association solution does not have to be re-computed every time the network changes (e.g., due to mobility). Instead, the algorithm continuously adapts to the network variations and is thus very efficient. Numerical analysis verifies the ability of the proposed algorithm to optimize the association and to maintain optimality in dynamic environments of mmWave networks.
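For readers unfamiliar with auction-based assignment, the snippet below shows a compact Bertsekas-style auction for a one-to-one client/access-point assignment (Python; the utility matrix and epsilon are illustrative). The dynamic, asynchronous scheme described above adds machinery on top of this basic bidding mechanism.

```python
import numpy as np

def auction_assignment(utility, eps=0.01):
    """Bertsekas-style auction for a square assignment problem.
    utility[i, j]: value of assigning client i to access point j."""
    n = utility.shape[0]
    prices = np.zeros(n)
    owner = [-1] * n                       # owner[j] = client currently holding AP j
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop(0)
        values = utility[i] - prices       # net value of each AP for client i
        j = int(np.argmax(values))
        best, second = np.partition(values, -2)[-2:][::-1]
        prices[j] += best - second + eps   # bid up the price of the best AP
        if owner[j] != -1:
            unassigned.append(owner[j])    # previous owner is outbid
        owner[j] = i
    return owner, prices

util = np.array([[10.0, 2.0, 4.0],
                 [ 8.0, 9.0, 3.0],
                 [ 7.0, 6.0, 8.0]])
owner, prices = auction_assignment(util)
print("AP -> client:", owner, "| prices:", np.round(prices, 2))
```

Because bids only raise prices locally, the procedure lends itself to distributed and asynchronous operation, which is what makes it attractive when the network changes due to mobility.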
Industry 4.0 is the emerging trend of industrial automation. Millimeter-wave (mmWave) communication is a prominent technology for wireless networks to support the Industry 4.0 implementation. The availability of tractable accurate interference models would greatly facilitate the design of these networks. In this paper, we investigate the accuracy of an interference model that assumes impenetrable obstacles and neglects the sidelobes. We quantify the error of such a model in terms of the statistical distribution of the signal to noise plus interference ratio for outdoor mmWave networks under different antenna array settings. The results show that assuming impenetrable obstacles comes at almost no accuracy penalty, and the accuracy of neglecting antenna sidelobes can be guaranteed with a sufficiently large number of antenna elements.
This paper investigates the extent to which spectrum sharing in millimeter-wave (mmWave) networks with multiple cellular operators is a viable alternative to traditional dedicated spectrum allocation. Specifically, we develop a general mathematical framework to characterize the performance gain that can be obtained when spectrum sharing is used, as a function of the underlying beamforming, operator coordination, bandwidth, and infrastructure sharing scenarios. The framework is based on joint beamforming and cell association optimization, with the objective of maximizing the long-term throughput of the users. Our asymptotic and non-asymptotic performance analyses reveal five key points: 1) spectrum sharing with light on-demand intra- and inter-operator coordination is feasible, especially at higher mmWave frequencies (for example, 73 GHz); 2) directional communications at the user equipment substantially alleviate the potential disadvantages of spectrum sharing (such as higher multiuser interference); 3) large numbers of antenna elements can reduce the need for coordination and simplify the implementation of spectrum sharing; 4) while inter-operator coordination can be neglected in the large-antenna regime, intra-operator coordination can still bring gains by balancing the network load; and 5) critical control signals among base stations, operators, and user equipment should be protected from the adverse effects of spectrum sharing, for example by means of exclusive resource allocation. The results of this paper, and their extensions obtained by relaxing some ideal assumptions, can provide important insights for future standardization and spectrum policy.
Ongoing IEC 61850 standardization activities aim at improved grid reliability through advanced monitoring and remote control services in medium- and low-voltage grids. However, extending energy automation beyond the substation boundaries introduces the need for timely and reliable information exchange over wide areas. LTE appears as a promising solution since it supports extensive coverage, low latency, high throughput and Quality-of-Service (QoS) differentiation. In this paper, the feasibility of implementing IEC 61850 services over public LTE infrastructure is investigated. Since standard LTE cannot meet the stringent latency requirements of such services, a new LTE QoS class is introduced along with a new LTE scheduler that prioritizes automation traffic with respect to background human-centric traffic. Two representative grid automation services are considered, a centralized one (MMS) and a distributed one (GOOSE), and the achievable latency/throughput performance is evaluated on a radio system simulator platform. Simulations of realistic overload scenarios demonstrate that properly designed LTE schedulers can successfully meet the performance requirements of IEC 61850 services with negligible impact on background traffic.
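The prioritization principle can be illustrated with a toy per-TTI scheduler (Python): flows of the grid-automation class are allocated resource blocks strictly before background human-centric flows. Flow names, sizes, and the number of resource blocks are assumptions for illustration only, not the scheduler evaluated in the paper.

```python
# Toy per-TTI scheduler: grid-automation (IEC 61850-style) flows are served
# strictly before background human-centric traffic.  All numbers are illustrative.
FLOWS = [
    {"name": "GOOSE-1", "cls": "automation", "rb_needed": 2},
    {"name": "MMS-1",   "cls": "automation", "rb_needed": 3},
    {"name": "web-1",   "cls": "background", "rb_needed": 3},
    {"name": "video-1", "cls": "background", "rb_needed": 6},
]

def schedule_tti(flows, rb_per_tti=8):
    """Allocate the resource blocks of one TTI, automation class first."""
    order = sorted(flows, key=lambda f: f["cls"] != "automation")
    left, served = rb_per_tti, []
    for f in order:
        if f["rb_needed"] <= left:
            served.append(f["name"])
            left -= f["rb_needed"]
    return served, left

served, left = schedule_tti(FLOWS)
print("served this TTI:", served, "| unused RBs:", left)
```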
In reactive cognitive networks, the channel access and the transmission decisions of the cognitive terminals have a long-term effect on the network dynamics. When multiple cognitive terminals coexist, the optimization and implementation of their strategy is challenging and may require considerable coordination overhead. In this paper, such a challenge is addressed by a novel framework for the distributed optimization of transmission and channel access strategies. The objective of the cognitive terminals is to find the optimal action distribution depending on the current network state. To reduce the coordination overhead, in the proposed framework the cognitive terminals distributively coordinate the policy, whereas the action in each individual time slot is independently selected by the terminals. The optimization of the transmission and channel access strategy is performed iteratively by using the alternate convex optimization technique, where at each iteration a cognitive terminal is selected to optimize its own action distribution while assuming those of the other cognitive terminals to be fixed. For a traditional primary-secondary user network configuration, numerical results show that the proposed algorithm converges to a stable solution in a small number of iterations, with only a limited performance loss with respect to the perfectly coordinated case.
Millimeter wave (mmWave) communications systems are promising candidates to support extremely high data rate services in future wireless networks. MmWave communications exhibit high penetration loss (blockage) and require directional transmissions to compensate for severe channel attenuations and for high noise powers. When blockage occurs, there are at least two simple prominent options: 1) switching to the conventional microwave frequencies (fallback option) and 2) using an alternative non-blocked path (relay option). However, it is currently not clear under which conditions and network parameters one option is better than the other. To investigate the performance of the two options, this paper proposes a novel blockage model that allows deriving the maximum achievable throughput and the delay performance of both options. A simple criterion to decide which option should be taken under which network condition is provided. A comprehensive performance analysis shows that the right option depends on the payload size, beam training overhead, and blockage probability. For a network with light traffic and a low probability of blockage in the direct link, the fallback option is throughput- and delay-optimal. For a network with heavy traffic demands and a semistatic topology (low beam-training overhead), the relay option is preferable.
In wireless network resource allocation, radio power control problems are often solved by fixed point algorithms. Although these algorithms give feasible problem solutions, such solutions often lack a notion of problem optimality. This paper reconsiders well-known fixed-point algorithms, such as those with standard and type-II standard interference functions, and investigates the conditions under which they give optimal solutions. The optimality is established by the recently proposed Fast-Lipschitz optimization framework. To apply such a framework, the analysis is performed by a logarithmic transformation of variables that gives tractable Fast-Lipschitz problems. It is shown how the logarithmic problem constraints are contractive by the standard or type-II standard assumptions on the power control problem, and how sets of cost functions fulfill the Fast-Lipschitz qualifying conditions. The analysis of nonmonotonic interference functions allows establishing a new qualifying condition for Fast-Lipschitz optimization. The results are illustrated by considering power control problems with standard interference functions, problems with type-II standard interference functions, and a case of subhomogeneous power control problems. Given the generality of Fast-Lipschitz optimization compared to traditional methods for resource allocation, it is concluded that such an optimization may help to determine the optimality of many resource allocation problems in wireless networks.
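The kind of fixed-point power-control iteration discussed above can be sketched in a few lines (Python/NumPy): each link repeatedly rescales its power to meet an SINR target given the current interference, which is the classical iteration with a standard interference function. The channel gains, SINR targets, and noise power below are illustrative.

```python
import numpy as np

# Classical fixed-point power control p <- I(p) with a standard interference
# function: each link scales its power to hit its SINR target given the current
# interference (all numbers are illustrative and chosen to be feasible).
G = np.array([[1.0, 0.1, 0.2],
              [0.2, 1.0, 0.1],
              [0.1, 0.3, 1.0]])            # G[i, j]: gain from transmitter j to receiver i
gamma = np.array([1.0, 1.5, 1.0])          # per-link SINR targets
noise = 0.1

p = np.ones(3)
for _ in range(200):
    interference = G @ p - np.diag(G) * p  # received power from all other links
    p = gamma * (noise + interference) / np.diag(G)

sinr = np.diag(G) * p / (noise + G @ p - np.diag(G) * p)
print("powers:", np.round(p, 4), "| achieved SINR:", np.round(sinr, 3))
```

When the targets are feasible, the iteration converges to the unique fixed point, which is exactly the situation in which the optimality questions raised above become meaningful.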
This paper investigates resource allocation algorithms that use limited communication - where the supplier of a resource broadcasts a coordinating signal using one bit of information to the users per iteration. Rather than relaying anticipated consumption to the supplier, the users locally compute their allocation, while the supplier measures the total resource consumption. Since the users do not compare their local consumption against the supplier’s capacity at each iteration, they can easily overload the system and cause an outage (for example, a blackout in power networks). To address this challenge, this paper investigates pragmatic coding schemes, called PF-codes (Primal-Feasible codes), that not only allow the restriction of communication to a single bit of information, but also avoid system overload due to users’ heavy consumption. We derive a worst-case lower bound on the number of bits needed to achieve any desired accuracy using PF-codes. In addition, we demonstrate how to construct time-invariant and time-varying PF-codes. We provide an upper bound on the number of bits needed to achieve any desired solution accuracy using time-invariant PF-codes. Remarkably, the difference between the upper and lower bound is only 2 bits. It is proved that the time-varying PF-codes asymptotically converge to the true primal/dual optimal solution. Simulations demonstrating the accuracy of our theoretical analyses are presented.
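A rough feel for one-bit-per-iteration coordination is given by the bisection sketch below (Python): the supplier broadcasts a single bit indicating whether the measured total demand exceeds capacity, and the users bisect a common price estimate, so the accuracy improves geometrically in the number of communicated bits. The utilities and capacity are invented, and this plain bisection is not the paper's PF-code construction; in particular, it does not guarantee that every intermediate iterate stays primal feasible.

```python
import numpy as np

a = np.array([4.0, 6.0, 10.0])              # users' utility parameters: u_i(x) = a_i*x - x^2/2
capacity = 9.0

def demand(price):                           # each user maximizes u_i(x) - price*x locally
    return np.clip(a - price, 0.0, None)

lo, hi = 0.0, a.max()                        # known bounds on the clearing price
for _ in range(16):                          # 16 one-bit broadcasts
    price = 0.5 * (lo + hi)
    overload_bit = demand(price).sum() > capacity   # the single broadcast bit
    lo, hi = (price, hi) if overload_bit else (lo, price)

price = 0.5 * (lo + hi)
print("price after 16 bits:", round(price, 5),
      "| total demand:", round(float(demand(price).sum()), 5))
```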
Three-node full-duplex is a promising new transmission mode between a full-duplex capable wireless node and two other wireless nodes that use half-duplex transmission and reception, respectively. Although three-node full-duplex transmissions can increase the spectral efficiency without requiring full-duplex capability of user devices, inter-node interference - in addition to the inherent self-interference - can severely degrade the performance. Therefore, as methods that provide effective self-interference mitigation evolve, the management of inter-node interference is becoming increasingly important. This paper considers a cellular system in which a full-duplex capable base station serves a set of half-duplex capable users. As the spectral efficiencies achieved by the uplink and downlink transmissions are inherently intertwined, the objective is to devise channel assignment and power control algorithms that maximize the weighted sum rate of the uplink and downlink transmissions. To this end, a distributed auction-based channel assignment algorithm is proposed, in which the scheduled uplink users and the base station jointly determine the set of downlink users for full-duplex transmission. Realistic system simulations indicate that the spectral efficiency can be up to 89% better than using the traditional half-duplex mode. Furthermore, when the self-interference cancelling level is high, the impact of the user-to-user interference is severe unless properly managed.
In Wireless Sensor Networks (WSNs), wireless energy transfer (WET) is a promising technique to supply energy to the sensor nodes. One of the most efficient procedures to transfer energy to the sensor nodes consists in using a sharp wireless energy beam from the base station to one node at a time. A natural and fundamental question is what lifetime WET can ensure and how to maximize the network lifetime by scheduling the transmissions of the energy beams. In this paper, such a question is addressed by posing a new lifetime maximization problem for WET enabled WSNs. The binary nature of the energy transmission process introduces a binary constraint in the optimization problem, which makes it challenging to investigate the fundamental properties of WET and to compute the optimal solution. A sufficient condition under which WET makes the WSN immortal is established as a function of the WET parameters. When such a condition is not met, a solution algorithm for the maximum lifetime problem is proposed. The numerical results show that the lifetime achieved by the proposed algorithm increases by about 50% compared to the case without WET, for a WSN with a small to medium number of nodes. This suggests that it is desirable to schedule WET to prolong the lifetime of WSNs with small or medium network sizes.
We study the repair problem in distributed storage systems where storage nodes are connected through packet erasure channels and some nodes are dedicated to repair [termed as dedicated-for-repair (DR) storage nodes]. We first investigate the minimum required repair-bandwidth in an asymptotic setup, in which the stored file is assumed to have an infinite size. The result shows that the asymptotic repair-bandwidth over packet erasure channels with a fixed erasure probability has a closed-form relation to the repair-bandwidth in lossless networks. Next, we show the benefits of DR storage nodes in reducing the repair bandwidth, and then we derive the necessary minimal storage space of DR storage nodes. Finally, we study the repair in a nonasymptotic setup, where the stored file size is finite. We study the minimum practical-repair-bandwidth, i.e., the repair-bandwidth for achieving a given probability of successful repair. A combinatorial optimization problem is formulated to provide the optimal practical-repair-bandwidth for a given packet erasure probability. We show the gain of our proposed approaches in reducing the repair-bandwidth.
This chapter is concerned with the networked distributed estimation problem. A set of agents (observers) is assumed to be estimating the state of a large-scale process. Each of them must provide a reliable estimate of the state of the plant, but each has access only to some of the plant outputs. Local observability is not assumed, so the agents need to communicate and collaborate to obtain their estimates. This chapter proposes a structure for the observers, which merges local Luenberger-like estimators with consensus matrices.
Many physical systems, such as water/electricity distribution networks, are monitored by battery-powered wireless-sensor networks (WSNs). Since battery replacement of sensor nodes is generally difficult, long-term monitoring can only be achieved if the operation of the WSN nodes contributes to a long WSN lifetime. Two prominent techniques for prolonging WSN lifetime are 1) optimal sensor activation and 2) efficient data gathering and forwarding based on compressive sensing. These techniques are feasible only if the activated sensor nodes establish a connected communication network (connectivity constraint) and satisfy a compressive sensing decoding constraint (cardinality constraint). These two constraints make the problem of maximizing network lifetime via sensor node activation and compressive sensing NP-hard. To overcome this difficulty, an alternative approach that iteratively solves energy balancing problems is proposed. However, understanding whether maximizing network lifetime and energy balancing are aligned objectives is a fundamental open issue. The analysis reveals that the two optimization problems give different solutions, but the difference between the lifetime achieved by the energy balancing approach and the maximum lifetime is small when the initial energy at the sensor nodes is significantly larger than the energy consumed for a single transmission. The lifetime achieved by energy balancing is asymptotically optimal, and the achievable network lifetime is at least 50% of the optimum. Analysis and numerical simulations quantify the efficiency of the proposed energy balancing approach.
As independent service providers increasingly inject power (from renewable sources like wind) into the power distribution system, the power distribution system will likely experience increasingly significant fluctuations in power supply. Fluctuations in power generation, coupled with time-varying consumption of electricity on the demand side and the massive scale of power distribution networks, present the need to not only design decentralized power allocation policies, but also understand how robust they are to dynamic demand and supply. In this paper, via an Online Decentralized Dual Descent (OD3) algorithm, with communication for decentralized coordination, we design power allocation policies in a power distribution system. Based on the OD3 algorithm, we determine and characterize (in the worst case) how much of the observed social welfare and price volatility can be explained by fluctuations in consumption utilities of users and capacities of suppliers. In coordinating the power allocation, the OD3 algorithm uses a protocol in which the users’ consumption at each time-step depends on the coordinating (price) signal, which is iteratively updated based on aggregate power consumption. Convergence properties and performance guarantees of the OD3 algorithm are analyzed by characterizing the difference between the online decision and the optimal decision. As more renewable energy sources are integrated into the power grid, the results in this paper provide a framework to understand how volatility in the power systems propagates to markets. The theoretical results in the paper are validated and illustrated by numerical experiments.
Typical coordination schemes for future power grids require two-way communications. Since the number of end power-consuming devices is large, the bandwidth requirements for such two-way communication schemes may be prohibitive. Motivated by this observation, we study distributed coordination schemes that require only one-way limited communications. In particular, we investigate how the dual descent distributed optimization algorithm can be employed in power networks using one-way communication. In this iterative algorithm, system coordinators broadcast coordinating (or pricing) signals to the users/devices, who update their power consumption based on the received signal. Then the system coordinators update the coordinating signals based on the physical measurement of the aggregate power usage. We provide conditions to guarantee the feasibility of the aggregate power usage at each iteration so as to avoid blackouts. Furthermore, we prove the convergence of the algorithm under these conditions, and establish its rate of convergence. We illustrate the performance of our algorithm using numerical simulations. These results show that one-way limited communication may be viable for coordinating/operating future smart grids.
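The one-way scheme can be summarized by a minimal dual (price) descent loop (Python): the coordinator broadcasts a price, each user sets its consumption locally, and the coordinator updates the price from the measured aggregate usage alone. The utilities, capacity, and step size are illustrative, and the sketch omits the feasibility safeguards analyzed in the paper.

```python
import numpy as np

# Minimal one-way dual descent: the coordinator broadcasts a price, users set
# their consumption locally, and the coordinator updates the price from the
# measured aggregate usage only (all numbers are illustrative).
a = np.array([3.0, 5.0, 7.0])                # user utilities u_i(x) = a_i*x - x^2/2
capacity = 6.0
price, step = 0.0, 0.2

for _ in range(300):
    usage = np.clip(a - price, 0.0, None)    # each user's local best response
    measured_total = usage.sum()             # coordinator measures, does not ask
    price = max(0.0, price + step * (measured_total - capacity))

print("price:", round(price, 3),
      "| total usage:", round(float(usage.sum()), 3),
      "| capacity:", capacity)
```

Note that only the price travels downstream; the upstream "signal" is the physical measurement of aggregate consumption, which is what makes the scheme one-way in terms of communication.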
Strict quality of service requirements of industrial applications, challenged by harsh environments and huge interference especially in multi-vendor sites, demand the incorporation of cognition in industrial wireless sensor networks (IWSNs). In this paper, a distributed protocol of light complexity for congestion regulation in cognitive IWSNs is proposed to improve the channel utilization while ensuring predetermined performance for specific devices, called primary devices. By sensing the congestion level of a channel with local measurements, a novel congestion control protocol is proposed by which every device decides whether it should continue operating on the channel, or vacate it in case of saturation. Such a protocol dynamically changes the congestion level based on variations of the non-stationary wireless environment as well as the traffic demands of the devices. The proposed protocol is implemented on STM32W108 chips that offer IEEE 802.15.4 standard communications. Experimental results confirm substantial performance enhancement compared to the original standard, while imposing almost no signaling/computational overhead. In particular, channel utilization is increased by 56% with fairness and delay guarantees. The presented results provide useful insights on low-complexity adaptive congestion control mechanisms in IWSNs.
We develop a new framework for measuring and comparing the accuracy of any wireless interference models used in the analysis and design of wireless networks. Our approach is based on a new index that assesses the ability of the interference model to correctly predict harmful interference events, i.e., link outages. We use this new index to quantify the accuracy of various interference models used in the literature, under various scenarios such as Rayleigh fading wireless channels, directional antennas, and blockage (impenetrable obstacles) in the network. Our analysis reveals that in highly directional antenna settings with obstructions, even simple interference models (e.g., the classical protocol model) are accurate, while with omnidirectional antennas, more sophisticated and complex interference models (e.g., the classical physical model) are necessary. Our new approach makes it possible to adopt the appropriate interference model of adequate accuracy and simplicity in different settings.
Increased density of wireless devices, ever growing demands for extremely high data rate, and spectrum scarcity at microwave bands make the millimeter wave (mmWave) frequencies an important player in future wireless networks. However, mmWave communication systems exhibit severe attenuation, blockage, deafness, and may need microwave networks for coordination and fall-back support. To compensate for high attenuation, mmWave systems exploit highly directional operation, which in turn substantially reduces the interference footprint. The significant differences between mmWave networks and legacy communication technologies challenge the classical design approaches, especially at the medium access control (MAC) layer, which has received comparatively less attention than PHY and propagation issues in the literature so far. In this paper, the MAC layer design aspects of short-range mmWave networks are discussed. In particular, we explain why current mmWave standards fail to fully exploit the potential advantages of short-range mmWave technology, and argue for the necessity of new collision-aware hybrid resource allocation frameworks with on-demand control messages, the advantages of a collision notification message, and the potential of multihop communication to provide reliable mmWave connections.
Estimating the position of a mobile node by linear sensor fusion of ranging, speed, and orientation measurements has the potential to achieve high localization accuracy. Nevertheless, the design of these sensor fusion algorithms is uncertain if their fundamental limitations are unknown. Despite the substantial research focus on these sensor fusion methods, the characterization of the Cramér Rao Lower Bound (CRLB) has not yet been satisfactorily addressed. In this paper, the existence and derivation of the posterior CRLB for the linear sensor fusion of ranging, speed, and orientation measurements is investigated. The major difficulty in the analysis is that ranging and orientation are not linearly related to the position, which makes it hard to derive the posterior CRLB. This difficulty is overcome by introducing the concept of the posterior CRLB in the Cauchy principal value sense and deriving explicit upper and lower bounds on the posterior Fisher information matrix. Numerical simulation results are provided for both the parametric CRLB and the posterior CRLB, comparing some widely-used methods from the literature to the derived bound. It is shown that many existing methods based on Kalman filtering may be far from the fundamental limitations given by the CRLB.
Empirical studies show that cruising for car parking accounts for a non-negligible amount of the daily traffic, especially in central areas of large cities. Therefore, mechanisms for minimizing cruising traffic directly affect the dynamics of traffic congestion. One way to minimize cruising traffic is efficient car-parking-slot assignment. Usually, the related design problems are combinatorial and the worst-case complexity of optimal methods grows exponentially with the problem size. As a result, almost all existing methods for parking slot assignment are simple and greedy approaches, where each car or user is assigned a free parking slot close to its destination. Moreover, no emphasis is placed on optimizing any form of fairness among the users as a social benefit. In this paper, fairness is considered as a metric for modeling the aggregate social benefit of the users. An algorithm based on Lagrange duality is developed for car-parking-slot assignment. Numerical results illustrate the performance of the proposed algorithm compared to the optimal assignment and a greedy method.
Millimeter wave (mmWave) systems are emerging as an essential technology for enabling extremely high data rate wireless communications. The main limiting factors of mmWave systems are blockage (high penetration loss) and deafness (misalignment between the beams of the transmitter and receiver). To alleviate these problems, it is imperative to incorporate efficient association and relaying between terminals and access points. Unfortunately, the existing association techniques are designed for the traditional interference-limited networks, and thus are highly suboptimal for mmWave communications due to narrow-beam operations and the resulting non-negligible interference-free behavior. This paper introduces a distributed approach that solves the joint association and relaying problem in mmWave networks considering the load balancing at access points. The problem is posed as a novel stochastic optimization problem, which is solved by distributed auction algorithms where the clients and relays act asynchronously to achieve optimal client-relay-access point association. It is shown that the algorithms provably converge to a solution that maximizes the aggregate logarithmic utility within a desired bound. Numerical results allow quantification of the performance enhancements introduced by the relays, and the substantial improvements of the network throughput and fairness among the clients by the proposed association method as compared to standard approaches. It is concluded that mmWave communications with proper association and relaying mechanisms can support extremely high data rates, connection reliability, and fairness among the clients.
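As a rough illustration of the distributed auction idea referred to above (agents bid asynchronously for resources based on their net utility minus the current prices), a minimal centralized simulation of a Bertsekas-style auction for a one-to-one client/access-point assignment is sketched below. The utility matrix, the epsilon parameter, and the one-to-one setting are illustrative assumptions; the paper's asynchronous client-relay-access point algorithm with load balancing is more general and is not reproduced here.

```python
import numpy as np

def auction_assignment(utility, eps=0.01):
    """Bertsekas-style auction for a one-to-one client/access-point assignment.

    utility[i, j]: benefit of assigning client i to access point j (square matrix).
    Returns assignment[i] = index of the access point obtained by client i.
    Illustrative sketch only, not the distributed algorithm of the paper.
    """
    n = utility.shape[0]
    prices = np.zeros(n)                 # current "price" of each access point
    assignment = -np.ones(n, dtype=int)
    owner = -np.ones(n, dtype=int)       # owner[j] = client currently holding AP j

    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop(0)
        values = utility[i] - prices                         # net value of each AP
        j_best = int(np.argmax(values))
        best, second = np.partition(values, -2)[-2:][::-1]   # two largest net values
        bid = prices[j_best] + (best - second) + eps         # raise the price

        if owner[j_best] >= 0:                               # previous owner is evicted
            assignment[owner[j_best]] = -1
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assignment[i] = j_best
        prices[j_best] = bid
    return assignment

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    U = rng.uniform(0.0, 10.0, size=(5, 5))   # hypothetical client/AP utilities
    print(auction_assignment(U))
```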
The need for fast distributed solvers for optimization problems in networked systems has motivated the recent development of the Fast-Lipschitz optimization framework. In such an optimization, problems satisfying certain qualifying conditions, such as monotonicity of the objective function and contractivity of the constraints, have a unique optimal solution obtained via fast distributed algorithms that compute the fixed point of the constraints. This paper extends the set of problems for which the Fast-Lipschitz framework applies. Existing assumptions on the problem form are relaxed and new and generalized qualifying conditions are established by novel results based on Lagrangian duality. It is shown for which cases of more constraints than decision variables, and of fewer constraints than decision variables, Fast-Lipschitz optimization applies. New results are obtained by imposing non-strict monotonicity of the objective functions. The extended Fast-Lipschitz framework is illustrated by a number of examples, including network optimization and optimal control problems.
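To illustrate the core computational step mentioned above (the optimum of a Fast-Lipschitz problem is the fixed point of contractive constraints of the form x = f(x)), a minimal synchronous fixed-point iteration is sketched below. The contraction mapping f and its constants are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def fixed_point(f, x0, tol=1e-9, max_iter=1000):
    """Iterate x <- f(x) until convergence.

    For Fast-Lipschitz problems the optimum satisfies the constraints at
    equality, i.e., x* = f(x*), so this iteration converges to the optimal
    point whenever f is contractive. Illustrative sketch only.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = f(x)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    # Hypothetical contractive constraint: x_i = 0.3 * mean(x) + c_i.
    c = np.array([1.0, 2.0, 3.0])
    f = lambda x: 0.3 * np.mean(x) * np.ones_like(x) + c
    print(fixed_point(f, np.zeros(3)))
```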
Caching at the network edge is considered a promising solution for addressing the ever-increasing traffic demand of mobile devices. The problem of proactive content replication in hierarchical cache networks, which consist of both network edge and core network caches, is considered in this paper. This problem arises because network service providers wish to efficiently distribute content so that user-perceived performance is maximized. Nevertheless, current high-complexity replication algorithms are impractical due to the vast number of involved content items. Clustering algorithms inspired by machine learning can be leveraged to simplify content replication and reduce its complexity. Specifically, similar items could be clustered together, e.g., according to their popularity in space and time. Replication on a cluster level is a problem of substantially smaller dimensionality, but it may result in suboptimal decisions compared to item-level replication. The factors that cause performance loss are identified and a clustering scheme that addresses the specific challenges of content replication is devised. Extensive numerical evaluations, based on realistic traffic data, demonstrate that for reasonable cluster sizes the impact on actual performance is negligible.
Millimeter wave (mmWave) communication is a promising candidate for future extremely high data rate wireless networks. The main challenges of mmWave communications are deafness (misalignment between the beams of the transmitter and receiver) and blockage (severe attenuation due to obstacles). Due to deafness, prior to link establishment between a client and its access point, a time-consuming alignment/beam training procedure is necessary, whose complexity depends on the operating beamwidth. Addressing blockage may require a reassociation to non-blocked access points, which in turn imposes additional alignment overhead. This paper introduces a unifying framework to maximize network throughput considering both deafness and blockage. A distributed auction-based solution is proposed, where the clients and access points act asynchronously to achieve optimal association along with the optimal operating beamwidth. It is shown that the proposed algorithm provably converges to a solution that maximizes the aggregate network utility within a desired bound. Convergence time and performance bounds are derived in closed form. Numerical results confirm superior throughput performance of the proposed solution compared to existing approaches, and highlight the existence of a tradeoff between alignment overhead and achievable throughput that affects the optimal association.
The IEEE 802.15.3c standard defines physical layer and Medium Access Control (MAC) specifications for millimeter-Wave Wireless Personal Area Networks. The MAC protocol implements a combination of random channel access and time division multiple access mechanisms to exploit the sectorization granted by the directional antennas. In this work, a novel two-level stochastic model is presented to capture the complex dynamics of channel access in this network environment. Different from prior work, the finite temporal horizon of the channel contention phase is accurately modeled, and the common assumption of saturated terminals is removed. Based on the proposed modeling framework, the allocation of time resource to each sector is optimized to improve the network performance.
Nonconvex and structured optimization problems arise in many engineering applications that demand scalable and distributed solution methods. The study of the convergence properties of these methods is in general difficult due to the nonconvexity of the problem. In this paper, two distributed solution methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses an alternating optimization, which in turn makes it scalable. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). It is shown that ADPM for nonconvex problems asymptotically converges to a primal feasible point under mild conditions, and an additional condition ensuring that it asymptotically reaches the standard first-order necessary conditions for local optimality is introduced. In the case of the ADMM, novel sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions are established. Based on this, the complete convergence of ADMM for a class of low-dimensional problems is characterized. Finally, the results are illustrated by applying ADPM and ADMM to a nonconvex localization problem in wireless sensor networks.
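For readers unfamiliar with the alternating direction idea, a minimal convex scaled-form ADMM for the classical lasso splitting is sketched below. This is a standard textbook illustration, assuming the lasso objective and fixed penalty rho; the nonconvex ADPM/ADMM analysis of the paper is not reproduced here.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise shrinkage operator used in the z-update."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Scaled-form ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z.

    A standard convex illustration of alternating direction updates;
    not the paper's nonconvex localization formulation.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)     # system matrix of the x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # x-minimization
        z = soft_threshold(x + u, lam / rho)          # z-minimization
        u = u + x - z                                 # dual (multiplier) update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(admm_lasso(A, b), 2))
```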
In millimeter wave (mmWave) communication systems, narrow beam operations overcome severe channel attenuations, reduce multiuser interference, and thus introduce the new concept of noise-limited mmWave wireless networks. The regime of the network, whether noise-limited or interference-limited, heavily reflects on the medium access control (MAC) layer throughput and on proper resource allocation and interference management strategies. Yet, the alternating presence of these regimes and, more importantly, their dependence on the mmWave design parameters are ignored in the current approaches to mmWave MAC layer design, with potentially disastrous consequences on the throughput/delay performance. In this paper, tractable closed-form expressions for the collision probability and MAC layer throughput of mmWave networks, operating under slotted ALOHA and TDMA, are derived. The new analysis reveals that mmWave networks may exhibit a non-negligible transitional behavior from a noise-limited regime to an interference-limited regime, depending on the density of the transmitters, density and size of obstacles, transmission probability, beamwidth, and transmit power. It is concluded that a new framework of adaptive hybrid resource allocation procedures, containing a proactive contention-based phase followed by a reactive contention-free one with dynamic phase duration, is necessary to cope with such transitional behavior.
Millimeter wave (mmWave) wireless networks rely on narrow beams to support multi-gigabit data rates. Nevertheless, the alignment of transmitter and receiver beams is a time-consuming operation, which introduces an alignment-throughput tradeoff. A wider beamwidth reduces the alignment overhead, but also leads to reduced directivity gains. Moreover, existing mmWave standards schedule a single transmission in each time slot, although directional communications facilitate multiple concurrent transmissions. In this paper, a joint consideration of the problems of beamwidth selection and scheduling is proposed to maximize effective network throughput. The resulting optimization problem requires exact knowledge of the network topology, which may not be available in practice. Therefore, two standard-compliant approximation algorithms are developed, which rely on underestimation and overestimation of interference. The first one aims to maximize the reuse of available spectrum, whereas the second one is a more conservative approach that schedules together only links that cause no interference. Extensive performance analysis provides useful insights on the directionality level and the number of concurrent transmissions that should be pursued. Interestingly, extremely narrow beams are in general not optimal.
The real-time detection of bacteria and other bio-pollutants in water distribution networks and the real-time control of the water quality are made possible by new biosensors. However, the limited communication capabilities of these sensors, which are placed underground, and their limited number, due to their high cost, pose significant challenges to the deployment and the reliable monitoring of the network. This paper presents a preliminary study concerning the problem of the static optimal sensor placement of a wireless biosensor network in a water distribution network for real-time detection of bacterial contamination. An optimal sensor placement strategy is proposed, which maximizes the probability of detection considering a limited number of sensors while ensuring a connected communication topology. A lightweight algorithm that solves the optimal placement problem is developed. The performance of the proposed algorithm is evaluated through simulations, considering different network topologies using a water pipelines emulator. The results indicate that the proposed optimization outperforms more traditional approaches in terms of detection probability. It is concluded that the availability of a dynamic model of the bacterial propagation along with a spatio-temporal correlation of the process could lead to a more advanced real-time control of the water distribution networks.
Effective spectrum sensing strategies enable cognitive radios to enhance the spectrum efficiency. In this paper, modeling, performance analysis, and optimization of spectrum handoff in a centralized cognitive radio network are studied. More specifically, for a given sensing order, the average throughput of secondary users and the average interference level among the secondary and primary users are evaluated for a cognitive radio network with only one secondary user. By a Markov chain analysis, a network with multiple secondary users performing cooperative spectrum sensing is modeled, and the above performance metrics are derived. Then, a maximization of the secondary network performance in terms of throughput while keeping under control the average interference is formulated. Finally, numerical results validate the analytical derivations and show that optimally tuning the sensing time significantly enhances the performance of the spectrum handoff. Also, we observe that exploiting the OR rule for cooperative spectrum sensing provides a higher average throughput compared to the AND rule.
Motivated by the need for fast computations demanded by wireless sensor networks, the new F-Lipschitz optimization theory is introduced for a novel class of optimization problems. These problems are defined by simple qualifying properties specified in terms of increasing objective function and contractive constraints. It is shown that feasible F-Lipschitz problems always have a unique optimal solution that satisfies the constraints at equality. The solution is obtained quickly by asynchronous algorithms of certified convergence. F-Lipschitz optimization can be applied to both centralized and distributed optimization. Compared to traditional Lagrangian methods, which often converge linearly, the convergence time of centralized F-Lipschitz problems is at least superlinear. Distributed F-Lipschitz algorithms converge fast, as opposed to traditional Lagrangian decomposition and parallelization methods, which generally converge slowly and at the price of many message passings. In both cases, the computational complexity is much lower than that of traditional Lagrangian methods. Examples of application of the new optimization method are given for distributed detection and radio power control in wireless sensor networks. The drawback of the F-Lipschitz optimization is that it might be difficult to check the qualifying properties. For more general optimization problems, it is suggested that it is convenient to have conditions ensuring that the solution satisfies the constraints at equality.
The problem of maximizing a utility function while limiting the outage probability below an appropriate threshold is investigated. A code-division multiple access wireless network under mixed Nakagami-lognormal fading is considered. Solving such a utility maximization problem is difficult because the problem is non-convex and non-geometric, with mixed integer and real decision variables and no explicit functions of the constraints available. In this paper, two methods for the solution of the utility maximization problem are proposed. In the first method, a simple explicit outage approximation is used and the constraint that rates are integers is relaxed, yielding a standard convex programming optimization that can be solved quickly but at the price of a reduced accuracy. The second method uses a more accurate outage approximation, which allows solving the utility maximization problem by the Lagrange duality for non-convex problems and contraction mapping theory. Numerical results show that the first method performs well for average values of the outage requirements, whereas the second one is always more accurate, but is also more computationally expensive.
Given the communication savings offered by self-triggered sampling, it is becoming an essential paradigm for closed-loop control over energy-constrained wireless sensor networks (WSNs). The understanding of the performance of self-triggered control systems when the feedback loops are closed over IEEE 802.15.4 WSNs is of major interest, since the communication standard IEEE 802.15.4 is the de facto reference protocol for energy-efficient WSNs. In this paper, a new approach to control several processes over a shared IEEE 802.15.4 network by self-triggered sampling is proposed. It is shown that the sampling time of the processes, the protocol parameters, and the scheduling of the transmissions must be jointly selected to ensure stability of the processes and energy efficiency of the network. The challenging part of the proposed analysis is ensuring stability and making an energy-efficient scheduling of the state transmissions. These transmissions over IEEE 802.15.4 are allowed only at certain time slots, which are difficult to schedule when multiple control loops share the network. The approach establishes that the joint design of self-triggered samplers and the network protocol 1) ensures the stability of each loop, 2) increases the network capacity, 3) reduces the number of transmissions of the nodes, and 4) increases the sleep time of the nodes. A new dynamic scheduling problem is proposed to control each process, adapt the protocol parameters, and reduce the energy consumption. An algorithm is then derived, which adapts to any choice of the self-triggered samplers of every control loop. Numerical examples illustrate the analysis and show the benefits of the new approach.
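As a rough illustration of the self-triggered idea referred to above (the next sampling instant is computed from the current sampled state so that a stability-related condition holds until then), a minimal sketch for a scalar linear plant is given below. The plant, the triggering rule, and all constants are illustrative assumptions, not those derived in the paper for control over IEEE 802.15.4.

```python
# Scalar plant x' = a*x + b*u with state feedback u = K*x(t_k) held between samples.
# Self-triggered rule (illustrative): using the model, predict the first time the
# hold-induced error |x(t_k) - x(t)| exceeds sigma*|x(t)| and sample then.
a, b, K, sigma = 1.0, 1.0, -2.0, 0.3
tau_min, tau_max, dt = 0.01, 1.0, 1e-3

def next_sample_time(xk):
    """Predict the next inter-sample interval from the sampled state xk."""
    x, u, tau = xk, K * xk, 0.0
    while tau < tau_max:
        x += dt * (a * x + b * u)          # open-loop model prediction
        tau += dt
        if abs(xk - x) >= sigma * abs(x):  # triggering condition reached
            break
    return max(tau, tau_min)

x, t, T = 1.0, 0.0, 5.0
samples = []
while t < T:
    tau = next_sample_time(x)
    u = K * x                              # control computed at the sampling instant
    for _ in range(int(tau / dt)):         # propagate the plant until the next sample
        x += dt * (a * x + b * u)
    t += tau
    samples.append((round(t, 3), round(x, 4), round(tau, 3)))

print(samples[:5])
```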
A major constraint in deployments of resource-limited networks is the energy consumption related to the battery lifetime of the network nodes. To this end, power efficient digital modulation techniques such as On-Off keying (OOK) are highly attractive. In this paper, a novel complete probabilistic description of the Direct Sequence Code Division Multiple Access (DS-CDMA) system with random signatures employing OOK modulation is presented. The system scenario considers simultaneously transmitting nodes in Rayleigh fading conditions. Numerical simulations are provided to support the derived results.
The IEEE 802.15.4 communication protocol is a de facto standard for wireless applications in industrial and home automation. Although the performance of the medium access control (MAC) of the IEEE 802.15.4 has been thoroughly investigated under the assumption of an ideal wireless channel, there is still a lack of understanding of the cross-layer interactions between MAC and physical layer in the presence of realistic wireless channel models that include path loss, multi-path fading, and shadowing. In this paper, an analytical model of these dynamics is proposed. The analysis considers simultaneously a composite Rayleigh-lognormal channel fading, interference generated by multiple terminals, the effects induced by hidden terminals, and the MAC reduced carrier sensing capabilities. It is shown that the reliability of the contention-based MAC over fading channels is often far from that derived under ideal channel assumptions. Moreover, it is established to what extent fading may be beneficial for the overall network performance.
Many applications of distributed storage systems are emerging, e.g., in wireless sensor networks or cloud storage. Since storage nodes in wireless sensor networks have limited battery, it is valuable to find a repair scheme with optimal transmission costs (e.g., energy). The optimal-cost repair has recently been investigated in a centralized way. However, a centralized control mechanism may not be available or may be very expensive. For these scenarios, it is interesting to study optimal-cost repair in a decentralized setup. We formulate the optimal-cost repair as convex optimization problems for networks with convex transmission costs. Then we use primal and dual decomposition approaches to decouple the problem into subproblems to be solved locally. Thus, each surviving node, collaborating with other nodes, can minimize its transmission cost such that the global cost is minimized. We further study the optimality and convergence of the algorithms. Finally, we discuss the code construction and determine the field size for finding feasible network codes in our approaches.
Most applications of wireless sensor networks require reliable and timely data communication with maximum possible network lifetime under a low traffic regime. These requirements are very critical especially for the stability of wireless sensor and actuator networks. Designing a protocol that satisfies these requirements in a network consisting of sensor nodes with traffic pattern and location varying over time and space is a challenging task. We propose an adaptive optimal duty-cycle algorithm running on top of the IEEE 802.15.4 medium access control to minimize power consumption while meeting the reliability and delay requirements. Such a problem is complicated because simple and accurate models of the effects of the duty cycle on reliability, delay, and power consumption are not available. Moreover, the scarce computational resources of the devices and the lack of prior information about the topology make it impossible to compute the optimal parameters of the protocols. Based on an experimental implementation, we propose simple experimental models to expose the dependency of reliability, delay, and power consumption on the duty cycle at the node and validate it through extensive experiments. The coefficients of the experimental-based models can be easily computed on existing IEEE 802.15.4 hardware platforms by introducing a learning phase without any explicit information about data traffic, network topology, and medium access control parameters. The experimental-based model is then used to derive a distributed adaptive algorithm for minimizing the power consumption while meeting the reliability and delay requirements in the packet transmission. The algorithm is easily implementable on top of the IEEE 802.15.4 medium access control without any modifications of the protocol. An experimental implementation of the distributed adaptive algorithm on a test bed with off-the-shelf wireless sensor devices is presented. The experimental performance of the algorithms is compared to the existing solutions from the literature. The experimental results show that the experimental-based model is accurate and that the proposed adaptive algorithm attains the optimal value of the duty cycle, maximizing the lifetime of the network while meeting the reliability and delay constraints under both stationary and transient conditions. Specifically, even if the number of devices and their traffic configuration change sharply, the proposed adaptive algorithm allows the network to operate close to its optimal value. Furthermore, for Poisson arrivals, the duty-cycle protocol is modeled as a finite capacity queuing system in a star network. This simple analytical model provides insights into the performance metrics, including the reliability, average delay, and average power consumption of the duty-cycle protocol.
Ensuring privacy is an essential requirement in various contexts, such as social networks, healthcare data, e-commerce, banks, and government services. Here, different entities coordinate to address specific problems where the sensitive problem data are distributed among the involved entities and no entity wants to publish its data during the solution procedure. Existing privacy preserving solution methods are mostly based on cryptographic procedures and thus have the drawback of substantial computational complexity. Surprisingly, little attention has been devoted thus far to exploit mathematical optimization techniques and their inherent properties for preserving privacy. Yet, optimization based approaches to privacy require much less computational effort compared to cryptographic variants, which is certainly desirable in practice. In this paper, a unified framework for transformation based optimization methods that ensure privacy is developed. A general definition for the privacy in the context of transformation methods is proposed. A number of examples are provided to illustrate the ideas. It is concluded that the theory is still in its infancy and that huge benefits can be achieved by a substantial development.
Wireless sensor networks for industrial automation applications must offer timely, reliable, and energy efficient communications at both low and high data rates. While traditional communication technologies between 2.4 GHz and 5 GHz are sometimes incapable of efficiently achieving the aforementioned goals, new communication strategies are emerging, such as millimeter wave (mmWave) communications. In this overview paper, the general requirements that factory and process automation impose on the network design are reviewed. Moreover, this paper presents and qualitatively evaluates the 60 GHz mmWave communication technology for automation. It is argued that the upcoming 60 GHz mmWave technology brings an enormous potential and can influence the design of the future communication infrastructures in factory and process automation.
Developing an efficient spectrum access policy enables cognitive radios to dramatically increase spectrum utilization while assuring predetermined quality of service levels for the primary users. In this letter, modeling, performance analysis, and optimization of a distributed secondary network with a random sensing order policy are studied. Specifically, the secondary users create a random order of the available channels to sense and find a transmission opportunity in a distributed manner. For this network, the average throughputs of the secondary users and the average interference level between the secondary and primary users are evaluated by a new Markov model. Then, a maximization of the secondary network performance in terms of throughput while keeping under control the average interference is proposed. Subsequently, a simple and practical adaptive algorithm is developed to optimize the network in a distributed manner. Interestingly, the proposed algorithm follows the variations of the wireless channels in non-stationary conditions and, besides having substantially lower computational cost, it outperforms static brute-force optimization. Finally, numerical results are provided to demonstrate the efficiencies of the proposed schemes. It is shown that fully distributed algorithms can achieve substantial performance improvements in cognitive radio networks without the need of centralized management or message passing among the users.
A major constraint in deployments of wireless sensor networks (WSNs) is the energy consumption related to the battery lifetime of the network nodes. To this end, power efficient digital modulation techniques such as On-Off keying (OOK) are highly attractive. However, the OOK detection thresholds, namely the thresholds against which the received signals are compared to detect which bit is transmitted, must be carefully selected to minimize the bit error rate. This is challenging to accomplish in resource-limited nodes with constrained computational capabilities. In this paper, the system scenario considers simultaneously transmitting nodes in Rayleigh fading conditions. Various iterative algorithms to numerically select the detection thresholds are established. Convergence analysis accompanies these algorithms. Numerical simulations are provided to support the derived results and to compare the proposed algorithms. It is concluded that OOK modulation is beneficial in resource constrained WSNs provided that efficient optimization algorithms are employed for the threshold selection.
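As a toy illustration of numerical threshold selection for OOK, a simple bisection that locates the intersection of the two conditional Gaussian densities (which minimizes the average bit error probability for equal priors) is sketched below. The Gaussian-noise channel, amplitudes, and noise levels are illustrative assumptions; the Rayleigh-fading, multi-node setting analyzed in the paper is considerably more involved.

```python
from math import erfc, exp, pi, sqrt

def gauss_pdf(y, mu, sigma):
    return exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def ook_threshold(A, sigma0, sigma1, tol=1e-9):
    """Bisection for the OOK detection threshold under Gaussian noise.

    With equal priors, the error-minimizing threshold is where the conditional
    densities p(y|'0') and p(y|'1') intersect. Toy example only.
    """
    lo, hi = 0.0, A
    g = lambda y: gauss_pdf(y, A, sigma1) - gauss_pdf(y, 0.0, sigma0)
    for _ in range(200):                 # g(lo) < 0 near 0, g(hi) > 0 near A
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def ook_ber(thr, A, sigma0, sigma1):
    """Average bit error rate for a given threshold (equal priors)."""
    p_fa = 0.5 * erfc(thr / (sigma0 * sqrt(2)))        # '0' detected as '1'
    p_md = 0.5 * erfc((A - thr) / (sigma1 * sqrt(2)))  # '1' detected as '0'
    return 0.5 * (p_fa + p_md)

if __name__ == "__main__":
    A, s0, s1 = 1.0, 0.15, 0.25
    thr = ook_threshold(A, s0, s1)
    print(round(thr, 4), ook_ber(thr, A, s0, s1))
```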
In smart homes, it is essential to reliably detect events including water leakages. A control action, such as shutting the water pipes, relies on reliable event detection. In this demo, a wireless sensor network for detection and localization of events in smart homes is presented. The demo is based on novel distributed detection-estimation and localization algorithms. A graphical user interface to visualize in real-time the network status is developed. Upon a detected event, the user is alerted through a Twitter notification. In the experiments the false alarm probability is improved by 30% and the average relative localization error is 1.7%.
This paper considers a multi-input multi-output (MIMO) interference network in which each transmitter intends to communicate with its dedicated receiver at a certain fixed rate. It is known that when perfect CSI is available at each terminal, the interference alignment technique can be applied to align the interference signals at each receiver in a subspace independent of the desired signal subspace. The impact of interference can hence be eliminated. In practice, however, terminals in general can acquire only noisy CSI. Interference alignment cannot be perfectly performed to avoid interference leakage in the signal subspace. Thus, the quality of each communication link depends on the transmission power of the unintended transmitters. To solve this problem, we propose an iterative algorithm to perform stochastic power control and transceiver design based on only noisy local CSI. The transceiver design is conducted based on the interference alignment concept, and the power control seeks solutions for efficiently assigning transmit powers to provide successful communications for all transmitter-receiver pairs.
Testing smart grid information and communication technology (ICT) infrastructures is imperative to ensure that they meet industry requirements and standards and do not compromise the grid reliability. Within the micro-grid, this requires identifying and testing ICT infrastructures for communication between distributed energy resources, buildings, substations, etc. To evaluate various ICT infrastructures for micro-grid deployment, this work introduces the Virtual Micro-Grid Laboratory (VMGL) and provides a preliminary analysis of Long-Term Evolution (LTE) as a micro-grid communication infrastructure.
A novel distributed estimation method for sensor networks is proposed. The goal is to track a time-varying signal that is jointly measured by a network of sensor nodes despite the presence of noise: each node computes its local estimate as a weighted sum of its own and its neighbors' measurements and estimates, and updates its weights to minimize both the variance and the mean of the estimation error by means of a suitable Pareto optimization problem. The estimator does not rely on central coordination: both parameter optimization and estimation are distributed across the nodes. The performance of the distributed estimator is investigated in terms of estimation bias and estimation error. Moreover, an upper bound of the bias is provided. The effectiveness of the proposed estimator is illustrated via computer simulations and the performances are compared with other distributed schemes previously proposed in the literature. The results show that the estimation quality is comparable to that of one of the best existing distributed estimation algorithms, while guaranteeing lower computational cost and time.
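A stripped-down version of the estimator structure described above (each node combines its own and its neighbors' measurements and previous estimates through weights) is sketched below with fixed, hand-picked weights. The network, the weights, and the measurement model are illustrative placeholders; the paper's adaptive, Pareto-optimized weight update is not reproduced.

```python
import numpy as np

def distributed_estimate(adjacency, meas, prev_est, h_meas=0.3):
    """One update of a simple distributed estimator.

    Each node i forms its new estimate as a convex combination of the previous
    estimates and the measurements available in its neighborhood. The weights
    here are fixed and merely illustrative; in the cited work they are adapted
    online to minimize the estimation error mean and variance.
    """
    n = len(meas)
    new_est = np.zeros(n)
    for i in range(n):
        nbrs = np.flatnonzero(adjacency[i])        # neighbors of node i (incl. itself)
        w_est = (1.0 - h_meas) / len(nbrs)         # weight on neighbor estimates
        w_meas = h_meas / len(nbrs)                # weight on neighbor measurements
        new_est[i] = np.sum(w_est * prev_est[nbrs] + w_meas * meas[nbrs])
    return new_est

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 4-node line network; adjacency includes self-loops.
    A = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]])
    signal, est = 1.0, np.zeros(4)
    for _ in range(50):
        z = signal + 0.1 * rng.standard_normal(4)  # noisy measurements of the signal
        est = distributed_estimate(A, z, est)
    print(np.round(est, 3))
```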
Fast-Lipschitz optimization is a recently proposed framework useful for an important class of centralized and distributed optimization problems over peer-to-peer networks. The properties of Fast-Lipschitz problems allow the solution to be computed without having to introduce Lagrange multipliers, as in most other methods. This is highly beneficial, since multipliers need to be communicated across the network and thus increase the communication complexity of solution algorithms. Although the convergence speed of Fast-Lipschitz optimization methods often outperforms Lagrangian methods in practice, there is not yet a theoretical analysis. This paper provides a fundamental step towards such an analysis. Sufficient conditions for superior convergence of the Fast-Lipschitz method are established. The results are illustrated by simple examples. It is concluded that optimization problems with quadratic cost functions and linear constraints are always better solved by Fast-Lipschitz optimization methods, provided that certain conditions hold on the eigenvalues of the Hessian of the cost function and constraints.
The problem of allocating communication resources to multiple plants in a networked control system is investigated. In the presence of a shared communication medium, a total transmission rate constraint is imposed. For the purpose of optimizing the rate allocation to the plants over a finite horizon, two objective functions are considered. The first one is a single-objective function, and the second one is a multi-objective function. Because of the difficulty of deriving the closed-form expression of these functions, which depend on the instantaneous communication rate, an approximation is proposed by using high-rate quantization theory. It is shown that the approximate objective functions are convex in the region of interest both in the scalar case and in the multi-objective case. This makes it possible to establish a linear control policy, given by the classical linear quadratic Gaussian theory, as a function of the channel. Based on this result, a new complex relation between the control performance and the channel error probability is characterized.
A peer-to-peer estimator computes local estimates at each node by combining the information from neighboring nodes without the need of central coordination. Although more flexible and scalable, peer-to-peer minimum variance estimators are difficult to design because of message losses and lack of network coordination. In this paper, we propose a new peer-to-peer estimator that makes it possible to recover a time-varying scalar signal from measurements corrupted by an unknown non-zero mean independent noise or disturbances. Message losses occurring over the network and absence of central coordination are considered. Novel theoretical solutions are developed by taking advantage of a model of the signal dynamics. The proposed approach simultaneously guarantees a bounded mean value and minimum variance of the estimation error. Simulation results illustrate the performance of the proposed method.
Fast-Lipschitz optimization has been recently proposed as a new framework with numerous computational advantages for both centralized and decentralized convex and non-convex optimization problems. Such a framework generalizes the interference function optimization, which plays an essential role in distributed radio power optimization over wireless networks. The characteristics of Fast-Lipschitz methods are low computational and coordination complexity compared to Lagrangian methods, with substantial benefits particularly for distributed optimization. These special properties of Fast-Lipschitz optimization can be ensured through qualifying conditions, which allow the Lagrange multipliers to be bounded away from zero. In this paper, the Fast-Lipschitz optimization is substantially extended by establishing new qualifying conditions. The results are a generalization of the old qualifying conditions and a relaxation of the assumptions on the problem structure, so that the optimization framework can be applied to many more problems than previously possible. The new results are illustrated by a non-convex optimization problem, and by a radio power optimization problem which cannot be handled by the existing Fast-Lipschitz theory.
Platooning of vehicles makes it possible to save energy and increase safety, provided that reliable wireless communication protocols are available. In this paper, the optimization of the medium access control (MAC) protocol based on IEEE 802.11e for platoon joining is investigated. The exchange of dynamic information among vehicles within bounded and tight timeout windows is challenging. On the one hand, safe and secure joining of vehicles to a platoon is time-consuming and, at the actual speed of the vehicles, may be very difficult. On the other hand, decreasing the joining timeout windows increases the rate of joining failures. An analytical characterization of the appropriate timeout windows, which depends on the rate of the messages exchanged to request and verify joining, is proposed. By using such a new characterization, the estimation of tight timeout windows for joining is achieved based on the rate of transferred joining messages. Numerical results show that standard joining timeout windows incur unacceptable delays for platooning. By contrast, adaptive optimized timeout windows reduce the communication delay. It is concluded that the new optimization proposed in this paper can potentially reduce the energy consumption of vehicles and increase safety.
The latest wireless network, 3GPP Long Term Evolution (LTE), is considered to be a promising solution for smart grids because it provides both low latency and large bandwidth. However, LTE was not originally intended for smart grid applications, where data generated by the grid have specific delay requirements that are different from those of traditional data or voice communications. In this paper, the specific requirements imposed by smart grids on the LTE communication infrastructure are first determined. The latency offered by the LTE network to smart grid components is investigated and an empirical mathematical model of the distribution of the latency is established. It is shown by experimental results that with the current LTE up-link scheduler, smart grid latency requirements are not always satisfied and that only a limited number of components can be accommodated. To overcome such a deficiency, a new scheduler of the LTE medium access control is proposed for smart grids. The scheduler is based on a mathematical linear optimization problem that considers simultaneously both the smart grid components and common user equipments. An algorithm for the solution of such a problem is derived based on a theoretical analysis. Simulation results based on this new scheduler illustrate the analysis. It is concluded that LTE can be effectively used in smart grids if new schedulers are employed for improving latency.
Control applications over wireless sensor networks (WSNs) require timely, reliable, and energy efficient communications. This is challenging because reliability and latency of delivered packets and energy are at odds, and resource constrained nodes support only simple algorithms. In this chapter, a new system-level design approach for protocols supporting control applications over WSNs is proposed. The approach suggests a joint optimization, or co-design, of the control specifications, the networking layer, the medium access control layer, and the physical layer. The protocol parameters are adapted by an optimization problem whose objective function is the network energy consumption, and whose constraints are the reliability and latency of the packets as requested by the control application. The design method aims at the definition of simple algorithms that are easily implemented on resource constrained sensor nodes. These algorithms allow the network to meet the reliability and latency required by the control application while minimizing energy consumption. The design method is illustrated by two protocols: Breath and TREnD, which are implemented on a test-bed and compared to some existing solutions. Experimental results show good performance of the protocols based on this design methodology in terms of reliability, latency, low duty cycle, and load balancing for both static and time-varying scenarios. It is concluded that a system-level design is the essential paradigm to exploit the complex interaction among the layers of the protocol stack and reach a maximum WSN efficiency.
The IEEE 802.15.4 standard for wireless sensor networks (WSNs) can support energy efficient, reliable, and timely packet transmission by tuning the medium access control (MAC) parameters macMinBE, macMaxCSMABackoffs, and macMaxFrameRetries. Such a tuning is difficult, because simple and accurate models of the influence of these parameters on the probability of successful packet transmission, packet delay, and energy consumption are not available. Moreover, it is not clear how to adapt the parameters to the changes of the network and traffic regimes by algorithms that can run on resource-constrained nodes. In this chapter, a generalized Markov chain is proposed to model these relations by simple expressions without giving up the accuracy. In contrast to previous work, the presence of limited number of retransmissions, acknowledgments, unsaturated traffic, and packet size is accounted for. The model is then used to derive an adaptive algorithm for minimizing the power consumption while guaranteeing reliability and delay constraints in the packet transmission. The algorithm does not require any modification of the IEEE 802.15.4 standard and can be easily implemented on network nodes. Numerical results show that the analysis is accurate and that the proposed algorithm satisfies reliability and delay constraints, and ensures a longer lifetime of the network under both stationary and transient network conditions.
In this paper, we investigate strategies for radio power control for wireless sensor networks that guarantee a desired packet error probability. Efficient power control algorithms are of major concern for these networks, not only because the power consumption can be significantly decreased but also because the interference can be reduced, allowing for higher throughput. An analytical model of the Received Signal Strength Indicator (RSSI), which is a link quality metric, is proposed. The model relates the RSSI to the Signal to Interference plus Noise Ratio (SINR), and thus provides a connection between the powers and the packet error probability. Two power control mechanisms are studied: a Multiplicative-Increase Additive-Decrease (MIAD) power control described by a Markov chain, and a power control based on the average packet error rate. A component-based software implementation using the Contiki operating system is provided for both power control mechanisms. Experimental results are reported for a test-bed with Telos motes.
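A minimal sketch of the multiplicative-increase additive-decrease idea mentioned above is given below: transmit power is multiplied by a factor when the measured packet error rate exceeds the target and decreased by a fixed step otherwise. The constants and the toy error-rate model are illustrative placeholders, not the RSSI/SINR-based model or the Markov chain description of the paper.

```python
def miad_power_control(per_target=0.05, p_min_mw=0.01, p_max_mw=1.0,
                       factor_up=1.5, step_down_mw=0.02, n_rounds=40):
    """Multiplicative-Increase Additive-Decrease (MIAD) power control sketch.

    Transmit power is in milliwatts. per_of() is a hypothetical monotone
    packet-error model included only to keep the example self-contained;
    a real node would estimate the PER from RSSI/SINR feedback.
    """
    def per_of(p_mw):
        # Hypothetical model: error rate decays as power grows.
        return 0.5 / (1.0 + 50.0 * p_mw)

    p = p_min_mw
    trace = []
    for _ in range(n_rounds):
        per = per_of(p)
        if per > per_target:
            p = min(p_max_mw, p * factor_up)        # multiplicative increase
        else:
            p = max(p_min_mw, p - step_down_mw)     # additive decrease
        trace.append((round(p, 3), round(per, 3)))
    return trace

if __name__ == "__main__":
    for p, per in miad_power_control()[-5:]:
        print(f"power={p} mW, per={per}")
```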
Optimal rate allocation in a networked control system with highly limited communication resources is instrumental to achieve satisfactory overall performance. In this paper, we propose a rate allocation technique for state feedback control in linear dynamic systems over a noisy channel. Our method consists of two steps: (i) the overall distortion is expressed as a function of rates at all time instants by means of high-rate quantization theory, and (ii) a constrained optimization problem to minimize the overall distortion is solved. We show that a non-uniform quantization is in general the best strategy for state feedback control over noisy channels. Monte Carlo simulations illustrate the proposed scheme, which is shown to have good performance compared to arbitrarily selected rate allocations.
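For orientation, the classical high-rate (high-resolution) quantization approximation that underlies step (i), together with the resulting generic rate allocation problem, is recalled below. The weights w_t standing for the closed-loop sensitivity and the constant c are placeholders; the exact distortion expressions and constraints derived in the paper are not reproduced.

```latex
% High-rate approximation for quantizing a quantity of variance \sigma_t^2
% with R_t bits at time t (c depends on the source and quantizer type):
D_t(R_t) \approx c\,\sigma_t^{2}\,2^{-2R_t},
\qquad
\min_{R_1,\dots,R_T}\; \sum_{t=1}^{T} w_t\, D_t(R_t)
\quad \text{s.t.} \quad \sum_{t=1}^{T} R_t \le R_{\mathrm{tot}},\;\; R_t \ge 0 .
```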
The main contribution of this paper is the implementation and experimental evaluation of three radio power control algorithms for wireless sensor networks. We illustrate the necessity of lightweight radio power control algorithms for the deployment of wireless sensor networks in realistic situations. Furthermore, based on a simple loss model, we develop an algorithm that optimizes the transmit power while guaranteeing a desired packet error probability. The simple power control strategy is also compared with two other strategies in experiments using Tmote Sky sensor nodes. A component-based software implementation in the Contiki operating system is used.
The communication protocol IEEE 802.15.4 is becoming pervasive for low power and low data rate wireless sensor network (WSN) applications, including control and automation. Nevertheless, there is not yet any adequate study of control systems networked by this protocol. In this paper, the stability of IEEE 802.15.4 networked control systems (NCSs) is addressed. While in recent works fundamental results are developed for networks that are abstracted only in terms of packet loss and time delays, here the constraints imposed by the protocol on the feedback channel and the network energy consumption are explicitly considered. A general analysis for linear systems with parameter uncertainty and external bounded disturbances, with control loops closed over IEEE 802.15.4 networks, is proposed. To reduce the number of transmissions and thus save energy, a self-triggered control strategy is used. A sufficient stability condition is given as a function of both the protocol and control parameters. A decentralized algorithm to adapt jointly the self-triggered control and the protocol parameters is proposed. It is concluded that stability is not always guaranteed unless protocol parameters are appropriately tuned, and that event-triggered control strategies may be difficult to use with the current version of IEEE 802.15.4.
To overcome the limitations of specific positioning techniques for mobile wireless nodes and achieve a high accuracy, the fusion of heterogeneous sensor information is an appealing strategy. In this paper, the problem of optimal fusion of ranging information typically provided by Ultra-Wideband radio with speed and absolute orientation information is addressed. A new distributed recursive estimation method is proposed. The method does not assume any motion model of mobile nodes and is based on a Pareto optimization. The challenging part of the new estimator is the characterization of the statistical information needed to model the optimization problem. The proposed estimator is validated by Monte Carlo simulations, and the performance is compared to several Kalman-based filters commonly employed for localization and sensor fusion. Much better performance is achieved, but at the price of an increased computational complexity.
IEEE 802.15.4 multi-hop wireless networks are an important communication infrastructure for many applications, including industrial control, home automation, and smart grids. Existing analyses of the IEEE 802.15.4 medium access control (MAC) protocol are often based on assumptions of homogeneous traffic and ideal carrier sensing, which are far from the reality when predicting performance for multi-hop networks. In this paper, a generalized analysis of the unslotted IEEE 802.15.4 MAC is presented. The model considers heterogeneous traffic and hidden terminals due to limited carrier sensing capabilities, and allows us to investigate jointly IEEE 802.15.4 MAC and routing algorithms. The analysis is validated via Monte Carlo simulations, which show that routing over multi-hop networks is significantly influenced by the IEEE 802.15.4 MAC performance. Routing decisions based on packet loss probability may lead to an unbalanced distribution of the traffic load across paths, thus motivating the need for a joint optimization of routing and MAC.
A system-level design methodology for clustered wireless sensor networks based on a semi-random communication protocol called SERAN is presented. The protocol is grounded in a mathematical model that makes it possible to optimize the protocol parameters, and in a network initialization and maintenance procedure. SERAN is a two-layer (routing and MAC) protocol. At both layers, SERAN combines a randomized and a deterministic approach. While the randomized component provides robustness over unreliable channels, the deterministic component avoids an explosion of packet collisions and allows our protocol to scale with network size. The combined result is high reliability and major energy savings when dense clusters are used. Our solution is based on a mathematical model that characterizes performance accurately without resorting to extensive simulations. Thanks to this model, the user needs only to specify the application requirements in terms of end-to-end packet delay and packet loss probability and select the intended hardware platform; the protocol parameters are then set automatically to satisfy latency requirements and optimize for energy consumption.
Mining ventilation is an interesting example of a large scale system with high environmental impact where advanced control strategies can bring major improvements. Indeed, one of the first objectives of modern mining industry is to fulfill environmental specifications [1] during the ore extraction and crushing, by optimizing the energy consumption or the production of polluting agents. The mine electric consumption was 4 % of total industrial electric demand in the US in 1994 (6 % in 2007 in South Africa) and 90 % of it was related to motor system energy [2]. Another interesting figure is given in [3] where it is estimated that the savings associated with global control strategies for fluid systems (pumps, fans and compressors) represent approximately 20 % of the total manufacturing motor system energy savings. This motivates the development of new control strategies for large scale aerodynamic processes based on appropriate automation and a global consideration of the system. More specifically, the challenge in this work is focused on the mining ventilation since as much as 50 % or more of the energy consumed by the mining process may go into the ventilation (including heating the air). It is clear that investigating automatic control solutions and minimizing the amount of pumped air to save energy consumption (proportional to the cube of airflow quantity [4]) is of great environmental and industrial interest.
We present a novel approach for Medium Access Control (MAC) protocol design based on a protocol engine. The current way of designing MAC protocols for a specific application is based on two steps: first the application specifications (such as network topology and packet generation rate), the requirements for energy consumption, delay, and reliability, and the resource constraints from the underlying physical layer (such as energy consumption and data rate) are specified, and then the protocol that satisfies all these constraints is designed. The main drawback of this procedure is that we have to restart the design process for each possible application, which may be a waste of time and effort. The goal of a MAC protocol engine is to provide a library of protocols together with their analysis such that for each new application the optimal protocol is chosen automatically from its library with optimal parameters. We illustrate the MAC engine idea by including an original analysis of IEEE 802.15.4 unslotted random access and Time Division Multiple Access (TDMA) protocols, and implementing these protocols in the software framework called SPINE, which runs on top of TinyOS and is designed for health care applications. Then we validate the analysis and demonstrate how the protocol engine chooses the optimal protocol under different application scenarios via an experimental implementation.
The novel cross-layer protocol Breath for wireless sensor networks is designed, implemented, and experimentally evaluated. The Breath protocol is based on randomized routing, MAC and duty-cycling, which allow it to minimize the energy consumption of the network while ensuring a desired packet delivery end-to-end reliability and delay. The system model includes a set of source nodes that transmit packets via multi-hop communication to the destination. A constrained optimization problem, for which the objective function is the network energy consumption and the constraints are the packet latency and reliability, is posed and solved. It is shown that the communication layers can be jointly optimized for energy efficiency. The optimal working point of the network is achieved with a simple algorithm, which adapts to traffic variations with negligible overhead. The protocol was implemented on a test-bed with off-the-shelf wireless sensor nodes. It is compared with a standard IEEE 802.15.4 solution. Experimental results show that Breath meets the latency and reliability requirements, and that it exhibits a good distribution of the working load, thus ensuring a long lifetime of the network.
A distributed adaptive algorithm to estimate a time-varying signal, measured by a wireless sensor network, is designed and analyzed. The presence of measurement noises and of packet losses is considered. Each node of the network locally computes adaptive weights that guarantee to minimize the estimation error variance. Decentralized conditions on the weights, which ensure the stability of the estimates throughout the overall network, are also considered. A theoretical performance analysis of the scheme is carried out both in the presence of perfect and lossy links. Numerical simulations illustrate performance for various network topologies and packet loss probabilities.
Remote control over wireless multi-hop networks is considered. Time-varying delays for the transmission of sensor and control data over the wireless network are caused by a randomized multi-hop routing protocol. The characteristics of the routing protocol together with lower-layer network mechanisms give rise to a delay process with high variance and stepwise changing mean. A new predictive control scheme with a delay estimator is proposed in the paper. The estimator is based on a Kalman filter with a change detection algorithm. It is able to track the delay mean changes while efficiently attenuating the high-frequency jitter. The control scheme is analyzed and its implementation detailed. Network data from an experimental setup are used to illustrate the efficiency of the approach.
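A compact sketch of the delay estimator idea described above (a scalar Kalman filter tracking the delay mean, re-initialized by a CUSUM-type change detector when the mean jumps) is given below. All constants, the random-walk model, and the synthetic delay trace are illustrative assumptions, not the values identified from the experimental network data.

```python
import numpy as np

def delay_estimator(delays, q=1e-4, r=0.05, cusum_drift=0.05, cusum_h=1.0):
    """Scalar Kalman filter with a two-sided CUSUM change detector.

    The filter tracks a slowly varying delay mean and attenuates jitter; the
    CUSUM test detects step changes of the mean and re-initializes the filter.
    Parameters are illustrative placeholders.
    """
    x, p = delays[0], 1.0          # state estimate and its variance
    g_pos = g_neg = 0.0            # CUSUM statistics
    estimates = []
    for z in delays:
        # Kalman predict/update for a random-walk mean model.
        p = p + q
        k = p / (p + r)
        innov = z - x
        x = x + k * innov
        p = (1.0 - k) * p
        # Two-sided CUSUM on the innovation sequence.
        g_pos = max(0.0, g_pos + innov - cusum_drift)
        g_neg = max(0.0, g_neg - innov - cusum_drift)
        if g_pos > cusum_h or g_neg > cusum_h:
            x, p, g_pos, g_neg = z, 1.0, 0.0, 0.0   # re-initialize after a jump
        estimates.append(x)
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mean = np.concatenate([0.2 * np.ones(100), 0.6 * np.ones(100)])  # step change
    delays = mean + 0.05 * np.abs(rng.standard_normal(200))
    est = delay_estimator(delays)
    print(round(est[90], 3), round(est[130], 3))
```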
Ensuring a seamless connection when users are moving across radio cells is essential to guarantee a high communication quality. In this paper, the performance of TCP during the handover in a Long Term Evolution (LTE) network is investigated. Specifically, mobile users with high bit rate TCP services are considered, and the impacts of the intra-LTE handover on their perceived throughput are studied. Due to the mobility of the users across radio cells, the high bandwidth required, and possible network congestion, it is shown that the handover may cause sudden degradation of the quality of the communication if the process is not correctly controlled. To alleviate these problems, three solutions are proposed: fast path switch, handover prediction, and active queue management. The first two solutions avoid excessive delay in the packet delivery during the handover, whereas the third solution acts at the transport network with an active queue management. Simulation results, obtained by an extension of the ns-2 simulator, show that the proposed solutions present advantages, and that the handover prediction used with the active queue management increases TCP performance significantly.
A comprehensive performance evaluation of a cross-layer solution to increase users' downlink data rates over HSDPA is provided. The solution consists of a proxy entity between a server and the Radio Network Controller, and cross-layer signalling from the base station to the proxy. The performance of the solution is evaluated through a detailed ns-2 simulator environment, which includes all HSDPA features, as well as some existing TCP enhancing protocols widely adopted for internet traffic over wireless links. Numerical results show that the proxy significantly increases the users' throughput, while also improving the utilization of the radio resources.
A new distributed estimation algorithm for tracking using a wireless sensor network is presented. We investigate how to track a time-varying signal, noisily sensed by the nodes of the network. The algorithm is distributed, meaning that it does not require central coordination among the nodes. Moreover, the proposed approach is scalable with respect to the network size, which means that its complexity does not grow with the total number of nodes. The designed algorithm turns out to have a cascade structure. Local constraints are determined to guarantee the global asymptotic stability of the estimation error. The algorithm can be applied, e.g., to position estimation, temporal synchronization, as well as tracking of signals. Performance is illustrated by simulations, where our filter is shown to behave better than other distributed schemes proposed in the literature.
We characterize the maximum throughput achievable on the uplink of a power-controlled WCDMA wireless system with variable spreading factor. Our system model includes the multiple-access interference caused by users with heterogeneous data sources, and quality of service expressed in terms of outage probability. Inner-loop and outer-loop power control mechanisms are also explicitly taken into account. We express the system throughput as the sum of the users' transmission rates, and propose a mixed-integer optimization program whose objective function is the sum of the rates under outage probability constraints. We then solve the optimization problem with an efficient two-step approach: first, a modified problem provides feasible solutions; then, the optimal solution is obtained by branch and bound. Numerical results confirm the validity of our approach, and show how the throughput depends on power control fluctuations, the activity of the sources, and the quality of service.
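In our notation (not necessarily the paper's), the throughput maximization problem has the following general form, with $r_k$ the rate of user $k$ drawn from the discrete set allowed by the variable spreading factor and $\epsilon_k$ the outage probability target:

\begin{align}
\max_{r_1,\dots,r_K} \;& \sum_{k=1}^{K} r_k \\
\text{s.t.}\;& \Pr\{\mathrm{SINR}_k < \gamma(r_k)\} \le \epsilon_k, \qquad k = 1,\dots,K, \\
& r_k \in \mathcal{R} = \{R_{\min}, 2R_{\min}, \dots, R_{\max}\},
\end{align}

where $\gamma(r_k)$ denotes the SINR threshold associated with rate $r_k$. The discrete rate set makes the program mixed-integer, which motivates the two-step, branch-and-bound solution described above.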
Time-critical applications for wireless sensor networks (WSNs) are an important class of services supported by the IEEE 802.15.4 standard. Control, actuation, and monitoring are all examples of applications where information must be delivered within some deadline. Understanding the delay in packet delivery is fundamental to assessing the performance limitations of the standard. In this paper we analyze the guaranteed time slot (GTS) allocation mechanism used in IEEE 802.15.4 networks for time-critical applications. Specifically, we propose a Markov chain to model the stability, delay, and throughput of GTS allocation, and we analyze the impact of the protocol parameters on these performance indexes. Monte Carlo simulations show that the theoretical analysis is quite accurate; thus, our analysis can be used to design efficient GTS allocation for IEEE 802.15.4.
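The Markov-chain machinery can be illustrated with a toy example: the sketch below computes the stationary distribution of a small chain over the number of pending GTS requests and derives the average backlog from it. The transition probabilities are placeholders, not the values derived in the paper, where they would follow from the traffic and protocol parameters (superframe length, number of GTS slots, arrival rate).

import numpy as np

# Toy 3-state chain over the number of pending GTS requests (0, 1, 2).
P = np.array([[0.7, 0.3, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.5, 0.5]])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

mean_backlog = pi @ np.arange(3)   # average number of pending GTS requests
print(pi, mean_backlog)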
Motivated by a peer-to-peer estimation algorithm in which adaptive weights are optimized to minimize the estimation error variance, we formulate and solve a novel non-convex Lipschitz optimization problem that guarantees global stability of a large class of peer-to-peer consensus-based algorithms for wireless sensor networks. Because of packet losses, the solution of this optimization problem cannot be achieved efficiently with either traditional centralized methods or distributed Lagrangian message passing. We prove that the optimal solution can be obtained by solving a set of nonlinear equations. A fast distributed algorithm, which requires only local computations, is presented for solving these equations. Analysis and computer simulations illustrate the algorithm and its application to various network topologies.
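As a sketch of the "local computations only" structure, the snippet below runs a Jacobi-style iteration in which each node repeatedly re-applies its own update given the latest values received from its neighbours. The local map g_local is a generic placeholder for a node's nonlinear equation and is not taken from the paper; convergence depends on properties of that map which the paper establishes and this sketch simply assumes.

import numpy as np

def distributed_fixed_point(A, g_local, x0, iters=200, tol=1e-9):
    """Jacobi-style iteration: node i updates x_i = g_local(i, x_i, x_nbrs).

    A       : boolean adjacency matrix
    g_local : callable implementing each node's local update rule
    x0      : initial values, one per node
    """
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x_new = np.array([g_local(i, x[i], x[A[i]]) for i in range(len(x))])
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x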
Accurate analytical expressions of the delay, the packet reception probabilities, and the energy consumption of duty-cycled wireless sensor networks with random medium access control (MAC) are instrumental for the efficient design and optimization of these resource-constrained networks. Given a clustered network topology with unslotted IEEE 802.15.4 and a preamble-sampling MAC, a novel approach to modeling the delay, reliability, and energy consumption is proposed. The challenging part of such modeling is the random MAC and the sleep policy of the receivers, which prevents establishing the exact time of data packet transmission. The analysis gives expressions as functions of the sleep time, listening time, traffic rate, and MAC parameters. The analytical results are then used to optimize the duty cycle of the nodes and the MAC protocol parameters. The approach provides a significant reduction of the energy consumption compared to existing solutions in the literature. Monte Carlo simulations in ns-2 assess the validity of the analysis.
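The duty-cycle trade-off can be illustrated with a deliberately simplified power model: longer sleep intervals reduce idle listening but force the sender of a preamble-sampling MAC to transmit a preamble spanning a full wake-up period, so transmission cost grows. The power figures, timing constants, and the model itself are illustrative assumptions, not the expressions derived in the paper.

import numpy as np

# Illustrative per-state power draw (W) and timing for a preamble-sampling node.
P_LISTEN, P_SLEEP, P_TX = 60e-3, 3e-6, 55e-3
T_LISTEN, LAMBDA = 2e-3, 0.5           # listen time (s), packet rate (pkt/s)

def avg_power(t_sleep):
    """Average power as a function of the sleep interval (the duty-cycle knob)."""
    cycle = T_LISTEN + t_sleep
    duty = T_LISTEN / cycle
    # Preamble must cover one full wake-up period of the receiver.
    tx_energy_per_pkt = P_TX * cycle
    return P_LISTEN * duty + P_SLEEP * (1 - duty) + LAMBDA * tx_energy_per_pkt

sleeps = np.linspace(0.01, 2.0, 500)
best = sleeps[np.argmin([avg_power(t) for t in sleeps])]
print(f"sleep interval minimizing average power: {best:.3f} s")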
A new control structure is proposed to improve user experience of wireless Internet. Information on radio bandwidth and queue length available in the radio network, close to the base station, is used in a proxy that resides between the Internet and the cellular system. The control algorithm in the proxy sets the window size according to event-triggered information on radio bandwidth changes and time-triggered information on the queue length at the wireless link. The mechanism is compared to TCP Reno in two simulation scenarios. The first scenario models a dedicated channel with stepwise changes in the bandwidth, while the second scenario models the High-speed Downlink Shared Channel recently introduced by 3GPP. The proposed mechanism significantly reduces the amount of buffer space needed in the radio network, and it also gives modest improvements to user response time and link utilization. Reduced buffering is particularly beneficial for third-party end-to-end real-time services such as voice, video, and online gaming.
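A minimal sketch of how such a proxy could set the window from the radio-level information it receives; the bandwidth-delay-product rule, the queue target, and all parameter names are illustrative assumptions rather than the control law of the paper.

def proxy_window(bandwidth_bps, rtt_s, queue_len_pkts, mss=1500, target_queue=5):
    """Window (in packets) set by the proxy from radio bandwidth and queue length.

    The bandwidth-delay product keeps the radio link filled; the correction
    term drains (or allows) queueing at the wireless link.
    """
    bdp_pkts = bandwidth_bps * rtt_s / (8 * mss)
    window = bdp_pkts + (target_queue - queue_len_pkts)
    return max(1, int(window))

# Called event-triggered on bandwidth-change reports and time-triggered on
# periodic queue-length reports from the radio network.
w = proxy_window(bandwidth_bps=2e6, rtt_s=0.1, queue_len_pkts=12)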
A theoretical framework is proposed for accurate performance analysis of minimum energy coding schemes in Code Division Multiple Access (CDMA) wireless sensor networks. The bit error rate and the average energy consumption are analyzed for two coding schemes proposed in the literature: Minimum Energy coding (ME) and Modified Minimum Energy coding (MME). Since CDMA wireless systems are strongly limited by multiple-access interference, the system model includes all the relevant characteristics of the wireless propagation. Furthermore, a detailed model of the energy consumption is described as a function of the coding schemes, the radio transmit powers, the characteristics of the transceivers, and the dynamics of the wireless channel. A distributed radio power minimization algorithm is also addressed. Numerical results show that ME and MME coding schemes exhibit similar bit error probabilities, whereas MME outperforms ME only in the case of low data rates and long codewords.
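The basic idea behind minimum energy coding is to map source words onto longer codewords containing as few high-energy "1" symbols as possible. The sketch below builds such a codebook generically by assigning codewords in order of increasing Hamming weight; it is a generic ME-style construction, and the exact ME and MME schemes analyzed in the paper may differ in detail.

from itertools import combinations

def me_codebook(k, n):
    """Map each k-bit source word to an n-bit codeword of minimal Hamming weight."""
    assert n >= k
    codewords = []
    weight = 0
    while len(codewords) < 2 ** k:
        for ones in combinations(range(n), weight):
            cw = ['0'] * n
            for pos in ones:
                cw[pos] = '1'
            codewords.append(''.join(cw))
            if len(codewords) == 2 ** k:
                break
        weight += 1
    return {format(i, f'0{k}b'): cw for i, cw in enumerate(codewords)}

# Example: 2-bit source words mapped to 4-bit low-weight codewords.
print(me_codebook(2, 4))  # {'00': '0000', '01': '1000', '10': '0100', '11': '0010'}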
In this paper, a lognormal approximation is proposed for the sum of lognormal processes weighted by binary processes. The analytical approach builds on the method originally proposed by Wilkinson for approximating the first-order statistics of a sum of lognormal components, and extends it to incorporate second-order statistics and the presence of both time-correlated random binary weights and cross-correlated lognormal components in the moment matching. Since the sum of weighted lognormal processes models the signal-to-interference-plus-noise ratio (SINR) of wireless systems, the method can be applied to evaluate effectively and accurately the outage occurrence rate and outage duration of different wireless systems of practical interest. For a frequency-reuse-based cellular system, the method is applied to various propagation scenarios, characterized by different shadowing correlation decay distances and correlations among the shadowing components. A further case of relevant interest is power-controlled wideband wireless systems, where the random weights are binary random variables denoting the activity status of each interfering source. Finally, simulation results are used to confirm the validity of the analysis.
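For reference, the classic first-order Wilkinson step is sketched below for independent, unweighted components: a single lognormal is fitted to the sum by matching its first two moments. The paper's extensions to correlated components, binary weights, and second-order statistics are not reproduced here, and the function and variable names are ours.

import numpy as np

def wilkinson_params(mu, sigma):
    """Fit one lognormal to the sum of independent lognormal components.

    mu, sigma : arrays of the natural-log-domain means and standard deviations
                of each component.
    Returns the (mu_s, sigma_s) of the fitted lognormal obtained by matching
    E[S] and E[S^2].
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m1 = np.sum(np.exp(mu + sigma ** 2 / 2))              # E[S]
    m2 = (np.sum(np.exp(2 * mu + 2 * sigma ** 2))         # sum of E[L_i^2]
          + m1 ** 2 - np.sum(np.exp(2 * mu + sigma ** 2)))  # cross terms (independence)
    sigma_s2 = np.log(m2 / m1 ** 2)
    mu_s = np.log(m1) - sigma_s2 / 2
    return mu_s, np.sqrt(sigma_s2)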