Random Access Control in NB-IoT with Model-Based Reinforcement Learning

In NB-IoT, the cell can be divided into up to three coverage enhancement (CE) levels, each associated with a narrowband Physical Random Access Channel (NPRACH) with a CE level-specific configuration. Allocating more resources to the NPRACHs increases the success rate of the random access procedure but takes resources away from other transmissions on the uplink carrier. To effectively address this trade-off, we propose to adjust the NPRACH parameters along with the power thresholds that determine the CE levels, which makes it possible to simultaneously control the traffic distribution between CE levels and the resources allocated to each one. Since the traffic is dynamic and random, reinforcement learning (RL) is a suitable approach for finding an optimal control policy, but its inherent sample inefficiency is a drawback for online learning in an operational network. To overcome this issue, we propose a new model-based RL algorithm that achieves high sample efficiency even in the early stages of learning.
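
A minimal sketch of the kind of control loop the abstract describes: the configuration (NPRACH periodicity per CE level plus the RSRP thresholds that split traffic into CE levels) is chosen by planning on a simple learned model of access success versus uplink resource cost. The candidate values, the collision model, and the reward weighting below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical, simplified action space: per-CE-level NPRACH periodicity (ms)
# and the two RSRP thresholds (dBm) that split traffic into CE levels.
PERIODS = [40, 80, 160, 320, 640]          # candidate NPRACH periodicities
THRESHOLDS = [(-105, -115), (-110, -120)]  # candidate (CE0/CE1, CE1/CE2) RSRP splits

def candidate_actions():
    """Enumerate joint configurations: one periodicity per CE level + thresholds."""
    for p0 in PERIODS:
        for p1 in PERIODS:
            for p2 in PERIODS:
                for thr in THRESHOLDS:
                    yield (p0, p1, p2, thr)

class RandomAccessModel:
    """Toy learned model: predicts access success and uplink resource cost
    from the NPRACH configuration and the recent traffic estimate."""
    def __init__(self):
        self.arrival_rate = 1.0  # running estimate of arrivals per second

    def update(self, observed_arrivals, interval_s):
        self.arrival_rate = 0.9 * self.arrival_rate + 0.1 * observed_arrivals / interval_s

    def predict(self, action):
        p0, p1, p2, _ = action
        # More frequent NPRACH occasions -> fewer collisions but more uplink overhead.
        preambles_per_s = sum(48.0 * 1000.0 / p for p in (p0, p1, p2))
        load = self.arrival_rate / max(preambles_per_s, 1e-9)
        success = np.exp(-load)                              # crude collision model
        cost = sum(0.05 * 1000.0 / p for p in (p0, p1, p2))  # resource-fraction proxy
        return success - 0.5 * cost                          # reward = success minus weighted cost

def plan(model):
    """Model-based step: pick the configuration the learned model scores highest."""
    return max(candidate_actions(), key=model.predict)

if __name__ == "__main__":
    model = RandomAccessModel()
    model.update(observed_arrivals=200, interval_s=10)
    print("chosen NPRACH configuration:", plan(model))
```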

Transmission power allocation in flow-guided nanocommunication networks

Flow-guided electromagnetic nanonetworks hold tremendous potential for transformative medical applications, enabling monitoring, information gathering, and data transmission within the human body. Operating inside the human vascular system under stringent computational and power constraints, these nanonetworks face significant hurdles. Successful transmissions between in-body nanonodes and on-body nanorouters are infrequent, requiring novel approaches to enhance network throughput under such circumstances. Traditional flow-guided nanonetworks rely on nanonodes that transmit packets whenever they possess sufficient energy, irrespective of their proximity to the nanorouter. In this paper, we present an extended model for legacy flow-guided nanonetworks that offers substantial throughput improvements while reducing the required number of nanonodes compared to the baseline blind transmission approach. By allocating transmission energy so that more than one transmission is possible during a charging cycle, our proposed model significantly enhances network throughput, facilitating the deployment of nanocommunication-supported medical applications. For example, with only two transmissions per cycle, throughput can be increased by around 46% with the same number of nanonodes or, equivalently, the number of nanonodes can be reduced by a similar proportion while maintaining the same throughput.
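
The core idea can be illustrated with a toy Monte Carlo model: split the energy harvested in one charging cycle across several transmission attempts made at random points of the node's circulation. All constants below (harvested energy, minimum decodable energy, in-range probability) are assumptions, and the model ignores correlations between attempts and other effects, so its predicted gain will differ from the roughly 46% reported in the paper.

```python
import random

# Illustrative parameters (assumptions, not from the paper): a nanonode stores
# E_MAX units of energy per charging cycle and spends E_MAX / num_tx per attempt.
E_MAX = 1.0          # energy harvested per charging cycle (normalized)
P_IN_RANGE = 0.05    # probability the node is near the nanorouter when it transmits

def cycle_throughput(num_tx, trials=100_000):
    """Expected successful packets per charging cycle when the stored energy
    is split across num_tx transmission attempts at random circulation points."""
    e_per_tx = E_MAX / num_tx
    if e_per_tx < 0.2:          # assumed minimum energy for a decodable packet
        return 0.0
    successes = 0
    for _ in range(trials):
        # each attempt succeeds only if the node happens to be in router range
        successes += sum(random.random() < P_IN_RANGE for _ in range(num_tx))
    return successes / trials

if __name__ == "__main__":
    single = cycle_throughput(1)
    double = cycle_throughput(2)
    print(f"1 tx/cycle: {single:.4f}  2 tx/cycle: {double:.4f}  "
          f"relative gain: {100 * (double / single - 1):.1f}%")
```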

Dynamic transmission policy for enhancing LoRa network performance: A deep reinforcement learning approach

Long Range (LoRa) communications, operating through the LoRaWAN protocol, have received increasing attention from the low-power and wide-area network communities. Efficient energy consumption and reliable communication performance are critical aspects of LoRa-based applications. However, current scientific literature tends to focus on minimizing energy consumption while disregarding channel changes affecting communication performance. Other works attain appropriate communication performance without adequately considering energy expenditure. To fill this gap, we propose a novel solution to maximize the energy efficiency of devices while considering the desired network performance. This is done using a maximum allowed Bit Error Rate (BER) that can be specified by users and applications. We characterize this problem as a Markov Decision Process and solve it using Deep Reinforcement Learning to dynamically and quickly select the transmission parameters that jointly satisfy energy and performance requirements over time. Moreover, we support different payload sizes, ensuring suitability for applications with varying packet lengths. The proposed selection of parameters is evaluated in three different scenarios by comparing it with the traditional Adaptive Data Rate (ADR) mechanism of LoRaWAN. The first scenario involves static nodes with varying BER requirements. The second one realistically simulates urban environments with mobile nodes and fluctuating channel conditions. Finally, the third scenario studies the proposed solution under dynamic frame payload length variations. These scenarios cover a wide range of operational conditions to ensure a comprehensive evaluation. The results of our experiments demonstrate that our proposal achieves a 60% improvement in performance metrics over the default ADR mechanism.
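
A minimal sketch of how the transmission-parameter selection can be framed: the action space is the set of (spreading factor, transmit power) pairs, and the reward trades energy per delivered byte against the user-specified BER limit. The BER proxy, time-on-air model, and constants are assumptions for illustration; in the paper's approach a deep RL agent learns this decision over channel states, whereas the snippet only shows a greedy choice over the assumed reward.

```python
import itertools
import math

# Candidate LoRa transmission parameters: spreading factors and TX powers (dBm).
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]
TX_POWERS_DBM = [2, 5, 8, 11, 14]
ACTIONS = list(itertools.product(SPREADING_FACTORS, TX_POWERS_DBM))

BER_TARGET = 1e-3   # maximum BER allowed by the application (assumed value)

def estimated_ber(sf, tx_power_dbm, snr_db):
    """Very rough BER proxy: higher SF and TX power improve the effective SNR."""
    effective_snr_db = snr_db + tx_power_dbm + 2.5 * (sf - 7)
    return 0.5 * math.erfc(math.sqrt(10 ** (effective_snr_db / 10)) / math.sqrt(2))

def reward(sf, tx_power_dbm, snr_db, payload_bytes):
    """Negative energy per delivered byte, heavily penalized if the BER target is missed."""
    if estimated_ber(sf, tx_power_dbm, snr_db) > BER_TARGET:
        return -10.0                                          # constraint violation
    airtime_s = payload_bytes * 8 * (2 ** sf) / 125_000       # coarse time-on-air model
    energy_j = (10 ** (tx_power_dbm / 10)) * 1e-3 * airtime_s
    return -energy_j / payload_bytes

if __name__ == "__main__":
    snr_db, payload_bytes = -6.0, 20
    best = max(ACTIONS, key=lambda a: reward(*a, snr_db, payload_bytes))
    print("greedy (SF, TX power) under the current channel:", best)
```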

Transmission Control in NB-IoT With Model-Based Reinforcement Learning

In Narrowband Internet of Things (NB-IoT), the control of uplink transmissions is a complex task involving device scheduling, resource allocation in the carrier, and the configuration of link-adaptation parameters. Existing heuristic proposals address the problem only partially, and reinforcement learning (RL) appears, a priori, to be the most effective approach, given its success in similar control problems. However, the low sample efficiency of conventional (model-free) RL algorithms is an important limitation for their deployment in real systems. During their initial learning stages, RL agents need to explore the policy space by selecting actions that are, in general, highly ineffective. In an NB-IoT access network, this implies a disproportionate increase in transmission delays. In this paper, we make two contributions to enable the adoption of RL in NB-IoT. First, we present a multi-agent architecture based on the principle of task division. Second, we propose a new model-based RL algorithm for link adaptation characterized by its high sample efficiency. The combination of these two strategies results in an algorithm that, during the learning phase, is able to keep the transmission delay in the order of hundreds of milliseconds, whereas model-free RL algorithms cause delays of up to several seconds. This allows our approach to be deployed, without prior training, in an operating NB-IoT network and to learn to control it efficiently without degrading its performance.
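
A minimal sketch of the sample-efficiency idea behind a model-based link-adaptation agent: instead of learning action values by pure trial and error, the agent maintains an explicit model of transmission outcomes, here a Beta posterior over the success probability of each (MCS, repetitions) pair, and plans its next configuration against that model. The action granularity, timing constants, and delay model are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical link-adaptation action space: (MCS index, number of repetitions).
MCS_LEVELS = range(0, 11)
REPETITIONS = [1, 2, 4, 8, 16, 32]
ACTIONS = [(m, r) for m in MCS_LEVELS for r in REPETITIONS]

class OutcomeModel:
    """Model-based RL core: a Beta posterior over the success probability of
    each (MCS, repetitions) pair, used to plan the next configuration."""
    def __init__(self):
        self.alpha = {a: 1.0 for a in ACTIONS}   # prior successes
        self.beta = {a: 1.0 for a in ACTIONS}    # prior failures

    def update(self, action, success):
        if success:
            self.alpha[action] += 1.0
        else:
            self.beta[action] += 1.0

    def expected_delay_ms(self, action):
        """Expected delivery time, counting retransmissions as geometric attempts."""
        mcs, reps = action
        p = self.alpha[action] / (self.alpha[action] + self.beta[action])
        tx_time_ms = reps * (12 - mcs)           # illustrative per-attempt duration
        return tx_time_ms / max(p, 1e-3)

    def plan(self):
        """Pick the configuration with the lowest predicted delivery delay."""
        return min(ACTIONS, key=self.expected_delay_ms)

if __name__ == "__main__":
    model = OutcomeModel()
    # feed a few observed outcomes, then plan the next link-adaptation decision
    model.update((5, 2), success=True)
    model.update((8, 1), success=False)
    print("next (MCS, repetitions):", model.plan())
```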
