IEEE Access (Jan 2024)
Accelerating Network Resource Allocation in LoRaWAN via Distributed Big Data Computing
Abstract
LoRaWAN is a low-power wide-area networking infrastructure for the Internet of Things (IoT) with a centralized architecture in which a single node, the network server, handles all data collection and network-management decisions. Given the proliferation and widespread adoption of IoT devices, it is essential to incorporate Big Data paradigms at the network server to efficiently manage the enormous volumes of data. In this paper, we introduce a distributed, high-performance methodology for resource allocation in dense LoRaWAN networks, addressing the scalability issues that arise when processing large amounts of information from IoT devices, such as radio link quality. Our contributions establish the groundwork for a distributed implementation of the EXPLORA-C allocation strategy, capable of operating efficiently in large-scale networks. We present two approaches to this distributed implementation: a Multi-Thread (MT) scheme and a Fully-Distributed (FD) scheme. Furthermore, we demonstrate the feasibility of the distributed implementation on top of NebulaStream, a stream-based end-to-end data management platform. To validate the proposed approach, we use our co-simulation framework, EXPLoSIM, in which the distributed implementation is fed with data from a simulated LoRaWAN network. This validation shows significant reductions in execution time and latency, along with improved scalability. Additionally, we generalize the concept by decomposing a centralized data aggregation scheme into a chain of stream-processing operators, which can be dynamically allocated across device, Edge, and Cloud levels. In the best case, our approach improves metrics such as execution time and data reduction by over 90% compared to the centralized operation.
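As a minimal illustration of the operator-chain decomposition mentioned above (not the NebulaStream API nor the paper's actual EXPLORA-C implementation), the sketch below assumes a simple per-device link-quality stream; the class names FilterOp and WindowAverageOp and the sample data are hypothetical and serve only to show how a centralized aggregation can be split into stages that could be placed at different tiers.

```python
# Hypothetical sketch: a centralized aggregation decomposed into a chain of
# stream operators that could be allocated to device, Edge, or Cloud level.
# Operator names and data are illustrative; they do not reflect the
# NebulaStream API or the paper's implementation.

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class LinkSample:
    device_id: int
    rssi_dbm: float  # received signal strength of an uplink frame

class FilterOp:
    """Drop samples below a quality threshold (could run on the device or gateway)."""
    def __init__(self, min_rssi_dbm: float):
        self.min_rssi_dbm = min_rssi_dbm

    def process(self, stream: Iterable[LinkSample]) -> Iterator[LinkSample]:
        return (s for s in stream if s.rssi_dbm >= self.min_rssi_dbm)

class WindowAverageOp:
    """Aggregate per-device RSSI over a count-based window (could run at the Edge)."""
    def __init__(self, window_size: int):
        self.window_size = window_size

    def process(self, stream: Iterable[LinkSample]) -> Iterator[tuple[int, float]]:
        buffer: dict[int, list[float]] = {}
        for s in stream:
            buffer.setdefault(s.device_id, []).append(s.rssi_dbm)
            if len(buffer[s.device_id]) == self.window_size:
                yield s.device_id, sum(buffer[s.device_id]) / self.window_size
                buffer[s.device_id].clear()

# Chaining the operators reproduces the centralized aggregation, but each stage
# can now be assigned to a different tier of the device-Edge-Cloud hierarchy.
if __name__ == "__main__":
    samples = [LinkSample(1, -120.0), LinkSample(1, -95.0), LinkSample(1, -90.0),
               LinkSample(2, -80.0), LinkSample(2, -85.0)]
    filtered = FilterOp(min_rssi_dbm=-110.0).process(samples)
    for device_id, avg in WindowAverageOp(window_size=2).process(filtered):
        print(f"device {device_id}: mean RSSI {avg:.1f} dBm")
```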
Keywords