Computation Offloading

Dynamic resource optimization and altitude selection in UAV-based multi-access edge computing

The aim of this work is to develop a dynamic optimization strategy for allocating communication and computation resources in a Multi-access Edge Computing (MEC) scenario where Unmanned Aerial Vehicles (UAVs) act as flying base station platforms endowed with computation capabilities to provide edge cloud services on demand. Hinging on stochastic optimization tools, we propose a dynamic algorithmic framework that minimizes the overall energy spent by the system while meeting latency constraints and optimizing the UAV altitude in an online fashion.
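As a rough illustration of the per-slot decision such a framework has to make, the following Python sketch greedily selects a UAV altitude and transmit power by trading transmit energy against a queue-backlog proxy for latency, in the spirit of drift-plus-penalty stochastic optimization. It is not the paper's algorithm: the channel model, the candidate altitudes and powers, and the weight V are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): per-slot greedy choice of
# UAV altitude and transmit power that trades transmit energy against a
# queue-backlog proxy for latency, in the spirit of stochastic
# (drift-plus-penalty) optimization. All models and constants are assumptions.
import math

BANDWIDTH_HZ = 1e6
NOISE_W = 1e-13
V = 1e-4            # energy/latency trade-off weight (assumed)

def rate_bps(altitude_m, ground_dist_m, tx_power_w):
    """Shannon rate under a simple free-space path-loss model (assumed)."""
    d = math.hypot(altitude_m, ground_dist_m)
    gain = 1e-4 / d**2                      # toy channel gain
    return BANDWIDTH_HZ * math.log2(1 + tx_power_w * gain / NOISE_W)

def per_slot_decision(queue_bits, ground_dist_m,
                      altitudes=(50, 100, 150, 200),
                      powers=(0.1, 0.5, 1.0), slot_s=1e-2):
    """Pick (altitude, power) minimizing V*energy - backlog*served_bits."""
    best, best_cost = None, float("inf")
    for h in altitudes:
        for p in powers:
            served = min(queue_bits, rate_bps(h, ground_dist_m, p) * slot_s)
            cost = V * p * slot_s - queue_bits * served
            if cost < best_cost:
                best, best_cost = (h, p, served), cost
    return best

if __name__ == "__main__":
    print(per_slot_decision(queue_bits=5e5, ground_dist_m=300.0))
```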

Optimal association of mobile users to multi-access edge computing resources

Multi-access edge computing (MEC) plays a key role in fifth-generation (5G) networks by bringing cloud functionalities to the edge of the radio access network, in close proximity to mobile users. In this paper, we focus on mobile-edge computation offloading, a way to transfer computationally demanding, latency-critical applications from mobile handsets to nearby MEC servers, in order to reduce latency and/or energy consumption.

Network energy efficient mobile edge computing with reliability guarantees

This paper proposes a novel algorithmic solution for dynamic computation offloading, aimed at reducing the energy consumption of a mobile network endowed with multi-access edge computing. The dynamic evolution of the system is modeled through three queues: a local queue at the user side, a computation queue at the edge server, and a queue of results at the network access point. The optimization problem is cast as the minimization of the long-term average energy consumption of the whole system, comprising user devices, servers, and access points.
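The three-queue model can be pictured with the minimal per-slot simulation below, which tracks a local transmission queue at the user, a computation queue at the edge server, and a result queue at the access point. The arrival statistics, service rates, and the ratio of result bits to input bits are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed parameters, not the paper's model details): per-slot
# update of the three queues -- a local transmission queue at the user, a
# computation queue at the edge server, and a result queue at the access point.
import random

def simulate(slots=1000, arrival_mean_bits=2e4,
             uplink_bits=3e4, cpu_bits=2.5e4, downlink_bits=3e4,
             results_per_input_bit=0.1, seed=0):
    random.seed(seed)
    q_local = q_comp = q_result = 0.0
    for _ in range(slots):
        # new task bits generated at the user (assumed exponential arrivals)
        q_local += random.expovariate(1.0 / arrival_mean_bits)
        # uplink: bits leave the local queue and enter the computation queue
        tx = min(q_local, uplink_bits)
        q_local -= tx
        q_comp += tx
        # edge CPU: processed bits produce (smaller) results for the downlink
        proc = min(q_comp, cpu_bits)
        q_comp -= proc
        q_result += proc * results_per_input_bit
        # downlink: results delivered back to the user
        q_result -= min(q_result, downlink_bits)
    return q_local, q_comp, q_result

if __name__ == "__main__":
    print("final backlogs (bits):", simulate())
```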

Latency-constrained dynamic computation offloading with energy harvesting IoT devices

In this paper, we address the problem of dynamic computation offloading in Multi-Access Edge Computing (MEC), considering an Internet of Things (IoT) environment where computation requests are continuously generated locally at each device and handled through dynamic queue systems. In this context, we consider simple devices (e.g., sensors) with limited battery and energy harvesting capabilities.
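The following sketch illustrates the kind of per-slot dynamics this setting involves: an IoT device with a task queue, a finite battery, and random energy harvesting can only offload as many bits as its residual energy allows. The linear energy model and all numerical values are assumptions made for illustration.

```python
# Hedged sketch of energy-harvesting device dynamics: a task queue, a finite
# battery recharged by random harvesting, and offloading limited jointly by
# the queue, the link, and the residual energy. All numbers are assumptions.
import random

def device_step(queue_bits, battery_j, *,
                arrival_mean_bits=1e4, harvest_mean_j=0.02,
                battery_cap_j=1.0, energy_per_bit_j=1e-6, max_tx_bits=2e4):
    queue_bits += random.expovariate(1.0 / arrival_mean_bits)   # new sensor data
    battery_j = min(battery_cap_j,
                    battery_j + random.expovariate(1.0 / harvest_mean_j))
    # offload as much as the queue, the link, and the battery jointly allow
    tx_bits = min(queue_bits, max_tx_bits, battery_j / energy_per_bit_j)
    return queue_bits - tx_bits, battery_j - tx_bits * energy_per_bit_j

if __name__ == "__main__":
    random.seed(1)
    q, b = 0.0, 0.5
    for _ in range(500):
        q, b = device_step(q, b)
    print(f"queue = {q:.0f} bits, battery = {b:.3f} J")
```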

Dynamic joint resource allocation and user assignment in multi-access edge computing

Multi-Access Edge Computing (MEC) is one of the key technology enablers of the 5G ecosystem, in combination with the high-speed access provided by mmWave communications. In this paper, among all the services enabled by MEC, we focus on computation offloading, devising an algorithm that optimizes computation and communication resources jointly with the assignment of mobile users to Access Points and Mobile Edge Hosts, in a dynamic scenario where computation tasks are continuously generated at each user according to (unknown) random arrival processes.
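As a purely illustrative counterpart of the joint problem, the sketch below greedily assigns each user to an (Access Point, Mobile Edge Host) pair by minimizing a toy cost that combines a transmission-energy term with the congestion of the chosen pair. The cost model, rates, and CPU capacities are hypothetical and do not reproduce the paper's optimization.

```python
# Illustrative greedy sketch (not the paper's algorithm): at each slot, assign
# every user to the (Access Point, Mobile Edge Host) pair with the lowest
# assumed energy-plus-congestion cost, largest backlogs first.
from itertools import product

def assign_users(user_backlogs, ap_rates, meh_cpu, energy_weight=1e-3):
    """Return {user: (ap, meh)} chosen greedily, largest backlogs first."""
    load = {(a, m): 0.0 for a, m in product(ap_rates, meh_cpu)}
    assignment = {}
    for user, backlog in sorted(user_backlogs.items(),
                                key=lambda kv: -kv[1]):
        def cost(pair):
            a, m = pair
            # toy cost: transmission "energy" + congestion on the chosen pair
            return energy_weight * backlog / ap_rates[a] \
                   + (load[pair] + backlog) / meh_cpu[m]
        best = min(load, key=cost)
        load[best] += backlog
        assignment[user] = best
    return assignment

if __name__ == "__main__":
    users = {"u1": 4e5, "u2": 1e5, "u3": 2.5e5}
    print(assign_users(users, ap_rates={"AP1": 5e6, "AP2": 8e6},
                       meh_cpu={"MEH1": 1e7, "MEH2": 6e6}))
```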

Dynamic resource allocation for wireless edge machine learning with latency and accuracy guarantees

In this paper, we address the problem of dynamic allocation of communication and computation resources for Edge Machine Learning (EML) exploiting Multi-Access Edge Computing (MEC). In particular, we consider an IoT scenario, where sensor devices collect data from the environment and upload them to an edge server that runs a learning algorithm based on Stochastic Gradient Descent (SGD). The aim is to explore the optimal tradeoff between the overall system energy consumption, including IoT devices and edge server, the overall service latency, and the learning accuracy.
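A minimal sketch of the edge-side learning loop is given below, assuming a least-squares model trained by SGD on batches uploaded by the devices; the batch size is the knob that couples accuracy with per-slot uplink traffic (and hence latency and energy). The data model, step size, and 32-bit sample encoding are assumptions for illustration only.

```python
# Minimal sketch of edge machine learning with SGD: devices upload data
# batches, the edge server takes one SGD step per slot. Larger batches give
# better gradient estimates at the price of more uplink bits per slot.
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([2.0, -1.0, 0.5])         # assumed ground-truth model

def device_upload(batch_size):
    """Simulated sensor batch: features and noisy linear observations."""
    X = rng.normal(size=(batch_size, 3))
    y = X @ TRUE_W + 0.1 * rng.normal(size=batch_size)
    return X, y

def edge_sgd(slots=200, batch_size=16, lr=0.05):
    w = np.zeros(3)
    uplink_bits = 0
    for _ in range(slots):
        X, y = device_upload(batch_size)
        uplink_bits += X.size * 32                 # assumed 32-bit samples
        grad = 2 * X.T @ (X @ w - y) / batch_size  # least-squares gradient
        w -= lr * grad
    return w, uplink_bits

if __name__ == "__main__":
    for bs in (4, 32):
        w, bits = edge_sgd(batch_size=bs)
        err = np.linalg.norm(w - TRUE_W)
        print(f"batch={bs:3d}  error={err:.3f}  uplink bits={bits}")
```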

Dynamic computation offloading in multi-access edge computing via ultra-reliable and low-latency communications

The goal of this work is to propose an energy-efficient algorithm for dynamic computation offloading, in a multi-access edge computing scenario, where multiple mobile users compete for a common pool of radio and computational resources. We focus on delay-critical applications, incorporating an upper bound on the probability that the overall time required to send the data and process them exceeds a prescribed value. In a dynamic setting, the above constraint translates into preventing the sum of the communication and computation queues' lengths from exceeding a given value.
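The sketch below illustrates this constraint translation in a toy setting: an admission-control rule keeps the sum of the communication and computation queue backlogs below a threshold, and the simulation reports the empirical violation frequency together with the fraction of dropped traffic. Arrival and service statistics, and the threshold itself, are assumed values.

```python
# Hedged sketch of the constraint translation described above: keep the sum
# of the communication and computation queue backlogs below a threshold,
# enforced here via a simple admission-control rule. All numbers are assumed.
import random

def run(slots=20000, q_max_bits=1e5, arrival_mean=2e4,
        uplink_bits=2.5e4, cpu_bits=2.5e4, seed=3):
    random.seed(seed)
    q_comm = q_comp = 0.0
    violations = arrived = dropped = 0.0
    for _ in range(slots):
        arr = random.expovariate(1.0 / arrival_mean)
        arrived += arr
        # admit only what keeps the total backlog within the prescribed bound
        admitted = min(arr, max(0.0, q_max_bits - (q_comm + q_comp)))
        dropped += arr - admitted
        q_comm += admitted
        tx = min(q_comm, uplink_bits)          # communication queue service
        q_comm -= tx
        q_comp += tx
        q_comp -= min(q_comp, cpu_bits)        # computation queue service
        violations += (q_comm + q_comp) > q_max_bits
    return violations / slots, dropped / arrived

if __name__ == "__main__":
    viol, drop = run()
    print(f"violation frequency = {viol:.4f}, dropped fraction = {drop:.4f}")
```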
