Communication-Aware Consistent Edge Selection for Mobile Users and Autonomous Vehicles

Nazish Tahir, Ramviyas Parasuraman, Haijian Sun: Communication-Aware Consistent Edge Selection for Mobile Users and Autonomous Vehicles. 2024 IEEE 100th Vehicular Technology Conference (VTC2024-Fall), 2024.

Abstract

Offloading time-sensitive, computationally intensive tasks, such as advanced learning algorithms for autonomous driving, from vehicles to nearby edge servers, vehicle-to-infrastructure (V2I) systems, or other collaborating vehicles via vehicle-to-vehicle (V2V) communication enhances service efficiency. However, as a vehicle traverses its path to the destination, its mobility necessitates frequent handovers among access points (APs) to maintain continuous, uninterrupted wireless connections and preserve the network's Quality of Service (QoS). These frequent handovers in turn lead to task migrations among the edge servers associated with the respective APs. This paper addresses the joint problem of task migration and access-point handover by proposing a deep reinforcement learning framework based on the Deep Deterministic Policy Gradient (DDPG) algorithm. A joint allocation method for the communication and computation resources of APs is proposed to minimize computational load, service latency, and interruptions, with the overarching goal of maximizing QoS. We implement and evaluate the proposed framework in simulated experiments, achieving smooth and seamless task switching among edge servers and ultimately reducing latency.

BibTeX (Download)

@conference{Tahir2024,
title = {Communication-Aware Consistent Edge Selection for Mobile Users and Autonomous Vehicles},
author = {Nazish Tahir and Ramviyas Parasuraman and Haijian Sun},
url = {https://ieeexplore.ieee.org/abstract/document/10757784},
doi = {10.1109/VTC2024-Fall63153.2024.10757784},
year  = {2024},
date = {2024-10-10},
booktitle = {2024 IEEE 100th Vehicular Technology Conference (VTC2024-Fall)},
issn = {2577-2465},
abstract = {Offloading time-sensitive, computationally intensive tasks, such as advanced learning algorithms for autonomous driving, from vehicles to nearby edge servers, vehicle-to-infrastructure (V2I) systems, or other collaborating vehicles via vehicle-to-vehicle (V2V) communication enhances service efficiency. However, as a vehicle traverses its path to the destination, its mobility necessitates frequent handovers among access points (APs) to maintain continuous, uninterrupted wireless connections and preserve the network's Quality of Service (QoS). These frequent handovers in turn lead to task migrations among the edge servers associated with the respective APs. This paper addresses the joint problem of task migration and access-point handover by proposing a deep reinforcement learning framework based on the Deep Deterministic Policy Gradient (DDPG) algorithm. A joint allocation method for the communication and computation resources of APs is proposed to minimize computational load, service latency, and interruptions, with the overarching goal of maximizing QoS. We implement and evaluate the proposed framework in simulated experiments, achieving smooth and seamless task switching among edge servers and ultimately reducing latency.
},
keywords = {computing, multi-robot systems, networking},
pubstate = {published},
tppubtype = {conference}
}