
Multi-Agent Deep Reinforcement Learning for Collaborative Computation Offloading in Mobile Edge-Computing

Abstract—

In this work, we study collaborative computation offloading in mobile edge computing (MEC) to support computation-intensive applications. Mobile devices (MDs) can offload their computation to edge nodes (ENs), and we leverage edge-to-edge offloading to further enhance the MEC system's computing capabilities. This, however, presents significant challenges due to the need for real-time, decentralized decision-making in the highly dynamic MEC environment, especially with collaborative offloading. We design a queue-based multi-layer system model and formulate the joint offloading problem as a decentralized partially observable Markov decision process (Dec-POMDP), where each MD and EN constructs and trains an offloading agent to achieve high performance and efficient resource utilization in MEC. To solve the formulated problem, we propose a multi-agent deep reinforcement learning (DRL)-based approach, in which multiple agents collaborate to make distributed decisions in an uncertain MEC environment through global optimization.
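To make the Dec-POMDP setting concrete, the sketch below shows two mobile-device agents that each observe only their own task-queue length (partial observability) and independently decide whether to process locally or offload to a shared edge node. This is an illustrative toy, not the repository's implementation: tabular Q-learning stands in for the deep RL agents described above, and all arrival rates, service rates, and capacities are assumptions chosen for the example.

```python
import random

random.seed(0)

# Illustrative sketch only: two MD agents with local observations decide
# per slot whether to run tasks locally or offload to one shared EN.
# Rates and capacities below are assumed values, not from the paper.
ACTIONS = ["local", "edge"]      # offloading target per time slot
LOCAL_RATE, EDGE_RATE = 1, 4     # tasks served per slot (assumed)
ARRIVALS, MAX_Q = 2, 9           # arrivals per slot; queue capacity


class OffloadAgent:
    """Independent Q-learner over discretized queue-length observations,
    a tabular stand-in for the deep RL agents in the abstract."""

    def __init__(self, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = [[0.0] * len(ACTIONS) for _ in range(MAX_Q + 1)]
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, obs):
        # epsilon-greedy action selection on the local observation
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        row = self.q[obs]
        return row.index(max(row))

    def update(self, obs, a, reward, next_obs):
        # one-step Q-learning update
        target = reward + self.gamma * max(self.q[next_obs])
        self.q[obs][a] += self.alpha * (target - self.q[obs][a])


def env_step(queues, actions):
    """Advance all queues one slot; the EN's service rate is split
    evenly among the MDs that chose to offload in this slot."""
    edge_users = [i for i, a in enumerate(actions) if ACTIONS[a] == "edge"]
    share = EDGE_RATE // max(1, len(edge_users))
    new_queues, rewards = [], []
    for i, q in enumerate(queues):
        served = share if i in edge_users else LOCAL_RATE
        nq = min(MAX_Q, max(0, q + ARRIVALS - served))
        new_queues.append(nq)
        rewards.append(-nq)  # reward: negative backlog as a delay proxy
    return new_queues, rewards


# Decentralized training loop: each agent acts on and updates from
# its own observation only; coupling enters via the shared EN capacity.
agents = [OffloadAgent(), OffloadAgent()]
queues = [0, 0]
for _ in range(2000):
    obs = list(queues)
    acts = [ag.act(o) for ag, o in zip(agents, obs)]
    queues, rewards = env_step(queues, acts)
    for ag, o, a, r, no in zip(agents, obs, acts, rewards, queues):
        ag.update(o, a, r, no)
```

Because local service (1 task/slot) cannot keep up with arrivals (2 tasks/slot) while the shared EN can, agents that learn to offload keep their backlogs bounded, which is the intuition behind the collaborative offloading gain studied in this work.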

Index Terms—

Mobile edge computing, Computation offloading, Dec-POMDP, Multi-agent deep reinforcement learning

Work in progress ...