Data Allocation and Benchmarking in Parallelized Mobile Edge Learning

Authors

Mays, Duncan

Abstract

Democratizing Edge Computing (EC) by tapping into the copious yet underutilized computational resources of IoT devices can facilitate Mobile Edge Learning (MEL). In MEL, it is important to address system heterogeneity in a way that minimizes staleness and thereby improves learning accuracy, particularly in Parallelized Learning (PL). To do so, a centralized data allocation approach is typically used. However, this approach tends to overlook the privacy of learners, since the orchestrator is assumed to know learners’ capabilities beforehand. In this context, we propose the Data Allocation via Benchmarking (DAB) scheme. DAB is a decentralized data allocation scheme that eliminates staleness and achieves a target Quality of Service (QoS) while preserving the privacy of learners. DAB also introduces a novel method that enables each learner to accurately estimate its own hardware characteristics via benchmarking. In addition, we propose the Minimize Expected Delay (MED) scheme, which enables multi-task allocation for PL under uncertainty in learners’ capabilities. Given the state probabilities of the learners, MED makes uncertainty-aware decisions by formulating the data allocation problem as an Integer Linear Program (ILP) that minimizes the sum of the per-task maximum expected delays, subject to training-time and budget constraints. Furthermore, we propose a novel MEL framework, called Axon, to foster testing on real testbeds. Extensive performance evaluations in a real testing environment show that DAB outperforms a prominent representative of the centralized data allocation approach by up to 12% and 26% in terms of loss and prediction accuracy, respectively. In addition, the proposed benchmarking scheme yields an 83% reduction in benchmarking error compared to a prominent baseline. Performance evaluations also show that MED outperforms an uncertainty-naive baseline by up to 10% and 42% in terms of training time and data drop rate, respectively.
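
The abstract does not spell out MED's exact formulation, so the following is only a minimal sketch, under assumed symbols and data, of how a "sum of per-task maximum expected delays" objective can be linearized and solved as an ILP. The learner, task, and state counts, the per-sample delay model, and the training-time limit are all illustrative assumptions; the sketch uses Python with the PuLP library.

# Minimal MED-style ILP sketch (illustrative assumptions throughout; not the thesis' exact model).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatusOptimal

# Hypothetical instance: 3 learners, 2 tasks, 2 capability states per learner.
learners, tasks, states = range(3), range(2), range(2)
p = {(l, s): 0.5 for l in learners for s in states}              # assumed state probabilities
t_per_sample = {(l, s): 0.01 * (l + 1) * (s + 1)                 # assumed per-sample time (s) in state s
                for l in learners for s in states}
samples = {k: 1000 for k in tasks}                               # samples of each task to allocate
T_max = 30.0                                                     # assumed training-time budget (s)

prob = LpProblem("MED_sketch", LpMinimize)

# x[l, k]: integer number of samples of task k allocated to learner l.
x = {(l, k): LpVariable(f"x_{l}_{k}", lowBound=0, cat="Integer")
     for l in learners for k in tasks}
# d[k]: auxiliary variable standing in for the maximum expected delay of task k.
d = {k: LpVariable(f"d_{k}", lowBound=0) for k in tasks}

# Objective: minimize the sum over tasks of the per-task maximum expected delay.
prob += lpSum(d[k] for k in tasks)

for k in tasks:
    # Allocate every sample of task k (data-drop handling is omitted in this sketch).
    prob += lpSum(x[l, k] for l in learners) == samples[k]
    for l in learners:
        # Expected delay of learner l on task k, averaged over its capability states.
        expected_delay = lpSum(p[l, s] * t_per_sample[l, s] * x[l, k] for s in states)
        prob += d[k] >= expected_delay        # d[k] upper-bounds every learner's expected delay
        prob += expected_delay <= T_max       # training-time constraint (budget constraints omitted)

prob.solve()
if prob.status == LpStatusOptimal:
    for (l, k), var in sorted(x.items()):
        print(f"learner {l}, task {k}: {int(var.value())} samples")

Because each d[k] upper-bounds every learner's expected delay on task k, minimizing their sum is a standard linearization of minimizing the sum of per-task maximum expected delays.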

Keywords

Machine learning, Edge computing, Distributed learning, Neural networks, Mobile edge learning, Federated learning, Parallel learning, Resource allocation

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution 3.0 United States