KungFu: Making Training in Distributed Machine Learning Adaptive

Abstract

When using distributed machine learning (ML) systems to train models on a cluster of worker machines, users must configure a large number of parameters: hyper-parameters (e.g. the batch size and the learning rate) affect model convergence; system parameters (e.g. the number of workers and their communication topology) impact training performance. In current systems, adapting such parameters during training is ill-supported. Users must set system parameters at deployment time, and provide fixed adaptation schedules for hyper-parameters in the training program. We describe KungFu, a distributed ML library for TensorFlow that is designed to enable adaptive training. KungFu allows users to express high-level Adaptation Policies (APs) that describe how to change hyper- and system parameters during training. APs take real-time monitored metrics (e.g. signal-to-noise ratios and noise scale) as input and trigger control actions (e.g. cluster rescaling or updating the synchronisation strategy). For execution, APs are translated into monitoring and control operators, which are embedded in the dataflow graph. APs exploit an efficient asynchronous collective communication layer, which ensures concurrency and consistency of monitoring and adaptation operations.
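As a rough illustration of the idea, the sketch below shows what an Adaptation Policy might look like: a policy object receives a monitored metric (here, the gradient noise scale) at each training step and triggers a control action (cluster rescaling) once a threshold is crossed. All class and method names in the sketch are hypothetical and are not the actual KungFu API.

```python
# Hypothetical sketch of an Adaptation Policy (AP). The names below
# (ClusterController, NoiseScalePolicy, on_step) are illustrative only
# and do NOT reflect the real KungFu library interface.

class ClusterController:
    """Stand-in for the control actions an AP can trigger."""
    def __init__(self, num_workers: int):
        self.num_workers = num_workers

    def resize(self, new_size: int) -> None:
        # In a real system this would rescale the worker cluster.
        print(f"resizing cluster: {self.num_workers} -> {new_size} workers")
        self.num_workers = new_size


class NoiseScalePolicy:
    """Grow the cluster (and hence the global batch size) when the
    monitored gradient noise scale exceeds a threshold."""
    def __init__(self, controller: ClusterController,
                 threshold: float, max_workers: int):
        self.controller = controller
        self.threshold = threshold
        self.max_workers = max_workers

    def on_step(self, step: int, noise_scale: float) -> None:
        # Called with real-time monitored metrics after each training step.
        if (noise_scale > self.threshold
                and self.controller.num_workers < self.max_workers):
            new_size = min(self.controller.num_workers * 2, self.max_workers)
            self.controller.resize(new_size)


if __name__ == "__main__":
    policy = NoiseScalePolicy(ClusterController(num_workers=4),
                              threshold=1000.0, max_workers=32)
    # Simulated per-step noise-scale measurements.
    for step, noise in enumerate([200.0, 800.0, 1500.0, 400.0]):
        policy.on_step(step, noise)
```

In the paper's design, such policies are not run as external scripts; they are translated into monitoring and control operators embedded in the dataflow graph, so adaptation decisions execute alongside training without stalling workers.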

Publication
In USENIX Symposium on Operating Systems Design and Implementation (OSDI)
Luo Mai
Assistant Professor

My research interests include computer systems, machine learning and data management.

Marcel Wagenlander
Visiting Student
