Distributed learning of deep neural networks using independent subnet training
Distributed machine learning (ML) can bring more computational resources to bear than single-machine learning, reducing training time. Further, distribution allows models to be partitioned across many machines, enabling the training of very large models that may not fit in the available memory of any individual machine. However, in practice, distributed ML remains challenging, primarily due to high communication costs.
We propose a new approach to distributed neural network learning, called independent subnet training (IST). In IST, a neural network is decomposed into a set of subnetworks of the same depth as the original network, each of which is trained locally, before the various subnets are exchanged and the process is repeated. IST has many advantages over standard data-parallel approaches. Because the subnets are independent, communication frequency is reduced. Because the original network is decomposed into independent parts, communication volume is reduced. Further, the decomposition makes IST naturally "model parallel", and so IST scales to very large models that cannot fit on any single machine. We show experimentally that IST results in training times that are much lower than data-parallel approaches to distributed learning, and that it scales to large models that cannot be learned using standard approaches.
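The IST idea described above can be illustrated with a small sketch: hidden units of a one-hidden-layer network are randomly partitioned across workers, each worker trains only its own subnet for several local steps with no communication, and the updated subnets are then merged back into the full model before repartitioning. This is a minimal single-process simulation, not the paper's implementation; the function names (`partition_hidden_units`, `local_sgd_step`), the toy regression task, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n_workers = 8, 12, 1, 3

# Full model: one ReLU hidden layer (illustrative toy network).
W1 = rng.normal(size=(d_in, d_hidden)) * 0.1
W2 = rng.normal(size=(d_hidden, d_out)) * 0.1


def partition_hidden_units(d_hidden, n_workers, rng):
    """Randomly assign each hidden unit to exactly one worker,
    so the subnets are disjoint (hence 'independent')."""
    perm = rng.permutation(d_hidden)
    return np.array_split(perm, n_workers)


def local_sgd_step(W1_sub, W2_sub, X, y, lr=0.01):
    """One SGD step on a subnet: ReLU hidden layer, squared loss."""
    H = np.maximum(X @ W1_sub, 0.0)          # forward pass
    err = H @ W2_sub - y                     # prediction error
    gW2 = H.T @ err / len(X)
    gH = err @ W2_sub.T
    gH[H <= 0] = 0.0                         # ReLU backprop
    gW1 = X.T @ gH / len(X)
    return W1_sub - lr * gW1, W2_sub - lr * gW2


# Synthetic data (assumed for the sketch).
X = rng.normal(size=(64, d_in))
y = X @ rng.normal(size=(d_in, d_out))

for _round in range(5):                      # IST rounds
    parts = partition_hidden_units(d_hidden, n_workers, rng)
    for idx in parts:                        # each "worker", simulated serially
        # A subnet owns a slice of hidden units: the matching
        # columns of W1 and rows of W2.
        W1_sub, W2_sub = W1[:, idx], W2[idx, :]
        for _ in range(10):                  # local steps, no communication
            W1_sub, W2_sub = local_sgd_step(W1_sub, W2_sub, X, y)
        # Merge the trained subnet back into the full model.
        W1[:, idx], W2[idx, :] = W1_sub, W2_sub
```

Note that each subnet carries entire hidden units (a column of `W1` together with the matching row of `W2`), which is what keeps the subnets trainable in isolation and keeps communication limited to the periodic exchange of disjoint parameter slices.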
About the Speaker
Anastasios Kyrillidis is a Noah Harding Assistant Professor in the Computer Science Department at Rice University. Prior to his appointment, he was a Goldstine Fellowship postdoctoral researcher at the IBM T. J. Watson Research Center and a Simons postdoctoral researcher in the Electrical and Computer Engineering department of the University of Texas at Austin.
He obtained his Ph.D. from the School of Computer and Communication Sciences at École Polytechnique Fédérale de Lausanne (EPFL) in 2014. Before that, he completed his M.Sc. and Diploma (5-year) studies at the Technical University of Crete, Greece.
His research interests include machine learning, and convex and non-convex analysis and optimization.