TorchOpt: An Efficient Library for Differentiable Optimization

Abstract

Recent years have witnessed a boom in differentiable optimization algorithms. These algorithms exhibit diverse execution patterns, and their execution demands massive computational resources beyond a single CPU or GPU. Existing differentiable optimization libraries, however, cannot support efficient algorithm development and multi-CPU/GPU execution, making the development of differentiable optimization algorithms cumbersome and expensive. This paper introduces TorchOpt, an efficient PyTorch-based library for differentiable optimization. TorchOpt provides a unified and expressive programming abstraction for bi-level optimization. This abstraction allows users to efficiently declare and analyze differentiable optimization programs with explicit gradients, implicit gradients, and zero-order gradients. TorchOpt further provides a high-performance distributed execution runtime that fully parallelizes computation-intensive differentiation operations (e.g., tensor-tree flattening) on CPUs/GPUs and automatically distributes computation across devices. Experimental results show that TorchOpt outperforms state-of-the-art libraries by 7x on an 8-GPU server.
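
To make the explicit-gradient mode concrete, the sketch below shows a MAML-style differentiable inner update, assuming TorchOpt's MetaSGD optimizer API; the model, data, and learning rate are illustrative placeholders rather than code from the paper.

    import torch
    import torchopt

    # Hypothetical toy model and data, for illustration only.
    net = torch.nn.Linear(4, 1)
    x, y = torch.randn(8, 4), torch.randn(8, 1)

    # Differentiable inner-loop optimizer: unlike torch.optim.SGD, its
    # update step stays on the autograd graph instead of being detached.
    inner_opt = torchopt.MetaSGD(net, lr=0.1)

    inner_loss = torch.nn.functional.mse_loss(net(x), y)
    inner_opt.step(inner_loss)  # inner update, kept differentiable

    x_val, y_val = torch.randn(8, 4), torch.randn(8, 1)
    outer_loss = torch.nn.functional.mse_loss(net(x_val), y_val)
    outer_loss.backward()  # meta-gradients flow through the inner step

Because the inner update is recorded on the autograd graph, the outer backward pass differentiates through the optimizer step itself, which is the core of explicit-gradient bi-level optimization.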

Publication
In Optimization for Machine Learning Workshop (co-located with NeurIPS 2022)
Jie Ren, Yao Fu, Luo Mai
