A fast and differentiable model predictive control (MPC) solver for PyTorch. Crafted by Brandon Amos, Ivan Jimenez, Jacob Sacks, Byron Boots, and J. Zico Kolter. For more context and details, see our ICML 2017 paper on OptNet and our NIPS 2018 paper on differentiable MPC.
Optimal control is a widespread field that involves finding an optimal sequence of future actions to take in a system or environment. It is most useful in domains where you can analytically model your system and easily define a cost to optimize over it. This project focuses on solving model predictive control (MPC) with the box-DDP heuristic. MPC is a powerhouse in many real-world domains, ranging from short-horizon robot control tasks to long-horizon control of chemical processing plants. More recently, the reinforcement learning community, rife with poor sample-complexity and instability issues in model-free learning, has been actively searching for useful model-based applications and priors.
Going deeper, model predictive control (MPC) is the strategy of controlling a system by repeatedly solving a model-based optimization problem in a receding horizon fashion. At each time step in the environment, MPC solves the non-convex optimization problem

$$\begin{aligned} x_{1:T}^\star, u_{1:T}^\star = \arg\min_{x_{1:T}, u_{1:T}} \;& \sum_{t=1}^T C_t(x_t, u_t) \\ \text{subject to } \;& x_1 = x_{\rm init}, \quad x_{t+1} = f(x_t, u_t), \\ & x_t \in \mathcal{X}, \quad u_t \in \mathcal{U} \end{aligned}$$
where $x_t, u_t$ denote the state and control at time $t$, $\mathcal{X}$ and $\mathcal{U}$ denote constraints on valid states and controls, $C_t : \mathcal{X} \times \mathcal{U} \rightarrow \mathbb{R}$ is a (potentially time-varying) cost function, $f : \mathcal{X} \times \mathcal{U} \rightarrow \mathcal{X}$ is a (potentially non-linear) dynamics model, and $x_{\rm init}$ denotes the initial state of the system. After solving this problem, we execute the first returned control $u_1$ on the real system, step forward in time, and repeat the process. The MPC optimization problem can be efficiently solved with a number of methods, for example the finite-horizon iterative Linear Quadratic Regulator (iLQR) algorithm. We focus on the box-DDP heuristic, which adds control bounds to the problem.
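As a rough sketch of this receding-horizon loop using our solver's interface (the dynamics `dx`, `cost`, `x_init`, sizes, and bounds are assumed to be set up as in the examples later in this README, and here the model itself stands in for a real system):

```python
import torch
from mpc import mpc

# Receding-horizon sketch: re-solve the T-step problem at every step and
# execute only the first returned control. `dx`, `cost`, `x_init`, and the
# sizes/bounds are assumed from the examples below.
x = x_init
for _ in range(n_env_steps):
    nominal_states, nominal_actions, nominal_objs = mpc.MPC(
        n_state, n_ctrl, T,
        u_lower=u_lower, u_upper=u_upper,
        lqr_iter=50,
    )(x, cost, dx)
    x = dx(x, nominal_actions[0])  # execute u_1; the model stands in for the real system
```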
There has been an indisputable rise in control and model-based algorithms in the learning communities lately, and integrating these techniques with learning-based methods is important. PyTorch is a strong foundational Python library for implementing and prototyping learning systems. Before our library, there was a significant barrier to integrating PyTorch learning systems with control methods: the appropriate data and tensors would have to be transferred to the CPU, converted to NumPy, and then passed into 1) one of the few Python control libraries, like python-control; 2) a hand-coded solver using CPLEX or Gurobi; or 3) your own hand-rolled bindings to C/C++/MATLAB control libraries such as fast_mpc. All of these sound like fun!
We provide a PyTorch library for solving the non-convex control problem

$$\begin{aligned} x_{1:T}^\star, u_{1:T}^\star = \arg\min_{x_{1:T}, u_{1:T}} \;& \sum_{t=1}^T C_t(x_t, u_t) \\ \text{subject to } \;& x_1 = x_{\rm init}, \quad x_{t+1} = f(x_t, u_t), \\ & \underline{u} \leq u_t \leq \overline{u} \end{aligned}$$
Our code currently supports a quadratic cost function $C$ (non-quadratic support coming soon!) and non-linear system transition dynamics $f$ that can be defined by hand if you understand your environment, or with a neural network if you don't.
We have baked in a lot of tricks to optimize the performance. Our CPU runtime is competitive with other solvers and our library shines brightly on the GPU as we have implemented it with efficient GPU-based PyTorch operations. This lets us solve many MPC problems simultaneously on the GPU with minimal overhead.
More details on this are in the box-DDP paper that we implement.
Our MPC layer is also differentiable! You can do learning directly through it. The backwards pass is nearly free. More details on this are in our NIPS 2018 paper Differentiable MPC for End-to-end Planning and Control.
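For instance, here is a minimal sketch of learning cost parameters by backpropagating through the solver; `u_expert` is a hypothetical target control sequence, and the problem data follow the LQR example later in this README:

```python
import torch
from mpc import mpc

# Sketch: differentiate an imitation loss through the controller.
# C, c, F, x_init, and the sizes/bounds are assumed to be defined as in
# the LQR example below; u_expert is a hypothetical expert trajectory.
C.requires_grad_(True)
x, u, objs = mpc.MPC(
    n_state, n_ctrl, T,
    u_lower=u_lower, u_upper=u_upper,
    lqr_iter=20,
)(x_init, mpc.QuadCost(C, c), mpc.LinDx(F))
imitation_loss = (u - u_expert).pow(2).mean()
imitation_loss.backward()  # gradients flow through the MPC solution into C
```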
Sometimes the controller does not run for long enough to reach a fixed point, or a fixed point does not exist, which often happens when using neural networks to approximate the dynamics. When this happens, our solver cannot be used to differentiate through the controller, because it assumes that a fixed point is reached. Differentiating through the final iLQR iterate when it is not a fixed point will usually give incorrect gradients. Treating the iLQR procedure as a compute graph and differentiating through the unrolled operations is a reasonable alternative in this scenario that obtains surrogate gradients to the control problem, but this is not currently implemented as an option in this library.
To help catch fixed-point differentiation errors, our solver has the options `exit_unconverged`, which forcefully exits the program if a fixed point is not reached (to make sure users are aware of this issue), and `detach_unconverged`, which more silently detaches unconverged examples from the batch so that they are not differentiated through.
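For example (a sketch; the problem data follow the LQR example later in this README):

```python
from mpc import mpc

# Sketch: control how non-convergence is handled.
# exit_unconverged=True fails loudly when a fixed point is not reached;
# detach_unconverged=True instead detaches unconverged batch elements so
# they are excluded from the backward pass.
ctrl = mpc.MPC(
    n_state, n_ctrl, T,
    u_lower=u_lower, u_upper=u_upper,
    lqr_iter=20,
    exit_unconverged=False,
    detach_unconverged=True,
)
x, u, objs = ctrl(x_init, mpc.QuadCost(C, c), mpc.LinDx(F))
```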
pip install mpc
Our approach to MPC requires that the dynamics function $f(\tau)$, where $\tau = [x\; u]$, is linearized at each time step around the current iterate $\tau_i$ by computing the first-order Taylor expansion

$$f(\tau) \approx f(\tau_i) + \nabla_\tau f(\tau_i)\,(\tau - \tau_i).$$
Depending on what function you are using to model your dynamics, computing $\nabla_\tau f(\tau_i)$ may be easy or difficult to implement. We provide three options for how our solver internally computes $\nabla_\tau f(\tau_i)$, passed in as the `grad_method` argument; `GradMethods` is defined in our `mpc` module. A short usage sketch follows the list below.
- `GradMethods.ANALYTIC`: Use a manually defined Jacobian. This is the fastest and most accurate way to compute the Jacobian; use it if possible. Caveat: we do not check the Jacobian for correctness, so you will get silent errors if it is incorrect.
- `GradMethods.AUTO_DIFF`: Use PyTorch's autograd. This is a convenient albeit slow option if you implement the forward pass of your dynamics with PyTorch operations and want to use PyTorch's automatic differentiation.
- `GradMethods.FINITE_DIFF`: Use naive finite differences. This is convenient if you want to do control in non-PyTorch environments that don't give you easy access to the Jacobians, such as MuJoCo/DART/Bullet simulators. This option may result in inaccurate Jacobians.
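Here is a short sketch of selecting one of these modes (the surrounding problem setup is assumed):

```python
from mpc import mpc

# Sketch: select how the solver computes the dynamics Jacobian.
# AUTO_DIFF assumes the dynamics are written with PyTorch ops; swap in
# GradMethods.FINITE_DIFF for a black-box simulator, or
# GradMethods.ANALYTIC if your dynamics provide their own Jacobian.
ctrl = mpc.MPC(
    n_state, n_ctrl, T,
    u_lower=u_lower, u_upper=u_upper,
    grad_method=mpc.GradMethods.AUTO_DIFF,
)
```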
You can set the `slew_rate_penalty` option in our solver to add a $\lambda$ term to the objective that penalizes the slew rate, i.e. the difference between controls at adjacent timesteps:

$$\lambda \sum_{t=1}^{T-1} \| u_{t+1} - u_t \|_2^2.$$
This turns the control problem into:

$$\begin{aligned} x_{1:T}^\star, u_{1:T}^\star = \arg\min_{x_{1:T}, u_{1:T}} \;& \sum_{t=1}^T C_t(x_t, u_t) + \lambda \sum_{t=1}^{T-1} \| u_{t+1} - u_t \|_2^2 \\ \text{subject to } \;& x_1 = x_{\rm init}, \quad x_{t+1} = f(x_t, u_t), \quad \underline{u} \leq u_t \leq \overline{u} \end{aligned}$$
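A one-line sketch of enabling this (the value of $\lambda$ is illustrative):

```python
# Sketch: penalize control changes between adjacent timesteps with λ = 1e-2.
ctrl = mpc.MPC(
    n_state, n_ctrl, T,
    u_lower=u_lower, u_upper=u_upper,
    slew_rate_penalty=1e-2,
)
```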
This example shows how our package can be used to solve a time-varying linear control (LQR) problem of the form

$$\begin{aligned} x_{1:T}^\star, u_{1:T}^\star = \arg\min_{x_{1:T}, u_{1:T}} \;& \sum_{t=1}^T \frac{1}{2} \tau_t^\top C_t \tau_t + c_t^\top \tau_t \\ \text{subject to } \;& x_1 = x_{\rm init}, \quad x_{t+1} = F_t \tau_t + f_t \end{aligned}$$

where $\tau_t = [x_t\; u_t]$.
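A sketch of the setup, following the notebook (the random problem data are illustrative):

```python
import torch
from mpc import mpc

n_batch, n_state, n_ctrl, T = 2, 3, 4, 5
n_sc = n_state + n_ctrl

# Randomly initialize a PSD quadratic cost and linear dynamics.
C = torch.randn(T * n_batch, n_sc, n_sc)
C = C.transpose(1, 2).matmul(C).view(T, n_batch, n_sc, n_sc)
c = torch.randn(T, n_batch, n_sc)

alpha = 0.2
R = (torch.eye(n_state) + alpha * torch.randn(n_state, n_state)).repeat(T, n_batch, 1, 1)
S = torch.randn(T, n_batch, n_state, n_ctrl)
F = torch.cat((R, S), dim=3)

# The initial state and box control bounds.
x_init = torch.randn(n_batch, n_state)
u_lower = -torch.rand(T, n_batch, n_ctrl)
u_upper = torch.rand(T, n_batch, n_ctrl)

x_lqr, u_lqr, objs_lqr = mpc.MPC(
    n_state, n_ctrl, T,
    u_lower=u_lower, u_upper=u_upper,
    lqr_iter=20,
    backprop=False,
    exit_unconverged=False,
)(x_init, mpc.QuadCost(C, c), mpc.LinDx(F))
```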
This code is available in a notebook here.
This example shows how to do control in a simple pendulum environment that we have implemented in PyTorch here. The state is the cosine and sine of the pendulum's angle along with its angular velocity, and the control is the torque to apply. The full source code for this example is available in a notebook here.
We’ll initialize the non-convex dynamics with:
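A sketch of this initialization, assuming the repo's `mpc.env_dx.pendulum` module and an illustrative parameter vector of gravity, mass, and length:

```python
import torch
from mpc.env_dx import pendulum

# Assumed: the repo's differentiable pendulum model, parameterized by
# params = (g, m, l).
params = torch.tensor((10., 1., 1.))
dx = pendulum.PendulumDx(params, simple=True)
```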
Let's do control to make the pendulum swing up by solving the problem

$$u_{1:T}^\star = \arg\min_{x_{1:T}, u_{1:T}} \sum_{t=1}^T C(x_t, u_t) \quad \text{subject to} \quad x_1 = x_{\rm init}, \; x_{t+1} = f(x_t, u_t), \; \underline{u} \leq u_t \leq \overline{u}$$
where the cost function $C$ is the distance from the nominal states to the upright position, so this optimization problem will find the control sequence that minimizes this distance. We can easily implement $C$ as a quadratic function that takes a weighted distance:

$$C(\tau) = \frac{1}{2} (\tau - \tau^\star)^\top \mathrm{diag}(g_w) \, (\tau - \tau^\star)$$
where $g_w$ contains the weights on each component of the states and actions and $\tau^\star$ is the goal location. In proper quadratic form, this becomes

$$C(\tau) = \frac{1}{2} \tau^\top \mathrm{diag}(g_w) \, \tau - (g_w \circ \tau^\star)^\top \tau + \text{const}$$
Now we can implement this function in PyTorch:
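A sketch of this (the weights, goal, and control penalty are illustrative; the goal state $(1, 0, 0)$ is the upright position in $(\cos\theta, \sin\theta, \dot\theta)$ coordinates):

```python
import torch
from mpc import mpc

n_state, n_ctrl = 3, 1       # (cos θ, sin θ, dθ) and the torque
n_sc = n_state + n_ctrl
T, n_batch = 20, 1

goal_weights = torch.tensor((1., 1., 0.1))  # g_w over the states
goal_state = torch.tensor((1., 0., 0.))     # τ*: upright, at rest
ctrl_penalty = 0.001                        # small weight on the control

# 0.5 τᵀ diag(g_w) τ + pᵀ τ with p = -g_w ∘ τ* (constant term dropped).
q = torch.cat((goal_weights, ctrl_penalty * torch.ones(n_ctrl)))
p = torch.cat((-goal_weights * goal_state, torch.zeros(n_ctrl)))

Q = torch.diag(q).repeat(T, n_batch, 1, 1)
p = p.repeat(T, n_batch, 1)
cost = mpc.QuadCost(Q, p)
```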
Ignoring some of the more nuanced details, we can then do control with:
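A sketch of the control call (following the notebook's structure; the torque limits and iteration count are illustrative):

```python
import math

# Start hanging straight down, at rest.
theta = math.pi
x_init = torch.tensor([[math.cos(theta), math.sin(theta), 0.]])

nominal_states, nominal_actions, nominal_objs = mpc.MPC(
    n_state, n_ctrl, T,
    u_lower=-2.0, u_upper=2.0,  # assumed torque limits
    lqr_iter=50,
    grad_method=mpc.GradMethods.AUTO_DIFF,
)(x_init, cost, dx)
next_action = nominal_actions[0]  # the first control to execute
```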
To instead make the pendulum spin as fast as possible, just set the cost to maximize the velocity $\dot \theta$ and add a small ridge term, because the cost needs to be SPD.
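A sketch of such a cost (the ridge size is illustrative):

```python
# Sketch: reward angular velocity via a linear term on dθ, with a small
# ridge on all components so the quadratic term stays SPD.
eps = 1e-3
q_spin = eps * torch.ones(n_sc)           # ridge on (cos θ, sin θ, dθ, u)
p_spin = torch.tensor((0., 0., -1., 0.))  # minimize -dθ, i.e. maximize dθ
Q_spin = torch.diag(q_spin).repeat(T, n_batch, 1, 1)
p_spin = p_spin.repeat(T, n_batch, 1)
spin_cost = mpc.QuadCost(Q_spin, p_spin)
```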
Voila. 🎉
Our solver also has a `verbose` parameter to make analyzing the convergence easier.

If you find this repository helpful for your research, please consider citing the control-limited DDP paper and our paper on differentiable MPC.
@inproceedings{tassa2014control,
title={Control-limited differential dynamic programming},
author={Tassa, Yuval and Mansard, Nicolas and Todorov, Emo},
booktitle={Robotics and Automation (ICRA), 2014 IEEE International Conference on},
pages={1168--1175},
year={2014},
organization={IEEE}
}
@inproceedings{amos2018differentiable,
title={{Differentiable MPC for End-to-end Planning and Control}},
author={Brandon Amos and Ivan Jimenez and Jacob Sacks and Byron Boots and J. Zico Kolter},
booktitle={{Advances in Neural Information Processing Systems}},
year={2018}
}
Unless otherwise stated, the source code is copyright Carnegie Mellon University and licensed under the MIT License.