Model Predictive Control
Cross-source consensus on Model Predictive Control, drawn from 1 source and 4 claims.
Highlighted claims
- The paper treats MPC as a motivating setting where related optimization problems are repeatedly solved online for changing initial states. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- The MPC experiment varies only the initial state while fixing the dynamics, dimensions, cost matrices, horizon, and box constraints. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- For MPC with a fixed ADMM penalty parameter rho, learned over-relaxation policies reduce the mean iteration count by about 18 percent and the best-case runtime by about 17 percent. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- For MPC with adaptive rho, learned policies reduce iterations by up to 11 percent, though the runtime differences are small. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
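To make the claims above concrete, here is a minimal sketch of over-relaxed ADMM on a box-constrained QP, the problem class that arises in the paper's MPC experiments. The function name `admm_box_qp`, the default values of `rho` and the relaxation parameter `alpha`, and the tiny test problem are all illustrative assumptions, not taken from the paper (which learns a policy for `alpha` rather than fixing it).

```python
import numpy as np

def admm_box_qp(P, q, lo, hi, rho=1.0, alpha=1.6, iters=200):
    """Over-relaxed ADMM for: minimize 0.5 x'Px + q'x  s.t.  lo <= x <= hi.

    alpha is the over-relaxation parameter (alpha = 1 recovers plain ADMM);
    rho is the ADMM penalty parameter.
    """
    n = q.size
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Factor (P + rho I) once. In the MPC setting only the initial state
    # (and hence q) changes between solves, so this factorization is reused.
    L = np.linalg.cholesky(P + rho * np.eye(n))
    for _ in range(iters):
        # x-update: solve (P + rho I) x = rho (z - u) - q via the Cholesky factor
        rhs = rho * (z - u) - q
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # over-relaxation: blend the new x with the previous z
        x_hat = alpha * x + (1.0 - alpha) * z
        # z-update: Euclidean projection onto the box constraints
        z = np.clip(x_hat + u, lo, hi)
        # dual update
        u = u + x_hat - z
    return z
```

With `P = [[4, 1], [1, 2]]` and `q = [1, 1]`, the gradient at the origin is componentwise positive, so the minimizer over the box `[0, 1]^2` sits at the origin; the iteration converges there.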