Over-Relaxation Policies
Cross-source consensus on Over-Relaxation Policies, drawn from 1 source and 6 claims.
Highlighted claims
- The learned policies update the relaxation matrix every ten ADMM iterations without requiring matrix refactorization. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- The scalar policy predicts one relaxation value from global residual and penalty features. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- The vector policy predicts per-constraint relaxation values using shared row-wise weights and per-constraint features. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- Vector policies can exploit per-constraint information but may lose runtime gains because of per-row inference overhead. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- The paper learns low-dimensional solver hyperparameters rather than high-dimensional perturbations to the optimization iteration map. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- Scalar policies are cheaper to evaluate and often produce the best wall-clock runtime. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
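The claims above describe a scalar policy that predicts one over-relaxation value from global residual and penalty features, refreshed every ten ADMM iterations without refactorizing the system matrix. A minimal sketch of that loop, using ADMM for the lasso problem as a stand-in application: the `scalar_policy` heuristic below is hypothetical (the source learns this mapping; only its interface of residual/penalty features in, one relaxation value out, is taken from the claims), and note that updating `alpha` never touches the cached Cholesky factor.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def scalar_policy(primal_res, dual_res, rho):
    # Hypothetical stand-in for the learned scalar policy: maps global
    # residual and penalty features to a single relaxation value. The
    # paper learns this mapping; this heuristic only illustrates the
    # low-dimensional interface (features in, one scalar out).
    ratio = primal_res / (dual_res + 1e-12)
    return float(np.clip(1.5 + 0.2 * np.tanh(np.log(ratio + 1e-12)), 1.0, 1.9))

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Over-relaxed ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    alpha = 1.5  # initial over-relaxation value
    # Factor A^T A + rho*I once; changing alpha later reuses this factor,
    # which is why cheap policy updates preserve the runtime advantage.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for k in range(iters):
        # x-update via the cached Cholesky factor (L L^T = A^T A + rho*I).
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # Over-relaxed point mixing the new x with the previous z.
        x_hat = alpha * x + (1.0 - alpha) * z
        z_old = z
        z = soft_threshold(x_hat + u, lam / rho)
        u = u + x_hat - z
        # Refresh the relaxation value every ten iterations, as the
        # claims describe; no refactorization is required.
        if (k + 1) % 10 == 0:
            primal = np.linalg.norm(x - z)
            dual = rho * np.linalg.norm(z - z_old)
            alpha = scalar_policy(primal, dual, rho)
    return z
```

A vector policy would instead emit one relaxation value per constraint row from per-constraint features, trading the single cheap inference above for per-row overhead, which matches the trade-off the claims highlight.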