OSQP
Cross-source consensus on OSQP from 1 source and 5 claims.
Highlighted claims
- In OSQP-like solvers, changing the penalty parameter rho may trigger a costly refactorization of the KKT system. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- Changing the relaxation parameter does not alter the factorization, making it attractive for frequent online adaptation. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- OSQP already adapts a diagonal penalty matrix heuristically while keeping the relaxation fixed at alpha = 1.6 (i.e., Gamma = 1.6I in the paper's matrix notation). — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- Under fixed rho, learned relaxation policies outperform baseline OSQP across five benchmark families in both iteration count and runtime. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
- Under adaptive rho, learned Gamma_k can reduce iterations while runtime depends on interactions with OSQP's rho-update heuristic. — Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
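The asymmetry behind the first two claims can be seen in a minimal over-relaxed ADMM loop for a box-constrained QP (a sketch, not OSQP's actual implementation; the problem data and solver below are illustrative): the matrix that gets factorized depends on the penalty rho, so changing rho forces a refactorization, while the relaxation alpha appears only in cheap vector updates and can be changed at every iteration for free.

```python
# Sketch of over-relaxed ADMM for the box-constrained QP
#   minimize 0.5 x'Px + q'x  subject to  l <= x <= u
# (illustrative, not OSQP's code). Note where rho and alpha enter:
# rho is baked into the Cholesky factorization; alpha is not.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def admm_box_qp(P, q, l, u, rho=1.0, alpha=1.6, iters=300):
    n = q.size
    # Factor (P + rho*I) once; this factorization is only valid for this rho,
    # so a rho update would require redoing this (costly) step.
    kkt = cho_factor(P + rho * np.eye(n))
    x = z = y = np.zeros(n)
    for _ in range(iters):
        x = cho_solve(kkt, rho * z - y - q)   # linear-system step (uses rho)
        x_hat = alpha * x + (1 - alpha) * z   # over-relaxation: alpha only
                                              # appears in vector arithmetic
        z = np.clip(x_hat + y / rho, l, u)    # projection onto the box
        y = y + rho * (x_hat - z)             # dual update
    return z

# Illustrative data: strictly convex P, box wide enough that the
# unconstrained optimum -P^{-1} q = (-1/7, -3/7) is feasible.
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
l, u = np.full(2, -1.0), np.full(2, 1.0)
z = admm_box_qp(P, q, l, u)
print(z)  # converges to roughly (-1/7, -3/7)
```

Because `alpha` never touches the factored system, a per-iteration relaxation schedule (the learned Gamma_k of the paper) costs nothing extra per step, whereas an adaptive-rho scheme pays for each refactorization — which is the trade-off the last two claims are about.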