Nonconvex Optimization
Cross-source consensus on Nonconvex Optimization, drawn from 1 source and 5 claims.
Highlighted claims
- The paper addresses fixed-budget nonconvex minimization in settings where local descent may fail by being too stable. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- A first-order optimizer can waste remaining oracle calls refining a nearby attractive stationary basin. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- The target problem minimizes an objective over a domain using oracle observations that may include function values and several types of gradients. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- Finite-budget nonconvex optimization can benefit from treating stagnation as an event rather than only reducing local step sizes. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- A learned optimizer can combine structured local descent with memory and energy shaping to escape or redirect. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
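The claims above can be illustrated with a minimal sketch: plain gradient descent on a tilted double well stalls in the shallow basin, while an event-triggered variant treats stagnation as an event, injects momentum, and runs a short Hamiltonian (leapfrog) excursion whose best-seen point acts as a simple memory. All names, the toy objective, and the hand-picked kick direction and magnitude are illustrative assumptions, not the paper's learned optimizer.

```python
def f(x):
    # Tilted double well (illustrative, not from the paper):
    # shallow local minimum near x = +0.96, deeper global minimum near x = -1.03.
    return (x**2 - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

def descend(x, lr=0.05, steps=200, tol=1e-10):
    """Plain first-order descent; stops once progress stalls (the stagnation event)."""
    fx = f(x)
    for _ in range(steps):
        x -= lr * grad(x)
        fx_new = f(x)
        if abs(fx - fx_new) < tol:  # descent has become "too stable"
            break
        fx = fx_new
    return x

def kick(x, p0=-2.0, dt=0.1, steps=20):
    """Hand-tuned stand-in for energy shaping: inject momentum p0 and integrate
    H(x, p) = f(x) + p**2 / 2 with leapfrog, remembering the best point seen
    (a crude 'memory'; the paper's method learns this behavior instead)."""
    p = p0 - 0.5 * dt * grad(x)
    best_x, best_f = x, f(x)
    for _ in range(steps):
        x = x + dt * p
        p = p - dt * grad(x)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x

# Plain descent spends its budget refining the nearby attractive basin.
x_plain = descend(1.2)

# Event-triggered variant: on stagnation, fire one kick, then descend again.
x_escaped = descend(kick(descend(1.2)))

print(f(x_plain), f(x_escaped))  # the kicked run settles in the deeper basin
```

The injected energy (p0**2 / 2 = 2.0) exceeds the barrier height between the basins, so the excursion crosses into the deeper well and its best-seen point restarts descent there; a pure step-size reduction would only refine the shallow basin.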