Theoretical Guarantees
Cross-source consensus on Theoretical Guarantees, drawn from 1 source and 6 claims.
Highlighted claims
All six claims trace to a single source, *When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize*:

- The guarantees are conditional and local rather than universal.
- The theoretical analysis separates frozen-stage local contraction from event-level improvement.
- The analysis does not prove a universal positive lower bound for the planner-memory useful-basin probability.
- The Lyapunov function adds a small cross term to the Hamiltonian gap because damping acts on momentum rather than directly on the gradient.
- Under the stated local assumptions, the frozen-stage dynamics converge exponentially to equilibrium when the port input vanishes.
- The finite-budget proposition bounds the stage improvement probability by the product of the planner's proposal probability and the rollout's reach probability.
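The frozen-stage convergence claim can be illustrated with a minimal sketch. This is not the paper's actual system: it is a generic damped-Hamiltonian (heavy-ball style) discretization on a quadratic objective, with damping applied to the momentum only, as the Lyapunov cross-term claim describes. All parameter names and values here are assumptions for illustration.

```python
import math

# Hypothetical illustration, NOT the paper's actual dynamics: damped
# Hamiltonian flow on a quadratic f(x) = 0.5 * k * x^2, discretized with
# semi-implicit Euler. The energy H(x, p) = f(x) + 0.5 * p^2 plays the
# role of the Hamiltonian gap to equilibrium (H* = 0). Damping gamma
# acts on the momentum p only, not directly on the gradient, which is
# why a pure Hamiltonian gap needs a small cross term to serve as a
# strict Lyapunov function.
def simulate(k=1.0, gamma=0.5, dt=0.01, steps=2000, x0=1.0, p0=0.0):
    x, p = x0, p0
    energies = []
    for _ in range(steps):
        p += dt * (-k * x - gamma * p)  # force term plus momentum damping
        x += dt * p
        energies.append(0.5 * k * x * x + 0.5 * p * p)
    return energies

energies = simulate()
# With no port input, the energy gap decays roughly like exp(-gamma * t),
# i.e. exponentially toward the equilibrium at (x, p) = (0, 0).
```

Sampled coarsely (to step over the oscillatory modulation of the envelope), the energy sequence exhibits the geometric decay the local-contraction claim describes.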