Empirical Benchmarks
Cross-source consensus on empirical benchmarks, drawn from 1 source and 6 claims.
Highlighted claims
- In the multi-well study, SHAPE achieved a success rate of 0.602 and best gap of 0.477 over 500 tasks. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- Baselines were evaluated under matched instances, starts, oracle streams, projection or clipping, and total oracle budgets. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- The results support reporting terminal and best-so-far metrics separately. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- The main benchmarks included synthetic functions, Lennard-Jones objectives, phase retrieval, and control trajectory optimization. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- On phase retrieval, SHAPE's averaged full- and mini-batch first-order results showed lower final and best gaps than NAG in the reported table. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
- Across the fixed-budget summary, SHAPE improved best-so-far performance and hit rate on several benchmark families. — When Descent Is Too Stable: Event-Triggered Hamiltonian Learning to Optimize
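The claims above distinguish terminal gaps, best-so-far gaps, and hit (success) rates. As a minimal sketch of how such metrics are typically computed from per-task loss trajectories, the following is an illustrative assumption, not the paper's actual evaluation code; the function name, tolerance, and toy data are hypothetical.

```python
# Hypothetical sketch: terminal gap, best-so-far gap, and hit rate
# over a set of per-task loss trajectories. Values are illustrative.

def summarize(trajectories, f_star=0.0, tol=1e-2):
    """Summarize optimization runs against a known optimum f_star.

    terminal gap   = final loss minus f_star (end-of-run quality)
    best-so-far gap = minimum loss along the trajectory minus f_star
    hit rate       = fraction of tasks whose best gap falls within tol
    """
    terminal_gaps = [traj[-1] - f_star for traj in trajectories]
    best_gaps = [min(traj) - f_star for traj in trajectories]
    hit_rate = sum(g <= tol for g in best_gaps) / len(best_gaps)
    mean = lambda xs: sum(xs) / len(xs)
    return {
        "terminal_gap": mean(terminal_gaps),
        "best_gap": mean(best_gaps),
        "hit_rate": hit_rate,
    }

# Toy example: two tasks; only the first reaches the tolerance,
# and its terminal loss is worse than its best-so-far loss.
runs = [[1.0, 0.5, 0.005, 0.02], [1.0, 0.4, 0.3, 0.3]]
print(summarize(runs))
```

Reporting terminal and best-so-far metrics separately, as the claims recommend, matters precisely when a run overshoots after reaching its best point, as in the first toy trajectory.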