LEAP
Cross-source consensus on LEAP, drawn from 1 source and 5 claims.
Uses
Efficient inference with distilled embedding transformers: LEAP makes such models compatible with convergence-based early exit.
How it works
Pretraining encourages intermediate layers to approximate final-layer representations, aligning them with both the teacher and student final embeddings while preserving embedding quality.
Benefits
Substantial early-exit efficiency at the recommended threshold, with STS-B quality near the baseline and no inference-time parameters added.
Risks & contraindications
Requires retraining; LEAP cannot be added post hoc to an existing distilled checkpoint.
Highlighted claims
- LEAP is introduced to make distilled embedding transformers compatible with convergence-based early exit without adding inference-time parameters (see the first sketch after this list). — LEAP: Layer-wise Exit-Aware Pretraining for Efficient Transformer Inference
- LEAP requires retraining and cannot be added post hoc to an existing distilled checkpoint. — LEAP: Layer-wise Exit-Aware Pretraining for Efficient Transformer Inference
- LEAP modifies training by encouraging intermediate layers to approximate final-layer representations while preserving embedding quality. — LEAP: Layer-wise Exit-Aware Pretraining for Efficient Transformer Inference
- LEAP trains intermediate representations toward both the teacher's final embedding and the student's final embedding (see the second sketch after this list). — LEAP: Layer-wise Exit-Aware Pretraining for Efficient Transformer Inference
- LEAP achieved substantial early-exit efficiency at the recommended threshold while maintaining STS-B quality near the baseline. — LEAP: Layer-wise Exit-Aware Pretraining for Efficient Transformer Inference
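A minimal sketch of convergence-based early exit at inference, assuming the model exposes its embedding layer and transformer layers individually, that "convergence" is measured as cosine similarity between mean-pooled representations of consecutive layers, and that the threshold `tau` stands in for the paper's recommended exit threshold. The function name and its arguments are illustrative, not the paper's API.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def early_exit_embed(embed, layers, input_ids, attention_mask, tau=0.99):
    """Run transformer layers one at a time and stop once consecutive
    pooled representations agree (cosine similarity >= tau). The exit
    test reuses existing activations, so no parameters are added."""
    mask = attention_mask.unsqueeze(-1).to(torch.float32)  # (B, T, 1)

    def pool(h):
        # Masked mean pooling over the token dimension.
        return (h * mask).sum(dim=1) / mask.sum(dim=1)

    h = embed(input_ids)           # (B, T, D)
    prev = pool(h)
    cur = prev
    for layer in layers:           # each layer: (B, T, D) -> (B, T, D)
        h = layer(h)
        cur = pool(h)
        # Exit as soon as every example in the batch has converged.
        if F.cosine_similarity(cur, prev, dim=-1).min() >= tau:
            break
        prev = cur
    return cur
```

Raising `tau` trades speed for fidelity: a stricter threshold runs more layers before exiting.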
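And a minimal sketch of the exit-aware training objective, assuming per-layer alignment terms that pull each intermediate pooled embedding toward both the teacher's final embedding and a detached copy of the student's final embedding. The cosine-distance form, the weights `alpha` and `beta`, and the detach are assumptions of this sketch, not details confirmed by the source.

```python
import torch.nn.functional as F

def exit_aware_loss(layer_embs, student_final, teacher_final,
                    alpha=0.5, beta=0.5):
    """layer_embs: pooled embeddings from each intermediate layer, (B, D).
    student_final, teacher_final: pooled final embeddings, (B, D)."""
    # Detach the student target so the alignment terms shape the
    # intermediate layers without dragging the final layer toward them
    # (one way to preserve final embedding quality; an assumption here).
    student_tgt = student_final.detach()
    loss = 0.0
    for emb in layer_embs:
        # Pull each exit point toward the teacher's final embedding...
        loss = loss + alpha * (1 - F.cosine_similarity(emb, teacher_final, dim=-1)).mean()
        # ...and toward the student's own final embedding.
        loss = loss + beta * (1 - F.cosine_similarity(emb, student_tgt, dim=-1)).mean()
    return loss / len(layer_embs)
```

In practice this term would be added to whatever distillation objective the student already uses, leaving the final-layer training signal unchanged; this is also why the method requires retraining rather than post-hoc application.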