Federated Learning Baselines
Cross-source consensus on Federated Learning Baselines from 1 source and 5 claims.
Highlighted claims
- Synchronous FedAvg is vulnerable to stragglers in cross-facility HPC because the slowest queued facility gates round progress. — FedQueue: Queue-Aware Federated Learning for Cross-Facility HPC Training
- Fully asynchronous federated learning avoids blocking but can aggregate very stale updates when queue times spike. — FedQueue: Queue-Aware Federated Learning for Cross-Facility HPC Training
- FedCompass profiles stable compute-throughput heterogeneity rather than stochastic scheduler admission delay. — FedQueue: Queue-Aware Federated Learning for Cross-Facility HPC Training
- Buffered and semi-asynchronous methods generally rely on bounded staleness assumptions or stale-update filtering. — FedQueue: Queue-Aware Federated Learning for Cross-Facility HPC Training
- FedAsync reached early loss thresholds quickly but plateaued at a worse final loss in the production experiment. — FedQueue: Queue-Aware Federated Learning for Cross-Facility HPC Training
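The trade-offs in the claims above can be made concrete with a minimal sketch of two generic asynchronous-aggregation ideas they reference: staleness-discounted mixing (FedAsync-style polynomial decay) and a bounded-staleness filter (as in buffered/semi-asynchronous methods). This is an illustrative sketch under assumed hyperparameters (`alpha0`, decay exponent `a`, `max_staleness`), not the FedQueue algorithm or any paper's exact update rule.

```python
import numpy as np

def staleness_weight(staleness, alpha0=0.6, a=0.5):
    # Polynomial staleness decay (assumed hyperparameters): the more
    # rounds behind a client's update is, the smaller its mixing weight.
    return alpha0 * (staleness + 1) ** (-a)

def async_aggregate(global_w, client_w, staleness, max_staleness=None):
    # Semi-asynchronous variant: optionally drop updates whose staleness
    # exceeds a bound, mirroring the stale-update filtering mentioned above.
    if max_staleness is not None and staleness > max_staleness:
        return global_w  # reject the update entirely
    alpha = staleness_weight(staleness)
    return (1 - alpha) * global_w + alpha * client_w

w_global = np.zeros(3)
w_client = np.ones(3)

fresh = async_aggregate(w_global, w_client, staleness=0)    # mixed at weight 0.6
stale = async_aggregate(w_global, w_client, staleness=15)   # mixed at weight 0.15
dropped = async_aggregate(w_global, w_client, staleness=15, max_staleness=10)
```

When queue times spike, staleness grows, so the polynomial decay shrinks the contribution of late updates rather than blocking the round; the `max_staleness` bound is the harder cutoff that buffered methods assume.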