Federated Fine-Tuning
Cross-source consensus on Federated Fine-Tuning from 1 source and 5 claims.
Highlighted claims
- Federated fine-tuning adapts large language models to private user or institutional data without centralizing that data. — FED-FSTQ: Fisher-Guided Token Quantization for Communication-Efficient Federated Fine-Tuning of LLMs on Edge Devices
- Cross-device federated deployments are often constrained by uplink communication rather than local computation. — FED-FSTQ: Fisher-Guided Token Quantization for Communication-Efficient Federated Fine-Tuning of LLMs on Edge Devices
- Synchronous federated learning can be bottlenecked by the slowest selected client in each round (see the round-time sketch after this list). — FED-FSTQ: Fisher-Guided Token Quantization for Communication-Efficient Federated Fine-Tuning of LLMs on Edge Devices
- The paper’s system model uses a central server and heterogeneous edge clients with non-IID private data distributions. — FED-FSTQ: Fisher-Guided Token Quantization for Communication-Efficient Federated Fine-Tuning of LLMs on Edge Devices
- Client updates are aggregated with FedAvg weights proportional to local sample counts among selected clients (see the aggregation sketch after this list). — FED-FSTQ: Fisher-Guided Token Quantization for Communication-Efficient Federated Fine-Tuning of LLMs on Edge Devices
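
The last claim describes standard FedAvg aggregation: the server combines the selected clients' updated parameters, weighting each client by its share of the local training samples in that round. Below is a minimal sketch of that weighting; the parameter names, client count, and sample counts are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sample_counts):
    """Weighted FedAvg over the clients selected in this round.

    client_params: list of dicts mapping parameter name -> np.ndarray
    client_sample_counts: list of local sample counts n_k (same order)

    Each client k receives weight n_k / sum_j n_j, i.e. proportional to
    its local sample count among the selected clients.
    """
    total = float(sum(client_sample_counts))
    weights = [n / total for n in client_sample_counts]

    aggregated = {}
    for name in client_params[0]:
        aggregated[name] = sum(
            w * params[name] for w, params in zip(weights, client_params)
        )
    return aggregated

# Illustrative example: three clients with non-IID, differently sized local datasets.
clients = [
    {"adapter_weight": np.ones((2, 2)) * 1.0},
    {"adapter_weight": np.ones((2, 2)) * 2.0},
    {"adapter_weight": np.ones((2, 2)) * 4.0},
]
counts = [100, 300, 600]  # hypothetical local sample counts
global_update = fedavg_aggregate(clients, counts)
print(global_update["adapter_weight"])  # 0.1*1 + 0.3*2 + 0.6*4 = 3.1 everywhere
```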
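
Two other claims concern round structure rather than aggregation: in a synchronous protocol the server waits for every selected client, so the slowest one sets the round time, and on edge devices the uplink transfer of the update often dominates that time rather than local computation. The back-of-the-envelope sketch below illustrates this under assumed per-client numbers; none of the figures are measurements from the paper.

```python
# Rough model of one synchronous round: each selected client's latency is
# local compute time plus uplink time for its update; the server can only
# aggregate once the slowest client has reported back. All numbers below
# are illustrative assumptions.

update_size_mb = 50.0  # assumed size of one client's transmitted update

clients = {
    # name: (local compute seconds, uplink bandwidth in Mbit/s)
    "phone_a":  (20.0, 20.0),
    "phone_b":  (35.0, 5.0),
    "laptop_c": (10.0, 50.0),
}

def client_latency(compute_s, uplink_mbps):
    upload_s = update_size_mb * 8.0 / uplink_mbps  # MB -> Mbit, then divide by rate
    return compute_s, upload_s, compute_s + upload_s

latencies = {name: client_latency(c, b) for name, (c, b) in clients.items()}
for name, (compute_s, upload_s, total_s) in latencies.items():
    print(f"{name}: compute {compute_s:.0f}s + uplink {upload_s:.0f}s = {total_s:.0f}s")

# Synchronous FedAvg: the round ends only when the slowest selected client finishes.
round_time = max(total for _, _, total in latencies.values())
print(f"round time = {round_time:.0f}s (set by the slowest selected client)")
```

In this toy setting the client with the slowest uplink, not the slowest CPU, determines the round time, which is the sense in which cross-device deployments are described as uplink-constrained.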