LoRA
Cross-source consensus on LoRA from 1 source and 5 claims.
Highlighted claims
- LoRA adapters were inserted into the query, key, value, and output projection matrices (a minimal configuration sketch follows this list). — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- Only LoRA matrices and classification head weights were trained while the backbone stayed frozen. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- The LoRA configuration trained about 13.7 million parameters, roughly 0.34% of total model weights (the arithmetic note below works out the implied backbone size). — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- LoRA achieved the best aggregate metrics and fastest training among the compared PEFT methods, although it used the most memory. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- LoRA was interpreted as stronger than IA3 because it modifies weight matrices through low-rank updates instead of only rescaling activations (the formulas below contrast the two update rules). — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
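To make the first two highlighted claims concrete, here is a minimal sketch of that setup using the Hugging Face `peft` library. The claims do not name the paper's tooling, backbone, rank, or head module, so the model identifier, `r`, `lora_alpha`, `num_labels`, and `modules_to_save` values below are illustrative assumptions, not the study's settings.

```python
# Minimal LoRA fine-tuning setup: adapters on the attention projections,
# frozen backbone, trainable classification head. All names and
# hyperparameters are placeholders, not values from the paper.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

backbone = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder backbone, not the study's model
    num_labels=4,                # placeholder label count
)

lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,              # assumed rank
    lora_alpha=16,    # assumed scaling factor
    # Adapters on the query, key, value, and output projections, matching
    # the claim above (module names follow LLaMA-style conventions).
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Keep the classification head trainable alongside the adapters;
    # "score" is the head's name in LLaMA-style sequence classifiers.
    modules_to_save=["score"],
)

# get_peft_model freezes every backbone weight, so only the LoRA
# matrices and the modules_to_save entries receive gradients.
model = get_peft_model(backbone, lora_config)
model.print_trainable_parameters()
```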
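For scale, the third claim's numbers pin down the backbone size: assuming the percentage is taken over all model weights, 13.7 × 10⁶ / 0.0034 ≈ 4.0 × 10⁹, i.e. a backbone on the order of 4 billion parameters.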
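The contrast with IA3 in the last claim comes down to the shape of the learned update. The following are the standard formulations from the original LoRA and IA3 papers, not equations quoted from this source. For a frozen weight matrix $W_0 \in \mathbb{R}^{d \times k}$, LoRA learns a low-rank additive update

$$W = W_0 + \frac{\alpha}{r} B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),$$

whereas IA3 leaves $W_0$ untouched and learns elementwise scaling vectors applied to activations, e.g. $h' = \ell \odot h$ for a learned vector $\ell$. A rank-$r$ update can change the weight matrix itself in $r$ independent directions, while rescaling can only stretch or shrink existing activation dimensions.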