Multi-task Learning
Cross-source consensus on Multi-task Learning from 1 source and 6 claims.
Highlighted claims
- Across six reported tasks, the multi-task LoRA LLM achieved average accuracy 0.981, macro F1 0.976, and AUROC 0.996. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- The model jointly predicted T stage, N stage, M stage, histologic grade, ER, PR, and HER2. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- The multi-task architecture reduced deployment from six models to one and cut trainable parameters relative to single-task LoRA adapters. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- Multi-task learning improved aggregate accuracy and macro F1 compared with independent single-task LoRA adapters. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- The largest multi-task benefit appeared for HER2, where the joint model outperformed the single-task adapter under sample imbalance. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
- The article interprets multi-task learning as statistically beneficial because related clinical variables share narrative context in pathology reports. — Multi-Task LLM with LoRA Fine-Tuning for Automated Cancer Staging and Biomarker Extraction
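The parameter-reduction claim above can be made concrete with a rough back-of-the-envelope calculation. A LoRA adapter for a d_in × d_out weight matrix trains rank × (d_in + d_out) parameters, so one shared multi-task adapter replaces six task-specific adapters at roughly one-sixth the trainable-parameter count. The model width, rank, layer count, and number of adapted projections below are illustrative assumptions, not values reported in the article:

```python
# Hedged sketch: trainable-parameter comparison between six single-task
# LoRA adapters and one shared multi-task adapter. All dimensions here
# are hypothetical, chosen only to show the order of the saving.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for one LoRA pair: A (d_in x rank) + B (rank x d_out)."""
    return rank * (d_in + d_out)

# Assumed setup: 4096-wide model, rank-8 adapters, four adapted
# projection matrices (e.g. q/k/v/o) in each of 32 layers.
d_model, rank, adapted_matrices, layers = 4096, 8, 4, 32

per_adapter = lora_params(d_model, d_model, rank) * adapted_matrices * layers

single_task_total = 6 * per_adapter  # one adapter per clinical variable
multi_task_total = per_adapter       # one shared adapter for all tasks

print(f"six single-task adapters: {single_task_total:,} params")
print(f"one multi-task adapter:   {multi_task_total:,} params")
print(f"reduction factor:         {single_task_total // multi_task_total}x")
```

Under these assumptions the joint model trains about 8.4M adapter parameters versus roughly 50M across six separate adapters; per-task output heads (not counted here) would shave a little off the 6× factor in practice.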