Cross-Modal Retrieval
Cross-source consensus on Cross-Modal Retrieval from 1 source and 5 claims.
Highlighted claims
- Retrieval evaluation used global held-out galleries across all 336 paired test slices rather than within-slice restricted retrieval. — Linking spatial biology and clinical histology via Haiku
- Haiku achieved strong global patch-level cross-modal retrieval on held-out paired data. — Linking spatial biology and clinical histology via Haiku
- Zero-shot 1-nearest-neighbor patch annotation favored Haiku over MUSK and random baselines. — Linking spatial biology and clinical histology via Haiku
- The article interprets strong H&E-to-mIF and mIF-to-H&E retrieval as evidence of learnable correspondences between morphology and spatial protein organization. — Linking spatial biology and clinical histology via Haiku
- Text-to-mIF retrieval was weaker than image-to-image retrieval but still above baseline. — Linking spatial biology and clinical histology via Haiku