scpFormer
Cross-source consensus on scpFormer, drawn from 1 source and 6 claims.
Highlighted claims
- The architecture has 21 million parameters, 12 transformer encoder blocks, hidden dimension 512, and 8 attention heads. — scpFormer: A Foundation Model for Unified Representation and Integration of the Single-Cell Proteomics
- Each protein token combines an ESM-derived semantic identity embedding with a continuous expression-value embedding. — scpFormer: A Foundation Model for Unified Representation and Integration of the Single-Cell Proteomics
- scpFormer replaces index-based protein tokens with continuous sequence-anchored protein identity embeddings and continuous expression-value embeddings. — scpFormer: A Foundation Model for Unified Representation and Integration of the Single-Cell Proteomics
- scpFormer uses a learnable classification token and a transformer encoder to produce contextualized protein embeddings plus a global cell embedding. — scpFormer: A Foundation Model for Unified Representation and Integration of the Single-Cell Proteomics
- The model uses ESM-derived protein embeddings to position proteins by structural and functional similarity, enabling unseen proteins to be incorporated without retraining a discrete vocabulary. — scpFormer: A Foundation Model for Unified Representation and Integration of the Single-Cell Proteomics
- scpFormer is introduced as a foundation model for single-cell proteomics where no comparable foundation model had been established. — scpFormer: A Foundation Model for Unified Representation and Integration of the Single-Cell Proteomics
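The claims above describe the architecture at a high level: each protein token is the sum of an ESM-derived identity embedding and a continuous expression-value embedding, a learnable classification token is prepended, and a 12-layer transformer encoder (hidden dimension 512, 8 heads) produces contextualized protein embeddings plus a global cell embedding. A minimal PyTorch sketch of that design follows; it is an illustrative reconstruction, not the authors' code, and the ESM dimension (1280), the MLP value encoder, and all layer names are assumptions.

```python
import torch
import torch.nn as nn

class ScpFormerSketch(nn.Module):
    """Hypothetical sketch of the scpFormer-style encoder described above."""

    def __init__(self, esm_dim=1280, hidden=512, heads=8, layers=12):
        super().__init__()
        # Project precomputed ESM protein embeddings into model space.
        # Because identity comes from sequence embeddings rather than a
        # discrete index vocabulary, unseen proteins need no retraining.
        self.identity_proj = nn.Linear(esm_dim, hidden)
        # Continuous expression value -> embedding (assumed small MLP).
        self.value_embed = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(), nn.Linear(hidden, hidden)
        )
        # Learnable classification token, prepended to the protein tokens.
        self.cls = nn.Parameter(torch.zeros(1, 1, hidden))
        layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, esm_embs, values):
        # esm_embs: (batch, n_proteins, esm_dim); values: (batch, n_proteins)
        tokens = self.identity_proj(esm_embs) + self.value_embed(
            values.unsqueeze(-1)
        )
        cls = self.cls.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        # CLS position -> global cell embedding; rest -> per-protein embeddings.
        return out[:, 0], out[:, 1:]
```

Note that the exact parameter count (21 million) depends on details not stated in the claims, such as the feed-forward width and the value encoder, so the defaults here will not reproduce it exactly.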