Temporal Masked Autoencoder Pretraining
Cross-source consensus on Temporal Masked Autoencoder Pretraining from 1 source and 5 claims.
Highlighted claims
- The T-MAE decoder reconstructs masked log-Mel patches during pretraining and is discarded before fine-tuning (see the sketches after this list). — Mixed-Precision Information Bottlenecks for On-Device Trait-State Disentanglement in Bipolar Agitation Detection
- T-MAE applies a 75% masking ratio to random 16×16 time-frequency patches of the spectrogram (illustrated in the first sketch below). — Mixed-Precision Information Bottlenecks for On-Device Trait-State Disentanglement in Bipolar Agitation Detection
- T-MAE pretraining is used to improve generalization when labeled data are scarce. — Mixed-Precision Information Bottlenecks for On-Device Trait-State Disentanglement in Bipolar Agitation Detection
- Controlled pretraining experiments found that MP-IB improved from ρ = 0.034 without T-MAE to ρ = 0.117 with T-MAE. — Mixed-Precision Information Bottlenecks for On-Device Trait-State Disentanglement in Bipolar Agitation Detection
- In the component ablation, T-MAE was the single largest contributor. — Mixed-Precision Information Bottlenecks for On-Device Trait-State Disentanglement in Bipolar Agitation Detection
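To make the mechanism in the claims concrete, here is a minimal sketch of one T-MAE pretraining step in PyTorch. The 16×16 patch size and 75% mask ratio come from the claims above; everything else (encoder/decoder widths, layer counts, spectrogram dimensions, the MAE-style shuffle/restore masking) is an illustrative assumption rather than the source paper's exact architecture.

```python
# Minimal T-MAE pretraining sketch, assuming PyTorch and hypothetical
# dimensions (128 Mel bins, 16x16 patches, small Transformer blocks).
import torch
import torch.nn as nn

PATCH = 16              # 16x16 time-frequency patches (from the claim above)
MASK_RATIO = 0.75       # 75% of patches masked (from the claim above)
D_ENC, D_DEC = 192, 96  # assumed embedding widths

def patchify(log_mel):
    """(B, n_mels, T) log-Mel spectrogram -> (B, N, PATCH*PATCH) patches."""
    B, F, T = log_mel.shape
    x = log_mel.unfold(1, PATCH, PATCH).unfold(2, PATCH, PATCH)
    return x.reshape(B, -1, PATCH * PATCH)

class TMAE(nn.Module):
    def __init__(self, num_patches):
        super().__init__()
        self.embed = nn.Linear(PATCH * PATCH, D_ENC)
        self.pos = nn.Parameter(torch.zeros(1, num_patches, D_ENC))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(D_ENC, nhead=4, batch_first=True),
            num_layers=4)
        # Lightweight decoder: used only for pretraining, discarded afterwards.
        self.dec_embed = nn.Linear(D_ENC, D_DEC)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, D_DEC))
        self.dec_pos = nn.Parameter(torch.zeros(1, num_patches, D_DEC))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(D_DEC, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(D_DEC, PATCH * PATCH)  # reconstructs raw patches

    def forward(self, log_mel):
        patches = patchify(log_mel)                  # (B, N, 256)
        B, N, _ = patches.shape
        n_keep = int(N * (1 - MASK_RATIO))

        # Per-sample random shuffle; the first n_keep patches stay visible.
        noise = torch.rand(B, N, device=patches.device)
        shuffle = noise.argsort(dim=1)
        restore = shuffle.argsort(dim=1)
        keep = shuffle[:, :n_keep]

        tokens = self.embed(patches) + self.pos
        visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, D_ENC))
        latent = self.encoder(visible)               # encoder sees visible patches only

        # Decoder input: visible latents + mask tokens, restored to original order.
        dec = self.dec_embed(latent)
        mask_tokens = self.mask_token.expand(B, N - n_keep, -1)
        full = torch.cat([dec, mask_tokens], dim=1)
        full = torch.gather(full, 1, restore.unsqueeze(-1).expand(-1, -1, D_DEC))
        recon = self.head(self.decoder(full + self.dec_pos))

        # Reconstruction loss on the masked patches only, as in standard MAE.
        mask = torch.ones(B, N, device=patches.device)
        mask.scatter_(1, keep, 0.0)
        return (((recon - patches) ** 2).mean(-1) * mask).sum() / mask.sum()

# Usage with a fake batch: 128 Mel bins x 256 frames -> 8x16 = 128 patches.
model = TMAE(num_patches=(128 // PATCH) * (256 // PATCH))
loss = model(torch.randn(2, 128, 256))
loss.backward()
```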
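Continuing the sketch above, this is one plausible shape of the fine-tuning handoff described in the first claim: the decoder, mask token, and reconstruction head are dropped, and only the pretrained patch embedding and encoder are carried forward under a fresh task head. The mean-pooling and single-output head are assumptions for illustration.

```python
class FineTuner(nn.Module):
    """Keeps only the pretrained encoder path; the T-MAE decoder is discarded."""
    def __init__(self, pretrained: TMAE, num_outputs=1):
        super().__init__()
        self.embed = pretrained.embed      # reuse pretrained weights
        self.pos = pretrained.pos
        self.encoder = pretrained.encoder
        self.head = nn.Linear(D_ENC, num_outputs)  # fresh task head

    def forward(self, log_mel):
        tokens = self.embed(patchify(log_mel)) + self.pos  # no masking at fine-tune time
        return self.head(self.encoder(tokens).mean(dim=1)) # mean-pool over patches
```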