Human OSCE-Style Evaluation
Cross-source consensus on Human OSCE-Style Evaluation, drawn from 1 source and 5 claims.
Highlighted claims
- The main human evaluation was randomized and blinded but was an exploratory study rather than a randomized clinical trial. — Advancing conversational diagnostic AI with multimodal reasoning
- The study compared multimodal AMIE against board-certified primary care physicians (PCPs) in synchronous multimodal text-chat consultations. — Advancing conversational diagnostic AI with multimodal reasoning
- The evaluation used 105 multimodal clinical scenarios, each performed once with AMIE and once with a PCP. — Advancing conversational diagnostic AI with multimodal reasoning
- Each consultation was evaluated by three independent specialist physicians matched to the scenario specialty. — Advancing conversational diagnostic AI with multimodal reasoning
- The study included 19 board-certified primary care physicians and 25 validated patient-actors in India and Canada. — Advancing conversational diagnostic AI with multimodal reasoning