Medical Misinformation
Cross-source consensus on Medical Misinformation from 1 source and 5 claims.
Highlighted claims (all from "Generative artificial intelligence-driven chatbots and medical misinformation: an accuracy, referencing and readability audit"):
- Nearly half of all chatbot responses were judged problematic.
- Response quality varied significantly by health category: vaccines and cancer performed best, while nutrition, athletic performance, and stem cells performed worst.
- The two most common sources of problematic answers were consensus mismatch and false balance.
- Open-ended prompts generated more highly problematic responses than closed-ended prompts.
- Even the stronger-performing categories still produced substantial rates of problematic output.