Readability
Cross-source consensus on Readability, drawn from 1 source and 5 claims.
Highlighted claims
All five claims come from a single source: "Generative artificial intelligence-driven chatbots and medical misinformation: an accuracy, referencing and readability audit".

- The chatbot responses were consistently too difficult for broad public health communication.
- All models averaged Flesch Reading Ease scores in the difficult range.
- Gemini was easier to read than Grok, Meta AI, and DeepSeek, although all chatbots remained difficult on average.
- High linguistic complexity can undermine comprehension and increase susceptibility to misinformation.
- Longer chatbot answers may increase confidence even when they do not improve accuracy.
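For context on the metric behind these claims: Flesch Reading Ease is computed as 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), where higher scores are easier and scores below roughly 50 fall in the "difficult" range. A minimal sketch is below; the vowel-group syllable counter is a crude heuristic of our own (the audit does not specify its tooling), so scores will differ somewhat from dictionary-based tools.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).

    Roughly: 60+ reads as plain English, 30-49 is 'difficult',
    below 30 'very difficult'. Syllable counting here is heuristic.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0

    def syllables(word: str) -> int:
        # Crude heuristic: count vowel groups, drop one for a trailing silent 'e'.
        groups = len(re.findall(r"[aeiouy]+", word.lower()))
        if word.lower().endswith("e") and groups > 1:
            groups -= 1
        return max(1, groups)

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (total_syllables / len(words)))
```

Short, common words score high; long, polysyllabic sentences of the kind chatbots tend to produce drive the score down into the difficult range.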