Citation Quality
Cross-source consensus on Citation Quality from 1 source and 5 claims.
Highlighted claims
- No chatbot produced a fully complete and accurate reference list for any prompt. — Generative artificial intelligence-driven chatbots and medical misinformation: an accuracy, referencing and readability audit
- The chatbots returned about 81% of the requested scientific references. — same source
- Citation outputs frequently contained errors, fabrications, hallucinations, broken links, and incomplete elements. — same source
- Grok and DeepSeek produced the highest reference scores and the most complete references among the audited chatbots. — same source
- Some models acknowledged that their generated references may be unreliable or fictional. — same source