Answered By: Gina Brander Last Updated: Dec 31, 2024
While ChatGPT has various practical applications, it is not appropriate for summarizing or synthesizing clinical evidence and research. There are a few reasons for this:
- ChatGPT fabricates information, including bibliographies, citations, and references. While it may provide some accurate references, the chatbot will also generate references and citations that do not exist, because it was designed to mimic human-generated language rather than to make truthful statements. As one researcher notes, "the tool has the capacity to produce output that would be considered untrustworthy at best, and at worst, deceitful."[1]
- ChatGPT is trained on textual data that is neither current nor comprehensive.
- ChatGPT relies on information selected and input by humans, so its outputs can reproduce human biases and prejudices.
AI may become a more reliable research tool as the technology evolves. Until then, the SHA Library encourages you to request research assistance from an SHA clinical librarian.
For more information about ChatGPT in research, please explore the following publications:
- Nature: ChatGPT listed as author on research papers: Many scientists disapprove
- Nature: Tools such as ChatGPT threaten transparent science; here are our ground rules for their use
- Scholarly Kitchen: Guest Post – ChatGPT: Applications in scholarly publishing
- Wired.com: ChatGPT is making universities rethink plagiarism
Reference:
- David P. Did ChatGPT just lie to me? [Internet]. The Scholarly Kitchen: Society for Scholarly Publishing; 2023 Jan 13 [cited 2023 Jul 11]. Available from: https://scholarlykitchen.sspnet.org/2023/01/13/did-chatgpt-just-lie-to-me/#comment-113794