‘Unreliable research assistant’: False outputs from AI chatbots pose risk to science, report says

Published by Euronews (English)

Large language models (LLMs) such as ChatGPT and Bard could pose a threat to science because of their false responses, Oxford AI researchers argue in a new paper that calls for their use in scientific research to be restricted.

LLMs are the deep learning models that power artificial intelligence (AI) chatbots and are capable of generating human-like text.

Researchers from the Oxford Internet Institute say that people are too trusting of these models and treat them as a human-like source of information. “This is, in part, due to the design of LLMs as helpful, human-sounding agents that converse with users and answ…


