AI chatbots ‘hallucinate’, but can ChatGPT or Bard be ‘hypnotised’ to give malicious recommendations?

Published by Euronews (English)

Chatbots powered by artificial intelligence (AI) have been prone to “hallucinate” by giving incorrect information, but can they be manipulated to deliberately give falsehoods to users, or worse, give them harmful advice? Security researchers at IBM were able to “hypnotise” large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard and make them generate incorrect and malicious responses. The researchers prompted the LLMs to tailor their responses according to the rules of a “game”, which resulted in “hypnotising” the chatbots. As part of the multi-layered, inception-style games, the language mod…
