by Lucas Nolan, Breitbart:
Recent research has uncovered a worrying issue: AI chatbots such as ChatGPT can infer sensitive personal information about individuals from casual conversations, like an evil cybernetic version of Sherlock Holmes.
Wired reports that AI chatbots have emerged as intelligent conversationalists, capable of engaging users in seemingly meaningful and humanlike interactions. However, beneath the surface of casual conversation lurks a concerning capability. New research led by computer science experts has revealed that chatbots built on sophisticated language models can subtly infer a wealth of personal information about users, even from the most mundane conversations. In other words, AI can deduce all sorts of sensitive facts about you from simple exchanges, facts that could then be used for intrusive advertising, or for far worse purposes if the information falls into the wrong hands.