Study shows that users can be primed to hold particular beliefs about an AI chatbot’s motives, and that those beliefs shape their interactions with the chatbot.
A person’s prior beliefs about an artificial intelligence agent, such as a chatbot, significantly affect their interactions with that agent and their perceptions of its trustworthiness, empathy, and effectiveness, according to a new study.
Researchers from MIT and Arizona State University found that priming users (telling them that a conversational AI agent for mental health support was empathetic, neutral, or manipulative) influenced their perceptions of the chatbot and shaped how they communicated with it, even though they were all speaking to the exact same chatbot.
Most users who were told the AI agent was caring believed that it was, and they also gave it higher performance ratings than those who believed it was manipulative. At the same time, fewer than half of the users who were told the agent had manipulative motives thought the chatbot was actually malicious, indicating that people may try to “see the good” in AI the same way they do in their fellow humans.
Read more at Massachusetts Institute of Technology