Technology’s power users don’t trust artificial intelligence

Artificial intelligence faces a question of trust it must answer, says a new study, with stark differences emerging between two distinct groups of AI users

People who distrust their fellow humans will have more trust in artificial intelligence (AI), according to a new study, and the intriguing results may have practical implications for designers and users of AI tools in social media.

The study, published in the journal New Media & Society, found that users who consider themselves experienced in information technology trust AI moderators less, because they believe machines lack the ability to detect the nuances of human language.

But researchers found another group of people who were willing to give AI moderators the benefit of the doubt. “We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” says S. Shyam Sundar, co-author and the James P. Jimirro Professor of Media Effects at Penn State University. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”

The study focused on the moderation of social media posts for content such as hate speech and suicidal ideation. Distrust of others and a user's opinion of their own technical prowess predict whether that user will feel positively or negatively towards machines when faced with an AI-based system for content moderation, the report indicates.

These attitudes ultimately influence users' trust in the system, the researchers say, and they suggest that interfaces personalised to such individual differences could improve the user experience.

“One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us,” says report co-author Maria D. Molina, Assistant Professor of Communication Arts and Sciences at Michigan State University. 

“This study may offer a solution to that problem by suggesting that for people who hold negative stereotypes of AI for content moderation, it is important to reinforce human involvement when making a determination. On the other hand, for people with positive stereotypes of machines, we may reinforce the strength of the machine by highlighting elements like the accuracy of AI.”
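
As an illustration, a personalised interface along these lines might select which framing of a moderation decision to show a user based on that user's measured attitude toward machines. The following is a minimal sketch of the idea; the survey score, threshold and message wording are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: tailoring a content-moderation notice to a user's
# attitudes, along the lines Molina describes. The survey score, the
# threshold and the messages are assumptions, not taken from the study.

def moderation_notice(machine_trust: float, decision: str) -> str:
    """Pick a disclosure message for a moderated post.

    machine_trust: assumed 0-1 score from a short onboarding survey,
    where low values indicate negative stereotypes of AI moderators.
    decision: the moderation outcome to report, e.g. "flagged".
    """
    if machine_trust < 0.5:
        # Sceptical users: reinforce the human involvement in the decision.
        return (f"This post was {decision} with AI assistance, "
                "and the decision was reviewed by human moderators.")
    # Users with positive stereotypes of machines: highlight AI accuracy.
    return (f"This post was {decision} by our AI system, "
            "which detects policy violations with high accuracy.")


print(moderation_notice(0.3, "flagged"))  # human-involvement framing
print(moderation_notice(0.8, "flagged"))  # machine-accuracy framing
```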

Customised AI experiences could help build trust in automated systems

The study involved 676 participants based in the United States, who were told they were helping to test a content moderation system in development. They were given definitions of hate speech and suicidal ideation, followed by one of four different social media posts. Participants were also told whether the decision to flag the post or not was made by AI, a human, or a combination of both.
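
The setup reads as a standard between-subjects experiment crossing four posts with three attributed sources. A minimal sketch of how such random assignment might look follows; the post labels are placeholders, not the study's actual stimuli.

```python
import random

# Hypothetical reconstruction of the between-subjects design described
# above: each participant sees one of four posts, attributed to one of
# three moderation sources. Post labels are placeholders, not the
# study's real stimuli.
POSTS = ["post_1", "post_2", "post_3", "post_4"]
SOURCES = ["AI", "human", "AI and human together"]

def assign_condition() -> tuple[str, str]:
    """Randomly assign a participant to one of the 12 conditions."""
    return random.choice(POSTS), random.choice(SOURCES)

post, source = assign_condition()
print(f"Shown {post}; flagging decision attributed to: {source}")
```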

“We are bombarded with so much problematic content, from misinformation to hate speech,” says Molina. “But, at the end of the day, it’s about how we can help users calibrate their trust toward AI due to the actual attributes of the technology, rather than being swayed by those individual differences.”

Molina and Sundar say their results may help shape future acceptance of AI. By creating systems customised to the user, designers could alleviate scepticism and distrust, and build appropriate reliance on AI.

“A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems,” says Sundar. “Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process.”
