Technology’s power users don’t trust artificial intelligence

Artificial intelligence faces a question of trust, says a new report, with stark differences emerging between two distinct groups of AI users

People who distrust their fellow humans tend to place more trust in artificial intelligence (AI), according to a new study, and the intriguing results may have practical implications for designers and users of AI tools in social media.

The study, published in the journal New Media & Society, found that users who consider themselves experienced in information technology trust AI moderators less, because they believe machines lack the ability to detect the nuances of human language.

But researchers found another group of people who were willing to give AI moderators the benefit of the doubt. “We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” says S. Shyam Sundar, co-author and the James P. Jimirro Professor of Media Effects at Penn State University. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”

The study focused on the content moderation of social media posts, flagging material such as hate speech and talk of suicide. Distrust of others and a user's confidence in their own technical prowess predict whether users will have positive or negative feelings towards machines when faced with an AI-based system for content moderation, the report indicates.

These feelings ultimately influence users' trust in the system, say the researchers, who suggest that interfaces personalised to individual differences could improve the user experience.

“One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us,” says report co-author Maria D. Molina, Assistant Professor of Communication Arts and Sciences at Michigan State University. 

“This study may offer a solution to that problem by suggesting that for people who hold negative stereotypes of AI for content moderation, it is important to reinforce human involvement when making a determination. On the other hand, for people with positive stereotypes of machines, we may reinforce the strength of the machine by highlighting elements like the accuracy of AI.”

Custom AI experience could help build trust in automated systems

The study involved 676 participants based in the United States who were told they were helping test a content moderation system in development. They were given definitions of hate speech and suicidal ideation, followed by one of four different social media posts. Participants were also told whether the decision to flag the post or not was made by AI, a human or a combination of the two.

“We are bombarded with so much problematic content, from misinformation to hate speech,” says Molina. “But, at the end of the day, it’s about how we can help users calibrate their trust toward AI due to the actual attributes of the technology, rather than being swayed by those individual differences.”

Molina and Sundar say their results may help shape future acceptance of AI. By creating systems customised to the user, designers could alleviate scepticism and distrust, and build appropriate reliance on AI.

“A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems,” says Sundar. “Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process.”
