Technology’s power users don’t trust artificial intelligence

Artificial intelligence faces a question of trust, says a new study, with stark differences emerging between two distinct groups of AI users

People who distrust their fellow humans tend to place more trust in artificial intelligence (AI), according to a new study, and the intriguing results may have practical implications for designers and users of AI tools on social media.

The study, published in the journal New Media & Society, found that users who consider themselves experienced in information technology trust AI moderators less, because they believe machines lack the ability to detect the nuances of human language.

But researchers found another group of people who were willing to give AI moderators the benefit of the doubt. “We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” says S. Shyam Sundar, co-author and the James P. Jimirro Professor of Media Effects at Penn State University. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”

The study focused on the moderation of social media posts for hate speech and suicidal ideation. Distrust of others and a user's confidence in their own technical prowess predicted whether they felt positively or negatively towards machines when presented with an AI-based content moderation system, the report indicates.

These attitudes ultimately influence trust in the system, say the researchers, who suggest that personalised interfaces built around such individual differences could improve user experiences.

“One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us,” says report co-author Maria D. Molina, Assistant Professor of Communication Arts and Sciences at Michigan State University. 

“This study may offer a solution to that problem by suggesting that for people who hold negative stereotypes of AI for content moderation, it is important to reinforce human involvement when making a determination. On the other hand, for people with positive stereotypes of machines, we may reinforce the strength of the machine by highlighting elements like the accuracy of AI.”
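
To make that recommendation concrete, here is a minimal, purely illustrative sketch in Python of how a moderation interface might tailor its explanatory message to a user's prior attitudes. The profile fields, thresholds, and message wording are all hypothetical assumptions for illustration; none of them come from the study itself.

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    # Hypothetical stand-ins for the individual differences measured
    # in the study: distrust of other people and self-assessed tech
    # expertise, each scaled 0.0 (low) to 1.0 (high).
    distrust_of_others: float
    tech_expertise: float


def moderation_notice(profile: UserProfile, source: str) -> str:
    """Return a flagging message tailored to the user's likely AI attitudes.

    Following the researchers' suggestion: emphasise human involvement
    for likely sceptics (e.g. experienced tech users), and emphasise
    machine accuracy for those inclined to trust AI (e.g. users with
    low trust in other people). Thresholds here are arbitrary.
    """
    if profile.tech_expertise > 0.7:
        # Likely sceptical that AI can parse linguistic nuance:
        # reinforce human involvement in the decision.
        return (f"This post was flagged by {source}. "
                "A trained human reviewer confirmed the decision.")
    if profile.distrust_of_others > 0.7:
        # Likely to hold positive machine stereotypes:
        # reinforce the accuracy and objectivity of the system.
        return (f"This post was flagged by {source}, "
                "an automated system audited for accuracy and consistency.")
    # Default: a neutral, balanced framing.
    return f"This post was flagged by {source} under our community guidelines."


# Example usage with hypothetical values:
print(moderation_notice(
    UserProfile(distrust_of_others=0.9, tech_expertise=0.3),
    "an AI classifier"))
```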

Custom AI experiences could help build trust in automated systems

The study involved 676 participants based in the United States, who were told they were helping test a content moderation system in development. They were given definitions of hate speech and suicidal ideation, then shown one of four different social media posts. Participants were also told whether the decision to flag the post or not had been made by AI, a human, or a combination of both.

“We are bombarded with so much problematic content, from misinformation to hate speech,” says Molina. “But, at the end of the day, it’s about how we can help users calibrate their trust toward AI due to the actual attributes of the technology, rather than being swayed by those individual differences.”

Molina and Sundar say their results may help shape future acceptance of AI. By creating systems customised to the user, designers could alleviate scepticism and distrust, and build appropriate reliance on AI.

“A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems,” says Sundar. “Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process.”
