Deceptive AI ad campaigns can cause confusion for consumers

Study finds only half of AI-generated ads are open about the fact they were created by bots, and some campaigns may not comply with FCC and FTC guidelines

Ad campaigns designed by artificial intelligence could ignore guidelines set forth by agencies such as the FCC and FTC, trick consumers into false beliefs, and cause confusion and dissatisfaction, new research reveals.

A University of Kansas study analysed more than 1,000 AI-generated ads from across the web and found that only about half were labelled as ads, and that they deliberately use positive appeals to influence consumers.

The technology has the potential to influence consumer behaviour and decisions without viewers understanding whether the content was an advertisement or whether it was developed by humans or bots. The prevalence of AI in programmatic advertising shows how widely the technology has been adopted, and that it can skirt guidelines that human-developed ads have to follow, according to the researchers.

“AI is not just a passive technology anymore,” says Vaibhav Diwanji, Assistant Professor of Journalism & Mass Communications. “It’s actively being engaged in what we think — and in a way, how we make our decisions. The process has become more automated and is taking over the role of creative content online.”

Diwanji was the lead author of a study that analysed 1,375 AI-generated programmatic ads found on social media, news sites, search engines and video platforms. The study, written with Jaejin Lee and Juliann Cortese of Florida State University, was published in the Journal of Strategic Marketing. 

AI content may not be your friend

AI-generated ads are created by algorithms to develop contextual and personalised content for an individual based on their internet usage and demographics. The research team analysed the ads to better understand if they are labelled as ads, what appeals they made to consumers and how they used sentiment. Only about half of the ads were clearly labelled as such, meaning people frequently see content they might believe is organic, such as a post by a friend on social media or a news item.

The primary problem with that lack of transparency is that humans creating advertising content must follow guidelines set forth by agencies such as the FCC and FTC. Today, AI is not bound by such restrictions, says Diwanji.

“Higher levels of nondisclosure in AI-enabled ad content, similar to native advertising, would be likely to cause consumer deception, tricking them into false beliefs, confusion or dissatisfaction,” says Diwanji. As the researchers wrote in the study: “At its core, AI-enabled advertising should be a fine balance between providing consumers with clear source disclosure and offering content that meshes with and provides value similar to the context in which it is placed.”

In terms of approach, the ads tended to be positive in their appeals, framing the good or service in a favourable rather than negative or neutral light. They also tended to focus on the consumer and the benefit the individual could experience from what was being sold. Analysis showed that ads on social media disclosed sponsorship most frequently, while news and publishing sites labelled them least frequently.

“You leave your footprint wherever you go online, and this is one more way for advertisers to try to influence your purchasing decisions,” says Diwanji. “It’s interesting how AI has evolved from a tool people could use into something that acts unprompted. Only about half of the ads we saw revealed their brand sponsorship. From an ethical standpoint, you’re showing us sponsored content but not telling us. That can create a conflict.”

AI-generated programmatic ads can also be developed much faster than human-generated ads. And with creative optimisation, they could be far more effective in their appeals than traditional ads. While that may be good for businesses’ bottom lines, it could be deceptive and potentially threaten jobs in creative industries, including advertising. And when ads are not clearly labelled, AI can place them higher in the results of search engines, leading people to click without realising the link leads to sponsored content. For those reasons, the authors argue that FTC guidelines and federal policy should be updated to require more transparency of AI-generated advertising.

“It’s not wrong to use AI,” says Diwanji. “It’s just important that you disclose that in an ad or marketing appeal. When humans create content, they are bound by guidelines of the FCC, FTC and others. If you’re not told it’s AI-sponsored content, it could influence your decisions outside of those restrictions.”
