Life-enriching AI tech could have a negative impact on users

By Stefano Puntoni
Marketers should be given a greater role in developing AI and tech-heavy products and services...

In 2018 Amazon’s Echo smart tech devices hit headlines around the world – but for all the wrong reasons. A US-based customer became the unwitting victim of a privacy breach when the devices she’d installed throughout her home recorded a private conversation she’d had with her husband without her knowledge or consent, and then sent it to a contact in her phone.

Speaking to a local TV news station she said she’d felt “a total privacy invasion” from the experience. When she complained, Amazon concluded the breach had been due to a highly unlikely malfunction and promised to investigate. It also offered to de-provision her communications with Echo’s virtual assistant, Alexa, so that she could retain the smart home functionality of the devices without needing to link her personal account to the service. However, the customer was far from reassured and instead removed the devices from her home altogether. “I’m never plugging that device in again because I can’t trust it,” she said.

Not long ago, AI existed strictly within the realms of science fiction. Today it changes how its users live their day-to-day lives: influencing their lifestyle via fitness trackers, their personal lives via social media and dating apps, their music choices and exposure via playlists suggested by data capture and algorithms and, as Amazon’s unfortunate US customer originally intended, how they run their homes.

When marketing products and services to customers – particularly those that require customers to share personal data – establishing and maintaining trust is paramount for the organisations that develop and deliver them. However, as customer adoption of modern products and services becomes increasingly dictated by technological capability, organisations feel pressured to offer the latest innovations in order to remain competitive and desirable. Though this heavy focus on technological development creates smart products, it can come at a cost for customers when capability is prioritised over user experience.

An article I recently published with my colleagues Prof Rebecca Walker Reczek of Ohio State University, Prof Markus Giesler of York University in Canada and Prof Simona Botti of London Business School, focuses on the growing influence these advanced technologies have on our day-to-day lives, and the lived realities of the customer experience. 

We argue that technology companies are continually required to find new ways to make monitoring and surveillance palatable to consumers by linking them to convenience, productivity, safety, or health and wellbeing, while constantly pushing the boundaries of what private information consumers will share through a complex landscape of notifications, reminders and nudges intended to initiate behavioural change. In this way, AI can transform consumers into subjects who are complicit in the commercial exploitation of their own private experience.

And this is where the problems occur. A gulf exists between a product’s capabilities and the expectations and experiences of its users. The biggest reason is a lack of human-centred input at the development level. Software developers position AI as a neutral tool, judging its success by efficiency and accuracy. However, this approach does little to consider the social and individual challenges that can occur when such AI is deployed in lifestyle-enhancing products.

To bridge this gulf, firms need to develop a customer-centric view of AI that focuses not just on its technological capabilities, but also on how these are actually experienced by consumers – the potential costs as well as the benefits.

In investigating this challenge, my co-researchers and I developed a framework that separates out the four core experiences consumers have with AI:

  • Data capture – AI providing users with access to a customised service, for example a local weather report
  • Classification – AI making recommendations based on a user’s previous behaviour and the common characteristics of other users in their demographic
  • Delegation – AI performing tasks on behalf of users, such as Siri searching for a phone number or making a call
  • Social – AI facilitating communication through humanised services such as chatbots

The framework breaks these experiences down to identify where the sociological and psychological tensions occur. For example, AI can capture and analyse personal data from social media users to make advertising recommendations, but it crosses an ethical line when those recommendations infiltrate what users believe to be a private experience. Or, in a social context, a chatbot may fail to grasp the sensitivity or urgency of the information shared with it and respond with a tone-deaf reply, causing further aggravation for the customer.

Such scenarios can cause upset and mistrust that are hard to recover from, as Amazon experienced with its very dissatisfied US customer. Assurances of further tweaking the technology did little to put the customer’s mind at ease.

So how can these challenges be solved? Our research, published in the Journal of Marketing, suggests that marketers should be given a greater role in developing AI and tech-heavy products and services, to better cater to the user experience. Enabling the technological expertise of software designers – whose focus is to create highly capable technology – to work more closely with the human-focused values of marketers – whose priority is to ensure a meaningful consumer experience – would help to bridge the divide, ensuring consumers’ wellbeing is well-considered at every level of development.

Doing so would help AI developers realise that their algorithms are not neutral tools but are in fact inherently political, and would encourage them to question their own design, deployment and evaluation decisions more rigorously.

But this is only part of the battle. Though some organisations are beginning to create ethical guidelines around AI’s use, these efforts do not specifically carve out a role for marketers. Neither do guidelines on ethical AI use produced by bodies such as the European Commission. Similarly, official bodies such as the American Marketing Association do not currently include AI’s use in their codes of conduct. However, such considerations are vital as our dependency on technology and the capabilities of AI continue to grow.

By Stefano Puntoni, Professor of Marketing at Rotterdam School of Management, Erasmus University (RSM)
