Life-enriching AI tech could have a negative impact on users
In 2018, Amazon’s Echo smart home devices made headlines – but for all the wrong reasons. A US-based customer became the unwitting victim of a privacy breach when the devices she’d installed throughout her home recorded a private conversation she’d had with her husband, without her knowledge or consent, and then sent it to a contact in her phone.
Speaking about the incident, she said she’d felt “a total privacy invasion” from the experience. Upon receiving her complaint, Amazon concluded the breach had been due to a highly unlikely malfunction and promised to investigate. They also offered to de-provision her communications with Echo’s virtual assistant, Alexa, so that she could retain the smart home functionality of the devices without needing to link her personal account to the service. However, the customer was far from reassured and instead removed the devices from her home altogether. “I’m never plugging that device in again because I can’t trust it,” she said.
Not long ago, AI existed strictly within the realms of science fiction. Today it changes how its users live their day-to-day lives: influencing their lifestyle via fitness trackers, their personal lives via social media and dating apps, their music choices via playlists suggested by data capture and algorithms and, as Amazon’s unfortunate US customer originally intended, how they run their homes.
When marketing products and services to customers – particularly those that require customers to share personal data – establishing and maintaining trust is paramount for the organisations that develop and deliver them. However, as customers’ adoption of modern products and services becomes increasingly dictated by technological capability, organisations feel pressured to offer the latest innovations in order to remain competitive and desirable. Though this heavy focus on technological development creates smart products, it can come at a cost for customers when capability is prioritised over user experience.
An article I recently published with my colleagues Prof Rebecca Walker Reczek of Ohio State University, Prof Markus Giesler of York University in Canada and Prof Simona Botti of London Business School, focuses on the growing influence these advanced technologies have on our day-to-day lives, and the lived realities of the customer experience.
We argue that, whilst technology companies are continually required to find new ways to make monitoring and surveillance palatable to consumers by linking it to convenience, productivity, safety, or health and well-being, they must also constantly push the boundaries of what private information consumers should share through a complex landscape of notifications, reminders, and nudges intended to initiate behavioural change. Thus, AI can transform consumers into subjects who are complicit in the commercial exploitation of their own private experience.
And this is where the problems occur. A gulf exists between a product’s capabilities and the expectations and experiences of users. The biggest reason is a lack of human influence at the development level. Software developers position AI as a neutral tool, judging its success by its efficiency and accuracy. However, this approach does very little to consider the social and individual challenges that can occur when such AI is deployed in lifestyle-enhancing products.
To bridge this gulf, firms need to develop a customer-centric view of AI that focuses not just on its technological capabilities, but also on how these are actually experienced by consumers – not just the benefits but also the potential costs.
In investigating this challenge, my co-researchers and I developed a framework that separates out the four core experiences consumers have with AI:
- Data capture – AI providing users with access to a customised service, for example a local weather report
- Classification – AI making recommendations based on a user’s previous behaviour and the common characteristics of other users in their demographic
- Delegation – AI performing tasks on behalf of users, such as Siri searching for a phone number or making a call
- Social – AI facilitating communication through humanised services such as chatbots
The framework then breaks these down to identify where the sociological and psychological tensions occur. For example, AI can capture and analyse personal data from social media users to make advertising recommendations, but it crosses an ethical line when those recommendations infiltrate what users believe to be a private experience. Or, in a social context, a chatbot may fail to grasp the sensitivity or urgency of the information shared with it and respond with a tone-deaf reply, causing further aggravation for the customer.
Such scenarios can cause upset and mistrust that can be hard to recover from, as Amazon experienced with their very dissatisfied US customer. Assurances of further tweaks to the technology did little to put the customer’s mind at ease.
So how can these challenges be solved? Our research suggests that marketers should be given a greater role in developing AI and tech-heavy products and services, to better cater to the user experience. Enabling software designers, whose focus is to create highly capable technology, to work more closely with marketers, whose priority is to ensure a meaningful consumer experience, would help to bridge the divide, ensuring consumers’ wellbeing is well-considered at every level of development.
Doing so would help AI developers realise their algorithms are not neutral tools but are in fact inherently political, and would encourage them to question more carefully how those algorithms are designed, deployed and evaluated.
But this is only part of the battle. Though some organisations are beginning to create ethical guidelines around AI’s use, these efforts do not specifically carve out a role for marketers. Neither do the guidelines on ethical AI use produced by bodies such as the European Commission. Similarly, official bodies such as the American Marketing Association do not currently cover AI’s use in their codes of conduct. Yet such considerations are vital as our dependency on technology and the capabilities of AI continue to grow.
By Stefano Puntoni, Professor of Marketing at Rotterdam School of Management, Erasmus University (RSM)
The advantages and disadvantages of AI in cloud computing
Cloud computing offers businesses more flexibility, agility, and cost savings by hosting data and applications in the cloud. AI capabilities are now combining with cloud computing and helping companies manage their data, look for patterns and insights in information, deliver customer experiences, and optimise workflows.
We take a look at some of the benefits and drawbacks of AI in cloud computing.
The benefits of AI in cloud computing
Lower costs

A major advantage of cloud computing is that it eliminates costs related to on-site data centres, such as hardware and maintenance. Those upfront costs can be prohibitive for AI projects, but in the cloud, enterprises can access these tools for a monthly fee, making research and development costs more manageable. AI tools can also analyse data and draw insights from it without human intervention, reducing staff costs.
Intelligent insights

AI is able to identify patterns and trends in large data sets, comparing historical data with the most recent data to provide IT teams with well-informed, data-backed intelligence. AI tools can also perform data analysis quickly, so enterprises can address customer queries and issues rapidly and efficiently. The observations and recommendations gained from AI capabilities lead to quicker, more accurate outcomes.
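As a minimal sketch of the historical-versus-recent comparison described above (the function name, ticket counts and threshold are all illustrative, not from any specific cloud service):

```python
from statistics import mean, stdev

def flag_recent_shift(historical, recent, z_threshold=2.0):
    """Compare recent observations against a historical baseline.

    Returns True when the recent mean deviates from the historical
    mean by more than z_threshold standard errors -- a crude signal
    that a trend has shifted and deserves an analyst's attention.
    """
    mu = mean(historical)
    sigma = stdev(historical)
    if sigma == 0:
        return mean(recent) != mu
    z = abs(mean(recent) - mu) / (sigma / len(recent) ** 0.5)
    return z > z_threshold

# Daily support-ticket counts: a stable history, then a recent spike.
history = [100, 98, 103, 101, 99, 102, 100, 97]
recent = [130, 128, 135]
print(flag_recent_shift(history, recent))  # → True
```

Real cloud AI services model trends far more richly than a z-test, but the core idea is the same: learn what "normal" looks like from history, then surface deviations to the IT team.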
Improved data management
AI enables extensive data management, while cloud computing maximises information security. Together they make it possible to process massive amounts of data in a programmed manner and analyse it properly, allowing businesses to leverage information that has been “mined” and filtered to meet each need. AI can also be used to transfer data between on-premises and cloud environments.
Automation

Businesses use AI-driven cloud computing to become more efficient and insight-driven. AI can automate repetitive tasks to boost productivity and perform data analysis without human intervention, and IT teams can also use it to manage and monitor core workflows, freeing them to focus on strategic operations while AI handles the mundane tasks.
Increased security

With businesses deploying more applications in the cloud, security is crucial for keeping data safe. IT teams can use AI-powered network security tools that track network traffic and flag issues, such as anomalies, as they are found.
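A toy illustration of that anomaly-flagging idea, assuming nothing about any particular security product (the host names and traffic figures are made up):

```python
from statistics import median

def flag_anomalous_hosts(bytes_per_host, factor=5.0):
    """Flag hosts whose traffic volume sits far above the fleet norm.

    Uses the median and median absolute deviation (MAD), which stay
    robust even when the outlier itself would skew a mean/stdev
    baseline. Real AI security tools model traffic far more richly;
    this shows only the baseline-and-flag idea.
    """
    volumes = list(bytes_per_host.values())
    med = median(volumes)
    mad = median(abs(v - med) for v in volumes)  # robust spread estimate
    cutoff = med + factor * max(mad, 1)          # avoid a zero-width cutoff
    return [host for host, vol in bytes_per_host.items() if vol > cutoff]

traffic = {
    "web-01": 1_200, "web-02": 1_150, "db-01": 1_300,
    "app-01": 1_250, "app-02": 1_180,
    "lap-17": 48_000,  # a workstation suddenly moving far more data
}
print(flag_anomalous_hosts(traffic))  # → ['lap-17']
```

The design choice worth noting is the robust baseline: a mean-and-standard-deviation cutoff can be dragged upward by the very outlier it is meant to catch, whereas the median-based cutoff is not.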
The drawbacks of AI in cloud computing
Data privacy

Enterprises need to create privacy policies and secure all data when using AI in cloud computing. AI applications require large amounts of data, which can include consumer and vendor information. While some data can be anonymised so that it can’t be tied to personally identifiable information, knowing who the data belongs to makes it more valuable, so when sensitive information is used, data protection and compliance are major concerns.
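One common mitigation is to pseudonymise direct identifiers before records ever leave the premises. A minimal sketch, under assumed field names (the key, fields and record below are all hypothetical):

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-keep-on-premises"  # never ship to the cloud

def pseudonymise(record, pii_fields=("name", "email")):
    """Replace direct identifiers with keyed hashes before cloud upload.

    A keyed HMAC (rather than a plain hash) prevents third parties
    from reversing common values such as email addresses by brute
    force. The mapping back to real identities stays on-premises
    with whoever holds the key.
    """
    safe = dict(record)
    for field in pii_fields:
        if field in safe:
            digest = hmac.new(SECRET_KEY, str(safe[field]).encode(), hashlib.sha256)
            safe[field] = digest.hexdigest()[:16]
    return safe

customer = {"name": "Ada Lovelace", "email": "ada@example.com", "spend": 1042}
print(pseudonymise(customer))  # identifiers hashed, "spend" untouched
```

Because the hashing is deterministic, the same customer still maps to the same pseudonym across uploads, so cloud-side analytics can link records without ever seeing the underlying identity.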
Connectivity concerns

IT teams use the internet to send raw data to the cloud service and retrieve processed data. Since cloud-based machine learning systems need consistent internet connectivity, poor internet access can undermine their advantages.
Latency

While processing data in the cloud is quicker than conventional computing, there is a time lag between transmitting data to the cloud and receiving a response. This is a significant issue when machine learning predictions are served from cloud servers and prediction speed is a primary concern.
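The trade-off is easy to see in a toy timing comparison (a sketch only: the “model” and the 50 ms round trip are both made up, with the network hop simulated by a sleep):

```python
import time

def local_predict(x):
    return x * 2  # a trivially fast on-device model

def cloud_predict(x, round_trip_s=0.05):
    time.sleep(round_trip_s)  # simulated network round trip
    return x * 2              # same model, now behind a network hop

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

_, local_s = timed(local_predict, 21)
_, cloud_s = timed(cloud_predict, 21)
print(f"local: {local_s * 1000:.2f} ms, cloud: {cloud_s * 1000:.2f} ms")
```

Even when the cloud model itself computes faster, the round trip puts a floor under the end-to-end response time, which is why latency-critical predictions are often pushed to the edge or kept on-device.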