Oct 29, 2020

LEF: We need the critical literacy to make AI accountable

Dr Caitlin McDonald
Twitter’s recent u-turn over its image AI should make us think, says LEF’s Dr Caitlin McDonald. Trust is central to the infrastructure of the digital age.

“Trust is the critical competitive factor of the future,” says our guest Tim Gordon on Leading Edge Forum’s podcast series “Growing Digital Ethics in Practice.” Our lives are increasingly enmeshed with data-driven processes that we don’t even see: data is the infrastructure of the digital age.

To keep the digital equivalent of the lights on and the taps running, businesses need an enormous amount of our data to flow through their decision-making systems. Highly trusted organisations will find it easy to obtain the data they need, while low-trust businesses will find themselves hindered by reputational damage, increasingly stringent regulatory frameworks, or simply the higher operational cost of obtaining the data they need to function.

Amazon, for example, recently announced it will pay people for data about their non-Amazon purchases. Businesses with a widening trust gap between themselves and their customers will find it harder and harder to keep up with the competition. Critically, trust exists in the relationship between the organisation and the service user, not in the technical system itself.

Trust in AI: ‘Twitter had tested the tool for racial and gender bias’

Machines have no agency; only people can be held accountable for problematic ethical decisions – and only people can work to correct those problems. A great example of this in practice is Twitter’s recent response to a massive outcry over photo cropping in the Twitter timeline: despite Twitter having tested the photo-cropping functionality for racial and gender bias before release, users found that the cropping tool displayed these biases in production.

Twitter reviewed the problem and acknowledged that although its tests hadn’t shown any bias, the tests themselves failed to capture the bias that clearly existed in production. To address this, the company is removing some of the machine-learning automation from the photo-cropping tool and relying more on users’ own choices about where to focus the image. AI as an abstract concept can’t be held to account, but the team responsible for managing the tool absolutely can – and in this case, it responded appropriately.

Trust in AI: ‘We don’t have airbags on the outside of our cars’

We think there are unique ethical concerns when we automate decision-making systems, but most often these simply reflect existing, non-digitised ethical dilemmas. Researcher Joanna Bryson has pointed out that people often ask whether self-driving cars should prioritise the driver or the pedestrian in an emergency, when in reality existing, non-self-driving cars already prioritise the driver: we don’t have airbags on the outside of our cars now, and nobody ever questions that.

Ethics is not a straightforward process; there often isn’t one right answer, especially when different cultural values and priorities are taken into account. Often it’s about landing on the least-worst option and being able to demonstrate how that decision was made, so that others can critically examine it.

Machines may not be any better than humans at ethical decision making but one of the great opportunities of trying to automate a tricky problem is that it often leads to recognition of an existing systemic bias. This can in turn lead to wider organisational and societal conversations about developing more equitable outcomes for all stakeholders – and if there’s one guiding principle of ethics which cuts across most if not all ethical frameworks, it’s fairness.

Trust in AI: ‘Leaders are starting to coalesce around some universally recognisable principles’

We’ve reached a point where business and civic sector leaders recognise the importance and urgency of AI ethics issues, and the proliferation of AI ethics models, toolkits, brainstorming tools and so on is starting to coalesce around some universally recognisable principles like fairness, accountability, and transparency. The next stage is building structured accountability mechanisms for AI ethics, similar to the role that auditing plays in the financial sector: checking whether agreed industry-wide principles and regulatory frameworks are being used in practice.

We’re already starting to see these kinds of mechanisms emerging, both at the technical model level for data scientists and technical practitioners to use in their day-to-day systems building, and at the organisational level for business leaders and board members to build the right governance structures.

Trust in AI: ‘We might not all be data scientists, but every one of us is impacted by AI’

There is also an increasing grassroots movement to help everyone develop the confidence to understand and interrogate the automated decision-making systems that impact our lives. We might not all be data scientists, but every one of us is impacted by AI.

We need the critical literacy to demand accountability from the people building these systems. This can sound intimidating, but it doesn’t always require deep technical expertise: as the Twitter example shows, recognising what’s wrong and pushing for change is something we should all feel empowered to do.

Dr Caitlin McDonald is the Leading Edge Forum’s resident digital anthropologist. LEF is a global research and thought leadership programme.


Jun 15, 2021

The advantages and disadvantages of AI in cloud computing

AI is increasingly being combined with cloud computing, which lets client devices access data and applications remotely over the internet – but what are the pros and cons?

Cloud computing offers businesses more flexibility, agility, and cost savings by hosting data and applications in the cloud. AI capabilities are now combining with cloud computing and helping companies manage their data, look for patterns and insights in information, deliver customer experiences, and optimise workflows.

We take a look at some of the benefits and drawbacks of AI in cloud computing. 

The benefits of AI in cloud computing


Lower costs

A major advantage of cloud computing is that it eliminates costs related to on-site data centres, such as hardware and maintenance. Those upfront costs can be prohibitive for AI projects, but with cloud services enterprises can access AI tools for a monthly fee, making research and development costs more manageable. AI tools can also analyse data and draw insights from it without human intervention, reducing staff costs.

Deeper insights 

AI can identify patterns and trends in large data sets. By comparing historical data with the most recent data, it provides IT teams with well-informed, data-backed intelligence. AI tools can also perform data analysis quickly, so enterprises can address customer queries and issues rapidly and efficiently, resulting in faster and more accurate outcomes.
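The idea of comparing recent data against a historical baseline can be sketched in a few lines. This is a toy illustration only – the metric and all numbers are made up, and real AI tooling would use far more sophisticated models:

```python
# Toy sketch: flag recent values that deviate sharply from a
# historical baseline. All numbers are invented for illustration.
from statistics import mean, stdev

historical = [102, 98, 101, 99, 103, 100, 97, 102]  # e.g. daily query volumes
recent = [104, 99, 131, 101]

baseline, spread = mean(historical), stdev(historical)

# A value more than 3 standard deviations from the baseline is
# surfaced for the IT team to investigate.
flagged = [x for x in recent if abs(x - baseline) > 3 * spread]
print(flagged)  # → [131]
```

The same principle – learn what “normal” looks like from history, then flag departures from it – underlies much of the data-backed intelligence described above.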

Improved data management

AI enables extensive data management, while cloud computing maximises information security. Together they make it possible to process massive amounts of data in a programmed manner and analyse it properly, allowing the business to leverage information that has been “mined” and filtered to meet each need. AI can also be used to transfer data between on-premises and cloud environments.

Intelligent automation 

Businesses use AI-driven cloud computing to become more efficient and insight-driven. AI can automate repetitive tasks to boost productivity and perform data analysis without any human intervention. IT teams can also use it to manage and monitor core workflows, freeing them to focus on strategic operations while AI handles the mundane tasks.

Increased security 

With businesses deploying more applications in the cloud, security is crucial to keeping data safe. IT teams can use AI-powered network security tools that track network traffic and flag issues, such as anomalies.
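A minimal sketch of the kind of check such a monitoring tool performs is flagging hosts whose traffic is wildly out of line with the rest of the fleet. The hosts and rates below are hypothetical, and a real product would use trained models rather than a fixed rule:

```python
# Hypothetical sketch: flag hosts whose request rate is far above the
# fleet's typical rate. Hosts and numbers are invented for illustration.
from statistics import median

requests_per_minute = {
    "10.0.0.4": 120,
    "10.0.0.5": 115,
    "10.0.0.6": 130,
    "10.0.0.7": 2400,  # possible scanning or data exfiltration
    "10.0.0.8": 125,
}

# The median is robust to the outlier we are trying to detect.
typical = median(requests_per_minute.values())

anomalies = [host for host, rate in requests_per_minute.items()
             if rate > 5 * typical]
print(anomalies)  # → ['10.0.0.7']
```

Using the median rather than the mean as the baseline is deliberate: the anomalous host would otherwise drag the average up and mask itself.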

The drawbacks of AI in cloud computing


Data privacy 

Enterprises need to create privacy policies and secure all data when using AI in cloud computing. AI applications require large amounts of data, which can include consumer and vendor information. While some data can be anonymised so that it can’t be tied to personally identifiable information, knowing who data belongs to makes it more valuable. When sensitive information is used, data protection and compliance become major concerns.

Connectivity concerns 

IT teams use the internet to send raw data to the cloud service and retrieve the processed results. Because cloud-based machine learning systems need consistent connectivity, poor internet access can undermine their advantages.

While processing data in the cloud is quicker than conventional computing, there is a time lag between transmitting data to the cloud and receiving a response. This is a significant issue for machine learning workloads on cloud servers where prediction speed is a primary concern.
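That time lag is easy to measure from the client side. In this sketch, `fetch_prediction` is a hypothetical stand-in for a real call to a cloud ML endpoint (here it just sleeps to simulate network and inference delay):

```python
# Sketch: measure the round-trip lag of a (simulated) cloud prediction call.
import time

def fetch_prediction(payload):
    """Hypothetical stand-in for a call to a cloud ML endpoint."""
    time.sleep(0.05)  # simulated network + inference delay (50 ms)
    return {"label": "ok"}

start = time.perf_counter()
result = fetch_prediction({"features": [1, 2, 3]})
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"round trip: {elapsed_ms:.0f} ms")
```

For latency-sensitive predictions, teams often weigh this round-trip cost against running a smaller model closer to the data, on-premises or at the edge.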
