LEF: We need the critical literacy to make AI accountable

By Dr Caitlin McDonald
Twitter’s recent u-turn over its image AI should make us think, says LEF’s Dr Caitlin McDonald. Trust is central to the infrastructure of the digital age.

“Trust is the critical competitive factor of the future,” says our guest Tim Gordon on Leading Edge Forum’s podcast series “Growing Digital Ethics in Practice.” Our lives are increasingly enmeshed with data-driven processes that we don’t even see: data is the infrastructure of the digital age.

To keep the digital equivalent of the lights on and the taps running, businesses need an incredible amount of our data to flow through decision-making systems. Highly trusted organisations will find it easy to obtain the data they need, while low-trust businesses will find themselves hindered by reputational costs, increasingly stringent regulatory frameworks, or simply the operational cost of obtaining the data they need to function.

Amazon, for example, recently announced that it will pay people for data about their non-Amazon purchases. Businesses with a widening trust gap between themselves and their customers will find it harder and harder to keep up with the competition. Critically, trust lives in the relationship between the organisation and the service user, not in the technical system itself.

Trust in AI: ‘Twitter had tested the tool for racial and gender bias’

Machines have no agency; only people can be held accountable for problematic ethical decisions – and only people can work to correct those problems. A great example of this in practice is Twitter’s recent response to a massive outcry over photo cropping in the Twitter timeline: although Twitter had tested the photo-cropping functionality for racial and gender bias before releasing it, users found that the cropping tool displayed these biases in production.

Twitter reviewed the problem and acknowledged that, even though its tests showed no bias, the tests themselves failed to reflect the bias that clearly existed. To address this, the company is removing some of the machine learning automation from the photo-cropping tool and relying more on users’ individual choices about where to focus the image. AI as an abstract concept can’t be held to account, but the team responsible for managing the tool absolutely can – and in this case, they did respond appropriately.

Trust in AI: ‘We don’t have airbags on the outside of our cars’

We tend to think there are unique ethical concerns when we automate decision-making systems, but most often these simply reflect existing, non-digitised ethical dilemmas. Researcher Joanna Bryson has pointed out that people often ask whether self-driving cars should prioritise the driver or the pedestrian in an emergency, when in reality existing, non-self-driving cars already prioritise the driver: we don’t put airbags on the outside of our cars now, and nobody ever questions that.

Ethics is not a straightforward process; there often isn’t one right answer, especially when different cultural values and priorities are taken into account. Often it’s about landing on the least-worst option and being able to demonstrate how that decision was made, so that others can critically examine it.

Machines may not be any better than humans at ethical decision-making, but one of the great opportunities of trying to automate a tricky problem is that it often leads to the recognition of an existing systemic bias. This can in turn lead to wider organisational and societal conversations about developing more equitable outcomes for all stakeholders – and if there’s one guiding principle that cuts across most if not all ethical frameworks, it’s fairness.

Trust in AI: ‘Leaders are starting to coalesce around some universally recognisable principles’

We’ve reached a point where business and civic-sector leaders recognise the importance and urgency of AI ethics issues, and the proliferation of AI ethics models, toolkits, brainstorming tools and so on is starting to coalesce around some universally recognisable principles like fairness, accountability and transparency. The next stage is building structured accountability mechanisms for AI ethics, similar to the role auditing plays in the financial sector: checking whether agreed industry-wide principles and regulatory frameworks are actually being used in practice.

We’re already starting to see these kinds of mechanisms emerging, both at the technical model level for data scientists and technical practitioners to use in their day-to-day systems building, and at the organisational level for business leaders and board members to build the right governance structures.

Trust in AI: ‘We might not all be data scientists, but every one of us is impacted by AI’

There is also a growing grassroots movement to help everyone develop the confidence to understand and interrogate the automated decision-making systems that impact our lives. We might not all be data scientists, but every one of us is impacted by AI.

We need the critical literacy to demand accountability from the people building these systems. This can sound intimidating, but it doesn’t always require deep technical expertise: as the Twitter example above shows, recognising what’s wrong and pushing for change is something we should all feel empowered to do.

Dr Caitlin McDonald is the Leading Edge Forum’s resident digital anthropologist. LEF is a global research and thought leadership programme.
