Experts Question the Future of Ethical AI

By Elise Leise
According to the Pew Research Centre, 68% of the technology leaders surveyed don’t believe most AI systems will prioritise the public good by 2030. Is there an answer?

To measure technology leaders’ current outlook on ethical AI, the Pew Research Centre and Elon University’s Imagining the Internet Centre asked 602 business executives, policy developers, and researchers a simple question: By 2030, will most of the AI systems being used by organisations of all sorts employ ethical principles focused primarily on the public good? 

The majority of respondents—68%—said no, explaining that they worry AI will be used to optimise business profits and gain social control. While the survey was not representative of a larger population, and the respondents weren’t randomly selected, the findings still raise valid and critical concerns.

What’s the Issue? 

By 2030, AI will not only affect our jobs, housing, finance, and international trade systems, but also air pollution, warfare, cultural traditions, and civil rights. ‘We don’t acknowledge that our technologies change us as we use them; that our thinking and behaviours are altered by the cyber effect; that devices and gadgets don’t just turn us into gadget junkies, [but] may abridge our humanity, compassion, empathy, and social fabric’, warned Barry Chudakov, founder and principal of Sertain Research.

Can We Regulate It? 

National and international organisations have started to develop ad hoc AI committees, often tasked with drafting policy documents on ethical AI.

In addition, companies from Google to SAP, professional organisations such as the Association for Computing Machinery (ACM), and non-profit groups like Amnesty International have publicly released AI guidelines and best practices. Still, technology leaders have found it difficult to achieve global consensus.

What’s At Stake? 

According to a content analysis by the Health Ethics and Policy Lab at ETH Zurich, eleven ethical principles continue to recur throughout AI academic literature: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity. 

Yet Jamais Cascio, a research fellow at the Institute for the Future, argues that ‘the most important ethical dilemmas are ones in which the correct behaviour is situational: healthcare AI that intentionally lies to memory care patients rather than re-traumatise them; military AI that refuses an illegal order; all of the “trolley-problem” dilemmas where there are no good answers, only varieties of bad outcomes’.

What Can We Conclude? 

Three types of responses recurred throughout the Pew Research Centre’s notes:

  • Reminders. ‘You can’t force unethical players to follow the ethics playbook.’
  • Potential solutions. ‘AIs built to be reciprocally competitive could keep an eye on each other.’
  • Warnings. ‘We are ill-prepared for the onslaught and implications of bad AI applications.’

It bears noting that ethical issues are just as much about us—humans—as they are about artificial intelligence. ‘AI is just a small cog in a big system’, said Marcel Fafchamps, a professor of economics and a senior fellow at the Center on Democracy, Development, and the Rule of Law at Stanford University. ‘The main danger...is that machine learning reproduces past discrimination’. If that’s the case, the question may no longer be ‘What do we want AI to be?’ Instead, the real issue is us. What kind of humans are we? And how do we want to evolve as a species?
