To measure technology leaders’ current outlook on ethical AI, the Pew Research Center and Elon University’s Imagining the Internet Center asked 602 business executives, policy developers, and researchers a simple question: By 2030, will most of the AI systems being used by organisations of all sorts employ ethical principles focused primarily on the public good?
A majority of respondents (68%) said no, worrying that AI will instead be used to optimise business profits and exert social control. While the survey was not representative of a larger population, and the respondents weren’t randomly selected, the findings still raise valid and critical concerns.
What’s the Issue?
By 2030, AI will not only affect our jobs, housing, finance, and international trade systems, but also air pollution, warfare, cultural traditions, and civil rights. ‘We don’t acknowledge that our technologies change us as we use them; that our thinking and behaviours are altered by the cyber effect; that devices and gadgets don’t just turn us into gadget junkies, [but] may abridge our humanity, compassion, empathy, and social fabric’, warned Barry Chudakov, founder and principal of Sertain Research.
Can We Regulate It?
National and international organisations have started to form ad hoc AI committees, often tasked with drafting policy documents. Some of the most notable global committees include:
- High-Level Expert Group on Artificial Intelligence, appointed by the European Commission
- Artificial Intelligence in Society, run by the Organisation for Economic Co-operation and Development (OECD)
- Advisory Council on the Ethical Use of Artificial Intelligence and Data in Singapore
- Select Committee on Artificial Intelligence, as part of the UK House of Lords
In addition, companies from Google to SAP, professional organisations such as the Association for Computing Machinery (ACM), and non-profit groups like Amnesty International have publicly released AI guidelines and best practices. Still, technology leaders have found it difficult to achieve global consensus.
What’s At Stake?
According to a content analysis by the Health Ethics and Policy Lab at ETH Zurich, eleven ethical principles continue to recur throughout AI academic literature: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity.
Yet Jamais Cascio, a research fellow at the Institute for the Future, argues that ‘the most important ethical dilemmas are ones in which the correct behaviour is situational: healthcare AI that intentionally lies to memory care patients rather than re-traumatise them; military AI that refuses an illegal order; all of the “trolley-problem” dilemmas where there are no good answers, only varieties of bad outcomes’.
What Can We Conclude?
Three types of responses recurred throughout the Pew Research Center’s notes:
- Reminders. ‘You can’t force unethical players to follow the ethics playbook.’
- Potential solutions. ‘AIs built to be reciprocally competitive could keep an eye on each other.’
- Warnings. ‘We are ill-prepared for the onslaught and implications of bad AI applications.’
It bears noting that ethical issues are just as much about us, as humans, as they are about artificial intelligence. ‘AI is just a small cog in a big system’, said Marcel Fafchamps, a professor of economics and a senior fellow at the Center on Democracy, Development, and the Rule of Law at Stanford University. ‘The main danger...is that machine learning reproduces past discrimination’. If that’s the case, the question may no longer be ‘What do we want AI to be?’ Instead, the real issue is us. What kind of humans are we? And how do we want to evolve as a species?