Should AI developers really be our ethics gatekeepers?

The power of data cannot be overstated. Data-driven decisions form the backbone of successful scientific, government, and business strategies, and organisations continue to find new ways to turn data into actionable insights.
As the world’s volume of data continues to grow, there is a clear demand for more powerful technology such as AI that can not only communicate what happened yesterday but also predict what will happen tomorrow.
Used correctly, AI has real transformative potential, but the risks and opportunities around AI integration are significant and still require careful consideration. During training and development, any AI depends on the human factor: it is at the mercy of its creators and their inherent views, experiences, and personal filters. Just as a hammer will not pick itself up and drive a nail, AI is entirely dependent on its creator and user to perform.
AI has the potential to either completely redefine the way businesses work with data – delivering hyper-relevant predictive and prescriptive models and business insights – or fall into a loop of self-propagating bias.
Assessing the core need: Baking in good development practice
When it comes to AI, development priorities such as ethics, shareability, scalability, and security are often treated as peripheral to the core goals of building a functional product and getting it to market quickly.
Perhaps the most famous example of this is the Mirai botnet, where IoT devices shipped with default credentials were hijacked to launch sustained DDoS attacks that took an entire country’s internet offline.
Equally, recent Alteryx-commissioned research into the state of data literacy in the UK found that, shockingly, 42% of employees responsible for data work saw data ethics as “irrelevant” to their role – casting a shadow over the future of numerous AI-based projects with billions of pounds at stake.
A key solution to this challenge is one we can now observe in its early stages: legislation that mandates a more considered use of AI technology and makes ethics a production requirement from start to finish.
It is a hugely promising step to see recent recommendations for legislative changes making their way through the EU Parliament – particularly a recent decision on the ethics of AI, which specifies that “No human being or human community should be harmed or subordinated … during any phase of the life cycle of AI systems. Throughout the life cycle of AI systems, the quality of life of human beings should be enhanced.”
Indeed, the UK is also one of the first countries in the world to launch an algorithmic transparency standard designed to strip bias out of algorithmic decision-making. The standard’s first requirement is a description of the tool, including how and why it is being used.
The second requirement – promisingly – focuses on the datasets used to train the models, the level of human oversight, and how the tool itself functions.
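As a rough illustration of what such a record covers, here is a hypothetical sketch in Python. The field names and values are illustrative assumptions only, not the standard’s actual schema:

```python
# Hypothetical transparency record, loosely following the two
# requirements described above. All field names and values are
# invented for illustration; they are not the UK standard's schema.
transparency_record = {
    # Requirement 1: a description of the tool, and how and why it is used.
    "tool_description": {
        "name": "CV shortlisting assistant",
        "purpose": "Rank incoming applications for human review",
        "how_used": "Advisory only; recruiters make the final decision",
        "why_used": "Reduce time-to-shortlist for high-volume roles",
    },
    # Requirement 2: training data, human oversight, and how it functions.
    "technical_detail": {
        "training_datasets": ["Historical applications, 2015-2020"],
        "human_oversight": "Every automated ranking is reviewed by a recruiter",
        "how_it_functions": "Gradient-boosted classifier over CV features",
    },
}

# Print the record as a simple two-level summary.
for section, fields in transparency_record.items():
    print(section)
    for key, value in fields.items():
        print(f"  {key}: {value}")
```

Publishing even this minimal level of detail makes it far easier for outsiders to ask the right questions about a model’s data and oversight.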
It is clear that AI currently doesn’t work alone. Humans remain a key piece of the puzzle when building and training successful models. With this in mind, biases in the historical data used for training – data that often encodes past human decisions – can creep into models and lead to unintentional discrimination.
Even when fields that directly relate to potential biases (such as gender or race) are removed, AI can still replicate historic discrimination through inferred information. Amazon, for example, mothballed its own AI recruiting algorithm after the legacy CVs it was trained on were found to come overwhelmingly from men, resulting in a bias against women.
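To see how this proxy effect can arise in practice, consider the minimal sketch below (hypothetical data and feature names, not Amazon’s actual system). The protected attribute is dropped before training, but a correlated feature remains, and the model rediscovers the bias through it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute we intend to exclude from training (0 or 1).
gender = rng.integers(0, 2, n)

# A seemingly neutral CV feature that happens to correlate with gender
# (agrees with it 90% of the time), e.g. a hobby or keyword frequency.
proxy = np.where(rng.random(n) < 0.9, gender, 1 - gender)

# Historical hiring outcomes that were themselves biased by gender.
hired = (rng.random(n) < np.where(gender == 1, 0.7, 0.3)).astype(int)

# Train on the "neutral" feature only -- gender never enters the model.
model = LogisticRegression().fit(proxy.reshape(-1, 1), hired)
pred = model.predict(proxy.reshape(-1, 1))

# Yet predicted hire rates still differ sharply by the excluded attribute.
for g in (0, 1):
    print(f"gender={g}: predicted hire rate {pred[gender == g].mean():.2f}")
```

Even though the model never sees the protected column, the correlated feature carries enough signal for the historical bias to resurface in its predictions – which is why auditing the training data matters as much as excluding sensitive fields.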
Finding humanity in 1s and 0s
In November 2018, there were 5 billion consumers interacting with data; by 2025, that number will increase to 6 billion – 75% of the world’s population.
Data is the bedrock on which health, wealth, and business success are built, and developing AI tools that exclusively increase human quality of life is a significant undertaking. But when applied to AI, data needs context and human intelligence.
There are often variables at play that only humans can interpret and understand. Being able to access, interpret, and deliver insights from that data at scale is vital, and AI is a core tool in achieving that… but even with legislation informing the creation of AI, the data end-users feed into it can carry brand-new, unintentional bias.
The human factor is simultaneously the greatest strength and the greatest weakness of artificial intelligence. As humans, we are all biased in some way by our individual circumstances, upbringings, and emotional reactions to different stimuli. That same human element can hinder real change by feeding more unintentional bias back into an otherwise ‘clean’ technology mix.
To truly unlock the value of AI, we need a combined approach – one where ethical AI development is integrated concurrently with deliberate, far-reaching employee upskilling campaigns. Data is the golden ticket to these insights, but programmes that help everyone “speak data” will provide the people to deliver them – people with the knowledge and experience to spot biased data and propose alternative solutions.
A business built on a foundation of multiple, diverse viewpoints is far better prepared to thrive in today’s hyperglobal environment. Ultimately, businesses need to develop a culture that not only allows for the unique differences between people but celebrates them – integrating that diversity of thinking into a strong strategy so that biased data does not undermine the business value offered by AI integration.